Redis connector reference
The Redis connector is built with the Elastic connectors Python framework and is available as a self-managed connector client. View the source code for this connector (branch 8.15, compatible with Elastic 8.15).
Availability and prerequisites
This connector was introduced in Elastic 8.13.0, available as a self-managed connector client.
To use this connector, satisfy all connector client prerequisites. Importantly, you must deploy the connectors service on your own infrastructure. You have two deployment options:
- Run the connectors service from source. Use this option if you’re comfortable working with Python and want to iterate quickly locally.
- Run the connectors service in Docker. Use this option if you want to deploy the connectors to a server, or use a container orchestration platform.
This connector is in technical preview and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Technical preview features are not subject to the support SLA of official GA features.
Usage
To set up this connector in the UI, select the Redis tile when creating a new connector under Search → Connectors.
For additional operations, see Using connectors.
Deploy with Docker
You can deploy the Redis connector as a self-managed connector client using Docker. Follow these instructions.
Step 1: Download sample configuration file
Download the sample configuration file. You can either download it manually or run the following command:
curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml
Remember to update the --output argument value if your directory name is different, or you want to use a different config file name.
Step 2: Update the configuration file for your self-managed connector
Update the configuration file with the following settings to match your environment:
- elasticsearch.host
- elasticsearch.api_key
- connectors
If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:
# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>

connectors:
  -
    connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: redis
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead
Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.
Note: You can change other default configurations by simply uncommenting specific settings in the configuration file and modifying their values.
Step 3: Run the Docker image
Run the Docker image with the Connector Service using the following command:
docker run \
  -v ~/connectors-config:/config \
  --network "elastic" \
  --tty \
  --rm \
  docker.elastic.co/enterprise-search/elastic-connectors:8.15.5.0 \
  /app/bin/elastic-ingest \
  -c /config/config.yml
Refer to DOCKER.md in the elastic/connectors repo for more details.
Find all available Docker images in the official registry.
We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service.
Refer to this README in the elastic/connectors repo for more information.
Configuration
- host (required)
  The IP address or hostname of your Redis server/cloud instance. Examples:
  - 127.0.0.1
  - redis-12345.us-east-1.ec2.cloud.redislabs.com
- port (required)
  Port where the Redis server/cloud instance is hosted. Example:
  - 6379
- username (optional)
  Username for your Redis server/cloud instance. Example:
  - default
- password (optional)
  Password for your Redis server/cloud instance. Example:
  - changeme
- database (required)
  List of database indexes for your Redis server/cloud instance. * will fetch data from all databases. Examples:
  - 0,1,2
  - *
  This field is ignored when using advanced sync rules.
- ssl_enabled
  Toggle to use SSL/TLS. Disabled by default.
- mutual_tls_enabled
  Toggle to use secure mutual SSL/TLS. Ensure that your Redis deployment supports mutual SSL/TLS connections. Disabled by default. Depends on ssl_enabled.
- tls_certfile
  Specifies the certificate from the Certificate Authority. The value of the certificate is used to validate the certificate presented by the Redis instance. Depends on mutual_tls_enabled.
- tls_keyfile
  Specifies the client private key. The value of the key is used to validate the connection to the Redis instance. Depends on mutual_tls_enabled.
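To sanity-check these values before configuring the connector, you can connect to Redis directly. The following is a minimal sketch using the redis-py client; the host, credentials, and certificate paths are placeholders, and the comments only map the options loosely onto the connector settings above. This is not part of the connector itself.

# Minimal connectivity check for the values used in the connector configuration.
# Assumes the redis-py client (pip install redis); all values are placeholders.
import redis

client = redis.Redis(
    host="127.0.0.1",                    # host
    port=6379,                           # port
    username="default",                  # username (optional)
    password="changeme",                 # password (optional)
    db=0,                                # one of the database indexes you plan to sync
    ssl=True,                            # mirrors ssl_enabled
    ssl_ca_certs="/path/to/ca.pem",      # roughly mirrors tls_certfile (CA certificate)
    ssl_certfile="/path/to/client.pem",  # client certificate (redis-py needs this for mutual TLS)
    ssl_keyfile="/path/to/client.key",   # roughly mirrors tls_keyfile (client private key)
)

print(client.ping())  # True if the connection and credentials are valid

If the server does not use TLS, drop the ssl_* arguments and set ssl=False.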
Documents and syncs
The connector syncs the following objects and entities:
- KEYS and VALUES of every database index
Permissions are not synced. All documents indexed to an Elastic deployment will be visible to all users with access to the relevant Elasticsearch index.
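For illustration only, the sketch below shows one way the keys and values of a single database index could be enumerated with the redis-py client. It is not the connector's actual sync code, and the connection values are placeholders.

# Illustration of enumerating keys and values for one database index.
# Not the connector's implementation; connection values are placeholders.
import redis

client = redis.Redis(host="127.0.0.1", port=6379, db=0, decode_responses=True)

for key in client.scan_iter(count=1000):
    key_type = client.type(key)
    # Each Redis type is read with its own command; STRING is shown here,
    # while HASH, LIST, SET, ZSET, and STREAM need HGETALL, LRANGE,
    # SMEMBERS, ZRANGE, XRANGE, and so on.
    if key_type == "string":
        print(key, client.get(key))
    else:
        print(key, f"<{key_type} value>")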
Sync rules
Basic sync rules are identical for all connectors and are available by default.
Advanced sync rules
Advanced sync rules are defined through a source-specific DSL JSON snippet.
Use advanced sync rules to filter data at the Redis source, without needing to index all data into Elasticsearch.
They take the following parameters:
- database: Specify the Redis database index as an integer value.
- key_pattern: Pattern for finding keys in Redis.
- type: Key type for Redis. Supported values:
  - HASH
  - LIST
  - SET
  - STREAM
  - STRING
  - ZSET
Provide at least one of key_pattern or type; you can also provide both.
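Conceptually, key_pattern corresponds to the glob used by the MATCH option of Redis's SCAN command, and type corresponds to its TYPE option (available in Redis 6.0+). The sketch below illustrates that mapping with the redis-py client; it is not necessarily how the connector applies the rules, and the rule values are examples only.

# Illustration of how a single advanced sync rule maps onto a Redis SCAN.
# Not the connector's implementation; rule and connection values are examples.
import redis

rule = {"database": 0, "key_pattern": "alpha*", "type": "SET"}

client = redis.Redis(
    host="127.0.0.1",
    port=6379,
    db=rule["database"],
    decode_responses=True,
)

# key_pattern behaves like SCAN's MATCH glob, and type like its TYPE filter.
key_type = rule.get("type")
for key in client.scan_iter(
    match=rule.get("key_pattern"),
    _type=key_type.lower() if key_type else None,
):
    print(key)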
Advanced sync rules examples
Example 1
Fetch database records where keys start with alpha:
[ { "database": 0, "key_pattern": "alpha*" } ]
Example 2
Fetch database records with exact match by specifying the full key name:
[ { "database": 0, "key_pattern": "alpha" } ]
Example 3
Fetch database records where keys start with test1, test2 or test3:
[ { "database": 0, "key_pattern": "test[123]" } ]
Example 4
Exclude database records where keys start with test1, test2 or test3:
[ { "database": 0, "key_pattern": "test[^123]" } ]
Example 5
Fetch all database records:
[ { "database": 0, "key_pattern": "*" } ]
Example 6
Fetch all database records where type is SET:
[ { "database": 0, "key_pattern": "*", "type": "SET" } ]
Example 7
Fetch database records where type is SET:
[ { "database": 0, "type": "SET" } ]
Connector Client operations
End-to-end Testing
The connector framework enables operators to run functional tests against a real data source, using Docker Compose. You don’t need a running Elasticsearch instance or Redis source to run this test.
Refer to Connector testing for more details.
To perform E2E testing for the Redis connector, run the following command:
$ make ftest NAME=redis
For faster tests, add the DATA_SIZE=small flag:
make ftest NAME=redis DATA_SIZE=small
By default, DATA_SIZE=MEDIUM.
Known issues
- The last modified time is unavailable when retrieving keys/values from the Redis database. As a result, all objects are indexed each time an advanced sync rule query is executed.
Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.