Elastic Notion Connector reference
The Notion connector is written in Python using the Elastic connector framework. View the source code for this connector (branch 8.x, compatible with Elastic 8.17).
Elastic managed connector reference
Availability and prerequisites
This managed connector was introduced in Elastic 8.14.0 as a managed service on Elastic Cloud.
To use this connector natively in Elastic Cloud, satisfy all managed connector requirements.
This connector is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
Usage
To use this connector in the UI, select the Notion tile when creating a new connector under Search → Connectors.
If you’re already familiar with how connectors work, you can also use the Connector APIs.
For additional operations, see Connectors UI in Kibana.
Create a Notion connector
editUse the UI
editTo create a new Notion connector:
- In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
- Follow the instructions to create a new native Notion connector.
For additional operations, see Connectors UI in Kibana.
Use the API
You can use the Elasticsearch Create connector API to create a new native Notion connector.
For example:
resp = client.connector.put(
    connector_id="my-notion-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from Notion",
    service_type="notion",
    is_native=True,
)
print(resp)
PUT _connector/my-notion-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from Notion",
  "service_type": "notion",
  "is_native": true
}
You’ll also need to create an API key for the connector to use.
The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.
To create an API key for the connector:
-
Run the following command, replacing values where indicated. Note the id and encoded return values from the response:
resp = client.security.create_api_key(
    name="my-connector-api-key",
    role_descriptors={
        "my-connector-connector-role": {
            "cluster": ["monitor", "manage_connector"],
            "indices": [
                {
                    "names": [
                        "my-index_name",
                        ".search-acl-filter-my-index_name",
                        ".elastic-connectors*",
                    ],
                    "privileges": ["all"],
                    "allow_restricted_indices": False,
                }
            ],
        }
    },
)
print(resp)
const response = await client.security.createApiKey({ name: "my-connector-api-key", role_descriptors: { "my-connector-connector-role": { cluster: ["monitor", "manage_connector"], indices: [ { names: [ "my-index_name", ".search-acl-filter-my-index_name", ".elastic-connectors*", ], privileges: ["all"], allow_restricted_indices: false, }, ], }, }, }); console.log(response);
POST /_security/api_key
{
  "name": "my-connector-api-key",
  "role_descriptors": {
    "my-connector-connector-role": {
      "cluster": ["monitor", "manage_connector"],
      "indices": [
        {
          "names": [
            "my-index_name",
            ".search-acl-filter-my-index_name",
            ".elastic-connectors*"
          ],
          "privileges": ["all"],
          "allow_restricted_indices": false
        }
      ]
    }
  }
}
-
Use the encoded value to store a connector secret, and note the id return value from this response:
resp = client.connector.secret_post(
    body={"value": "encoded_api_key"},
)
print(resp)
const response = await client.connector.secretPost({ body: { value: "encoded_api_key", }, }); console.log(response);
POST _connector/_secret
{
  "value": "encoded_api_key"
}
-
Use the API key id and the connector secret id to update the connector:
resp = client.connector.update_api_key_id(
    connector_id="my_connector_id",
    api_key_id="API key_id",
    api_key_secret_id="secret_id",
)
print(resp)
const response = await client.connector.updateApiKeyId({ connector_id: "my_connector_id", api_key_id: "API key_id", api_key_secret_id: "secret_id", }); console.log(response);
PUT /_connector/my_connector_id/_api_key_id
{
  "api_key_id": "API key_id",
  "api_key_secret_id": "secret_id"
}
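The role_descriptors body used in step 1 can be generated for any index name with a small helper. This is a sketch, assuming the Python Elasticsearch client from the examples above; the role name and function name are illustrative, not required values:

```python
def connector_role_descriptors(index_name: str, role_name: str = "my-connector-connector-role") -> dict:
    """Build the role_descriptors payload for a connector-scoped API key.

    The key needs access to the content index, its ACL filter index, and the
    internal .elastic-connectors* indices, plus monitor/manage_connector
    cluster privileges.
    """
    return {
        role_name: {
            "cluster": ["monitor", "manage_connector"],
            "indices": [
                {
                    "names": [
                        index_name,
                        f".search-acl-filter-{index_name}",
                        ".elastic-connectors*",
                    ],
                    "privileges": ["all"],
                    "allow_restricted_indices": False,
                }
            ],
        }
    }
```

You could then call, for example, `client.security.create_api_key(name="my-connector-api-key", role_descriptors=connector_role_descriptors("my-index_name"))`.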
Refer to the Elasticsearch API documentation for details of all available Connector APIs.
Connecting to Notion
To connect to Notion, the user needs to create an internal integration for their Notion workspace, which can access resources using the Internal Integration Secret Token. Configure the integration with the following settings:
- Users must grant READ permission for content, comment and user capabilities for that integration from the Capabilities tab.
- Users must manually add the integration as a connection to the top-level pages in a workspace. Sub-pages will inherit the connections of the parent page automatically.
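Before configuring the connector, it can be useful to confirm the Internal Integration Secret Token works. A minimal sketch using Notion's public REST API (the /v1/users/me endpoint returns the integration's bot user); the Notion-Version value shown is an assumption, so use the API version your integration targets:

```python
import json
import urllib.request


def notion_auth_headers(secret_token: str, notion_version: str = "2022-06-28") -> dict:
    """Headers Notion's API expects: a bearer token plus an API version."""
    return {
        "Authorization": f"Bearer {secret_token}",
        "Notion-Version": notion_version,
    }


def check_token(secret_token: str) -> dict:
    """Fetch the integration's bot user; raises urllib.error.HTTPError on a bad token."""
    req = urllib.request.Request(
        "https://api.notion.com/v1/users/me",
        headers=notion_auth_headers(secret_token),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If the call succeeds, the same secret token can be used in the connector's Notion Secret Key field.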
Configuration
Note the following configuration fields:
- Notion Secret Key (required): Secret token assigned to your integration, for a particular workspace. Example: zyx-123453-12a2-100a-1123-93fd09d67394
- Databases (required): Comma-separated list of database names to be fetched by the connector. If the value is *, the connector will fetch all databases available in the workspace. Examples: database1, database2 or *
- Pages (required): Comma-separated list of page names to be fetched by the connector. If the value is *, the connector will fetch all pages available in the workspace. Examples: Page1, Page2 or *
- Index Comments: Toggle to enable fetching and indexing of comments from the Notion workspace for the configured pages, databases and the corresponding child blocks. Default value is False.
Enabling comment indexing could impact connector performance due to increased network calls; therefore, this value defaults to False.
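The Databases and Pages fields follow the same convention: a comma-separated list of names, with * meaning "fetch everything". A small illustrative helper (not connector code) showing how such a value can be interpreted:

```python
def parse_name_filter(raw: str):
    """Interpret a Databases/Pages config value.

    Returns None to mean "fetch all" (the * wildcard), otherwise a list of
    trimmed names.
    """
    names = [part.strip() for part in raw.split(",") if part.strip()]
    if names == ["*"]:
        return None  # wildcard: fetch every database/page in the workspace
    return names
```

For example, `parse_name_filter("database1, database2")` yields a two-item list, while `parse_name_filter("*")` signals the fetch-all case.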
Content Extraction
Refer to content extraction.
Documents and syncs
The connector syncs the following objects and entities:
- Pages: Includes metadata such as page name, id, last updated time, etc.
- Blocks: Includes metadata such as title, type, id, and content (in the case of file blocks), etc.
- Databases: Includes metadata such as name, id, records, size, etc.
- Users: Includes metadata such as name, id, email address, etc.
- Comments: Includes the content and metadata such as id, last updated time, created by, etc. Note: Comments are excluded by default.
- Files bigger than 10 MB won't be extracted.
- Permissions are not synced. All documents indexed to an Elastic deployment will be visible to all users with access to the relevant Elasticsearch index.
Sync rules
Basic sync rules are identical for all connectors and are available by default.
Advanced sync rules
A full sync is required for advanced sync rules to take effect.
The following section describes advanced sync rules for this connector, to filter data in Notion before indexing into Elasticsearch. Advanced sync rules are defined through a source-specific DSL JSON snippet.
Advanced sync rules for Notion take the following parameters:
- searches: Notion's search filter to search by title.
- query: Notion's database query filter to fetch a specific database.
Examples
Example 1
Indexing every page where the title contains Demo Page:
{ "searches": [ { "filter": { "value": "page" }, "query": "Demo Page" } ] }
Example 2
Indexing every database where the title contains Demo Database:
{ "searches": [ { "filter": { "value": "database" }, "query": "Demo Database" } ] }
Example 3
Indexing every database where the title contains Demo Database and every page where the title contains Demo Page:
{ "searches": [ { "filter": { "value": "database" }, "query": "Demo Database" }, { "filter": { "value": "page" }, "query": "Demo Page" } ] }
Example 4
Indexing all pages in the workspace:
{ "searches": [ { "filter": { "value": "page" }, "query": "" } ] }
Example 5
Indexing all the pages and databases connected to the workspace:
{ "searches": [ { "query": "" } ] }
Example 6
Indexing all rows of a database where the column Task completed, whose property (datatype) is a checkbox, is true:
{ "database_query_filters": [ { "filter": { "property": "Task completed", "checkbox": { "equals": true } }, "database_id": "database_id" } ] }
Example 7
Indexing all rows of a specific database:
{ "database_query_filters": [ { "database_id": "database_id" } ] }
Example 8
Indexing all blocks defined in searches and database_query_filters:
{ "searches": [ { "query": "External tasks", "filter": { "value": "database" } }, { "query": "External tasks", "filter": { "value": "page" } } ], "database_query_filters": [ { "database_id": "notion_database_id1", "filter": { "property": "Task completed", "checkbox": { "equals": true } } } ] }
In this example the filter object syntax for database_query_filters is defined per the Notion documentation.
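The examples above can be checked for the structural rules they demonstrate before saving them as sync rules. This is a hedged sketch, not the connector's own validation: it only enforces what the examples show (searches entries carry a query and an optional filter whose value is page or database; database_query_filters entries need a database_id):

```python
def validate_sync_rules(rules: dict) -> list:
    """Return a list of error strings; an empty list means the snippet
    matches the shapes shown in the examples."""
    errors = []
    for i, search in enumerate(rules.get("searches", [])):
        if "query" not in search:
            errors.append(f"searches[{i}]: missing 'query'")
        if "filter" in search and search["filter"].get("value") not in ("page", "database"):
            errors.append(f"searches[{i}]: filter.value must be 'page' or 'database'")
    for i, dqf in enumerate(rules.get("database_query_filters", [])):
        if "database_id" not in dqf:
            errors.append(f"database_query_filters[{i}]: missing 'database_id'")
    return errors
```

Running it against the Example 8 snippet returns an empty list, while a searches entry with a filter value other than page or database produces an error string.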
Known issues
- Updates to new pages may not be reflected immediately in the Notion API. This could lead to these pages not being indexed by the connector, if a sync is initiated immediately after their addition. To ensure all pages are indexed, initiate syncs a few minutes after adding pages to Notion.
- Notion's Public API does not support linked databases. Linked databases in Notion are copies of a database that can be filtered, sorted, and viewed differently. To fetch the information in a linked database, you need to target the original source database. For more details refer to the Notion documentation.
- Documents' properties objects are serialized as strings under details. Notion's schema for properties is not consistent, and can lead to document_parsing_exceptions if indexed to Elasticsearch as an object. For this reason, the properties object is instead serialized as a JSON string and stored under the details field. If you need to search a sub-object from properties, you may need to post-process the details field in an ingest pipeline to extract your desired subfield(s).
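In an ingest pipeline you would typically use Elasticsearch's json processor on the details field; the equivalent client-side step can be sketched in a few lines. The field names used here (details, Status) are illustrative:

```python
import json


def extract_property(doc: dict, property_name: str):
    """Parse the serialized properties string back into a dict and return
    one named property (or None if it is absent)."""
    properties = json.loads(doc["details"])
    return properties.get(property_name)
```

For example, given a synced document whose details field holds the JSON string of a properties object, `extract_property(doc, "Status")` recovers that sub-object for further processing.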
Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.
Self-managed connector reference
Availability and prerequisites
This connector was introduced in Elastic 8.13.0, available as a self-managed connector.
To use this connector, satisfy all self-managed connector prerequisites. Importantly, you must deploy the connectors service on your own infrastructure. You have two deployment options:
- Run connectors service from source. Use this option if you’re comfortable working with Python and want to iterate quickly locally.
- Run connectors service in Docker. Use this option if you want to deploy the connectors to a server, or use a container orchestration platform.
This connector is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
Usage
To use this connector in the UI, select the Notion tile when creating a new connector under Search → Connectors.
For additional operations, see Connectors UI in Kibana.
Create a Notion connector
editUse the UI
editTo create a new Notion connector:
- In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
- Follow the instructions to create a new Notion self-managed connector.
Use the API
You can use the Elasticsearch Create connector API to create a new self-managed Notion connector.
For example:
resp = client.connector.put(
    connector_id="my-notion-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from Notion",
    service_type="notion",
)
print(resp)
PUT _connector/my-notion-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from Notion",
  "service_type": "notion"
}
You’ll also need to create an API key for the connector to use.
The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.
To create an API key for the connector:
-
Run the following command, replacing values where indicated. Note the encoded return value from the response:
resp = client.security.create_api_key(
    name="connector_name-connector-api-key",
    role_descriptors={
        "connector_name-connector-role": {
            "cluster": ["monitor", "manage_connector"],
            "indices": [
                {
                    "names": [
                        "index_name",
                        ".search-acl-filter-index_name",
                        ".elastic-connectors*",
                    ],
                    "privileges": ["all"],
                    "allow_restricted_indices": False,
                }
            ],
        }
    },
)
print(resp)
const response = await client.security.createApiKey({ name: "connector_name-connector-api-key", role_descriptors: { "connector_name-connector-role": { cluster: ["monitor", "manage_connector"], indices: [ { names: [ "index_name", ".search-acl-filter-index_name", ".elastic-connectors*", ], privileges: ["all"], allow_restricted_indices: false, }, ], }, }, }); console.log(response);
POST /_security/api_key
{
  "name": "connector_name-connector-api-key",
  "role_descriptors": {
    "connector_name-connector-role": {
      "cluster": ["monitor", "manage_connector"],
      "indices": [
        {
          "names": [
            "index_name",
            ".search-acl-filter-index_name",
            ".elastic-connectors*"
          ],
          "privileges": ["all"],
          "allow_restricted_indices": false
        }
      ]
    }
  }
}
-
Update your config.yml file with the API key encoded value.
Refer to the Elasticsearch API documentation for details of all available Connector APIs.
Connecting to Notion
To connect to Notion, the user needs to create an internal integration for their Notion workspace, which can access resources using the Internal Integration Secret Token. Configure the integration with the following settings:
- Users must grant READ permission for content, comment and user capabilities for that integration from the Capabilities tab.
- Users must manually add the integration as a connection to the top-level pages in a workspace. Sub-pages will inherit the connections of the parent page automatically.
Deploy with Docker
You can deploy the Notion connector as a self-managed connector using Docker. Follow these instructions.
Step 1: Download sample configuration file
Download the sample configuration file. You can either download it manually or run the following command:
curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml
Remember to update the --output argument value if your directory name is different, or if you want to use a different config file name.
Step 2: Update the configuration file for your self-managed connector
Update the configuration file with the following settings to match your environment:
- elasticsearch.host
- elasticsearch.api_key
- connectors
If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:
# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>
connectors:
  - connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: notion
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead
Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.
Note: You can change other default configurations by simply uncommenting specific settings in the configuration file and modifying their values.
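The fallback described in the sample config (a per-connector api_key that defaults to the global elasticsearch.api_key when absent) can be sketched as a small resolver. This is illustrative only, assuming a config already parsed into a dict with the keys shown above; it is not the connector service's actual code:

```python
def resolve_api_key(config: dict, connector_id: str):
    """Return the API key a connector entry would use: its own api_key if
    set, otherwise the global elasticsearch.api_key."""
    for connector in config.get("connectors", []):
        if connector.get("connector_id") == connector_id:
            return connector.get("api_key") or config.get("elasticsearch.api_key")
    return config.get("elasticsearch.api_key")
```

This mirrors the comment in the sample file: a connector without its own api_key falls back to the Elasticsearch-level key.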
Step 3: Run the Docker image
Run the Docker image with the Connector Service using the following command:
docker run \
  -v ~/connectors-config:/config \
  --network "elastic" \
  --tty \
  --rm \
  docker.elastic.co/enterprise-search/elastic-connectors:8.17.0.0 \
  /app/bin/elastic-ingest \
  -c /config/config.yml
Refer to DOCKER.md in the elastic/connectors repo for more details.
Find all available Docker images in the official registry.
We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service.
Refer to this README in the elastic/connectors repo for more information.
Configuration
Note the following configuration fields:
- Notion Secret Key (required): Secret token assigned to your integration, for a particular workspace. Example: zyx-123453-12a2-100a-1123-93fd09d67394
- Databases (required): Comma-separated list of database names to be fetched by the connector. If the value is *, the connector will fetch all databases available in the workspace. Examples: database1, database2 or *
- Pages (required): Comma-separated list of page names to be fetched by the connector. If the value is *, the connector will fetch all pages available in the workspace. Examples: Page1, Page2 or *
- Index Comments: Toggle to enable fetching and indexing of comments from the Notion workspace for the configured pages, databases and the corresponding child blocks. Default value is False.
Enabling comment indexing could impact connector performance due to increased network calls; therefore, this value defaults to False.
Content Extraction
Refer to content extraction.
Documents and syncs
The connector syncs the following objects and entities:
- Pages: Includes metadata such as page name, id, last updated time, etc.
- Blocks: Includes metadata such as title, type, id, and content (in the case of file blocks), etc.
- Databases: Includes metadata such as name, id, records, size, etc.
- Users: Includes metadata such as name, id, email address, etc.
- Comments: Includes the content and metadata such as id, last updated time, created by, etc. Note: Comments are excluded by default.
- Files bigger than 10 MB won't be extracted.
- Permissions are not synced. All documents indexed to an Elastic deployment will be visible to all users with access to the relevant Elasticsearch index.
Sync rules
Basic sync rules are identical for all connectors and are available by default.
Advanced sync rules
A full sync is required for advanced sync rules to take effect.
The following section describes advanced sync rules for this connector, to filter data in Notion before indexing into Elasticsearch. Advanced sync rules are defined through a source-specific DSL JSON snippet.
Advanced sync rules for Notion take the following parameters:
- searches: Notion's search filter to search by title.
- query: Notion's database query filter to fetch a specific database.
Examples
Example 1
Indexing every page where the title contains Demo Page:
{ "searches": [ { "filter": { "value": "page" }, "query": "Demo Page" } ] }
Example 2
Indexing every database where the title contains Demo Database:
{ "searches": [ { "filter": { "value": "database" }, "query": "Demo Database" } ] }
Example 3
Indexing every database where the title contains Demo Database and every page where the title contains Demo Page:
{ "searches": [ { "filter": { "value": "database" }, "query": "Demo Database" }, { "filter": { "value": "page" }, "query": "Demo Page" } ] }
Example 4
Indexing all pages in the workspace:
{ "searches": [ { "filter": { "value": "page" }, "query": "" } ] }
Example 5
Indexing all the pages and databases connected to the workspace:
{ "searches": [ { "query": "" } ] }
Example 6
Indexing all rows of a database where the column Task completed, whose property (datatype) is a checkbox, is true:
{ "database_query_filters": [ { "filter": { "property": "Task completed", "checkbox": { "equals": true } }, "database_id": "database_id" } ] }
Example 7
Indexing all rows of a specific database:
{ "database_query_filters": [ { "database_id": "database_id" } ] }
Example 8
Indexing all blocks defined in searches and database_query_filters:
{ "searches": [ { "query": "External tasks", "filter": { "value": "database" } }, { "query": "External tasks", "filter": { "value": "page" } } ], "database_query_filters": [ { "database_id": "notion_database_id1", "filter": { "property": "Task completed", "checkbox": { "equals": true } } } ] }
In this example the filter object syntax for database_query_filters is defined per the Notion documentation.
Connector Client operations
End-to-end Testing
The connector framework enables operators to run functional tests against a real data source, using Docker Compose. You don't need a running Elasticsearch instance or Notion source to run this test.
Refer to Connector testing for more details.
To perform E2E testing for the Notion connector, run the following command:
$ make ftest NAME=notion
For faster tests, add the DATA_SIZE=small flag:
make ftest NAME=notion DATA_SIZE=small
By default, DATA_SIZE=MEDIUM.
Known issues
- Updates to new pages may not be reflected immediately in the Notion API. This could lead to these pages not being indexed by the connector, if a sync is initiated immediately after their addition. To ensure all pages are indexed, initiate syncs a few minutes after adding pages to Notion.
- Notion's Public API does not support linked databases. Linked databases in Notion are copies of a database that can be filtered, sorted, and viewed differently. To fetch the information in a linked database, you need to target the original source database. For more details refer to the Notion documentation.
- Documents' properties objects are serialized as strings under details. Notion's schema for properties is not consistent, and can lead to document_parsing_exceptions if indexed to Elasticsearch as an object. For this reason, the properties object is instead serialized as a JSON string and stored under the details field. If you need to search a sub-object from properties, you may need to post-process the details field in an ingest pipeline to extract your desired subfield(s).
Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.