Saved Objects service


The Saved Objects service is available both server and client side.

The Saved Objects service allows Kibana plugins to use Elasticsearch like a primary database. Think of it as an Object Document Mapper for Elasticsearch. Once a plugin has registered one or more Saved Object types, the Saved Objects client can be used to query or perform create, read, update, and delete operations on each type.
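
For instance, here is a minimal sketch (not taken from this document) of basic CRUD calls with the request-scoped client inside a route handler, assuming a hypothetical my_type type has already been registered and a router obtained from core.http.createRouter():

router.post(
  { path: '/my-plugin/example', validate: false },
  async (context, req, res) => {
    const soClient = context.core.savedObjects.client;

    // Create a new object of the hypothetical 'my_type' type
    const created = await soClient.create('my_type', { title: 'hello' });

    // Read it back by id
    await soClient.get('my_type', created.id);

    // Update some of its attributes
    await soClient.update('my_type', created.id, { title: 'hello again' });

    // Query objects of this type
    const results = await soClient.find({ type: 'my_type', search: 'hello' });

    // Delete it
    await soClient.delete('my_type', created.id);

    return res.ok({ body: { total: results.total } });
  }
);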

By using Saved Objects your plugin can take advantage of the following features:

  • Migrations can evolve your document’s schema by transforming documents and ensuring that the field mappings on the index are always up to date.
  • An HTTP API is automatically exposed for each type (unless hidden=true is specified).
  • A Saved Objects client is available from both the server and the browser.
  • Users can import or export Saved Objects using the Saved Objects management UI or the Saved Objects import/export API.
  • By declaring references, an object’s entire reference graph will be exported. This makes it easy for users to export, for example, a dashboard object and have all the visualization objects required to display the dashboard included in the export.
  • When the X-Pack security and spaces plugins are enabled, these transparently provide role-based access control and the ability to organize Saved Objects into spaces.

This document contains developer guidelines and best-practices for plugins wanting to use Saved Objects.

Server side usage


Registering a Saved Object type


Saved object type definitions should be defined in their own my_plugin/server/saved_objects directory.

The folder should contain a file per type, named after the snake_case name of the type, and an index.ts file exporting all the types.

src/plugins/my_plugin/server/saved_objects/dashboard_visualization.ts.

import { SavedObjectsType } from 'src/core/server';

export const dashboardVisualization: SavedObjectsType = {
  name: 'dashboard_visualization', 
  hidden: false,
  namespaceType: 'multiple-isolated', 
  mappings: {
    dynamic: false,
    properties: {
      description: {
        type: 'text',
      },
      hits: {
        type: 'integer',
      },
    },
  },
  migrations: {
    '1.0.0': migrateDashboardVisualizationToV1,
    '2.0.0': migrateDashboardVisualizationToV2,
  },
};

Since the name of a Saved Object type forms part of the URL path for the public Saved Objects HTTP API, type names should follow our API URL path convention and always be written in snake_case.
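
For example, the dashboard_visualization type registered above is addressed through paths such as the following (illustrative requests; <id> is a placeholder, not a real object id):

GET /api/saved_objects/dashboard_visualization/<id>
GET /api/saved_objects/_find?type=dashboard_visualization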

The namespaceType field determines "space behavior": whether objects of this type can exist in one space, multiple spaces, or all spaces. The 'multiple-isolated' value used here means that objects of this type can only exist in a single space. See Sharing Saved Objects for more information.

src/plugins/my_plugin/server/saved_objects/index.ts.

export { dashboardVisualization } from './dashboard_visualization';
export { dashboard } from './dashboard';

src/plugins/my_plugin/server/plugin.ts.

import { dashboard, dashboardVisualization } from './saved_objects';

export class MyPlugin implements Plugin {
  setup({ savedObjects }) {
    savedObjects.registerType(dashboard);
    savedObjects.registerType(dashboardVisualization);
  }
}

Mappings


Each Saved Object type can define its own Elasticsearch field mappings. Because multiple Saved Object types can share the same index, mappings defined by a type will be nested under a top-level field that matches the type name.

For example, the mappings defined by the dashboard_visualization Saved Object type:

src/plugins/my_plugin/server/saved_objects/dashboard_visualization.ts.

import { SavedObjectsType } from 'src/core/server';

export const dashboardVisualization: SavedObjectsType = {
  name: 'dashboard_visualization',
  ...
  mappings: {
    dynamic: false,
    properties: {
      description: {
        type: 'text',
      },
      hits: {
        type: 'integer',
      },
    },
  },
  migrations: { ... },
};

These mappings will result in the following being applied to the .kibana index:

{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      ...
      "dashboard_vizualization": {
        "dynamic": false,
        "properties": {
          "description": {
            "type": "text",
          },
          "hits": {
            "type": "integer",
          },
        },
      }
    }
  }
}

Do not use field mappings like you would use data types for the columns of a SQL database. Instead, field mappings are analogous to a SQL index. Only specify field mappings for the fields you wish to search on or query. When dynamic: false is specified at any level of your mappings, Elasticsearch will still accept and store any other fields, even if they are not specified in your mappings; those fields are simply not indexed or searchable.

Since Elasticsearch has a default limit of 1000 fields per index, plugins should carefully consider the fields they add to the mappings. Similarly, Saved Object types should never use dynamic: true as this can cause an arbitrary number of fields to be added to the .kibana index.
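
As an illustrative sketch (the my_report type and its fields are hypothetical, not from this document), only the fields that are actually queried are mapped, while the rest of the attributes are still stored thanks to dynamic: false:

import { SavedObjectsType } from 'src/core/server';

// Hypothetical type, for illustration only: its attributes also contain a
// large `state` object that is stored but never queried, so it is left unmapped.
export const myReport: SavedObjectsType = {
  name: 'my_report',
  hidden: false,
  namespaceType: 'single',
  mappings: {
    // Unmapped attribute fields are still accepted and stored, just not indexed.
    dynamic: false,
    properties: {
      // Only map the fields we actually search or sort on.
      title: { type: 'text' },
      created_at: { type: 'date' },
    },
  },
};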

Writing Migrations


Saved Objects support schema changes between Kibana versions, which we call migrations. Migrations are applied when a Kibana installation is upgraded from one version to the next, when exports are imported via the Saved Objects Management UI, or when a new object is created via the HTTP API.

Each Saved Object type may define migrations for its schema. Migrations are specified by the Kibana version number, receive an input document, and must return the fully migrated document to be persisted to Elasticsearch.

Let’s say we want to define two migrations:

  • In version 1.1.0, we want to drop the subtitle field and append it to the title.
  • In version 1.4.0, we want to add a new id field to every panel with a newly generated UUID.

First, the current mappings should always reflect the latest or "target" schema. Next, we should define a migration function for each step in the schema evolution:

src/plugins/my_plugin/server/saved_objects/dashboard_visualization.ts

import { SavedObjectsType, SavedObjectMigrationFn } from 'src/core/server';
import uuid from 'uuid';

interface DashboardVisualizationPre110 {
  title: string;
  subtitle: string;
  panels: Array<{}>;
}
interface DashboardVisualization110 {
  title: string;
  panels: Array<{}>;
}

interface DashboardVisualization140 {
  title: string;
  panels: Array<{ id: string }>;
}

const migrateDashboardVisualization110: SavedObjectMigrationFn<
  DashboardVisualizationPre110, 
  DashboardVisualization110
> = (doc) => {
  const { subtitle, ...attributesWithoutSubtitle } = doc.attributes;
  return {
    ...doc, 
    attributes: {
      ...attributesWithoutSubtitle,
      title: `${doc.attributes.title} - ${doc.attributes.subtitle}`,
    },
  };
};

const migrateDashboardVisualization140: SavedObjectMigrationFn<
  DashboardVisualization110,
  DashboardVisualization140
> = (doc) => {
  const outPanels = doc.attributes.panels?.map((panel) => {
    return { ...panel, id: uuid.v4() };
  });
  return {
    ...doc,
    attributes: {
      ...doc.attributes,
      panels: outPanels,
    },
  };
};

export const dashboardVisualization: SavedObjectsType = {
  name: 'dashboard_visualization', 
  /** ... */
  migrations: {
    // Takes a pre 1.1.0 doc, and converts it to 1.1.0
    '1.1.0': migrateDashboardVisualization110,

    // Takes a 1.1.0 doc, and converts it to 1.4.0
    '1.4.0': migrateDashboardVisualization140,  
  },
};

It is useful to define an interface for each version of the schema. This allows TypeScript to ensure that you are handling the input and output types correctly as the schema evolves.

Returning a shallow copy is necessary to avoid type errors when using different types for the input and output shape.

Migrations do not have to be defined for every version. The version number of a migration must always be the earliest Kibana version in which the migration was released. So if you are creating a migration that will be part of the v7.10.0 release, but will also be backported and released as v7.9.3, the migration version should be 7.9.3.
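
For example (illustrative version numbers and a hypothetical migration function), such a backported migration is registered once, under the earliest release that contains it:

  migrations: {
    // Registered as 7.9.3, the earliest release containing this migration,
    // even though it also ships as part of 7.10.0.
    '7.9.3': migrateDashboardVisualizationSubtitleFix,
  },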

Migrations should be written defensively; an exception in a migration function will prevent a Kibana upgrade from succeeding and will cause downtime for our users. Having said that, if a document is encountered that is not in the expected shape, migrations are encouraged to throw an exception to abort the upgrade. In most scenarios, it is better to fail an upgrade than to silently ignore a corrupt document, which can cause unexpected behaviour at some future point in time.
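
As a sketch of this defensive style (the 1.5.0 version and the check are hypothetical, reusing the DashboardVisualization140 interface from the example above), a migration can validate the incoming document and throw a descriptive error rather than guessing:

const migrateDashboardVisualization150: SavedObjectMigrationFn<
  DashboardVisualization140,
  DashboardVisualization140
> = (doc) => {
  if (!Array.isArray(doc.attributes.panels)) {
    // Abort the upgrade with a clear message instead of persisting a document
    // whose shape we cannot reason about.
    throw new Error(
      `dashboard_visualization ${doc.id}: expected "panels" to be an array, got ${typeof doc.attributes.panels}`
    );
  }
  return doc;
};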

Do not attempt to change the migrationVersion, id, or type fields within a migration function; this is not supported.

It is critical that you have extensive tests to ensure that migrations behave as expected with all possible input documents. Given how simple it is to test all the branch conditions in a migration function and the high impact of a bug in this code, there’s really no reason not to aim for 100% test code coverage.
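
For instance, here is a minimal Jest sketch (assuming the migration functions above are exported from the module; the migration context is not used by this migration, so an empty object is passed) exercising one branch of migrateDashboardVisualization110:

import { SavedObjectMigrationContext } from 'src/core/server';
import { migrateDashboardVisualization110 } from './dashboard_visualization';

describe('migrateDashboardVisualization110', () => {
  it('appends the subtitle to the title and drops the subtitle field', () => {
    const doc = {
      id: '1',
      type: 'dashboard_visualization',
      attributes: { title: 'Sales', subtitle: 'EMEA', panels: [] },
      references: [],
    };
    const migrated = migrateDashboardVisualization110(
      doc,
      {} as SavedObjectMigrationContext // context is unused by this migration
    );
    expect(migrated.attributes).toEqual({ title: 'Sales - EMEA', panels: [] });
  });
});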

Client side usage


References


When a Saved Object declares references to other Saved Objects, the Saved Objects Export API will automatically export the target object with all of its references. This makes it easy for users to export the entire reference graph of an object.

If a Saved Object can’t be used on its own, that is, it needs other objects to exist for a feature to function correctly, that Saved Object should declare references to all the objects it requires. For example, a dashboard object might have panels for several visualization objects. When these visualization objects don’t exist, the dashboard cannot be rendered correctly. The dashboard object should declare references to all its visualizations.

However, visualization objects can continue to be rendered or embedded into other dashboards even if the dashboards they were originally embedded into don’t exist. As a result, visualization objects should not declare references to dashboard objects.

For each referenced object, an id, type and name are added to the references array:

router.get(
  { path: '/some-path', validate: false },
  async (context, req, res) => {
    const object = await context.core.savedObjects.client.create(
      'dashboard',
      {
        title: 'my dashboard',
        panels: [
          { visualization: 'vis1' }, 
        ],
        indexPattern: 'indexPattern1'
      },
      { references: [
          { id: '...', type: 'visualization', name: 'vis1' },
          { id: '...', type: 'index_pattern', name: 'indexPattern1' },
        ]
      }
    )
    ...
  }
);

Note how dashboard.panels[0].visualization stores the name property of the reference (not the id directly) so that the reference can be uniquely identified. This guarantees that the id the reference points to always remains up to date. If a visualization id were stored directly in dashboard.panels[0].visualization, there would be a risk that this id gets updated without the reference in the references array being updated.
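
To make the lookup concrete, a hypothetical helper (not part of any Kibana API, shown only to illustrate the indirection) could resolve a panel's visualization id from the references array like this:

import { SavedObject } from 'src/core/server';

interface DashboardAttributes {
  title: string;
  panels: Array<{ visualization: string }>;
}

// Resolve the current visualization id for a panel by matching the panel's
// stored reference *name* against the dashboard's references array.
function resolvePanelVisualizationId(
  dashboard: SavedObject<DashboardAttributes>,
  panelIndex: number
): string | undefined {
  const refName = dashboard.attributes.panels[panelIndex].visualization;
  return dashboard.references.find(
    (ref) => ref.type === 'visualization' && ref.name === refName
  )?.id;
}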