Elasticsearch Versioning Support
One of the key principles behind Elasticsearch is to allow you to make the most out of your data. Historically, search was a read-only affair where a search engine was loaded with data from a single source. As usage grows and Elasticsearch becomes more central to your application, data increasingly needs to be updated by multiple components. Multiple components lead to concurrency, and concurrency leads to conflicts. Elasticsearch's versioning system is there to help cope with those conflicts.
The need for versioning - an example
To illustrate the situation, let's assume we have a website which people use to rate t-shirt designs. The website is simple: it lists all designs and lets users give a design a thumbs up or vote it down with a thumbs down icon. For every t-shirt, the website shows the current balance of up votes vs. down votes.
A record for each t-shirt design looks like this:
curl -XPOST 'http://localhost:9200/designs/shirt/1' -d'
{
"name": "elasticsearch",
"votes": 999
}'
As you can see, each t-shirt design has a name and a votes counter to keep track of its current balance.
To keep things simple and scalable, the website is completely stateless. When someone looks at a page and clicks the up vote button, it sends an AJAX request to the server, which in turn tells Elasticsearch to update the counter. A naive implementation would take the current votes value, increment it by one and send that to Elasticsearch:
curl -XPOST 'http://localhost:9200/designs/shirt/1' -d'
{
"name": "elasticsearch",
"votes": 1000
}'
This approach has a serious flaw - it may lose votes. Say both Adam and Eve are looking at the same page at the same time. At the moment the page shows 999 votes. Since both are fans, they both click the up vote button. Now Elasticsearch gets two identical copies of the above request to update the document, which it happily does. That means that instead of having a total vote count of 1001, the vote count is now 1000. Oops.
Of course, the update API allows you to be smarter and communicate the fact that the vote should be incremented rather than set to a specific value:
curl -XPOST 'http://localhost:9200/designs/shirt/1/_update' -d'
{
"script" : "ctx._source.votes += 1"
}'
Doing it this way means that Elasticsearch first retrieves the document internally, performs the update and indexes it again. While this makes things much more likely to succeed, it still carries the same potential problem as before. During the small window between retrieving and indexing the document again, things can go wrong.
To deal with the above scenario and help with more complex ones, Elasticsearch comes with a built-in versioning system.
Elasticsearch's versioning system
Every document you store in Elasticsearch has an associated version number. That version number is a positive number between 1 and 2^63-1 (inclusive). When you index a document for the very first time, it gets the version 1 and you can see that in the response Elasticsearch returns. This is, for example, the result of the first cURL command in this blog post:
{
"ok": true,
"_index": "designs",
"_type": "shirt",
"_id": "1",
"_version": 1
}
With every write operation to this document, whether it is an index, update or delete, Elasticsearch will increment the version by 1. This increment is atomic and is guaranteed to happen if the operation returns successfully.
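For example, sending the second indexing request from earlier (the one setting votes to 1000) right after the first one would come back with the version bumped. A sketch of that response, assuming no other writes happened in between:
{
"ok": true,
"_index": "designs",
"_type": "shirt",
"_id": "1",
"_version": 2
}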
Elasticsearch will also return the current version of documents in the response of get operations (remember, those are real time), and it can also be instructed to return it with every search result.
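For example, to have search hits come back with their versions, set the version flag in the search request body (a quick sketch using the index from this post):
curl -XGET 'http://localhost:9200/designs/shirt/_search' -d'
{
"version": true,
"query": { "match_all": {} }
}'
Each hit in the result will then carry a _version field next to its _source.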
Optimistic locking
So back in our toy example, we need a solution for the scenario where two users potentially try to update the same document at the same time. Traditionally, this would be solved with locking: before updating a document, one acquires a lock on it, does the update and releases the lock. While you hold a lock on a document, you are guaranteed that no one else can change it.
In many applications this also means that if someone is modifying a document no one else is able to read from it until the modification is done. This type of locking works but it comes with a price. In the context of high throughput systems, it has two main downsides:
- In many cases it is simply not needed. If done right, collisions are rare. Of course, they will happen but that will only be for a fraction of the operations the system does.
- Locking assumes you actually care. If you only want to render a webpage, you are probably fine with getting some slightly outdated but consistent value, even if the system knows it will change in a moment. Reads don't always need to wait for ongoing writes to complete.
Elasticsearch's versioning system makes it easy to use another pattern called optimistic locking. Instead of acquiring a lock every time, you tell Elasticsearch what version of the document you expect to find. If the document didn't change in the meantime, your operation succeeds, lock free. If something did change in the document and it has a newer version, Elasticsearch will signal it to you so you can deal with it appropriately.
Going back to the t-shirt voting example above, this is how it plays out. When we render a page about a shirt design, we note down the current version of the document. It is returned with the response of the get request we do for the page:
curl -XGET 'http://localhost:9200/designs/shirt/1'
{
"_index": "designs",
"_type": "shirt",
"_id": "1",
"_version": 4,
"exists": true,
"_source": {
"name": "elasticsearch",
"votes": 1002
}
}
After the user has cast her vote, we can instruct Elasticsearch to only index the new value (1003) if nothing has changed in the meantime (note the extra version query string parameter):
curl -XPOST 'http://localhost:9200/designs/shirt/1?version=4' -d'
{
"name": "elasticsearch",
"votes": 1003
}'
Internally, all Elasticsearch has to do is compare the two version numbers. This is much lighter than acquiring and releasing a lock. If no one changed the document, the operation will succeed with a status code of 200 OK. However, if someone did change the document (thus increasing its internal version number), the operation will fail with a status code of 409 Conflict. Our website can now respond correctly: it retrieves the new document, increases the vote count and tries again using the new version value. Chances are this will succeed. If it doesn't, we simply repeat the procedure.
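As a rough sketch of that retry loop in shell (not production code; extracting fields from the JSON with sed is only for illustration, a real application would use a proper client):
# keep retrying the optimistic update until Elasticsearch accepts it
while true; do
  doc=$(curl -s 'http://localhost:9200/designs/shirt/1')
  version=$(echo "$doc" | sed -n 's/.*"_version": *\([0-9]*\).*/\1/p')
  votes=$(echo "$doc" | sed -n 's/.*"votes": *\([0-9]*\).*/\1/p')
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    -XPOST "http://localhost:9200/designs/shirt/1?version=$version" \
    -d "{\"name\": \"elasticsearch\", \"votes\": $((votes + 1))}")
  [ "$status" = "200" ] && break   # 409 means someone else won the race; fetch and retry
done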
This pattern is so common that Elasticsearch's update endpoint can do it for you. You can set the retry_on_conflict parameter to tell it to retry the operation in the case of version conflicts. It is especially handy in combination with a scripted update. For example, this cURL command tells Elasticsearch to retry the update up to 5 times before finally failing:
curl -XPOST 'http://localhost:9200/designs/shirt/1/_update?retry_on_conflict=5' -d'
{
"script" : "ctx._source.votes += 1"
}'
Note that the versioning check is completely optional. You can choose to enforce it while updating certain fields (like votes) and ignore it when you update others (typically text fields, like name). It all depends on the requirements of your application and your tradeoffs.
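For example, a simple rename could go through the update API as a partial document, with no version parameter at all (the new name here is made up for illustration):
curl -XPOST 'http://localhost:9200/designs/shirt/1/_update' -d'
{
"doc": { "name": "elasticsearch rocks" }
}'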
Already have a versioning system in place?
Alongside its internal support, Elasticsearch plays well with document versions maintained by other systems. For example, you may have your data stored in another database which maintains versioning for you, or you may have some application-specific logic that dictates how versioning should behave. In these situations you can still use Elasticsearch's versioning support, instructing it to use an external version type. Elasticsearch will work with any numerical versioning system (in the 1 to 2^63-1 range) as long as it is guaranteed to go up with every change to the document.
To tell Elasticsearch to use external versioning, add a version_type parameter along with the version parameter in every request that changes data. For example:
curl -XPOST 'http://localhost:9200/designs/shirt/1?version=526&version_type=external' -d'
{
"name": "elasticsearch",
"votes": 1003
}'
Maintaining versioning somewhere else means Elasticsearch doesn't necessarily know about every change to it. That has subtle implications for how versioning is implemented.
Consider the indexing command above. With internal versioning, it would mean "only index this document update if its current version is equal to 526". If the version matches, Elasticsearch increases it by one and stores the document. However, with an external versioning system this is a requirement we can't enforce. Maybe that versioning system doesn't increment by one every time. Maybe it jumps by arbitrary amounts (think time-based versioning). Or maybe it is hard to communicate every single version change to Elasticsearch. For all of those reasons, the external versioning support behaves slightly differently.
With version_type set to external, Elasticsearch will store the version number as given and will not increment it. Also, instead of checking for an exact match, Elasticsearch will only return a version collision error if the version currently stored is greater than or equal to the one in the indexing command. This effectively means "only store this information if no one else has supplied the same or a more recent version in the meantime". Concretely, the above request will succeed if the stored version number is smaller than 526. A stored version of 526 or above will cause the request to fail.
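For example (the numbers are made up), a later change carrying a higher external version is accepted and stored under that version, while replaying version 526 or anything lower comes back with 409 Conflict:
curl -XPOST 'http://localhost:9200/designs/shirt/1?version=527&version_type=external' -d'
{
"name": "elasticsearch",
"votes": 1004
}'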
Important: when using external versioning, make sure you always add the current version (and version_type) to any index, update or delete call. If you forget, Elasticsearch will use its internal versioning to process that request, which will cause the version to be incremented erroneously.
Some final words about deletes
Deleting data is problematic for a versioning system. Once the data is gone, there is no way for the system to correctly know whether new requests are out of date or actually contain new information. For example, say we run the following to delete a record:
curl -XDELETE 'http://localhost:9200/designs/shirt/1?version=1000&version_type=external'
That delete operation was version 1000 of the document. If we simply threw away everything we know about it, a subsequent request that arrives out of order would do the wrong thing:
curl -XPOST 'http://localhost:9200/designs/shirt/1/?version=999&version_type=external' -d'
{
"name": "elasticsearch",
"votes": 3001
}'
If we were to forget that the document ever existed, we would just accept this call and create a new document. However, the version of the operation (999) actually tells us that this is old news and the document should stay deleted.
Easy, you may say: don't actually forget everything, but keep a record of the delete operations, the doc ids they referred to and their versions. While that does indeed solve the problem, it comes with a price. We would soon run out of resources if people repeatedly indexed documents and then deleted them.
Elasticsearch strikes a balance between the two. It does keep records of deletes, but forgets about them after a minute. This is called deletes garbage collection. For most practical use cases, 60 seconds is enough for the system to catch up and for delayed requests to arrive. If this doesn't work for you, you can change it by setting index.gc_deletes on your index to some other time span.
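For example, this sketch raises it to five minutes using the index settings API (the value is only an illustration; pick whatever fits your ingestion patterns):
curl -XPUT 'http://localhost:9200/designs/_settings' -d'
{
"index.gc_deletes": "5m"
}'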