Overview

Natural language processing (NLP) refers to the use of software to understand natural language in spoken word or written text.

Classically, NLP was performed using linguistic rules, dictionaries, regular expressions, and machine learning for specific tasks such as automatic categorization or summarization of text. In recent years, however, deep learning techniques have taken over much of the NLP landscape. Deep learning capitalizes on the availability of large-scale data sets, cheap computation, and techniques for learning at scale with less human involvement.

Pre-trained language models that use a transformer architecture have been particularly successful. For example, BERT is a pre-trained language model that Google released in 2018 and that has since inspired most of today’s modern NLP techniques. The Elastic Stack machine learning features are structured around BERT and transformer models: they support BERT’s tokenization scheme (called WordPiece) and transformer models that conform to the standard BERT model interface. For the current list of supported architectures, refer to Third party NLP models.
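To illustrate what WordPiece tokenization looks like, here is a minimal sketch using the Hugging Face transformers library and the public bert-base-uncased checkpoint; both are illustrative choices for this example and are not part of the Elastic Stack:

```python
# A minimal sketch of BERT's WordPiece tokenization, using the Hugging Face
# "transformers" library and the "bert-base-uncased" checkpoint (both are
# illustrative assumptions, not part of the Elastic Stack).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Words missing from the model's fixed vocabulary are split into subword
# pieces; continuation pieces are prefixed with "##".
print(tokenizer.tokenize("Tokenization with WordPiece"))
```

Because every word can be broken down into known subword pieces, a WordPiece tokenizer can represent arbitrary input text with a fixed-size vocabulary, which is what makes the scheme practical for pre-trained models like BERT.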

To incorporate transformer models and make predictions, Elasticsearch uses libtorch, the underlying native library for PyTorch. Trained models must be in a TorchScript representation for use with Elastic Stack machine learning features.
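As a sketch of what that conversion step involves, the following traces a Hugging Face BERT model into TorchScript with PyTorch. The transformers library, the bert-base-uncased checkpoint, and the output file name are illustrative assumptions; in practice, tooling such as Elastic's eland client can automate the conversion and upload:

```python
# A minimal sketch of producing a TorchScript representation of a BERT model
# with torch.jit.trace. Library, checkpoint, and file name are illustrative.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# torchscript=True makes the model return plain tuples, which tracing requires.
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

# Trace the model with a representative input to record a static graph.
inputs = tokenizer("Sample text for tracing.", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))

# Serialize the TorchScript model; an artifact like this is what libtorch loads.
torch.jit.save(traced, "traced_bert.pt")
```

Tracing records the operations executed on the sample input into a static graph, which is why the resulting TorchScript model can run in libtorch without a Python interpreter.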

As with classification and regression, after you deploy a model to your cluster, you can use it to make predictions (also known as inference) against incoming data. You can perform the following NLP operations: