txtai builds an AI-powered index over sections of text. It supports building text indices to run similarity searches and to create extractive question-answering systems.
NeuML uses txtai and/or the concepts behind it to power all of our Natural Language Processing (NLP) applications. Example applications:
- paperai - AI-powered literature discovery and review engine for medical/scientific papers
- tldrstory - AI-powered understanding of headlines and story text
- neuspo - Fact-driven, real-time sports event and news site
- codequestion - Ask coding questions directly from the terminal
txtai is built on the following stack:
- sentence-transformers
- transformers
- faiss
- Python 3.6+
The easiest way to install is via pip and PyPI:

```
pip install txtai
```
You can also install txtai directly from GitHub. Using a Python Virtual Environment is recommended.

```
pip install git+https://github.com/neuml/txtai
```
Python 3.6+ is supported.
This project has dependencies that require compiling native code. Windows and macOS systems require the following additional steps. Most Linux environments will install without any additional steps.

Windows:

- Install C++ Build Tools - https://visualstudio.microsoft.com/visual-cpp-build-tools/
- PyTorch Windows binaries are not on PyPI; the following URL must be added when installing:

  ```
  pip install txtai -f https://download.pytorch.org/whl/torch_stable.html
  ```

  See pytorch.org for more information.

macOS:

- Run the following before installing:

  ```
  brew install libomp
  ```

  See this link for more information.
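Once installed, the following sketch (based on the introductory notebook; the model path and example text are illustrative) shows a basic similarity search:

```python
from txtai.embeddings import Embeddings

# Create an embeddings model backed by sentence-transformers
embeddings = Embeddings({"method": "transformers",
                         "path": "sentence-transformers/bert-base-nli-mean-tokens"})

sections = ["US tops 5 million confirmed virus cases",
            "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
            "Maine man wins $1M from $25 lottery ticket"]

# Index a list of (id, text, tags) tuples
embeddings.index([(uid, text, None) for uid, text in enumerate(sections)])

# search returns a list of (id, score); print the best match
uid = embeddings.search("public health story", 1)[0][0]
print(sections[uid])
```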
The examples directory has a series of notebooks giving an overview of txtai. See the list of notebooks below.
Notebook | Description
---|---
Introducing txtai | Overview of the functionality provided by txtai
Extractive QA with txtai | Extractive question-answering with txtai
Build an Embeddings index from a data source | Embeddings index from a data source backed by word embeddings
Extractive QA with Elasticsearch | Extractive question-answering with Elasticsearch
The following section goes over available settings for Embeddings and Extractor instances.
Embeddings settings are passed through the constructor. Examples below.
```python
# Transformers embeddings model
Embeddings({"method": "transformers",
            "path": "sentence-transformers/bert-base-nli-mean-tokens"})

# Word embeddings model
Embeddings({"path": vectors,
            "storevectors": True,
            "scoring": "bm25",
            "pca": 3,
            "quantize": True})
```
method: transformers|words
Sets the sentence embeddings method to use. When set to transformers, the embeddings object builds sentence embeddings using the sentence-transformers library. Otherwise, a word embeddings model is used. Defaults to words.
path: string
Required field that sets the path for a vectors model. When method is set to transformers, this must be a path to a Hugging Face transformers model. Otherwise, it must be a path to a local word embeddings model.
storevectors: boolean
Enables copying of the vectors model set in path into the embeddings model's output directory on save. This option enables a fully encapsulated index with no external file dependencies.
scoring: bm25|tfidf|sif
For word embedding models, a scoring model allows building weighted averages of word vectors for a given sentence. Supports BM25, tf-idf and SIF (smooth inverse frequency) methods. If a scoring method is not provided, mean sentence embeddings are built.
pca: int
Removes n principal components from generated sentence embeddings. When enabled, a TruncatedSVD model is built to help with dimensionality reduction. This step is applied after pooling creates a single sentence embedding.
backend: annoy|faiss|hnsw
Approximate Nearest Neighbor (ANN) index backend for storing generated sentence embeddings. Defaults to Faiss for Linux/macOS and Annoy for Windows. Faiss currently is not supported on Windows.
quantize: boolean
Enables quantization of generated sentence embeddings. If the index backend supports it, sentence embeddings will be stored with 8-bit precision vs 32-bit. Only Faiss currently supports quantization.
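To show how these settings fit together, here is a hedged sketch that builds, saves and reloads a scoring-weighted word embeddings index. The vectors path and data are placeholders, and the tokenize/score flow follows the data source notebook and may differ by version:

```python
from txtai.embeddings import Embeddings
from txtai.tokenizer import Tokenizer

sections = ["US tops 5 million confirmed virus cases",
            "Maine man wins $1M from $25 lottery ticket"]

# Word embeddings models work over tokenized text
documents = [(uid, Tokenizer.tokenize(text), None) for uid, text in enumerate(sections)]

# "vectors.magnitude" is a placeholder path to a local word embeddings model
embeddings = Embeddings({"path": "vectors.magnitude",
                         "storevectors": True,
                         "scoring": "bm25",
                         "pca": 3,
                         "quantize": True})

# Build the BM25 scoring index first, then the ANN index
embeddings.score(documents)
embeddings.index(documents)

# Save a fully encapsulated index - storevectors copies the vectors model in
embeddings.save("index")

# Reload later with no external file dependencies
embeddings = Embeddings()
embeddings.load("index")
```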
Extractor parameters are set as constructor arguments. Example below.
```python
Extractor(embeddings, path, quantize)
```
embeddings: Embeddings object instance
Embeddings object instance. Used to query and find candidate text snippets to run the question-answer model against.
path: string
Required path to a Hugging Face SQuAD fine-tuned model. Used to answer questions.
quantize: boolean
Enables dynamic quantization of the Hugging Face model. This is a runtime setting and doesn't save space. It is used to improve the inference time performance of the QA model.
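Putting it all together, below is a hedged sketch of extractive QA. The model path, data and the (name, query, question, snippet) queue format follow the Extractive QA notebook and may differ by version:

```python
from txtai.embeddings import Embeddings
from txtai.extractor import Extractor

# Embeddings instance used to find candidate text snippets
embeddings = Embeddings({"method": "transformers",
                         "path": "sentence-transformers/bert-base-nli-mean-tokens"})

# SQuAD fine-tuned model used to answer questions over the candidates
extractor = Extractor(embeddings, "distilbert-base-cased-distilled-squad")

sections = ["Giants hit 3 HRs to down Dodgers",
            "Giants 5 Dodgers 4 final",
            "Dodgers drop Game 2 against the Giants, 5-4"]

# Queue of (name, query, question, snippet) tuples
queue = [("Winner", "team won", "What team won the game?", False),
         ("Score", "final score", "What was the final score?", False)]

# Returns a list of (name, answer) pairs
for name, answer in extractor(queue, sections):
    print(name, answer)
```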