Semantic search with the inference API
Semantic search helps you find data based on the intent and contextual meaning of a search query, instead of a match on query terms (lexical search).
In this tutorial, learn how to use the inference API workflow with various services to perform semantic search on your data.
Amazon Bedrock <amazon-bedrock.html>
Azure AI Studio <azure-ai-studio.html>
Azure OpenAI <azure-openai.html>
Cohere <cohere.html>
ELSER <elser.html>
HuggingFace <#>
Mistral <#>
OpenAI <#>
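Whichever service you choose, the tutorial workflow has the same three steps: create an inference endpoint, ingest documents through a pipeline that generates embeddings, then run a semantic (kNN) query. The sketch below shows the shape of those three requests; the inference ID `cohere_embeddings`, the pipeline and index names, and the field names are illustrative assumptions, not values from this page.

```python
import json

# Sketch of the three requests in the inference API workflow.
# Names such as "cohere_embeddings", "embeddings_pipeline", and
# "my-index" are placeholders; substitute your own.
requests = [
    # 1. Create an inference endpoint backed by an embedding service.
    ("PUT", "_inference/text_embedding/cohere_embeddings", {
        "service": "cohere",
        "service_settings": {
            "api_key": "<api-key>",
            "model_id": "embed-english-v3.0",
        },
    }),
    # 2. Create an ingest pipeline that calls the endpoint per document.
    ("PUT", "_ingest/pipeline/embeddings_pipeline", {
        "processors": [{
            "inference": {
                "model_id": "cohere_embeddings",
                "input_output": {
                    "input_field": "content",
                    "output_field": "content_embedding",
                },
            }
        }]
    }),
    # 3. Search the index with kNN against the embedding field.
    ("POST", "my-index/_search", {
        "knn": {
            "field": "content_embedding",
            "k": 10,
            "num_candidates": 100,
            "query_vector_builder": {
                "text_embedding": {
                    "model_id": "cohere_embeddings",
                    "model_text": "example query",
                },
            },
        }
    }),
]

for method, path, body in requests:
    print(method, path)
    print(json.dumps(body, indent=2))
```

The same shape applies to the other services; only the `service` value and `service_settings` change.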
Model details
The examples in this tutorial use Cohere's embed-english-v3.0 model, the all-mpnet-base-v2 model from HuggingFace, and OpenAI's text-embedding-ada-002 second-generation embedding model.
You can use any Cohere or OpenAI model; they are all supported by the inference API.
For a list of recommended models available on HuggingFace, refer to the supported model list.
Azure-based examples use models available through Azure AI Studio or Azure OpenAI.
Mistral examples use the mistral-embed model from the Mistral API. Amazon Bedrock examples use the amazon.titan-embed-text-v1 model from the Amazon Bedrock base models.
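Each model above is selected when you create the inference endpoint, via the `model_id` in `service_settings`. As a minimal sketch, the helper below builds that request body; the service identifiers passed in are assumptions based on the provider names, so check the inference API reference for the exact values your deployment expects.

```python
import json

def embedding_endpoint_body(service: str, model_id: str,
                            api_key: str = "<api-key>") -> str:
    """Build the JSON body for PUT _inference/text_embedding/<inference-id>.

    A hedged sketch: real services may require additional
    service_settings (e.g. region or deployment names for Azure
    and Amazon Bedrock).
    """
    return json.dumps({
        "service": service,
        "service_settings": {
            "api_key": api_key,
            "model_id": model_id,
        },
    }, indent=2)

# The embedding models named in this tutorial:
print(embedding_endpoint_body("cohere", "embed-english-v3.0"))
print(embedding_endpoint_body("openai", "text-embedding-ada-002"))
print(embedding_endpoint_body("mistral", "mistral-embed"))
```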
Tip: Not seeing the tutorial? Select a service above to get started.