AGENTIC ENGINEERING

Orkes Conductor Embeddings Explained: The Tasks Behind Semantic Search & AI Workflows

Maria Shimkovska
Content Engineer
Last updated: November 25, 2025
5 min read

Here’s a quick and easy rundown of how to use Orkes Conductor’s LLM embedding tasks to turn your text into vectors, store them in a database, and use them for things like semantic search, recommendations, and smarter routing in your workflows.


TL;DR
  • LLM Generate Embeddings - Turns your text into numerical representations (“vectors”) that capture meaning.
  • LLM Store Embeddings - Saves those vectors in a special database so you can look them up quickly later.
  • LLM Get Embeddings - Searches that database to find the data most similar in meaning to what you’re looking for.

Cover illustration showing the three LLM Embeddings tasks Orkes Conductor offers.

If you've read our recent articles on vector embeddings and vector databases, you may be wondering which tasks are available for working with embeddings in your workflows. This article covers exactly that.

But first, a quick refresher: what are embeddings, really?

Embeddings turn text into numerical vectors (for example, the word "password" might become something like [2.3, 0.5, 1.3, ...]) that capture meaning. That lets your workflows do smarter things, like:

  • help users find the right document even if they search with the “wrong” words,
  • detect similar support tickets or bug reports,
  • match customers to the right product or piece of content,
  • identify anomalies or suspicious behavior based on past patterns.
Quick Definition

"An embedding is a vector (list) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness." - OpenAI

Embeddings are also used in both semantic search and semantic routing. Both rely on semantic meaning (the intent or idea behind words, what someone truly means rather than the exact words they say).

  • Semantic search is the process of finding the most relevant results based on meaning. For example, a user might search for “reset login credentials”, and semantic search can surface documents titled “How to change your password”, because the meaning is similar, even though none of the words match.
  • Semantic routing uses embeddings to automatically decide where a workflow, message, or user request should go based on its meaning, like directing a support ticket to the correct team (billing, technical, refunds) even if the user doesn’t use those words.
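The semantic routing idea above can be sketched as a nearest-neighbor lookup: embed each destination's description once, then send each incoming request to the destination whose embedding is most similar. The team names and toy vectors here are hypothetical placeholders standing in for real embeddings.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical pre-computed embeddings of each team's description (toy 3-D vectors)
TEAM_EMBEDDINGS = {
    "billing":   [0.9, 0.1, 0.2],
    "technical": [0.1, 0.8, 0.3],
    "refunds":   [0.7, 0.2, 0.6],
}

def route_ticket(ticket_embedding):
    """Route to the team whose description embedding is most similar to the ticket's."""
    return max(TEAM_EMBEDDINGS,
               key=lambda team: cosine_similarity(ticket_embedding, TEAM_EMBEDDINGS[team]))

# A ticket like "why was my card charged twice?" might embed near the billing vector
print(route_ticket([0.85, 0.15, 0.25]))  # -> "billing"
```

In a Conductor workflow, the embedding step would be handled by the tasks described below, with the routing decision feeding a switch task or similar.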

Here are the three Orkes Conductor tasks that make it easy to work with embeddings in your workflows.

LLM Generate Embeddings

Check out the docs

This task takes any text you give it (like user queries, product descriptions, support tickets, logs, or documents) and converts it into an embedding that represents its meaning in numbers.

For example, if you have a workflow with a user input "I want to change my password", the task generates an embedding for that sentence and returns it in the result field, like this:

```json
{
  "result": [
    0.015542618,
    -0.042827383,
    0.005124084,
    0.017521484,
    -0.0012842973,
    -0.0012122194,
    0.022370363,
    ... // Ellipses here because the vector is a long list
    0.017246278,
    0.038476497,
    -0.025659736,
    0.01224014,
    0.027992439,
    0.0032615254,
    0.003109179
  ]
}
```

The above list of numbers represents the vector’s dimensions. Each number is one dimension of the embedding.

From there, you can store the embedding in a vector database of your choice as part of your workflow.
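In a workflow definition, the task might look roughly like this. This is a sketch based on the documented task type; the provider name, model name, and input expression are illustrative placeholders, so check the docs for the exact fields your integration expects.

```json
{
  "name": "generate_embedding",
  "taskReferenceName": "generate_embedding_ref",
  "type": "LLM_GENERATE_EMBEDDINGS",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "text-embedding-ada-002",
    "text": "${workflow.input.userQuery}"
  }
}
```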

LLM Store Embeddings

Check out the docs

Once you’ve generated your embeddings, this task lets you save them to a vector database so you can actually use them.

Store the embeddings in a vector database (Conductor has integrations with Pinecone, Weaviate, Postgres, and MongoDB) so your workflows can efficiently index and search through large volumes of text or data.

Feel free to look into how to integrate each one in the docs as well. Each database has its own quirks, but the docs walk you through exactly how to set up whichever one you choose.
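As a rough sketch, a store task references your vector database integration and the index to write to. The integration, index, and namespace names below are hypothetical, and the exact input parameters vary by database, so treat this as a shape to adapt from the docs rather than a copy-paste definition.

```json
{
  "name": "store_embedding",
  "taskReferenceName": "store_embedding_ref",
  "type": "LLM_STORE_EMBEDDINGS",
  "inputParameters": {
    "vectorDB": "my_pinecone_integration",
    "index": "support_docs",
    "namespace": "tickets"
  }
}
```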

LLM Get Embeddings

Check out the docs

Retrieve the closest or most relevant embeddings for tasks like semantic search (e.g., “find me similar errors”), recommendations (“customers who wrote this review also liked…”), or intelligent routing.
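A retrieval task follows the same pattern, pointing at the integration and index to search. Again, the names here are hypothetical placeholders, and the expression wiring in the output of an earlier generate task is an assumption to verify against the docs.

```json
{
  "name": "get_embeddings",
  "taskReferenceName": "get_embeddings_ref",
  "type": "LLM_GET_EMBEDDINGS",
  "inputParameters": {
    "vectorDB": "my_pinecone_integration",
    "index": "support_docs",
    "namespace": "tickets",
    "embeddings": "${generate_embedding_ref.output.result}"
  }
}
```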

The difference between LLM Get Embeddings and LLM Search Index

While going through the docs you may have noticed that there is a task called LLM Search Index, which is used to search a vector database as well.

The difference between the two tasks is:

  • LLM Get Embeddings searches with an existing embedding. You use this task when you already have an embedding (e.g., from LLM Generate Embeddings) and you want to avoid regenerating it or you want to run multiple lookups against the same vector.
  • LLM Search Index searches with raw text. It generates an embedding for that raw text using a model you specify and then searches the vector database using that embedding.
| Task | You provide | What it does | Best for |
| --- | --- | --- | --- |
| LLM Search Index | Natural-language text query | Generates a new embedding and searches the vector DB | Semantic search from plain text |
| LLM Get Embeddings | An existing embedding vector | Searches the vector DB using that exact embedding | Reusing embeddings or performing repeated lookups |

Conclusion

Embeddings open the door to smarter workflows, from semantic search to recommendations to intelligent routing (also referred to as semantic routing). With Conductor’s built-in tasks for generating, storing, and retrieving embeddings, you can add these capabilities quickly.

Whether you want to experiment, extend an existing workflow, or build something new, Conductor gives you a straightforward way to work with embeddings at any scale. Try it out and see how quickly you can turn ideas into real, production-ready AI features.