AGENTIC ANNOUNCEMENTS

GPT-5.2 Is Here—Now Put It to Work in Orkes Conductor

Maria Shimkovska
Content Engineer
December 15, 2025
5 min read

GPT-5.2 just shipped. Here’s how to plug it into Orkes Conductor and start running real agentic workflows (with guardrails, observability, and the flexibility to switch models anytime).


GPT-5.2 is officially out (OpenAI started rolling it out on Dec 11, 2025), and it’s positioned as the new flagship for both general work and multi-step agentic tasks.

In this post, you’ll learn how to use GPT-5.2 with Orkes Conductor to build production-grade agentic workflows with little to no code. In fact, if you set aside the fine-tuning features OpenAI rolled out, you don’t need any code at all with Conductor. A follow-up article will cover how to plug in GPT-5.2 using the fine-tuning side of the model, so stay tuned for that.

Bonus: we’ll also chat about model flexibility, so you can see how to swap LLM models without rewriting your orchestration, and why that matters now and will keep mattering as models evolve.

What’s new in GPT-5.2 (and why it matters for agentic workflows)

OpenAI describes GPT-5.2 as its best general-purpose model, with improvements vs. GPT-5.1 in instruction following, accuracy/token efficiency, multimodality (especially vision), code generation (notably front-end UI), tool calling/context management, and spreadsheet understanding/creation. Pretty cool stuff.

You’ll also see new “agent-friendly” controls that matter when you’re running LLMs inside orchestrated workflows:

  • Reasoning effort, including a new xhigh level (useful when you want to “spend compute” only on the hard steps).
  • Verbosity control (dial response length without rewriting prompts—handy for tool outputs vs. user-facing outputs).
  • Allowed tools list (a practical guardrail for tool-using agents).
  • Compaction for long-running work (helps manage context over time).

Which GPT-5.2 model should you pick?

Here is OpenAI’s current guidance:

  • gpt-5.2: complex reasoning, broad world knowledge, code-heavy or multi-step agentic tasks
  • gpt-5.2-pro: tougher problems that may take longer but benefit from “harder thinking”
  • gpt-5.2-chat-latest: the ChatGPT-powered variant

(And if you’re building an interactive coding product specifically, OpenAI still points to gpt-5.1-codex-max as the coding-optimized option.)
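That guidance boils down to a simple routing table. The sketch below is illustrative only: the task-profile labels are invented, and the model IDs should be checked against OpenAI’s current model list before use.

```python
def choose_model(task_profile: str) -> str:
    """Pick a GPT-5.2 variant for a task profile, per OpenAI's guidance.

    The profile labels here are invented for illustration.
    """
    routes = {
        "agentic": "gpt-5.2",            # complex reasoning, multi-step work
        "hard": "gpt-5.2-pro",           # slower, benefits from "harder thinking"
        "chat": "gpt-5.2-chat-latest",   # the ChatGPT-powered variant
        "coding_product": "gpt-5.1-codex-max",  # interactive coding products
    }
    return routes.get(task_profile, "gpt-5.2")  # reasonable default
```

In a Conductor workflow, this kind of routing lives in configuration rather than code, which is exactly what makes switching cheap later.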

How to Integrate GPT-5.2 into Orkes Conductor

Getting GPT-5.2 running in Orkes Conductor is intentionally simple. You don’t need to change your existing workflows or write new orchestration code—you just add the model as a provider and start using it.

Step 1: Get your OpenAI API key

If you don’t already have one, create an OpenAI account and generate an API key from the dashboard. This key is what allows Conductor to securely call GPT-5.2 on your behalf.

Step 2: Create an OpenAI integration in Conductor

In Orkes Conductor, create a new OpenAI integration and paste in your API key. From there, you can register one or more models under that integration, such as gpt-5.2 and gpt-5.2-pro.

You can also control which teams, services, or environments are allowed to use each model, which is useful for safely testing GPT-5.2 before rolling it out broadly across your environments and workflows.

Step 3: Use GPT-5.2 in your workflows

Once the integration is set up, GPT-5.2 becomes available as a drop-in option in Conductor’s LLM tasks (like LLM Chat Complete). From the workflow’s point of view, nothing else changes. You’re simply selecting a different model.

This means you can test GPT-5.2 in an existing workflow, compare it side-by-side with another model, and roll it into production very quickly.
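To make that concrete, here’s a sketch of what an LLM Chat Complete task might look like inside a workflow definition, written as a Python dict for readability. The field names approximate Orkes’ documented task schema and may differ from the exact shape; `openai_integration` is a placeholder for whatever you named the integration in Step 2.

```python
# Sketch of an LLM Chat Complete task in a Conductor workflow definition.
# Field names approximate the Orkes schema; check the Conductor docs for
# the exact shape. "openai_integration" is a placeholder integration name.
llm_task = {
    "name": "summarize_ticket",
    "taskReferenceName": "summarize_ticket_ref",
    "type": "LLM_CHAT_COMPLETE",
    "inputParameters": {
        "llmProvider": "openai_integration",
        "model": "gpt-5.2",  # swapping models means changing only this value
        "messages": [
            {"role": "user", "message": "${workflow.input.ticket_text}"}
        ],
        "temperature": 0.2,
    },
}
```

Note how the model is just one input parameter: comparing GPT-5.2 against another model is a one-line change, not a rewrite.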

Model Flexibility: Why Easy Switching Matters (and the Models We Support)

AI models change fast. New models ship, older ones get deprecated, pricing shifts, and performance improves in different areas over time. Locking your workflows to a single model is a short-term decision that quickly becomes a long-term risk.

That’s why Orkes Conductor is designed to treat models as replaceable components, not hard-coded dependencies.

Models you can run today

In addition to GPT-5.2, Orkes Conductor supports multiple LLM providers and model families, including:

  • OpenAI models (GPT-5.2, GPT-5.2-pro, and others)
  • Azure OpenAI deployments
  • Anthropic models
  • Google models
  • Open-source and self-hosted LLMs

You can mix and match these in the same platform—and even within the same workflow.

Why this flexibility matters

With Conductor, switching models is as simple as changing configuration, not rewriting logic. That design pays off in a few concrete ways when you’re building and maintaining workflows:

  • Future-proofing: When a model is deprecated or a better one appears, you can switch with minimal effort.
  • Risk reduction: Add fallback models so your workflows keep running if a provider has an outage or a model becomes unavailable.
  • Cost and performance control: Route simple tasks to cheaper models and complex reasoning to more powerful ones.
  • Zero user impact: Your users don’t notice when models change—your workflows continue to behave the same way.
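Fallback routing is worth sketching because it’s the piece teams most often skip. In Conductor you’d express this with the workflow’s retry and branching features; the plain-Python sketch below just shows the idea, with `claude-sonnet` standing in as a hypothetical second provider and `fake_call` as a stubbed client.

```python
def call_with_fallback(prompt, models, call):
    """Try each model in order until one succeeds.

    `call` is any function (model, prompt) -> str that raises on
    provider failure. Returns (model_used, response).
    """
    last_err = None
    for model in models:
        try:
            return model, call(model, prompt)
        except RuntimeError as err:
            last_err = err  # remember the failure, try the next model
    raise RuntimeError(f"all models failed: {last_err}")

# Stubbed provider where the primary model is "down":
def fake_call(model, prompt):
    if model == "gpt-5.2":
        raise RuntimeError("provider outage")
    return f"[{model}] ok"

used, reply = call_with_fallback("hi", ["gpt-5.2", "claude-sonnet"], fake_call)
# used == "claude-sonnet": the workflow kept running through the outage
```

The same chain-of-models idea is what "risk reduction" above means in practice: the caller never sees the outage, only a response from whichever model answered.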