Technology has always been the linchpin of progress, from the first ships to the Industrial Revolution to the computer age. The time between successive technological waves has been shortening, and in the past few years, we have seen explosive growth in Artificial Intelligence (AI) and its capabilities.
While AI has been the epicenter of most tech buzz, it is just one tailwind in the wider trajectory toward greater automation in computer-based industries. In this article, we explore the latest trends in AI, APIs, and automation and how they can impact your business: AI as the frontrunner, APIs as the technical bedrock accelerating these trends, and automation as the overarching motivation behind the latest developments in the tech and business landscape.
The viral release of OpenAI’s ChatGPT in late 2022 sparked a huge wave of excitement, as millions of people experimented with what it could do and what it meant for the future of work. Chatbots like ChatGPT are a form of artificial intelligence known as Generative AI (GenAI), which can produce naturalistic images, text, or even videos based on a given prompt.
At the core of GenAI are AI programs known as large language models (LLMs), which are trained on vast amounts of data in order to produce human-like output. Now, in 2024, ChatGPT is just one of thousands of LLMs available for use. From enterprise offerings like Google Gemini, Amazon Bedrock, and Anthropic’s Claude to the 650K+ open-source models hosted on Hugging Face (as of May 2024), there is no shortage of choice when it comes to using GenAI as a copilot for work.
Unsurprisingly, the tech sector has been among the first to adopt GenAI at work. Companies big and small have been using LLM chatbots like ChatGPT to produce, rewrite, and debug code, accelerating developer productivity. A survey by GitHub found that 92% of US-based developers use AI coding tools at work, and 70% see benefits in using them.
Early entrants in AI code completion include Tabnine and, more recently, GitHub Copilot and Google’s CodeGemma.
But beyond using GenAI for writing code, there are many more possibilities for AI-powered software development. At the keynote for the 2024 Apidays Singapore conference, Manjunath Bhat (VP Analyst at Gartner) remarked that AI could also play a role in other high-impact avenues, like suggesting what should be built or explaining legacy or indecipherable code.
Thus far, there is a fast-expanding array of AI tools for automating software tests, generating documentation, and answering developer questions.
Today, the average person may ask ChatGPT to summarize an essay, provide travel itineraries, or solve logic puzzles. Such chatbots are based on general-purpose models that can handle a gamut of tasks across domains.
While generic models have the advantage of being able to handle — with varying success — any task right away with no training required, they often fall short when the user requires a more accurate or complex response. Issues like hallucination, bias, or inaccuracies are some of the biggest challenges to implementing and scaling AI in a highly specialized business context.
Domain-specific models
A variety of domain-specific and multilingual models have entered the scene to address the limitations of generic models. These specialized models are trained with domain-specific datasets, allowing the model to learn and perform far better on specialized tasks.
Google’s Med-PaLM 2 model was trained on curated medical datasets, becoming the first LLM to perform at an expert level on US medical exam questions, scoring over 85% accuracy. Such domain-specific LLMs are the first crest in the next wave of AI development, where LLMs can be safely deployed in real-world contexts like assisting with tuberculosis diagnoses or analyzing past legal cases.
Already, there are LLMs like BloombergGPT and FinGPT for finance, ChatLaw for law, and ClimateBERT for climate and environment.
Plug-ins and API connectivity
Besides specialized models, another way to power up general models is through plug-ins or connectivity with APIs.
LLM plug-ins extend the model’s capabilities. Through API calls, these plug-ins can provide access to external databases or execute third-party tasks like making a hotel reservation. In this way, plug-ins enable LLMs to handle more complex requests, like acting as a personal shopper or creating a financial report based on the latest data.
While OpenAI retired ChatGPT’s plug-ins after a year of service, the race is still on to hook LLMs up with APIs. Ongoing projects like Gorilla (UC Berkeley) and RestGPT (Peking University) aim to build LLM-powered systems that can carry out real-world actions based on a command, like “create a Spotify playlist” or “book a flight to Paris”.
In these frameworks, the LLM acts as a switchboard operator: it knows the right API to call when prompted, then formulates the API request and parses the response for the user. API connectivity promises a new breakthrough for LLMs: beyond being just a conversational partner, an LLM can act and carry out tasks in the real world, bringing us one step closer to natural language interfaces.
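The switchboard pattern above can be sketched in a few lines. Here, a simple keyword matcher stands in for the LLM’s routing step; it maps a natural-language command to a registered API and builds a ready-to-send request. The endpoints and catalog below are hypothetical, purely for illustration.

```python
# Hypothetical catalog of callable APIs the "switchboard" can choose from.
API_CATALOG = {
    "create_playlist": {
        "method": "POST",
        "url": "https://api.example-music.com/v1/playlists",  # hypothetical endpoint
        "keywords": ["playlist"],
    },
    "book_flight": {
        "method": "POST",
        "url": "https://api.example-travel.com/v1/bookings",  # hypothetical endpoint
        "keywords": ["flight", "fly"],
    },
}

def route_command(command: str) -> dict:
    """Pick the API whose keywords match the command and build the request."""
    text = command.lower()
    for name, spec in API_CATALOG.items():
        if any(kw in text for kw in spec["keywords"]):
            return {
                "tool": name,
                "method": spec["method"],
                "url": spec["url"],
                # A real system would have the LLM extract structured arguments here.
                "body": {"query": command},
            }
    raise ValueError("No matching API for command")

request = route_command("Book a flight to Paris")
print(request["tool"])  # book_flight
```

In frameworks like Gorilla or RestGPT, the keyword matching is replaced by the LLM itself, which selects the API and fills in its parameters from the user’s prompt.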
If the 2010s were the decade for the rise of the software-as-a-service (SaaS) model, then the 2020s are all about API as a Product (AaaP). Under a SaaS model, businesses offer their unique services and strengths through web-based applications, like Gmail for mail services or Zoom for video conferencing.
However, in recent years, applications have become increasingly modular, built on microservice-based infrastructure. This means that application functionalities like payments, notifications, or even login credentials are added using APIs. Because APIs expose the capabilities of a service programmatically, it is much easier for developers to build products without having to code everything from scratch.
This change in software development heralds an API-based economy, where products and services are offered and accessed through APIs.
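To make this concrete, here is a minimal sketch of consuming a capability through an API rather than building it in-house: composing the HTTP request for a hypothetical transactional-email service. The endpoint and payload shape are invented for illustration; real providers each define their own.

```python
import json

def build_email_request(to: str, subject: str, body: str) -> dict:
    """Compose an HTTP request for a hypothetical transactional-email API."""
    return {
        "method": "POST",
        "url": "https://api.example-mail.com/v1/send",  # hypothetical endpoint
        "headers": {
            "Authorization": "Bearer <API_KEY>",  # placeholder credential
            "Content-Type": "application/json",
        },
        "payload": json.dumps({"to": to, "subject": subject, "body": body}),
    }

req = build_email_request("user@example.com", "Welcome!", "Thanks for signing up.")
```

A few lines of integration code replace what would otherwise be an in-house mail delivery stack; the provider’s complexity stays hidden behind the interface.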
Stripe, SendGrid, and Twilio are companies that have grown tremendously by selling third-party API functionality to other enterprises. More API offerings are expected to crop up in the coming decade, especially industry-specific APIs for payroll, open banking, government services, and so on.
As the API market grows, Kong's 2023 API Impact Report estimates that APIs will contribute $14.2 trillion to the global economy by 2027, up from $10.9 trillion in 2023.
Along with the economic boom, there has been an explosion of tooling, standards, and platforms to support the growth of API as a Product. With so many APIs entering the market, it has become more difficult to discover and integrate each and every API into a given application.
An API-first economy espouses an interface-first approach: exposing capabilities while hiding complexities. This hallmark of APIs has made them the foundation for the rapid growth of many tech trends, the most prominent of which is AI.
Since most LLMs have only become commercially available in recent years, many have been built with an API-first approach. Most LLMs can be accessed programmatically via APIs, making it convenient for developers to integrate AI capabilities into their applications. In other words, API access has driven the rapid availability of AI-powered tools and features on the market within just the past two years.
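In practice, consuming an LLM via its API usually means sending a JSON payload of chat messages and reading back the generated reply. The sketch below builds such a request following the common chat-completions convention; the URL is a placeholder and exact field names vary by provider, so treat it as illustrative rather than any specific vendor’s API.

```python
import json

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Compose a chat-completion request in the style most LLM APIs accept."""
    return {
        "url": "https://api.llm-provider.example/v1/chat/completions",  # provider-specific
        "headers": {
            "Authorization": "Bearer <API_KEY>",  # placeholder credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_prompt}],
        }),
    }

req = build_chat_request("example-model", "Summarize this article in one sentence.")
```

Because the interface is just HTTP and JSON, adding AI to an existing application is no different from integrating any other API-based service.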
APIs have also served as the backbone for the recent proliferation of no- and low-code application builders like Bubble and Xano. With APIs, users can create applications without having to know a programming language. These tools have also led to the rise of citizen developers, enabling businesses to quickly build applications and dashboards without a large developer team.
All these emerging trends in AI and APIs point to the same undercurrent of opportunity: lower barriers to entry and greater automation of computer-based work.
Much like how the steam engine automated blue-collar work in energy, manufacturing, and logistics, we are now entering a whole new era of automation across sectors, especially in computer-based industries.
Needless to say, AI is one of the key drivers of greater automation. Research by Goldman Sachs suggests that 18% of global work could be automated by AI, with the biggest impact felt in white-collar jobs like administration and business operations.
It is important to note that AI is unlikely to make entire job functions redundant. Rather, repetitive or low-impact tasks can be delegated to AI automation, allowing more time for high-impact or more complex work. For example, in customer support, GenAI-enabled chatbots can interact autonomously with customers to provide basic support and handle common inquiries, while human agents can handle more demanding cases.
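The customer support triage described above boils down to a routing decision. The sketch below uses a keyword heuristic as a stand-in for a real intent classifier; the topic list is invented for illustration.

```python
# Hypothetical set of routine topics the GenAI chatbot can handle on its own.
ROUTINE_TOPICS = {"reset password", "opening hours", "order status"}

def route_ticket(inquiry: str) -> str:
    """Send routine inquiries to the chatbot; escalate the rest to a human."""
    text = inquiry.lower()
    if any(topic in text for topic in ROUTINE_TOPICS):
        return "chatbot"      # handled autonomously by the GenAI assistant
    return "human_agent"      # escalated for more demanding cases

print(route_ticket("How do I reset password for my account?"))  # chatbot
```

In production, the routing step would typically be an intent classifier or the LLM itself, with a confidence threshold for escalating uncertain cases to a human agent.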
Another key catalyst is the growing availability of automation tools, such as RPA (robotic process automation) software and orchestration platforms.
From AI to API to automation tools, we have yet again entered an age where computer-based technology will revolutionize how businesses are run. These trends promise greater productivity and connectivity across systems, allowing people to focus on high-impact work. In 2024, it is more important than ever for businesses to leverage new technology to drive value.
Conductor is an open-source orchestration platform that automates complex or long-running processes, such as AI integration flows, microservice application flows, DevOps processes, transactional flows, and more. With Conductor, developers can build and update durable workflows without the complexities of managing system failures, dependencies, or scalability.
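As a taste of how Conductor workflows are defined, here is a minimal sketch of a workflow definition, expressed as the JSON document (built here as a Python dict) that Conductor accepts: an HTTP task calling a hypothetical LLM service, followed by a worker task. Task names and the endpoint are illustrative assumptions, not a real deployment.

```python
import json

# A minimal Conductor-style workflow definition: call an LLM over HTTP,
# then hand off to a custom worker task. Names and URLs are hypothetical.
workflow = {
    "name": "summarize_and_notify",
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "call_llm",
            "taskReferenceName": "call_llm_ref",
            "type": "HTTP",
            "inputParameters": {
                "http_request": {
                    "uri": "https://api.llm-provider.example/v1/summarize",  # hypothetical
                    "method": "POST",
                }
            },
        },
        {
            "name": "notify_user",
            "taskReferenceName": "notify_user_ref",
            "type": "SIMPLE",  # executed by a custom worker you implement
        },
    ],
}

definition = json.dumps(workflow)
```

Conductor takes care of retries, failure handling, and scaling for each task, so the definition stays declarative while the platform manages execution.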
Orkes Cloud is a fully managed and hosted Conductor service that can scale seamlessly according to your needs. When you use Conductor via Orkes Cloud, your engineers don’t need to worry about setup, tuning, patching, and managing high-performance Conductor clusters. Try it out with our 14-day free trial for Orkes Cloud.