
We know that artificial intelligence is the defining technology of our time. It is also arguably in its most important period of development and extension. But despite these truths, we rarely factor in the temporal element of AI development, its tools, or the resulting functions as they manifest themselves at the user level.
In a world where batch processing happens overnight, where gigabit web and cloud connections deliver data almost instantly… and where some application functions happen instantaneously, shouldn’t we be factoring in the speed of AI by now? Data streaming platform specialist Confluent thinks we should; the company is now offering new capabilities in Confluent Cloud for Apache Flink designed to streamline and simplify the process of developing real-time AI applications.
What is Apache Flink?
As an aide-mémoire, Apache Flink is an open-source, distributed framework for stateful computations over both unbounded data streams (those that occur in streaming environments with no defined endpoint) and bounded data streams (in batch, where an endpoint is defined and known) for enterprise applications. It offers high-throughput, low-latency processing with support for event-time processing and state management.
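To make the idea of a stateful, event-time computation concrete, here is a minimal plain-Python sketch of a tumbling-window aggregation, the sort of operation Flink performs continuously and at scale. This is a conceptual illustration only, not Flink API code; the function name and data shape are assumptions for the example.

```python
from collections import defaultdict

def tumbling_window_sum(events, window_size_ms):
    """Group (timestamp_ms, value) events into fixed, non-overlapping
    time windows and sum the values in each window.

    The `windows` dict is the "state" a stream processor like Flink
    would maintain as events arrive; event time (the timestamp carried
    by the event) decides the window, not arrival order."""
    windows = defaultdict(int)  # window start time -> running sum
    for ts, value in events:
        window_start = (ts // window_size_ms) * window_size_ms
        windows[window_start] += value
    return dict(windows)

# A bounded stream: four events falling into two one-second windows.
events = [(100, 2), (900, 3), (1200, 5), (1800, 1)]
print(tumbling_window_sum(events, 1000))  # {0: 5, 1000: 6}
```

On an unbounded stream, the same logic runs forever and emits each window's result when the window closes; on a bounded stream, as here, it simply terminates when the input is exhausted, which is why Flink can unify the two modes.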
Confluent says that Flink Native Inference cuts through complex workflows by enabling teams to run any open-source AI model directly in Confluent Cloud. Flink search unifies data access across multiple vector databases, streamlining discovery and retrieval within its single interface. New built-in machine learning (ML) functions here bring AI-driven use cases, such as forecasting and anomaly detection, directly into Flink SQL, making advanced data science easier. Together these innovations redefine how businesses can harness AI for real-time customer engagement and decision-making.
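As a rough intuition for what a built-in anomaly-detection function does with streaming values, the sketch below flags readings that sit far from the mean of the series. This is a deliberately simplified stand-in written in plain Python; Confluent's actual Flink SQL ML functions are not shown here, and the function name and threshold are assumptions for illustration.

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the
    mean of the series -- a simplified illustration of the kind of
    anomaly detection exposed as built-in ML functions in Flink SQL."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    if stdev == 0:
        return []  # a flat series has no outliers
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Seven normal sensor readings and one spike.
readings = [10, 11, 9, 10, 12, 10, 11, 95]
print(detect_anomalies(readings))  # [95]
```

In a streaming setting, the same test would run over a sliding window of recent values rather than a fixed list, so the baseline adapts as the data drifts.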
From Patchwork to Platform
“Building real-time AI applications has been too complex for too long, requiring a maze of tools and deep expertise just to get started,” said Shaun Clowes, chief product officer at Confluent. “With the latest advancements in Confluent Cloud for Apache Flink, we’re breaking down those barriers and bringing AI-powered streaming intelligence within reach of any team. What once required a patchwork of technologies can now be done seamlessly within our platform, with enterprise-level security and cost efficiencies baked in.”
According to consulting firm McKinsey, 92% of companies plan to increase their AI investments over the next three years. Organizations want to seize this opportunity and capitalize on the promises of AI. But, says Clowes, the road to building real-time AI apps is complicated. One of the primary reasons for this complexity is that developers have to juggle multiple tools, languages and interfaces to incorporate ML models. At the same time, they also need to pull “valuable data context” information (in metadata form and in raw data form) from the many places where data lives. This fragmented workflow leads to costly inefficiencies, operational slowdowns and AI hallucinations that can damage reputations.
“Confluent’s data streaming platform with Flink AI Model Inference simplified our tech stack by enabling us to work directly with large language models (LLMs) and vector databases for retrieval-augmented generation (RAG) and schema intelligence, providing real-time context for smarter AI agents. As a result, our customers have achieved greater productivity and improved workflows across their enterprise operations,” said Steffen Hoellinger, co-founder and CEO at Airy.
As a serverless stream processing solution designed and built to unify real-time and batch processing, Confluent Cloud for Apache Flink is said to eliminate the complexity and operational overhead of managing separate processing solutions. With these newly released AI, ML and analytics features, Clowes says that it enables businesses to streamline more workflows and unlock greater efficiency.
Setting the Tableflow
As part of its spring update news cycle, Confluent also announced advancements in Tableflow, a technology designed to access operational data from data lakes and warehouses. With Tableflow, all streaming data in Confluent Cloud can be accessed in popular open table formats, which helps when working on advanced analytics and real-time AI.
Support for Apache Iceberg is now generally available (GA). Plus, as a result of an expanded partnership with Databricks, there is a new early access program for Delta Lake, an open-source storage layer for transactional processing with data lakes. Additionally, Tableflow now offers enhanced data storage flexibility and integrations with catalog providers, including AWS Glue Data Catalog, Snowflake’s managed service for Apache Polaris and Snowflake Open Catalog.
“At Confluent, we’re all about making your data work for you, whenever you need it and in whatever format is required,” said Clowes. “With Tableflow, we’re bringing our expertise in connecting operational data to the analytical world. Now, data scientists and data engineers have access to a single, real-time source of truth across the enterprise, making it possible to build and scale the next generation of AI-driven applications.”
Why Real-Time AI Matters
The company says that AI projects are failing because old development methods cannot keep pace with new consumer expectations. These applications are expected to know the current status of a business and its customers and take action automatically.
For example, an AI agent for inventory management should be able to identify if a particular item is trending, immediately notify manufacturers of the increased demand and provide an accurate delivery estimate for customers. This level of hands-free efficiency is simply not possible if business data is not getting to the analytics and AI systems in real time. Confluent insists that the “old ways” of batch processing lead to inaccurate results, because manually copying data over is unstable, does not scale and proliferates the silo problem. At a time when we are focused on the infrastructure underpinning AI services, perhaps even more intently than on AI functions themselves, this is all timely stuff.
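The trending-item trigger in the inventory example above can be sketched in a few lines. This is an illustrative fragment, not Confluent code; the function name, the per-minute sales inputs and the demand-multiple threshold are all assumptions made for the example.

```python
def is_trending(recent_sales, baseline_sales, factor=2.0):
    """Return True when the recent sales rate exceeds the historical
    baseline by at least `factor` -- the kind of condition an inventory
    agent might use to notify manufacturers of increased demand."""
    if not recent_sales or not baseline_sales:
        return False  # no signal without data on both sides
    recent_rate = sum(recent_sales) / len(recent_sales)
    baseline_rate = sum(baseline_sales) / len(baseline_sales)
    return recent_rate >= factor * baseline_rate

# Per-minute sales from the last few minutes vs. the historical norm.
print(is_trending(recent_sales=[8, 9, 11], baseline_sales=[3, 4, 3]))  # True
```

The point of the real-time argument is the inputs: if `recent_sales` arrives via an overnight batch job rather than a stream, the trigger fires a day late and the delivery estimates it feeds are already wrong.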