The latest update to the PostgreSQL database, version 17, adds incremental backup capabilities that eliminate the need to rely on third-party tools, along with a pg_createsubscriber tool that makes logical replication a more viable option for large databases by speeding up the initial data copy process.
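As a rough sketch of how the new native tooling fits together (paths and the connection string below are illustrative, and the pg_createsubscriber step assumes a physical standby is already in place), the workflow looks roughly like this:

    # Take a full base backup first; the manifest it produces anchors later increments.
    pg_basebackup -D /backups/full

    # Take an incremental backup that records only changes since the prior backup,
    # identified by that backup's manifest (new in PostgreSQL 17).
    pg_basebackup -D /backups/incr1 --incremental=/backups/full/backup_manifest

    # Reconstruct a restorable data directory from the chain of backups.
    pg_combinebackup -o /restore /backups/full /backups/incr1

    # Convert an existing physical standby into a logical replica, skipping
    # the slow initial table copy that logical replication normally requires.
    pg_createsubscriber -D /standby/data \
      --publisher-server "host=primary port=5432 dbname=appdb" \
      --database appdb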
PostgreSQL 17 also adds a revamped transaction subsystem capable of improving the performance of some applications by as much as 100%, along with a JSON_TABLE command and expanded SQL/JSON functions that make it simpler for developers to build applications that span relational and non-relational data.
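JSON_TABLE projects JSON documents into an ordinary result set that can be joined and filtered like any relational table. A minimal sketch, assuming a hypothetical orders table with a jsonb column named details:

    -- Flatten the items array inside each order's JSON document into rows.
    SELECT o.id, jt.sku, jt.qty
    FROM orders AS o,
         JSON_TABLE(
           o.details, '$.items[*]'
           COLUMNS (
             sku TEXT    PATH '$.sku',
             qty INTEGER PATH '$.quantity'
           )
         ) AS jt
    WHERE jt.qty > 10;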
Tom Kincaid, senior vice president for database servers and tools at EDB, said the additions will further the adoption of Postgres by, in some cases, eliminating the need for a separate document database for applications based on the JSON data format.
Originally designed to provide an open source alternative for applications running on proprietary databases, PostgreSQL in recent years has emerged as the leading open source alternative to commercial offerings, capable of supporting a much wider range of applications and use cases. In fact, a recent EDB study found one-third of enterprise IT organizations are considering PostgreSQL for their next project.
Much of that shift is being driven by the extensibility of a platform that, in addition to providing access to relational data, is capable of processing JSON and vector data used to train and extend large language models (LLMs), noted Kincaid. Earlier this year, EDB announced EDB Postgres AI, a platform for transactional, analytical and AI workloads based on PostgreSQL.
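Vector support in PostgreSQL typically comes via the pgvector extension, an assumption here since the article does not name a specific mechanism. A minimal sketch of storing embeddings and running a similarity search:

    CREATE EXTENSION IF NOT EXISTS vector;

    -- Hypothetical table of document embeddings; the dimension (3) is kept
    -- tiny for illustration, where real embeddings run to hundreds or more.
    CREATE TABLE documents (
      id        bigserial PRIMARY KEY,
      body      text,
      embedding vector(3)
    );

    INSERT INTO documents (body, embedding)
    VALUES ('example doc', '[0.1, 0.2, 0.3]');

    -- Nearest-neighbor search by Euclidean distance (the <-> operator).
    SELECT id, body
    FROM documents
    ORDER BY embedding <-> '[0.1, 0.2, 0.25]'
    LIMIT 5;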
That approach eliminates the need to move data between multiple classes of databases that IT teams would otherwise have to support in production environments, said Kincaid. In effect, PostgreSQL enables IT organizations to reduce total costs by centralizing data management, he added.
It's not clear to what degree organizations are relying on PostgreSQL to manage data, but in the age of AI many organizations are revisiting their database platform strategies. Much of the data used to train an AI model is unstructured, and centralizing the management of that data is at the core of any effort to extend the capabilities of an LLM.
There may still be use cases where, for example, performance requirements necessitate a dedicated vector database. With each successive update, however, it's apparent that the performance of PostgreSQL continues to increase at a time when the number of applications the average enterprise IT organization supports continues to rapidly expand.
Each IT organization will need to determine what database strategy makes the most sense for them. Application developers may initially build an application on one database platform only to see IT teams move it to another platform that is easier and less costly to manage. Regardless of approach, the one thing that is certain is that the volume of data that needs to be managed is only going to continue to increase exponentially in the months and years ahead.