MongoDB, Inc. today made generally available version 8.0 of its namesake document database, increasing both scalability and resiliency.

Specific improvements include a 32% increase in throughput, 56% faster bulk writes and 20% faster concurrent writes during data replication.

Additionally, sharding improvements in MongoDB 8.0 distribute data across multiple servers up to 50 times faster and at up to 50% lower starting cost, without the need for additional configuration or setup.
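For illustration, here is a minimal sketch of enabling sharding on a collection using MongoDB's Python driver (pymongo); the database name, collection name and shard key are hypothetical, and MongoDB 8.0 itself requires no extra configuration for the faster data distribution described above.

```python
# Minimal sketch: sharding a collection with pymongo.
# Assumes a connection to a sharded cluster (via mongos);
# names and the shard key are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Shard the orders collection on a hashed customer ID so documents
# spread evenly across shards as they are inserted.
client.admin.command(
    "shardCollection",
    "shop.orders",
    key={"customerId": "hashed"},
)
```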

MongoDB 8.0 also adds the ability to set a default maximum time limit for running queries, to reject recurring types of problematic queries, and to persist query settings through events such as database restarts, ensuring consistent performance for applications experiencing high demand.
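A minimal sketch of capping query runtime with pymongo follows. The per-query option is long-standing driver API; the cluster-wide defaultMaxTimeMS parameter is the 8.0-era default described above, and the exact field names here follow the server documentation but should be treated as an assumption.

```python
# Minimal sketch: limiting how long queries may run.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client.shop.orders

# Per-query limit: abort this find() if it runs longer than 200 ms.
cursor = orders.find({"status": "open"}).max_time_ms(200)

# Cluster-wide default for read operations (requires admin privileges);
# parameter shape is per the MongoDB 8.0 docs but assumed here.
client.admin.command(
    "setClusterParameter",
    {"defaultMaxTimeMS": {"readOperations": 500}},
)
```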

At the same time, MongoDB 8.0 can now handle higher volumes of time series data and complete complex data aggregations more than 200% faster.
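Time series workloads use dedicated time series collections, which bucket measurements internally. A minimal sketch with pymongo, using hypothetical collection and field names:

```python
# Minimal sketch: creating and writing to a time series collection.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.metrics

sensors = db.create_collection(
    "sensor_readings",
    timeseries={
        "timeField": "ts",         # required: timestamp of each measurement
        "metaField": "sensor",     # optional: per-series metadata
        "granularity": "minutes",  # hint for internal bucketing
    },
)

sensors.insert_one({
    "ts": datetime.now(timezone.utc),
    "sensor": {"id": "s-42", "site": "lab-1"},
    "temperature_c": 21.7,
})
```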

MongoDB has also added the ability to leverage compressed, quantized vectors and to automatically quantize full-fidelity vectors using Atlas Vector Search. Quantized vectors require significantly less memory (73% to 96% less) and are faster to retrieve. When combined with the existing Search Nodes option, they enable organizations to run queries against data stored both in MongoDB and in a separate platform optimized for storing vector data, in a way that scales independently of the MongoDB servers.
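Below is a minimal sketch of an Atlas Vector Search index definition that asks Atlas to automatically quantize full-fidelity vectors. It requires an Atlas deployment; the "quantization" field and its "scalar" value follow the Atlas documentation but should be treated as an assumption, as are the collection name and vector dimensions.

```python
# Minimal sketch: a vector search index with automatic scalar quantization.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
products = client.catalog.products

index = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",       # field holding the vectors
                "numDimensions": 1536,     # must match the embedding model
                "similarity": "cosine",
                "quantization": "scalar",  # assumed: automatic quantization
            }
        ]
    },
    name="embedding_index",
    type="vectorSearch",
)
products.create_search_index(model=index)
```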

Finally, MongoDB has also added a MongoDB Queryable Encryption capability to store fully randomized encrypted data in the MongoDB database in a way that still enables expressive queries to run. That capability reduces the risk that data might be inadvertently exposed and exfiltrated, without requiring organizations to have any specific cryptographic expertise.
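As a rough sketch of what that looks like in practice, the following uses pymongo's Queryable Encryption helpers with a local, demo-only key (production deployments would use a cloud KMS and the crypt_shared library). The collection and field names are hypothetical; the encrypted "ssn" field remains queryable with equality predicates even though the stored values are randomized ciphertext.

```python
# Minimal sketch: Queryable Encryption with pymongo.
# Requires: pip install "pymongo[encryption]"
import os

from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption
from pymongo.encryption_options import AutoEncryptionOpts

kms_providers = {"local": {"key": os.urandom(96)}}  # demo-only key material
key_vault_ns = "encryption.__keyVault"

opts = AutoEncryptionOpts(kms_providers, key_vault_ns)
client = MongoClient("mongodb://localhost:27017", auto_encryption_opts=opts)

client_encryption = ClientEncryption(
    kms_providers, key_vault_ns, client, CodecOptions()
)

# Create a collection whose "ssn" field is stored encrypted but can
# still be queried with equality predicates. keyId=None lets the helper
# create the data key automatically.
encrypted_fields = {
    "fields": [
        {
            "path": "ssn",
            "bsonType": "string",
            "keyId": None,
            "queries": [{"queryType": "equality"}],
        }
    ]
}
client_encryption.create_encrypted_collection(
    client.hr, "employees", encrypted_fields, kms_provider="local"
)
```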

Overall, MongoDB has made more than 45 changes to the underlying architecture with this release.

Sahir Azam, chief product officer for MongoDB, said collectively these capabilities, in addition to improving overall application performance and resiliency, will also reduce the total cost of processing data at scale. The goal is to make it simpler for organizations to replace legacy databases that were not designed, for example, to process massive amounts of unstructured data that will be required by modern applications infused with AI models, he noted.

MongoDB claims to now have more than 50,000 customers using its document database across multiple clouds and on-premises IT environments. It's not clear at what rate organizations are staying current on the latest releases of MongoDB, but organizations that rely on managed database services are typically further along than IT teams that manage databases themselves.

The challenge, as always with document databases, is that initial use is typically driven by an application developer who views document databases as generally more accessible than other platforms that require the expertise of a database administrator (DBA) to set up. Eventually, however, those document databases reach a level of scale that typically requires the expertise of a centralized IT operations team to manage.

In fact, as more data is used to drive AI applications, the probability that the document databases driving them will need to be centrally managed will only increase.
