Hewlett Packard Enterprise (HPE) has extended the scope of its managed GreenLake data storage offerings to include a block storage service running on the Amazon Web Services (AWS) cloud.
In addition, HPE is making available a 5.6PB edition of the HPE GreenLake for Block Storage built on HPE Alletra MP managed service. The service now includes built-in three-site replication to enhance data protection, along with support for the HPE Zerto Cyber Resilience Vault platform.
HPE GreenLake for Private Cloud Business Edition also now supports HPE Alletra MP and HPE SimpliVity Gen 11 platforms.
Finally, HPE is adding an HPE Timeless Program through which organizations that subscribe to a block storage service are provided an automatic upgrade to the next generation of HPE storage controllers. HPE claims IT teams will see, on average, a 30% reduction in total cost of ownership (TCO).
Sanjay Jagad, vice president of product management for block storage at HPE, said the GreenLake service makes it possible to automatically scale hyperconverged compute and storage resources up and down independently of each other, thanks mainly to the rise of NVMe-based platforms and advances in artificial intelligence for IT operations (AIOps).
Rather than requiring a lot of storage management expertise, the GreenLake service automatically determines which class of storage to make available, based on the requirements of the application, he added.
Those capabilities are enabled by a combination of AI models embedded within HPE InfoSight and OpsRamp, an AIOps platform HPE acquired last year. In addition to being able to observe, predict and mitigate disruptions across virtual machine, storage, network, compute and cloud infrastructure, IT teams can use these capabilities to track energy consumption and carbon emission trends via the HPE Sustainability Insight Center.
As organizations gear up to process more data than ever in the age of artificial intelligence (AI), most will be revisiting their storage strategies. Most existing legacy systems are not designed to process terabytes of data across multiple applications in near real-time. The next decision then becomes to what degree they want to manage those storage resources themselves, versus focusing on building the data pipelines AI models require. Regardless of approach, most of those models will run where data already resides, so if organizations decide to rely more on a managed service, it will need to be one capable of processing data locally.
It’s not clear just yet at what pace organizations are moving to operationalize AI, but inevitably they will be processing data at unprecedented scale. The challenge is that storage systems will need to be able to dynamically scale up and down to address those requirements. Making capacity available on demand via a cloud-like operating model will in many cases make more sense than using capital budget to acquire servers and storage systems that might sit idle for extended periods.
In the meantime, organizations should assess the level of data engineering expertise they have on tap today, on the assumption that AI is about to sorely tax an area of expertise that is already hard to find and retain.