Nutanix today at its .NEXT 2024 conference demonstrated a copilot tool that uses generative artificial intelligence (AI) to automate IT management tasks.
The company previewed an instance of the AI assistant that will soon be added to Nutanix Cloud Manager (NCM), with similar capabilities to follow across its portfolio. The Nutanix Kubernetes Platform (NKP), which Nutanix gained by acquiring the assets of D2iQ, already has a copilot capability enabled by OpenAI, but over time the company plans to employ a mix of large language models (LLMs) to automate tasks across its portfolio.
Nutanix CEO Rajiv Ramaswami said generative AI tools will make it simpler for IT teams to evolve into platform engineering teams that deliver services application developers can easily invoke. In effect, platform engineering teams need to earn the trust of those developers, he added.
The challenge is that many IT teams still need to acquire the skills and expertise required to truly manage IT as a service, said Ramaswami.
On the plus side, generative AI should make it easier for IT teams to manage application environments at scale.
In fact, Guy Currier, an industry analyst for the Futurum Group, noted that while generative AI has become supremely important, it is also rapidly becoming table stakes. Any vendor that is not leveraging generative AI is being left behind, he added.
Many developers today provision infrastructure themselves, but most would prefer to have an IT team manage it on their behalf. IT teams are embracing platform engineering as a methodology for providing developers with services they can readily invoke. Less clear is to what degree those IT services will be provided by a DevOps team that programmatically manages IT infrastructure or by IT administrators who typically rely more on graphical tools to automate tasks.
What is certain is that, with the rise of generative AI, the amount of IT infrastructure that existing, or potentially smaller, IT teams can effectively manage at scale is increasing. However, the application environments themselves are simultaneously becoming more complex as organizations build and deploy microservices-based applications in production environments alongside legacy monolithic applications.
The hope is that one day it will become easier to manage those applications on a common infrastructure, but for the foreseeable future microservices-based applications constructed using containers will be deployed on Kubernetes clusters, while virtual machines will continue to be relied on to run monolithic applications.
Of course, it might be a while before microservices-based applications represent more than 50% of the workloads deployed. Until that level of critical mass is reached, many organizations will opt to rely on dedicated teams to manage them, at least until the cost of those teams reaches a threshold that forces a centralization decision.
One way or another, IT teams will need to evolve as the pace at which applications are being built and deployed continues to accelerate. While there is a clear appetite to build more applications, it doesn't necessarily follow that there is any enthusiasm for expanding the size of the IT teams needed to build, deploy, update and maintain them.