Many of the most compelling IoT use cases rely heavily on data to drive AI and machine learning capabilities. Whether for predictive maintenance or any number of machine-centric automation processes, businesses are asking critically important questions about how best to build out a data infrastructure.
A recent interview conducted by datacenternews.asia with Hitachi Vantara’s VP of infrastructure solutions marketing, Bob Madaio, detailed how enterprises are struggling to manage the explosion of data from the IoT. Specifically, Mr. Madaio noted, “customers today are still learning where to keep their data, asking ‘how much of the data do I need to keep? Do I need to send any of it back to the data center? Can I do it all on the edge?’”
These are complicated questions with significant cost implications, and data storage has long been a focus for data center managers. Big data in general has for years pushed data centers to weigh different storage options (e.g. cloud, flash, offsite tape) in order to control data-carrying costs.
What makes these data storage questions more pressing now is summarized by Madaio: “Another implication of this machine learning and IoT data is managing – and what I mean by that is if you look at the core storage market, one of the big focus areas our customers have always said is – how can we scale the management if my data continues to grow at such a fast pace?”
“A lot of those technologies, like AI, are starting to push a lot of low-level decisions down to the software – so we’re deploying a new system, we’re adding capacity to database – we can run automation tools and use analytics where to put that data so it’s most effective.”
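The automated placement Madaio describes can be pictured as a simple tiering rule driven by access patterns. The sketch below is purely illustrative — the tier names, thresholds, and `DataSet` fields are hypothetical assumptions, not Hitachi Vantara's product logic — but it shows the shape of the decision that analytics-driven storage software makes on a data center manager's behalf.

```python
from dataclasses import dataclass

# Hypothetical summary of a dataset's access pattern; field names and
# threshold values are illustrative assumptions, not any vendor's API.
@dataclass
class DataSet:
    name: str
    days_since_access: int    # recency of last read
    accesses_per_day: float   # average read frequency

def choose_tier(ds: DataSet) -> str:
    """Toy placement rule: hot data stays on fast edge/flash storage,
    warm data moves to cloud object storage, cold data goes to archive."""
    if ds.accesses_per_day >= 10 and ds.days_since_access <= 1:
        return "edge-flash"
    if ds.days_since_access <= 30:
        return "cloud-object"
    return "archive"

# Frequently read telemetry is kept close to the machines that use it;
# stale logs are shipped off to cheap archival storage.
telemetry = DataSet("turbine-telemetry", days_since_access=0, accesses_per_day=500.0)
old_logs = DataSet("2019-audit-logs", days_since_access=90, accesses_per_day=0.01)
print(choose_tier(telemetry))  # edge-flash
print(choose_tier(old_logs))   # archive
```

In a real system this rule would be learned or tuned from observed access analytics rather than hard-coded, which is exactly the “low-level decisions pushed down to the software” that Madaio points to.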