RELEVANCE OF STORAGE INFRASTRUCTURE AND DATA PIPELINE FOR AI EMPOWERMENT

Discussions of AI infrastructure efficiency usually focus on compute hardware: GPUs, general-purpose CPUs, FPGAs, and tensor processing units (TPUs). These are the devices that train complex models and generate predictions from them. But AI also demands a great deal from data storage. Keeping a powerful compute engine well utilized means feeding it vast amounts of data as fast as possible; if storage cannot keep up, the pipeline clogs and bottlenecks form.
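To make the storage-feeding problem concrete, here is a minimal, self-contained sketch of the producer-consumer prefetching pattern that data loaders use to keep an accelerator busy. The batch counts, timings, and function names are illustrative assumptions, not drawn from any particular framework.

```python
# A background prefetcher keeps a bounded queue of batches full so the
# (simulated) accelerator never waits on I/O. All timings are illustrative.
import queue
import threading
import time

BATCHES = 8
PREFETCH_DEPTH = 4  # how many batches to stage ahead of the compute engine

def load_batch(i):
    """Stand-in for reading a training batch from storage."""
    time.sleep(0.05)  # simulated storage latency
    return f"batch-{i}"

def prefetcher(q):
    for i in range(BATCHES):
        q.put(load_batch(i))  # blocks when the queue is full (backpressure)
    q.put(None)  # sentinel: no more data

q = queue.Queue(maxsize=PREFETCH_DEPTH)
threading.Thread(target=prefetcher, args=(q,), daemon=True).start()

while (batch := q.get()) is not None:
    time.sleep(0.05)  # simulated training step on the accelerator
    print("trained on", batch)
```

Because loading and training overlap, the accelerator only stalls if storage throughput falls behind the training step time, which is exactly the bottleneck the article warns about.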

Moreover, optimizing an AI solution for capacity and cost, while scaling for growth, means taking a fresh look at its data pipeline. That pipeline is also a good measure of an organization's AI readiness.

According to IBM, well-performed AI looks simple from the outside. Behind every great AI-enabled application, however, lies a data pipeline that moves data, the fundamental building block of artificial intelligence, from ingest through several stages: classification, transformation, analytics, machine learning and deep learning model training and retraining, and finally inference, yielding increasingly accurate decisions or insights.
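As a rough illustration of the staged flow IBM describes, the sketch below chains placeholder functions for each stage. The stage names mirror the description above; the bodies are stand-ins rather than real implementations.

```python
# Hedged sketch of a staged AI data pipeline: data flows from ingest
# through classification, transformation, and training to inference.
def ingest():
    return ["raw records"]                     # e.g., pulled from object storage

def classify(records):
    return [(r, "labelled") for r in records]  # attach labels/classes

def transform(labelled):
    return [f"features({r})" for r, _ in labelled]  # feature engineering

def train(features):
    return {"model": "weights", "trained_on": len(features)}

def infer(model, new_data):
    return f"prediction from {model['model']} for {new_data}"

model = train(transform(classify(ingest())))
print(infer(model, "fresh sample"))
```

Each stage reads the previous stage's output from storage and writes its own, which is why storage performance matters at every step, not just during training.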

Moreover, as Venture Beat notes, an AI infrastructure built for today's needs will invariably have to grow into larger data volumes and more complex models. Beyond using modern devices and protocols, the right architecture helps ensure performance and capacity scale together.

It also notes that “in a traditional aggregated configuration, scaling is achieved by homogeneously adding compute servers with their own flash memory. Keeping storage close to the processors is meant to prevent bottlenecks caused by mechanical disks and older interfaces. But because the servers are limited to their own storage, they must take trips out to wherever the prepared data lives when the training dataset outgrows local capacity. As a result, it takes longer to serve trained models and start inferencing.”

Furthermore, efficient protocols such as NVMe, and its networked extension NVMe over Fabrics, make it possible to disaggregate, or separate, storage from compute while still maintaining the low latencies AI needs. At the 2019 Storage Developer Conference, Dr. Sanhita Sarkar, global director of analytics software development at Western Digital, gave several examples of disaggregated data pipelines for AI, comprising pools of GPU compute, shared pools of NVMe-based flash storage, and object storage for source data or archival, any of which could be scaled independently.
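As a hedged illustration of that disaggregated layout (not code from the talk), training nodes might stream dataset shards from a shared object-storage pool instead of local disk. The sketch below uses boto3's standard S3 calls, but the endpoint, bucket, and key prefix are hypothetical.

```python
# Disaggregated read path: training nodes stream shards from a shared
# object store rather than server-local flash, so storage and compute
# can scale independently. Endpoint and bucket names are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="http://storage-pool-gateway:9000")

def stream_shards(bucket, prefix):
    """Yield training shards from the shared pool, one object at a time."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            yield body.read()

for shard in stream_shards("training-data", "dataset-v1/"):
    pass  # hand the shard to the data loader feeding the GPU pool
```

The point of the design is that adding flash capacity or GPU servers becomes two independent decisions, instead of every compute server carrying its own fixed slice of storage.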

Viewpoints of Research Specialists

McKinsey's latest global survey observed a 25 percent year-over-year increase in the number of companies using AI in at least one process or product. Nearly 44 percent of respondents said AI has already helped reduce costs. So if you haven't yet evaluated your AI readiness in terms of infrastructure and data pipeline optimization, it's time to catch up.

“If you are a CIO and your organization doesn’t use AI, chances are high that your competitors do and this should be a concern,” said Chris Howard, a Gartner VP.

As AI investments accelerate, IDC expects spending on AI systems to reach almost US$98 billion in 2023, up from US$37.5 billion in 2019.

IDC also noted that “the largest share of technology spending in 2019 will go toward services, primarily IT services, as firms seek outside expertise to design and implement their AI projects.”

This is a clear indication that professionals who understand the intricacies of AI data pipelines are in demand.
