
With AI ramping up, it's time to rethink 'data temperature'

Jason Bloomberg, President, Intellyx

Data temperature is a metaphor for how close to the CPU your data is. Data on tape is perhaps the coldest, while data in volatile memory—the familiar random-access memory (RAM) that all computers have—is the hottest.

Most data in the enterprise is on the cold side, tucked away on hard drives or networked storage of some sort. Only a small fraction heats up when someone needs to use some information for a particular purpose.
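To make the metaphor concrete, here's a minimal sketch that ranks each tier by access latency. The figures are rough, order-of-magnitude assumptions for illustration, not benchmarks:

```python
import math

# Rough, order-of-magnitude access latencies per tier (seconds).
# These figures are illustrative assumptions, not benchmarks.
STORAGE_TIERS = {
    "ram":               100e-9,  # ~100 ns: hottest, volatile
    "persistent_memory": 350e-9,  # ~350 ns: near-RAM speed, nonvolatile
    "nvme_ssd":          100e-6,  # ~100 microseconds
    "hdd":               10e-3,   # ~10 ms
    "tape":              10.0,    # seconds (or far longer): coldest
}

def temperature(tier: str) -> float:
    """Score a tier from 0.0 (coldest) to 1.0 (hottest) by latency."""
    logs = [math.log10(v) for v in STORAGE_TIERS.values()]
    lo, hi = min(logs), max(logs)
    return 1 - (math.log10(STORAGE_TIERS[tier]) - lo) / (hi - lo)

for tier in STORAGE_TIERS:
    print(f"{tier:18} temperature = {temperature(tier):.2f}")
```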

Artificial intelligence (AI), however, is changing the enterprise data heat map, as organizations increasingly leverage once-cold information to train their machine-learning (ML) and deep-learning (DL) models.

The enterprise use of AI may very well be in its infancy, but the writing is on the wall. It won’t be long until virtually every application is AI-enabled in one way or another, thus pressuring IT infrastructure teams to support a world where all data is hot.

Here's how to keep your cool as the hardware shifts heating up right now begin to affect your storage architects' operations.

The rise of persistent memory

At its recent Data-Centric Innovation Day, Intel rolled out new technology to support this rush to AI: Optane DC persistent memory.

Persistent memory is like RAM, but it's nonvolatile: It retains information even when the system reboots or otherwise loses power. Optane chips function much like solid-state drives (SSDs), except that they plug into the motherboard slots intended for RAM. The latest generation of Optane is almost as fast as RAM and much faster than SSDs, according to Intel.
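In practice, developers typically reach persistent memory in "app direct" fashion through ordinary memory-mapped files on a DAX-enabled (pmem-aware) filesystem. Here's a minimal sketch using Python's standard mmap module; the /mnt/pmem path is an assumption, and production code would more likely use a purpose-built library such as Intel's PMDK:

```python
import mmap
import os

# Assumption: /mnt/pmem is a DAX-mounted filesystem backed by
# persistent memory. Any ordinary path also works for a dry run.
PATH = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)  # fixed-size backing file

# Map the file into the address space: reads and writes become plain
# load/store operations instead of read()/write() system calls.
with mmap.mmap(fd, SIZE) as mm:
    mm[0:13] = b"hello, pmem!\n"
    mm.flush()  # ask the OS to make the write durable
os.close(fd)
```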

Persistent memory promises to be disruptive in many ways, but the first thing Intel’s new chips disrupt is the economics of memory. “[Persistent memory chips] break through the memory economics that have held back developers,” said Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel.

The reason: Compared with persistent memory, RAM is far more expensive, and you can put more Optane memory on the latest Intel motherboards than their specified maximum RAM capacity. That gives developers greater leeway when combining Optane memory with RAM.

Systems using Optane can support “up to 36TB of system-level memory capacity, when combined with traditional DRAM” with the latest generation of Xeon processors, according to Intel. 
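A back-of-the-envelope sketch shows why that changes the economics. The per-gigabyte prices below are hypothetical placeholders, purely for illustration:

```python
# Back-of-the-envelope memory economics. Both prices are hypothetical
# placeholders for illustration; substitute real quotes before deciding.
DRAM_PER_GB = 8.00  # assumed $/GB for server DRAM
PMEM_PER_GB = 4.00  # assumed $/GB for persistent memory

def blended(dram_gb: int, pmem_gb: int) -> None:
    total_gb = dram_gb + pmem_gb
    cost = dram_gb * DRAM_PER_GB + pmem_gb * PMEM_PER_GB
    print(f"{dram_gb} GB DRAM + {pmem_gb} GB PMem = {total_gb} GB "
          f"for ${cost:,.0f} (${cost / total_gb:.2f}/GB)")

blended(768, 0)     # a DRAM-only configuration
blended(256, 1536)  # a small DRAM cache in front of a large PMem tier
```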

On enterprise-class servers, the economic value proposition of persistent memory is especially strong for memory-intensive applications. “Optane is perfect technology for certain workloads, especially in-memory databases,” explained Bart Sano, vice president of platforms at Google.


Popular in-memory databases include Aerospike and SAP HANA—and it’s no coincidence that both vendors are Intel partners.

HANA in particular takes full advantage of Intel's new chip technology, a fact not lost on Google. “In the Google Cloud Platform, Intel provides persistent memory for HANA, which reduces the HANA startup time by a factor of 12 after maintenance windows,” explained Das Kamhout, senior principal engineer in the cloud platforms group at Intel.

Aerospike’s story is similarly impressive, in large part because Aerospike’s customers are using its technology for AI. “Aerospike began with ad optimization,” said Brian Bulkowski, founder and CTO at Aerospike. “Now it’s all AI, for example, fraud detection. PayPal processes hundreds of terabytes of data every day, and they use AI for fraud detection. There’s too much data otherwise. Their petabyte clusters would be constrained by RAM if not for Intel’s new Optane technology.”
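For a sense of what these systems look like from the application side, here's a minimal read/write sketch against Aerospike's Python client. It assumes a local server on the default port 3000 with the stock "test" namespace; the set and key names are illustrative:

```python
import aerospike

# Assumption: an Aerospike server is listening locally on port 3000
# with a namespace named "test" (the default in the sample config).
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

key = ("test", "demo", "user:42")  # (namespace, set, user key)
client.put(key, {"name": "Ada", "visits": 1})

(_, _, bins) = client.get(key)  # returns (key, metadata, bins)
print(bins)                     # {'name': 'Ada', 'visits': 1}

client.close()
```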

[ Also see: 4 ways flash storage is reinventing your data center operations ]

AI and hot data

AI, then, is heating up data. “The data temperature is rising,” explained Alper Ilkbahar, vice president and general manager of data center memory and storage solutions at Intel. “For example, simply storing images in the cloud is cold, while using AI to recognize faces in images is hot.”

Persistent memory technology is putting new capabilities into the hands of developers—capabilities where graphics processing units (GPUs) fall short, particularly for inference (applying trained ML and DL models to new datasets).

“We have been using GPUs at AgentVi, but now we’re transitioning to CPUs for simpler development, especially for inference,” explained Zvika Ashani, CTO of Agent Video Intelligence (AgentVi), a video surveillance technology firm. “We were surprised that we could take workloads off of GPUs and put them on CPUs or a combination of CPU and GPU and get an immediate burst in performance.”

The original use of high-performance GPU chips was for graphics-intensive tasks such as running compute farms for rendering computer animation. The AI world soon co-opted the technology in search of maximum performance.

Now, as with its consumer and business computing chips, Intel is looking to bring GPU customers using the technology for AI back to the CPU.

There are two reasons why CPU technology is superior to GPUs, especially when it’s paired with persistent memory, Intel says. First, the CPU doesn’t have to send data to the GPU and back for processing, a step that introduces latency. Second, developers don’t have to program around how GPUs handle memory.

The challenge now is convincing partners to build AI tooling on Intel’s CPUs instead of GPUs. “We’re helping the Google TensorFlow team to recognize CPU performance as well as it recognizes GPU performance,” said Lisa Spelman, Intel vice president and general manager of Xeon products and data center marketing.
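Device placement is explicit in TensorFlow, so developers can compare the two paths directly. Here's a minimal timing sketch; the matrix size and iteration count are arbitrary illustrations:

```python
import time
import tensorflow as tf

x = tf.random.normal([2048, 2048])

def bench(device: str, iters: int = 10) -> float:
    """Time a matrix multiply pinned to the given device."""
    with tf.device(device):
        start = time.perf_counter()
        for _ in range(iters):
            y = tf.linalg.matmul(x, x)
        _ = y.numpy()  # block until the computation finishes
    return time.perf_counter() - start

print(f"CPU: {bench('/CPU:0'):.3f}s")
if tf.config.list_physical_devices("GPU"):
    print(f"GPU: {bench('/GPU:0'):.3f}s")
```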

AI at the edge

Persistent memory doesn’t only go in servers, of course. It can fit into any computer system that uses RAM—especially as the Internet of Things (IoT) drives processing to the edge.

In fact, AI’s increasing prevalence and burgeoning technology requirements are playing out at two different edges: the cloud or network edge (for example, when a content delivery network (CDN) serves information close to users’ locations), and the “edge edge”—the technology at user locations, including the user interface devices themselves.

An example of AI at this “edge edge”: the AI-driven video recognition that AgentVi offers its customers. This technology can recognize people behaving suspiciously across thousands of live video feeds, for example.

Such inference requires AI processing either in the cameras themselves or a single network hop away from them, perhaps a piece of specialized hardware in a server closet at the customer’s facility.
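Architecturally, that kind of deployment often boils down to a loop like the sketch below: run inference on every frame locally and send only the rare positive events upstream. The read_frame() and detect_suspicious() functions are hypothetical stand-ins for a real camera API and trained model, and the alert endpoint is an assumption:

```python
import json
import time
import urllib.request

ALERT_URL = "https://example.com/alerts"  # hypothetical upstream endpoint

def read_frame(camera_id: str) -> bytes:
    """Hypothetical stand-in for a real camera-capture API."""
    return b""

def detect_suspicious(frame: bytes) -> float:
    """Hypothetical stand-in for a trained model; returns a 0..1 score."""
    return 0.0

def monitor(camera_id: str, threshold: float = 0.9) -> None:
    # All inference happens at the edge; only rare alerts cross the WAN.
    while True:
        score = detect_suspicious(read_frame(camera_id))
        if score >= threshold:
            payload = json.dumps({"camera": camera_id, "score": score})
            urllib.request.urlopen(ALERT_URL, payload.encode())
        time.sleep(0.04)  # budget for roughly 25 frames per second
```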

Another example: medical equipment, or what some people are calling “the Internet of Medical Things.” “We offer AI at the edge in radiology at the MRI exam console, in near real time,” explained Stuart Schmeets, senior director of MRI R&D collaborations at Siemens Healthcare. “AI must keep pace with clinical workflows, but we also have to keep costs down.”

[ Also see: Your users want AI. Is your code ready? ]

Hot data everywhere

The rise of ubiquitous AI, the increasing temperature of data, and the significance of persistent memory are driving a trend your team should pay attention to.

Companies that focus on bringing value to their customers by tapping the new technology will be the real winners in this race to mature AI.

It’s also important to consider the impact that persistent memory will have on the storage market. From a storage perspective, Optane memory adds an entirely new class of storage, one that is much faster than the fastest SSDs.

However, persistent memory and SSDs are not interchangeable. As with most technologies, each one has its particular strengths.

Persistent memory thus gives storage architects another important tool in their tool belts for meeting the needs of customers as data heats up overall. And with the hockey-stick growth of interest in AI, it’s not a moment too soon.
