
Emerging server technologies: 6 hot trends to watch

Daniel P. Dern, Freelance Writer, Trying Technology
 

Whether the servers running your company's applications are on your developers' desks, in your data center, or in your private cloud, the technologies inside the racks are what enable—or throttle—application speed, flexibility, and cost-effectiveness. The right new components and configurations, either added as upgrades, or "forklifted" in a hardware refresh, can be key to new activities such as analytics, big data, and machine learning.

There's no shortage of named and nameless server hardware, architecture, and methodology initiatives from vendors and collaborative consortia. There's the Open Compute Project, started by Facebook; IBM's Coherent Accelerator Processor Interface (CAPI); persistent memory and in-memory computing; and hyperconverged and composable systems, to cite just a few.

Some of this new server tech is already available, although it may not yet work with all of your existing hardware and software. Some is still in the development/testing/manufacturing/vetting pipeline.

Here's a quick look at six emerging technologies that IT Ops and Dev professionals should consider adding to their server tech watchlist.

1. ARM processors designed for cloud workloads

CPU chips continue to evolve into new architectures, while components and connectors shrink to unfathomably small sizes. "Moore's Law is not dead yet," says Richard Fichera, vice president and principal analyst for infrastructure and operations at Forrester Research. "Intel did a new architecture at 22 nanometers, then shrunk it to 14 nanometers ... and while 7 nanometers may be the limit of CMOS (complementary metal–oxide–semiconductor) as we know it, that's at least four design iterations away."

While x86 processors dominate, server chips based on Advanced RISC Machine (ARM) architectures are in the works from companies including AMD, AppliedMicro, Broadcom, Cavium, and Qualcomm. ARM designs can use less power, and they're a good fit for cloud workloads such as IaaS, PaaS, machine learning, and big data—with one big caveat: Existing software may need to be recompiled, ported, or completely rewritten to run on ARM-based servers.
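
To make that caveat concrete, here is a minimal Python sketch of the kind of architecture check a deployment script might run before pulling a prebuilt binary onto an ARM or x86 server; the artifact names are hypothetical placeholders, not real packages.

```python
import platform

# Map the host CPU architecture to the matching prebuilt artifact.
# The filenames below are hypothetical placeholders for illustration only.
machine = platform.machine().lower()

if machine in ("aarch64", "arm64"):
    artifact = "myservice-1.0-linux-aarch64.tar.gz"   # ARM build
elif machine in ("x86_64", "amd64"):
    artifact = "myservice-1.0-linux-x86_64.tar.gz"    # x86 build
else:
    raise RuntimeError(f"No prebuilt binary for '{machine}'; rebuild from source")

print(f"Host reports {machine}; would deploy {artifact}")
```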

2. Xeon Phi and other specialized hardware accelerators

Another way that servers can improve their overall processing power and speed is to add and use processors outside the CPU proper.

"We will see a proliferation of specialized processing and of architectures using them," says Forrester's Fichera. "There will be more use of specialized accelerators like Intel Xeon Phi, field-programmable gate arrays (FGPAs), and graphic processing units (GPUs), as lots of tiny cores in a mesh interconnect." For example, Fichera notes, "Oracle's latest SPARC-M7 server has specialized co-processors on the chip to offload and accelerate tasks from the CPU, like compression and encryption."

FPGAs are, as the name implies, integrated circuits with reprogrammable logic. But you can choose already-written programs to reprogram the ICs just as easily as you can choose among apps on your computer or smartphone. Languages for programming FPGAs include OpenCL.
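
For a flavor of the programming model, here is a minimal sketch using the pyopencl bindings to build and run a vector-add kernel on whatever OpenCL device the runtime exposes (CPU, GPU, or an FPGA board with a vendor OpenCL runtime). Targeting a real FPGA would also require the board vendor's offline compiler and support package, so treat this as an illustration rather than a deployment recipe.

```python
import numpy as np
import pyopencl as cl

# Create a context and command queue on whatever OpenCL device is available.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is standard OpenCL C, compiled at runtime here; FPGA toolchains
# compile the same kind of source ahead of time into a bitstream.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```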

According to Michael S. Strickland, director and data center architect in Intel's Programmable Solutions Group, that flexibility is part of FPGA's appeal. (Intel recently bought FPGA vendor Altera.) "A single accelerator can be put to a variety of purposes and functions, like accelerating search, accelerating machine learning, being used as a smart network interface card, and more. A given FPGA can run concurrent functions, such as SmartNIC (network interface card) and search, at the same time, or simply be partially reconfigured to do different things at different times of the day, such as accelerating search until 5 PM, then partially reconfiguring for machine learning until 8 AM."

"Using GPUs can give you a 25x speed-up on specific calculations, like matrix multiplication," says Peter Christy, research director for networks at 451 Research. "That's helpful for anyone doing machine learning for applications like voice, image, and text recognition."

Servers that incorporate NVIDIA's Tesla P100 GPU accelerators for extra processing oomph have been announced by companies including Dell, HPE, and IBM, and some are already available.

Many of the Intel Xeon Phi processor models are already available, with more scheduled for later in 2016.

3. 3D XPoint, persistent memory, and in-memory computing

Memory evolution includes everything from denser modules to in-memory computing, where programs and even humongous databases can be kept in RAM indefinitely.

RAM modules keep getting denser, of course. "We are currently building 64GB DRAM modules, and we are seeing prototypes with 128GB on a single module," says Mike Mohney, senior technology manager in the DRAM group at Kingston Technology. "Next-gen platforms will support up to 256GB." Modules of 64GB are available now, with 128GB modules shipping sometime next year.

And DRAM is no longer the only memory game in town.

"3D XPoint (pronounced "cross-point") from Intel and Micron is cheaper than DRAM and provides persistent storage as well," according to 451 Research's Christy. According to Intel, the technology is up to 1,000 times faster and 10 times denser than conventional memory.

According to WCCF Tech, "The roadmap of Optane-based SSDs and memory has been aligned with the overarching architecture that will be in play at the given time. This means that we are going to see Optane for the first time sometime near the end of 2016 (or early 2017 if Intel faces any delays)."

However, cautions Kingston's Mohney, "Having more memory isn't intrinsically better. You have to understand how architectures and configurations interact. For example, if you don't buy the right Xeon processor, the memory may not run as fast as it can. And on some systems, like the Intel Xeon E5 family, the more DIMMs you plug in, the slower it all runs—and latency is very important to some high-performance apps."

Nonvolatile RAM (persistent memory)

Using system memory as primary storage can yield significant performance improvements for many applications. But there's an obvious danger in using volatile memory as primary storage, as anyone who has lost unsaved work to a power failure can attest.

One enabling solution is "buddy-systeming" each volatile RAM module with a nonvolatile one (NVRAM), along with a little battery (or ultracapacitor), plus circuitry to spring into action when power hiccups, preserving RAM contents long enough to save them to the NVRAM. Companies offering NVDIMM modules include AgigA Tech, HP, Micron, and Viking Technology.

New technologies may offer alternative nonvolatile options, such as Intel's Optane, based on its 3D XPoint memory technology.

In-memory computing

With large enough memory, it becomes possible to keep entire applications and databases in RAM: You can use memory instead of magnetic disks as primary storage, rather than just for caching. This can deliver big-time performance improvements by eliminating drive latency and the delays of I/O protocols such as serial advanced technology attachment (SATA) or serial-attached SCSI (SAS).
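
As a toy illustration of the principle, SQLite can keep an entire database in RAM; production in-memory platforms such as SAP HANA apply the same idea at far larger scale and with persistence guarantees.

```python
import sqlite3

# ":memory:" tells SQLite to keep the entire database in RAM;
# there is no disk I/O and no SATA/SAS controller in the data path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp", 21.5), ("temp", 22.1), ("fan", 4800.0)],
)
print(conn.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor"
).fetchall())
```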

"For SAP, for example, being able to run in DRAM, and run analytics concurrently with the database makes a big difference because of the unique design of the SAP application server," says 451 Research's Christy.

And persistent memory makes this viable for critical applications and associated data.

"Using NVDIMMs as a persistent caching tier to SSDs or logging devices with OS (Linux, Windows) DirectAccess ... we've seen up to a 2x performance improvement on Microsoft SQL Server 2012 over using SSDs alone; up to 63% greater logging performance in Microsoft Exchange; and up to 3x faster performance with Microsoft SQL Server 2016," says Bret Gibbs, persistent memory product manager at HPE. 

4. NVM Express for faster storage

Even with persistent primary memory, there's still going to be a need for storage devices. With solid-state drives continuing to grow in capacity and drop in price, the performance bottleneck is the SATA/SAS interfaces that were designed to accommodate spinning drives' constraints. One solution: Non-Volatile Memory Express (NVMe), aka NVM Express.

Compared to Peripheral Component Interconnect Express (PCIe)-attached solid-state drives, "NVMe will provide much lower latency, and many times the bandwidth and IOPS, which is what database apps want," says Forrester Research's Fichera. "The NVMe fabric specification will let you do a direct read/write across the fabric, from any CPU to any NVMe device on this fabric. This means that racks of servers will have homogeneous big flash storage spaces, which in turn means that all kinds of storage will get a lot faster."
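
If you want to see which NVMe devices a Linux server already exposes, the kernel's sysfs tree is enough; the short Python sketch below assumes the /sys/class/nvme layout used by recent Linux kernels.

```python
from pathlib import Path

# Each NVMe controller the kernel has enumerated appears under /sys/class/nvme.
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    # Namespaces (the actual block devices) show up as nvme0n1, nvme0n2, ...
    namespaces = [ns.name for ns in ctrl.glob(f"{ctrl.name}n*")]
    print(f"{ctrl.name}: {model} namespaces={namespaces}")
```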

But, cautions Cameron Crandall, senior technology manager and part of the SSD team at Kingston Technology, "While you can retrofit NVMe cards to older servers, those won't scale. You'll need new racks with more NVMe sockets so you can do hot-swapping."

NVMe storage arrays are available from companies including Mangstor and ZStor.

5. Gallium Nitride ICs: Increasing server power efficiencies

Reducing waste power, cooling, and space aren't just data-center-size concerns; they're also battles fought inside the confines of each rack. And, sometimes, even one small change can make a big difference.

For example, power coming into a rack at 48 volts needs to be stepped down to 1V. The traditional power supply circuitry needs to do this in two stages: from 48V to 12V, and then from 12V to 1V. That's typically only about 79% efficient. The two-stage power supply eats up a bunch of precious space, plus you need empty space for cooling to remove waste heat from the conversion process.

Enter Gallium Nitride (GaN) integrated circuits (ICs). According to Alex Lidow, CEO of Efficient Power Conversion (EPC), his company partnered with Texas Instruments (TI) to create a GaN-based power supply design. Using the design, he says, "Transistors using GaN instead of silicon can go directly from 48V to 1V, in one step. And using GaN on a server motherboard can potentially reduce power losses from power conversion within the rack by 50%. We are seeing efficiencies of 91% in power conversion, and overall power savings of about 15% of the total power consumption of the server farm, going from 66% system efficiency to 76%."
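
A back-of-the-envelope calculation shows what those efficiency figures mean at rack scale; the 20kW load in the Python snippet below is an arbitrary illustrative number, not one from EPC.

```python
# Compare conversion losses for a two-stage (48V -> 12V -> 1V) supply at roughly
# 79% efficiency with a single-stage GaN design at roughly 91%, per the figures above.
rack_load_w = 20_000   # illustrative IT load delivered to the chips, in watts

def conversion_loss(load_w: float, efficiency: float) -> float:
    """Watts burned in the power-conversion stages for a given delivered load."""
    return load_w / efficiency - load_w

legacy_loss = conversion_loss(rack_load_w, 0.79)
gan_loss = conversion_loss(rack_load_w, 0.91)

print(f"Two-stage silicon: {legacy_loss:,.0f} W lost as heat")
print(f"Single-stage GaN:  {gan_loss:,.0f} W lost as heat")
print(f"Savings per rack:  {legacy_loss - gan_loss:,.0f} W")
```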

However, GaN-based power ICs aren't something you can retrofit to existing gear: You'll need new motherboards. These power supplies will also need new controllers and drivers, which companies such as TI have been busy creating, says Steve Tom, director of High Voltage Technology at TI. And they'll need GaN-savvy engineers. "We've written textbooks used in over 100 universities, we have graduate programs in over 100 universities, we're starting to see PhDs with experience in GaN," says EPC's Lidow.

Companies that build their own servers from scratch (e.g., Google Open Architecture, Facebook, and Amazon) are early GaN adopters, since in-house designs make it easy to move to GaN-based power supplies. Expect the technology to show up in mainstream vendors' gear over the next three to five years, using EPC's or other companies' GaN processes.

6. Converged and composable infrastructure

With servers, it's not just the components that matter, but how they go together in each rack unit (1U of rack space is 1.75 inches high) and chassis, and, increasingly, how equipment installed across multiple racks and rows works together. Here are a few examples:

Converged infrastructure

Think of converged hardware as U- and rack-level stock-keeping units. Just as desktop and notebook PCs are purpose-configured for office productivity, gaming, CAD, and so on, converged infrastructure boxes for data center use are task-oriented configurations of servers, storage, and networking used for such applications as scalable transactions and virtual desktops. While this brings back the purchasing guesswork of "how much of what will I need?", it avoids wrangling individual hardware components onto the floor, reducing operational time and expense.

Companies offering converged infrastructure products include Cisco, Dell, HPE, and IBM.

Composable infrastructure

Think of composable systems, the next step beyond today's software-implemented hyperconverged infrastructure, as pooled hardware configured on the fly. Composable systems flexibly integrate computing, storage, and network resources for each task. The secret sauce making this possible in the longer term: replacing traditional copper wiring with optical connections that run within and between racks of components.

"Optical interconnects let you dynamically assemble server systems from CPU, storage, etc. anywhere in the data center," says 451 Research's Christy. Multirack photonic connections won't hit the enterprise for three to five years, he says, although rack-scale parts will be available sooner, particularly for systems based on the Open Compute Project specs.

Companies offering composable infrastructure include Cisco, HPE, and Intel.

Game-changers

All six of these emerging server technologies could be game-changers and therefore bear watching. Some could speed your IT operations (or let IT keep pace with processing and data growth). Some may eventually let you execute new, IT-based business procedures. Not all will be a match for your IT needs, and it's possible that some won't end up making it to market at all. But keep your eye on them: Some of these could be available within the timeframe of the typical two-to-four-year server refresh cycle.

These are just a few of the more promising server technologies, products, and methodologies in the development-to-market pipeline. But there are others. What are you tracking? 

Image credit: Flickr
