This article is part of the Technology Insight series, made possible with funding from Intel.
By now, you’ve seen the word “Optane” bandied about on VentureBeat (such as here and here) and probably countless other places — and for good reason. Intel, the maker of all Optane products, continues to commercialize its decade-long R&D investment in this new memory/storage hybrid.
But what exactly is Optane, and what is it good for? (Analytics and AI, to name two major use cases.) If you’re not feeling up to speed, don’t worry. We’ll have you covered on all the basics in the next few minutes.
The bottom line
- Optane is a new Intel technology that redefines the traditional lines between DRAM memory and NAND flash storage.
- Optane DC solid state drives provide super-fast data caching and agile system expansion; current capacities span from 375GB to 1.5TB.
- Optane DC persistent memory offers capacities up to 512GB per module; it is configurable for persistent or volatile operation and is ideal for applications that emphasize high capacity and low latency over raw throughput.
- Optane DC memory is a strong contender for data centers, with client adoption further out. Capacities run from 16GB to 64GB. Costs and advantages are case-specific, impacted by DRAM prices, and early user experience is still emerging.
Now, let’s dive into some more detail.
Media vs. memory vs. storage
First, understand that Intel Optane is neither DRAM nor NAND flash. It's a family of technologies based on what Intel calls 3D XPoint media, which was co-developed with Micron. (Words like media, memory, and storage blur together here, so we'll prefer "media": the physical chips in which data is stored. "Memory" and "storage," by contrast, refer to the devices holding data near the CPU or out across a SATA or PCI Express bus, respectively.) 3D XPoint works like NAND in that it's non-volatile, meaning data doesn't disappear if the system or components lose power.
However, 3D XPoint has significantly lower latency than NAND. That lets it perform much more like DRAM in some situations, especially under workloads dominated by high volumes of small transfers, such as online transaction processing (OLTP). As an example, consider the latency specs of the 3D NAND-based Intel SSD DC P4618 (80 µs read, 20 µs write) compared to the Optane-based SSD DC P4800X Series (10 µs read and write).
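To see why latency matters so much for small transfers, consider a back-of-the-envelope sketch: at queue depth 1, each I/O must complete before the next begins, so latency alone caps the operation rate. (Real drives run deeper queues and hit far higher IOPS; this is illustrative math only, using the spec-sheet read latencies cited above.)

```python
# At queue depth 1, each I/O finishes before the next starts,
# so latency directly bounds operations per second.
def qd1_iops(latency_us: float) -> float:
    """Upper bound on serialized ops/sec at the given latency."""
    return 1_000_000 / latency_us

p4618_read_us = 80    # 3D NAND-based Intel SSD DC P4618 (spec above)
p4800x_read_us = 10   # Optane-based SSD DC P4800X (spec above)

print(f"P4618:  ~{qd1_iops(p4618_read_us):,.0f} reads/sec at QD1")
print(f"P4800X: ~{qd1_iops(p4800x_read_us):,.0f} reads/sec at QD1")
```

The 8x latency gap translates directly into an 8x gap in serialized small-read throughput, which is exactly the pattern OLTP-style workloads generate.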
In addition, 3D XPoint features far higher endurance than NAND, which makes it very attractive in data center applications involving massive amounts of data writing. Taking those same two SSDs’ specifications, we see the P4618 at 53.35 petabytes written (PBW) over the drive’s lifetime compared to the Optane DC P4800X at 164 PBW.
When combined with Intel firmware and drivers, 3D XPoint gets branded as simply “Optane.”
So, is Optane memory or storage? The answer depends on where you put it in a system and how it gets configured.

Above: Intel often depicts the memory/storage continuum as a pyramid, with a small amount of fast, costly (per gigabyte) DRAM on top and a lot of slower, less costly storage on the bottom. Optane implementations slot between these two.
Optane memory
Consider Intel Optane Memory, the first product delivered to market with 3D XPoint media. Available in 16GB or 32GB models, Optane memory products are essentially tiny PCIe NVMe SSDs built on the M.2 form factor. They serve as a fast cache for storage: frequently loaded files get stashed on Optane memory, alleviating the need to fetch them from NAND SSDs or hard drives, which entails much higher latency. Optane memory is targeted at PCs, but therein lies the rub. Most PCs don't pull that much file traffic from storage and don't need that sort of caching performance. And because, unlike NAND, 3D XPoint doesn't require an erase cycle when writing to media, Optane is strong on write performance. Still, most client applications don't have that much high-volume, small-size writing to do.

Above: Little larger than a stick of gum, Intel Optane Memory will appeal to users who benefit from frequent file caching. The higher the number of small files to cache, the more benefit can be expected.
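The caching behavior described above can be sketched in a few lines. This is a toy model, not Intel's actual driver logic: reads check a small fast tier first, misses fall back to slow storage and promote the block, and the least recently used entry is evicted when the tier fills.

```python
# Toy sketch of storage-tier caching (illustrative, not Intel's driver).
from collections import OrderedDict

class FastTierCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tier = OrderedDict()          # block id -> data, in LRU order

    def read(self, block: int, slow_read):
        if block in self.tier:             # cache hit: low-latency path
            self.tier.move_to_end(block)
            return self.tier[block], True
        data = slow_read(block)            # cache miss: go to NAND/HDD
        self.tier[block] = data            # promote into the fast tier
        if len(self.tier) > self.capacity:
            self.tier.popitem(last=False)  # evict least recently used
        return data, False

cache = FastTierCache(capacity=2)
backing = lambda b: f"block-{b}".encode()  # stand-in for slow storage
cache.read(1, backing)                     # first access: miss
_, hit = cache.read(1, backing)            # repeat access: hit
print(hit)  # True
```

The benefit scales with how often the same blocks are re-read, which is exactly why light desktop workloads, with little repeated small-file traffic, see less gain.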
Optane SSDs: Client and data center
Next came Intel Optane SSDs and Data Center (DC) SSDs. Today, the Intel Optane SSD 8 Series ships in 58GB to 118GB capacities, also using the M.2 form factor, while the 9 Series spans 480GB to 1.5TB across the M.2, U.2, and Add-In Card (AIC) form factors. Again, Intel bills these as client SSDs, and they certainly have good roles to play under certain conditions, especially when low latency is key to application performance. But NAND SSDs remain the go-to for clients across most desktop-class, low-demand applications, especially when price and throughput (as opposed to latency) are being balanced.

Above: Low-latency Optane SSDs come in several form factors. This helps deploying organizations find more opportunities to accelerate storage media accesses in a range of system types.
Things change once we step into the data center. The SKUs don’t look that different from their client counterparts — capacities from 100GB to 1.5TB across U.2, M.2, and half-height, half-length (HHHL) AIC form factors — except in two regards: price and endurance. Yes, the Intel Optane SSD DC P4800X (750GB) costs roughly double the Intel Optane SSD 905P (960GB). But look at its endurance advantage: 41 petabytes written (PBW) versus 17.52 PBW. In other words, on average, you can exhaust more than two consumer Optane storage drives — and pay for IT to replace them — in the time it takes to wear out one DC Optane drive. And, as noted earlier in our P4618 vs. P4800X discussion, Optane-based drives will deliver 2x to 8x faster access responsiveness, making them very beneficial in rapid transaction environments.
Optane DC Persistent Memory
Lastly, Intel Optane DC Persistent Memory modules (let's say DCPMM for lack of an official acronym) place 3D XPoint media on DDR4 form factor memory sticks. (Note: There's no DDR4 media on the module, but DCPMMs do insert into the DDR4 DIMM sockets on compatible server motherboards.) Again, Optane media is slower than most DDR4, but not much slower in many cases. Why use it, then? Because Optane DCPMMs come in capacities up to 512GB, much higher than DDR4 modules, which as of this writing top out at 128GB each. Thus, if you have applications and workloads that prioritize capacity over speed, a common situation for in-memory databases and servers with high virtual machine density, Optane DCPMMs may be a strong fit.

Above: Optane DC persistent memory modules (DCPMM) currently look like DDR4 modules, but they do not integrate any DDR4 media, a fact that leads to confusion with some users. Note that systems with Optane DCPMM still require some DDR4 to operate.
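When configured for persistent operation, DCPMM is typically exposed to software as memory-mapped files, so applications read and write it with ordinary loads and stores rather than storage I/O calls. The sketch below shows that access pattern; it uses an ordinary temp file as a stand-in so it runs anywhere, whereas on a real DAX-mounted persistent-memory filesystem (a hypothetical `/mnt/pmem` path, for example) the same `mmap` would map application memory straight onto the Optane media.

```python
# Load/store access pattern that persistent memory enables. A regular
# temp file stands in here for a file on a DAX-mounted pmem filesystem.
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem-demo")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)               # size the mapped region

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"                 # a plain store, no write() syscall
        m.flush()                         # roughly analogous to flushing
                                          # CPU caches out to the media

with open(path, "rb") as f:
    print(f.read(5))                      # data persisted: b'hello'
```

In volatile (memory) mode, by contrast, the OS simply sees a much larger main-memory pool and no application changes are needed.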
The value proposition for DCPMM was stronger in early 2018 and early 2019, when DRAM prices were higher. That allowed DCPMMs to win resoundingly on capacity and per-gigabyte price. As DRAM prices have plummeted, though, the two have grown much closer on a per-gigabyte basis, and pricing continues to fluctuate unpredictably, which is why you now hear Intel talking more about the capacity benefits in application-specific settings. As Optane gradually proves itself in enterprises, expect to see Intel lower DCPMM prices to push the technology into the mainstream.
As for total performance, DCPMM use case stories and trials are just emerging from the first wave of enterprise adopters. Performance results that paint a clear picture across numerous applications, workloads, and platform generations are arriving in bits and pieces, which makes stitching them together difficult for outside observers at this early stage. This is partly because server configurations, which often employ virtualization and cross-system load sharing, can be very tricky to typify, and partly because the technology is so new that it hasn't been widely tested. For now, the theory is that large DCPMM pools, while slower than DRAM-only pools, will reduce the need for disk I/O swaps, accelerating total performance by more than enough to offset the somewhat slower media.
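That theory is just a weighted-average argument, and a toy model makes it concrete. All latencies and hit rates below are illustrative placeholders, not measured figures: the point is that a modest in-memory-hit-rate gain from a bigger (if slower) pool can outweigh the media's latency penalty, because storage is orders of magnitude slower than either.

```python
# Toy model of the trade described above: DCPMM is slower than DRAM,
# but a larger pool raises the in-memory hit rate and avoids far
# slower storage I/O. All numbers are illustrative placeholders.
def avg_access_ns(hit_rate: float, mem_ns: float, storage_ns: float) -> float:
    """Expected access time for a two-level memory/storage hierarchy."""
    return hit_rate * mem_ns + (1 - hit_rate) * storage_ns

dram_only = avg_access_ns(0.90, mem_ns=100, storage_ns=100_000)
big_dcpmm = avg_access_ns(0.99, mem_ns=350, storage_ns=100_000)

print(f"Smaller DRAM-only pool: ~{dram_only:,.0f} ns average access")
print(f"Larger DCPMM pool:      ~{big_dcpmm:,.0f} ns average access")
```

In this sketch the slower-media configuration still comes out well ahead on average access time, purely because it swaps to storage a tenth as often.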
Net takeaway: Optane DCPMM should be a net performance gain for massively memory-hungry applications.
In Part 2, we’ll detail Optane’s various use modes, which include data persistence, and discuss the workloads able to make the most effective use of them.
This article was updated on Dec. 3, 2019 with new technical specifications.