Date: Tuesday, March 24, 2015
In many ways, PC hardware is like cars- the state of even the most basic offering is so good, relative to the way things used to be, that it’s almost become boring. Today’s $15,000 econobox is vastly more reliable, efficient, safe, luxurious, and fast than the sports cars of a few decades ago. But have we lost something of the character of the "man and machine" connection as we’ve moved from chrome fins and finicky carburetors to ubiquitous Bangle butts and direct injection? Has commoditization-by-Camry sucked the fun out of cars?
There’s plenty of fuel for both sides of the argument in the automotive world, but in enthusiast computing, the upward-sloping progress of Moore’s Law and the tick-tock of Intel Allan Poe’s telltale Core set some very different expectations. What would the world be like if the horsepower of mainstream cars doubled every 18 months? It would likely be an inconceivably cooler place to live- assuming that tire technology also kept pace. In computing, it largely hasn’t- our daily drivers have no shortage of CPU or GPU power, but where the rubber meets the road, only a small portion of us have the right tires. I’m talking about the I/O bottleneck, and why, despite things being good enough to be boring, they are about to get a lot more interesting.
Only about a third of new PCs sold today have SSDs rather than hard disks, despite the tremendous performance advantage offered by SSDs. There are a few interrelated reasons for this: SSDs are more expensive, many people don’t truly know the performance differences, and most people don’t want to pay more to get less storage capacity. And so manufacturers keep churning out archaic, spinny disks, and unsuspecting users keep buying them. Of all of the developments in the PC space over the last half-decade or so, the SSD is the one that non-enthusiast users will actually notice. Suddenly, PC systems boot faster, programs load faster, and computing feels faster, because it is faster. But rather than tilting at the many windmills represented by the two-thirds of the market that is blissfully ignorant, let’s talk about why it’s a sweet, sweet time to be an enthusiast. 2015 is going to be a great year for solid state storage.
When Serial ATA was first released in 2003 with a 1.5Gb/s transfer rate, and then bumped to 3.0Gb/s in 2004, there was no consumer drive that could saturate those interfaces. For my part, I bitterly held onto my PATA/133 gear for a while, as there wasn’t much of a point in upgrading- performance HDDs still spun at 7200rpm. We did, however, have a few faster oddities like the WD Raptor and VelociRaptor series finding their way into the mix. Fast forward to today, when relatively inexpensive SSDs can saturate 6Gb/s SATA interfaces, and it’s clear that the world of storage has changed quite a lot in 12 years. New solutions are needed.
For several years, enterprise users with performance-critical applications have thrown vast amounts of money at vendors of high-grade PCIe SSDs. I’ve seen the difference that came from migrating some complicated databases from 15k SAS RAID arrays to OCZ Z-Drives and LSI WarpDrives, and it became clear that the enterprise pricing of these products (and their counterparts from Fusion-io and other leaders) was totally justified. It was like watching a very expensive preview of the future.
Inevitably, the future trickles down to the masses. Apple started pushing PCIe-based storage, as did makers of premium Ultrabooks. The PCIe solutions that showed up for consumer PCs weren’t quite mature, though, and typically relied on a SATA bridge anyway, which brought latency and overhead with it.
2015 is the year that PCIe storage becomes a top choice for enthusiasts. It’s no secret that every major SSD manufacturer has transformative products in the works. However, with great power seems to come great confusion, as there’s a veritable alphabet soup of interfaces, form factors, and standards to consider, some of which may mean that you’ll need to buy new gear if you want the fastest storage. Mercifully, prices of the drives are coming down, and that trend will only be helped by competition in the space.
On the external and portable storage side, Thunderbolt and the recently-available USB 3.1 provide more attractive options than ever. Thunderbolt has only been trickling into the PC market despite a huge installed base in Mac hardware, while USB 3.1 is sure to see widespread adoption because… it’s USB.
NVMe

The AHCI interface, which supplanted legacy IDE operation long ago, wasn’t designed with SSDs in mind, and is highly inefficient when communicating with low-latency PCIe SSDs. This leads to wasted CPU cycles, poor performance under parallel I/O, and a general inability to get the most out of a PCIe SSD. The NVMe standard was created to solve these problems.
NVM Express (NVMe), formally the Non-Volatile Memory Host Controller Interface specification, has been a hot buzzword in storage for some time. Without getting too technical- it is a standard designed specifically for SSDs communicating over PCIe, and it enables these drives to reach peak performance with lower CPU utilization. Coupled with the higher throughput and lower overhead of the PCIe 3.x bus (vs. PCIe 2.0), this makes for a highly optimized solution.
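To put some numbers on "designed for parallel I/O": per the published specs, AHCI exposes a single command queue holding at most 32 outstanding commands, while NVMe allows up to 64K I/O queues, each up to 64K commands deep. A quick back-of-the-envelope comparison:

```python
# Command-queue capacity: AHCI vs. NVMe (figures from the respective specs).

AHCI_QUEUES = 1            # AHCI exposes a single command queue...
AHCI_QUEUE_DEPTH = 32      # ...holding at most 32 outstanding commands
NVME_MAX_QUEUES = 65_535   # NVMe allows up to 64K I/O queues...
NVME_QUEUE_DEPTH = 65_536  # ...each up to 64K commands deep

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI: {ahci_outstanding} commands in flight")
print(f"NVMe: {nvme_outstanding:,} commands in flight")  # over 4 billion
```

That headroom (plus per-core queues and a leaner register interface) is where the lower CPU utilization and better parallel I/O come from.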
NVMe SSDs will be offered as M.2 cards, SATA Express drives, and half- and full-height PCIe cards. If your motherboard’s BIOS supports NVMe, you should be able to boot from an NVMe drive. That support has been very limited so far, with consumer NVMe drives just beginning to hit the market, so check with your motherboard manufacturer for a BIOS update. Windows 8.1 has built-in support for NVMe- earlier versions will require a driver.
M.2

Without the need for platters, an SSD is just a printed circuit board, which can theoretically be made into all sorts of interesting, flat shapes. In the age of Ultrabooks, NUCs, and Mini-ITX, the M.2 (formerly NGFF) form factor is a common and compact format for SSDs. M.2 drives are available as double- or single-sided cards, and replace the older mSATA standard while offering a large degree of form factor and interface diversity.
Current M.2 cards are 22mm wide, and available in lengths from 30mm to 110mm. Most of the SSDs you’ll see for desktop use are the 2260 or 2280 size (60mm and 80mm long, respectively). The M.2 connector itself can carry a few different buses, including PCIe, USB 2.0/3.0, SATA, I2C, and audio, depending on how it is keyed. Key types A and E (each PCIe x2) are generally used for Wi-Fi and other connectivity cards; for SSDs, you’ll see B (PCIe x2 + SATA) or M (PCIe x4 + SATA), with M being the choice for high-performance drives.
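Those four- and five-digit size codes are simple to decode: the first two digits are the card width in millimeters, and the remainder is the length. A trivial sketch (`decode_m2_size` is a hypothetical helper for illustration, not part of any standard tooling):

```python
# Decode an M.2 size designation, e.g. "2280" -> 22mm wide, 80mm long.
# decode_m2_size is a hypothetical helper, shown only to illustrate the scheme.

def decode_m2_size(code: str) -> tuple[int, int]:
    width = int(code[:2])   # first two digits: width in mm (currently always 22)
    length = int(code[2:])  # remaining digits: length in mm
    return width, length

print(decode_m2_size("2280"))   # (22, 80)
print(decode_m2_size("22110"))  # (22, 110)
```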
At present, motherboards with more than one M.2 slot are extremely rare, but there are inexpensive (~$20) adapters available to use M.2 drives in a PCIe slot without a performance penalty. NVMe support will depend on the motherboard’s BIOS and the drive itself.
SATA & SATA Express
The SATA interface that you’ve come to know and love isn’t quite going away, but it is growing up. SATA Express ("SATAe") adds PCIe connectivity to the SATA standard- it’s a new connector, but backwards compatible. Two-lane PCIe 2.0 SATAe links have a theoretical max bandwidth of 10Gb/s, and two-lane PCIe 3.0 SATAe offers 16Gb/s. Both greatly exceed SATA 3.0’s 6.0Gb/s transfer rate, while also supporting NVMe if both the drive and the BIOS allow it. SATA Express will be seen primarily on 2.5" SSDs, but could make its way to 1.8" devices as well. Motherboard support for SATAe is still relatively uncommon, and consumer-oriented drives using the standard barely exist, but both will change going forward.
PCIe Add-In Cards
Most of the current enthusiast PCIe SSDs are half-length cards (half or full-height), using up to eight PCIe 2.0 lanes. As manufacturers roll out consumer and enthusiast NVMe cards, the inevitable shift to the much faster PCIe 3.x will take place. Considering both the overhead of the interface and the transfer rate, PCIe 3.x is about twice as fast as PCIe 2.0 per lane, so a PCIe 2.0 x8 card will have approximately as much throughput as a PCIe 3.x x4 card.
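The "about twice as fast per lane" claim checks out once you account for line encoding: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding (20% overhead), while PCIe 3.x signals at 8 GT/s with far leaner 128b/130b encoding. A quick sketch of the math:

```python
# Effective PCIe link bandwidth after line-encoding overhead.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 payload bits per 10 on the wire).
# PCIe 3.x: 8 GT/s per lane, 128b/130b encoding (128 payload bits per 130).

def effective_gbps(gt_per_s: float, payload_bits: int, coded_bits: int, lanes: int) -> float:
    """Usable bandwidth in Gb/s for a link of the given width."""
    return gt_per_s * payload_bits / coded_bits * lanes

pcie2_x8 = effective_gbps(5.0, 8, 10, lanes=8)     # 32.0 Gb/s usable
pcie3_x4 = effective_gbps(8.0, 128, 130, lanes=4)  # ~31.5 Gb/s usable

print(f"PCIe 2.0 x8: {pcie2_x8:.1f} Gb/s")
print(f"PCIe 3.x x4: {pcie3_x4:.1f} Gb/s")
```

The two links land within a couple of percent of each other, which is why an x4 PCIe 3.x NVMe card can replace an x8 PCIe 2.0 design.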
As mentioned earlier, most of the PCIe SSDs that are currently available have a SATA chip (or several, in RAID) onboard, and are not actually native PCIe solutions. By definition, any of the upcoming NVMe solutions will be native PCIe devices.
Booting to PCIe SSDs used to be hit or miss, but it’s mostly sorted out on current motherboards, such as those using the Intel 9-series chipsets.
USB 3.1 & Thunderbolt

I’m putting our two external standards together because you can’t really talk about one without the other. USB 3.1 is a great leap forward in many ways, offering 10Gb/s transfer speeds (matching first-generation Thunderbolt), reversible Type-C cables, and up to 100W of power supplied through the port. Most importantly, USB 3.1 will be widely available and inexpensive, while Thunderbolt devices remain a niche product that’s almost nonexistent on PCs. Since the USB 3.1 announcement, major manufacturers have revealed a large number of drives and enclosures, and we’ll be taking a look at how it stacks up against Thunderbolt in the near future.
If you’ve got an I/O-bound workload, your life is probably about to get a good bit easier, and without having to spend thousands of dollars on enterprise-grade PCIe SSDs. The arrival of NVMe is game-changing, and it will be interesting to see how BIOS support evolves over the coming months. Meanwhile, as the top-tier drives move to some flavor of PCIe, we may see further downward price pressure in the SATA SSD space.
If you’re already happy with your SSD, then there probably won’t be a compelling need to move to a PCIe SSD. Still, you could take advantage of the changing market to snap up an inexpensive SATA SSD for a backup machine or a family member’s PC. Odds are they’ll notice that their computer is suddenly faster, and we’ll all have one fewer windmill at which to tilt.