Date: Monday, May 28, 2018
What is AMD’s StoreMI (Store Machine Intelligence) technology? This is what AMD says it is in a nutshell:
SSDs are fast, but expensive, and offer minimal capacity. Mechanical hard drives boast large capacity for a low price, but are much slower than an SSD. AMD StoreMI technology "combines" these two types of storage into a single drive and automatically moves the data you access the most to the SSD, so you get the best of both worlds: SSD responsiveness, and mechanical hard disk capacity with its low price.
This isn’t exactly a new concept, but AMD’s approach to the implementation is different from what we’ve seen in the past. We’ll talk about what this feature is, how it’s used, and what you can expect from it. AMD claims that it isn’t merely SSD caching, which is what Intel’s Smart Response Technology and Marvell’s competing solutions are.
The problem is that ultra-fast SSDs are expensive and offer lower capacities than spinning hard drives, which are now cheap and offer a ton of storage for very little money. There are also practical limits on how many NVMe-based SSDs you can have in your system, while SATA-based hard drives can seemingly be added in bulk without any problems. In reality, you can have a lot of NVMe-based SSDs in your system, but the cost of doing so adds up quickly and increases system complexity in a way that simply adding hard drives doesn’t.
First, let’s get some background on the feature and why AMD would bother with what sounds like yet another SSD caching scheme that many might not be drawn to. AMD has never achieved feature parity with Intel on its platforms, and when it comes to storage, that’s essentially remained the case. While AMD looks just as good or better on paper in some respects, its platform doesn’t perform as well as the specifications for the storage interfaces would indicate. There are many reasons for this, but they’re beyond the scope of this article. SATA and storage performance generally trail Intel in one area or another and are less flexible than Intel’s offerings. AMD’s biggest advantage on paper is not having to deal with the bandwidth limitations of DMI 3.0. Intel has a solution for that as well, but you have to pay extra to get around the limit.
AMD has strived and failed to achieve even feature parity, much less performance parity. With the launch of its 400-series chipsets, AMD hopes to close some additional gaps and make its platform more appealing. To that end, it has finally managed to create something to compete with Intel’s Smart Response Technology. For those who may not recall, Intel’s Smart Response Technology, or "ISRT" as I will refer to it from here on out, is Intel’s SSD caching feature. We covered it in detail back when the Z68 chipset launched. Frankly, I think people knew that SSD caching, while cool from a technical standpoint, wasn’t really the way forward. When you spent the money on an SSD, you wanted to use it as the OS drive, as that made the system perform better overall. Games didn’t benefit from SSDs as much as the OS did.
Despite the obvious advantages of SSD caching, the feature never really took off. Intel's Z68 chipset with Smart Response Technology worked very well and did what it claimed to do. However, we simply found that enthusiasts never used drives this way. The small, fast SSD was typically used for the OS and applications with a small footprint. That arrangement provides the biggest improvement in overall system performance, as games don't really benefit from SSDs in most cases outside of reduced level load times.
Fast storage doesn’t do anything for your frame rates, so you can install games on your slower, larger storage without ill effect. Some games show no improvement at all, while others only do because they were designed for consoles, which don't rely on the same storage model that PCs do. Examples of this are Star Wars: The Old Republic, which doesn't give a damn how fast your storage is, and Arkham Knight, which stutters so badly it's unplayable when run from a mechanical hard drive. While neither of these represents the latest crop of games, they illustrate the point: most games don't benefit from SSDs, and the ones that do benefit mostly because they are bad console ports, not because of some technical advantage to loading faster.
Back when the SSD hit the market, it seemed like the hard drive as we knew it was destined to die off. Articles at the time suggested, or rather predicted, that magnetic media densities would hit a wall and stop growing. SSDs would eventually catch up, and we would replace mechanical magnetic media entirely. This seemed plausible enough at the time. We would still have "hard drives" in a sense, but they would work differently. A half decade or so later, we are still using mechanical drives in our builds. They have simply been relegated to a lower performance tier, which I will talk about in more detail later.
SSDs have gotten larger over the years, as have mechanical drives, but they haven’t reached capacities anywhere near large enough to supplant mechanical drives completely. Both have continued to get faster, but SSDs have made huge leaps in this area while magnetic media hasn’t. As a result, you end up with two storage tiers in your system: a fast, small, expensive tier and a slow, large, relatively cheap (per GB/TB) tier. In the server space, larger SSDs are available than what we generally see in the consumer market, but this storage is quite costly per GB, with options like Intel’s 4TB DC P3608 running over $2,000. The Intel 3D XPoint based DC P4800X is nearly $1,200 at a mere 375GB, and there are solutions that cost more than that. It’s no different with mechanical hard drives, as some of those are faster than others.
The amount of storage required for games and applications has grown alongside the capacities available to us. If I remember correctly, the largest games available when SSDs first hit the market were 5 to 10GB at most. That was large enough that the early 20-60GB SSDs were largely incapable of being used for storing games. Today, games of 50GB or more are common. Even a game that's not normally so large can quickly get out of hand when you add mods, expansion packs, or DLC into the mix. Some games are stored compressed on the hard drive and must be uncompressed before they can be modified, which can easily double the size of the game or more in some cases.
Many people stream, create video content, and store images. When we have multi-megapixel cameras in our pockets and a social climate that promotes this type of behavior, it’s easy to see why we need ever-expanding storage capacities. When SSDs originally hit the market, it was predicted that the mechanical drive would be a thing of the past long before I started writing this article. In reality, that hasn't happened, for many of the aforementioned reasons. NAND flash density isn't sufficiently cheap to match the capacities of magnetic media, and therefore most people tend to have both in their machines.
The public response to Smart Response Technology wasn’t really all that surprising in hindsight. It came and went with no one giving a damn that it was here in the first place. People simply didn't use their systems the way Intel thought they would. SSDs were purchased by many, but they simply weren't used for caching. The fact is that the same problems Intel devised a solution for all those years ago are still present today. Ultra-fast NVMe drives above a terabyte in size are readily available but still not common. They are expensive, and a single 1TB volume gets chewed up fast.
At present, the games directory on my system is 553GB, and that doesn't count Steam games, which were moved off my 1.2TB Intel SSD 750 this past weekend onto an 800GB Intel SSD 750 I had lying around. My Steam folder is 311GB. Together with my OS and applications, I have consumed nearly 1.2TB of storage space. Granted, I may not be a typical case, and I may have more applications installed than most, but certainly other people have more pictures, games, and personal files than I do. I don't have that many games installed compared to the number I own on Steam and Origin. It doesn't take much to chew through a large amount of space quickly.
Fortunately, setting up StoreMI is relatively easy. The only word of caution I can offer is to back up your data before starting, but that should go without saying. One last caveat: changing modes or adjusting your storage tiers after the fact is a lengthy process. It can take 30 minutes or more to make these changes, and probably much longer with larger mechanical disks holding tons of data. AMD’s StoreMI Quick Install Guide outlines the process and each of the performance modes. Overall, it is quite simple and straightforward, which is surely a win in AMD's column.
You can see the first couple of slides from the install guide above. The third and fourth images show how the drive appears in Disk Management when you go through the process. Similar to a RAID array, you will have a single logical volume after you combine the two drives into one. The separation happens at the software level after that, where data is promoted or demoted according to your usage patterns, which are analyzed and adjusted in real time.
Those of us who have worked in the IT field may be familiar with the storage tier hierarchy model. StoreMI effectively replicates this basic model, in which you have several tiers of storage based on performance; the fastest tier is tier 1, and so on. Here is what AMD says in its user guide:
"AMD StoreMI is not a caching solution; it utilizes advanced machine intelligence, virtualization and automated tiering to analyze the data blocks that are most often accessed, and actually moves those blocks to the fastest storage tier. StoreMI operates consistently at near SSD performance levels, continuously adapting to changing storage usage patterns in real time. As a result, the user experiences the performance of the fastest tier SSD drive, combined with the large capacity and low-price advantages of the mechanical hard disk in a single, large, and easy-to-manage drive."
Essentially, you have already been using a tiered storage model whether you realize it or not. If you have an SSD and an HDD in the same system, you are manually prioritizing data and deciding where it lives and executes from. What makes StoreMI exciting is that it automates this process. More than that, it's dynamic and adjusts to your changing usage patterns.
To understand how StoreMI differs from standard SSD caching, we need to understand the basics of how SSD caching works. The premise is simple. Using ISRT as an example: you create a RAID array in a proprietary mode which links the SSD to the mechanical hard drive. The OS and software are installed to the mechanical hard drive. The caching algorithm detects the software that’s used most frequently and caches it on the SSD for faster execution. This works well, but there are limitations to the technology. It doesn’t help unless something has been cached: you must run a program at least once or twice before you can benefit from the caching algorithm.
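That first-access penalty is easy to see in a toy model. The sketch below is not Intel's actual algorithm, just a generic LRU read cache of my own construction: blocks are only copied to the fast device after they have been read once, so the first touch always pays the slow-tier cost.

```python
from collections import OrderedDict

class BlockCache:
    """Toy SSD-style read cache: a miss is served by the slow device
    and the block is then copied into the cache for future reads."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order

    def read(self, block_id, slow_read):
        if block_id in self.cache:            # hit: served from the fast device
            self.cache.move_to_end(block_id)
            return self.cache[block_id], "hit"
        data = slow_read(block_id)            # miss: go to the mechanical drive
        self.cache[block_id] = data           # populate the cache for next time
        if len(self.cache) > self.capacity:   # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data, "miss"

cache = BlockCache(capacity_blocks=2)
hdd = lambda b: f"data-{b}"
print(cache.read(7, hdd)[1])  # miss: first access always hits the HDD
print(cache.read(7, hdd)[1])  # hit: now served from the cache
```

Note that the capacity here is tiny purely to make eviction visible; the same logic is why a cached program stops being fast again once enough other data has pushed it out.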
In contrast, StoreMI uses what AMD calls "machine intelligence" to constantly analyze data blocks and migrate them to the appropriate tier. Data is gathered on which programs are used most frequently, and those are physically transferred to the fast storage tier. Unlike a cache, which can or must be flushed on occasion, this placement is permanent until your behavior dictates that it needs to change. It's not unlike the algorithms VMware uses to decide where to migrate a heavily used VM when a given host is busy: it can move the VM to a faster host, or to one that isn’t doing as much work, to get better performance. That’s essentially what happens when data in the slow tier is moved to the fast tier.
One major difference between ISRT and StoreMI is that StoreMI offers a RAM cache. You can enable this optional function if your system has enough memory to allocate to it; you need a minimum of 6GB of system RAM to utilize the feature. The benefit is that you get a 2GB cache as an even faster storage tier. Whenever possible, data will be served straight out of RAM, which is about 10 times faster, as you will see later. With many people having 16GB+ of RAM these days, leveraging this is a no-brainer and should be possible for almost anyone. You can add the feature to systems with the OS already installed on either an SSD or an HDD.
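Conceptually, the RAM cache just adds a third, even faster level in front of the SSD and HDD tiers. A hypothetical read path might look like the sketch below; the function names and the tiny capacity are my own, used only to make the RAM-first lookup order concrete.

```python
def make_read_path(ram_cache, ssd_tier, hdd_tier, ram_capacity_blocks):
    """Build a read function that tries RAM first, then the SSD tier,
    then the HDD tier, populating the RAM cache on the way back."""
    def read(block_id):
        if block_id in ram_cache:                 # fastest: straight out of RAM
            return ram_cache[block_id], "ram"
        if block_id in ssd_tier:
            data, tier = ssd_tier[block_id], "ssd"
        else:
            data, tier = hdd_tier[block_id], "hdd"
        ram_cache[block_id] = data                # keep a copy in RAM for next time
        if len(ram_cache) > ram_capacity_blocks:  # drop the oldest insertion
            ram_cache.pop(next(iter(ram_cache)))
        return data, tier
    return read

read = make_read_path({}, {"os": "..."}, {"archive": "..."}, ram_capacity_blocks=2)
print(read("os")[1])  # "ssd" on the first touch
print(read("os")[1])  # "ram" once cached
```

A real 2GB cache of 4KB blocks would hold roughly half a million entries; the two-block capacity here exists only so the eviction path is exercised.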
AMD provides three performance options for StoreMI. This list is taken directly from AMD’s StoreMI FAQ:
Performance option 1: You installed Windows on a relatively slow mechanical hard drive. With StoreMI, if you add an SSD or NVMe drive later, you will enjoy near SSD performance levels when you boot your PC, or load programs and data that you most often employ.
Performance Option 2: You installed Windows on a fast SSD drive, but are running out of capacity. If you add a large mechanical hard disk, StoreMI will recognize that the programs you use most should stay resident on the speedy SSD, and move the data that is rarely accessed to the mechanical hard disk. This gives you the best of both worlds: high performance with large capacity.
Performance Option 3: For the fastest booting and storage, a large conventional SSD paired with bootable 3D XPoint or NVMe drive for incredible booting speed, application launch, and data access performance. Of course, these are only three simple scenarios to illustrate the benefits of StoreMI. The software is also able to add DDR4 RAM to the drive pool for the fastest possible responsiveness, advanced 3D XPoint and NVMe SSDs for incredible boot times and speedy transfers, and large mechanical hard disks for giant capacity. The bottom line is, AMD StoreMI for AMD Ryzen Processors delivers an incredible combination of storage speed and storage size.
It’s obvious to me that performance option 1 wouldn’t be the best way to leverage the feature, but it’s an easy way to do so if you’ve already installed your OS to a mechanical hard drive. Performance option 2 seems like the most likely usage scenario out of the three cases. Option 3 wouldn’t be out of the question either.
More information on each of the modes can be found in the StoreMI installation guide, which I linked earlier in the article. So far, this sounds great. Well, not quite; there are a few limitations. First, there is a 256GB license limit, and it's very clearly a license limit rather than a physical size limit. You can use larger SSDs, but only 256GB of the drive benefits under the free license: you'll have to partition larger drives, and the leftover capacity won't be used by StoreMI. You can purchase licenses to lift this limit, which I will cover later. The feature is also, sadly, incompatible with RAID arrays, where I think it could be of even more value. And even though StoreMI isn't a RAID array, it can't really handle drive failure: a failure of either drive will cause the system not to boot, or other problems, not the least of which are a loss of performance and probable data loss. However, if a write between tiers is interrupted, data will not be lost or corrupted, as StoreMI waits for the physical drive to acknowledge completion of the write before it's committed in software.
What value does this really have? Let’s find out.