Hidden SSD Specifications That Affect Performance
Storage drives are among the most straightforward components you need for a PC. Often the choice boils down to fast-launching SSDs or cheap, archive-friendly HDDs. Or, if you can stretch your budget further, deciding whether DRAM is required for a boot SSD, or can be skipped for a less critical gaming SSD. What is the difference? We look at hidden SSD specs that affect performance.
SSDs in particular are the most deceptive when it comes to specifications. Some details are simply omitted by manufacturers, and because of that, you might be missing out on performance optimization features without even realizing it.
In this article, we briefly list these specs and explain them, so you can maximize the performance of your SSDs and HDDs.
“Secret” Storage Drive Specs That Affect Performance
- Maximum usable capacity
- Data cache options
1. Maximum usable capacity
For regular PC enthusiasts, the subject of maximum effective SSD capacity might already be old news. But newcomers to the DIY computer space would not immediately see this basic aspect of NAND flash storage anywhere on a storage drive’s spec sheet.
To restate it: SSDs tend to perform less optimally when filled close to their storage limit. Some of the general reasons for this include:
- The free space needed for constant small random writes and swapping becomes constricted, often causing odd data management behavior.
- The drive spends more time consolidating valid data and erasing partially written blocks to free up space (a process known as “garbage collection”), which slows the SSD somewhat.
- Spare blocks reserved for wear leveling become limited, though this is usually only a problem for SSDs that are already several years into their active service life.
In other words, effective SSD capacity is a hidden specification that is both a partial indicator of how well the drive manages data, and a guideline for how much data you should actually fill it with.
Most users recommend leaving at least 30% of an SSD open to accommodate these data management quirks. This is not a strict requirement, merely a rule of thumb. For current-generation SSDs, you can most likely push it down to 10% or less, so long as there is still enough room for data to “wiggle around” each block.
For example, your 1TB Crucial MX500 might have roughly 740 to 860GB of effective usable capacity once you factor in the free-space headroom, along with other things such as system partitions and file system formatting.
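The arithmetic behind estimates like this can be sketched quickly. This is only an illustration: the 10% and 30% headroom figures are the rule-of-thumb values from above, and the conversion assumes the marketed "1TB" means 10^12 bytes while the OS reports binary gibibytes.

```python
# Rough effective-capacity estimate for a "1TB" SSD (illustrative numbers).
# Assumption: marketed capacity is decimal (10**12 bytes), while the OS
# reports binary GiB; 10-30% free space is the rule of thumb discussed above.

MARKETED_BYTES = 1_000_000_000_000   # "1TB" on the box
GIB = 1024 ** 3

reported_gib = MARKETED_BYTES / GIB  # what the OS shows (~931 GiB)

for headroom in (0.10, 0.30):
    usable = reported_gib * (1 - headroom)
    print(f"{int(headroom * 100)}% free space -> ~{usable:.0f} GiB usable")
```

The takeaway is that the "missing" space is partly a decimal-vs-binary reporting quirk and partly the recommended free-space cushion.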
2. Data cache options
Cache is a general term for dedicated hardware storage that lets the system quickly access data crucial to standard operations. It appears in almost every PC component that actively processes information, whether by executing instructions itself or by relaying information elsewhere.
For SSDs, a cache can function in two primary ways. One, to quickly access strings of data that need to be actively read or written. And two, to hold a sort of “data map” that lets the SSD know which physical block contains the data for a given group of files.
To achieve either method, SSDs can use several types of caching features, depending on how data is treated within the system:
A. DRAM (Dynamic Random Access Memory)
SSD DRAM is functionally the same as your motherboard’s memory modules, but made specifically for storage data management. For example, the PC can speed up writing tasks by dumping data into it temporarily, then committing it to permanent storage a few moments later. Or, it can hold the aforementioned virtual map, tracking which physical block each piece of written data lives in, as well as the current state of each of those blocks.
Because of this, a DRAM cache is considered somewhat of a baseline requirement nowadays for the stability and reliability of boot drives. With DRAM, the SSD doesn’t have to allocate part of its NAND flash for the same purpose (which would also be subject to the disadvantages of NAND flash memory), as most slightly cheaper DRAM-less SSDs must.
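The “virtual map” idea can be sketched as a simple lookup table. This is a deliberately toy model: real flash translation layers also track wear, block erase state, and much more, and the class name here is purely hypothetical.

```python
# Toy sketch of the "data map" (flash translation layer) that an SSD's
# DRAM cache holds. Hypothetical simplification: real FTLs track wear
# leveling, block states, garbage collection candidates, and more.

class ToyFTL:
    def __init__(self):
        self.l2p = {}        # logical block address -> physical block address
        self.next_phys = 0   # next free physical block (naive allocator)

    def write(self, lba, data):
        # NAND can't overwrite in place: each write lands on a fresh
        # physical block, and the map is updated to point there.
        phys = self.next_phys
        self.next_phys += 1
        self.l2p[lba] = phys
        return phys

    def read(self, lba):
        # The map answers: which physical block holds this logical block?
        return self.l2p.get(lba)

ftl = ToyFTL()
ftl.write(lba=7, data=b"hello")
ftl.write(lba=7, data=b"hello v2")  # rewrite lands on a new physical block
print(ftl.read(7))                  # -> 1 (the second physical block)
```

Keeping this table in fast DRAM, rather than in the NAND itself, is exactly what separates DRAM-equipped drives from DRAM-less ones.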
B. HMB (Host Memory Buffer)
HMB is a low-level memory-sharing feature that can offer much the same functionality as onboard DRAM. So apart from somewhat slower performance, the main difference is that it is exclusive to NVMe drives.
Instead of soldering memory hardware directly onto the SSD, an NVMe SSD with HMB “borrows” a portion of the main computer’s system DRAM. The information is then passed back and forth over the PCIe link. While a bit slower than onboard DRAM, this is still far faster than saving the same virtual block map and buffered data to NAND flash, and more reliable as well.
C. SLC Cache
This one is a separate cache mechanism built solely to make data access faster and more responsive. It is often implemented so an SSD can keep up with the standard performance expectations of the tier each product is sold in.
Explained further, it compensates for the slower write speeds of TLC and QLC NAND by using a small but dedicated SLC cache. Written files are quickly saved to it first, then passively (and gradually) transferred to the main storage later, when the SSD is not as active. So long as the data being written doesn’t exceed the cache limit, the drive can behave as if the entire SSD used SLC NAND flash.
If it does go over the limit (if you’re a 4K lossless video junkie, for example), well… the drive throttles significantly, because it now has to write directly to TLC/QLC flash memory.
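This fast-then-slow behavior can be modeled in a few lines. The numbers here are purely illustrative assumptions, not measurements from any real drive: a 30 GB pseudo-SLC cache absorbing writes at about 3000 MB/s, with direct-to-QLC writes falling back to about 500 MB/s.

```python
# Toy model of SLC-cache write behavior (illustrative speeds, not a benchmark).
# Assumptions: 30 GB pseudo-SLC cache at ~3000 MB/s; once it fills,
# writes drop to a hypothetical ~500 MB/s direct-to-QLC speed.

CACHE_GB = 30
FAST_MBPS = 3000   # assumed pseudo-SLC write speed
SLOW_MBPS = 500    # assumed direct-to-QLC write speed

def write_time_seconds(total_gb):
    """Time to write `total_gb` sequentially: cache first, then raw QLC."""
    fast_gb = min(total_gb, CACHE_GB)
    slow_gb = total_gb - fast_gb
    return (fast_gb * 1000) / FAST_MBPS + (slow_gb * 1000) / SLOW_MBPS

print(f"20 GB write: {write_time_seconds(20):.1f} s")    # fits in the cache
print(f"100 GB write: {write_time_seconds(100):.1f} s")  # spills past it
```

Under these assumptions, a 20 GB write finishes at full speed, while a 100 GB write spends most of its time at the slower fallback rate, which is the cliff large sustained transfers run into.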
NAND is the name of a logic gate in digital electronics. The name stuck to flash memory because of how similarly it is designed at the circuit level. “NAND type” then refers to how each flash memory configuration stores data, though it really is mostly about cramming the most binary values into each smallest unit of storage.
At the moment, consumer-grade SSDs are typically manufactured with these NAND-types:
- SLC (single-level cell) – as its name suggests, it stores one binary value (bit) per cell. It is currently the fastest and most reliable (in terms of data integrity). However, it is also very expensive, because each cell holds so little data relative to the physical space it occupies.
- MLC (multi-level cell) – technically “DLC” (double-level cell), but the term “MLC” is much more universally accepted. Stores two bits per cell. It has a slightly higher bit error rate, but is still almost as reliable as SLC. Also nearly as expensive.
- TLC (triple-level cell) – where the majority of SSDs fall today. Storing three bits per cell puts this NAND type at a sweet spot between speed, capacity, reliability, affordability, and ease of manufacture. Its disadvantages are generally removed, or at least mitigated, by other modern supporting components.
- QLC (quad-level cell) – where a good number of dirt-cheap SSDs on the market today find their place. At four bits per cell, the storage density is quite high, though data reliability is considerably hampered without the assistance of bit correction technologies (also integrated into the SSD).
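The reliability trade-off in the list above follows directly from the math: each extra bit per cell doubles the number of distinct charge states the cell must reliably distinguish, shrinking the margin between them. A quick sketch:

```python
# Bits per cell determine how many distinct charge states each NAND cell
# must distinguish: 2**bits states. More states = tighter voltage margins,
# which is why QLC is slower and less error-tolerant than SLC.

nand_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in nand_types.items():
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell -> {states} charge states to tell apart")
```

Going from SLC to QLC quadruples capacity per cell, but the cell must distinguish 16 states instead of 2, which is the root of the speed and endurance differences described above.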
NAND configurations are usually announced when an SSD product first launches, but they are almost never listed in the official specs. This is understandable, since the performance difference at this point is either generational or tied to the product’s target market. SLC pricing is only ever going to be justified for enterprise-level applications, while at the consumer level TLC and QLC remain dominant, so consistently listing the NAND type on spec sheets would arguably be redundant.