What Is an SSD? All About Speed and Types
HDs have gone through stunning evolutions in recent years. However, current applications require even more sophisticated storage devices, able to combine fast performance, reasonable capacity, low power consumption and durability. SSD (Solid-State Drive) units, popularly called "disks", are the answer to this need.
In this text, I, Emerson Rosemary, will explain exactly what an SSD is. I also discuss related concepts, such as form factors, Flash memory, TRIM, construction technologies and so on.
Shall we get started? If you prefer, skip straight to the desired topic in the following list:
– What is SSD?
– Flash memory: the main ingredient
– Single-Level Cell (SLC)
– Multi-Level Cell (MLC)
– Triple Level Cell (TLC)
– Quad Level Cell (QLC)
– 3D NAND (V-NAND)
– 3D XPoint
– SATA Express
– PCI Express
– M.2
– U.2
– The nanometers (nm) of an SSD
– IOPS and other characteristics for choosing an SSD
– The SSD controller
– History: the first SSD on the market
What is SSD?
Let's start by defining the idea. As you already know, SSD stands for Solid-State Drive. This is a type of data storage device that, in a way, competes with hard drives.
It is commonly thought that the name alludes to the absence of moving parts in the device's construction, unlike HDs, which need a motor, disks and read/write heads to work.
The term "solid-state" actually refers to the use of solid material to carry electrical signals between transistors, instead of the vacuum tubes used in the era of valve electronics.
In SSD devices, storage is done in one or more memory chips, completely eliminating mechanical systems. As a consequence, drives of this type end up being more economical in power consumption; after all, there are no motors or similar components to feed (note, however, that other conditions can raise power consumption, depending on the product).
This characteristic also lets "SSD disks" (an SSD is not a disk, so the name is not strictly correct, although relatively common) use less physical space, because the data is stored in special chips of very reduced size. Thanks to this, SSDs came to be widely used, including in extremely portable devices such as ultra-thin notebooks (ultrabooks) and tablets.
Another advantage of not having moving parts is silence: you cannot hear an SSD work, as can happen with an HD. Physical resistance is also a benefit: the risk of damage is lower when the device suffers a fall or is shaken (which is not to say that SSDs are indestructible).
In addition, SSDs weigh less and, in most cases, can work at higher temperatures than hard disks support. There is yet another considerable feature: data transfer times between RAM and the SSD tend to be much smaller, speeding up data processing.
Of course, there are also disadvantages: SSDs are more expensive than HDs, although prices decrease as adoption grows. Because of this (and, in many cases, also due to technological limitations), the vast majority of SSDs offered on the market have much less storage capacity than hard disks in the same price range.
Flash memory: the main ingredient
An SSD is based on chips specially prepared to store data even when not receiving power. They are, therefore, non-volatile devices. This means it is not necessary to use batteries or leave the device constantly plugged in to keep the data on it.
To make this possible, SSD manufacturers have, by convention, adopted Flash memory. This is a type of EEPROM* (see the explanation below) developed by Toshiba around 1980. Flash memory chips are similar to the RAM chips used in computers but, unlike the latter, their properties mean the data is not lost when the power supply is cut, as already noted.
* EEPROM is a type of ROM that allows data to be rewritten. Unlike EPROM, however, the erase and write processes are carried out electrically, so it is not necessary to move the device to special equipment for rewriting to occur.
There are basically two types of Flash memory: NOR Flash (Not OR) and NAND Flash (Not AND). Each name comes from the data-mapping technology used. The first type allows random access to memory cells, as happens with RAM, at high speed. In other words, NOR lets you quickly access data at different memory positions without this activity having to be sequential. NOR is used in BIOS chips and smartphone firmware, for example.
The NAND type, in turn, also works at high speed, but performs sequential access to memory cells and treats them as a set, that is, in blocks of cells, instead of accessing them individually. In general, NAND memories can also store more data than NOR memories of physically equivalent size. NAND is, therefore, the cheaper type and the most used in SSDs.
SLC, MLC, TLC and QLC technologies
Currently, there are three main technologies that can be employed in both NOR and NAND Flash memories: Single-Level Cell (SLC), Multi-Level Cell (MLC) and Triple-Level Cell (TLC). In 2018, the manufacturer Micron presented the first SSD with a new standard, Quad-Level Cell (QLC).
It is quite likely that you will find one of these four acronyms in the description of the SSD you are choosing, so it is good to know them. If you prefer, you can go deeper into the subject in this text that explains the differences between SLC, MLC, TLC and QLC.
Single-Level Cell (SLC)
The first SSDs were based on chips with SLC technology, which basically keeps one bit in each storage cell. This one-bit-per-cell scheme makes the device more expensive, since more cells are necessary to store the same amount of data as the MLC and TLC types.
On the other hand, an SLC chip is quite reliable, supporting, by default, about 100,000 read and write operations per cell, against 10,000 for MLC and 5,000 for TLC (these numbers may vary as the technology evolves).
SLC chips also generally allow read and write operations to be performed more quickly; after all, each cell stores only one bit, 0 or 1. In MLC, for example, a cell can hold two bits; this increase in the amount of data makes the procedure a little slower.
SLC technology has practically fallen into disuse, being intended today for very specific applications.
Multi-Level Cell (MLC)
The MLC type is quite common. It consists of a process that uses differentiated voltages to make a memory cell store two bits (theoretically, it is possible to make it store more) instead of just one, as in SLC.
Thanks to MLC technology, the cost of Flash storage devices becomes smaller, increasing the offer of products such as USB sticks and smartphones at more accessible prices.
As you may have noticed, MLC gives the SSD the ability to store more data per chip: where there was only one bit, now there are two. There is a downside, however: performance tends to be lower compared with SLC, as I explained in the previous topic.
This happens because, in MLC, a cell may hold four values of information thanks to its support for two bits: 00, 01, 10 and 11. Because of this, the drive controller needs to use very specific voltages to correctly identify whether the cell is in use, and with which value. This process ends up making the operation slower.
Triple Level Cell (TLC)
The name itself already indicates: the TLC type stores three bits per cell, so the volume of data that can be saved in the unit increases considerably. It is one of the most recent standards on the market.
However, performance is also lower compared with SLC technology; after all, we get eight possible values with three bits, which is why there is a wider variety of voltages: 000, 001, 010, 011, 100, 101, 110 and 111.
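The arithmetic behind these voltage levels is simple: a cell holding n bits must distinguish 2^n charge states. A quick sketch in Python (the cell-type table is just the four standards discussed here):

```python
# Each extra bit per cell doubles the number of voltage states the
# controller must tell apart: states = 2 ** bits_per_cell.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def voltage_states(bits_per_cell: int) -> int:
    """Number of distinct charge levels a cell must hold."""
    return 2 ** bits_per_cell

for name, bits in CELL_TYPES.items():
    print(f"{name}: {bits} bit(s) per cell -> {voltage_states(bits)} states")
# SLC: 2 states ... QLC: 16 states
```

This is why each step from SLC to QLC trades speed and endurance for density: the more states a cell must hold, the finer the voltage margins the controller has to read.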
Here, the main benefit is, again, the gain in storage space, since TLC memories tend to be slower than MLC chips which, in turn, perform below SLC technology.
Even so, TLC and MLC memories are faster than HDs, which is why their use is feasible in most applications: in many situations, it does not pay to have a rather fast SSD that does not offer enough storage capacity.
In addition, complementary technologies can compensate, making SSDs with TLC NAND achieve interesting speeds.
Quad Level Cell (QLC)
If you thought that QLC chips would follow the logic, storing four bits per cell, you thought right. QLC NAND was developed to further increase the density of storage chips: more data without increasing the physical dimensions of the component.
It is an interesting advantage, but it is prudent not to expect QLC NAND to dominate the market, at least until the technology is improved: on the one hand, it is possible to store more data; on the other, it is estimated that chips of this type support only a thousand read and write operations per cell.
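A rough way to see what those per-cell cycle counts mean in practice is the common back-of-the-envelope endurance formula TBW ≈ capacity × P/E cycles. Real drives also factor in write amplification and over-provisioning, so treat this as a sketch, not a datasheet figure:

```python
def endurance_tbw(capacity_gb: float, pe_cycles: int,
                  write_amplification: float = 1.0) -> float:
    """Back-of-the-envelope endurance in terabytes written (TBW):
    each cell survives `pe_cycles` program/erase cycles, so the
    drive can absorb roughly capacity * cycles of host writes."""
    return capacity_gb * pe_cycles / write_amplification / 1000

# Per-cell cycle counts quoted in the text above:
for name, cycles in [("SLC", 100_000), ("MLC", 10_000),
                     ("TLC", 5_000), ("QLC", 1_000)]:
    print(f"240 GB {name}: ~{endurance_tbw(240, cycles):,.0f} TBW")
```

Even with these idealized numbers, the hundredfold gap between SLC and QLC endurance is visible, which is why QLC lands in read-heavy niches.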
For this reason, SSDs with QLC chips tend to be directed only at very specific applications.
3D NAND (V-NAND)
The industry's efforts to increase the storage capacity of SSDs do not stop there. The main companies in the sector are employing, in their more sophisticated products, a technique called 3D NAND. The '3D' in the name is a reference to the stacking of memory cells.
To facilitate understanding, imagine that an SSD is a warehouse full of boxes. Each box stores information. However, these boxes sit side by side, occupying the whole floor and thus forming a two-dimensional (2D) plane.
When the warehouse was full, someone had the idea of putting one box on top of another, forming stacks and more stacks of boxes, that is, a three-dimensional (3D) arrangement. Note that, with this approach, the storage capacity increased, but the warehouse remained the same size.
This is more or less the principle of 3D NAND: instead of having only one horizontal layer of memory cells in the chip, we have several, forming a stack.
The industry started by creating stacks with 24 layers, but soon increased to 32. To cite just one example, in 2015 Intel presented an MLC chip with 32 layers whose capacity was 256 gigabits (or 32 gigabytes, GB). Another 32-layer chip from the company used TLC technology and, therefore, offered 384 gigabits (48 GB). Join eight of these MLC chips and you have a 256 GB SSD (or, with TLC chips, a 384 GB one).
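The arithmetic in that example is easy to verify: die capacities are quoted in gigabits, and eight bits make a byte. A quick check:

```python
def die_gb(capacity_gbit: int) -> int:
    """Convert a flash die capacity from gigabits to gigabytes."""
    return capacity_gbit // 8

# Intel's 32-layer MLC die mentioned above: 256 Gbit per chip.
print(die_gb(256), "GB per MLC die")        # 32 GB
print(8 * die_gb(256), "GB with 8 dies")    # 256 GB

# The TLC variant: 384 Gbit per chip.
print(die_gb(384), "GB per TLC die")        # 48 GB
print(8 * die_gb(384), "GB with 8 dies")    # 384 GB
```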
In 2016, the industry started to invest in chips with 48 and 64 layers. Western Digital, for example, announced in that year an MLC chip which, by having 64 layers, could store 512 gigabits of data. In 2017, chips with 96 layers began to emerge.
All of this, it should be noted, is done without affecting the physical size of the device. The increase in the number of layers is possible thanks to modifications in manufacturing techniques and the use of certain materials.
Samsung is one of the companies that uses stacking, but based on a technology called Charge Trap Flash (in general, other manufacturers work with the FGMOS technique, Floating-Gate MOSFET). Instead of 3D NAND, the company uses the name V-NAND (Vertical NAND).
3D XPoint
In mid-2015, Intel and Micron announced 3D XPoint, a new type of non-volatile memory that promises to be up to a thousand times faster than conventional NAND Flash memories. Yes, a thousand times!
3D XPoint memories are also denser, that is, they support more memory cells. Consequently, they can also store much more data, up to ten times more. As if that weren't enough, the technology employed in their construction makes 3D XPoint up to a thousand times more resistant.
But these are theoretical estimates. In 2016 and early 2017, the first 3D XPoint SSDs tested were up to four times faster in write operations and three times more resistant than NAND Flash units.
Perhaps expectations will improve as the technology matures but, in its early years on the market, Intel and Micron expect 3D XPoint to reach up to ten times the speed, three times the resistance and up to four times the storage capacity.
Even so, it is a significant step forward, isn't it? All these advantages are possible because 3D XPoint is also based on a technique of layers (as the '3D' indicates). The difference is that the cells are positioned at the intersections of the lines of each layer, in such a way that they sit very close together. Add to this the fact that transistors are not necessary (as opposed to NAND memory), and the density ends up much higher.
Basically, this is what allows 3D XPoint memories to store more data and offer more speed in data transfer. The construction model makes it easier to access small blocks of memory (whereas NAND Flash generally works with larger blocks), speeding up write and read processes.
There is no forecast for 3D XPoint memories replacing Flash memory. For now, the technology caters only to niche markets. Intel, for example, employs it in the Optane product line.
Formats and interfaces: M.2, SATAe, NVMe and more
With the approach we have taken so far, we could understand any device that uses Flash memory as an SSD. But, in fact, it is more appropriate to think of the SSD as a type of device that competes with the hard disk; we can't forget the word "Drive" in the name.
Following this line of thought, the industry started to provide SSDs as if they were HDs, but with memory chips instead of disks. Thus, these devices can be connected to SATA interfaces, for example. We can then find SSDs in 1.8-, 2.5- and 3.5-inch sizes, just like HDs.
The problem is that even the fastest version of SATA (SATA III), which achieves data transfer rates of up to 6 Gb/s (gigabits per second), may be insufficient for certain SSDs: many models, especially those targeted at high-performance computers (such as those used by gamers), can work at speeds higher than the SATA III bus allows.
SATA Express
To cope with this limitation, the industry has resorted to some alternatives, among them SATA Express (also known as SATAe). The name is a reference to the junction of two technologies: SATA and PCI Express.
PCI Express technology is fairly common in computers (your video card probably uses this standard) and provides high speeds in data transfer. Why not take advantage of all this potential with SSDs?
The SATA Express connector combines two conventional SATA connectors with a third that is used for electrical power. The interesting thing about this approach is that, if a SATAe slot is not in use on the motherboard, it can be used to connect up to two devices via "normal" SATA.
Theoretically, SATA Express can reach data transfer rates of up to 16 Gb/s.
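To compare these bus figures with the MB/s numbers quoted on product pages, divide by eight and account for the line encoding: SATA III uses 8b/10b encoding (80% efficient), while the PCIe 3.0 lanes behind SATAe use 128b/130b. A rough conversion, ignoring protocol overhead beyond the encoding:

```python
def usable_mb_per_s(line_rate_gbps: float, encoding_efficiency: float) -> float:
    """Approximate usable bandwidth of a serial link in MB/s:
    strip the line-encoding overhead, then convert bits to bytes."""
    return line_rate_gbps * 1000 * encoding_efficiency / 8

print(f"SATA III (6 Gb/s, 8b/10b):  ~{usable_mb_per_s(6, 8 / 10):.0f} MB/s")
print(f"SATAe (16 Gb/s, 128b/130b): ~{usable_mb_per_s(16, 128 / 130):.0f} MB/s")
# SATA III tops out near 600 MB/s, which is exactly the ceiling
# fast SSDs started hitting.
```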
PCI Express
If PCI Express (PCIe) is so fast, wouldn't it be convenient to have SSDs based entirely on this technology? Yes! These units do, in fact, exist. Some models reach data read rates of up to 2,400 MB/s; the write speed usually does not pass half the read rate but, even so, remains high.
All this performance weighs heavily on the pocket. PCI Express SSDs are usually very expensive, which is why they tend to be used only in high-performance applications.
M.2
M.2 (formerly known by the acronym NGFF, Next Generation Form Factor) is a specification that can work with both SATA III and PCI Express. The standard can therefore provide quite high speeds: up to 32 Gb/s with the use of four PCI Express 3.0 lanes, the fastest version currently available (although there are still no SSDs reaching that speed).
Another advantage of M.2 is its flexibility of formats, which has made the standard be used both in very thin laptops and in desktops. There are widths ranging from 12 mm to 30 mm and lengths from 16 mm up to 110 mm.
With M.2, the SSD ends up assuming a card format. The 22 mm wide option is the most common. The smaller models, obviously, are more suitable for compact devices such as ultra-thin notebooks.
Here is a summary of the most common M.2 formats and their respective sizes (width x length):
- M.2 22110: 22 x 110 mm
- M.2 2280: 22 x 80 mm (perhaps the most common)
- M.2 2260: 22 x 60 mm
- M.2 2242: 22 x 42 mm
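The size codes follow a simple convention: the first two digits give the width in millimeters and the remaining digits the length. A small parser makes the rule explicit:

```python
def parse_m2_code(code: str) -> tuple[int, int]:
    """Decode an M.2 size code into (width_mm, length_mm):
    the first two digits are the width, the rest the length."""
    return int(code[:2]), int(code[2:])

for code in ("22110", "2280", "2260", "2242"):
    width, length = parse_m2_code(code)
    print(f"M.2 {code}: {width} x {length} mm")
```

Note that "22110" parses as 22 x 110 mm, not 221 x 10 mm: the width field is always exactly two digits.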
To learn more about the subject, check out the article that explains what M.2 is.
NVMe
NVMe (Non-Volatile Memory Express) is not a connection standard that competes with SATA Express or M.2, but a kind of protocol that optimizes data access times, standardizing communication between the controller and the storage components themselves.
In SATA technology, there is a specification called AHCI that is responsible for this task. The problem is that AHCI is better suited to HDs, that is, to a working mode built around accessing data at different positions on the drive's disks.
Since there are no disks in SSDs, NVMe was developed to explore potential that cannot be reached with AHCI. What NVMe does is multiply many times the unit's capacity to receive simultaneous read and write commands. Thus, there is less latency (the time data takes to be accessed and read), and data retrieval ends up being faster.
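That "multiply many times" is dramatic in the nominal spec limits: AHCI exposes a single command queue 32 entries deep, whereas NVMe allows up to 65,535 I/O queues of up to 65,536 commands each. These are protocol ceilings, not depths real drives sustain, but the gap illustrates the point:

```python
# Nominal command-queue limits of the two protocols
# (spec ceilings, not what real hardware sustains in practice).
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536

print("AHCI outstanding commands:", AHCI_QUEUES * AHCI_DEPTH)
print("NVMe outstanding commands:", NVME_QUEUES * NVME_DEPTH)
```

Deep, parallel queues are what let a flash controller keep many chips busy at once, something a single 32-entry queue cannot do.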
With lower latency, workloads are also completed more quickly, allowing the SSD to spend more time idle. Thus, there are energy savings and an increase in the useful life of the unit.
The NVMe specification is not limited to a single connection technology: it is possible to use it with drives based on PCI Express and M.2, for example.
U.2
An important limitation of PCI Express and M.2 is that these standards require the SSD to be plugged directly into the slots. If you want to connect the SSD to the computer by means of a cable, you will have to use another standard, such as SATA Express; only then you lose the benefits of the NVMe specification. That is why the industry created yet another connection standard: U.2 (for some time called SFF-8639).
U.2 allows connection via cable and, at the same time, supports PCI Express 3.0, in addition to NVMe, of course. The problem is then solved, except, perhaps, for a small detail: U.2 cables can be quite expensive.
The nanometers (nm) of an SSD
We have already talked about 3D architecture, technologies such as MLC and TLC, and other aspects that contribute to increasing the data storage capacity of SSDs. But one is missing: miniaturization of the chips.
The purpose here, essentially, is to make the transistors that make up the chip as small as possible; thus, the component can store more data without having its physical size increased. This aspect is measured in nanometers (nm), a unit equivalent to one-millionth of a millimeter, that is, a millimeter divided by one million.
We find on the market units with chips of 34 nm, 25 nm and 20 nm, for example. Currently, it is also possible to find SSDs with more sophisticated chips of 15 nm and 10 nm. At the time of the last update of this text, there was already talk of options with 7 nm.
Miniaturization should not go much further, however. It is not an easy process because, in addition to the costs involved, it can lead to problems such as instability and increased read error rates. It is for this reason that the industry studies alternative technologies, such as the already-mentioned 3D XPoint memory.
TRIM
When it comes to SSDs, especially the newer units, you may want to pay attention to a feature that is gaining more and more prominence: TRIM. It is extremely important. Let's understand why.
In general, when you delete a file, it is not completely eliminated by the operating system. In fact, the area occupied by it is marked as "free for use" and the data stays there, hidden from the system, until a new write occurs. This is why many deleted-file recovery programs can succeed at this task.
In HDs, the space available for data can be written and rewritten without major difficulties. This is possible because, on hard disks, the data is grouped in 512-byte sectors (learn more in this article about HDs), where each sector can be written and rewritten independently.
On SSDs, this process is a little different. In Flash memory, data is grouped into blocks, usually of 512 KB, with each block composed of several divisions called pages. Each page usually holds 4 KB.
The problem is that a block cannot simply be written and later rewritten with the same ease as on HDs. First it is necessary to erase the data in the written area, returning it to its original state, and only then insert the new data.
The issue is worsened by the fact that, usually, this process needs to cover the entire block, not only certain pages of it. You will already have realized that this situation can cause a significant loss of performance.
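With the block and page sizes quoted above, the worst case is easy to quantify: updating a single 4 KB page can force the controller to read, erase and rewrite the entire 512 KB block.

```python
BLOCK_KB = 512   # erase unit (the figure quoted above)
PAGE_KB = 4      # write unit

pages_per_block = BLOCK_KB // PAGE_KB
print("pages per block:", pages_per_block)   # 128

# Worst case: one dirty page triggers a read-erase-rewrite of the
# whole block, i.e. 128x more data written than the host requested.
worst_case_write_amplification = BLOCK_KB / PAGE_KB
print("worst-case write amplification:", worst_case_write_amplification)
```

This amplification factor is exactly the performance penalty TRIM is designed to soften.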
One way of dealing with this is to make the operating system always use a free area of the SSD. But this is a palliative solution: sooner or later, the unused blocks will all be filled. TRIM exists precisely to prevent the user from panicking upon realizing that the SSD is "overwriting" data and, consequently, getting slower.
With TRIM, the operating system is instructed to "zero out" the pages of deleted files, instead of simply marking them as "available for use", as happens on HDs. Thus, when the blocks that went through this process have to receive new data, they will already be prepared, as if nothing had ever been recorded there.
That is why TRIM is so important: it is able to avoid serious performance issues. I must say that, to work, this feature must be supported both by the operating system and by the SSD. That is the case with Windows 10 and more recent versions of Linux, for example.
IOPS and other characteristics for choosing an SSD
When choosing an SSD, it is always important to check the device's specifications. One of them relates to performance: how many megabytes can be read per second? How many can be written in the same time?
These parameters can vary greatly from one product to another. It is common, for example, to find SSDs formed by a set of ten Flash memory chips. The device's controller (discussed below) can split a given file into 10 parts so that they are recorded simultaneously on the unit, making the recording process as a whole faster, for example.
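A toy model of that splitting helps picture the idea. Real controllers stripe at page granularity with far more bookkeeping; this sketch just round-robins bytes across hypothetical chips:

```python
def stripe(data: bytes, n_chips: int) -> list[bytes]:
    """Round-robin a buffer across n flash chips so the pieces
    can be written in parallel (toy model, byte granularity)."""
    return [data[i::n_chips] for i in range(n_chips)]

chunks = stripe(b"0123456789" * 4, 10)
print(len(chunks), "chips, each holding", len(chunks[0]), "bytes")
# With 10 chips writing at once, the wall-clock time for a large
# transfer approaches 1/10 of a single-chip write.
```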
However, having more or fewer resources can improve or worsen this process; hence the importance of checking these details. Fortunately, it is virtually the rule among manufacturers to state how much data can be written and read per second.
Another parameter worth observing is IOPS (Input/Output Operations Per Second), which indicates the estimated number of input and output operations per second, both for reading and for writing data. The larger these numbers, the better.
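IOPS and MB/s measure different things: IOPS figures are usually quoted for small random accesses (commonly 4 KB), while MB/s ratings reflect large sequential transfers. Converting one into the other, assuming that common 4 KB access size, shows why both numbers matter:

```python
def iops_to_mb_per_s(iops: int, block_kb: int = 4) -> float:
    """Approximate throughput implied by an IOPS rating at a
    given access size (IOPS figures usually assume 4 KB)."""
    return iops * block_kb / 1024

# A drive rated at 100,000 random 4 KB IOPS moves under 400 MB/s,
# far below typical sequential ratings: small accesses are the
# hard case, and a high IOPS figure is what makes a system feel fast.
print(f"~{iops_to_mb_per_s(100_000):.0f} MB/s")
```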
As for storage capacity, SSDs tend to come off worse in comparison with HDs. Therefore, it is not rare to find laptops that offer a 240 GB SSD complemented by a 1 TB HD, for example. The ideal here is to carefully assess how much space you need. Units with plenty of storage capacity are very expensive and may, therefore, not offer a good cost-benefit ratio. The minimum, by current standards, is a 120 GB SSD.
Note also the technologies supported by your computer. Do not buy an M.2 SSD, for example, before making sure that the motherboard of your desktop or laptop is compatible with this format.
It may be that the machine supports more than one standard, for example, SATA and U.2. U.2 drives are faster but cost more. It is then necessary to weigh whether the performance gain is worth the higher investment or whether a SATA drive is sufficient for your needs.
Finally, it is worth checking the average durability expected by the manufacturer and whether the unit has further resources, such as a buffer, the aforementioned TRIM, S.M.A.R.T. monitoring technology (widely used with HDs) or even RoHS (Restriction of Hazardous Substances) compliance, which indicates that the manufacturer did not use certain substances harmful to health and the environment in making the product.
The SSD controller
Just like HDs, SSDs also have controllers. It is up to the controller, a kind of processor, to facilitate the exchange of data between the computer and the Flash memory, manage read and write operations, and detect and correct errors, among other tasks.
As SSD controllers need to handle large volumes of data, they come with features that allow or facilitate this work, such as dedicated memories that work as cache and data compression algorithms that make operations faster or extend the drive's life span.
The absence or implementation of certain features in the controllers varies from manufacturer to manufacturer and from one SSD model to another. Companies do not usually disclose many details about how these chips work in order to protect their technologies, which is why it is not possible to explore the subject in depth.
History: the first SSD on the market
SSD devices started to appear massively on the market from 2006, but one can say that the technology itself came much earlier, albeit not with the same name.
In 1976, a company named Dataram put on the market a data storage device named Bulk Core (link in PDF) that was composed of eight modules of a type of non-volatile memory with the incredible (for the time) capacity of 256 KB each.
The Bulk Core "emulated" the disk drives used at the time, with the differential of being faster than them. The equipment cost about US$ 10,000 and was used in data processing centers.
In view of its characteristics (use of non-volatile memory and higher data transfer speed), the Bulk Core can be considered the first SSD on the market.
Many people ask whether the SSD signals the end of the hard disk era. It is difficult to say. In relation to storage capacity, HDs still feature an excellent cost-benefit ratio, not to mention that these devices count on quite satisfactory average durability.
As SSDs have a higher cost per gigabyte and HDs continue to be improved to gain more capacity and durability, the two categories should coexist "peacefully" for a long time.