The Characteristics and Functioning of Hard Disks (HDs)
The hard disk, or HD (Hard Disk), is the most widely used permanent data storage device in computers. It holds everything from your personal files to information used exclusively by the operating system. In this article, you will learn a little about how HDs work and about the role of their main features and technologies, such as IDE, ATAPI, DMA, SATA, cache (buffer), NCQ, and others.
The emergence of HDs
The hard disk is not a new type of storage device, but rather one that has evolved considerably over time. One of the earliest known HDs is the IBM 305 RAMAC. Available in 1956, it was able to store up to 5 MB of data (a breakthrough for the time) and had enormous dimensions: 14 x 8 inches. Its price was also far from inviting: the 305 RAMAC cost about 30 thousand dollars.
Over the years, HDs have steadily increased their storage capacity while becoming smaller, cheaper, and more reliable. Just to illustrate how gigantic the first models were, the photo below shows a hard disk used by the São Paulo Metro in its first years of operation. The device was on display at the company's Operational Control Center for a few years:
The components and operation of HDs
To understand the basic operation of hard disks, you need to know their main components. The disks themselves are, in fact, stored inside a kind of sealed "metal box". These cases are sealed to prevent the entry of foreign material, because even a particle of dust can damage the disks, which are quite sensitive. This means that if you open an HD in an unprepared environment, without appropriate equipment and techniques, you run a high risk of losing it.
The physical size of HDs
Physically speaking, HDs can have varying dimensions, ranging from the size of a matchbox to bulky units like the hard disk used by the São Paulo Metro, shown earlier. The industry, of course, created standard sizes to facilitate the popularization of HDs and their use in computers.
The most common sizes are 3.5 inches (often represented by the character ") and 2.5 inches. These measurements refer to the diameter of the disks. The larger, 3.5-inch drives are commonly employed in desktops, workstations, and servers, while 2.5-inch HDs are common in laptops and other computers with reduced dimensions.
There are also disks that can be very small, with dimensions of 1.8 or even 1 inch, for example. These are used in portable devices such as audio players.
An HD from the inside
To get an idea of how HDs work, it is worth knowing how these devices are organized internally. The images below help with this task.
The figure above shows an HD seen from below. Note that this side holds a board with chips. It is the logic board, which brings together the components responsible for various tasks. One of them is a chip known as the controller, which manages a series of actions, such as the rotation of the disks, the movement of the read/write heads (shown below), the sending and receiving of data between the disks and the computer, and even safety routines.
Another device common to the logic board is a small memory chip known as the buffer (or cache), covered in more detail further below. Its task is to store small amounts of data during communication with the computer. Since this chip can handle data faster than the hard disks themselves, its use speeds up the transfer of information. On the market today, it is common to find hard drives with buffers ranging from 2 MB to 64 MB.
Now we come to the inner part of the HD proper (that is, the interior of the "box"). The photo below shows an open HD. Note the labels describing the most important components, which are detailed below the image:
Platters and spindle: this is the component that draws the most attention. The platters are the disks where the data is stored. They are usually made of aluminum (or of a type of crystal) covered by a magnetic material and a protective layer. The more refined the magnetic material (that is, the denser it is), the higher the storage capacity of the disk. Note that high-capacity HDs have more than one platter, stacked one above the other. They are mounted on a spindle responsible for making them rotate. For the PC market, it is common to find hard drives that spin at 7,200 RPM (rotations per minute), but there are also models that reach 10,000 rotations. Until not long ago, the market standard was hard disks at 5,400 RPM. Naturally, the faster the disks spin, the better;
Head and arm: HDs come with a device called the read/write head. This is an item of greatly reduced size containing a coil that uses magnetic pulses to manipulate the molecules on the surface of the disk, and thus write data. There is one head for each side of each platter. The head sits at the tip of a device called the arm, whose function is to position the heads above the surface of the platters.
At a glance, one has the impression that the read/write head touches the disks, but this does not occur. In fact, the distance between the two is extremely small. The "communication" happens through the aforementioned magnetic pulses.
In the most current HDs, the head has two components, one responsible for writing and the other dedicated to reading. In older devices, both functions were performed by a single component;
Actuator: also called the voice coil, the actuator is responsible for moving the arm over the surface of the platters, allowing the heads to do their work. For this movement to occur, the actuator contains a coil that is "induced" by magnets.
Note that the interplay between these components needs to work flawlessly. The simple fact of the read/write head touching the surface of a platter is enough to damage both. This can easily occur in the case of a drop, for example.
Recording and reading data
The recording surface of the platters is composed of materials that are sensitive to magnetism (usually iron oxide). The read/write head manipulates the molecules of this material by means of its poles. To do so, the polarity of the head changes at a very high frequency: when it is positive, it attracts the negative pole of the molecules, and vice versa. It is according to this polarity that the bits (0 and 1) are written. In the process of reading data, the head simply "reads" the magnetic field generated by the molecules and generates a corresponding electric current, whose variation is analyzed by the HD's controller to determine the bits.
To organize the data on the HD, a scheme known as disk geometry is used. In it, the disk is "divided" into cylinders, tracks, and sectors:
Tracks are circles that start at the center of the disk and extend to its edge, as if nested one inside the other. These tracks are numbered from the edge to the center, that is, the track nearest the edge of the disk is called track 0, the next one is track 1, and so on, until reaching the track closest to the center. Each track is divided into regular stretches called sectors. Each sector has a specific storage capacity (usually 512 bytes).
And where do cylinders come in? Here's an interesting point: you already know that an HD can hold multiple platters, with one read/write head for each side of each disk. Imagine that it is necessary to read track 42 on the upper side of disk 1. The arm will move the head to this track, but it will cause the other heads to move as well. This happens because the arm typically moves as a single unit, that is, it is not able to move one head to one track and a second head to another track.
This means that when the head is directed to track 42 on the upper side of disk 1, all the other heads are positioned over the same track, each on its respective disk. This arrangement is what we call a cylinder. In other words, a cylinder is the set of positions of the heads over the same track on their respective disks.
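The cylinder/head/sector geometry described above maps to a simple formula. The sketch below, with purely hypothetical geometry numbers, shows the classic CHS-to-linear-address conversion and the capacity calculation that follows from it; it is an illustration, not the exact scheme of any particular drive:

```python
SECTOR_BYTES = 512  # typical sector size, as mentioned above

def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Classic CHS -> linear block address formula; sectors are numbered from 1."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

def capacity_bytes(cylinders, heads, sectors_per_track):
    """Total capacity of a disk described by its geometry."""
    return cylinders * heads * sectors_per_track * SECTOR_BYTES

# Hypothetical geometry, for illustration only:
print(chs_to_lba(0, 0, 1, heads_per_cylinder=4, sectors_per_track=63))  # 0 (the very first sector)
print(capacity_bytes(1024, 16, 63))  # 528482304 bytes, i.e. exactly 504 MiB
```

The second call reproduces the well-known old BIOS geometry limit (1024 cylinders, 16 heads, 63 sectors), which is why early PCs could not address disks larger than about 528 MB.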
Note that it is necessary to prepare disks to receive data. This is done through a process known as formatting. There are two types: physical formatting and logical formatting. The first type is precisely the "division" of the disks into tracks and sectors, a procedure done at the factory. Logical formatting, in turn, consists of applying a file system appropriate to each operating system. For example, Windows can work with the FAT and NTFS file systems, while Linux can work with several file systems, including ext3 and ReiserFS.
HDs are connected to the computer by means of interfaces capable of transmitting data between the two safely and efficiently. There are several technologies for this, the most common being the IDE, SCSI and, currently, SATA standards.
IDE Interface (PATA)
The IDE interface (Intelligent Drive Electronics or Integrated Drive Electronics) is also known as ATA (Advanced Technology Attachment) or, still, PATA (Parallel Advanced Technology Attachment). This is a standard that arrived on the market back in the days of the old 386 line of processors.
With the popularization of this standard, motherboards came to provide two IDE connectors (IDE 0, or primary, and IDE 1, or secondary), each capable of connecting up to two devices. The connection to the HD (and to other devices compatible with the interface) is made through a 40-way flat cable. Later, an 80-way flat cable came to the market, whose extra wires help avoid data loss caused by noise (interference).
Since it is possible to connect two devices to the same cable, a small part with a metal interior called a jumper is positioned at the rear of the HD (or of other equipment that uses this interface). The arrangement of this jumper varies from manufacturer to manufacturer, but there is always one position that, when used, determines that the device is the master (primary) and another that determines that it is the slave (secondary). This is a way of letting the computer know which data corresponds to which device.
Yes, this means that if two HDs are both set as master, or both as slave, the computer may have trouble recognizing them. It is often possible to have this distinction made automatically. In this case, the jumpers of both devices are conventionally set to a third position: cable select. With this configuration, the primary unit is usually the device connected at the end of the cable, and the secondary the unit connected to the plug in the middle of the cable.
The ATAPI and EIDE technologies
The IDE interface can also connect other devices, such as CD/DVD drives. For this to work, a standard known as ATAPI (Advanced Technology Attachment Packet Interface) is used, which works as a kind of extension that makes the IDE interface compatible with the aforementioned devices. It is worth mentioning that the computer itself, through its BIOS and/or the motherboard's chipset, recognizes what type of device is connected to its IDE ports and uses the corresponding technology (in general, ATAPI for CD/DVD drives and ATA for hard disks).
As already mentioned, each IDE interface on a motherboard can work with up to two devices simultaneously, for a total of four. This is possible thanks to EIDE (Enhanced IDE), an extension of IDE created to increase the data transmission speed of hard disks and, of course, to allow the connection of two devices on each flat cable.
DMA and UDMA
In the past, only the processor had direct access to the data in RAM. As a result, if any other component of the computer needed something from memory, it had to make that access through the processor. It was no different with HDs and, as a consequence, there was a certain "waste" of processing resources. Fortunately, a solution did not take long to appear: a scheme called DMA (Direct Memory Access). As the name itself says, this technology made direct memory access possible for the HD (and other devices), without the need for the processor's direct "help".
When DMA is not in use, a data transfer scheme known as PIO (Programmed I/O) is normally used, in which, roughly speaking, the processor itself carries out the data transfer between the HD and RAM.
It is important to note that there is also a standard known as Ultra-DMA (or UDMA). This specification makes it possible to transfer data at rates starting at 33.3 MB/s (megabytes per second). The UDMA standard does not work if it is supported only by the HD: the motherboard (more precisely, its chipset) must also support it, otherwise the HD will work at a lower transfer rate. Here is why: there are four basic types of Ultra-DMA: UDMA 33, UDMA 66, UDMA 100, and UDMA 133. The numbers in these names represent the amount of megabytes transferable per second. Thus, UDMA 33 transmits data to the computer at up to 33 MB/s, UDMA 66 does the same at up to 66 MB/s, and so on. Now, to give an example, imagine that you have installed a UDMA 133 HD in your computer, but the motherboard only supports UDMA 100. This does not mean that your HD will become inoperative. What happens is that your computer will work with the HD at transfer rates of up to 100 MB/s rather than 133 MB/s.
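The negotiation described above boils down to the link running at the rate of the slower side. A minimal sketch, with the rates rounded to whole megabytes per second:

```python
# The effective UDMA rate is limited by whichever side supports the slower mode.
UDMA_RATES_MB_S = {"UDMA 33": 33, "UDMA 66": 66, "UDMA 100": 100, "UDMA 133": 133}

def effective_rate(drive_mode, board_mode):
    """Both sides must support a mode; the link runs at the lower of the two rates."""
    return min(UDMA_RATES_MB_S[drive_mode], UDMA_RATES_MB_S[board_mode])

# The example from the text: a UDMA 133 drive on a UDMA 100 motherboard.
print(effective_rate("UDMA 133", "UDMA 100"))  # 100
```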
SATA (Serial ATA)
The SATA (Serial ATA) specification has become the market standard, since it offers several advantages over PATA, such as higher data transmission rates, no need for jumpers, and thinner data and power cables (which improves air circulation inside the computer), among others.
The SATA interface does not offer a scheme allowing two devices per cable, but this is hardly a problem: since its connector is small, installation is easier, and it is common to find motherboards with four, six, or even eight connectors of this standard.
With regard to data transfer, the SATA interface can reach the following theoretical maximum rates, according to its type:
- SATA I: up to 150 MB/s;
- SATA II: up to 300 MB/s;
- SATA III: up to 600 MB/s.
SCSI (Small Computer System Interface)
The SCSI (Small Computer System Interface) interface, usually pronounced "scuzzy", is an older specification created to allow fast data transfers of up to 320 MB/s (megabytes per second). As it is a more complex and, consequently, more expensive technology, its use has never been common in home environments, except among users who could invest in more powerful personal computers. Its application has always been more frequent in servers.
It is still possible to find devices that use the SCSI interface today; however, it has lost ground to SATA technology. Learn more about this specification in the article on SCSI technology.
When you look up the specifications of a hard disk, you will certainly see an item named cache or buffer, already mentioned in this text. This is another feature designed to improve the performance of the device.
HDs, by themselves, are not very fast. It does not help much to have fast processors if access to the data on the HD hurts performance. One way manufacturers found to mitigate this problem was to implement a small amount of faster memory in the device. This is the cache.
This memory temporarily holds sequences of data related to the information being handled at the moment. With these sequences in the cache, the number of read operations is reduced, since the requested data is often already there.
The buffer can also be used for write operations: if, for some reason, it is not possible to write a piece of data immediately after the request, the drive controller can "drop" this information into the cache and write it shortly afterwards.
Currently, it is common to find hard disks with up to 64 MB of cache. Contrary to what many people think, the cache does not need to have a large capacity to optimize the performance of the unit.
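To illustrate why even a small buffer helps, the sketch below simulates a tiny read cache: repeated accesses to nearby sectors mostly hit data that is already cached. This is a rough model for intuition, not how a real drive's firmware works:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache, a rough stand-in for an HD's buffer."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, sector):
        if sector in self.store:
            self.hits += 1
            self.store.move_to_end(sector)      # mark as recently used
        else:
            self.misses += 1                    # would require going to the platters
            self.store[sector] = b"data"
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the least recently used sector

cache = ReadCache(capacity=8)
for sector in [5, 6, 7, 5, 6, 7, 5, 6]:         # repeated reads of nearby sectors
    cache.read(sector)
print(cache.hits, cache.misses)  # 5 3: only the first access to each sector misses
```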
You have certainly noticed that, over time, the storage capacity of hard drives has increased considerably without resulting in physically larger devices. There are a few tricks behind this, such as stacking platters inside the drive. But the real differentiator lies in the technologies related to the recording process and the density of the disks.
When we talk about density, we are essentially talking about the amount of data that can be stored in the same space. The idea is to allow ever more data to be recorded without the need to increase this space. For this, one of the most widely used techniques is perpendicular recording.
First, it is necessary to understand what longitudinal recording is. This is an old technique, but it only began to lose ground with the popularization of today's SATA hard disks.
As you already know, recording data on an HD is possible thanks to electromagnetism. In a few words, an electric current is generated to create a small magnetic field at the read/write head. This field influences the particles on the surface of the disk, causing them to be arranged according to its polarity (negative or positive). A set of particles magnetized one way or the other is what determines whether the written bit is 0 or 1.
When the head passes over an area that has already been written in order to read the data, it uses electrical induction or resistance to capture the magnetic field existing there, allowing the data to be retrieved.
Until a not-so-distant past, the recording process was commonly done with horizontal alignment, that is, with the particles on the surface of the disk lying side by side.
To allow more data to be recorded in the same space and thus increase the storage capacity of the drive, disks came to be manufactured with ever smaller particles. The problem is that there is a physical limit to this. The industry reached a point where it was still possible to obtain even smaller particles, but they were so small that the proximity between them could cause a demagnetization effect, resulting in data loss.
With longitudinal recording reaching its limit, the industry had to look for an alternative. That is where perpendicular recording comes in, widely used nowadays.
In this technique, the particles are aligned perpendicularly, that is, vertically, as if the particles were "standing" instead of "lying down", roughly speaking. An extra layer just below them helps make the process even more effective.
Perpendicular recording not only increases storage capacity significantly, but also protects the disk from the aforementioned risk of demagnetization. In addition, the vertical alignment allows a thicker magnetic layer, generating stronger fields and thus facilitating the work of the read/write head.
Unfortunately, the perpendicular recording technique will also reach a limit. The industry, of course, is already striving to find an alternative. One of them was presented by Seagate in March 2012: the technology known as HAMR (Heat-Assisted Magnetic Recording).
In this technique, a small laser in the read/write head heats the area of the surface to be recorded and changes the properties of the spot in such a way that it is possible to store more data there. The first units of this type are expected to reach the market in 2013 or 2014.
NCQ (Native Command Queuing)
It is common to find in current hard disks a feature named NCQ (Native Command Queuing) that can optimize the performance of the device. How? Through a reordering scheme capable of decreasing the drive's workload.
Roughly speaking, NCQ works as follows: instead of the read/write head visiting points of the disk in the order they were requested, the feature makes this happen according to the proximity of the points. That is, if point 3 is closer to point 1 than point 2 is, the access sequence will be: 1, 3, and 2.
Look at the image below. On the left, you see the illustration of an HD without NCQ; on the right, an HD with NCQ. Compare them and notice that if the request order is respected, the HD has more work to do. By considering proximity instead, the accesses are performed more quickly:
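The reordering idea can be sketched as a greedy nearest-first scheduler. Real NCQ also takes the rotational position of the platters into account; this simplified model, with hypothetical track numbers, considers only the distance the arm travels:

```python
def ncq_order(start, requests):
    """Greedy sketch of NCQ-style reordering: always service the nearest pending request."""
    pending = list(requests)
    position, order = start, []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - position))
        pending.remove(nearest)
        order.append(nearest)
        position = nearest
    return order

def head_travel(start, order):
    """Total track-to-track distance the arm moves to service the requests in order."""
    travel, position = 0, start
    for track in order:
        travel += abs(track - position)
        position = track
    return travel

requests = [10, 90, 12, 88]                    # hypothetical track numbers
print(head_travel(0, requests))                # 244 tracks in request order
print(head_travel(0, ncq_order(0, requests)))  # 90 tracks after reordering
```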
NCQ not only optimizes data access, but also helps increase the lifespan of the HD by causing less wear on the components.
The actual storage capacity
Hard disk manufacturers constantly increase the storage capacity of their products. However, it is not uncommon for a person to buy an HD and find that the device has a few gigabytes less than advertised. Did the seller deceive you? Was the formatting done the wrong way? Does the HD have some problem? In fact, no.
What happens is that HD manufacturers consider 1 gigabyte to be equal to 1,000 megabytes, in the same way that they consider 1 megabyte to be equal to 1,000 kilobytes, and so on. Operating systems, in turn, consider 1 gigabyte to be equal to 1,024 megabytes, and so forth. Because of this difference, an 80 GB HD, for example, will in fact show about 74.5 GB of capacity in the operating system. A 200 GB HD will show, in turn, 186.26 GB.
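The calculation behind these figures is straightforward: divide the advertised decimal capacity by the size of a binary gigabyte:

```python
def decimal_to_binary_gb(advertised_gb):
    """Manufacturers count 1 GB = 10**9 bytes; operating systems count 1 GB = 2**30 bytes."""
    return advertised_gb * 10**9 / 2**30

print(round(decimal_to_binary_gb(80), 2))   # 74.51
print(round(decimal_to_binary_gb(200), 2))  # 186.26
```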
So, if you notice this difference, do not worry: your hard disk does not have a problem. It is nothing more than a disagreement among the companies involved about which measure to use. Learn more about this subject in the article about bits and bytes.
Aspects of performance
When choosing an HD, you will certainly pay attention to its storage capacity, its interface and, probably, the size of the cache, after all, this is the information that accompanies the product description. But there are other parameters linked to the performance of the device that should also be observed. The best known are Seek Time, Latency Time, and Access Time.
Seek Time
Seek Time usually indicates the time the read/write head takes to move to a track of the disk, or from one track to another. The smaller this time, the better the performance, of course. This parameter may appear in a few variations, and its disclosure varies from manufacturer to manufacturer:
– Full Stroke: refers to the time to move from the first track to the last track of the disk;
– Track to Track: refers to the time to move from one track to the next;
– Average: refers to the average time to move the head to any part of the disk;
– Head Switch Time: refers to the time required to switch between the read/write heads.
These measures are given in milliseconds (ms) and their names may vary slightly.
Latency Time
Latency Time is the measure that indicates the time required for the sector of the disk that must be read or written to rotate into position under the read/write head. This parameter is influenced by the rotation speed of the disks (currently 5,400, 7,200, and 10,000 RPM) and is also reported in milliseconds.
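Since, on average, the desired sector is half a revolution away from the head, the average latency follows directly from the rotation speed:

```python
def avg_rotational_latency_ms(rpm):
    """Average latency: on average the target sector is half a revolution away."""
    ms_per_revolution = 60_000 / rpm  # 60,000 ms in a minute
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.0 ms
```

This is why, all else being equal, a 7,200 RPM drive responds noticeably faster than a 5,400 RPM one.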
Transfer Rate
This measure, as you must have guessed, refers to the data transfer rate of the HD. Generally, there are three variations:
– Internal Transfer Rate: indicates the rate at which the read/write head manages to write data to (or read it from) the disk;
– External Transfer Rate: indicates the maximum rate at which the HD manages to transfer data to the outside and vice versa, usually limited by the speed of the interface;
– Sustained Transfer Rate: the most important of the three, the sustained rate establishes a kind of average between the internal and external rates, indicating the maximum rate maintained over a given time interval.
Access Time
Typically, this measure corresponds to a calculation that combines the Latency Time and Seek Time parameters. In practical terms, Access Time indicates the time required to obtain a piece of information from the HD. Again, the smaller this time, the better.
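A rough estimate of access time can therefore be obtained by adding the average seek time to the average rotational latency. The 9 ms seek figure below is hypothetical, for illustration only:

```python
def access_time_ms(avg_seek_ms, rpm):
    """Rough access time: average seek plus average rotational latency (half a revolution)."""
    avg_latency_ms = (60_000 / rpm) / 2
    return avg_seek_ms + avg_latency_ms

# Hypothetical 7,200 RPM drive with a 9 ms average seek time:
print(round(access_time_ms(9.0, 7200), 2))  # 13.17 ms
```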
MTBF (Mean Time Between Failures)
Better known by the acronym MTBF, this measure gives a notion of the number of uninterrupted hours the HD can work without failures. It turns out that this measure is not necessarily precise.
In other words, if an HD has an MTBF of 400 thousand hours, for example, that does not mean the unit will work for exactly that amount of time. The operating time can be longer or shorter; it all depends on a number of factors.
This is because the MTBF is determined by the manufacturer based on tests and estimates made in the laboratory. Thus, the ideal is to use this measure as an indication of reliability: if one HD has an MTBF of 400 thousand hours, it means that the device is, at least in theory, more reliable than a drive with an MTBF of 300 thousand hours, that is, less likely to fail than the latter.
S.M.A.R.T.
HDs are responsible for the permanent storage of data, which is retained even when there is no power supply, thanks to their magnetic properties. But this does not mean that hard disks are failure-proof, so some features have been created to avoid the "worst". The main one is S.M.A.R.T.
An acronym for Self-Monitoring, Analysis, and Reporting Technology, this is a technology common in current units that monitors the disks. The idea here is to identify when failures are about to happen and issue alerts. This way, the user can take some measure, such as replacing the drive or making a backup (security copy) of the data.
S.M.A.R.T. permanently monitors a series of parameters and, based on this, is able to identify abnormalities that precede failures. The alert may be a warning displayed as soon as the computer is turned on, information displayed in the BIOS setup, or a report from a monitoring program capable of accessing the S.M.A.R.T. data (such as HD Tune, for Windows).
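As a rough idea of how such a monitoring program interprets the data, the sketch below compares normalized attribute values against their failure thresholds. The attribute names and figures are illustrative of common conventions, not readings from any real drive:

```python
def failing_attributes(attributes):
    """Return the attributes whose normalized value has dropped to or below its threshold."""
    return [name for name, (value, threshold) in attributes.items() if value <= threshold]

# Hypothetical sample: each attribute maps to (normalized value, failure threshold).
sample = {
    "Raw_Read_Error_Rate":   (100, 51),  # healthy: value well above threshold
    "Reallocated_Sector_Ct": (10, 36),   # failing: value at or below threshold
    "Spin_Up_Time":          (97, 21),   # healthy
}
print(failing_attributes(sample))  # ['Reallocated_Sector_Ct']
```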
Note that, quite often, the HD itself gives signs of failure: growing slowness, read errors, and noises that sound like clicking or knocking are signs that the drive is about to present some defect.
External hard drives
It is possible to find several types of hard drives on the market, from the well-known hard disks for installation in desktops, through more sophisticated devices aimed at the professional market (that is, at servers), to the increasingly popular external hard drives.
What is an external HD? Simply an HD that you can take virtually anywhere and connect to the computer only when needed. For this, one can use, for example, USB and FireWire ports, and even external SATA; it all depends on the HD model.
It is also common to find cases on the market that allow the user to assemble their own external HD: this is a piece of equipment that accepts a "conventional" HD, making it work as an external one. The user only needs to purchase an HD compatible with the case, that is, one that uses the correct interface and has the corresponding dimensions.
The external HD is useful when you have large amounts of data to transport or to back up your files. Otherwise, it is preferable to use USB sticks, rewritable DVDs, or another storage device with a better cost-benefit ratio. This is because external hard drives are somewhat more expensive and tend to be heavy (except for reduced-size models). In addition, they must be transported with greater care to avoid damage.
The HD has gone through several changes since its emergence. Just to give an example of this evolution, older models had a problem that, had it not been solved, might have left hard disks lagging behind the progress of the other components of a computer: the mechanism that moves the read/write heads was slow. That is because, when the heads needed to go from one cylinder to another, they moved one cylinder at a time until reaching the destination. Today, the heads go directly to the requested cylinder.
Moreover, one has only to observe how HDs have become faster, more reliable, and higher in capacity over time. This makes it clear that one day the hard disk may even lose its "reign" to another data storage technology (the SSD, for example), but this is still far from happening.
To conclude, a small curiosity: when IBM released the 3340 HD, there was a version with a capacity of 60 MB, of which 30 MB were fixed and the other 30 MB removable. This feature earned this HD the nickname "30-30". However, there was a rifle called the Winchester 30-30 and, hence, the comparison between the two was inevitable. As a result, the HD also came to be called the Winchester, a name that is no longer much used, but that some people uttered in the past without knowing exactly where it came from.