Ask Slashdot: How Do You Test Storage Media? 297
First time accepted submitter g7a writes "I've been given the task of testing new hardware for use in our servers. For memory, I can run it through something like memtest for a few days to ascertain whether there are any issues with the new memory. However, I've hit a bit of a brick wall when it comes to testing hard disks; there seems to be no definitive method for doing so. Aside from the obvious S.M.A.R.T. tests (i.e. the long offline self-test), are there any tools out there for testing hard disks to a level similar to memtest? Or any tried and tested methods for testing storage media?"
SpinRite (Score:3, Informative)
Re: (Score:2, Informative)
Does their product actually do anything these days? Seems like the last time I used it was when you had the choice of an ARLL or RLL disk controller, haha...
Anyway, I always stress test my drives with IOMeter. Leave them burning on lots of random IOPS to stress the head positioners, and don't forget to power cycle them a good number of times. Spin-up is when most drive motors fail and when the largest in-rush current occurs.
Re: (Score:3, Informative)
Re:SpinRite (Score:5, Informative)
Re:SpinRite (Score:5, Informative)
Except when it happens a lot... then your drive is F***.............I.....N...............G slow for no apparent reason.
We "test" our drives by filling them with whatever data we have lying around. We do this 5 to 10 times (depending on how soon we need the drives). What eventually happens with a bad drive is that the SMART counter ticks over to some magical number and starts reporting health issues (a requirement for some RMA processes). We also time each fill cycle. We expect the first two or three runs to take longer (EVERY drive these days will have relocations going on for the first few runs). For later runs we expect to see a more consistent fill time and the relocated sector count stop climbing so alarmingly fast.
There are bad sectors on your brand new drive. You can count on it. You have to make the drive find them and map around them because it won't happen in the factory. Write to every byte several times. Don't wait for it to happen naturally... you'll just hit performance problems and put yourself closer to warranty cutoff time. They're counting on you not finding a problem soon enough. You must burn them in or suffer later.
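The fill-and-time cycle above can be sketched as a short script. This is a rough illustration, not the poster's actual process: `DEV` is a placeholder you must point at the drive under test, the fill DESTROYS everything on it, and `smartctl` is assumed to be available from the smartmontools package.

```shell
#!/bin/sh
# Burn-in sketch: fill the drive several times, timing each pass and
# watching the raw Reallocated_Sector_Ct value settle.
# WARNING: this overwrites the entire device given in DEV.

# Pull the raw Reallocated_Sector_Ct value out of 'smartctl -A' output.
realloc_count() {
    awk '$2 == "Reallocated_Sector_Ct" { print $10 }'
}

DEV=${DEV:-/dev/sdX}    # placeholder -- set to the drive under test
PASSES=${PASSES:-5}

if [ -b "$DEV" ]; then
    for pass in $(seq 1 "$PASSES"); do
        before=$(smartctl -A "$DEV" | realloc_count)
        start=$(date +%s)
        dd if=/dev/urandom of="$DEV" bs=4M oflag=direct 2>/dev/null
        took=$(( $(date +%s) - start ))
        after=$(smartctl -A "$DEV" | realloc_count)
        echo "pass $pass: ${took}s, reallocated $before -> $after"
    done
else
    echo "set DEV to the drive under test (current: $DEV)"
fi
```

As the parent describes, you want the per-pass time to stabilize and the reallocated count to stop climbing; a count that keeps growing pass after pass is your RMA evidence.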
Re: (Score:3)
In the MFM/RLL days, SCSI disks were tested in the factory and came with a list of known bad C/H/S locations; the controller also kept a list of bad sectors that developed afterwards. I forget whether the controller board had to skip those sectors during LBA translation or the OS had to avoid them.
When IDE drives came out, the 'factory list' suddenly disappeared
Re: (Score:3)
LOL, RLL hard disks had capacities that by today's standards fall somewhere between the CPU cache size and the RAM size of an average smartphone.
(My first PC had a 20MB HDD)
Put simply, a modern HDD is about as similar to an RLL HDD as a Cadillac is to a trike.
Re:SpinRite (Score:5, Informative)
>Still works 100% as HDD tech is still the same
Not entirely true. Back in the days of MFM/RLL drives, SpinRite could perform a "low level" format on each track. This ensured every last magnetic 1 and 0 was re-written to the disk. Back in the day I witnessed many times when SpinRite would completely recover bad sectors, presumably damaged by electrical/controller issues rather than physical surface issues, and a full pattern test would prove the space was safe to use.
Modern IDE drives don't allow low-level formatting, and as far as I know, even re-writing the user content of the drive does not re-write sector header data. Modern IDE drives also have hidden reserved space for "spare" tracks and space where they store their firmware, which likewise never gets tested or re-written.
Additionally, on MFM/RLL drives SpinRite could use low-level formatting to optimize the sector interleave for the specific system. (You would be surprised, moving some disks and their ISA controllers to a faster system would actually require a higher interleave, slowing them down incredibly until SpinRite was run)
Still, SpinRite is the only program that I know of that can do a controlled read/write pattern test and modify the underlying file system when needed.
Re: (Score:2)
Modern IDE drives don't allow low-level formatting, and as far as I know, even re-writing the user content of the drive does not re-write sector header data.
Enhanced Secure Erase should get a bit further than the user content. When initiating it, the drive internally uses the more detailed information it knows about itself to perform a deeper erase (at least in theory). For this kind of operation I recommend Parted Magic [partedmagic.com] (works for nuking SSDs too).
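If you'd rather drive it by hand than use Parted Magic, the ATA Secure Erase sequence can be issued with hdparm. This is a sketch, not a recipe: `DEV` is a placeholder, the erase DESTROYS all data, and the drive must not be in the "frozen" security state (a suspend/resume cycle usually unfreezes it).

```shell
#!/bin/sh
# Rough sketch of the ATA Enhanced Secure Erase sequence via hdparm.
# WARNING: destroys all data on DEV. Verify the drive is "not frozen"
# in the Security section before setting the password.
DEV=${DEV:-/dev/sdX}    # placeholder -- set to the drive to erase

if [ -b "$DEV" ]; then
    hdparm -I "$DEV" | grep -A8 '^Security:'          # check supported / not frozen
    hdparm --user-master u --security-set-pass p "$DEV"
    hdparm --user-master u --security-erase-enhanced p "$DEV"
    hdparm -I "$DEV" | grep -A8 '^Security:'          # security should read "not enabled" again
else
    echo "set DEV to the drive to erase (current: $DEV)"
fi
```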
Re:SpinRite (Score:5, Informative)
An easy test to prove that Spinrite is BS is to run it against a USB key. Not a SATA SSD, but a USB flash drive. Make the USB key bootable with DOS, put Spinrite on it, and boot a PC with no other drives. Run its "tests" against the USB key. All the "low level" tests Spinrite claims to do will appear to work, but are impossible on a USB device.
In fact, they are impossible on a modern mechanical HD as well. As yacc143 pointed out, modern drives are not the same as the MFM/RLL drives of the past. The low-level tests that Spinrite claims to do are simply impossible.
It's also a terrible data recovery program, since it can only write recovered data back to the same disk. That's a data recovery 101 no-no, and Spinrite fails.
Re: (Score:3, Informative)
Spinrite may do an OK job of exercising disks, but 90% of what it claims to do is BS.
This is a very uninformed opinion about Spinrite. Spinrite has a large body of testimonials attesting that "it works". Its main purpose is data recovery and data maintenance on magnetic rotational media.
Your example of a USB drive is just another way of saying "flash", which Spinrite is not targeted to fix.
Indeed, there are no more "low level" commands like in the day of old HDD technology. However, Spinrite uses the standard ATA command set to do everything possible to get your data off
Re: (Score:3)
You seem to have it in for Spinrite, but it's not clear why. If you listen to Steve's podcast (Security Now), you'll know that he is very careful on how he describes the technical aspects of his products (including Spinrite). I'd be very surprised if you or anyone could point to any of GRC's literature on Spinrite that would prove he's "lying" about anything.
http://www.grc.com/spinrite.htm "and ALL OTHER file systems". Tell me, how well does Spinrite support UFS? EXT4? ZFS? Given that the ZFS driver code alone is several times the size of Spinrite, that's not really possible. And filesystem support is important given Spinrite's braindead data recovery. If there is no knowledge of the underlying filesystem, then Spinrite has no way of knowing if it is overwriting data, filesystem structure, or empty space. Even if it was lucky and got empty space, there is n
Re: (Score:3)
You misunderstand ... Spinrite exercises the drive and the drive heals itself. Steve's pretty clear about that.
Re: (Score:2)
SpinRite is excellent for testing. If your drives run as hot as the old Hitachi drives did, it doubles as a space heater or makeshift stove.
Seriously, SpinRite will exercise a drive very well indeed. And it will tell you more than the manufacturer wanted you to know.
Re: (Score:2)
Re: (Score:2)
What about motor failure? My last drive became inaccessible when the motor stopped spinning (6 months continuous spin, followed by power failure, followed by no spin).
Re:SpinRite (Score:5, Funny)
Did you restore power?
Re: (Score:3, Funny)
Sounds like your average conversation with a tech support guy.
Re: (Score:3)
That may have been stiction. It was a big issue for a while some years ago. Media was textured in the landing zone so the heads wouldn't stick to the super-smooth data surface, but the head retraction mechanisms weren't perfect, so the heads did sometimes land in the data zone when power failed. Chances are they got stuck there.
These days everybody uses ramp loading and the head is never allowed to touch the disk.
Power cycling the system (after a full backup) to check if it comes back is still good advice
Re: (Score:2)
Re: (Score:3)
got it in one (Score:4, Funny)
I've hit a bit of a brick wall when it comes to testing hard disks
Have you tried throwing them against the brick wall?
Re: (Score:2)
Re: (Score:3)
Ah!
http://www.engadget.com/2005/11/16/lacie-brick-hard-drive-as-lego-block/ [engadget.com]
Why? (Score:5, Insightful)
Even if your storage passes the test, it could fail the next day. What you should be doing is designing your storage to gracefully handle failure, like RAID 5 with spares.
Re: (Score:2)
The point is to know whether it's faulty now, at the time of arrival, rather than two weeks down the line when it becomes a problem.
Re:Why? (Score:5, Insightful)
No, the point is to design your system so that if it fails 2 weeks down the line... it isn't a problem.
Re: (Score:3)
Re: (Score:2)
That isn't really true. Hard drives have various states of failure, and you may still be able to write data to one even if it has SMART errors. There isn't a universal way to tell if a drive is going to permanently die.
A classic example is a hard drive 'clicking'. The read head is contacting something intermittently, but the drive may still appear to work. You want to get that data off and onto another drive ASAP. Now if you get a drive out of the box like that, there's no point in even putting it into a ma
Re: (Score:2)
Ah yes, except the clicking is commonly caused by a lack of oil between the ball bearings in the motor.
No such failure.
Re: (Score:3)
The click is a read error. Drive cannot read a sector, so it moves the arm all the way to the edge of the platter to recalibrate (that's the "click"), then moves it back in and tries to read the sector again.
Re: (Score:2)
I think both are important.
If you have the time to test now, it will save you the hassle of swapping it out later.
Re:Why? (Score:5, Insightful)
Point is: You can't 'test'.
You can only tell if it's working, not when it's about to fail.
If people could predict when hard drives were going to fail we wouldn't need RAID or backups.
Re:Why? (Score:5, Informative)
To a degree, you can establish with certainty that everything is working right now.
New equipment does tend to have ghosts. Given enough systems, with homogeneous roles, it doesn't matter: if it starts to fail, you pull it and put another one in.
If you've got an environment with only a few servers with dedicated roles, having a new 'production server' go tits up is a very bad thing. For a system like this, you really do want to do a 'burn in' period, IMO for at least a couple weeks, where the system is not being depended upon. Your 4-year-old system doing the same thing at relatively diminished capability is not nearly as bad as doing a cut-over and having things go south, then.
You do, however, want to do a "burn in" on that new equipment. My preference is to stress a new piece of equipment with something like building kernels (which will stress every significant subsystem to some degree) while doing file operations (e.g. something like bonnie++ if you're not copying files to the machine) for a period of at least a week, without any stability or significant performance problems. This is due to the following subjective observations:
* getting a system with a defective disk is not unheard of these days, but it's not common enough to be a serious concern.
* Short of initial failure of the disk/DOA status, the disks will likely run a number of months before your first failure (depending on how many you've got, of course)
* Instability, inconsistent behavior, flaky RAM, or odd behavior from RAID or NIC controllers, and 'ghosts' can almost invariably be traced back to the PDU or PSU. These seem to die within about two weeks to a month if they're defective/poorly designed. With a server, troubleshooting this can be a huge bitch due to how loud they are and the multiple-dependence issue on the PDU. This is kind of an end game for me, and I have a hard time trusting any of the equipment after I've had a PSU fail.
* if you plan on taxing the system at all, you'll probably have a driver related performance problem somewhere down the line. Better to find it before you need the performance.
* Every once in a while, you get a bad solid-state device (RAM, CPU, SSD). These seem to either work or not work: if they pass the initial "does it work?" check, they tend to keep working.
Re: (Score:2)
your "test" will tell you nothing at all.
If you are a PHB who thinks that "testing" is important and applies it to everything, then hire someone at $8.00/hr to waste their time.
You might have them run some software to test the server enclosures and 19" racks at the same time.
Re: (Score:3, Informative)
Hard drives, amazingly, are tested pretty effectively before leaving the factory. During tests in a controlled environment it was demonstrated that hard drives show no "failure curve" at onset, but follow a very boring, linear progression throughout their lifespan. The result: if you don't screw up when you install it, you have little to worry about on day 1 that is different from day 1000, beyond the cold reality that all mechanical devices eventually fail.
Cue the "but I have seen so many DOA drives from XYZc
Re: (Score:2)
Re: (Score:3, Insightful)
A plastic strap won't save you from the drive head failing to move. I've seen this happen when a bunch of unemployed temp workers unload the truck. This is why it seems "batches" of similar drives fail if you are getting them from the same source... some asshole was throwing and kicking the boxes around.
If your static strap is made of (all) plastic, then you will have issues beyond shipping and handling woes...
Re: (Score:3)
During tests in a controlled environment it was demonstrated that hard drives show no "failure curve" at onset, but follow a very boring, linear progression throughout their lifespan.
Interesting. Didn't the Google study on disk reliability [slashdot.org] show a distinct infant mortality spike in the beginning with a lowest failure rate between 1-2 years of age and after 2 years of age a sharp increase in failure rate quickly reaching a certain plateau? What you describe seems to be quite different.
Re: (Score:3)
During tests in a controlled environment it was demonstrated that hard drives show no "failure curve" at onset, but follow a very boring, linear progression throughout their lifespan.
Interesting. Didn't the Google study on disk reliability [slashdot.org] show a distinct infant mortality spike in the beginning with a lowest failure rate between 1-2 years of age and after 2 years of age a sharp increase in failure rate quickly reaching a certain plateau? What you describe seems to be quite different.
Actually, no: that was the study I was referring to, and it didn't show anything like the "bathtub" curve you describe. The AFR for drives 0-1 year old was steady (at most 1% higher for drives newer than 3 months than over the rest of the first year), from 1-2 years it held steady, and from 2-3 years it rose precipitously until year 5, when it became statistically chaotic (likely due to drives suffering from more obscure failures than the normal bearing or platter wear-out). Basically, it dem
Re: (Score:3)
There is a very slight bathtub-type curve - all numbers rounded, it's about 3% AFR in the first quarter (i.e. about 0.75% of drives failing in the first quarter) and 2% for drives in the 3-12 month range (i.e. about 1.5%). If I read the statistics presentation right, 33% of first-year failures happen in the first quarter, which is a detectable but minor initial elevation. That's dwarfed by the 1-2 year AFR (about 8%) and 2-3 year AFR (about 9%), but it drops slightly after that.
They presented the AFRs rather tha
Re:Why? (Score:5, Interesting)
I would disagree. I believe it's best to be able to identify the first moment a hard drive starts to have problems, rather than just the condition it's in when you get it.
One reason is that most of your hard drives will eventually develop a problem, and only a small fraction of the drives you buy will arrive defective.
Another reason is that nothing of value is on the new drive; you are risking only the purchase price. A year from now, it may hold important, possibly irreplaceable data, or at least data that is inconvenient to replace.
I run a piece of custom software I wrote that does a slow "disk crawl", reading ~100 MB every 5 minutes. Over the course of a month it reads every block on the drive, then starts over. I get an email if an I/O error OR slow performance is encountered. I store a lot here; I have somewhere around 25TB of storage under the roof at home. Over the years I've been notified ~8 times of a failing drive. In all cases I was able to replace it before it became inaccessible. One of them never spun up again the day after I removed it from service. I consider this a very good system, and am surprised not to see a similar commercial offering. (It's a 5,600-line bash script!)
SMART is only useful to possibly confirm that a drive has a problem. Only a fool relies on it to notify them when there's a problem. I've probably replaced somewhere around 750 hard drives here at work, and of those, under a dozen were still accessible and displaying a SMART failure. Many times I've had SMART toggle to failed while I was doing data recovery to a replacement drive, as I was fighting my way through I/O errors. Got some Cpt Obvious going on there I think.
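The crawl idea above can be sketched in a few lines. This is a minimal illustration, nowhere near the poster's 5,600-line script (no state, graphing, or hang protection): `DEV` is a placeholder, the `mail` alert address is assumed, and the chunk size and "slow" threshold are made-up defaults.

```shell
#!/bin/sh
# Minimal "disk crawl" sketch: read ~100 MB every 5 minutes, walking the
# whole device over time; alert on read errors or abnormally slow chunks.
DEV=${DEV:-/dev/sdX}    # placeholder -- set to the drive to crawl
CHUNK_MB=100
SLOW_SECS=30            # assumed threshold for "slow performance"

if [ -b "$DEV" ]; then
    size_mb=$(( $(blockdev --getsize64 "$DEV") / 1048576 ))
    offset=0
    while [ "$offset" -lt "$size_mb" ]; do
        start=$(date +%s)
        if ! dd if="$DEV" of=/dev/null bs=1M count="$CHUNK_MB" \
                skip="$offset" iflag=direct 2>/dev/null; then
            echo "read error at ${offset} MB on $DEV" | mail -s "disk crawl alert" root
        fi
        took=$(( $(date +%s) - start ))
        if [ "$took" -gt "$SLOW_SECS" ]; then
            echo "slow read (${took}s) at ${offset} MB on $DEV" | mail -s "disk crawl alert" root
        fi
        offset=$(( offset + CHUNK_MB ))
        sleep 300
    done
else
    echo "set DEV to the drive to crawl (current: $DEV)"
fi
```

Using `iflag=direct` bypasses the page cache so each pass actually touches the platters rather than re-reading cached data.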
Re: (Score:3)
Color menus (arrow key), user-editable disk database, remote updates, authenticated email relaying, support for multiple drives, auto-detect and add, speed and capacity testing during add, performance and history graphing, quite a lot really. ;) It's a big'un. I make the most of whatever language I use. The incremental scan nature of the script itself requires a good deal of code. There have also been numerous changes to be as certain as possible that the script cannot get hung. Failing hard drives ar
Re: (Score:2)
BACKUP, then back that up... then back that up... then back that up offsite.
Re: (Score:2)
Yo Dawg....
No, no. Won't do that.
Re: (Score:2)
Re: (Score:2)
RAID 6, then RAID 10, then backup hourly, backup daily, backup weekly, make two copies and send one each to two off-site locations.
Re: (Score:2)
I prefer raid 60.
Re:Why? (Score:5, Insightful)
And then what you should test is that it actually notifies you when something does fail, so you know about it and can fix it. You can also test how long it takes to rebuild the array after replacing a disk, and how much performance degradation there is while that is happening.
Re:Why? (Score:5, Informative)
Not until the hardware fails and you need the data that was on there but not on the backup (or realized the backup failed a long time ago...).
For performance, yes, hardware is fastest. For reliability though, software RAID is better (hardware RAID can have interesting firmware version issues).
Linux running an md RAID array? If the server goes down, pop the drives in another server, a couple of mdadm commands later and the array is up and running. Hell, even Windows' software RAID ought to be able to work to recover an array where the server hardware died.
So if you're using RAID not for performance reasons, but for protection against hard drive failure, soft-RAID works very well. Hell, one of my NAS appliances died, and all I did was take the drive out, attach 4 USB adapters to them, and plug them into my Linux box. Instant access to the data,
There's nothing like the panic that happens when an array goes down due to non-drive hardware failure.
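The "pop the drives in another server" recovery the parent describes usually comes down to a couple of mdadm commands. Device names below are examples only; on the new box the disks may enumerate differently, which doesn't matter because mdadm assembles from the md superblocks, not device names.

```shell
#!/bin/sh
# Sketch of reassembling an md array on replacement hardware.
# /dev/sd[bcde]1, /dev/md0 and the mount point are example names.
if command -v mdadm >/dev/null 2>&1 && [ -b /dev/sdb1 ]; then
    mdadm --examine /dev/sd[bcde]1   # inspect members, confirm the array UUID
    mdadm --assemble --scan          # assemble arrays found via md superblocks
    cat /proc/mdstat                 # check that the array came up
    mount /dev/md0 /mnt/recovered
else
    echo "mdadm or example member devices not present; adjust names first"
fi
```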
Re: (Score:3)
What a convoluted way to make sure when the shit hits, it gets spread out really evenly.
Re: (Score:3)
I guess you haven't dealt with multiple conflicting firmware versions that wipe out raidsets then. You're lucky.
Comment removed (Score:4, Insightful)
Re: (Score:2)
That is not the only solution. There are plenty of multi-path, multi-controller SAN solutions out there. You can install more than one HBA in your hosts.
However! Once you are talking about controllers or HBAs as failure points, you need to be rethinking your architecture. Disk failures are pretty common, but it's very unlikely silicon is going to up and die on you. If you can't tolerate those rare events, you really need to be looking at some cluster / application-layer redundancy.
You will never eliminate all
Re: (Score:2)
It's more fun than that.
Two storage vaults with 12 Hard drives each.
Running a nice Raid 5 or raid 6 or even a raid 50 or 60.
Knock loose ONE of those USCSI connectors going to a drive cage.
Raid is toast. I don't care WHAT raid you are running, none of them can withstand a loss of 50% of the drives.
Raid is NOT a backup. It's high availability, mitigating the highest-failure-rate part: the hard drives.
Re: (Score:3)
We had redundant raid arrays
Were they made from inexpensive disks?
Hard Drive Testing (Score:2, Informative)
In previous jobs, I've used the system of:
Full Format, Verify, Erase, then a Drive fitness test.
If there are errors in the media, the format, verify, or erase will pick them up; then the fitness test checks the hardware.
Hitachi has a Drive Fitness test program
I have also used hddllf (hddguru.com)
Re: (Score:2)
ALL the drive manufacturers have drive testing software, WD's is called Data Lifeguard.
Download the software from the drive manufacturer, run extended test which will do a full disk scan for bad sectors. Then do their full write test.
If it passes those, it's good to go... until it fails
S.M.A.R.T. (Score:2)
There is no perfect tool that I could say, each drive manufacturer makes their own, and there are numerous third party tools out there as well. My best advice is have them all and have them handy. One I use quite a bit is HDD Regenerator, pretty thorough utility but it takes some time to run.
Re: (Score:2)
Comment removed (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
SMART is good for telling you when your drives do have problems that need addressing. It's not so great for giving you assurance that your drives do not have problems - consider a positive SMART result to be more of an "I don't know" than a "good". You should generally assume your drives can fail at any time. I don't think there's any way to reliably predict the sudden death of a drive.
scsi (Score:3)
don't use consumer drives if you're concerned.
see also http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/archive/disk_failures.pdf [googleusercontent.com]
The Goog wrote a nice paper on hard drives.
Re:scsi (Score:4, Insightful)
Perhaps an honest mistake, but the link is broken. Second, evidence has shown SATA drives are more reliable than commercial/enterprise-grade drives. Only buy those if you don't like your money, or there is some clear advantage. That supposed advantage is not reliability, unless some sort of rapid replacement mechanism comes with the drive. Although replacement isn't reliability in my book.
http://lwn.net/Articles/237924/ [lwn.net]
Re: (Score:2)
Re: (Score:2)
However, with proper redundancy one can still get away with using consumer-level drives with an acceptable level of risk.
Re: (Score:2)
Jet Stress (Score:3)
The usual (Score:5, Informative)
All I usually do is:
1. smartctl -AH
Get an initial baseline report.
2. mke2fs -c -c
Perform a read/write test on the drive.
3. smartctl -AH
Get a final report to compare to the initial report.
If the drive remains healthy, and error counters aren't incrementing between the smartctl reports, it's good to go.
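The three steps above can be strung together into one script. This is a sketch of the parent's procedure, not an exact reproduction: `DEV` is a placeholder, the temp-file paths are arbitrary, and note that `mke2fs` DESTROYS the drive's contents.

```shell
#!/bin/sh
# Baseline SMART report, read/write surface test via 'mke2fs -c -c',
# final SMART report, then a diff of the two attribute tables.
# WARNING: the mke2fs step wipes the device in DEV.
DEV=${DEV:-/dev/sdX}    # placeholder -- set to the drive under test

if [ -b "$DEV" ]; then
    smartctl -AH "$DEV" > /tmp/smart-before.txt
    mke2fs -c -c "$DEV"                 # -c -c = read/write badblocks test
    smartctl -AH "$DEV" > /tmp/smart-after.txt
    # Any attribute that moved between the reports shows up here; pay
    # attention to Reallocated_Sector_Ct and Current_Pending_Sector.
    diff /tmp/smart-before.txt /tmp/smart-after.txt && echo "no attribute changes"
else
    echo "set DEV to the drive under test (current: $DEV)"
fi
```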
Re: (Score:2)
Re: (Score:2)
A 2-day burn-in is minimal for new hardware. Also, as drives get bigger, they also get faster: IDE, SATA1, SATA2, SATA3, Thunderbolt, ...
This is what I use (Score:4, Interesting)
root ~/bin # cat scandisk
#!/bin/bash
# RW scan of HD - usage: scandisk sdX (e.g. 'scandisk sdb')
argg='/dev/'$1
# if IDE (old kernels)
hdparm -c1 -d1 -u1 $argg
# Speed up I/O - also helps for USB disks
blockdev --setra 16384 $argg
blockdev --getra $argg
#time badblocks -f -c 20480 -n -s -v $argg
#time badblocks -f -c 16384 -n -s -v $argg
time badblocks -f -c 10240 -n -s -v $argg
exit 0
---------
Note that badblocks -n reads the existing content of the drive, writes a randomized pattern, reads it back, and then writes the original content back. With modern high-capacity (500GB+) drives, you should plan on leaving this running overnight. You can do this from pretty much any Linux live CD, AFAIK. If running your own distro, you can monitor the disk I/O with 'iostat -k 5'.
From ' man badblocks '
-n Use non-destructive read-write mode. By default only a non-destructive read-only test is done. This option must not be combined with the -w option, as they are mutually exclusive.
Do a Surface Scan (Score:2)
Hard Disk Sentinel (Score:3, Insightful)
Even with that, using the SMART data in a smart way still only predicts about 30% of failures. The other 70% will come out of nowhere. That is why it is best to assume all drives are suspect and can die at any time, and never allow a single drive to be the sole copy of anything.
Think Performance - IOZONE (Score:2)
When it comes to media, even with SMART your drives will work 'till they die, and there's no way to predict that with a test.
Given that, your best option is to ensure that the drives are performing as expected. I've found many a faulty drive with IOZONE.
http://www.iozone.org/ [iozone.org]
old timers look here (Score:2, Interesting)
OK so that was the noob version of the question.
I have a question for the old timers. Has anyone ever implemented something like:
1) log the time and temp
2) do a run of bonnie++ or a huge dd command
3) log the time and temp
4) Repeat above about ten times
5) numerical differentiation of time and temp and also any "overtemps"
In theory, run from a cold or lukewarm start, that could detect a drive drawing "too much" current or otherwise being F'd up, or a cooling fan malfunction.
I'm specifically looking for rate of te
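Steps 1-4 above can be sketched like this. It's an illustration only: `DEV` is a placeholder, the run count and workload size are arbitrary, and the temperature attribute name varies by vendor (194 Temperature_Celsius is the common one).

```shell
#!/bin/sh
# Log drive temperature and elapsed time around repeated dd reads,
# then eyeball the deltas across runs for a cooling or current problem.
DEV=${DEV:-/dev/sdX}    # placeholder -- set to the drive under test
RUNS=10

drive_temp() {
    smartctl -A "$DEV" | awk '$2 == "Temperature_Celsius" { print $10 }'
}

if [ -b "$DEV" ]; then
    run=1
    while [ "$run" -le "$RUNS" ]; do
        t0=$(drive_temp); s0=$(date +%s)
        dd if="$DEV" of=/dev/null bs=1M count=10240 iflag=direct 2>/dev/null
        t1=$(drive_temp); s1=$(date +%s)
        echo "run $run: $(( s1 - s0 ))s, temp ${t0}C -> ${t1}C"
        run=$(( run + 1 ))
    done
else
    echo "set DEV to the drive under test (current: $DEV)"
fi
```

The numerical differentiation (step 5) is then a matter of diffing successive time/temperature pairs from the log.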
Re: (Score:2)
Re: (Score:2)
Ahh but the delta-temp over delta-time, assuming identical hardware, is a direct measurement of cooling capacity.
Are you testing an array or individual drives? (Score:5, Insightful)
I manage a team that oversees petabytes of disk, both within enterprise arrays and internal to servers. For testing the arrays, since there are gigabytes of cache in front of the disks, I can only rely on the vendor to do the appropriate post-installation testing to make sure there are no DOA disks. For internal disks, as others have mentioned, you could run IOMeter for days without a problem and then the very next day the drive is dead. Unlike memory, disks have moving parts that can fail much more easily than chips. However, with proper precautions like RAID, single-disk failures can be tolerated.
The bigger problem is a double disk failure, due to the amount of time required to rebuild the failed disk. Back when disks were 100GB this was a "relatively" quick process. However, in some of my arrays with 3TB drives, it can take much longer, to the point where hot spares have been considered not worth it since my array vendor will have a new disk in the array within 4 hours. With what an enterprise disk costs from the array vendor (not Fry's), it can start to add up.
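The rebuild-time growth is easy to put rough numbers on. A back-of-envelope sketch, assuming an optimistic sustained rebuild rate of 100 MB/s (real arrays under load rebuild slower):

```shell
# Rebuild time = capacity in MB / rate in MB/s, converted to hours+minutes.
echo "100 GB: $(( 100 * 1000 / 100 / 3600 ))h $(( 100 * 1000 / 100 % 3600 / 60 ))m"   # 0h 16m
echo "  3 TB: $(( 3 * 1000 * 1000 / 100 / 3600 ))h $(( 3 * 1000 * 1000 / 100 % 3600 / 60 ))m"   # 8h 20m
```

That 8+ hour window per 3TB member is exactly why double failures dominate the risk calculation.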
Re: (Score:2)
Re: (Score:3)
If you've got 3TB drives in use in RAID, you're a fool for not running double parity. Like you said, the time required is just too long; you need to be able to survive a 2-disk failure.
Reliability and fault-tolerance (Score:5, Informative)
Not completely related to how to test, but...
In 2007 Google reported that, for a sample of 100k drives, only 60% of the drives that failed had ever encountered any SMART errors. Also, NetApp has reported a significant number of drives with temporary failures, such that they can be placed back into a pool after being taken offline for a period of time and wiped. Google also had a lot of other interesting things to say (such as that heat has no noticeable effect on hard drive life under 45C, that load is unrelated to failure rates, and that if a drive doesn't fail in the first 3 months, it's very unlikely to fail until the 2-3 year timeframe).
You can find the google paper here: http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf [googleusercontent.com]
A few other notes that you can find from storage vendor tech notes if you own their arrays:
* Enterprise-level SAS drives aren't any more reliable than consumer SATA drives
- But they do have considerably different firmwares that assume they will be placed in an array, and thus have a completely different self-healing scheme than consumer-level drives (generally resulting in higher performance in failure scenarios)
* RAID 5 is a really bad idea - correlated failures are much more likely than the math would indicate, especially with the rebuild times involved with today's huge drives
* You have a lot more filesystem options that might not even make sense to use with a RAID system, like ZFS, as well as other mechanisms for distributing your data at a layer higher than the filesystem
Ultimately the reality is that regardless of the testing you put them under, hard drives will fail, and you need to design your production system around this fact. You *should* burn them in with constant read/write cycles for a couple days in order to identify those drives which are essentially DOA, but you shouldn't assume any drive that passes that process won't die tomorrow.
rsync -nc (Score:2)
I mirror data and test it periodically with rsync using the dry-run (-n) and checksum options (-c) to do a full comparison. I usually have more confidence in a new disk after I've done this a few times.
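A concrete form of that verify pass, with placeholder paths: `-n` (dry run) plus `-c` (full checksum comparison) lists any file whose content differs between the copies, without modifying either side.

```shell
#!/bin/sh
# Dry-run checksum comparison of a mirror; nothing is changed on either
# side. /data and /mnt/mirror are example paths -- adjust to your layout.
if [ -d /data ] && [ -d /mnt/mirror ]; then
    rsync -avnc /data/ /mnt/mirror/
else
    echo "adjust /data and /mnt/mirror to your actual mirror paths"
fi
```

An empty file list (just the summary lines) means every file's checksum matched, which is the confidence-building result described above.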
Lithophobia (Score:2)
I have a favorite boulder that has served my burn-in testing needs pretty well. Would you like a photo so you can chisel your own? I added some LED bling to mine.
Can't even believe this made it to the front page. (Score:2)
I mean, really, someone working at slashdot doesn't know this? This is about as basic a question as it gets when it comes to hardware.
Raid 5, a solid backup scheme, and a storage closet full of replacement drives. There is no good way to test HDDs.
Due entirely to the fact that they are a WEAR item it is only possible to decide which brand you trust the most.
Other than that, if its a big job and a lot of HDDs are going to be bought you could take 10 of each candidate drive and run them through spinrite till
IOmeter (Score:2)
UnRAID Preclear Script (Score:4, Informative)
http://lime-technology.com/forum/index.php?topic=2817.0 [lime-technology.com] ... the main feature of the script is
1. gets a SMART report
2. pre-reads the entire disk
3. writes zeros to the entire disk
4. sets the special signature recognized by unRAID
5. verifies the signature
6. post-reads the entire disk
7. optionally repeats the process for additional cycles (if you specified the "-c NN" option, where NN = a number from 1 to 20, default is to run 1 cycle)
8. gets a final SMART report
9. compares the SMART reports alerting you of differences.
Check it out. Its "original" purpose was to set the drive to all "0's" for easy insertion into a parity array (read: parity drive does not need to be updated if the new drive is all zeros) but it has also shown great utility as a stress test / burn-in tool to detect infant mortality and "force the issue" as far as satisfying the criteria needed for an RMA (read: sufficient reallocated block count)
If your skill level is enough to adapt the script to your own environment then great; otherwise UnRaid Basic is free and allows 3 drives in the array, which should let you simultaneously pre-clear three drives. You might even be able to pre-clear more than that (up to available hardware slots), since at that point you aren't technically dealing with the array, but with any enumerated hardware the script has access to, which should be everything on the disk side. Hardware requirements are minimal and it runs from flash.
Storage Unit is more important than the Drives (Score:2)
vendors solved this problem years ago (Score:2)
We use HP servers, and HP ships a suite of software to install on the server along with the OS. It monitors the hardware and warns you of any problems. Unless you like doing things the hard way, this was solved years and years ago.
If I have a bad hard drive, I call HP, send them a log file, and in two hours a new one is delivered.
Testing hardware by excercising (Score:2)
H2TestW - in particular for (often fake) USB media (Score:2)
While it is primarily advertised for flash media these days (and indispensable, since there have been numerous forgeries and DOAs, at least on the European market lately), it originally evolved as an HDD tester.
On Linux in particular, a combination of dd and smartctl (before and after writing the entire disk, as well as for self-tests) may come in handy too, of course.
Testing Drives. (Score:2)
It takes a while, but if you really want to be sure of your hardware (as sure as you can be, at least):
Check the SMART status. If there are any re-allocated sectors, make note of the number.
Run badblocks with the -w switch against the drive (from a Linux live CD of your choice, for example).
That will completely read/write test the drive 4 times with multiple patterns. There should be no errors reported. This test will take longer than overnight on modern drives.
Check the SMART data again. Be wary if the reallocated sector count has increased.
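That workflow scripts easily. A hedged sketch, with the destructive badblocks/smartctl invocations left commented and /dev/sdX as a placeholder:

```shell
#!/bin/sh
# Before/after SMART comparison around a badblocks -w pass.
# WARNING: badblocks -w destroys all data on the target device.

# smartctl -A /dev/sdX > smart_before.txt
# badblocks -wsv /dev/sdX                 # four write+read pattern passes
# smartctl -A /dev/sdX > smart_after.txt

# Helper: pull one attribute's raw value out of `smartctl -A` output,
# so the before/after counts can be compared mechanically.
smart_raw() {  # usage: smart_raw ATTRIBUTE_NAME < smartctl-output
    awk -v a="$1" '$2 == a {print $NF}'
}

# Example: smart_raw Reallocated_Sector_Ct < smart_before.txt
```

The awk field positions match the usual `smartctl -A` attribute table (attribute name in column 2, raw value last), which is an assumption worth re-checking against your smartctl version.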
Ears (Score:4, Informative)
I replaced the drive in my TiVo. The first replacement was so much louder that I swapped the original back, then put the new drive in a test rig. It started getting bad sectors within a few days. RMA'd it to Seagate, and the new one was much quieter.
Just sent it to me (Score:2)
Testing methods (Score:2)
Or any tried and tested methods for testing storage media?
#2 pencil, Scantron sheet and the test.
Works for me every time.
badblocks (Score:5, Interesting)
badblocks -c 10240 -s -w -t random -v /dev/sda1
That's my standard test for all HDDs.
def check_storage_media(media): (Score:2)
    while label_visible(media):
        apply_lighter_fluid(media)
        ignite(media)
        wait_while_burning(media)
    return media_is_bad
My suggestions (Score:5, Informative)
Speaking as somebody who has done hardware qualifications and burn-in development at very large scale, for companies you have heard of, let me tell you the tools I use:
fio: The _BEST_ tool for raw drive performance and burn-in testing. A couple of hours of random access will ensure the drive head can dance; then a full block-by-block walk with checksum verification will ensure that all blocks are readable and writable. I usually do 2 or 3 passes here. You can tell fio to reject drives that do not perform to a minimum standard, which is very useful for finding functional yet not-quite-up-to-speed drives. The statistics produced are awesome as well: something like 70 stats per device per test.
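For reference, a minimal fio job along those lines might look like the following. This is a sketch, not the poster's actual config: the device name, run times and block sizes are placeholders, and the job will overwrite everything on the target.

```ini
; hypothetical burn-in job -- WILL destroy data on /dev/sdX
[global]
filename=/dev/sdX
direct=1
ioengine=libaio
verify=crc32c

; a couple of hours of random access to make the head dance
[seek-stress]
rw=randrw
bs=4k
iodepth=32
runtime=7200
time_based=1

; then a sequential block-by-block write with checksum verification
[full-verify]
stonewall
rw=write
bs=1M
do_verify=1
```

The `stonewall` keeps the verification pass from starting until the seek stress has finished; performance floors (e.g. latency or bandwidth limits for rejecting slow drives) would be added per-job on top of this.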
stressapptest: This is Google's burn-in tool and virtually the only one I have ever found that supports NUMA on modern dual-socket machines. This is IMPORTANT, as it's easy to miss issues on the link between the CPUs. The various testing modes give you the ability to tear the machine to pieces, which is awesome. stressapptest is also the most power-hungry test I have ever seen, including the Intel power testing suite that you have to jump through hoops to get.
Pair this with a pass of memtest and you get a really, really nice burn-in system that can brutalize the hardware and give you scriptable systems for detecting failure.
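The "scriptable systems for detecting failure" part can be as small as a wrapper that runs each stage and flags the first failure. A sketch; the commented stressapptest/fio invocations and their arguments are only examples, not the poster's recipe:

```shell
#!/bin/sh
# Tiny burn-in harness: run each stage, report PASS/FAIL per stage,
# and return nonzero on failure so a bad unit can be pulled early.

run_stage() {  # usage: run_stage NAME CMD [ARGS...]
    name=$1; shift
    if "$@"; then
        echo "$name: PASS"
    else
        echo "$name: FAIL"
        return 1
    fi
}

# Example stages (placeholders -- tune durations/sizes to the machine):
# run_stage memory stressapptest -s 3600 -M 4096 || exit 1
# run_stage disk   fio burnin.fio                || exit 1
```

Logging each stage's output to a per-serial-number file is the obvious next step for tracking units through a large qualification run.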
MHDD (Score:3)
It may not be sophisticated, but MHDD is what I use at work (among a couple of other tools). Other tools are more reliable in different circumstances, but my first stop is always MHDD, because it will give me a comprehensive R/W delay test on a disk. Extremely practical for a workshop, perhaps not practical for a data centre.