Thread: How Many Hours Is Too Many For a Disk Drive?

1. How Many Hours Is Too Many For a Disk Drive?

Since disk drives have moving parts, they have a finite lifespan before they go "poof," leaving you and your data wondering what happened.
We all know the importance of a rigorous backup plan, but how many hours can a drive accrue before it should be considered suspect?
There is no absolute answer, of course. I wonder what the drive manufacturers claim for MTBF (mean time between failures).
I have seen several drives with 35,000 hours or better that seem to run just fine and pass CrystalDiskInfo easily.
For a computer that runs continuously, that amounts to only about 4 years of use, which is probably easier on a drive than stop-start cycles every day.
And no doubt de-fragging every week or so is a serious detriment to a drive's longevity.

Any opinions out there?
rstew
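For anyone wanting to check the hours on their own drives: tools like CrystalDiskInfo (Windows) or smartctl from smartmontools (cross-platform) read SMART attribute 9, Power_On_Hours. A minimal sketch that pulls that value out of `smartctl -A`-style output; the sample text below is a hand-written illustration, not real drive data, and this is not a full SMART parser:

```python
# Pull Power_On_Hours (SMART attribute 9) out of "smartctl -A"-style text.
# The sample output is a hand-written illustration, not real drive data.

import re

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  0
  9 Power_On_Hours          0x0032   092   092   000    Old_age   35412
"""

def power_on_hours(smart_text):
    """Return the raw value of SMART attribute 9, or None if absent."""
    for line in smart_text.splitlines():
        m = re.match(r"\s*9\s+Power_On_Hours\s+.*\s(\d+)\s*$", line)
        if m:
            return int(m.group(1))
    return None

hours = power_on_hours(sample)
print(hours, "hours is about", round(hours / 8760, 1), "years of 24/7 use")
```

On a real system you would feed this the output of `smartctl -A /dev/sda` (run as root); the drive letter and attribute layout vary by platform and drive.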

2. Good points to ponder! Let's see if others weigh in with their observations.

3. Difficult to give any definite answer to this; all component failures fall somewhere along a 'bathtub' profile - think of a 'U' shape, with left = new and right = end of life.

Most failures occur within the first days/weeks of use (the left-side slope, with failures declining over time), then the curve levels out into a very low failure rate after, say, 2 months. Approaching the end-of-life period (climbing out of the bottom of the bathtub), the failure rate rises again quite steeply.

The 'normal' low failure period might be anywhere from 9 months to several (8+) years, dependent upon many factors, cost and intended use being just two of them.
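One common way to model the bathtub curve described above is as the sum of three Weibull hazard rates: a decreasing one for infant mortality (shape < 1), a flat one for random failures (shape = 1), and a rising one for wear-out (shape > 1). A minimal sketch; the shape and scale numbers here are made-up values chosen only to produce the U shape, not measured drive statistics:

```python
# Illustrative bathtub hazard: infant mortality + random + wear-out.
# Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1). The shapes/scales
# below are invented for illustration, not real drive reliability data.

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t_hours):
    infant  = weibull_hazard(t_hours, shape=0.5, scale=50_000)   # declining
    random_ = weibull_hazard(t_hours, shape=1.0, scale=500_000)  # flat
    wearout = weibull_hazard(t_hours, shape=5.0, scale=60_000)   # rising
    return infant + random_ + wearout

# Hazard is high when new, low mid-life, and high again near end of life.
for t in (100, 20_000, 70_000):
    print(f"{t:>6} h  hazard ~ {bathtub_hazard(t):.2e} failures/hour")
```

The exact numbers mean nothing; the point is the shape: high early hazard, a long flat bottom, then a steep rise.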

4. Originally Posted by rstew
I wonder what the drive manufacturers claim for MTBF (mean time between failures).

You can usually find the MTBF on the Manufacturer's website.

Remember what MEAN is:
The mean is the average of all numbers and is sometimes called the arithmetic mean. To calculate mean, add together all of the numbers in a set and then divide the sum by the total count of numbers. For example, in a data center rack, five servers consume 100 watts, 98 watts, 105 watts, 90 watts and 102 watts of power, respectively. The mean power use of that rack is calculated as (100 + 98 + 105 + 90 + 102 W)/5 servers = a calculated mean of 99 W per server. Intelligent power distribution units report the mean power utilization of the rack to systems management software.
Thus, one drive may die after 1 hour while another may last for 10 years of continuous use. The problem is: which one did you get? So, IMHO, keep a rigorous backup plan and run them till they drop, or at least until they start showing problems. HTH
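The point about the mean can be made concrete with a little arithmetic. The wattage figures are from the rack example above; the exponential-lifetime relation is the standard simplified reliability model (an assumption, not something from a drive spec sheet):

```python
# The arithmetic mean from the post's rack example, plus what MTBF does
# (and does not) tell you about a single drive. Wattages are from the
# example above; the exponential survival model is a standard
# reliability-engineering simplification, assumed here for illustration.
import math

watts = [100, 98, 105, 90, 102]
mean_watts = sum(watts) / len(watts)
print(mean_watts, "W per server")  # 99.0

# MTBF is the same kind of average. Under a constant failure rate, the
# chance a single drive survives t hours is exp(-t / MTBF), so even a
# big MTBF leaves a real chance that *your* drive fails early.
mtbf_hours = 100_000
one_year = 8_760
p_fail_first_year = 1 - math.exp(-one_year / mtbf_hours)
print(f"Chance of failing in year one: {p_fail_first_year:.1%}")
```

That is exactly the "which one did you get?" problem: the average says little about the individual unit.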

5. In years past, the 1980s to early 1990s, I was updating my HDDs almost every year, but that was because of technology advances in speed and size. Since the mid-to-late 1990s I have been adding new drives about every 3 years and retiring the old ones, again mostly for the latest technology. I have only ever had one out of dozens fail, and it was in a laptop that saw hard industrial-strength use.

For an office environment I would recommend changing every 3 years and for home use or not as important office computers every 5 years. YMMV

Most people just backup once a week or never and "hope". :-)

6. I have worked with servers that had been running for 7 years and still had the original drives. I've also seen drives die after a week. The only time I've seen mechanical failure was old desktop drives - you hit them on the side with the blunt end of a screwdriver to start them, then back up.

cheers, Paul

7. Depends on the drive make and model as well...

http://arstechnica.com/information-t...sks-are-equal/

8. Those of us who have worked on IBM Deathstars, or the 10/20/30GB Fujitsu series from around the turn of the century, are well aware of that.

That data is from Backblaze, it's one of 4 or 5 articles/blogs they've published on the topic. Google published one a few years ago, too. These all relate to server workloads, iirc.

For a cross-section (server + home/business/gaming usage), there has been a periodic compilation based on hardware returns from a French etailer over the last 6 years. It is short-term data, as in returns under warranty: up to 12 months in the main sections, and 6 months or so in the slightly later data in the conclusion section. Unfortunately, only the early reports were translated into English, though once you've studied some of the English versions (take a look at the OCZ SSDs, for example), working with the data from the later French-only reports is quite straightforward.

* Unfortunately, the English link now redirects to the French site; the .be server (behardware.com) cache might be available on web archive sites, or use your favourite translation method.
http://www.hardware.fr/articles/927-...osants-11.html


10. I think that if anyone could come up with a definite answer to the OP's question, they could become rich. I'm sure many companies would pay for software or hardware that could accurately predict the life of a drive.

11. "How Many Hours Is Too Many For a Disk Drive?"

The hour you ran it and it failed.

12. Originally Posted by rstew
And no doubt de-fragging every week or so is a serious detriment to a drive's longevity.
Defragging a mechanical drive is never a detriment to longevity. Your drives work harder if you NEVER defrag them.
Not only the above, but filling them to capacity is also a detriment.

Originally Posted by rstew
For a computer that runs continuously that amounts to only about 4 years of use, which is probably easier on a drive than stop-start every day.
A computer may be up and running 24/7, but that does not mean a drive is continuously running. Mech drives are designed and built to "start/stop" frequently.

What is a WELL KNOWN & PROVEN detriment to a mechanical drive:

1. An adverse and prolonged thermal environment.
2. An adverse and prolonged vibration environment.
3. Power failures, electrical variations (voltage and current variations; poor quality power), and electrical surges.
4. Factory defects. There is no such thing as a man-made technology that is infallible.

13. I built a new computer for Win7 when it came out and put in a 250GB C: drive and a 500GB D: drive, haven't had a problem with it being on 24/7 since. I do clean the computer frequently [video card fan lets me know when] and keep it on a UPS.

14. I've an XP machine which has been up continuously (apart from maintenance, holidays etc.) since 2003. The disk only died in 2014. It's now good as new with a drive I took from a video recorder which had never been used.

Originally Posted by Berton
<snip>
[video card fan lets me know when]
Somewhat OT, but that reminds me; the machine above is on its 3rd video card. The way it's orientated in the case means the fan's on the bottom rather than the top, and only glued on. I've had two fall off, frying the card. The last one took the motherboard with it. The present card doesn't have a fan, and is more powerful. So it's got its 2nd MB, 3rd video card, and 2nd HD, all with Windows OEM which still activates nicely!

15. So I went to the Seagate site to see what they say about expected MTBF for their drives.
Amazingly, they say it's about 1.2 MILLION hours!! That is about 137 years.
From their site:
"AFR and MTBF specifications are based on the following assumptions for business critical storage system environments:
• 8,760 power-on-hours per year.
• 250 average motor start/stop cycles per year.
• Operations at nominal voltages.
• Systems will provide adequate cooling to ensure the case temperatures do not exceed 40°C. Temperatures outside the specifications in Section 2.9 will increase the product AFR and decrease MTBF"
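For what it's worth, that 1.2-million-hour figure is a population statistic, not a promised lifetime. Vendors publish it alongside AFR (annualized failure rate); under the usual constant-failure-rate assumption the two are related by AFR ≈ 1 − exp(−8760/MTBF). A quick sketch of that conversion; the formula is the standard reliability approximation, not something taken from Seagate's page:

```python
# Convert an MTBF spec into an annualized failure rate (AFR) under the
# standard constant-failure-rate assumption. 8,760 power-on hours/year
# matches the spec quoted above; the exponential model is an assumption.
import math

POWER_ON_HOURS_PER_YEAR = 8_760

def afr_from_mtbf(mtbf_hours):
    """Expected fraction of a drive population failing per year."""
    return 1 - math.exp(-POWER_ON_HOURS_PER_YEAR / mtbf_hours)

mtbf = 1_200_000
print(f"AFR for {mtbf:,} h MTBF: {afr_from_mtbf(mtbf):.2%}")
# Roughly 0.7%: in a fleet of 1,000 such drives you'd expect about 7
# failures per year, even though 1.2 million hours "sounds like" 137
# years per drive.
```

So an individual drive dying well "before its time" is entirely consistent with the spec; MTBF describes the fleet, not your drive.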

Wonder why so many drives die before their time?
rstew

16. The drives I use list a 100,000-hour MTBF (Mean Time Between Failures). With 8,760 hours in a year, that would still be longer than most will keep a computer: a bit over 11 years.

