  1. #1
    3 Star Lounger
    Join Date
    Jun 2014
    Posts
    283
    Thanks
    11
    Thanked 6 Times in 5 Posts

    How Many Hours Is Too Many For a Disk Drive?

    Since disk drives have moving parts, they have a finite lifespan before they go "poof," leaving you and your data wondering what happened.
    We all know the importance of a rigorous backup plan, but how many hours can a drive accrue before it should be considered suspect?
    There is no absolute answer, of course. I wonder what the drive manufacturers claim for MTBF (mean time between failures).
    I have seen several drives with 35,000 hours or more that seem to run just fine and pass CrystalDiskInfo easily.
    For a computer that runs continuously, that amounts to only about 4 years of use, which is probably easier on a drive than stop-start every day.
    And no doubt de-fragging every week or so is a serious detriment to a drive's longevity.
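
    For anyone curious, here's a rough sketch of pulling that power-on-hours figure without CrystalDiskInfo, using smartmontools on Linux. It assumes smartctl is installed and run with root rights; the device path and the attribute name cover the common SATA case only, so treat it as illustrative rather than universal:

        # Sketch: read a SATA drive's SMART power-on hours via smartmontools.
        # /dev/sda is only an example device path; NVMe drives report this
        # differently, so this targets the classic SATA attribute table.
        import subprocess

        def power_on_hours(device="/dev/sda"):
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "Power_On_Hours" in line:       # SMART attribute 9
                    raw = line.split()[-1]         # raw value is the last column
                    return int(raw.split("h")[0])  # some firmware reports "35000h+07m"
            return None

        hours = power_on_hours()
        if hours is not None:
            print(f"{hours} power-on hours, roughly {hours / 8760:.1f} years of 24/7 use")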

    Any opinions out there?
    rstew

  2. #2
    Silver Lounger RolandJS's Avatar
    Join Date
    Dec 2009
    Location
    Austin metro area TX USA
    Posts
    1,727
    Thanks
    95
    Thanked 127 Times in 124 Posts
    Good points to ponder! Let's see if others weigh in with their observations.
    "Take care of thy backups and thy restores shall take care of thee." Ben Franklin revisited.
    http://collegecafe.fr.yuku.com/forum...-Technologies/

  3. #3
    Super Moderator satrow's Avatar
    Join Date
    Dec 2009
    Location
    Cardiff, UK
    Posts
    4,486
    Thanks
    284
    Thanked 574 Times in 478 Posts
    It's difficult to give any definite answer to this; component failures fall somewhere along a 'bathtub' profile - think of a 'U' shape, with the left side = new and the right side = end of life.

    Most failures occur within the first days/weeks of use (the left-side slope, where the failure rate declines with time), then the curve levels out to a very low failure rate after, say, 2 months (the bottom of the bathtub), until the end-of-life period approaches, when the curve/failure rate rises again quite steeply.

    The 'normal' low-failure period might be anywhere from 9 months to several (8+) years, depending on many factors, cost and intended use being just two of them.
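
    If it helps to picture that curve, here's a rough numerical sketch of a bathtub-shaped failure rate built from Weibull hazard terms; the parameter values are invented purely for illustration and are not taken from any drive data:

        # Illustrative only: a bathtub-shaped failure-rate curve assembled from
        # an infant-mortality term, a flat useful-life term, and a wear-out term.
        # All parameter values below are made up for the sketch.

        def weibull_hazard(t, shape, scale):
            """Instantaneous failure rate h(t) of a Weibull(shape, scale), t in hours."""
            return (shape / scale) * (t / scale) ** (shape - 1)

        def bathtub(t_hours):
            infant   = weibull_hazard(t_hours, shape=0.5, scale=5_000)   # declining early failures
            constant = 1 / 1_000_000                                     # flat "useful life" rate
            wearout  = weibull_hazard(t_hours, shape=5.0, scale=60_000)  # rising end-of-life rate
            return infant + constant + wearout

        for hours in (100, 1_000, 10_000, 35_000, 50_000, 70_000):
            print(f"{hours:>7} h  ->  {bathtub(hours):.2e} failures/hour")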

  4. #4
    Super Moderator RetiredGeek's Avatar
    Join Date
    Mar 2004
    Location
    Manning, South Carolina
    Posts
    9,434
    Thanks
    372
    Thanked 1,457 Times in 1,326 Posts
    Quote Originally Posted by rstew View Post
    I wonder what the drive manufacturers claim for MTBF (mean time between failures).
    rstew,

    You can usually find the MTBF on the Manufacturer's website.

    Remember what MEAN is:
    The mean is the average of all numbers and is sometimes called the arithmetic mean. To calculate mean, add together all of the numbers in a set and then divide the sum by the total count of numbers. For example, in a data center rack, five servers consume 100 watts, 98 watts, 105 watts, 90 watts and 102 watts of power, respectively. The mean power use of that rack is calculated as (100 + 98 + 105 + 90 + 102 W)/5 servers = a calculated mean of 99 W per server. Intelligent power distribution units report the mean power utilization of the rack to systems management software.
    Thus, one drive may die after 1 hour while another may last for 10 years of continuous use. The problem is: which one did you get? So, IMHO, keep a rigorous backup plan and run them till they drop, or at least until they start showing problems. HTH
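
    If it helps to see that arithmetic as code, here's the rack example above worked through, plus a hypothetical pair of drive lifetimes showing how a mean hides the spread between individual units:

        # The rack example from the definition above, worked as code.
        watts = [100, 98, 105, 90, 102]
        print(sum(watts) / len(watts))           # 99.0 W per server

        # The same caveat applies to MTBF: a mean says nothing about any single unit.
        # Hypothetical lifetimes (in hours) for two drives that still "average" ten years:
        lifetimes = [1, 175_199]                 # one dies almost at once, one runs ~20 years
        print(sum(lifetimes) / len(lifetimes))   # 87,600 h mean = 10 years of 24/7 use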
    May the Forces of good computing be with you!

    RG

    PowerShell & VBA Rule!

    My Systems: Desktop Specs
    Laptop Specs

  5. #5
    5 Star Lounger RussB's Avatar
    Join Date
    Dec 2009
    Location
    Grand Rapids, Michigan
    Posts
    803
    Thanks
    10
    Thanked 50 Times in 49 Posts
    In years past (the 1980s to early 1990s) I was updating my HDDs almost every year, but only because of technology advances in speed and size. Since the mid-to-late 1990s I have been adding new drives about every 3 years and retiring the old ones, again mostly for the latest technology. Out of dozens of drives I have only ever had one fail, and it was in a laptop that saw hard industrial-strength use.

    For an office environment I would recommend changing drives every 3 years, and for home use or less critical office computers, every 5 years. YMMV.

    Most people just back up once a week or never and "hope". :-)
    Do you "Believe"? Do you vote? Please Read:
    LEARN something today so you can TEACH something tomorrow.
    DETAIL in your question promotes DETAIL in my answer.
    Dominus Vobiscum <))>(

  6. #6
    WS Lounge VIP
    Join Date
    Dec 2009
    Location
    Earth
    Posts
    8,176
    Thanks
    47
    Thanked 982 Times in 912 Posts
    I have worked with servers that have been running for 7 years and still had the original drives. I've also seen drives die after a week. The only time I've seen mechanical failure was with old desktop drives - you'd hit them on the side with the blunt end of a screwdriver to get them spinning again, then back up.

    cheers, Paul

  7. #7
    jwoods
    Guest
    Depends on the drive make and model as well...

    http://arstechnica.com/information-t...sks-are-equal/

  8. #8
    Super Moderator satrow's Avatar
    Join Date
    Dec 2009
    Location
    Cardiff, UK
    Posts
    4,486
    Thanks
    284
    Thanked 574 Times in 478 Posts
    Those of us who have worked on IBM "Deathstars" or the 10/20/30GB series by Fujitsu around the turn of the century are well aware of that.

    That data is from Backblaze; it's one of 4 or 5 articles/blogs they've published on the topic. Google published one a few years ago, too. These all relate to server workloads, IIRC.

    For a cross-section (server + home/business/gaming usage), there's been a periodic compilation based on hardware returns from a French e-tailer over the last 6 years. It's short-term data, since it covers returns under warranty: up to 12 months in the main sections and 6 months or so in the slightly later data in the conclusion section. Unfortunately, only the early reports were translated into English, though once you've studied some of the English versions (take a look at the OCZ SSDs, for example), working with the data from the later French-only reports is quite straightforward.

    * Unfortunately, the English link now redirects to the French site; the .be server (behardware.com) cache might be available on web archive sites, or use your favourite translation method.
    http://www.hardware.fr/articles/927-...osants-11.html

  10. #9
    New Lounger
    Join Date
    Jan 2015
    Posts
    24
    Thanks
    0
    Thanked 3 Times in 3 Posts
    I think that if anyone could come up with a definite answer to the OP's question, they could become rich. I'm sure many companies would pay for software or hardware that could accurately predict the life of a drive.

  11. #10
    5 Star Lounger
    Join Date
    Oct 2013
    Location
    Phoenix, AZ
    Posts
    926
    Thanks
    554
    Thanked 137 Times in 128 Posts
    "How Many Hours Is Too Many For a Disk Drive?"

    The hour you ran it and it failed.

  12. #11
    Super Moderator CLiNT's Avatar
    Join Date
    Dec 2009
    Location
    California & Arizona
    Posts
    6,121
    Thanks
    160
    Thanked 609 Times in 557 Posts
    Quote Originally Posted by rstew View Post
    And no doubt de-fragging every week or so is a serious detriment to a drive's longevity.
    Defragging a mechanical drive is never a detriment to longevity. Your drives work harder if you NEVER defrag them.
    Not only the above, but filling them to capacity is also a detriment.

    Quote Originally Posted by rstew View Post
    For a computer that runs continuously, that amounts to only about 4 years of use, which is probably easier on a drive than stop-start every day.
    A computer may be up and running 24/7, but that does not mean a drive is continuously running. Mech drives are designed and built to "start/stop" frequently.

    What is a WELL KNOWN & PROVEN detriment to a mechanical drive:

    1. An adverse and prolonged thermal environment.
    2. An adverse and prolonged vibration environment.
    3. Power failures, electrical variations (voltage and current variations; poor quality power), and electrical surges.
    4. Factory defects. There is no such thing as a man-made technology that is infallible.

    DRIVE IMAGING
    Invest a little time and energy in a well-thought-out BACKUP regimen and you will have minimal downtime and headaches.

    Build your own system; get everything you want and nothing you don't.
    Latest Build:
    ASUS X99 Deluxe, Core i7-5960X, Corsair Hydro H100i, Plextor M6e 256GB M.2 SSD, Corsair DOMINATOR Platinum 32GB DDR4@2666, W8.1 64 bit,
    EVGA GTX980, Seasonic PLATINUM-1000W PSU, MountainMods U2-UFO Case, and 7 other internal drives.

  13. #12
    Silver Lounger
    Join Date
    Mar 2014
    Location
    Forever West
    Posts
    2,072
    Thanks
    0
    Thanked 259 Times in 248 Posts
    I built a new computer for Win7 when it came out and put in a 250GB C: drive and a 500GB D: drive, and I haven't had a problem with it being on 24/7 since. I do clean the computer frequently [video card fan lets me know when] and keep it on a UPS.

  14. #13
    4 Star Lounger
    Join Date
    Jun 2011
    Location
    Hampshire (the old one)
    Posts
    525
    Thanks
    21
    Thanked 72 Times in 62 Posts
    I have an XP machine which has been up continuously (apart from maintenance, holidays, etc.) since 2003. The disk only died in 2014. It's now as good as new with a drive I took from a video recorder that had never been used.

    Quote Originally Posted by Berton View Post
    <snip>
    [video card fan lets me know when]
    Somewhat OT, but that reminds me; the machine above is on its 3rd video card. The way it's orientated in the case means the fan's on the bottom rather than the top, and it's only glued on. I've had two fall off, frying the card. The last one took the motherboard with it. The present card doesn't have a fan, and is more powerful. So it's got its 2nd motherboard, 3rd video card, and 2nd HD, all with the same Windows OEM licence, which still activates nicely!

  15. #14
    3 Star Lounger
    Join Date
    Jun 2014
    Posts
    283
    Thanks
    11
    Thanked 6 Times in 5 Posts
    So I went to the Seagate site to see what they say about the expected MTBF for their drives.
    Amazingly, they say it's about 1.2 MILLION hours!! That is about 137 years of continuous operation.
    From their site:
    "AFR and MTBF specifications are based on the following assumptions for business critical storage system environments:
    • 8,760 power-on-hours per year.
    • 250 average motor start/stop cycles per year.
    • Operations at nominal voltages.
    • Systems will provide adequate cooling to ensure the case temperatures do not exceed 40C. Temperatures outside the specifications in Section 2.9 will increase the product AFR and decrease MTBF"
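
    A rough way to read that figure, assuming the constant failure rate that an MTBF number implies: it describes a population of drives, not one unit. 1.2 million hours works out to an annualized failure rate of well under 1%, not a 137-year lifespan per drive; a quick conversion sketch:

        import math

        MTBF_HOURS = 1_200_000   # Seagate's quoted figure
        HOURS_PER_YEAR = 8_760   # 24/7 operation, per their assumptions

        # Under a constant-failure-rate (exponential) model:
        afr = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)
        print(f"Annualized failure rate: {afr:.2%}")            # about 0.73% per year

        # In a population of 1,000 such drives running 24/7:
        print(f"Expected failures per year: {1000 * afr:.1f}")  # about 7 drives

    Field surveys, like the Backblaze data linked earlier in the thread, tend to report noticeably higher annual rates than the datasheet figure.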

    Wonder why so many drives die before their time?
    rstew

  16. #15
    Silver Lounger
    Join Date
    Mar 2014
    Location
    Forever West
    Posts
    2,072
    Thanks
    0
    Thanked 259 Times in 248 Posts
    The drives I use mention a 100,000-hour MTBF (Mean Time Between Failures). With 8,760 hours in a year, that would still be longer than most people will keep a computer: a bit over 11 years.

