Data Storage Hardware

Can SSDs Be Used For Software Development?

hackingbear writes "I'm considering buying a current-generation SSD to replace the external hard disk drive I use for my day-to-day software development, especially to boost the IDE's performance. Size is not a great concern: 120GB is enough for me. Price is not much of a concern either, as my boss will pay. I do have concerns about the limits on write cycles as well as write speeds. As I understand it, current SSDs work around the write-cycle limit by spreading writes across the flash (wear leveling). That would be good enough for regular users, but in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day. I wonder how SSDs will do in this usage pattern. What's your experience developing on SSDs?"
  • I'm not sweating it (Score:5, Insightful)

    by timeOday ( 582209 ) on Friday March 06, 2009 @04:23PM (#27096465)
    I'm using the Intel SSD and I think it's great - fast and silent. Will it last? I'd argue you never know about any particular model of hard drive or SSD until a few years after it is released. On the other hand, I'd also argue it doesn't matter much. Say one drive has a 3% failure rate in the 3rd year and another has a 6% rate. That's a huge difference percentage-wise (100% increase). And yet it's only a 3% extra risk - and, most importantly, you need a backup either way.
    • by Zebra_X ( 13249 ) on Friday March 06, 2009 @04:46PM (#27096891)

      The real key here is this: when an SSD can no longer execute a write, the disk will let you know. Reads do not cause appreciable wear, so you will end up with a read-only disk when the drive has reached the end of its life. This is vastly superior to the drive just dying because it's had enough of this cruel world.

      I'd be interested to see some statistics on electrical failure of these drives though... but it seems that isn't as much of an issue.

      • Re: (Score:3, Funny)

        You haven't actually done much work with these drives have you? I can tell because of the pixels and the amount of nonsense you display....

        Point is, for significant use, SSDs crap out in less than a year.

        And yes, I have statistics and anecdotal evidence both on my side.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Warning: I'm an Intel employee

      But I've been using the 80GB Intel MLC drive since mid-year 2008 and it's great. Very fast and silent -- I refuse to go back to a mechanical drive again. It's perfect for a client workload (99.9% of users) but not perfect for a transaction heavy server (use the SLC drive).

      My workload is writing code and generating/parsing very large data sets from fab (1 - 4 GB).

      Here is the "insider" information from my drive:

      6.3TB written total (roughly 9 months of usage)
      58 cycles (average)

      • by tytso ( 63275 ) * on Friday March 06, 2009 @09:38PM (#27101017) Homepage

        So interested people want to know --- how do you get the "insider" information from an X25-M (i.e., total amount of data written, and number of cycles for each block of NAND)?

        I've added this capability to ext4, and on my brand-spanking new X25-M (paid for out of my own pocket because Intel was too cheap to give one to the ext4 developer :-), I have:

        <tytso@closure> {/usr/projects/e2fsprogs/e2fsprogs} [maint]
        568% cat /sys/fs/ext4/dm-0/lifetime_write_kbytes
        51960208

        Or just about 50GB written to the disk (I also have a /boot partition which has about half a GB of writes to it).

        But it would be nice to be able to get the real information straight from the horse's mouth.
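
        One possible route (an assumption on my part, since not every firmware revision exports such a counter): smartmontools can dump the drive's raw SMART attribute table, and if the drive reports a host-writes attribute it will show up there.

        # dump the SMART attribute table; any vendor-specific host-writes
        # counter the firmware exports will be listed among the attributes
        # (replace /dev/sda with whatever device node the SSD actually is)
        sudo smartctl -A /dev/sda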

  • Swap? (Score:4, Interesting)

    by qoncept ( 599709 ) on Friday March 06, 2009 @04:23PM (#27096469) Homepage
    Do you have a swap file/partition? You're talking hundreds of writes a day, tops. That sounds like a big number, but in reality it just ain't. I would question why you feel the need for an SSD, though. I know the difference between $300 and $50 isn't that big in the grand scheme of things, but what benefit are you expecting?
    • Re:Swap? (Score:5, Informative)

      by timeOday ( 582209 ) on Friday March 06, 2009 @04:31PM (#27096593)
      The main difference is a good SSD is much, much faster than any hard drive. If discussions about the topic don't give that impression, it's only because people fixate on sustained transfer - where there is still some competition between slower SSDs and hard drives - rather than seek time, which is often more important, and where SSDs blow the doors off hard drives. To me, suddenly widening the biggest bottleneck in PC performance for the first time in a couple decades is pretty exciting.
      • Re:Swap? (Score:4, Informative)

        by Mad Merlin ( 837387 ) on Friday March 06, 2009 @04:43PM (#27096819) Homepage

        Yeah, except only the SLC SSDs are worth having. MLC SSDs are junk and extremely common, you're better off with a spinning platter drive. However, I can't recommend SLC SSDs enough, they're substantially faster than conventional spinning platter drives in all ways.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Would you care to explain your opinion that MLC SSDs are junk? I know some people have gotten a bad impression of MLC SSDs because Windows' default configuration doesn't play nicely with them. However, if you tune Windows, MLCs work great. If you use OS X, just about everything is, by accident, properly tuned and they work great. My guess is that with Linux they will just work great.

          Three days in with my new SSD and OS X, and I love it. The almost total elimination of disk latency has made it a whole new experience.

      • Re:Swap? (Score:5, Insightful)

        by afidel ( 530433 ) on Friday March 06, 2009 @05:06PM (#27097287)
        The best bet if your project is smaller than about 20GB is to buy a box full of RAM and use a FAT32-formatted ramdrive. Orders of magnitude faster than even an SSD.
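
        On Linux a rough equivalent is tmpfs, which gives you a RAM-backed mount in one command and needs no formatting at all. A quick sketch (the mount point, size and project path are just placeholders):

        # carve ~4GB of RAM into a scratch filesystem and build there
        sudo mkdir -p /mnt/rambuild
        sudo mount -t tmpfs -o size=4g tmpfs /mnt/rambuild
        cp -a ~/project /mnt/rambuild/ && cd /mnt/rambuild/project && make
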
  • Does anyone know when the Sandisk SSD G3 are coming out?

  • should be fine (Score:3, Informative)

    by MagicMerlin ( 576324 ) on Friday March 06, 2009 @04:25PM (#27096491)
    Unless you type like The Flash, even MLC SSDs from the better vendors (Intel) should be fine for anything outside of server applications. Simple math should back this up (how many GB total the drive can write over its lifetime vs how much you produce each day). merlin
    • by Tetsujin ( 103070 ) on Friday March 06, 2009 @04:51PM (#27097009) Homepage Journal

      Unless you type like The Flash, even MLC SSDs from the better vendors (Intel) should be fine for anything outside of server applications. Simple math should back this up (how many GB total the drive can write over its lifetime vs how much you produce each day).

      I don't know who this "The Flash" is... But this reminds me of some odd invoices I've seen here lately at Star Labs. Someone special-ordered a custom keyboard rated to one hundred times the usual keystroke impact, an 80MHz keyboard controller, and a built-in 1MiB keystroke buffer. Pretty ridiculous, huh? The usual 10ms polling rate for a USB keyboard should be enough for anybody - no need for all that fancy junk.

    • Re: (Score:3, Insightful)

      by clone53421 ( 1310749 )

      how many GB total the drive can write over its lifetime vs how much you produce each day

      It's not as simple as that. Make a small change (insertion or deletion) near the beginning of a large source code file, and the entire file – from the edit onward – must be written over. Then, any source file that has been modified must be read and recompiled, overwriting the previous object files for those sources. Finally, all the object files must be re-linked into the executable.

      So you're not just writing ___ bytes of code. You're writing ___ bytes of code, re-writing ___ bytes of code

      • It's not as simple as that. Make a small change (insertion or deletion) near the beginning of a large source code file, and the entire file – from the edit onward – must be written over.

        It's not like any normal editor actually opens the file in edit mode and only patches in bytes that have been modified. They all rely on the simple solution of actually writing the whole file at once.

      • Re: (Score:3, Informative)

        by CAIMLAS ( 41445 )

        So? Find out how much is actually being written. It's trivial (at least in Windows). If this is a Linux machine, you can either use iostat or look at the counters under /sys which record this information and do some basic arithmetic.

        So, say you rewrite (say) 2GB of data a day. Set the disk cache high. The SSD should last a year or two, minimum, at this rate of writing, because it balances the writes across the disk.
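
        For the Linux case, a minimal sketch (assuming the SSD shows up as /dev/sda and the sysstat package is installed):

        # total data written to the device since boot, two ways
        iostat -dk /dev/sda                                # see the kB_wrtn column
        awk '{print $7*512/1024/1024 " MB written"}' /sys/block/sda/stat
        # run either before and after a day's work and subtract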

        Another approach which could be taken is not use the SSD for daily compiling use. U

  • by vlad_petric ( 94134 ) on Friday March 06, 2009 @04:26PM (#27096509) Homepage

    If they're good enough for databases (frequent writes), they should be just fine for devel.

    OTOH, you should be a lot more concerned about losing data because of a) software bugs or b) mechanical failures in a conventional drive.

    • by Z00L00K ( 682162 )

      It also depends on what type of filesystem you use. A journaling filesystem like ext3 can wear down a disk a lot faster than a non-journaling filesystem.
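
      Journaling aside, an easy way to cut routine writes on any Linux filesystem is to mount it with noatime, so merely reading a file doesn't trigger a metadata write. A hypothetical /etc/fstab line (device and mount point made up for illustration):

      # device     mountpoint   fstype  options           dump  pass
      /dev/sda2    /home/dev    ext3    defaults,noatime  0     2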

    • Database edits don't propagate through the database the way a code edit propagates through the files in your project. In addition to the source code itself, object files, dlls, and executables will probably have to be re-written if you change a source code file.

  • Backups (Score:5, Informative)

    by RonnyJ ( 651856 ) on Friday March 06, 2009 @04:29PM (#27096559)

    If you're worried about losing work, I think your backup solution is what you need to improve instead.

  • by wjh31 ( 1372867 ) on Friday March 06, 2009 @04:31PM (#27096597) Homepage
    Could a RAID setup give the performance boost I assume you are after? I've no experience with them, but I gather they can offer higher read/write rates. Can someone with more experience say exactly how much of a performance boost they give? A set of small HDDs could be the same price, without the concerns over write-cycle limits.
    • by Guspaz ( 556486 )

      RAID can increase throughput, but it can't reduce access latencies. Of course, if you can read two different things at the same time, that has a similar effect to halving the effective access time. But it'd take a lot of Raptors to get the effective access time down from ~7ms to ~0.1ms.

  • IDE? (Score:5, Funny)

    by Hatta ( 162192 ) on Friday March 06, 2009 @04:35PM (#27096651) Journal

    You should get an SATA SSD instead.

  • Answer: (Score:2, Insightful)

    by BitZtream ( 692029 )

    Yes, a SSD can be used for development.

    A better question to ask is should you use a SSD for development.

  • I have been using my Thinkpad X300 for development for the last several months without any problems!

  • Since you're asking about it and mentioning revision control up front, I'm going to assume that you'll be committing your changes frequently to the revision control system.

    If that's the case, you've already got a backup system in place to deal with hard disk failures that's probably better than any other solution for a workstation. Not only do you get backups of your source, you get (assuming your commits are good) nice checkpoints of working code rather than a backup of some random stuff you were working on.

  • SSDs = productivity (Score:5, Interesting)

    by Civil_Disobedient ( 261825 ) on Friday March 06, 2009 @04:42PM (#27096805)

    I use SSDs in both of my development systems--the first was for the work system, and after seeing the improvements I decided I would never use spinning-platter technology again.

    The biggest performance gains are in my IDE (IntelliJ). My "normal" sized projects tend to link to hundreds of megs of JAR files, and the IDE is constantly performing inspections to validate that the code is correct. No matter how fast the processor, you quickly become I/O-bound as the computer struggles to parse through tens of thousands of classes. After upgrading to SSD, I no longer find the IDE struggling to keep up.

    I ended up going with SSD after reading this suggestion [jexp.de] for increasing IDE performance. The general gist: the only way to improve the speed of your programming environment is to get rid of your file access latency.

  • by Zakabog ( 603757 ) <john.jmaug@com> on Friday March 06, 2009 @04:43PM (#27096815)

    The company I'm working at thought about using SSDs, but we were thinking more on the server end (to allow faster database access.) You don't have to worry about the write limits as it's highly unlikely you will hit them within the lifetime of a standard hard drive.

    The main issue we ran into was cost: the drives we were looking at started around $3,000 for something like 80 gigs. That just wasn't worth it for us, though if you personally feel that the added cost (and I doubt you're looking at a $3,000 SSD, more likely the $300 drives) is worth the performance gains, then go for it. Though I think even for $300 it won't make a worthwhile difference.

    There are other bottlenecks to consider: is your CPU fast enough, do you have enough RAM, could the hard drive your software and OS are on use an upgrade, etc.? Perhaps even buy an internal SATA drive (if you can) to replace the external you're using; those external enclosures generally aren't known for their performance. If you've exhausted all of those options and you still need more speed, then I'd say go for the SSD.

  • That would be good enough for regular users, but in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day

    I couldn't help but notice that you said several times per day, rather than several times per second.

    Are you worried that after you die of old age, in the unlikely event that your great grandkids start to have problems with their inherited flash drive, they won't be able to replace it?

    • Re: (Score:3, Funny)

      I used to worry about rewrites on my eeepc. But I have installed Ubuntu twice in the last month and the disk seems to be exactly the same as it was initially, so I don't worry any more.
  • hey, if your boss is paying for it, buy a couple and replace them when they wear out
    (or just tell him you'll need a better, bigger, faster one in a year)

  • Something that'll handle 30+GB of RAM. Then it pretty much doesn't matter.

     

  • make backups? (Score:2, Insightful)

    by bzipitidoo ( 647217 )

    You do back up your work, don't you? You know, in case it's lost, stolen, destroyed, etc.? An SSD going bad is hardly the only danger. So why not try out an SSD, and if you're especially worried, backup more frequently and keep more backups?

  • Developers should use *slow* machines
    by petes_PoV ( 912422 ) on Friday March 06, 2009 @04:51PM (#27097007)
    That way it'll encourage them to write efficient implementations.

    If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if such a thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. While those of us in the real world have to get by on "normal" machines.

    When we complain about poor performance, they just shrug and say "well, it works fine on my nuclear-powered, warp-10, so-fast-it-can-travel-back-in-time machine".

    However, if they were made to develop the software on boxes that met the minimum recommended spec. for their operating system, they'd have to give some thought to making the code run efficiently. If it extended the development time and reduced the frequency of updates, well that wouldn't be a bad thing either.

    • by Anonymous Coward on Friday March 06, 2009 @05:07PM (#27097309)
      Compile time has nothing to do with inefficient algorithms slowing down programs.
    • by vadim_t ( 324782 ) on Friday March 06, 2009 @05:23PM (#27097607) Homepage

      Disagree. This problem went away for the most part.

      First, performance isn't nearly the problem it used to be. We aren't using the kind of hardware that needs the programmer to squeeze every last drop of performance out of it anymore. In fact, we can afford to be massively wasteful by using languages like Perl and Python, and still get things done, because for most things the CPU is more than fast enough.

      Second, we're not coding as much in C anymore. In C I could see this argument: a lazy programmer writing bubble sort or something dumb like that, because for him waiting half a second on his hardware isn't such a problem. But most of this has been abstracted away these days. Libraries and high-level languages contain highly optimized algorithms for sorting, searching and hashing. It's rare to need to code your own implementation of a basic data structure.

      Third, the CPU is rarely the problem anymore, I/O is. Programs spend most of their time waiting for user input, the database, the network, or in rare cases, the hard disk. A lot of code written today is shinier versions of things written 20 years ago, and which would run perfectly fine on a 486. Also for web software the performance of the client is mostly meaningless, since heavy lifting is server-side.

      Also, programming has a much higher resource requirement than running the result. People code on 8GB boxes because they want to: run the IDE, the application, the build process with make -j4, and multiple VMs for testing. On Windows you're going to want to test your app on XP and Vista, on Linux you may need to try multiple distributions. VMs are also extremely desirable for testing installers, as it's easy to forget to include necessary files.

      I'd say that giving your developer a 32-core box would actually be an extremely good idea, because multicore CPUs have massively caught on, but applications capable of taking advantage of them are few. Since threaded code isn't something a lazy programmer writes by default but actually takes effort, giving the programmers reasons to write it sounds like a very good idea to me.

    • by glwtta ( 532858 ) on Friday March 06, 2009 @05:37PM (#27097887) Homepage
      That way it'll encourage them to write efficient implementations.

      That's just stupid - I'm going to write better code because my compiles take longer?

      There seem to be a lot of these posts on Slashdot with down-home folk wisdom on how to educate the smug and indifferent programmer, who is so clearly divorced from reality that he doesn't even know what computers his customers use. I get the sneaking suspicion that the authors know very little about actual programming.

      There are two reasons for bad software:

      a) incompetent programmers
      b) bad project management

      The latter includes things like unrealistic timelines and ill-defined scope and requirements. I'm not sure which one is the bigger culprit, but both are pervasive.

      In neither case, though, are you going to fix the problem with gimmicky bullshit like inadequate equipment.
    • by ultrabot ( 200914 ) on Friday March 06, 2009 @05:43PM (#27098001)

      That way it'll encourage them to write efficient implementations.

      Actually, the opposite is true.

      If development is painful (which it is, if your workflow is hampered by slow builds), you will produce crappier code. It's all about retaining focus & flow. The sad thing is, compilation still takes too long; you can still check your Gmail or refresh Slashdot.

      How many of you are reading this article while automake is checking the version of your fortran compiler in order to run gcc on a .c file?

    • by psnyder ( 1326089 ) on Friday March 06, 2009 @05:48PM (#27098107)
      A similar argument was used in World War II to keep bolt action sniper rifles in use in some countries instead of 'upgrading' to 'auto-loading' rifles. With bolt action, after shooting, you had to physically lift the bolt, cock it in place, and push it down again before you could fire another shot.

      The argument was, if the snipers knew they couldn't fire again immediately, they would be more careful lining up and aiming that first shot. With an 'auto-loading' rifle, you could keep your eye in the scope and fire off more rounds.

      It seems quite obvious that if you're in the field, the seconds after that first shot are very important. If you need to take your eye away from the scope and spend the time reloading the chamber, the outcome could be completely different than if you were able to fire off a few rounds immediately.

      A good sniper would have lined up that first shot carefully no matter what rifle they were using, in the same way a good programmer will write efficient, elegant algorithms no matter what machine they're using. You'd only have to 'limit' your programmers if you think they're bad programmers. If a supervisor is thinking along these lines, they've already hired bad programmers and are setting both themselves and their team up for failure. The faster the machines, the less time wasted. You don't need forced limits reminding them about efficiency, because any decent programmer will already be thinking about it.
    • Re: (Score:3, Interesting)

      by rossz ( 67331 )

      I worked in the game industry in the past and I felt this was one of their problems. The developers all had the latest greatest processors and the cutting edge overpriced video cards. The games ran just fine, of course. On a typical system, however, the game performance would suck big time. I refuse to replace my computer every year just to play the latest game.

      You can continue to give the developers cutting edge hardware, but make sure your QA people are running "typical" systems.

      My experience was from

    • by merreborn ( 853723 ) on Friday March 06, 2009 @06:12PM (#27098519) Journal

      Developers should use *slow* machines
      That way it'll encourage them to write efficient implementations.
      If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if such a thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. While those of us in the real world have to get by on "normal" machines.

      No, developers should develop on fast machines... and test on slow machines.

      It's a waste of money to pay your programmers $50/hr to sit and wait for compiles to complete, IDEs to load, etc. That hurts the employer, and the additional cost gets passed on to the customer. It's in everyone's best interest that developers are maximally productive.

      Give them fast development environments, and realistic test environments.

  • It really depends upon the size of the stuff you are doing. If you are going to recompile the same stuff over and over and the dataset will fit in memory... you most likely will get little to no benefit. Linux (Vista and others) caches every single file until some app needs memory and pushes it out. It sounds like he's doing it on a box by himself (not a server shared by 5000 other people), and with memory so cheap... unless you are compiling something huge I'd guess that you probably not have to disk aga

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday March 06, 2009 @04:53PM (#27097053) Journal

    Just got one in a Dell laptop, came with Ubuntu. A subjective overview:

    I have no idea how well it performs with swap. I'm not even really sure why I have swap -- I don't have quite enough to suspend properly, but I also never seem to run out of my 4 gigs of RAM.

    It's true, the write speed is slower. However, I also frequently transfer files over gigabit, and the bottleneck is not my SSD, it's this cheap Netgear switch, or possibly SSH -- I get about 30 megabytes per second either way.

    So, is there gigabit between you and the SVN server? If so, you might run into speed issues. Maybe. Probably not.

    Also worth mentioning: Pick a good filesystem if a lot of small files equals a lot of writes for you. A good example of this would be ReiserFS' tail packing -- make whatever "killer FS" jokes you like, it really isn't a bad filesystem. But any decent filesystem should at least be trying to pack writes together, and I only expect the situation to improve as filesystems are tuned with SSDs in mind.

    It also boots noticeably faster than my last machine. This one is 2.5GHz with 4 gigs of RAM; last one was 2.4GHz with 2 gigs, so not much of a difference there. It becomes more obvious with actual use, like launching Firefox -- it's honestly hard to tell whether or not I've launched it before (and thus, it's already cached in my massive RAM) -- it's just as fast from a cold boot. The same is true of most things -- for another test, I just launched OpenOffice.org for the first time this boot, and it took about three seconds.

    It's possible I've been out of the loop, and OO.o really has improved that much since I last used it, but that does look impressive to me.

    Probably the biggest advantage is durability -- no moving parts to be jostled -- and silence. To see that in action, just pick out a passively-cooled netbook -- the thing makes absolutely no discernible noise once it's on, other than out of the speakers.

    All around, I don't see much of a disadvantage. However, it may not be as much of an advantage as you expect. Quite a lot of things will now be CPU-bound, and there are even the annoying bits which seem to be wallclock-bound.

  • by Mysticalfruit ( 533341 ) on Friday March 06, 2009 @04:54PM (#27097073) Homepage Journal
    If price isn't an issue, then he should get himself 4 ANS-9010's and set them up as a hardware RAID0 hanging off the back of a good fast RAID controller.

    If he filled each of them with 4GB DIMMs he'd have 128GB of storage space.

    Volatile? Hell yeah... But also just crazy fast...
  • Simple arithmetic (Score:5, Insightful)

    by MathFox ( 686808 ) on Friday March 06, 2009 @04:56PM (#27097093)
    A typical flash cell easily lasts 10,000 writes. Let's assume that every compile (or svn update) only touches 10% of your SSD space; that gives you 100,000 "cou" (compiles or updates). If you do 20 cou per day, the SSD will last 5,000 working days, or 20 years.

    Now find a hard disk that'll last that long.
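
    Spelled out, with the same assumed figures (10,000 cycles per cell, each compile or update rewriting ~10% of the drive, 20 of them a day, ~250 working days a year):

    # 10,000 cycles / 10% of the drive per "cou" = 100,000 cous;
    # at 20 a day and ~250 working days a year:
    echo "10000 / 0.1 / 20 / 250" | bc -l    # ~20 years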

  • How much do you hit ^X^S? And are you really going to notice a few ms difference when loading your source file off the drive? If you were using it as a database server with frequent writes that'd be one thing, but software development?

    If your boss is willing to shell out for one, then go for it. If you actually do the math on the write limit, you'll find that you'll be dead of old age long before the drive runs out of writes in any given cell (Last time I checked it was something like 160 years of con

  • RAM disk ? (Score:3, Interesting)

    by smoker2 ( 750216 ) on Friday March 06, 2009 @04:59PM (#27097179) Homepage Journal
    Can't you just load up on RAM and create a RAM drive for working stuff, and keep the slow HDD for shutdown time? Cheaper than SSD, and no write-cycle issues. You can also get RAM-based IDE and SATA drives.
  • by guruevi ( 827432 ) on Friday March 06, 2009 @05:00PM (#27097201)

    Although they use an SSD for another purpose (cache on a RAID controller), they said current SSDs last about 6 months under heavy read/write conditions, even with wear-leveling techniques. Hard drives last a whole lot longer for those purposes, I would say.

    I think an SSD in a desktop-type system would be all right; however, I would suggest you invest in some fast disks instead of SSD until SSD matures and more lifetime data is available. Remember, MTBF doesn't always mean that a piece of hardware will last that long. Most likely it will die long before that.

  • How about ramdisks? (Score:3, Interesting)

    by ultrabot ( 200914 ) on Friday March 06, 2009 @05:01PM (#27097217)

    Sometimes I wonder whether it would make sense to optimize the disk usage for flash drives by writing transient files to ramdisk instead of hard disk. E.g. in compilation, intermediate files could well reside on ramdisk. If you rely on "make clean" a lot (e.g. when you are rebuilding "clean" .debs all the time), you won't have that much attachment to your object files.

    Of course this may require more work than what it's really worth, but it's a thought.
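
    One low-effort variant, assuming a toolchain that honours TMPDIR (GCC does, for its scratch files): point it at /dev/shm, which is already a tmpfs mount on most Linux distributions.

    export TMPDIR=/dev/shm    # compiler temp files land in RAM instead of disk
    make clean && make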

  • Intel or bust (Score:3, Informative)

    by Chris Snook ( 872473 ) on Friday March 06, 2009 @05:07PM (#27097303)

    Developing on a conventional SSD with large user-visible erase blocks is PAINFUL. The small writes caused by creating temporary files in the build process absolutely destroy performance. There are ludicrously expensive enterprise products which work around this in software, but at the laptop/desktop scale, you want something that's self-contained. As far as I'm aware, Intel's X25 drives are the only ones actually on the market now that hide the erase blocks effectively at the firmware level. The MLC ones should be fine.

  • by Daimanta ( 1140543 ) on Friday March 06, 2009 @05:28PM (#27097693) Journal

    yet, but I am eager to learn. What happens if you exceed the limit of writes? How does usage degrade the disks? Is heat bad? Does using the SSD as virtual memory degrade the disk fast?

    What about bad sectors, how do they compare with HDDs? Are SSDs generally more sturdy(longer lifespans) than HDDs?

    Inquiring minds want to know.

  • by Stephen Ma ( 163056 ) on Friday March 06, 2009 @05:33PM (#27097821)
    As I understand it, flash drives use wear leveling to spread the writing burden over many sectors of the disk. So each time I overwrite the same sector, say logical sector 100, the data goes to a different spot on the drive. That makes sense.

    However, suppose I fill up the drive with data, then free half of it. My question is: how does the drive know that half its sectors are free again for use in wear leveling? As far as the drive knows, all of its sectors still hold data from when the drive was full, and no sectors are available for levelling purposes.

    Is there some protocol for telling the drive that "sectors x, y, z are now free"? Or does the drive itself understand the disk layout of the zillions of different filesystems out there?
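
    For what it's worth, the ATA command set does define exactly that kind of protocol: TRIM, by which the OS tells the drive which sectors no longer hold live data, so the drive never has to understand the filesystem itself. It only helps where the drive, the kernel and the filesystem all support it; on a Linux box where they do, a hand-run sketch looks like this:

    # tell the SSD which blocks the mounted filesystem considers free
    # (requires TRIM/discard support in drive, kernel and filesystem)
    sudo fstrim -v /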

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday March 06, 2009 @05:42PM (#27097981) Homepage

    Everyone's going SSD-crazy, but I'm not yet convinced. They're not _that_ much faster than spinning platters of death, at least not yet, and I'd much rather throw a ton of RAM at the disk cache for the same amount of money.

    If you're really worried about performance, invest in a true RAM disk - the kind that has DDR memory slots on one side and a SATA connector on the other. You can write a 2-line script to mount and format it on boot, and even back up its contents on shutdown (if needed). That's the ultimate /tmp drive, and it will not wear out no matter how hard you pound it.
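
    That "2-line script" really is about two lines. A sketch, assuming the RAM drive enumerates as the hypothetical /dev/sdb:

    mkfs.ext2 -q /dev/sdb    # no point journaling a volatile /tmp
    mount /dev/sdb /tmp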

  • by BobSixtyFour ( 967533 ) on Friday March 06, 2009 @05:48PM (#27098093)

    Serious Long-Term Fragmentation Problems...

    Potential buyers BEWARE, and do some research first. Google the term "intel ssd fragmentation" before purchasing this drive to understand this potential long-term issue. Chances are it won't impact most people, but if you plan on using this drive to house lots of smaller files, think again.

    Also: absolutely avoid using defragmentation tools on this drive! They will only decrease the life of the drive.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...