Recovering Deleted Files on ReiserFS3?

DarkSarin asks: "I have a rather serious problem: I managed to accidentally delete some files (rather important ones at that!) while trying to back them up to CD (I was using a GUI burning program that will remain nameless for now). How do you recover accidentally deleted files in ReiserFS? This thread (started by me) indicates that you can't recover them. Note that I had found a way to rebuild the tree, but that didn't work. It seems odd to me that you wouldn't be able to recover accidental deletions, but that really does seem to be the case. Help? Please?"
This discussion has been archived. No new comments can be posted.

  • Didn't find a way to recover my files either. :(

    • Depending on what you want to undelete, you can always do a grep -a -100 STRING /dev/DEVICE. That recently came in handy for me when I accidentally deleted a directory with a script file that I really needed. It took me a whole day to write that script, so I was not eager to rewrite it. I managed to recover the whole script.
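The grep-the-raw-device trick can be rehearsed on a scratch file before pointing it at a real device (which needs root, and ideally an unmounted or read-only filesystem). A minimal sketch, with the scratch file standing in for /dev/DEVICE:

```shell
# Build a fake "raw device": binary junk with a lost script buried inside.
printf 'junk\0\0#!/bin/sh\necho my important script\nexit 0\nmore junk\n' > disk.img

# -a treats binary data as text; -B/-A pull in surrounding lines so the
# whole file comes out, not just the line that matched.
grep -a -B 2 -A 100 'important script' disk.img > recovered.txt

grep -c 'important script' recovered.txt
```

The hard part on a real disk is picking a search string unique to the lost file; something from the file's first few lines works best, since the rest tends to follow contiguously.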
    • Solution (Score:3, Informative)

      by scheme ( 19778 )
      Since ext3 is just ext2 with added features, you can undelete the file the same way you would on ext2. There's actually an undelete HOWTO for ext2. The basic gist of it is that you immediately unmount the partition and remount it read-only. Then you grab a list of recently deleted blocks and use that to reconstruct the file. I've done it once or twice, but I've been fortunate enough to have a tape backup solution that has alleviated the need for this for a while now.
      • Comment removed based on user account deletion
    • apt-get install -u gtkrecover recover

      It's even got a GUI for the CLI-averse. It's for recovering ext2/ext3 filesystems via an inode-grab-by-date method. I've used it many times to recover deleted files quickly. There's also the Lazarus toolkit, but I haven't personally used it.

      I've personally written some simple tools to recover MS Office and RTF files, which is just a little more advanced than grepping a raw device. However, it also handles partial partition recovery this way -- like if you're recovering
  • Good luck... (Score:4, Interesting)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday December 01, 2003 @09:39PM (#7605172) Journal
    A filesystem has never (AFAIK) implemented a trash / recycle bin folder -- not on Windows or OS X, and not on any UNIX that I know of.

    The reason for this is that a recycle bin is to save you from accidental deletions. If you delete a file from the nice, big, friendly GUI, it usually asks you at least once whether you want to delete it, then instead moves it to the trash. When it's time to empty the trash, it asks you again to make sure you're not screwing yourself over.

    However, many programs create temporary files and then promptly delete them -- so many times that it would be ridiculously inefficient (both in space and fragmentation levels) to put them into the trash. Furthermore, can you imagine looking for your files in the middle of all sorts of files with names like 11025u012348512i51253.tmp?

    As someone said on the other forum, there's the hard way -- grep for it on the raw partition. This may not even work with ReiserFS, I'm not sure. The usual way to protect yourself from this is to back up in the first place (yeah, I know) and to only run programs you trust as a user that can delete files that you need.

    I would suggest that you try the grep method, and if that doesn't work, learn from it. The safest way to do this is (ironically) the command line. If you type "cp", you know for sure it will copy the file. If you type "mkisofs" or something similar, it is very unlikely that it will delete the files. And these tools (along with mv, which does delete the old copy after the new one is successful) have been around for so long and are so simple that the only way you could screw this up is through a very stupid mistake (like rm instead of cp) or using an experimental filesystem, which despite the opinion at Gentoo, ReiserFS is not.
    • Re:Good luck... (Score:4, Interesting)

      by GigsVT ( 208848 ) on Monday December 01, 2003 @09:53PM (#7605266) Journal
      On the other hand, an operating system is considered inefficient these days if it doesn't use 100% of the RAM for something or another.

      What use is "empty" disk space? The OS might as well use it for something, as long as it can ditch things that aren't important if there is a demand for space. As for your temp file issue, it's easy enough to just make /tmp and /var/tmp (and any others) a different file system that doesn't act this way.

      Modern file systems don't need to have a limited number of inodes. Even ext3 by default creates way too many inodes on large file systems, if you are going to be storing files of any significant size.*

      I think it's high time for filesystem reform, and it doesn't need anything revolutionary like databased buzzword filled paradigm shifting crap. It's just logical evolutionary improvements.

      *And it wastes 5% of the space by default! That's 100 GB on a 2TB fs completely wasted! Always use -m0 on storage fs's or -m1 on system fs with mke2fs. Use -T largefile4 to make one inode per 4MB, which is fine for storing "large" files. Otherwise the fs takes hours to create all those damn unnecessary inodes on a large fs.
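The mke2fs flags mentioned in the footnote can be tried risk-free on a file-backed image (no root needed; mke2fs and tune2fs are part of e2fsprogs). The image name and size here are arbitrary:

```shell
# A small sparse scratch "disk" to format (64 MB, allocated lazily).
dd if=/dev/zero of=storage.img bs=1M count=0 seek=64 2>/dev/null

# -m0: reserve nothing for root; -T largefile4: roughly one inode per
# 4 MB of space instead of one per few KB, so mkfs creates far fewer
# inodes and finishes much faster on big storage-only filesystems.
mke2fs -q -F -m0 -T largefile4 storage.img

# Confirm what we got:
tune2fs -l storage.img | grep -E 'Reserved block count|Inode count'
```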
      • Re:Good luck... (Score:2, Insightful)

        by Gilk180 ( 513755 )
        The problems with this are in efficiency. Leaving the files in place would create fragmentation problems. Moving them to another part of the disk would result in a lot of unnecessary disk activity.

        Periodic backups are a much better answer.

        Schemes like this would also require the fs to delete old files when the space is needed, but this is what is done now. The data is still there until the space is used by something else (and even after that for all of you super security freaks). Given, the choice of
        • Doesn't the move command simply change the inode? I've noticed that many trash-bin-type arrangements simply change the inode (or whatever). Doesn't this get by the frag problem?

          Zorton
        • Periodic backups are a much better answer.

          Except this is what caused the problem in the first place.
        • I think a balance can be found.

          Until someone tries it, someone who is a real file system whiz, not some hack, we won't really know just how reasonable it is.

          That's the way computer science works: 1000 people say you can't do it, then one person does it, and it works well, and suddenly everyone changes their opinion.

          How many people, for example, really expected SGI to clean up XFS enough to merge into the official kernel?

          I think it can be done. I can contemplate an algorithm that balances delete recovery
      • Re:Good luck... (Score:3, Informative)

        by xenocide2 ( 231786 )
        Actually, rotational storage is very different from standard memory. It's considered inefficient not to use as much RAM as possible because using one page is as useful as the next; there's a uniform cost across all areas of RAM. In contrast, one prefers linear writes on a disk because it improves throughput. Each page on disk is not identical in usage cost. What we're paying here is the opportunity cost of using a specific page on disk.

        On the other hand, I agree that a marked for deletion queue makes a gre
      • Re:Good luck... (Score:1, Informative)

        by Anonymous Coward
        And it wastes 5% of the space by default! That's 100 GB on a 2TB fs completely wasted! Always use -m0 on storage fs's or -m1 on system fs with mke2fs.

        tune2fs can fix that after creating the filesystem. But it's not wasted space, it's just reserved for root (or another user ID, if you change it - useful as a cheap quota system).

        ReiserFS v3 and v4 are pretty good with space efficiency. No space is reserved for inodes, and tail-packing means very little space is wasted storing the last block of a file.
      • Windows never uses 100% of RAM!!!! It's generally using more like 15-50% of physical ram and triple whatever physical amount that is in swap. Wonder why that is...
        • Well, it uses a good chunk of RAM for cache. Windows VM never was highly acclaimed or anything.
          Yeah, but the sad part is that it still uses 15-50% of physical RAM, and 3x that in swap, even if all you're running is Notepad and you have 2 GB of RAM. That's just sick; there shouldn't even be any VM activity, but on Windows, VM is always happening.
      • This is a filesystem design issue. At NEU, we have a kick-ass NetApp NFS fileserver, with file checkpoints kept hourly at first, then daily, then weekly... This allows the user to recover from almost any mishap without sysadmins needing to go digging for backup tapes.

        Since only x% of inodes change, you don't need to duplicate the whole storage, just the modifications. I think Plan 9 did something similar with a WORM drive. They reported capacities growing faster than they could fill it -- probably
    • Re:Good luck... (Score:1, Insightful)

      by Anonymous Coward
      A lot of higher-end storage appliances and some filesystems support file versioning. Even NTFS has streams, which can be used to version a given file.

      As to files being created and destroyed frequently, this is why we partition into at least:
      /
      swap
      /var
      /tmp
      /usr

      Obviously /var and /tmp would not be places to version files.

      You could consider using versioning in a place like /home.

    • NetWare has... (Score:1, Informative)

      by Anonymous Coward

      A filesystem has never (AFAIK) implemented a trash / recycle bin folder -- not on Windows or OS X, and not on any UNIX that I know of.

      NetWare has had a very sophisticated file undeletion capability since time immemorial.

      If Novell ports it to SuSE, you Linux clowns might just find yourselves in possession of a mission-critical operating system after all [not that you deserve it].

      • Back around, oh, 1997ish, I ran Lotus Notes on a Netware server (I think it was a v3 server). Because of the way that the Notes full-text indexing system works (it creates a *lot* of really *tiny* files), I blew out the file allocation tables on the server because all of those small files were being left behind instead of being tossed. Eventually I think I flagged the entire tree so that those tiny files weren't left behind.

        Learned a lot more about the NW file system then I really wanted to know at the
    • As I posted below, there is one way I know of to make a recycle bin be system wide......

      [drum roll, please]

      LVM!

      It will keep a frozen-in-time snapshot of the drive at a given time until it runs out of COW (copy-on-write) space. The space dedicated to snapshots is not seen by the filesystem, and when the filesystem is changed after a snapshot, LVM copies the modified data away to the snapshot-dedicated area. (I guess you could call the snapshot reserve a "Secret Cow Level". :)

      You can run multiple snapshot
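For anyone wanting to try the snapshot idea, the sequence looks roughly like this. Everything here is a sketch with hypothetical names (a volume group "vg0" holding the logical volume "home", with free extents left over), and all of it requires root; it is a recipe to adapt, not a tested script:

```shell
# Reserve 1 GiB of copy-on-write space and freeze the current state:
lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home

# Mount the snapshot read-only and pull back anything deleted since:
mkdir -p /mnt/snap
mount -o ro /dev/vg0/home-snap /mnt/snap
cp /mnt/snap/path/to/lost-file /home/path/to/

# Drop the snapshot when done (or before the COW space fills up):
umount /mnt/snap
lvremove -f /dev/vg0/home-snap
```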
    • Re:Good luck... (Score:4, Informative)

      by SteveOU ( 541402 ) <sbishop20.cox@net> on Tuesday December 02, 2003 @12:21AM (#7606316)

      I will point out that the filesystems included in Novell's NetWare product did include a deletion-recovery tool, accessible via the salvage command. My understanding was that NetWare would not permanently delete a file until that disk space was needed for active data or until a timeout period expired.

      Damned handy tool, too. We had IBM's TSM for our major backup operations, but for those "oops" moments, salvage was sure handy. I hope that the new Novell might consider implementing those features on existing Linux filesystems, or at least contribute native Linux implementations of their filesystems.

    • However, many programs create temporary files and then promptly delete them -- so many times that it would be ridiculously inefficient (both in space and fragmentation levels) to put them into the trash.

      Actually, there is an OS that lets you specify that a file is temporary, though I can't remember offhand which one (VMS? NT? OS X? DragonFly BSD?). Or maybe I'm thinking of SQL: for small temporary tables, you can often have them stored in memory.

      Anyhow, you could add an fcntl flag to indicate a file i

      • maybe opening the files with
        tmpfile()
        or something like that?
      • Actually I think DOS had an Open Temp file function. Yes it could be written. A commit/rollback function for changes to a file would be nice as well. Yes I know you can create a temp file then copy or rename it but that does not take any metadata with the file.
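The shell-level cousin of tmpfile() is mktemp plus an exit trap, which guarantees cleanup however the script ends. A minimal sketch:

```shell
# Create a uniquely named temp file and arrange for it to vanish on exit,
# whether the script finishes normally or is interrupted.
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT

echo "scratch data" > "$tmp"
lines=$(wc -l < "$tmp")
echo "wrote $lines line(s) to $tmp"
```

C programs get the same effect from tmpfile(), which hands back a stream whose backing file is automatically removed when the stream is closed or the program exits.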
    • Re:Good luck... (Score:3, Informative)

      by isj ( 453011 )
      A filesystem has never (AFAIK) implemented a trash / recycle bin folder -- not on Windows or OS X, and not on any UNIX that I know of.

      Actually, OS/2 implemented it. It could be enabled/disabled per drive, the size of the trashcan could be configured, and it worked even for temporary files made by programs. And yes, it was somewhat slow.

    • Just FYI: NetWare's file system does have a trashcan built in and will keep the files that you've deleted (even multiple versions of them) until there is no room on the storage device; then it will start to actually delete the oldest deleted files. It's quite useful! You can disable the function globally or just on a directory/tree. It has been doing this since version 2 for sure, possibly even before that.
    • It would be pretty straightforward to alias rm to a script that could understand rm, rm -f, rm -rf (all I ever use) and do a mv to ~/.trash instead. Then a simple emptytrash alias and you're good to go.

      Once upon a time Norton even sold undelete for Unix, ULTRIX maybe. Before Norton was part of Symantec, of course, and Peter Norton did more than pose for pictures. (Yeah, I'm just envious.)
  • Stop!!! (Score:5, Interesting)

    by zulux ( 112259 ) on Monday December 01, 2003 @09:39PM (#7605178) Homepage Journal
    Take the filesystem offline. NOW!

    Then use dd to copy the partition to another partition/disk. Then mess with the copy.

    A lot of filesystems do a good job of keeping files and their blocks in order. I've had luck with the *BSD filesystem by grepping for something at the beginning of the file and grabbing a big chunk of data afterwards. This works great for MS Office documents, JPEGs, or anything that begins with a known preamble.

    This may not work for your filesystem.
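The "copy first, then experiment" step can be rehearsed on an ordinary file; on a real system the if= argument would be the block device (say /dev/hdb6) and the command needs root:

```shell
# A scratch file stands in for the damaged partition.
printf 'pretend these are filesystem blocks\n' > victim.img

# Bit-for-bit copy of the "partition". On a dying disk you would add
# conv=noerror,sync so dd skips unreadable sectors instead of stopping.
dd if=victim.img of=rescue.img bs=1M 2>/dev/null

# Never touch the original again until the copy is verified:
cmp victim.img rescue.img && echo "copy verified"
```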
    • If it's shortish text files you're looking for, ReiserFS isn't so amenable to this type of treatment because it doesn't just store file contents in nice neat blocks like older file systems do. It stuffs shorter files into the spaces between bigger chunks of data. I think it even stores some small files in structures usually used for filesystem metadata.
  • The Coroners Toolkit (Score:1, Interesting)

    by hookedup ( 630460 )

    This may help..

    TCT is a collection of programs by Dan Farmer and Wietse Venema for post-mortem analysis of a UNIX system after a break-in. The software was first presented in a Computer Forensics Analysis class in August 1999. Examples of using TCT can also be found online in a series of columns in Dr. Dobb's Journal. Notable TCT components are the grave-robber tool that captures information, the ils and mactime tools that display access patterns of files dead or alive, the unrm and lazarus tools t
  • More questions... (Score:3, Insightful)

    by ComputerSlicer23 ( 516509 ) on Monday December 01, 2003 @09:46PM (#7605227)
    First off, I have several questions. Do you have an original copy of the partition from before you started running recovery tools (after you deleted the file, but before you created new ones)? If not, make an image immediately. You want the most original image you can find.

    Now, the second question is how much data you are looking for, and how large the partition is. (How large is the needle, and how large is the haystack?) What type of data are you looking for? Is it a Word document? A text file? A GIF? A JPG? Some HTML? A PDF? The smaller the file, the more likely that if it got overwritten, it all got overwritten; on the other hand, the more likely you are to recover all of it. If it was a very large file, it's possible that you can recover pieces and parts, but not all of it.

    Now, it's my understanding that you can recover anything written to a hard drive, even if you have overwritten it several times. However, it's very, very expensive to do so. So now the question is how much money it is worth to you. The guys at ReiserFS probably have the best shot at helping you; they probably don't want to, however. The more you know about the order of the files in the directory, how the files were constructed, and the order the files got put on disk, the better. Then you can make better educated guesses about the sequence in which the pages got allocated, to know where to go look for the file.

    Do you have anything on the drive you are worried about posting? Can you post an image of the drive? I'm not an expert in this area, but I've seen people recover mail spools at an ISP using dd. People leave ISPs over losing all their mail, so they worked really hard at it (however, that was an ext2 filesystem).

    Kirby
    • Now, it's my understanding that you can recover anything written to a harddrive, even if you have overwritten it several times.

      If a certain sequence of bits on the disk was originally 1011010010001011101001, and it got overwritten with 0110101101010010101111, how -- barring psychics, voodoo, and fairy dust -- can the original be recovered? Simpler case: a certain bit used to be 1, it was overwritten a few times. How do I know what it was (let's say non-journaled filesystem) before being overwritten?

      Mayb

      • Re:More questions... (Score:3, Informative)

        by zulux ( 112259 )
        If a certain sequence of bits on the disk was originally 1011010010001011101001, and it got overwritten with 0110101101010010101111, how -- barring psychics, voodoo, and fairy dust -- can the original be recovered?

        By reading the slop in between tracks. To a scanning electron microscope, the writes look more like layers, with little bits of data poking out from the edges.

        Think of paint layers: at the edge, you can sometimes pick out the previous colors and the order in which they were painted.

        Of course, this
        • by Anonymous Coward
          huh, so maybe the HD manufacturers can start advertising 800 GB drives **

          ** as long as you have access to the CIA tech to read the old bits

        • Re:More questions... (Score:3, Informative)

          by Gordonjcp ( 186804 )
          People like the CIA get to play with this stuff.


          Except that they don't. It's entirely a myth that the CIA can read multiply-overwritten data from hard disks. The idea that the tracks look like layers doesn't hold up - you'd have to use less and less write density every time. It doesn't happen that way.

          Now, what you can do - and what does work - is look at the analogue signal from the head and see what the variance from an "average" one or zero is. So, if the head returns a 4mV pulse for a one, on av

        • The cool bit is that RAM is actually easier to read after several rewrite cycles than magnetic storage. Sorry, no link, but it should be reachable off the srm webpage.
      • Re:More questions... (Score:4, Informative)

        by Nucleon500 ( 628631 ) <tcfelker@example.com> on Tuesday December 02, 2003 @03:16AM (#7607078) Homepage
        It has to do with the analog nature of the storage. If you had 0, 1, 1, 0, and you overwrote that with zeros, you'd then have 0, .1, .1, 0. Chances are that the drive itself (without at least modified firmware) can't tell the difference, but a data recovery lab can. You can actually still read data after it's been written between 5 and 20 times - each time, subtract the obvious and multiply the residue.
        • thanks, that makes sense. I wasn't thinking that the value could be somewhere between zero and one. I always wondered why 'srm' overwrites so many times, and now I understand.
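The "subtract the obvious and multiply the residue" arithmetic above can be illustrated with a toy model. Assume, purely for illustration, that each overwrite leaves 10% of the previous bit's signal behind, so a lab-grade read of an all-zeros track still carries a faint copy of the old data:

```shell
bits=$(awk 'BEGIN {
  n = split("0 1 1 0", old)   # what the sector used to hold
  split("0 0 0 0", new)       # what was written over it
  for (i = 1; i <= n; i++) {
    sensed = new[i] + 0.1 * old[i]              # analog level a sensitive head sees
    printf "%d", (sensed - new[i]) * 10 + 0.5   # subtract the obvious, scale the residue
  }
}')
echo "recovered: $bits"
```

Real drives bury any such residue in noise and channel coding, which is why the follow-up comment is right that this takes a recovery lab, if it works at all.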
  • by damu ( 575189 )
    If you had this problem then I or anyone will have this problem too, so please let us know what program you are talking about. Was a user error? Was it a bug? Is the bug being worked on?
    • Backup software that even has the ability to delete files from the filesystem sounds like a major design flaw to me.

      Reminds me of LoneTar, which helpfully will tell you that /dev/null is an insecure backup device, and ask you if you want to secure it. Who wouldn't want to secure it? Anyway, it chmods 000 it, which breaks everything under the sun, usually in mysterious ways.

      The reason is because /dev/null is usually set up as a dummy tape device in lone tar, and it doesn't know it's not a real tape devic
    • > If you had this problem then I or anyone will have this problem too, so please let us know what program you are talking about.
      > Was a user error? Was it a bug? Is the bug being worked on?

      I'm not the poster so I don't know the answer to your question, but I will say I've accidentally done this in K3b. I had files highlighted in the list of files to burn, AND there were files highlighted in the tree view of my filesystem. I hit the delete key thinking it would remove the ones from the list of files to bu
    • by DarkSarin ( 651985 ) on Tuesday December 02, 2003 @03:28AM (#7607116) Homepage Journal
      The program, which I now feel safe in naming, was CDBakeOven 2.0 (yeah, I know, beta software and all that; it still shouldn't EVER do this!)

      To the user who gave instructions on how to use rebuild-tree: those are about the same steps I used (same -S option) on --rebuild-tree, to no avail.

      So, the end result is: thanks, but so far the best advice still seems to be to pay the $25 to the folks who made the fs. I may yet do that. In the meantime, I am using my sorry WinXP install....

      blech
  • Suddenly... (Score:5, Insightful)

    by Anonvmous Coward ( 589068 ) on Monday December 01, 2003 @09:57PM (#7605283)
    ...every Windows user looks at that Recycle Bin shortcut on their desktop and smiles.

    (No, that's not really a troll. Human error happens.)
    • Would it require a kernel patch to create a "Recycle Bin" of sorts? Maybe a piece of code could check for the UID or GID of the file being deleted and decide whether to back it up, based on a table of UID/GID's. It would probably be useful now that Linux systems are becoming more desktop and end-user oriented.

      As an amusing anecdote, I once was writing a rudimentary file manager when I accidentally deleted all my source! After locking down my filesystem and learning how to undelete files, I realized that
      • No, it doesn't require a kernel patch at all. In fact, there are already implementations of a trash can for Linux (though I don't remember the URL offhand). They work by using LD_PRELOAD to replace the unlink call, thereby causing all apps to use the trash can.
      • I'm going to explode, but yes, there is a kernel patch that has something similar to the recycle bin functionality: LVM.

        Google the LVM snapshots, and if the frequency is high enough, you'll only lose a little time's worth of ze data.
      • ...or you could script it. Anything on God's green Earth can be created with a shell script. I'm serious.
        • Not true. If a C program deletes a file by calling the unlink() standard function, your shell script can't do much to prevent it. I'm guessing you were thinking about scripting a replacement for the rm command, or something, but that doesn't catch all file deletions. I'm not sure replacing unlink() through a preload does either, but it at least sounds a lot better.
    • Re:Suddenly... (Score:3, Insightful)

      by zulux ( 112259 )
      ...every Windows user looks at that Recycle Bin shortcut on their desktop and smiles.

      The recycle bin only works if it's a well-behaved GUI app.

      Do this...

      START->RUN->COMMAND and hit enter.

      type in

      DEL c:\*.* and hit enter.

      If you're asked any questions - say 'yes'

      Now.... Try to find your files in the "Recycle Bin."

      • There are apps out there that protect files deleted at command line or even from network shares, Norton has one, I imagine there are others.
        • Re:Suddenly... (Score:3, Insightful)

          Nevertheless, that is not functionality present with the Recycle Bin. Desktops such as Gnome and KDE both have their own implementations that work about as well. Norton's software is data recovery. Apples and oranges, buddy.
          • Gnome and KDE don't protect you from shell deletions, only what you delete via their API.

            Norton Utilities includes an extension to the command line so things deleted there (or anywhere via whatever non-Recycle Bin API they use) will also go to the Recycle Bin.

            OTOH, it fills the 'Bin up pretty quick, since lots of apps create and delete many temporary files, and you normally only want the things you've interactively deleted.
      • Re:Suddenly... (Score:3, Insightful)

        by OrangeTide ( 124937 )
        Suddenly Bill Gates erases your minds so you forget that Windows stole the idea from MacOS (which stole the idea originally and made it mainstream:)
    • I don't know what exactly happened to the poster, but as someone else pointed out, recycle bin is only helpful in nice well-behaved apps. For example, I once had my text editor spaz out, lock up, and trash the source for something I was working on ... recycle bin ain't going to help there.

      (Luckily, I had the editor set to be making backups, which were OK)
    • Well...to be fair. (Score:1, Informative)

      by Anonymous Coward
      KDE has a trash bin too.

      But in the context menu it asks you if you want to delete or move to trash. Not the same thing! In DOS, delete, or del usually just write a lowercase delta IIRC over the first character of the file name marking the space as free to be used.

      Right now, his enemy is the "relatively" obscure file system, and how much writing he's done to the harddrive since the "incident".
    • Because of windows recycle bin, I never hit "delete" without holding "shift". Recycle bin? There is no recycle bin!
      • You know you can turn that off, right?

        Uncheck the "Display delete confirmation dialog" option in the Recycle Bin properties page.
        First thing I do on a new Windows install... followed by deleting all the worthless crap on the FS that Windows thinks I need ("Online Services" and such).
      • More often than not, I do the same.

        Ok, I've been caught out a few times...
        1) Shift
        2) Delete
        3) Notice WHICH file got deleted...
        4) Panic/swear

        Tiggs
    • Simply alias rm to move the files to a recycle-bin folder somewhere in your home dir. If you want it to be fancy, add a timestamp to the filename so the same original filename won't get overwritten. That several GUIs have a recycle bin has already been mentioned.

      To clear the trash, you have to use 'rm' unaliased. Normally, you can't do such a thing by accident :) While this method is neither foolproof nor perfect, it should help at least a bit to prevent future accidents.
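A sketch of the alias-with-timestamp idea. The directory and function name here are made up; in real use you would point TRASH at ~/.trash and put alias rm=trash in your shell rc file:

```shell
# Where "deleted" files go. ~/.trash in real life; a local dir for the demo.
TRASH=./trash-demo

# Move instead of unlink, tagging each file with a timestamp so repeats
# of the same name are kept apart (to one-second granularity).
trash() {
    mkdir -p "$TRASH"
    for f in "$@"; do
        mv -- "$f" "$TRASH/$(basename "$f").$(date +%Y%m%d%H%M%S)"
    done
}

# Demo: "delete" a file, then show it waiting in the trash.
echo hello > doomed.txt
trash doomed.txt
ls "$TRASH"
```

Emptying is then a deliberate, unaliased rm -rf on the trash directory. Note that aliases don't apply inside scripts, so programs that call rm (or unlink) directly still bypass this, which is exactly the limitation of GUI trash cans discussed elsewhere in the thread.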
  • Try this (Score:5, Informative)

    by Anonymous Coward on Monday December 01, 2003 @09:57PM (#7605286)
    I really don't understand how this was done. Nonetheless, you CAN recover from this. Here's a little tutorial I found. I highly suggest doing the backup first!!! :

    If you're really really desperate, you can do what I did a few weeks ago. In my case, fsck didn't recover the partition either; indeed, it crashed. So here's what I did, from the beginning of what I think fixed it:

    1) reiserfsck --rebuild-tree
    2) mount
    3) reiserfsck -S
    4) debugreiserfs to get metadata for Vitaly
    5) mount
    6) mount again

    I'm not sure why this happened, but after the second mount, the partition was not recognizable as ReiserFS anymore. I suspect it had to do with a few really huge files that were originally on the partition that reiserfsck -S tried to recover. In doing so it probably hosed lots of stuff. Now, it was as simple as

    7) reiserfsck --rebuild-tree

    And I had most of my data linked under lost+found! Took me a few hours to sort through it all, but I got back most of what I cared about. Maybe if you use the new pre8 fsck you won't need to jump through these hoops. Since the potential for data destruction is high here, I wouldn't blame you for not trying. And yes, this all happened by trial and error :-)

    This might help too :
    http://marc.theaimsgroup.com/?l=reiserfs&m=104861318421306&w=2 [theaimsgroup.com].

    Good luck!
    • What kind of person uses a mouse to copy and paste from Emacs?

      Ever heard of macros? xclip?
    • I used this command:
      reiserfsck --rebuild-tree -S -l rebuild.log /dev/yourdevice

      after taking the disk offline (umount /dev/hdb6 as su(do))

      I then (after waiting a long time for it to finish) checked the lost+found dir, and got nothing useful (although it did pick up some stray music files, and I don't know where they came from!).

  • Ask Namesys (Score:5, Informative)

    by cornice ( 9801 ) on Monday December 01, 2003 @09:58PM (#7605289)
    Pay Namesys $25. They wrote ReiserFS so they should know. You'll be getting really great support and helping those who wrote your file system. Look here:

    http://www.namesys.com/support.html
  • The way i did it (Score:4, Informative)

    by jjshoe ( 410772 ) on Monday December 01, 2003 @11:00PM (#7605719) Homepage
    I managed to re-create the ReiserFS filesystem three times over 20 years of digital photos, with no backup of course. I was able to replay the journal, recovering all but the most recent photos.

    I believe I used the --rebuild-tree option. You should follow the steps in the manpage under Example.

    so in short, man reiserfsck before asking Slashdot :)

    • Uh, you have a backup now, right?

      And how the *heck* did you get 20 years of digital photos? I assume some were scanned......

      From History [about.com]:

      The first digital cameras for the consumer-level market that worked with a home computer via a serial cable were the Apple QuickTake 100 camera (February 17, 1994), the Kodak DC40 camera (March 28, 1995), the Casio QV-11 (with LCD monitor, late 1995), and Sony's Cyber-Shot Digital Still Camera (1996).

        • Backup now? No. You'd think I would have learned... Yes, quite a few photos were scanned that I no longer have the originals of, due to a fire, unfortunately. I realize now the original poster said the rebuild-tree option didn't work; I honestly re-made the ReiserFS filesystem three times over the existing ReiserFS filesystem and it replayed all but the most recently copied images.
        • Backup now? no. You'd think i would have learned...

          *Smack!*

          "I don't make backups" is computerese for "I have no important data which I can't either reconstruct or re-download."

          If you can't make that claim, then it's time to reexamine your backup policy.
      • The first digital cameras for the consumer-level market that worked with a home computer via a serial cable were the Apple QuickTake 100 camera (February 17, 1994), the Kodak DC40 camera (March 28, 1995), the Casio QV-11 (with LCD monitor, late 1995), and Sony's Cyber-Shot Digital Still Camera (1996).

        The keyword there is consumer-level.. don't make assumptions.
  • That should teach you!

    Real users never make backups!

    -K
  • by Bombcar ( 16057 ) <racbmob@@@bombcar...com> on Monday December 01, 2003 @11:14PM (#7605804) Homepage Journal
    But it can be helpful in the future to dedicate, say 10% of your drive to an LVM snapshot [sistina.com] space....

    I haven't done this yet (I'm lucky! I have a real tape drive [inostor.com] to backup my stuff.....) but I plan to make my system take a snapshot every hour and every day (total of two) so that at most I lose an hour's worth of work.
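
    The snapshot itself is a single command; the volume names and size below are invented, see lvcreate(8):

    ```
    # carve a 1 GB copy-on-write snapshot of a hypothetical logical volume
    lvcreate --snapshot --size 1G --name home_hourly /dev/vg0/home
    # mount it read-only to fish an accidentally deleted file back out
    mount -o ro /dev/vg0/home_hourly /mnt/snapshot
    ```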

    Also, I've always wondered if it was possible to make an operating system that would take as long to destroy something as it did to create it. For example, your term paper took ten days to write, so the rm termpaper.tex command would take ten days to run :)
    • Also, I've always wondered if it was possible to make an operating system that would take as long to destroy something as it did to create it. For example, your term paper took ten days to write, so the rm termpaper.tex command would take ten days to run :)

      Good old UFS is close. A recurring job we run at work creates around 50000+ new files and directories, does a quick rename, and then deletes 50000+ old files and directories. This takes a looong time. The funny thing is, the delete process takes *

  • I had to use data recovery software a few years back after I accidentally started an NTFS format on a drive I was using as a temporary storage dump while the main drives were being upgraded... I got back only 70% after a 3% format :( and it took hours, too.

    Best advice here is to keep active backups (tape/CD is good for archival). If the files are small (docs/text/logs/source code), HD space is dirt cheap: get another drive (or partition),
    mount it as something like /mnt/snafu,
    and set rsync/cron/whatever to copy the files f
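
    A crontab entry for that can be as small as the following (the paths are hypothetical; note there is deliberately no --delete, so files you remove by accident linger in the copy):

    ```
    # m h dom mon dow   command
    0 * * * *   rsync -a /home/user/important/ /mnt/snafu/hourly/
    ```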
  • Stop mucking around (Score:3, Interesting)

    by You're All Wrong ( 573825 ) on Tuesday December 02, 2003 @09:15AM (#7608047)
    "I was using a GUI burning software that will remain nameless for now"

    _Either_
    - you fucked up, be a man and admit it's your fault;
    - the software fucked up, in which case let others know what it was and how it fucked up so that they can avoid risking the same bug.

    YAW.
  • by anthony_dipierro ( 543308 ) on Tuesday December 02, 2003 @10:40AM (#7608506) Journal

    How do you recover accidentally deleted files in Reiserfs?

    It's really easy. You just restore from backup.

  • This little snippet of shell has saved me from disaster a couple times. Let's say you just deleted "foo.c", and you need it back! Right away! If you know that the code will contain the text "while (mungeCount < superMungeCount) {", then you can try:

    tr </dev/hda '\n' '~' | tr '\0-\37\200-\377' '\n' | grep "while (mungeCount < superMungeCount) {" | tr '~' '\n' >foo-recovered.c

    This does have its problems. If the file spanned multiple blocks it may not get all of it, but you'd be surprised how
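
    To see the trick in action without touching a raw device, here is a self-contained sketch that runs the same pipeline against a scratch file standing in for /dev/hda (the "disk image" contents are made up for the demo):

```shell
# build a scratch "disk image": binary junk surrounding the lost code
img=$(mktemp)
printf 'junk\0\1\2' > "$img"
printf 'while (mungeCount < superMungeCount) {\n    mungeCount++;\n}\n' >> "$img"
printf '\3\4more junk' >> "$img"

# same pipeline as above, pointed at the scratch file: newlines become '~',
# control/high bytes become newlines, grep pulls out the chunk holding the
# needle, and the final tr puts the original newlines back
recovered=$(tr <"$img" '\n' '~' \
  | tr '\0-\37\200-\377' '\n' \
  | grep 'while (mungeCount < superMungeCount) {' \
  | tr '~' '\n')
printf '%s\n' "$recovered"

rm -f "$img"
```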

    • Yeah, that's great, but these were not really text files: one was an OpenOffice Presentation (.sxw?) and the other very important one was an OO.org text (.sxw?). I also lost numerous .pdf files. These were all for a paper and presentation due about 30 minutes after the actual deletion. I was VERY fortunate that I had already printed the paper. Nothing could be done about the presentation.

      Like I said, these were important files.
      • OO files are actually ZIP files with XML data inside. Try searching for "mimetypeapplication/vnd.sun.xml.writerPK" to find OOWriter files, and similar strings for OOPresenter. It's best to create a test file of each with the same version of OO and pass it through "strings".

        In any case, DO NOT run anything from the relevant filesystem and especially DO NOT MOUNT the filesystem rw.
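
        To see why that signature works, here is a standard-library sketch (file names invented) that builds a minimal OO-style zip in memory: by the packaging convention, "mimetype" is the first entry and is stored uncompressed, so its name and contents sit as contiguous plain bytes near the start of the file, which is exactly what a raw search keys on.

```python
import io
import zipfile

# Build a minimal OpenOffice-style package in memory.  "mimetype" goes
# first and is STORED (uncompressed), so the entry name and its contents
# land as contiguous plain bytes right after the zip local-file header.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(zipfile.ZipInfo("mimetype"),
               "application/vnd.sun.xml.writer",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("content.xml", "<office:document/>")  # stand-in payload

raw = buf.getvalue()
# the signature a raw-partition search would look for
print(b"mimetypeapplication/vnd.sun.xml.writer" in raw)  # True
```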
  • It seems odd to me that you wouldn't be able to recover accidental deletions

    Why would this seem odd? None of the most widely used file systems allow for undelete. If you think the recycle bin is undelete, try del *.* and then see what you can recover. The only one that really supports undelete, and does it really well, is Netware's Salvage utility.

    There are kludgy solutions for FAT and NTFS but there really isn't a true deleted file recovery system in any of the mainstream file systems. That includes ext
  • Or diskdoctor on the Amiga? It's a shame that modern file systems don't seem to be able to do something that was fairly universal 15 years ago.

Two can Live as Cheaply as One for Half as Long. -- Howard Kandel
