Messed up USB stick

Discussion in 'PC Hardware' started by Galileo Figaro, Jun 27, 2013.

  1. Galileo Figaro

    Galileo Figaro MDL Junior Member

    Sep 6, 2010
    86
    13
    0
It seems I messed up my 4GB USB stick by partitioning and formatting it under MS-DOS 7.1 (the DOS underlying Windows 98). Now it shows a capacity of 5KB, yes KB, not GB.

    I have tried various recovery programs and CLI commands under (Puppy) Linux, and none of them worked.
    Anyone got any ideas?
     
  2. LatinMcG

    LatinMcG Bios Borker

    Feb 27, 2011
    5,711
    1,606
    180
    try a gparted live CD .. boot from it, select the USB stick, and create a new partition table (it's in the Device menu).
    then format it fat32
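If you want to see what "create new partition table" actually amounts to before pointing a tool at real hardware: a fresh MS-DOS partition table is a 512-byte MBR ending in the boot signature 55 AA. A minimal rehearsal on a throwaway image file (the filename and size are made up for the demo):

```shell
# Rehearsal on an image file: a new MS-DOS partition table is just a
# fresh 512-byte MBR whose last two bytes are the boot signature 55 AA.
img=stick.img
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null        # blank sector 0

# Write the 55 AA signature at offsets 510-511 (\125\252 is octal for 0x55 0xAA).
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Verify: od prints the last two bytes of the sector.
od -An -tx1 -j510 -N2 "$img"
```

On real hardware, gparted's Device menu does this (plus the empty partition entries) for you, so the rehearsal is only to demystify the operation.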
     
  3. onetwo3

    onetwo3 MDL Member

    Jun 21, 2013
    122
    16
    10
    RMPrepUSB might take care of it. If that doesn't, search for "ChipGenius" and download it; it will tell you what controller is on the stick, and from that you can find the manufacturer's tools to reformat it. It's not always an easy process, though.
     
  4. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    #4 mikeserv, Sep 29, 2013
    Last edited by a moderator: Apr 20, 2017
    Partitioning tools don't actually write out a whole partition but instead just mark its boundaries. Usually what they do is build out whatever metadata backend they run and then drop a few pellets across the block device, marking that space as their own, before calling the partitioning job done. In Linux we call the pellets "superblocks", but I don't know what Win98 would call them. Anyway, proper procedure for a filesystem is to write a little data about itself to a predefined space in a superblock, so that when a controller or block driver scans the device it can quickly identify the filesystem and load the right driver.
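Win98-era FAT has no superblocks in the Linux sense, but the identification mechanism described here is real. As a toy illustration (image name and size invented for the demo), here is the ext2/3/4 family's magic number 0xEF53, which lives at byte offset 1080 of a device, planted in a blank image and then probed the way a block scanner would:

```shell
# Toy illustration: an ext2/3/4 superblock announces itself with the
# magic number 0xEF53, stored little-endian at byte offset 1080.
img=probe.img
dd if=/dev/zero of="$img" bs=1K count=4 2>/dev/null         # blank 4KiB "device"

# Plant the magic bytes 53 EF (octal \123\357) at offset 1080.
printf '\123\357' | dd of="$img" bs=1 seek=1080 conv=notrunc 2>/dev/null

# "Probe" the device the way a block scanner would: read two bytes
# at the predefined offset and compare against the known magic.
magic=$(od -An -tx1 -j1080 -N2 "$img" | tr -d ' ')
[ "$magic" = "53ef" ] && echo "ext-family superblock magic found"
```

Real scanners (blkid, the kernel) check dozens of such known offsets, one per filesystem type, which is exactly why a leftover marker from an unrecognized filesystem can confuse them.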

    Ok, so this system is pretty efficient, but the problem that can sometimes occur is that an incompatible filesystem, for which you have no driver, has installed a superblock immediately before or after one you do recognize. This can effectively truncate a partition. And because repartitioning with your recognized filesystem just installs a new superblock over the one already there, without touching the superblock you don't recognize, it doesn't solve your problem.

    What you need is a raw write: some application that bypasses the filesystem abstraction layer and talks to the block device with raw i/o. There are tools like this available for Windows, of course. The developer of ImDisk makes a little cmd shell app - rawcopy, I think it's called - weighing in at a couple hundred KB or so. Probably dd - Unix's disk dump utility - has been ported to Windows too, but I have no idea.

    Surest method is a Linux live boot. Boot linux, find a terminal, discover your disk's /dev identifier, then zero that f**ker.

    Code:
    $ sudo su
    $ lsblk
    
    ## Here you should see a lot of /dev/sd?? devices,
    ## like the disk /dev/sda followed by its partitions
    ## /dev/sda1 and /dev/sda2. Try to find either your
    ## 5KB partition or one at its proper size, as Linux may
    ## well parse the filesystems better (especially on
    ## a live disk). If the device name ends with a digit,
    ## forget it; you want the root device. If /dev/sda
    ## is 5KB we want /dev/sda, but if /dev/sda2 is
    ## 5KB we still want /dev/sda.
    
    ## BE SURE YOU'VE GOT THIS RIGHT!! Zeroing
    ## the wrong disk could be VERY BAD!!!
    
    $ dd if=/dev/zero of=/dev/sd? oflag=sync   ##<<--- YOUR DEVICE THERE!
    
    
    The above method will take a while, as it explicitly and synchronously writes zeros to every block on the target disk. dd will provide no output while it works, but will return the command prompt to you with a message when it is through. Be patient.
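If you want to rehearse the zeroing step without risking hardware, the same idea works against a throwaway image file (8MiB stands in for the 4GB stick; the filename and size are made up for the demo):

```shell
# Safe rehearsal: zero an image file instead of a real /dev node.
img=wipe.img
dd if=/dev/urandom of="$img" bs=1M count=8 2>/dev/null            # a "dirty" disk

dd if=/dev/zero of="$img" bs=1M count=8 conv=notrunc 2>/dev/null  # zero it

# Every byte should now be zero; comparing against /dev/zero confirms it.
cmp -n $((8 * 1024 * 1024)) "$img" /dev/zero && echo "all zeros"
```

Once the rehearsal makes sense, substituting the real root device (and sudo) is the only change, which is exactly why triple-checking the device name matters so much.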

    If you can't successfully partition the disk afterward then burn it with fire.

    -Mike
     
  5. pisthai

    pisthai Imperfect Human

    Jul 29, 2009
    7,221
    2,273
    240
  6. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    #6 mikeserv, Sep 30, 2013
    Last edited: Sep 30, 2013
    pisthai's method is probably easier and will probably work just as well, so long as HDD Low Level Format really performs raw, block-level i/o. Don't expect 50MB/s on your USB thumb drive, though; you're more likely to see between 1MB/s and 5MB/s.

    The actual speed is limited by several factors:

    Base Transport - your USB controller hardware, its driver, and its version
    Disk controller - your USB stick's flash controller circuitry
    Write medium - the actual flash NAND chip in your stick
    Block i/o driver - in the case of HDD LLF it's probably a simple filter driver allowing access to Windows' own base block i/o driver
    Write i/o logical blocksize - this is probably specified by the driver and may be configurable in the format application. For instance, larger zero-filled 4MB logical blocks mean fewer overall write operations, and therefore faster completion, than the default 512-byte blocks; but the default is more compatible with various types of disk geometry and therefore safer.
    Others - multitasking applications, cpu cycles, job priority, bla bla bla
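To put rough numbers on the blocksize point in the list above (4GB is the OP's stick; the two block sizes are just example figures), fewer, larger writes mean far fewer operations overall:

```shell
# Write operations needed to zero a 4GiB device at two logical block sizes.
size=$((4 * 1024 * 1024 * 1024))                           # 4GiB in bytes

echo "512-byte blocks: $((size / 512)) writes"             # 8388608 writes
echo "4MiB blocks: $((size / (4 * 1024 * 1024))) writes"   # 1024 writes
```

Per-operation overhead (USB round trips, driver calls) is roughly constant, so four orders of magnitude fewer operations is where the speedup comes from.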

    One caveat is that you must ensure the write operations have actually completed before attempting any other block-level writes or before removing the stick. I'm pretty sure Windows defaults to write-through i/o for USB flash nowadays, but better safe than sorry, of course. The OS could cache the write operations in memory and report the task completed while it carries on performing the physical writes in the background. This is the primary reason it's important to use "Eject USB stick" (or whatever it's called) in the system tray at the bottom-right corner of your screen before removing USB media - interrupted write operations can ruin disk sectors forever. It's also why I added the sync flag to the dd command line I gave you earlier: it specifies completely cacheless, synchronous, block-level writes, so that when dd says it's through, it actually is. If HDD LLF is any good at what it does, it will operate similarly.

    If you understood what I explained about superblocks earlier, then you might have grasped that zeroing the whole disk is a little overkill. Obviously it's thorough - no rogue superblock or other corrupt data can escape it - so I offered it as a solution because it is one. If a rogue superblock really is your problem, though, and it probably is, you only need to overwrite one of the many thousands of blocks on your disk to clear the error. dd can do this if you can tell it exactly which block is the problem, with parameters like "count=1 bs=512 seek=block#", and your problem is solved in seconds with one block-level write operation. But then again, you must first know the target's block size and block number, which can be tricky. It's always best to be absolutely certain when working with dd - though its name is often glossed as "disk dump," it is also often called the "disk destroyer" for good reason. Anything that bypasses filesystem logic and performs block-level i/o, which probably also includes HDD LLF, should be handled with similar caution.
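The surgical single-block overwrite can likewise be rehearsed on an image file first (the block number, sizes, and filenames here are arbitrary demo choices, not the OP's actual bad block):

```shell
# Zero only block 3 of a small "disk" image; the rest stays untouched.
img=demo.img
bs=512
blk=3
dd if=/dev/urandom of="$img" bs=$bs count=8 2>/dev/null   # 8-block toy disk

# count=1 bs=512 seek=<block#>: one block-level write, exactly as described.
dd if=/dev/zero of="$img" bs=$bs seek=$blk count=1 conv=notrunc 2>/dev/null

# Read block 3 back out and confirm it is now all zeros.
dd if="$img" bs=$bs skip=$blk count=1 2>/dev/null > blk.bin
cmp -n $bs blk.bin /dev/zero && echo "block $blk zeroed"
```

Note conv=notrunc: without it, dd would truncate the image file; on a real block device it is harmless but the habit is worth keeping.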

    I didn't mention this before because I'm not certain how widely available it is, though I know it's provided on Arch Linux's live install .iso image. There is another tiny command-line utility for Linux called wipefs (google "wipefs man page" for documentation) that is specifically designed to diagnose and repair issues like yours by targeting only filesystem superblocks, or "magic strings" as it calls them. Run from an Arch live disk, the command "wipefs -a /dev/sd?" will locate and remove all superblocks from the /dev/sd? disk. This, again, will likely resolve your issue in a matter of seconds, versus the half-hour or more it will take to zero your whole disk, and it is much less taxing on the disk itself. In fact, despite what its name suggests, wipefs is safer in this case than any other solution, simply because it only targets "magic strings". If you accidentally remove a partition-table signature, it can be restored without losing any of the data it described, because wipefs never touched that data. Again, google if interested.

    Whatever you do, just doubly- and triply-check that you're actually targeting the disk you wish to target before performing any block-level i/o actions, whether you do it with dd, wipefs, or with HDD LLF. It would suck to hose your OS's boot partition, or, worse, its root file-system, because you fed the i/o application h: or /dev/sdb instead of the correct i: or /dev/sdc.
     
  7. pisthai

    pisthai Imperfect Human

    Jul 29, 2009
    7,221
    2,273
    240
    @mikeserv:

    The speed HDD LLF works at in the free version is capped at 50MB/sec, and is of course limited to the maximum speed the controller on the device is capable of. A Class 4 USB stick has a max write speed of 4MB/sec, Class 10 has 10MB/sec, and so on. Even if you don't know what speed your stick is built for (it's not printed on the stick), running HDD LLF will show you the limit.
    Furthermore, HDD LLF uses single-sector operations and doesn't "build" the structure for any future file system.
    What's more, with that single-sector operation the whole storage area is cleared of any still-existing data, whatever it is. With a single-block operation, as suggested by you, all the data is still available for recovery, except for that single block you "cleaned"! It's a fast way but a good one! It's a question of what you really want to achieve: a quickly finished operation or a 100% secure one. That you have to "pay" the "price" for either is a fact:
    • fast = keeps the old data, including all the sh*t which will come back if that storage device is recovered
    • secure = EVERYTHING is gone, NO recovery possible, and old sh*t cannot come back from that storage device

    Personally I would choose the secure operation every time!
     
  8. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    OK. I think you're saying HDD LLF works with raw block i/o, which is what I assumed. I'm not sure what you mean by future file system, though I'm intrigued. New stuff is interesting.

    Right. It explicitly writes zeroes to the disk. It zeroes the disk. Yes?

    Exactly! So the OP can recover his disk like he asked.

    This does mean it's a good thing, right?

    Well, the question was how to rescue a disk with an apparently truncated partition table, so in that situation I would personally make an effort to preserve the data and rebuild the partition table around it. But we're talking about a USB key he was experimenting on with Win98 format tools, so its contents were likely of little importance anyway. With that in mind, there's probably no significant advantage either way except convenience. HDD LLF probably wins in that department for most people, despite the time it might take, because it's a Windows app. But on the off chance the OP is a little Linux-savvy, or really does need to preserve the data, or both, then dd and/or wipefs are the tools I'd recommend.

    And you're right. As I explained in the first post, data needs to be overwritten to go away, else it remains until it is. This is true even when you delete a file from Windows Explorer, for example. The data is still there, it's just that the file system driver has now marked that region of the disk as freely writable. So it follows that to securely delete data from a disk you have to write over it. That's what I mean by zeroing a disk - explicitly and synchronously writing zero-filled blocks to every sector on the disk.
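The point about deletion versus overwriting can be demonstrated on an ordinary file: a by-hand "secure delete" overwrites the bytes in place before the name is ever removed (filename and contents are invented for the demo):

```shell
# rm only unlinks a name; overwriting is what actually destroys the data.
printf 'secret data' > note.txt
len=$(wc -c < note.txt)

# Overwrite every byte in place (conv=notrunc keeps the file length).
dd if=/dev/zero of=note.txt bs=1 count="$len" conv=notrunc 2>/dev/null

cmp -n "$len" note.txt /dev/zero && echo "contents destroyed"
```

Only after the overwrite does removing the file leave nothing recoverable in that region; a plain delete would have left "secret data" sitting in the free blocks.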

    -Mike
     
  9. pisthai

    pisthai Imperfect Human

    Jul 29, 2009
    7,221
    2,273
    240
    Low-level formatting is the process of outlining the positions of the tracks and sectors on the hard disk, and writing the control structures that define where the tracks and sectors are. This is often called a "true" formatting operation, because it really creates the physical format that defines where the data is stored on the disk. The first time that a low-level format ("LLF") is performed on a hard disk, the disk's platters start out empty. That's the last time the platters will be empty for the life of the drive. If an LLF is done on a disk with data on it already, the data is permanently erased (save heroic data recovery measures which are sometimes possible).


    NO, he asked to get the whole space of his stick back, NOT the old data!


    No, it didn't mean that! It seems you're not a native English speaker, and neither am I! The "but" could (and here does) also mean "but not"! The fast way is not always the best; it can also be the worst!


    Again, Low Level Format (LLF) isn't just simple zeroing of an HDD! It restructures the HDD back to its virgin state, or to be precise to its "nearly" virgin state. In a "normal" formatting operation there's a tiny space which isn't touched; that is what recovery software uses to bring information about data locations back to life. In LLF even that tiny space is deleted and overwritten.
     
  10. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    #10 mikeserv, Oct 1, 2013
    Last edited: Oct 1, 2013
    Ok, this is not possible. Binary is simple: 1 or 0. On or off. Digital, dude. When a digital device's state is neither on nor off, you can rest assured that it will never be either again.

    For instance, when you burn an optical disc you embed tiny pits of a certain depth along circular tracks of the disc. Optical disc readers work by passing a laser along a track of the spinning disc and measuring what reflects back, which tells the reader whether the laser has passed over a pit or not. If yes then 1, if no then 0. No 2's or 3's or any craziness like that. The evolution of optical media falls along the scale of how sensitive the pit detection can be; more sensitive readers detect more and smaller pits per square inch, so more data can be stored there. Hence CD > DVD > BluRay and so on. If you scratch a disc to the point that the laser no longer reliably reports the track's bit-state, it's a coaster.

    Hard disks are magnetic media. They use the same principle, but they test for on or off by reading a region's magnetic state. If yes then 1, if no then 0. No 2's or 3's or any craziness there, either.

    In the olden days hard disk firmware was controlled by the Basic Input Output System (BIOS), because operating systems were primitive and didn't know how to deal with hardware directly. So an application would ask the OS for a file, the OS would ask the BIOS to read out the relevant portion of the disk with something like an Int 13h interrupt call, and the BIOS would likely comply. Writes worked the same way. The method an OS used to implement this interface was its filesystem.

    At that time the "low-level format" was a sort of a hushed, magical thing because some surly, neck-bearded geek had to conjure up a firmware interface on your crippled computer to ask your BIOS to do it. The BIOS did the raw i/o, so it had to zero the disk. The low-level refers to block i/o - below the filesystem abstraction layer.

    Since then operating systems have grown a little in capability and have pretty much usurped the BIOS's control over the disk, and, well, basically everything else. They still have filesystems though, and they still interact with firmware, though it is most often now on a circuit board directly in the hard disk enclosure. But now the OS is the big dog and calls all the shots. An app makes a file call, the OS drivers collectively interpret the request into block i/o and the OS tells the firmware: "Yo, controller, you tell those platters to switch off here, here and here." That's the file-system abstraction layer.

    Raw, block, low-level i/o casts aside silly constructs like files and partition tables and other fairy-tale ideas and deals directly with physical disk locations and physical bit-state. The OS says, "Yo, disk, be a 0 here, a 1 there," and so on, and so the disk is. But if it's not one of those things, it's not much at all.

    And NAND flash memory is harder to explain. It's hard for me to understand. But it is a sort of construct that passes electric signals along a circuit like a microchip - in fact it is a microchip - but it physically alters its state as it does so. This allows persistent storage of ons and offs even after the circuit is broken and the electrical power source is removed, as opposed to RAM, which is restored to its original (read: wiped) state when the current is interrupted.

    Still, NAND works the same way: 1 or 0. On or off. There is no NOT 1 or 0 in the digital world, friend. Data is not UNwritten - it is overwritten.

    Probably you don't believe me. Ok, I'm nobody special after all. But maybe you could google "What is a low level format?" Maybe click on the first result at dedomeido dot com slash computers slash low-level-formatting dot html? (Sorry, can't link cause I'm probably a filthy spammer.)

    Ok. We're reading the same thread, right? I read: "I messed up my 4GB stick [which]... [n]ow... shows a capacity of 5KB. I have tried various recovery programs and CLI commands under (Puppy) Linux." Sounds like a Linux-friendly guy attempting a disk recovery to me...

    Well, if you say so.

    If this is really true then I retract my earlier endorsement of the application; if HDD LLF is not simply writing new digital bit-states to overwrite old bit-states - commonly called "zeroing" - then it should never be used by anyone because it will certainly render the disk inoperable. I think it's more likely, though, that it does zero the disk and you're just a little confused about how it does so.

    And the "tiny spaces" you mention are mostly just errata - blocks are made just a little bit larger than is absolutely necessary because the world is not perfect and neither is anything within it so disk writes are allowed a small margin of error. There can be no sure way of absolutely overwriting a block ever without completely rewriting a disk's firmware and its physical access control logic because you cannot otherwise define physical blocksize, but you can overwrite the block more than one time in the hopes that your writes will veer enough to either side to write it all. Also, and I'm not certain, but I suspect that whole idea is entirely irrelevant when it comes to flash media. Could be wrong on that one, though.

    -Mike
     
  11. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    #11 Yen, Oct 1, 2013
    Last edited: Oct 1, 2013
    AFAIK LLF was the controller's own LLF routine, initialised / executed by the BIOS or by special (manufacturer) applications. It was done by the manufacturer itself, by the BIOS, or by a special tool.

    Today the term LLF has changed meaning. It is 'something' below high-level formatting. (HLF: the usual OS-related formatting such as NTFS, EXT4.)
    This is especially true for NAND, where the controller has a much more complex job managing the cells (SLC = one bit per cell, one particular voltage / MLC = two bits per cell, i.e. four states, achieved / controlled by multiple different voltage levels). Concerning data, yes, it is of course binary; but concerning cell states, there are more. (Unused = free, used, still occupied but flagged to become 'erased'.)
     
  12. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    #12 mikeserv, Oct 1, 2013
    Last edited: Oct 1, 2013
    Well, kind of. A high level format is what I described in the first post in this thread: a filesystem marks off its boundaries on the block device and calls it a night. A low-level format is an explicit overwriting of every physical block on a device regardless of filesystem ownership or interpretation.

    Edit: Ok, now I'm confusing myself. The first statement isn't necessarily true. A filesystem might do something approaching a low-level format within its own boundaries if it likes, though that is rarely the case. Consider the difference between the "Quick" and "Full" format options in Windows when formatting NTFS or FAT. In practice the first statement is generally true, but not necessarily always so.

    The NAND states you mention interest me, and I did a lot of research on the subject when I deployed an SSD-backed software RAID with the newly Linux-mainlined bcache abstraction driver (the bcache website has a lot to say about this, and I wrote in the very bottom wiki-page entry on troubleshooting rogue superblocks). As I understand it, the states are a sort of firmware-level filesystem abstraction, which makes things more like the BIOS days than not. The firmware incorporates built-in logic to maximize wear-leveling and to dish out write jobs in random "buckets", because NAND incorporates no real physical block-size and can only be written so many times before it fails. Again, though, this is a fuzzy concept to me and I could certainly know better.

    And I don't doubt that HDD LLF is used as a format tool by any type of person, manufacturer or end-user, to zero their drives. I just find it hard to believe that it does anything more or less than that.

    -Mike
     
  13. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    #13 Yen, Oct 1, 2013
    Last edited: Oct 1, 2013
    To me it sounds like the OP can fix it just by deleting the partitions (MBR) and re-creating a new one.... I'd try M$'s own diskpart first.


    Concerning SSDs: AFAIK you can only write in blocks. As soon as you change just one byte, you have to rewrite the entire block. The write rate seems to get faster as the data size approaches one block size (write amplification).
    When you delete data, the controller collects cells to create completely erasable blocks (a 'third' state: flagged to become erased - obsolete, but not free yet).
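A back-of-envelope illustration of that cost (the 512KiB erase-block size here is an assumed example figure; real controllers vary): changing a single byte still forces the controller to rewrite a whole block.

```shell
# Worst-case write amplification for a one-byte change.
changed=1                        # bytes the application actually changed
block=$((512 * 1024))            # assumed erase-block size: 512KiB

echo "write amplification: $((block / changed))x"   # 524288x
```

In practice controllers batch and coalesce writes, so the real factor is far lower, but this is why write rates improve as the data size approaches the block size.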

    SSD controllers can also do S.M.A.R.T. (failure forecasting) and have 'spare cells'....
     
  14. mikeserv

    mikeserv MDL Novice

    Sep 28, 2013
    9
    0
    0
    Yeah! I don't think it's MBR though, but something very close. It looked to me like a truncated partition table due to misplaced/corrupted superblocks - the filesystem markers. He did of course try a repartition/reformat but it left behind the offending marker and continued to report as truncated. So first I recommended zeroing the disk with dd which should definitely solve the problem but is admittedly overkill, so I thought twice and recommended wipefs which specifically targets and removes only superblocks.

    Oh, but I think you're right about DISKPART. If I remember correctly the CLEAN command should do what wipefs does. I'm just not hip to Windows stuff.

    I guess I pissed off HDD LLF guy though, so I went with it.

    They can S.M.A.R.T.? I thought that was just a mechanical drive thing. Cool.



    -Mike
     
  15. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    Modern SSDs can S.M.A.R.T.; thumb drives (USB pen / stick) I guess cannot.
    I had been interested in the technology when I decided to buy a small system SSD some years ago. They were expensive: ~400 € for 100 GBytes!!!

    So I learned they use T.R.I.M. and S.M.A.R.T. The latter is actually far more important on SSDs, because if you didn't manage uniform use of the cells you'd lose some of them quickly. SSDs also have spare cells to keep the blocks going until they are finally 'erased'. Older SSDs had no spares, so it was advised to use a maximum of 70% of total capacity.


    Diskpart is quite powerful. Yes, it has the clean and clean all parameters; clean all additionally writes all zeros.


    diskpart --> list disk should give a first overview.

    Then select disk <disk number of the drive> --> clean ... later create partition primary --> select partition 1 --> format fs=ntfs quick ... or like that... :)
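Laid out as a single diskpart session, that sequence looks like this (disk 2 is a placeholder; take the real number from list disk, and note that clean wipes the whole stick):

```
diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> clean
DISKPART> create partition primary
DISKPART> select partition 1
DISKPART> format fs=ntfs quick
DISKPART> exit
```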
     