Working Deduplication files for Microsoft Windows 10

Discussion in 'Windows 10' started by dreamss, Oct 4, 2014.

  1. TONSCHUH

    TONSCHUH MDL Addicted

    Jun 11, 2012
    #281 TONSCHUH, Aug 17, 2016
    Last edited by a moderator: Apr 20, 2017
Why does the older one work but not the newer one?

    I'm just in the process of running the unoptimization, and now the files seem to need much more space than before; I have to move files off the affected drives to be able to finish the process.

    Code:
    Start-DedupJob -Volume <VolumeLetter> -Type Unoptimization
    
    When I move files from a deduplicated drive to a non-deduplicated drive, will the files arrive at the destination drive un-deduplicated?

    :confused:
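
    A rough way to answer the space question before kicking off the job: to rehydrate everything, the volume needs roughly its reported saved space back as free space. A sketch, assuming the Deduplication PowerShell module is installed; the drive letter is just an example:

    ```powershell
    # Compare free space against the space dedup is currently saving;
    # unoptimization hands that saved space back to the files.
    $v = Get-DedupVolume -Volume "D:"
    $v | Select-Object Volume, FreeSpace, SavedSpace, SavingsRate

    if ($v.FreeSpace -gt $v.SavedSpace) {
        Start-DedupJob -Volume "D:" -Type Unoptimization
    } else {
        Write-Warning "Not enough free space to rehydrate; move files off first."
    }
    ```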
     
  2. EFA11

    EFA11 Avatar Guru

    Oct 7, 2010
    #282 EFA11, Aug 18, 2016
    Last edited by a moderator: Apr 20, 2017
    Unoptimizing will rehydrate the files on the volume, so yes, it will take more space. For example, my 4TB drives hold ~8TB with deduplication; without it I'd need two 4TB drives to hold the same files.

    And yes, when you move the files from a dedup volume to a non-dedup volume, they are no longer dedup'ed. It's the volume, and how it organizes the files, that holds the deduplication.
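
    If you want to know how much space moving a given folder off a dedup volume would really free up, there's a cmdlet for that. A sketch, with a made-up path; the property names are as I understand the cmdlet's output:

    ```powershell
    # DedupDistinctSize is the chunk-store data unique to these files;
    # chunks shared with other files stay behind on the volume.
    Measure-DedupFileMetadata -Path "D:\Archive"
    ```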
     
  3. TONSCHUH

    TONSCHUH MDL Addicted

    Jun 11, 2012
    Thanks for the info !

    :)
     
  4. T-S

    T-S MDL Guru

    Dec 14, 2012
I'm in the boring process of copying Srv2016-style deduped files to a freshly formatted disk, to re-dedupe them with Server 2012 R2.

    When finished I need to format the source disk, add it to the new pool created by DrivePool, and wait for DrivePool's duplication and Windows' deduplication.

    Very likely the whole process will require more than a week o_O
     
  5. EFA11

    EFA11 Avatar Guru

    Oct 7, 2010
    The things we do :D lol
     
  6. T-S

    T-S MDL Guru

    Dec 14, 2012
I worked for a couple of months with the pool connected to a 2016 VM running inside 2012R2.

    It worked without big performance problems or any data loss, but given that Hyper-V doesn't proxy the SMART data, I was not so comfortable with that setup, so I decided to go back through the boring process.

    All of this for a single damn 153KB DLL, which is incompatible with Server 2016/W10 and makes it crash :mad:
     
  7. abbodi1406

    abbodi1406 MDL KB0000001

    Feb 19, 2011
  8. fbifido

    fbifido MDL Member

    Jun 6, 2007
    #289 fbifido, Aug 19, 2016
    Last edited by a moderator: Apr 20, 2017
  9. TONSCHUH

    TONSCHUH MDL Addicted

    Jun 11, 2012
    @fbifido & @abbodi1406

    Thanks a lot !

    That pretty much saved my weekend, after Windows updated itself to 14901.1000 overnight without asking me, just as I had finished the last drive's un-optimization!

    :worthy:

    This I understand, and thanks for the info. But what I don't understand is that I had to move hundreds of gigabytes off one of my drives just to be able to un-optimize it, because the files which previously fit on that drive, which also still had heaps of free space on it, don't fit on it anymore.

    Does the process decompress / unpack any other files while doing so?

    I had to move / uninstall >1TB, because I kept running out of space during the un-optimization (which is still not finished for that drive, I guess, because it still shows up via "Get-DedupStatus").

    :confused:
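
    To see whether the un-optimization is actually finished, you can watch the job itself rather than the volume status. A sketch; the drive letter is just an example:

    ```powershell
    Get-DedupJob -Volume "D:"      # running/queued jobs, with Progress and State
    Get-DedupStatus -Volume "D:"   # volume stats; only refreshed by completed jobs

    # Once unoptimization has finished, disabling dedup on the volume
    # should make it drop out of the Get-DedupStatus output:
    Disable-DedupVolume -Volume "D:"
    ```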
     
  10. EFA11

    EFA11 Avatar Guru

    Oct 7, 2010
    During the decompression I believe it uses extra working space to do the job as well. I'm really not sure.

    It has to put humpty dumpty back together again, somewhere lol
     
  11. TONSCHUH

    TONSCHUH MDL Addicted

    Jun 11, 2012
    Yeah, true.

    It doesn't really matter now, because I already moved the stuff around, and the latest Dedup pack is working on 14905 as well (fingers crossed it stays that way, at least for the moment).

    :)
     
  12. fbifido

    fbifido MDL Member

    Jun 6, 2007
    If it updates again, just use the 14393_manual step.
     
  13. TONSCHUH

    TONSCHUH MDL Addicted

    Jun 11, 2012
    Ok, thanks !

    :)
     
  14. DonZoomik

    DonZoomik MDL Novice

    Apr 5, 2015
    I was testing WS2016 with larger-than-supported files (1TB+) and I'm consistently seeing chunk store corruption. Can anyone confirm?
    My scenario is writing multi-TB files with pseudorandom data (about a 1:50 dedup ratio) to test image-based backup. Pretty much none of the large files can be fully deduped, and scrubbing shows chunk store corruption.
    WS2012R2 dedup works fine on the same hardware, though throughput is slower. Regarding 1TB+ files: these are semi-supported on WS2012R2, if you write the files in one run and never modify them.

    WS2016 14393 patched to .82 (though there are no dedup component updates)
    HP EliteDesk 800 G1 with i5-4570 and 32GB RAM.
    3× HGST UltraStar 6TB in striped Storage Spaces (the HDDs pass the SMART self-test and chkdsk /R)

    WS2016 forum has very few users so I might get better results here.
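
    For anyone trying to reproduce this, a sketch of how I'd trigger and read the scrub results; cmdlet and event log names are as shipped with the dedup feature, to the best of my knowledge, and the drive letter is an example:

    ```powershell
    # Full scrub of the chunk store, then pull the most recent scrubbing events:
    Start-DedupJob -Volume "E:" -Type Scrubbing -Full
    Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Scrubbing" -MaxEvents 20 |
        Format-List TimeCreated, Id, Message
    ```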
     
  15. EFA11

    EFA11 Avatar Guru

    Oct 7, 2010
    #297 EFA11, Aug 21, 2016
    Last edited by a moderator: Apr 20, 2017
    This happens because of Windows deduplication limitations that cause degradation when reading and/or appending data to and from very large files. Best practice is to not let files grow larger than 1TB; as you said, beyond that is unsupported. If possible, split the files up.

    This might help somewhat, depending on your volumes, but either way you would be best off with this format regardless of the 1TB files (IMO).

    Open CMD and check your volume; make sure the FRS (file record segment size) is 4096 and not the default 1024:

    fsutil fsinfo ntfsinfo Volume:

    Code:
    Microsoft Windows [Version 10.0.14393]
    (c) 2016 Microsoft Corporation. All rights reserved.
    
    C:\Users\efa11>fsutil fsinfo ntfsinfo I: (change to your drive letter)
    NTFS Volume Serial Number :        0x9cfcb412fcb3e4a4
    NTFS Version   :                   3.1
    LFS Version    :                   1.1
    Number Sectors :                   0x000000003a380fd4
    Total Clusters :                   0x000000003a380fd4
    Free Clusters  :                   0x0000000028efa641
    Total Reserved :                   0x000000000000126e
    Bytes Per Sector  :                4096
    Bytes Per Physical Sector :        4096
    Bytes Per Cluster :                4096
    Bytes Per FileRecord Segment    :  4096
    Clusters Per FileRecord Segment :  1
    Mft Valid Data Length :            0x0000000004100000
    Mft Start Lcn  :                   0x00000000000c0000
    Mft2 Start Lcn :                   0x0000000000000002
    Mft Zone Start :                   0x00000000000c4100
    Mft Zone End   :                   0x00000000000cc820
    Max Device Trim Extent Count :     0
    Max Device Trim Byte Count :       0x0
    Max Volume Trim Extent Count :     62
    Max Volume Trim Byte Count :       0x40000000
    Resource Manager Identifier :     357MCCC5-E62C-18G3-0267-2726H86RR135
    You can format the drive either in Windows and choose 4096, or from the CMD prompt, to reformat the NTFS volume with a larger FRS (/L):

    format Volume: /L
    e.g. format I: /L

    You will lose any data on the drive if you format it :D
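
    If you just want the one relevant line instead of the full dump (the drive letter is an example):

    ```bat
    fsutil fsinfo ntfsinfo I: | findstr /C:"Bytes Per FileRecord Segment"
    ```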
     
  16. TheKingDavid

    TheKingDavid MDL Novice

    Aug 22, 2016
    Unfortunately this package seems to be broken on 10240 for me.
    I can access the dedup volume and all its contents, but none of the jobs work and I get error messages in the event log.
    Reading the event log, I gather that the COM components are not registered or not available.

    Any ideas?
     
  17. T-S

    T-S MDL Guru

    Dec 14, 2012

    Why do you want to use components meant for an OS two generations newer than what you have, in such a delicate area?

    Just use the packages released at the time.
     
  18. TheKingDavid

    TheKingDavid MDL Novice

    Aug 22, 2016
    Thanks for your answer.
    Before trying this package, I was using the "Dedup_10586_for_10240" package, with another problem:
    I can access the dedup volume, but when I try to Scrub, Optimize, etc., the job gets stuck at the Initializing step and then fails.
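
    To dig into why the jobs die at Initializing, I'd check the dedup event channels and whether the filter driver is actually attached to the volume. A sketch; log and filter names are as I'd expect them on a working install, and the drive letter is an example:

    ```powershell
    # List the dedup event logs and read the most recent operational events:
    Get-WinEvent -ListLog "*Deduplication*"
    Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 20

    # Check that the dedup minifilter is loaded and attached to the volume:
    fltmc filters            # should list "Dedup"
    fltmc instances -v D:
    ```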