I tried many times to force dedup to work on my existing 20H2/19041 build (and on 20H1 too) and nothing helped; there were errors after the second mount. So I created a virtual machine running 1909 and now mount/maintain my large deduped VHDX inside it, attached natively by Hyper-V. The drive is shared via SMB and can be reached from the host system. The VM normally consumes only 1-2 GB of RAM while it's running, letting me use my deduped drive whenever I need it. That's an acceptable tradeoff for me.
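Roughly, the setup looks like this; the VM name, paths, and drive letters below are just examples, adjust to your own:

# On the Hyper-V host: attach the deduped VHDX to the 1909 guest
Add-VMHardDiskDrive -VMName "Dedup1909" -Path "E:\Disks\DedupStore.vhdx"

# Inside the 1909 guest: expose the mounted volume over SMB so the host can reach it
New-SmbShare -Name "DedupStore" -Path "D:\" -FullAccess "Everyone"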
I've recently been getting an issue where my drive refuses to optimize or do pretty much any dedup job other than unoptimize; it claims the filesystem is possibly damaged, with error code 0x8056530E. I've checked the filesystem multiple times and there is no damage, but the system still refuses to do anything. Anyone have any ideas?
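For reference, this is roughly what I mean by "checked the filesystem" (D: assumed; the scrub is what keeps finding no damage):

chkdsk D: /scan
Start-DedupJob -Volume D: -Type Scrubbing
Get-DedupJob    # wait here until the scrub job finishes
Get-DedupStatus -Volume D: | Format-List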
So, it's rather obvious to do, but I still thought I'd post it. This is what I have running now, and it lets me use dedup almost as normal. I've placed a .ps1 file somewhere on my computer with these contents:

Remove-Item "D:\System Volume Information\Dedup\Settings\HSM.00.CFG"
Remove-Item "D:\System Volume Information\Dedup\Settings\HSM.01.CFG"
Enable-DedupVolume -Volume D: -DataAccess

Now go to Task Scheduler and schedule a task at logon, referencing the .ps1 file. Make sure you select "SYSTEM" as the account it runs as. I've also tinkered a bit with delaying it on startup. This makes the volume accessible within a minute after logging in. Thanks to terminx for finding this workaround.
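If you'd rather create the task from PowerShell than click through the Task Scheduler UI, something like this should be equivalent (the script path and task name are just examples):

# Run the cleanup script at logon as SYSTEM, with a short startup delay
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-ExecutionPolicy Bypass -File "C:\Scripts\dedup-access.ps1"'
$trigger = New-ScheduledTaskTrigger -AtLogOn
$trigger.Delay = "PT1M"    # ISO 8601 duration; tweak to taste
$principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount
Register-ScheduledTask -TaskName "DedupDataAccess" -Action $action -Trigger $trigger -Principal $principal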
Just wanted to jot down my experience with Win10 Pro 20H2 (19042.804) dedupe; some odd details that might further investigation. The system has gone through a few versions of the dedupe packages, notably the ones for 17134 and 17692, and now 19041.1 and 19041.610 with the registry fix.

Originally I had dedupe volumes on a SATA SSD (H:) and a SATA HDD (I:). I recently added an NVMe SSD volume (G:) for further testing on 20H2. None of these volumes are members of storage pools, mappings, or anything special; they are all 'simple' volumes. For all volumes, data is fully accessible (as in, I have not run into the inaccessibility issues others have listed).

Dedupe on H:: optimize, garbage collection, and scrub all begin and then immediately exit with a failure. For optimize, it's the filter driver message; for the other two, it's "not enabled for deduplication" (it is; I have tried cycling enable/disable). Event Viewer reports the drive is locked by BitLocker. It is not a BitLocker-enabled volume/drive, so I'm not sure how that's happening.

Dedupe on I:: optimize, garbage collection, and scrub all seem to work, although interestingly, if they're cancelled, they continue to run without a discoverable job in PowerShell or Event Viewer; the process is still visible and active in Task Manager. I was unable to run on this volume long enough to determine whether it's doing useful work, sorry; it's a slow HDD that needs to be freed for other work. More on the 'useful work' concern in the findings from the G: jobs.

Dedupe on G:: optimize runs and does useful work, but appears to hang and not get past the point where it should be finished (edit: it does finish; it just takes WAY longer than it has any right to wrap up), given the set of data and the results after a manual cancellation. Garbage collection doesn't seem to accomplish anything useful: for my set of data (~500 GB, 17% dedupe savings, likely 50-80 GB of trash to clean) it gets stuck cycling 5 GB or so in and out, indefinitely. Perhaps it's accidentally rehydrating some data and then trashing it? Potentially conflicting/data-adding programs are disabled during this process, so it's not likely new data competing with the garbage being removed. Manual cancellation and retesting a couple of times shows no net gain in available space. Scrub completes and finds no errors.

Edit 2: I have since wiped (diskpart 'clean') H:, reformatted, re-enabled dedupe on it, added my data back, and successfully deduplicated it (optimize + garbage collect). It really feels like some difference between the old dedupe versions wasn't updated/upgraded when moving to the new packages. Unfortunate.

Edit 3: I really can't describe it well, but it seems like as long as consistent activity is happening on either my G: or H: volume, Windows starts losing its mind, railing on those volumes: "System" reports as fully saturating the SSDs' active time. I disabled Dedupe-core and the issue alleviated. I can't imagine what could be happening, as the dedupe scheduled tasks are disabled, I've rebooted since the most recent manual dedupe operation, and as far as I know there are no default or otherwise configured live deduplication operations.

Edit 4: Who knows what improved the situation, as I've probably disabled and re-enabled dedupe a dozen times recently and done some smaller Win10 updates, but deduplication operations, including optimization and garbage collection, all seem to complete properly now, although they still take extraordinarily long and absolutely annihilate the drives with what I have to assume are excess IOPS.
Edit 5: I now encounter the problem some other users described, where certain files are inaccessible unless dedupe is enabled; alas, with it enabled, I get erroneous/excess drive activity on even the most basic drive accesses, which absolutely thrashes the PC and drive.
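For anyone retracing the 'phantom job' behaviour above, these are the sorts of commands I used to hunt for and kill the jobs (G: as the example volume):

Get-DedupJob                                # should list any running or queued jobs
Get-DedupStatus -Volume G: | Format-List    # savings figures and last-run results
Stop-DedupJob -Volume G:                    # cancel whatever is attached to the volume
Start-DedupJob -Volume G: -Type GarbageCollection -Full    # retry with a full GC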
Has anybody got it working with 20H2 (19042.867)? Every package I've tried so far fails with "The specified package is not applicable to this image".
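For reference, this is how I'm applying them; the cab name below is a placeholder for whichever package I'm trying:

# Check what's already installed; the package build has to match the image build
dism /online /get-packages /format:table | findstr /i dedup

# Placeholder file name - substitute the actual cab for your build
dism /online /add-package /packagepath:"C:\dedup\Microsoft-Windows-Dedup-Package.cab"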
OK then, assuming I have access to Win Server, does that help in any way? From what I understand, those files are extracted from Win Server?
Gave it a whirl, and I can see why it isn't enabled for consumer editions of Windows. I think that to become widely viable, the filesystem should dedupe in real time. For example, if you make a copy of a file, at no point would there be two allocated copies of the same file; rather, the file would be deduped on the fly and the copy created instantly, taking up virtually no additional storage space. On that topic, tiered storage pools would be awesome as well, if done in a way that the performance tiers are simply part of the usable storage: when the capacity tier gets full, files could be swapped between the drives at a potential performance hit. So if you have a 1 TB HDD and a 512 GB SSD, you would have 1.5 TB of usable storage in a single, smartly tiered pool. On-the-fly file compression (instead of having to rerun compact) would be a plus as well.
Hey, would it be possible to get the latest dedup pack for Win10 20H2? I went to the first post, but the manual DISM link is dead and the other packages there are old. Majid
Does anyone still have the 17763 package? All the links for it in this thread are dead. I'm running LTSC (10.0.17763.1935)