I use StableBit DrivePool on my WHS 2011 server. It works perfectly. I can't give a link (not enough posts yet), but you should find it easily on Google.
Call it RAID at the file level. You can group n HDDs into a single virtual HDD. You can set redundancy (like RAID 1) for one or more disks, but unlike old-school RAID you can also do that at the level of a single folder (a more flexible implementation of what WHS 2007 did). You can even add an online disk where the files are stored inside HDD image files rather than as individual files, so they can be encrypted or compressed just like a normal HDD. That way you can put even two (or more) cloud storage providers, say Google Drive and OneDrive, into a "RAID-like" mode.

All of this is done at the file level, not at the block level like old-school RAID, so the individual members of your virtual drive (the pool) can be read without DrivePool installed, by any OS capable of reading NTFS (say a 15-year-old Linux, or Windows NT 3.51, or Windows 95 with NTFSDOS). Files are compared using hashes, so unlike traditional RAID there is no risk of a corrupted file being duplicated over the good copy. All of this happens transparently; the user just uses the pool as if it were a plain single HDD.

In short, one of the best pieces of software released in the last 15 years. MS now has a poor copy of it called Storage Spaces, which offers only a subset of DrivePool's features. DriveBender is an alternative, but it's less simple and flexible to use.
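To make the "file level, with hashes" idea concrete, here is a tiny Python sketch of the principle. This is not DrivePool's actual code; the member paths, folder names and helper functions are made up purely for illustration:

```python
# Minimal sketch of file-level duplication with hash checking.
# NOT DrivePool's implementation; paths and helpers are hypothetical.
import hashlib
import shutil
from pathlib import Path

# Two pool members: in DrivePool these would be hidden GUID folders
# on two different physical NTFS disks (hypothetical paths here).
MEMBERS = [Path(r"D:\PoolPart.AAAA"), Path(r"E:\PoolPart.BBBB")]

def file_hash(path: Path) -> str:
    """Hash a file's content so the copies can be compared reliably."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def store(relative: str, source: Path) -> None:
    """Write the same file to every member disk (duplication)."""
    for member in MEMBERS:
        target = member / relative
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)

def read_verified(relative: str, known_hash: str) -> Path:
    """Return a copy whose hash matches; a corrupted copy is never
    blindly propagated over the good one."""
    for member in MEMBERS:
        candidate = member / relative
        if candidate.exists() and file_hash(candidate) == known_hash:
            return candidate
    raise IOError(f"No intact copy of {relative} found on any member")
```

The point is simply that every member disk holds ordinary NTFS files, so any copy can be read or verified with nothing more than a filesystem and a hash.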
It has absolutely nothing to do with it. WHS 2007 (and DrivePool) are about *duplicating* files for redundancy, and/or pooling multiple storage sources into a single drive. Deduplication is about saving space by *removing* redundancy on a single volume. They are almost opposite goals.

That said, I use both deduplication and duplication at the same time; it may sound stupid but it isn't. Say I have 4TB of data and 2x 2TB HDDs. The data fits on the available space, obviously, but if one of my HDDs dies I simply lose about 50% of my data. Instead, thanks to deduplication the 4TB of data fits on a single drive, and then it is duplicated by DrivePool. I still have all my data within the same 4TB of disk, but if one of the two drives fails, nothing happens (aside from the alert from DrivePool). All I have to do is take out the broken HDD, put in a new one, and wait for DrivePool to re-duplicate the data (in the background) to restore the redundancy. I don't even need to turn off the PC. Zero data loss, zero downtime.

BTW, DrivePool gives you 30 days of full functionality, and nothing it does is irreversible. There's no need to format a drive and no need to dedicate a full drive to it (the members of the pool are just hidden folders with a GUID name), so the best way to understand how it works is to test it yourself.

P.S. As for deduplication, there is hardly anything as good, robust, compatible and light on resources as the deduplication from MS (which you can get, unofficially, even on W8-W11). Even the ZFS (online) implementation is good, but it needs around 8GB of RAM for each TB of deduplicated storage. The one from MS (offline, post-process) needs around 300MB of RAM per terabyte, and only while the deduplication job runs, not all the time.
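To make the "opposite goals that still combine" point concrete, here is a toy Python sketch. It is not how MS dedup or DrivePool work internally; the blob store, index file and paths are invented for the example:

```python
# Toy illustration: deduplication removes redundancy WITHIN a volume
# (store each unique content once); duplication adds redundancy ACROSS
# volumes (keep a full copy on a second disk). Names are hypothetical.
import hashlib
import shutil
from pathlib import Path

def dedup_store(files, volume: Path) -> None:
    """Write each unique content blob once, plus a tiny index mapping
    file names to the blob that backs them (a crude stand-in for the
    reparse points the real MS deduplication uses)."""
    blobs = volume / "blobs"
    blobs.mkdir(parents=True, exist_ok=True)
    index = []
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        blob = blobs / digest
        if not blob.exists():          # identical content stored only once
            blob.write_bytes(data)
        index.append(f"{name}\t{digest}")
    (volume / "index.txt").write_text("\n".join(index))

def duplicate(volume: Path, mirror: Path) -> None:
    """Copy the whole (already deduplicated) volume to a second disk,
    so losing either disk loses nothing."""
    shutil.copytree(volume, mirror, dirs_exist_ok=True)

# Two files with identical content: dedup keeps one blob,
# duplication keeps that blob on both disks.
demo = {"a.iso": b"same payload", "b.iso": b"same payload"}
dedup_store(demo, Path("disk1"))
duplicate(Path("disk1"), Path("disk2"))
```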
I was referring to the principle of a filesystem on top of a filesystem. You were referring, if I understood correctly, to the "Drive Extender" of Windows Home Server.
There isn't any FS on top of an FS in DrivePool. It uses just plain NTFS (or ReFS), that's the point. And even MS deduplication doesn't use a special FS, just a special kind of reparse point (the same mechanism used for symbolic links and junctions). As a result, DrivePool members are readable by any OS supporting NTFS (you just don't see the virtual drive if DrivePool isn't installed). On a deduplicated FS, a filter driver is needed to access all the files, but a 20-year-old OS can still read any file which hasn't actually been deduplicated, can write any file, and can even run a chkdsk without messing anything up.
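For the curious, here is a small Python sketch of how you could check that from user land (Windows and Python 3.8+ assumed; the dedup reparse tag value is the one documented in winnt.h, not a constant Python exposes, and the path is hypothetical):

```python
# Sketch: inspect a file's reparse-point status with the standard library.
import os
import stat

IO_REPARSE_TAG_DEDUP = 0x80000013  # winnt.h value for MS Data Deduplication

def describe(path: str) -> str:
    info = os.lstat(path)  # look at the file itself, don't follow links
    attrs = getattr(info, "st_file_attributes", 0)   # Windows-only field
    if not attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT:
        return "plain file: readable by anything that can read NTFS"
    tag = getattr(info, "st_reparse_tag", 0)         # Python 3.8+, Windows
    if tag == IO_REPARSE_TAG_DEDUP:
        return "deduplicated: a filter driver is needed to read the data"
    return f"other reparse point (tag {tag:#x}), e.g. a symlink or junction"

print(describe(r"D:\share\somefile.bin"))  # hypothetical path
```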