Windows 10 1903 - takes way too much space

Discussion in 'Windows 10' started by MonarchX, May 25, 2019.

  1. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    https://forums.mydigitallife.net/th...-way-too-much-space.79675/page-3#post-1527825
     
  2. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    I took the time to get more info. In the end I do not agree even with the half-duplex claim for SATA.
    Reason: there are two common ways to realize duplex communication: TDD (time division duplex, which would make me agree) and FDD (frequency division duplex)!

    This is, by the way, an exciting topic. The details would exceed the thread's topic even more. Just saying that FDD makes it possible for the bus to communicate in both directions at the same time.

    I tried to look at the matter and figure out what could justify the hypothesis that SSDs can only read or only write at any given moment.
    The SATA bus does not impose that limit, and the SSD (the flash array) itself does not either.
    The question of why it should remains open. :)
     
  3. VDev

    VDev MDL Member

    Sep 9, 2015
    109
    57
    10
    I used WinToolkit for the rebase; the free version of NTLite is buggy and always says "DISM failed, skipping step" even when I didn't integrate NetFx 3.5.
    I do updates, WTK add-ons and silent installers in WinToolkit, use NTLite for Explorer and other tweaks, and reduce the WIM size slightly with wimlib, which is integrated in NTLite. I rebase using WinToolkit. You can rebase boot.wim files too: run the component remover, close the window and choose "Keep image mounted", then open DISM with the /Image command line pointing at the WinToolkit_xxx mount path and run /ResetBase to trim the image from 2.1 GB to a mere 800 MB, roughly as sketched below. You can apply the same thing to trim winre.wim in the System32\Recovery folder too.
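
    A minimal sketch of that DISM part, assuming WinToolkit has left the image mounted (both mount paths here are placeholders, adjust them to whatever folder was actually used):

    rem trim the mounted image in place (elevated prompt)
    Dism /Image:C:\WinToolkit_Mount /Cleanup-Image /StartComponentCleanup /ResetBase

    rem same idea for winre.wim: mount it, clean it, commit the changes
    Dism /Mount-Wim /WimFile:C:\WinToolkit_Mount\Windows\System32\Recovery\winre.wim /Index:1 /MountDir:C:\WinRE_Mount
    Dism /Image:C:\WinRE_Mount /Cleanup-Image /StartComponentCleanup /ResetBase
    Dism /Unmount-Wim /MountDir:C:\WinRE_Mount /Commit
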
    As for killing reserved storage completely:
    1. Use a registry tweak to disable Storage Reserves (from Chef Koch's tweaks repo).
    2. Change the dynamically sized page file to a fixed size, otherwise reserved storage will keep actively eating/reserving space on behalf of the dynamic page file.
    3. Delete all restore points and disable System Restore.
    I don't think it affects data at all, since NTFS is a journaled file system, so a log entry is written for every operation.
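
    For steps 1 and 2, the usual way from an elevated prompt looks roughly like this (ShippedWithReserves is the value the commonly shared tweak flips, 4096 MB is only an example size, and a reboot is needed afterwards):

    rem 1) disable Storage Reserves via the registry
    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ReserveManager" /v ShippedWithReserves /t REG_DWORD /d 0 /f

    rem 2) switch the page file from system-managed to a fixed size
    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
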
     
  4. pf100

    pf100 Duct Tape Coder

    Oct 22, 2010
    2,069
    3,447
    90
    #64 pf100, Jun 1, 2019
    Last edited: Jun 1, 2019
    RAM contents have always been lost on power-off, with any file system. One exception is NVRAM used as RAM, which is not common.
    Edit: I'll try to explain a little better. If you yank the power cord, the contents of RAM don't get written to disk, which can corrupt the file system. A journaled file system can handle that better, but it can still get corrupted. When you do a normal shutdown that doesn't happen, because the contents of RAM are safely written to the drive, unless you have other hardware issues (too high an overclock, a dying hard drive, bad RAM, etc.). Losing power to RAM without a clean shutdown is roughly the same thing as pulling out a flash drive, with or without write caching, while writing to it: you may or may not corrupt the file system or lose files.
     
  5. whitestar_999

    whitestar_999 MDL Addicted

    Dec 9, 2011
    713
    318
    30
    #65 whitestar_999, Jun 2, 2019
    Last edited: Jun 2, 2019
    How? Even the lowest-tier 120 GB DRAM-less SSD has ~54 MB/s 4KiB random write speed, compared to ~1 MB/s for a usual HDD (CrystalDiskMark results), or did you mean something else?
     
  6. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    I don't understand your numbers, but in my opinion SSDs are not that special for 4K random write performance, which is the main write pattern in Windows and likely other operating systems. There are various acceleration techniques, but the one I consider most effective is having around 30% over-provisioning, which makes the solution relatively expensive.
    On balance, however, it is obviously a lot better to use an SSD than an HDD, except maybe for long-term backup, where data retention over a long period is very important.
     
  7. pf100

    pf100 Duct Tape Coder

    Oct 22, 2010
    2,069
    3,447
    90
    #68 pf100, Jun 2, 2019
    Last edited: Jun 3, 2019
    Here are CrystalDiskMark 4K random write speeds, with all default settings, that I ran recently on my SATA III 6 Gb/s laptop.

    *Crucial MX500 SATA III 6Gb/s 500GB SSD with "Momentum Cache" (writes cached with ram) - 20% overprovisioning
    Random Write 4KiB (Q= 1,T= 1) : 271.881 MB/s [ 66377.2 IOPS]

    *Crucial MX500 SATA III 6Gb/s 500GB SSD without "Momentum Cache" write caching - 20% overprovisioning
    Random Write 4KiB (Q= 1,T= 1) : 54.490 MB/s [ 13303.2 IOPS]

    *Crappy 5400 rpm SATA II 3Gb/s laptop hard drive
    Random Write 4KiB (Q= 1,T= 1) : 0.842 MB/s [ 205.6 IOPS]

    *Unknown 128 GB USB 3 flash drive
    Random Write 4KiB (Q= 1,T= 1) : 2.049 MB/s [ 500.2 IOPS]

    *Horribly slow cheap SD card
    Random Write 4KiB (Q= 1,T= 1) : 0.018 MB/s [ 4.4 IOPS]
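
    If anyone wants to reproduce a roughly comparable QD1 4K random write run without CrystalDiskMark, Microsoft's DiskSpd can do it; something along these lines should be close (the test file path is just an example, and newer CrystalDiskMark builds use DiskSpd under the hood anyway, as far as I know):

    rem 1 GiB test file, 4K blocks, 30 s, 1 thread, 1 outstanding I/O, random, 100% writes, caches disabled, latency stats
    diskspd -c1G -b4K -d30 -t1 -o1 -r -w100 -Sh -L D:\cdm_test.dat
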
     
  8. bfoos

    bfoos MDL Guide Dog

    Jun 15, 2008
    757
    701
    30
    Yes yes yes, but as we all know, benchmarks != real-world performance. They are numbers derived under ideal conditions. You can see what @ch100 is referring to very simply by moving a directory containing thousands of files of mixed sizes, many of them mere kilobytes with some larger files mixed in. Try your Steam directory, for example. Speed is always going to be faster while transferring large contiguous files, and it will tank and chug along on many small files of mixed sizes.
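
    If you want to put a number on that without a benchmark tool, a plain robocopy of such a directory prints the average throughput in its summary (the source and destination paths here are only examples):

    rem /E = include subfolders, /NFL /NDL = suppress per-file and per-folder output so only the summary remains
    robocopy "C:\Program Files (x86)\Steam\steamapps" "D:\steamapps_copy" /E /NFL /NDL
    rem the "Speed :" lines at the end show the average Bytes/sec and MegaBytes/min for the whole run
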
     
  9. pf100

    pf100 Duct Tape Coder

    Oct 22, 2010
    2,069
    3,447
    90
    True, but they were discussing CrystalDiskMark numbers and I already had those benchmark numbers saved.
     
  10. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    This one is the only relevant benchmark in the context. :)
     
  11. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    I didn't initially discuss CrystalDiskMark, which I use occasionally; I generally prefer the ATTO benchmark.
    It was a general statement from me when @whitestar_999 brought this into the discussion.
    I think there is a typo in that post, and the typo is what I said I don't understand, not the tool itself.
    Otherwise, this post from @bfoos https://forums.mydigitallife.net/th...-way-too-much-space.79675/page-4#post-1528356 clarifies what my intention was when I posted.
     
  12. whitestar_999

    whitestar_999 MDL Addicted

    Dec 9, 2011
    713
    318
    30
    Well, the benchmark numbers may not match real-world performance, but the differential between SSD and HDD remains the same. The benchmark says write speed is many times faster than an HDD even for lots of small files, and that's correct. On my laptop, a download folder with thousands of files (ranging from a few hundred KB to dozens of MB) becomes ready on the SSD (the green progress bar in the Explorer address bar) in much less time than the same folder on the HDD.
     
  13. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    Well, the difference between 'real' values and a benchmark is that a benchmark creates artificial data with predetermined physical attributes and reads/writes it to disk in a predetermined mode (random or sequential), whereas under 'real' conditions you are using data as it is... real files with content.

    On some controllers it even depends on whether the data is compressible....
     
  14. VDev

    VDev MDL Member

    Sep 9, 2015
    109
    57
    10
    Yeah, I found that slipstreaming Windows using NTLite, WTK, W8/10UI, etc., where ISO files and updates are extracted and applied, gave the maximum real-world R/W speeds of any media except USB flash drives.
     
  15. TheCollDude489

    TheCollDude489 MDL Member

    Apr 16, 2018
    147
    32
    10
    Well, that does make sense, although wouldn't it be larger since my machine is maxed out at 64 GB of RAM? Also, that file was there long before the first time I hibernated the computer (specifically, it was already there, at the same size, when I first installed Windows).
    I assume I can change the number to anything I want? Running the command would actually increase the size of the hibernation file, since I have 64 GB of RAM installed.
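
    For what it's worth, the size is set as a percentage of installed RAM from an elevated prompt, e.g.:

    powercfg /hibernate /size 50
    rem on a 64 GB machine that would make hiberfil.sys roughly 32 GB; the command also forces hibernation on
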
     
  16. bfoos

    bfoos MDL Guide Dog

    Jun 15, 2008
    757
    701
    30
    Hiberfil.sys is also used for the "Fast Startup" feature, which is why, even though traditional hibernation is disabled by default in Windows 10, the file is still created for Fast Startup to use.
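
    On Windows 10 you can also keep Fast Startup but shrink the file, or drop it entirely:

    powercfg /h /type reduced     (smaller hiberfil.sys, Fast Startup only, no full hibernation)
    powercfg /h /type full        (full-size file, traditional hibernation available again)
    powercfg /h off               (removes hiberfil.sys and disables Fast Startup as well)
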
     
  17. AveYo

    AveYo MDL Expert

    Feb 10, 2009
    1,836
    5,693
    60
  18. SAM-R

    SAM-R MDL Guru

    Mar 21, 2015
    5,822
    5,605
    180
    Space doesn't mean much on a 1 TB HDD
     
  19. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30