Windows 10 1903 - takes way too much space

Discussion in 'Windows 10' started by MonarchX, May 25, 2019.

  1. JeanYuhs

    JeanYuhs MDL Member

    Feb 16, 2010
    196
    127
    10
    That's just an opinion, not a statement of fact. The reputation this place has more than anything else is "Hey, go there and that's where you find those hacks/cracks/patches/loaders/KMS/etc that allow you to use Windows illegitimately..." if you want a statement of fact.

    Anyway, the information I've presented is basically the same info that Russinovich himself preaches when it comes to system optimization and performance. The page file != virtual memory; it's just one aspect of the virtual memory subsystem, and the old rule of 1.5x the physical RAM is no longer valid and should now be avoided whenever possible. Since the average computer owner never bothers with such things, Windows has changed over the years and doesn't necessarily require that or even treat it as an actual usage pattern - I've worked on laptops with 4GB of RAM that have a system-managed 2GB page file, other systems have 6GB, etc.; it depends on the usage of the actual computer being tested.

    But you folks have fun tinkering, yep.
     
  2. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    #42 Yen, May 29, 2019
    Last edited: May 29, 2019
    It's actually not a question of right or wrong, it's rather a question of......
    Does it still apply today, and
    is there a rational (actually scientifically verifiable), measurable difference?

    In the days of HDDs, people suggested using a fixed page file size. The reason for it was to have a constant size at a constant place (to avoid fragmentation).
    This is obsolete on SSDs.
    Programs can claim page file space and the Windows OS also administrates it.
    It is reasonable to let the OS administrate the page file (the default).

    Hibernation is something different. It is a user feature. One can think about whether one needs it at all or not. It is also used for fast boot.
    If one doesn't need the hibernation state and fast boot, I would completely deactivate hibernation with the command powercfg.exe /hibernate off
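
    For reference, a minimal sketch of the commands involved (run from an elevated prompt; the HiberbootEnabled registry value is the usual switch for fast boot, but verify it on your own build):

        # Turn hibernation off entirely (also removes hiberfil.sys and disables fast boot)
        powercfg.exe /hibernate off

        # Alternatively, keep hibernation but switch off only fast boot via its registry flag
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f

        # Re-enable hibernation later if needed
        powercfg.exe /hibernate on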

    Windows sets the hibernation flag on each accessible physical drive (especially with fast boot enabled).
    This means when you boot another OS (dual boot, for instance) and you want to access one of those drives as well, you cannot write!
    If you could, you would break the actual hibernation state, since the original OS cannot know about the changes made on the SSD by the 'foreign' OS (for instance Linux)...
    That's the reason why I have switched it off.

    But if I were to use it, then at the OS default, because I think the OS administrates it reasonably.

    And finally, when talking about such things... defragmentation on SSDs, yes or no?
    No, since the SSD's controller has a proprietary algorithm that fits best.


    @JeanYuhs
    Your advice and reasoning to use a page file on each drive does not apply to SSDs anymore. They can read and write at the very same time since they have more than one NAND die.
    Such a die consists of smaller planes, which themselves consist of smaller blocks. Finally, a block consists of the pages.

    An SSD can read from die A while it writes to die B. A single die actually behaves like a HDD, which can either read or write at a given moment, yes...
    But a common SSD usually has 8 to 64 of such dies and hence can read and write at the very same time.

    This is how I got the basics of a NAND flash layout...:)
     
  3. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    @JeanYuhs I see it differently. It is not about "hacks/cracks/patches/loaders/KMS/etc" for me, but about understanding the inner workings of Windows far beyond what you can find on forums like tenforums, which have their role and are great in their own ways. There are a few special contributors here whom I will not name because they know who they are ;) and who put a great deal of effort in and share their results with others.
    And I hope that you don't feel like you are one of those whom I called "expert" in a previous post, quite the opposite. :)
     
  4. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    #46 Yen, May 30, 2019
    Last edited: May 30, 2019
    Let me just pick up the defrag topic one more time.
    I have also read a lot of controversial posts about it.
    I have kept the defrag scheduled task switched off since W7 because I wanted to run it only manually from time to time.

    This did not change when I got my first system SSD in the year 2010, EXCEPT that I never ran any defrag process on it.
    I used W7 with that SSD until last April (almost 9 years!!!) without any noticeable issue! I replaced the SSD only because I migrated to LTSC and wanted a new one which is faster and bigger.

    This means I cannot confirm this bit:
    For now, on LTSC, one of the first things I did was to disable the scheduled task for defrag. The new SSD has never seen any defrag process running on it so far.
    I could now have a look at the terabytes written, execute defrag on it manually, and after completion check the TBW again. The difference would be the wear created by defrag.
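
    A rough sketch of how one could read the wear counters from PowerShell before and after such a manual defrag (Get-StorageReliabilityCounter only exposes a generic Wear value, and not every drive reports it; the exact host-writes/TBW figure usually needs the vendor tool or smartmontools):

        # Generic reliability counters for each physical disk (elevated PowerShell)
        Get-PhysicalDisk | Get-StorageReliabilityCounter |
            Select-Object DeviceId, Wear, Temperature, PowerOnHours |
            Format-Table -AutoSize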

    Although I am not perfectly sure whether it's good to leave it disabled all the time: again, defrag creates additional wear, which has to be weighed against another negative effect I personally never saw on W7 within 8 years of usage: filesystem metadata fragmentation issues.
    I guess I have to read more about it to update my knowledge. :)
     
  5. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    One more...
    I got curious and did the following:

    I have written down the TBW of my sys SSD (250 GB) of LTSC and of the 1TB SSD. Both are NTFS formatted and used by / accessible to LTSC. (Besides that I have an EXT4 formatted sys SSD with Kubuntu on it.)
    I never ran defrag on any of the SSDs so far.
    So I started the first time to run defrag on:

    1) sys SSD of LTSC
    2) 1TB NTFS data SSD

    Guess what?
    Defrag completed within seconds on BOTH SSDs, saying everything 'has been optimized'.
    I rechecked the wear (TBW) and it did not change at all on either drive!!!

    THIS behavior is completely different from W7! When running defrag on W7 on an SSD for the first time, it moves a lot, it takes a lot more time and it creates wear.

    So my conclusion now is: defrag on W10 creates no (or almost no) wear.
    BUT it actually had 'nothing' to do!

    I am updating now what I have posted.
    People can leave the scheduled task for defrag enabled on W10. It creates no significant extra wear.
    Anyway, its purpose is questionable. I suppose the SSDs' controllers have already done everything regarding optimization in advance.
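
    For anyone who wants to check or toggle that task themselves, a small sketch (the task path below is the standard one on W10, but confirm it on your install):

        # Inspect the built-in optimization task
        Get-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName ScheduledDefrag

        # Disable or re-enable it
        Disable-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName ScheduledDefrag
        Enable-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName ScheduledDefrag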
     
  6. JeanYuhs

    JeanYuhs MDL Member

    Feb 16, 2010
    196
    127
    10
    That is completely inaccurate, and irrelevant, since the SATA controllers and even the NVMe controllers can't do both operations at the same time. Even the best, most high-performance storage controllers in use today in desktop computing hardware cannot do reads and writes on the same device at the same instant; it's read, then write, then read, then write, over and over again. If you believe or think otherwise, you're wrong.

    Want an easy way to prove this to yourself? Find some very large file, and I mean something very large, like larger than 4GB in size, somewhere on your storage, whatever it is - for this test I'm going to presume you are using an SSD or NVMe storage device. I say use a very large file because, given the speeds such storage devices are capable of, this test needs to last a few seconds so you can see the actual transfer rates. I'd say find some file 6GB in size or larger if you can - that means a single file, not several files that total larger than 6GB. An ISO file, an MKV or MP4 video file, or a Blu-ray ripped to the storage in native .ts or .m2ts would work great - the point being you want a single contiguous file larger than 4GB in size (so obviously you'll be using NTFS, and I say that since we're dealing with Windows here).

    Now, when you have such a file, right click on it and choose Copy.

    Now, right click someplace and choose Paste, and as soon as you do it watch the transfer rate as the copy process starts and proceeds.

    What you'll see - presuming you're doing exactly what I just suggested and you're doing this on one single SSD or NVMe and not across multiple drives - is that the transfer speed will show at about half the maximum potential speed your SSD or NVMe is capable of. If it's a SATA III SSD with a rough max of about 560MB/s for reads and about 540MB/s for writes, you'll see write speeds during this copy process of about 240 to 260MB/s, because the drive controller can read at any given moment or it can write at any given moment, it cannot do both at the same time.
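
    If you want numbers instead of watching the Explorer dialog, here is a hedged sketch of the same test from PowerShell (the paths are placeholders; it simply times Copy-Item and divides the file size by the elapsed seconds):

        # Time a copy of one large file to the same drive and compute the average MB/s
        $src = 'D:\test\big.mkv'        # placeholder: any single file larger than 4GB
        $dst = 'D:\test\big_copy.mkv'   # destination on the same physical drive
        $size = (Get-Item $src).Length
        $elapsed = (Measure-Command { Copy-Item $src $dst }).TotalSeconds
        '{0:N0} MB/s' -f ($size / 1MB / $elapsed)
        Remove-Item $dst                # clean up the test copy afterwards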

    With NVMe storage, you'll see basically the same: the write speeds will be roughly half (maybe just a tad more, a few percentage points more) of the drive's potential speeds, and since there are so many different NVMe drives out there the speeds could be a few hundred megabytes per second to a gigabyte or two per second.

    During a copy operation of one file to another file on the same storage device, the overall potential transfer rates are cut in half because data is read, data is written, data is read, data is written.

    This isn't rocket science, and it's not that tough to comprehend: modern storage controllers, even the best possible ones for NVMe storage, and even those that will support the recently announced PCI-Express 4.0 - including some of the newly announced PCI-E 4.0 NVMe drives claiming speeds of about 5GB/s max - will still only be able to read or write a piece of data at any given moment in time.

    It's not possible to read and write simultaneously with any current form of storage technology available today; not even PCI-Express 4.0 supports this. This means read and write operations on the same physical drive happen in a read-write-read-write pattern over and over; it doesn't happen at the same time, which is why you'll see the speeds showing much lower rates than one thinks they should be.

    Feel free to test this whenever you want, but I already know the results: since storage controllers can't do reads and writes at the same exact time on the same exact device, it reads a chunk of data then writes it, then reads another chunk of data and writes it, over and over again, until the copy (or move) routine is complete. Because of this singular operation methodology, the write speed is cut roughly in half.

    Also, just for the record, it's page file, not pagefile. The filename found on the actual media is pagefile.sys, yes, because it's using the old 8.3 format which means no spaces in the filename for ease of use, but the actual correct term is page file, two words.

    Anyway, my advice stands. It is based on decades of tuning Windows since before v1.0 was ever even public knowledge, and if it's followed it'll boost performance on any given machine, sometimes dramatically (on older machines), sometimes less prominently (on more modern hardware with SSD/NVMe-based storage), but as I said before:

    Every little bit helps...
     
  7. Micro

    Micro MDL Member

    Apr 26, 2009
    136
    51
    10
    Without trying to "stir the pot", how does (or doesn't) the swap file (swapfile.sys) fit into all this?
     
  8. JeanYuhs

    JeanYuhs MDL Member

    Feb 16, 2010
    196
    127
    10
    The swapfile.sys file is a very old holdout/leftover from Windows 3.x days. I'm not entirely certain why Windows still has it laying around, but I do know this: even if you set your system to not use a page file at all - set it to off/none/0 min 0 max/etc - that swapfile.sys file will still get a little use. It's not very large (it usually defaults to 16MB, which is negligible in today's multi-gigabyte RAM/storage situations), but I suppose Microsoft feels it necessary to keep it around just in case, who really knows. :p
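
    If you want to see these files and their sizes on your own system, a quick sketch (they are hidden system files in the root of the system drive):

        # List the paging/hibernation files in the root of C: with their sizes in MB
        Get-ChildItem C:\ -Force -File |
            Where-Object Name -in 'pagefile.sys','swapfile.sys','hiberfil.sys' |
            Select-Object Name, @{ n = 'SizeMB'; e = { [math]::Round($_.Length / 1MB) } }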

    In the early days of the i386 (the first to support the virtual memory subsystem extensions) what we now call the page file was originally called the swap file because it was literally "swapping" data to and from the storage when RAM usage passed a certain point beyond what the physical chip RAM could provide.
     
  9. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    Interesting.
    First of all, I am interested in learning. But I still have doubts. :p
    (I wrote pagefile since everyone knows what I mean by that).

    A few thoughts.

    - When you copy a 6GB file on the same drive, the very same controller has to read 6GB and has to write 6GB; it's a question of bandwidth limitation.
    As you have written, you only get info about the write speed (half of the job to do).
    I guess your example is not suitable to prove that there is no reading and writing at the very same time.

    Hypothesis: The speed is nearly halved because the entire r/w operation involves double the amount of data per controller. When copying from drive A to drive B, two controllers share the job. Each one handles 6GB of data.

    - If you are right, then you would disqualify your own solution of a separate page file on each drive. It would not help at all in your given copy example.

    It depends on where you look at it. A single SSD is an array of units, like a RAID. There can be cells being written and cells being read at the very same time. That's what I have posted.
     
  10. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    I think "defrag" in Windows 10 actually does TRIM instead when SSD is sensed.
    Windows 7 is less aware of SSDs and is doing full defrag like on HDD when instructed.
    Thanks for following up and I believe that the recommendation to leave the defrag task enabled on Windows 10 is correct.
    And you are like right in saying that the SSD controllers also do most of the work and it is not clear what is left for Windows to do.
    Perhaps only the TRIM actions scheduled or on demand.
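
    A hedged sketch of how to trigger or inspect that retrim yourself (elevated prompt):

        # Ask the optimizer to do only a retrim pass on an SSD volume
        Optimize-Volume -DriveLetter C -ReTrim -Verbose

        # The classic command-line equivalent
        defrag C: /L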
     
  11. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    How would you feel if I replied to you "you are wrong" in the same way you replied to an otherwise very good post above? :)
    Anyway, thanks for the info about controllers, very useful stuff.

    The swapfile.sys file is a page file used exclusively by the new-style Windows Apps, so it is another form of paging to disk.
    It cannot be easily moved, except by creating a symbolic link at the default location pointing elsewhere, but it can be removed if the main page file is set to 0 on all disks - not recommended.
     
  12. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    Well, this depends on the configuration, and what you describe is not typical in most cases. But it is definitely the best option if multiple controllers are available.
     
  13. MonarchX

    MonarchX MDL Expert

    May 5, 2007
    1,732
    313
    60
    Maybe; I have not tested how much of a page file is required. As I stated earlier, I think its mere existence is what some games or applications verify and require to function 100% correctly. I simply enabled 3072MB (and now use 4096MB) for both minimum and maximum, and that stopped Deus Ex: Mankind Divided and Far Cry 5 / New Dawn from crashing entirely.

    I hope Windows 10 doesn't use that page file for anything unless RAM usage exceeds the physical amount present. I set the same value for minimum and maximum to prevent fluctuations, just like I prevent fluctuations in CPU core clocks via BIOS / UEFI and Windows OS tweaks.
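
    For completeness, a sketch of pinning the page file size from PowerShell instead of the System Properties dialog (this uses the standard Win32_PageFileSetting WMI class, assumes a C:\pagefile.sys entry already exists, sizes are in MB, and a reboot is needed afterwards):

        # Switch off automatic management, then pin the page file to 4096 MB min and max
        $cs = Get-CimInstance Win32_ComputerSystem
        Set-CimInstance -InputObject $cs -Property @{ AutomaticManagedPagefile = $false }

        $pf = Get-CimInstance Win32_PageFileSetting | Where-Object Name -eq 'C:\pagefile.sys'
        Set-CimInstance -InputObject $pf -Property @{ InitialSize = 4096; MaximumSize = 4096 }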
     
  14. JeanYuhs

    JeanYuhs MDL Member

    Feb 16, 2010
    196
    127
    10
    You're almost there, but I was referring to the storage controller on the devices themselves, not the SATA or NVMe controllers on the motherboard. The motherboard controllers can handle the rated bandwidth per channel meaning per device connected to them, but the device controllers can't - those are limited to the rated bandwidth per a read or a write operation, but it can't do both at the same moment in time. It reads, it writes, or it's idle, there are no other states, there's no such thing as a read+write state with those device controllers.

    I just tested this because I already knew what the results would be. I have a SATA III SSD as my primary drive in my ThinkPad W540 and a second SATA III SSD in the optical bay (using a drive mount and not an optical drive). I copied a 9GB MKV file from my primary SSD (an Intel 5450s enterprise class SSD) to the secondary SSD (a Samsung 860 EVO), and the transfer rate during the process held steady at about 540MB/s.

    That's reading from the Intel source SSD at roughly 550MB/s and writing to the Samsung destination SSD at roughly 540MB/s, sustained, from start to finish.

    The Intel SATA III controller on my motherboard easily handles the bandwidth of the SATA III devices because, even though both were connected to that same Intel controller, that controller is designed to provide the max bandwidth per channel, meaning per device connected.

    If I were to copy the file from the Intel SSD to the Intel SSD, the SATA controller on the motherboard would be happy to do it at max speed but the controller on the SSD itself cannot do reads and writes at the same moment hence the transfer rate of the copy process is cut roughly in half during the process.

    Physical drive to physical drive on the same chipset controller = max bandwidth of the drives themselves, because each drive controller is capable of it too.

    Copy to and from the same physical drive on the same chipset controller = bandwidth is cut roughly in half, because the controller on the drive itself can't do reads and writes in the same operation; it reads into the buffer, writes the buffer, reads, writes, over and over again until the process is complete.

    Chipset controller != drive controller, the chipset controller is basically working as a hub and can and does provide max bandwidth anytime any device requires it. The limitation exists on the devices themselves and is why the bandwidth is cut roughly in half since it can't read and write at the same instant. In a drive to drive copy, one basically goes into the read state and the other goes into the write state and because they're on discrete channels with the motherboard's chipset controller they can operate that way sustained at their max rated speeds.
     
  15. MonarchX

    MonarchX MDL Expert

    May 5, 2007
    1,732
    313
    60
    Yeap, Windows 10 SSD defrag = TRIM. I don't know about the write-caching setting, though. I read that some SSDs benefit from it being enabled while others do not. HDD performance definitely benefits from it, just like it does from setting the HDD turn-off time in the power settings to 0 minutes (which disables them from turning off entirely).
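
    A quick way to confirm TRIM is active at the OS level (a hedged aside; 0 means delete notifications, i.e. TRIM, are enabled):

        # 0 = TRIM (delete notify) enabled, 1 = disabled
        fsutil behavior query DisableDeleteNotify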
     
  16. ch100

    ch100 MDL Addicted

    Sep 11, 2016
    829
    694
    30
    The issue with write-caching is that it compensates for the slow random writes of SSDs, as it does for HDDs. The random write speed for small files on an SSD is not much better than the same on an HDD.
    Where SSDs excel is read speed, many times better, and this can be noticed by anyone when doing a cold boot.
    For large files like those proposed earlier, the write cache is likely to slightly slow down the transfer.
     
  17. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    13,081
    13,979
    340
    #59 Yen, May 31, 2019
    Last edited: May 31, 2019
    SATA is half duplex. I just learned that. I actually thought it was full duplex. So I could agree because of that. (Because of the bus, not the SSD itself.)

    As mentioned already, copying one file on the same SSD always means the same controller has to read and to write. That means sharing bandwidth, and the effect is that you get only half the write speed. This is a causality, though, and does not prove that there is only reading or only writing at any one moment. There could be reading and writing at the same time, but both at half speed due to the bandwidth limitation.

    An alternating read-write pattern (because half duplex = either/or) can explain half speeds as well, though.

    SAS and NVMe are full duplex. I wonder why... if the SSD had no ability to be in a read and a write state at the same time?!?

    If you repeated your copy example on a full duplex bus you would also get half speeds, because it's again one and the same controller with a fixed bandwidth shared between two processes (read and write).

    Finally, a completely different approach...
    Any infrastructure such as buses and interfaces is clocked. So are the CPUs and any other processor.
    Hertz is 1/s. Strictly speaking, this means there can never be two processes at the very same instant, since communications have to be synchronized :D
     
  18. Carlos Detweiller

    Carlos Detweiller Emperor of Ice-Cream

    Dec 21, 2012
    6,331
    7,048
    210
    From what I know, swapfile.sys is used by the "modern" Store apps on Windows 10.

    It is not related to the old Windows 3 filenames, which were actually different: Windows 3 had two possible types of swap file, either temporary or permanent. The temporary file was named WIN386.SWP, and the permanent one 386SPART.PAR. I never encountered a swapfile.sys in those old OSes (except, maybe, OS/2 used it?).