I took the time to get more info. In the end I don't even agree on the half-duplex point (SATA). Reason: there are two common ways to realize half-duplex communication, TDD (time division duplex, which would make me agree) and FDD (frequency division duplex)! This is btw an exciting topic; the details would take us even further off the thread's topic. Suffice it to say that FDD makes it possible for the bus to communicate in both directions at the same time. I tried to look into the matter to figure out what could justify the hypothesis that SSDs can only read or only write at any given moment. The SATA bus imposes no such limit, and neither does the SSD (flash array) itself. The question of why it should still remains.
I used Wintoolkit for rebasing; the free version of NTLite is buggy and always says "Dism failed, skipping step" even when I didn't integrate NetFx 3.5. I do updates, WTK add-ons and silent installers in Wintoolkit, use NTLite for Explorer and other tweaks, and reduce the WIM size slightly with wimlib, which is integrated into NTLite. I rebase using Wintoolkit: you can rebase boot.wim files by using the component remover, closing the window and choosing "Keep image mounted", then opening DISM with the /Image command line pointing at the Wintoolkit_xxx path and running /ResetBase to trim the image from 2.1 GB to a mere 800 MB. You can apply the same thing to trim winre.wim in the System32\Recovery folder too. As for killing reserved storage completely: use the regtweak to disable Storage Reserves (Chef Koch's tweaks repo), and change the dynamically sized page file to a fixed size, otherwise reserved storage will keep actively eating/reserving space for the dynamic PF. Delete all restore points and disable System Restore. I don't think it affects data at all, since NTFS is a journaled filesystem, so a log is written for every operation.
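For reference, the DISM step boils down to something like this (the mount path below is a placeholder for wherever Wintoolkit kept the image mounted, and the reg command is my reading of what the tweak in Chef Koch's repo does; it takes effect after a reboot):

rem Trim the mounted image, then commit and unmount.
Dism /Image:C:\Wintoolkit_Mount /Cleanup-Image /StartComponentCleanup /ResetBase
Dism /Unmount-Image /MountDir:C:\Wintoolkit_Mount /Commit

rem Tell ReserveManager the machine shipped without Storage Reserves.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ReserveManager" /v ShippedWithReserves /t REG_DWORD /d 0 /f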
RAM has always been lost when powered off, with any file system. One exception is NVRAM (used as RAM), which is not common. Edit: I'll try to explain a little better. If you yank the power cord, the contents of RAM don't get written to disk, which can corrupt the file system. A journaled file system can handle that better, but it can still get corrupted. When you do a normal shutdown that doesn't happen, because the contents of RAM are safely written to the drive, unless you have other hardware issues (too high an overclock, a dying hard drive, bad RAM, etc.). Losing power to RAM without a clean shutdown is roughly the same thing as pulling out a flash drive, with or without write caching, while writing to it. You may or may not corrupt the file system or lose files.
How?? Even the lowest-tier 120 GB DRAM-less SSD has ~54 MB/s 4KiB random write speed, compared to ~1 MB/s for a typical HDD (results from CrystalDiskMark). Or did you mean something else?
I don't understand your numbers, but it is my opinion that SSDs are not so special for 4K random write performance, which is the main write pattern in Windows and likely other operating systems. There are various acceleration techniques, but the one I think is most effective is to have around 30% over-provisioning, which makes the solution relatively expensive. On balance, however, it is obviously a lot better to use an SSD than an HDD, except maybe for long-term backup, where data retention over a longer period is what matters.
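For anyone wanting to try that: over-provisioning in practice usually just means leaving part of the SSD unallocated so the controller has extra spare area. A diskpart sketch, where the volume number and size are placeholders for your own setup:

rem Save as op.txt and run with: diskpart /s op.txt
rem Shrinking the last volume by 150000 MB leaves roughly 30% of a
rem 500 GB drive unallocated as spare area.
select volume 2
shrink desired=150000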
Here are CrystalDiskMark benchmark 4KiB random write speeds, all default settings, that I ran on my SATA III 6Gb/s laptop recently:

* Crucial MX500 SATA III 6Gb/s 500GB SSD with "Momentum Cache" (writes cached in RAM), 20% over-provisioning:
Random Write 4KiB (Q=1, T=1): 271.881 MB/s [66377.2 IOPS]

* Crucial MX500 SATA III 6Gb/s 500GB SSD without "Momentum Cache" write caching, 20% over-provisioning:
Random Write 4KiB (Q=1, T=1): 54.490 MB/s [13303.2 IOPS]

* Crappy 5400 rpm SATA II 3Gb/s laptop hard drive:
Random Write 4KiB (Q=1, T=1): 0.842 MB/s [205.6 IOPS]

* Unknown 128 GB USB 3 flash drive:
Random Write 4KiB (Q=1, T=1): 2.049 MB/s [500.2 IOPS]

* Horribly slow cheap SD card:
Random Write 4KiB (Q=1, T=1): 0.018 MB/s [4.4 IOPS]
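In case the MB/s and IOPS columns look unrelated: they describe the same 4 KiB transfers, so IOPS x 4096 bytes reproduces the MB/s figure (decimal megabytes). Checking the first result from a cmd prompt:

rem 66377 IOPS x 4096 bytes = 271880192 B/s, i.e. ~271.88 MB/s
set /a bytes_per_sec=66377*4096
echo %bytes_per_sec%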
Yes yes yes, but as we all know, benchmarks != real-world performance; they are numbers derived under ideal conditions. You can see what @ch100 is referring to very simply by moving a directory containing thousands of files of mixed sizes, many of them mere kilobytes alongside some larger files. Try your Steam directory, for example. Transfer speed is always going to be faster on large contiguous files, and will tank and chug along on many small files of mixed sizes.
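If you want an actual number out of that kind of real-world test instead of eyeballing Explorer, robocopy prints a Speed line in its summary. The paths here are just placeholders:

rem /E copies subfolders; /NFL /NDL /NP suppress per-file output so the
rem console itself isn't the bottleneck. Read the Speed line at the end.
robocopy "C:\Program Files (x86)\Steam\steamapps" "D:\steamapps_test" /E /NFL /NDL /NP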
True, but they were discussing CrystalDiskMark numbers and I already had those benchmark numbers saved.
I didn't initially bring CrystalDiskMark into it; I use it occasionally, but I generally prefer the ATTO benchmark. It was a general statement from me when @whitestar_999 brought this into the discussion. I think there was a typo in that post (B/s where MB/s was meant), and that is what I said I don't understand, not the tool itself. Otherwise, this post from @bfoos https://forums.mydigitallife.net/th...-way-too-much-space.79675/page-4#post-1528356 clarifies what my intention was when I posted.
Well, the benchmark numbers may not match real-world performance, but the differential between SSD and HDD remains the same. The benchmark says write speed is many times faster than an HDD even for lots of small files, and that's correct. On my laptop, a download folder with thousands of files (ranging from a few hundred KB to dozens of MB) becomes ready on the SSD (the green progress bar in the Explorer address bar) in much less time than the same folder does on the HDD.
Well, the actual difference between 'real' values and a benchmark is that a benchmark creates artificial data with predetermined physical attributes and reads/writes it to disk in a predetermined mode (random or sequential), whereas under 'real' conditions you are using data as it is... real files with content. On some controllers it even depends on whether the data is compressible...
Yeah, I found that slipstreaming Windows using NTLite, WTK, W8/10UI etc., where ISO files and updates are extracted and applied, gave the maximum real-world R/W speeds of any media except USB flash drives.
Well, that does make sense, although wouldn't the file be larger, since my machine is maxed out at 64 GB of RAM? Also, that file was there long before the first time I hibernated the computer (specifically, it was already there at the same size when I first installed Windows). I assume I can change the number to anything I want? Running the command would actually increase the size of the hibernation file, since I have 64 GB of RAM installed.
Hiberfil.sys is also used for the "Fast Startup" feature, which is why the file is still created for Fast Startup even though traditional hibernation is disabled by default in Windows 10.
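If the goal is just to resize the file without giving up Fast Startup, powercfg exposes that. These are standard Windows 10 switches; 60 is only an example value:

rem Keep only the Fast Startup portion (a reduced hiberfile, roughly
rem half the size of a full one):
powercfg /h /type reduced
rem Or set an explicit size as a percentage of installed RAM (IIRC this
rem also switches the type back to full):
powercfg /h /size 60
rem Or remove hiberfil.sys entirely -- this disables Fast Startup too:
powercfg /h off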