RAID or JBOD, drive layout, NTFS or ReFS w/ LSI SAS 9211-8i card?

Discussion in 'PC Hardware' started by scanman1, Aug 9, 2016.

  1. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    #1 scanman1, Aug 9, 2016
    Last edited: Aug 17, 2016
    I have been reading and getting very conflicting information on the best way to set up my new storage.

    I am willing to switch to an Enterprise or Education SKU if needed, yet I currently use MCE with an HDHomeRun dual tuner and triple-boot OSes.

    I have sitting on the table new in the box:
    (1) LSI SAS 9211-8i PCIe x8 SATA 6Gb/s / SAS HBA, flashed with IT-mode (HBA) firmware rev. P19.
    (2) Amphenol Mini-SAS SFF-8087 to 4x SATA fanout cables with 8-wire SGPIO header.
    (3) brand new 3 TB Seagate Constellation ES.3 drives.
    (2) Intel SC5400 AXX4DRV3G SAS/SATA hard drive cages with 4 drive bays and backplane board.

    I plan to rivet (custom retrofit) the hot-swap drive cages into my tower case if they can be used, and mount fans to pull the required cooling air through them.

    The drive cages' backplane boards may need to be removed, as they are not officially rated for SATA 6Gb/s. Each drive cage came with cables and connectors for a 4-pin SES header and a 4-pin SGPIO header, while the SGPIO header on the SATA end of the Amphenol 8087 fanout cable is an 8-pin connector. Can this be retrofitted to the 4-pin connector on the backplane to make the drive cage LEDs work? If the backplane is not workable, I could just use the fanout cable, as I have room in the tower to mount (12) 3.5-inch disks and plenty of power with a 1000 W Corsair RM1000 power supply.


    I currently have an X58 chipset board with an Intel ICH10 6-port SATA II soft-RAID controller, running a custom BIOS with an added option ROM for SSD TRIM support. The system has a Xeon X5680 overclocked to 4.5 GHz with SpeedStep enabled, and 48 GB of Micron Ballistix 8-8-8-24 (1600) RAM (6x8 GB). The motherboard also has a 2-port SATA II JMicron JMB362 host controller and a JMicron JMB363 SATA II / PATA controller, both hanging off a single PCI Express x1 lane and therefore very speed limited, which is the primary reason I bought the new controller.

    I have two PCIe x16 slots, with an EVGA Nvidia GeForce GTX 980 in the first slot, and I plan to place the 9211 card in the second x16 slot to avoid any conflicts with the various onboard devices.

    I am unsure whether I should use soft RAID to reduce the number of drive letters, and I have even been told I should run the controller under a VM with the (6) 3 TB drives under the ZFS file system on OpenNAS, yet that sounds risky and could be troublesome with my rebooting into different OSes. I have read that Windows native software RAID is very slow.

    I'm unsure about Windows Storage Spaces, yet I would like to have direct read access to all data if I set up another Windows 7 gaming partition.

    What is the best way to set up and format the three new drives for performance without the large data-loss potential of RAID 0?

    Most of the data is video, as this is a media PC and 24/7 torrent server that is rarely used for gaming; at most it may need a data rate that supports two 4K video feeds and one 1080p feed at the same time.

    I plan to migrate the other (3) 3 TB drives as well as the SSD to the 9211 controller and remove the older disks for offline backup storage.

    All my current hard drives are NTFS with no RAID, are filled with data, and are spread across all three motherboard controllers. I triple boot Windows 8.1 x64, Windows 10 (10586), and Windows 10 alpha builds from two of the hard drives.

    I SUBST the user and data directories on all three OSes to the same local data directory on the SSD, so I am always using the same data set regardless of which OS I am running and only need to make an image backup of my SSD to cover all program data. I currently have to be careful that every drive letter except C: is identical in each booted OS, because Deluge is set up to seed from each data drive, and this is becoming a real mess with this many drive letters spread across all three controllers (a rough sketch of the SUBST setup follows the port list below):

    ICH10 Port 0: OCZ-VERTEX3 (SSD-Primary boot drive-Windows 8.1)
    ICH10 Port 1: Western Digital VelociRaptor WDC WD3000GLFS-01F8U0 (Win 10 Release and Alpha on second partition)
    ICH10 Port 2: Hitachi Deskstar 7K2000 Hitachi HDS722020ALA330 [2.00 TB] (Data Drive-Full)
    ICH10 Port 3: Seagate Barracuda 7200.14 ST3000DM001-1CH166 [3.00 TB] (Data Drive-Full)
    ICH10 Port 4: Seagate Barracuda 7200.11 ST31500341AS [1.50 TB] (Data Drive-Full)
    ICH10 Port 5: (External Sata)-backup port
    JMB362 Port 0: Western Digital Green WDC WD30EZRX-00SPEB0 [3.00 TB] (Critical Data Drive- 1/2 full)
    JMB362 Port 1: Western Digital Green WDC WD30EZRX-00SPEB0 [3.00 TB] (Data Drive-Full)
    JMB363 Port 0: Western Digital Caviar WDC WD20EARS-00MVWB0 [2.00 TB] (Data Drive-Full)
    JMB363 Port 1: Optical BR Burner
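
    As mentioned above, here is a rough sketch of the SUBST mapping idea, run at logon in each OS (PowerShell syntax; the drive letters and paths below are placeholders, not my actual layout):

        # map stable virtual drive letters to the shared data folders on the SSD,
        # so every booted OS sees the same paths regardless of its own letters
        subst U: C:\Shared\Users      # user directories
        subst T: C:\Shared\Torrents   # Deluge seed data

        # remove a mapping again, or list all current mappings
        subst U: /D
        subst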
     
  2. atgpud2003

    atgpud2003 MDL Addicted

    Apr 30, 2015
    532
    86
    30
    Ahh, I have a FreeNAS box running 10 hard disk drives, with FreeNAS itself on a 32 GB CF card on a SATA adapter, a Gigabyte motherboard with 8 GB RAM (enough to run it), and 3 NICs. My 1st server runs 5 hard drives, including a 64 GB SSD just for the OS (Windows Server 2008 R2 Enterprise, 8 GB RAM), and connects over iSCSI to the FreeNAS box to chain up more storage. It runs 24/7!! And the 3rd server is my hosting box.

    ATGPUD2003
     
  3. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    Is your FreeNAS a dedicated install of only FreeNAS, or are you running it in a VM on a box that is used as a Windows desktop at the same time?

    Is there a better sub-forum where I could post this question? I figured the server section would get the most eyeballs familiar with server NAS card questions.

    I have done a lot of research, and I am not one to ask questions if the answer is findable in the first three pages of search results.
     
  4. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    Should I cross post this question in another area?

    Perhaps another place?

    Suggestions?

    I am really stumped with the SGPIO cable issue. :confused:
     
  5. sebus

    sebus MDL Guru

    Jul 23, 2008
    6,356
    2,026
    210
  6. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    The SAS cable is this one:

    www.ebay.com/itm/121460778754


    It is usually running 24/7 yet I will boot it into another OS for an hour or so to test a new OS or game. It usually runs for weeks between boots except for critical updates that require a reboot.
     
  7. atgpud2003

    atgpud2003 MDL Addicted

    Apr 30, 2015
    532
    86
    30
    #7 atgpud2003, Aug 15, 2016
    Last edited: Aug 15, 2016
    @scanman1, no, I don't use a VM; it is a dedicated FreeNAS server box with 10 hard drives and the FreeNAS software on a CF card. It has 3 NICs, and I use the iSCSI configuration in FreeNAS to let my Windows Server connect to the FreeNAS box over iSCSI.
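
    For anyone wanting to set up the same thing, a minimal PowerShell sketch of attaching a Windows box to a FreeNAS iSCSI target with the built-in initiator (the portal address is a placeholder, and the target must already be configured on the FreeNAS side):

        # start the Microsoft iSCSI initiator service and keep it running at boot
        Start-Service MSiSCSI
        Set-Service MSiSCSI -StartupType Automatic

        # register the FreeNAS box as a target portal, then connect to its target
        New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
        Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

        # the exported LUN now shows up as a local (RAW) disk to initialize and format
        Get-Disk | Where-Object PartitionStyle -eq 'RAW'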

    ATGPUD2003
     

    Attached Files:

  8. Flipp3r

    Flipp3r MDL Expert

    Feb 11, 2009
    1,965
    908
    60
    JBOD the lot, as you're not using enterprise-class drives. Desktop drives will randomly trip the RAID as broken.
    Ignore the "SGPIO" header, as it's chassis-specific and not required.
     
  9. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    #9 scanman1, Aug 16, 2016
    Last edited: Aug 17, 2016
    (OP)
    I realise the SGPIO header is optional, but I would like to use the drive activity, locate, and failure LEDs via the SGPIO communication channel to the backplanes. It seems there is no physical standard for SGPIO connectors between vendors, so I will need to modify the cable if I can find out which pins carry which signal lines.

    Perhaps my main question was not clear enough. I am interested in setting up the new drives with Storage Spaces (a parity space) and possibly using the ReFS file system. I am concerned that there seem to be two flavors, and the Windows 8.1 version is not compatible with the newer Windows 10 version. Is it also correct that ReFS and Storage Spaces cannot be added to Windows 7?
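
    For reference, this is roughly what I have in mind, using the built-in Storage Spaces PowerShell cmdlets (pool, space, and label names are placeholders, and creating ReFS volumes may not be exposed on every client SKU):

        # pool the new drives that are eligible for pooling
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "MediaPool" `
            -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
            -PhysicalDisks $disks

        # carve a single-parity space out of the pool
        New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "ParitySpace" `
            -ResiliencySettingName Parity -UseMaximumSize

        # initialize, partition, and format it (NTFS would work the same way)
        Get-VirtualDisk -FriendlyName "ParitySpace" | Get-Disk |
            Initialize-Disk -PartitionStyle GPT -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem ReFS -NewFileSystemLabel "Media"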

    There is almost no talk of this here and the last article I found on it was from 2015.


    The three new 3 TB drives I just bought are enterprise-class drives:
    3 TB Seagate Constellation ES.3 (three of them)
     
  10. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
  11. Flipp3r

    Flipp3r MDL Expert

    Feb 11, 2009
    1,965
    908
    60
  12. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
  13. Flipp3r

    Flipp3r MDL Expert

    Feb 11, 2009
    1,965
    908
    60
    I think you'll also need to mount a fan behind it to get some airflow through the drives. The Intel server chassis would have had fans as part of the chassis or a kit that mounted to the backplane.

    I have an old 6-bay cage at work, but it only has 2 SAS headers on it. I did at one stage think I'd build a PC with some hot-swap cages for media storage, but ended up getting a Synology DS2415+ NAS...
    Good luck with your project m8ty!
     
  14. scanman1

    scanman1 MDL Novice

    Aug 10, 2010
    18
    10
    0
    #14 scanman1, Aug 17, 2016
    Last edited: Aug 17, 2016
    (OP)
    I was going to side-mount a 120 mm low-RPM fan to force air in from the case side and vent it into the drive cage, letting the higher-volume rear and top exhaust fans move the air out of the case. I run Almico's SpeedFan and optimize the fan speeds as needed to maintain temperatures via motherboard control.

    There is almost no possibility of airflow from the rear of the drive cage. If you look at the photo, you will see that Intel really made it nearly impossible to pull air through the backplane, with the plastic drive caddies covering each small hole!

    The original chassis this cage is designed to be installed in has a vacuum-cleaner-like double layer of noisy, very high-RPM fans to overcome this, which is why the airflow path is so restricted. I don't see any airflow making it through that way in my case mod.
     
  15. T-S

    T-S MDL Guru

    Dec 14, 2012
    3,984
    1,331
    120

    SS (Storage Spaces) is just scary.


    Keep things simple. And nothing is better than DrivePool for keeping things simple and reliable.

    It is independent of the Windows version and of obscure filesystems: it works on top of plain NTFS disks (even deduplicated ones), and the configuration lives in the filesystem itself, so if the OS crashes you just reinstall Windows, reinstall DrivePool, and you're done; no reconfiguration is needed at all.

    The disks are readable individually by any NTFS-capable OS, and the mirroring function can be set for whole disks or per folder/file.
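
    For example (assuming DrivePool's usual layout, where each member disk keeps its share of the pooled files in a hidden PoolPart folder), you can inspect your files directly from any plain Windows install; the drive letter below is a placeholder:

        # list the hidden pool folder on one member NTFS disk
        Get-ChildItem -Force D:\ | Where-Object Name -like 'PoolPart.*'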

    It has a standalone GUI with remoting capabilities, and there is also a Dashboard plugin if you install it on WHS 2011, SBS 2011, or any Server Essentials edition.

    In short, it is one of the most brilliant pieces of software available for Windows.

    It is not free, but it is cheap ($25 or so), and personally I have tortured it in every way without losing a single file.