So what should I do with these old servers?

1. HP ProLiant DL380 G4: 2x Intel Xeon 3.6 GHz, 6 GB RAM, 6x 72.8 GB SCSI, 2x redundant power supplies.
2. HP ProLiant DL380 G4: 2x Intel Xeon 2.4 GHz, 6 GB RAM, 6x 148.8 GB SCSI, 2x redundant power supplies.
I'm actually interested in this topic, as I have about six old servers at the moment and can't think of a function for them. I'd set up some sort of storage or Hyper-V server, but some have SCSI drives that aren't very large, and none of them have enough cores for a hypervisor. I haven't thought of a use that justifies the massive wattage they draw, but I'm open to ideas.
Those kinds of servers are good for business use, but not for home use! As data servers they'll do an excellent job, even the older machines, though they're not really reliable as application servers. Just running them costs a lot because of the huge power consumption. Even as storage servers they're limited by the low-capacity SCSI HDDs, and if some of the HDDs need to be replaced, that's very expensive too, compared to SATA HDDs. As stated, using such servers in a business as a file server would be a good fit; HP/Compaq used high-quality hardware parts to build them, it's just that those parts are very expensive. Selling on eBay would be the best option. If you set one up first with an OS (Linux is preferred for this), configured as a file server with RAID 1, it would be quite easy to sell to business newcomers. That's the way I would go!
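For reference, the suggested Linux file server with RAID 1 takes only a few commands. This is a hedged sketch for a Debian/Ubuntu-style system, assuming two equal-size drives that appear as /dev/sdb and /dev/sdc; the device names, mount point, and share name `fileshare` are placeholders to adapt:

```shell
# Build a two-disk RAID 1 mirror (destructive - wipes both drives!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it
mkfs.ext4 /dev/md0
mkdir -p /srv/fileshare
mount /dev/md0 /srv/fileshare

# Persist the array definition so it reassembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Export the volume to Windows clients via Samba
apt-get install -y samba
cat >> /etc/samba/smb.conf <<'EOF'
[fileshare]
   path = /srv/fileshare
   read only = no
EOF
systemctl restart smbd
```

With the mirror in place, one failed drive can be swapped and re-added with `mdadm --manage /dev/md0 --add` while the share stays online.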
I agree with Pisthai; eBay that stuff. There are many small businesses that would love to make use of second-hand servers, mom-and-pop type places especially, I would think. Not to mention the droves of home users who could make great use of these.
Some businesses scrap servers by default when the 3-year warranty expires, not for technical reasons. So they end up at second-hand computer stores together with older desktop PCs. Indeed, these servers can sometimes be had for next to nothing. But there is a catch: if you plan to run them 24/365, watch your electricity bill, and the first time you hear all the fans inside spin up (even after the OS has taken control of them) will give you that "oh-s**t" experience.

There is one strategy that works, but it requires some planning. I sometimes buy second-hand workstations, often equipped with a small amount of RAM and only one of two Xeon sockets populated. Then I proceed to the shelf where the servers are stacked in piles and get one cheap. At home with my bargains, I relocate the server's Xeon over to the workstation motherboard, along with all the server RAM. The heat-sink issue must be resolved, since many servers rely on passive cooling fed by an array of fans behind the 19" front panel, but for e.g. LGA 1366 there are plenty of solutions available. As I said, it requires a bit of planning, since servers more often focus on threads/cores than on actual clock speed, and retrofitting a workstation with new Xeons or special ECC RAM can be very expensive.
16 GB ECC. The customer uses it for FLAC storage (all classical, believe it or not). Her old machine only had 12 TB of storage and was 99% full, so it was time for an upgrade. I delivered it myself and found out she's still using a 10/100 switch.
That's not surprising in the least (the 10/100 switch part, that is), as most residential connections aren't fast enough to bog down, let alone tax, a gigabit switch; even today, over two-thirds of residential broadband still isn't (in North America, that is). In my own home, I have the only wired gigabit connection in the house; there are, in fact, only two other wired connections, period. One is bottlenecked at the destination end (the 10/100 port on the TiVo Premiere in my bedroom), and the connection to the port in the library is miswired (no, I didn't wire it), so the desktop PC in that room uses a wireless connection instead. If I can get the proper dongle for the TiVo, I'll replace that substandard wired connection with wireless-N (the Premiere *itself* does not support wired gigabit; it's a hardware limitation). Even now that I've replaced the early DOCSIS 3 cable modem with a more modern one (ARRIS WBM-760A -> ARRIS SB-6183), I get a throughput of 10 megabytes per second at most (from larger-piped providers such as Steam, universities with big backbones, Akamai, etc.), and it's overkill. It still doesn't come close to saturating the existing router's gigabit switch, even while driving two tablets and two phones wirelessly (with one or both tablets streaming at least 720p via FOX NOW). And the router is a standard low-end "prosumer" model: a Netgear WNDR3700v4 running current DD-WRT firmware.
Geez, how big does a non-commercial Hyper-V server have to be? Hyper-V doesn't require all that much horsepower in and of itself; it simply leverages more modern features on Intel and AMD processors, in particular one: Second Level Address Translation (SLAT, which Intel implements as Extended Page Tables, or EPT). Even that is only a hard requirement on desktop (not server) Windows 8 and later, or on Server 2016. Server 2008 R2, Server 2012, and Server 2012 R2 can all leverage SLAT, but none of them need it; I ran the latter on a Q6600 (Kentsfield) *because* it didn't need EPT (which the Q6600 lacks), as opposed to Windows 10, which can't use Hyper-V without SLAT support in the CPU. In fact, if you have Xeon server CPUs that lack EPT, Server 2012 R2 or earlier is the safest choice for repurposing these oldest of Xeons as non-commercial servers. Another possible repurpose is throwing an LTS version of Ubuntu on one and using it as an Android build farm (for something like CyanogenMod); I'm using a spare HDD on my desktop for that same purpose (two personal projects intertwined, based on the same version of CyanogenMod, 12.1). Ubuntu makes sense because the OS cost is zero, so your only cost in setting up the build farm, other than hardware, is time. (I'm setting up a working model of the project in a Kubuntu VM on the Windows 10 side of the same desktop running Hyper-V, simply so I don't screw up the real project OS, which is identical to the VM side except that the real project runs bare-metal and thus has more system RAM and HDD space to play with. On either side, I'll sync and do the scutwork during the day, and do builds while I snooze; therefore, no lost sleep.)
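If you're not sure whether a given CPU has SLAT, you can check from any Linux live USB by looking at the CPU flags. A minimal sketch, assuming /proc/cpuinfo is available; Intel advertises the `ept` flag, while AMD's equivalent is `npt` (a.k.a. RVI):

```shell
# Check /proc/cpuinfo for SLAT support:
# Intel exposes the "ept" flag, AMD exposes "npt".
if grep -qw ept /proc/cpuinfo; then
    echo "Intel EPT found - Windows 10 / Server 2016 Hyper-V should work"
elif grep -qw npt /proc/cpuinfo; then
    echo "AMD NPT found - Windows 10 / Server 2016 Hyper-V should work"
else
    echo "No SLAT flag - stick with Server 2012 R2 or earlier for Hyper-V"
fi
```

On the Windows side, Sysinternals Coreinfo (`coreinfo -v`) reports the same virtualization features.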
I also won't be doing builds daily, or even nightly; at most, I'd do builds weekly. However, since I'll largely be working with existing trees, I'll sync daily, or at worst every other day (the latter means I'll sync VM-side one day, and real-side either that same day or the following day). Running Hyper-V in and of itself is not that core-intensive; a Celeron DC E3400 can do it, and that is, literally, a nerfed dual-core E8400 (Wolfdale). It's what the VMs on a Hyper-V (or Xen, or VMware) server are doing that gets core-intensive, not the hypervisor itself. (That applies to ALL the type 1 hypervisors.) If the ports were SATA, not SCSI, I'd use multiple WD EcoGreen drives in RAID, throw a 'buntu LTS on it, and use it as a CM build farm; since they are (as you said) SCSI, I'd take the two largest same-size SCSI drives and do the same thing (RAID 1 mirroring, of course, repeating for each pair of like-sized drives until I run out of either pairs or repurposable servers) and either eBay them or advertise them in the local paper. The local paper makes the most sense, as you can also sell extra services (such as basic support) if you choose.
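The sync-by-day, build-by-night routine described above is easy to automate with cron. A sketch only: it assumes a CyanogenMod 12.1 tree checked out under ~/android/cm, and the `repo sync` / `brunch` commands come from the standard CM build environment; the paths, schedule, and `<device>` codename are all placeholders to adjust:

```shell
# crontab entries (edit with: crontab -e)
# Sync the source tree every morning at 09:00
0 9 * * *  cd $HOME/android/cm && repo sync -j4 >> $HOME/sync.log 2>&1
# Kick off a weekly build, Saturdays at 01:00, so it runs while you snooze
0 1 * * 6  cd $HOME/android/cm && bash -c '. build/envsetup.sh && brunch <device>' >> $HOME/build.log 2>&1
```

Logging to files matters here, since a cron job has no terminal to show you why a sync or build failed.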