# Post Your Server!!!



## Jtvd78

Just post a picture of your server setup so everyone here can see your amazing servers. The more pictures the better!

The post format should be as follows:

Description / Usage (print server, backups, file server, etc.)

*OS*:
*Case*:
*CPU*:
*Motherboard*:
*Memory*:
*PSU*:
*OS HDD* (if you have one):
*Storage HDD(s)*:
*Server Manufacturer* (e.g. Dell, HP, you?):

PICS PICS PICS!!!!


----------



## zodiacdm

Just turned my old gaming PC into a web server... It's my second-gen computer, the only one besides my sig rig that I've listed stats for.

Have some old pics, but it wasn't a server back then, was it? 

I'll have to take a few more, but want to clean the area around it first...


----------



## Marma Duke

Ahh, get out of my head, I was going to make one of these!

I'll get some pics later, but I want to see ComGuards'


----------



## Crooksy

AMD Athlon X2 2.8GHz
2x 1GB DDR2 Kingston
mATX Asus motherboard
8TB of storage, mostly 1TB F3s
SATA card
500W generic power supply
Antec 300

Just a basic little file server. Sorry about the general specs; it's my brother's and I don't really know much more about it than that.


----------



## PinkPenguin

Does this include what we get to play with at work? Cause if so... moahh lol


----------



## ComGuards

Quote:

Originally Posted by *Marma Duke* 
Ahh get out of my head, I was going to make one of these!

I'll get some pics later, but I want to see ComGuards

Flatterer

----

I only have the original pics I took of my server, plus a single pic of the basic modifications I made.

----

Original front angle view, the day I unpacked it:

----

Case cover off:

----

DELL PERC6i RAID card:

----

Front bezel off, showing drive bays:

----

Front bezel off, showing the hot-swap HDDs:

8x 160GB WD RE3 HDDs, all hot-swap

----

Modified system, showing two (2) new HDDs installed in the 5.25" bays:

2x WD Black 1TB HDDs, single, connected to AHCI SATA.

----

Rear panel of the system:

Intel PRO/1000 PT dual-port NIC installed in a slot (2 ports)

Broadcom 57xx dual-port NIC x2 onboard (4 ports)

DELL iDRAC6 Enterprise remote management (1 port, towards the bottom)

1100W hot-swap redundant power supplies x2

----

BIOS showing CPU information:

2x quad-core Intel E5520 2.26GHz with Hyper-Threading enabled
16 logical processors

----

BIOS showing RAM information:

----

What do I use this for? VMware ESX vSphere Enterprise Plus. It runs everything and anything I need it to...

==================

The rest of the servers in my ghetto server "rack". Something like 30TB or so of storage space in that area...

Used for everything else.

================================

*Server-01*

*OS:* VMware vSphere Enterprise Plus
*Case:* PowerEdge T710 Stock
*CPU:* (2x) Intel XEON E5520 | 8-Cores / 16-Threads Effective / 2.26GHz
*Motherboard:* DELL
*Cooling:* T710 Stock
*Memory:* 24GB DDR3-1066-ECC
*PSU:* DELL 1100W Hot-Swap Redundant
*OS HDD:* Crucial 1GB USB Flash Drive
*HDD:* 8x160GB WD-RE3 (RAID-10) | 2x WD-Black 1TB | Multiple iSCSI DataStores
*Server maker:* DELL

*Purpose:* VMWare Host

================================

*Server-02*

*OS:* Windows Server 2003 R2 Enterprise Edition
*Case:* PowerEdge SC1420 Stock
*CPU:* (2x) Intel XEON 2.80GHz | 2-CPU / 4-Threads Effective / 2.80GHz
*Motherboard:* DELL
*Cooling:* PowerEdge SC1420 Stock
*Memory:* 8GB DDR2-400-ECC
*PSU:* DELL 400W Single
*OS HDD:* 4x320GB WD-RE (RAID-10) | Adaptec 2405
*HDD:* (2x) 2.5" 250GB SATA | 750GB Seagate 7200.11 eSATA
*Server maker:* DELL

*Purpose:* Newsgroup Download & Extract system

================================

*Server-03*

*OS:* Windows Server 2003 R2 Enterprise Edition
*Case:* Antec Sonata
*CPU:* Intel Pentium-D 805 | 2-Core / 2-Threads Effective / 2.66GHz
*Motherboard:* Asus P5P800-SE
*Cooling:* Intel Stock s775
*Memory:* 3GB DDR-400
*PSU:* Corsair 750W
*OS HD:* WD-RE2 500GB (Single)
*HDD:* (3x) WD-Black 1TB | (5x) WD-Green 1TB | (2x) Seagate 1TB 7200.12, USB RAID1 | (2x) WD-Black 1TB, USB RAID-1 | (2x) WD-Blue 750GB, USB RAID-1 |
*Server Maker:* Whitebox

*Purpose:* Multimedia File Server

================================

*Server-04*

*OS:* Windows Server 2003 R2 Enterprise Edition
*Case:* Dimension 4400 Stock
*CPU:* Intel Pentium-4 2.00GHz | 1-Core / 1-Thread Effective / 2.00GHz
*Motherboard:* Dell / Foxconn
*Cooling:* Dell Stock Cooling Shroud
*Memory:* 2GB DDR-400
*PSU:* Dell Stock
*OS HD:* (2x) WD Caviar 320GB RAID-1 | Adaptec IDE RAID
*HDD:* Seagate 7200.11 500GB eSATA (Backups) | (4x) WD-Green 2TB USB |

*Purpose:* File Server | Backup Server (Symantec BackupExec)

================================

*Server-05*

*OS:* Windows Server 2003 R2 Enterprise Edition
*Case:* Asrock Stock
*CPU:* Intel Atom 330 | 2-Core / 4-Thread Effective / 1.60GHz
*Motherboard:* Asrock stock
*Cooling:* Asrock Stock
*Memory:* 4GB DDR2-800
*PSU:* Asrock External Power Brick
*OS HD:* 2.5" Fujitsu 80GB SATA
*HDD:* (2x) Seagate 7200.11 500GB USB RAID-1

*Purpose:* File Server

================================

*Server-06*

*OS:* Windows XP Professional SP3
*Case:* Acer Ferrari 3200
*CPU:* AMD Mobile Athlon64 2800+ | 1-Core / 1-Thread Effective | 800MHz/1.8GHz
*Motherboard:* Acer Ferrari stock
*Cooling:* Stock
*Memory:* 2GB DDR-333
*PSU:* Acer External Power Brick
*OS HDD:* 2.5" Seagate 80GB PATA
*HDD:* WD MyBook Mirror Edition 1TB (2x1TB USB RAID-1)
*Purpose:* Archives Server

================================


----------



## Photographer

OS: Windows Server 2003
Case: PowerEdge 2400
CPU: 2x PIII 667MHz
Memory: 512MB 133MHz SDR
PSU: 2x 330W redundant PSUs
HDD: 4x 9.1GB (SCSI RAID 0)
Server maker: DELL

Used for: print server, torrent server


----------



## Marma Duke

Quote:

Originally Posted by *ComGuards* 
Flatterer

----

I only have the original pics I took of my server, plus a single pic of the basic modifications I made.

The rest of the servers in my ghetto server "rack". Something like 30TB or so of storage space in that area...

Phwoar, wish I had the space and money.


----------



## Photographer

Quote:

Originally Posted by *ComGuards* 
Flatterer

----

I only have the original pics I took of my server, plus a single pic of the basic modifications I made.

----

Original Front Angle View, day I unpacked it:

----

Case cover off:

----

DELL PERC6i RAID Card:

----

Front bezel off showing drive bays:...

Hands down the fastest server this thread will ever see.


----------



## aleiro

Quote:


Originally Posted by *ComGuards* 
The rest of the servers in my ghetto server "rack".

That's not ghetto... I have the same thing for all my computers.


----------



## Fooxz

I had some parts lying around, so I got a case and some HDDs, and now I have a simple server that I use for backups and file storage, and whatever other uses I may come up with.

Specs:
OS: Windows Server 2008 R2 x64 (got it for free from Microsoft DreamSpark, being in school)
CPU: AMD Athlon 64 X2 5000+ BE, cooled by an AC Freezer 7, underclocked (I believe) to save power (it hardly goes over 10% when I'm using it)
Memory: 2GB DDR2 RAM
PSU: Some 400W PSU
OS HDD: Hitachi 160GB 2.5-inch SATA
Data HDDs: 2x WD Green 1TB 3.5-inch SATA

I use one WD 1TB for data storage, and the OS backs that drive up to the other nightly (so I actually have a chance to recover older versions of files).

It was a cheap build, with plenty of power for my needs.

The case side is super thin, so I added some thickness.
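A nightly drive-to-drive mirror like the one described above is easy to script. Here is a minimal sketch in Python, with hypothetical drive paths; in practice Windows Task Scheduler (or cron) would run it each night:

```python
import os
import shutil

def mirror(src: str, dst: str) -> list:
    """Copy files from src to dst, skipping files whose size and
    modification time are unchanged. Returns the list of files copied,
    so older versions on dst survive until the source file changes."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            if os.path.exists(d):
                d_stat = os.stat(d)
                if (s_stat.st_size == d_stat.st_size
                        and int(s_stat.st_mtime) <= int(d_stat.st_mtime)):
                    continue  # unchanged since the last run
            shutil.copy2(s, d)  # copy2 preserves the modification time
            copied.append(d)
    return copied

if __name__ == "__main__":
    # Hypothetical drive letters standing in for the two 1TB data disks.
    mirror(r"D:\data", r"E:\backup")
```

Because deleted source files are left alone on the destination, this behaves like the "chance to recover older versions" setup the post mentions, rather than a strict mirror.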


----------



## Chandlermaki

I'll post some pictures later, but the specs are as follows:

Athlon 64 X2 4200+ @ 2.4GHz
2x1GB Kingston DDR2-667
PCChips A13G+ MicroATX Mobo
GeForce 6100 integrated video
On-board NIC
Old North-Star case
Apex ATX400 385W PSU

Used for my RuneScape server.


----------



## SniperXX

AMD X2 250
Biostar MATX mobo (Frys combo with cpu)
2GB DDR2
Raptor 75GB (OS)
2x 2TB HD (raid 1)
Cheap CM case I picked up for $20 at mwave's warehouse

I needed something very low-power for a home file server. All the family PCs/laptops back up to it. It's then running Carbonite and backing up the most important files. Eventually I plan to pick up a PERC 5 card and do a nice RAID 5.


----------



## Jtvd78

Quote:


Originally Posted by *PinkPenguin* 
Does this include what we get to play with at work? Cause if so... moahh lol

Go ahead, but make sure you state in the post:
1) That you do not own it
2) What company you work for
3) All of the required information from the original post


----------



## Marma Duke

Aww, more pictures, guys, so I can see what everyone else is hiding away.


----------



## Jtvd78

Added a new format - any new posts should use it. You can find it in the OP, which I also updated. This should make the thread a little bit neater.


----------



## mbreitba

I'm a 10% co-owner of this company and an employee; does that make it partially my equipment?

http://nosupportlinuxhosting.com/images/NSLH_DC_Pic.jpg

Starting in the middle (2nd visible rack) with the Promise iSCSI arrays:

Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10
Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10
Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10

Bladecenter - right to left

Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
Blank
Blank
Blank
Pic is blank - have added single Xeon 5506 w/ 2GB RAM and mirrored 250 SATA HDD - this is a control system for our InfiniBand network
Dual Xeon 5620's w/ 48GB RAM + mirrored X25-V SSD + 20Gbit InfiniBand
Dual Xeon 5520's w/ 48GB RAM + 3 32GB 10krpm SAS HDD + 20Gbit InfiniBand
16 port KVM
8 port KVM + 17" LCD/Keyboard/touchpad
Promise Vtrak M500i - 6TB RAW - ~ 5TB formatted in RAID5 - backup volumes
Promise Vtrak M300i - 3.8TB RAW - ~ 1.9TB formatted in RAID 10
6x - Tyan Transport - Dual Opteron 270's w/ 4GB RAM - dedicated hosting solution for one of our customers
Not seen second rack bottom - Dual APC 3000VA Rack mount UPS's & TrippLite 4500VA UPS

Third rack (to the right)

Very top - Dell Poweredge 350 - P3 850 w/ 512MB RAM - Firewall for our network - runs pfsense
First box - ZFSBuild.com project box - Xeon 5504, 12GB RAM, 2x Intel X25V (boot) 2x Intel X25-E (Write cache 32GB) 2x Intel X25-MG2 (160GB read cache) 20x Western Digital RE3 1TB drives. Dual port Mellanox Infinihost III EX 20Gbit Infiniband card.
Promise M610I - 16TB RAW - ~8TB formatted capacity RAID 10
Spare bladecenter
2x PowerWare 9025 5000VA 208 Volt UPS's

First rack - mostly unseen
Dell PowerEdge 350 - P3 850 - 512MB RAM - Load Balancer for SpamAssassin filtering
6x mix of Tyan and Supermicro systems - Dual opteron varying speed - 4GB RAM - Ubuntu systems running SpamAssassin and ClamAV for virus filtering
Dell Poweredge 2540 or something like that - dual P3 1133's 1GB RAM - used to be MSSQL server, now just runs WhatsUpGold
Another Poweredge - similar specs connected to powervault 220S - Tape backup library used for some critical backups - needs to be upgraded because it doesn't have enough capacity without rotating tapes constantly.

Somewhere in this rack exists an Areca SATA->SCSI unit w/ 12 500GB SATA HDD's that we use as a backup staging system. All backups go to this system, then are spooled off to tape.
Also not pictured - APC 1200VA and APC 2200VA UPS
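The "~4TB formatted" figures quoted above follow from each RAID level's redundancy overhead: RAID10 mirrors every block, while RAID5/6 give up one or two drives' worth of parity. A quick sketch of that arithmetic (drive counts here are illustrative assumptions; real formatted capacity comes out a bit lower):

```python
def usable_capacity(raid_level: str, drives: int, drive_tb: float) -> float:
    """Approximate usable capacity in TB for common RAID levels.
    Ignores filesystem/formatting overhead, which shaves off a bit more
    in practice (hence the '~' in the figures above)."""
    if raid_level == "raid0":
        return drives * drive_tb        # striping only, no redundancy
    if raid_level in ("raid1", "raid10"):
        return drives * drive_tb / 2    # every block is mirrored
    if raid_level == "raid5":
        return (drives - 1) * drive_tb  # one drive's worth of parity
    if raid_level == "raid6":
        return (drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError("unknown RAID level: " + raid_level)

# e.g. 8x 1TB in RAID10 -> 4.0 TB usable, matching the m610i figures.
print(usable_capacity("raid10", 8, 1.0))
```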


----------



## Fooxz

@mbreitba

*drool* I like.


----------



## Jtvd78

NICCCCCCEE!!!
I have one question, tho. What's your up/down Internet speed? You must need extremely high speeds for that...


----------



## mbreitba

We've got a gigabit drop, but we're rate-limited to lower than that - a simple phone call could turn us up to full gigabit in a few minutes.


----------



## SupaSupra

Quote:


Originally Posted by *mbreitba* 
We've got a gigabit drop but we're rate limited to lower than that - a simple phone call could turn us up to a gigabit in a few minutes.

I'd make the call.


----------



## Marma Duke

How many sites are you hosting? Sheesh.


----------



## mbreitba

North of 30,000.


----------



## Puckbandit35

OS: WHS
Case: NZXT Beta
CPU: Intel E6300
Motherboard: DFI P35
Cooling: A couple of 120mm fans
Memory: 2GB
PSU: Antec EarthWatts 450
HDD: 3x 1TB WD Black, SATA (2 more on the way)
Server maker: Me

Streams my music and movies to all the computers in my house. Backs up all 4 of the computers in my house every night. Makes my music and videos accessible anywhere I am. I also store all my important files on it.

Also Turtle Rock Studios FTW for sending me that sticker.


----------



## mbreitba

Quote:


Originally Posted by *SupaSupra* 
I'd make the call.

No need to, though - we've got things managed well enough that we don't get unforeseen spikes. We actually only use about half of the bandwidth we currently have turned up. No reason to jump to gigabit and pay for it when we have no need for it.


----------



## killabytes

*My Web Server:*

*AMD Athlon XP 1800+
2GB PC3200
RAID 1 array, 60GB*

I host 5 sites.

*Game Server:*

*Intel PIII 1GHz
1.5GB PC133 RAM
60GB HDD*

Sits around waiting to be sold now. It did host Battlefield 1942 & Urban Terror for years.

Both systems are old, but work great. I've had them running 24/7 for years.


----------



## ComGuards

Quote:


Originally Posted by *killabytes* 
*My Web Server:*

*AMD Athlon XP 1800+
2GB PC3200
RAID 1 Array 60Gb*

I host 5 sites.

*Game Server:*

*Intel PIII 1Ghz
1.5Gb PC133 RAM
60Gb HDD*

Sits around waiting to be sold now. Did host Battlefield 1942 & Urban Terror for years.

Both systems are old, but work great. I've had them running 24/7 for years.

Those cases bring back some nice memories...


----------



## killabytes

The red one was given to me by a local gaming cafe years ago. The white one I bought new years upon years ago. Lots of room in them; I'll upgrade someday, but it's not worth it with my system specs.

You can tell I hang onto older computers... no?


----------



## ComGuards

Quote:

Originally Posted by *killabytes* 
The red one was given to me by a local gaming cafe years ago. The white one, I bought new years upon years ago. Lots of room in them, I'll upgrade someday. But not worth it with my system specs.

You can tell I hang onto older computers...no?

I used to have three of those white ones. I gave two of them away. I still have one, and it's got the guts of a system inside - but it's got an old, old, old external water-cooling system in it, and I think it leaks, lol. Anyways, it's just collecting dust for now. P4 2.66GHz, Northwood-B, 1.5GB RAM, AGP, that's about it. It was struggling as my MCE system, so it got replaced... but yeah, lots of space, though full-length AGP cards still blocked a couple of the HDD bays...


----------



## AMD SLI guru

OS: Windows 7
Case: I forget the model, but it's all steel, cost me about 200 bucks from Newegg, and came with the RAID cage
CPU: AMD 720 X3, 3.2GHz @ 3.4GHz with the 4th core unlocked
Motherboard: Gigabyte gaming board
Cooling: Stock CPU cooler
Memory: 4GB DDR2 @ 1066
PSU: Thermaltake 400W
GPU: 8800GT
OS HDD: 1x 750GB HDD
HDD: 5x 1TB HDD via SATA, 1.5TB HDD via SATA
Server maker: Me

Used for: media server for my apartment, and folding 100% of the time.


----------



## mbreitba

Quote:


Originally Posted by *AMD SLI guru* 

Used for: Media Server for my Apartment and folding 100% of the time.

I like the chassis for that - I've had a few that are similar, and liked all of them.

Just out of curiosity, what do you use for your media center OS? I've grown accustomed to using XBMC on an original xbox, and love using a wireless game controller to control it. The only drawback that I've found is that I can't play HD content on it (just not fast enough). I've toyed with building a media center PC, but haven't found an interface that I'm really in love with.


----------



## Damarious25

Well, as long as we're posting ghetto setups, I don't feel so bad throwing these up...

Edit: pics had too much personal info in the file names... had to rename/repost.

This is the play room... the gaming rig in the living room doubles as an HTPC too.
Only got a bunch of 1TB drives for now; will build an actual file server with a bunch of 2TB drives in RAID next year. Getting this (very) soon too, for school/work/remote access. Monochrome printer (FTW) behind the dog food/garbage bag.

Here it is... nothing special.
OS: Win7 Ultimate 64
Case: ***
CPU: ***
GPU: ***
Motherboard: ***
Cooling: Stock CPU and a few fans
Memory: ***
PSU: ***
OS HDD: ***
HDD: ***


----------



## Jtvd78

Quote:


Originally Posted by *Damarious25* 
well. as long as were posting ghetto setups i dont feel so bad throwin these up...

edit. pics had to much personal info as file names... had to rename/repost.

Check the post format in the OP.


----------



## Damarious25

Quote:


Originally Posted by *Jtvd78* 
Check post format in the OP.

You want specs? It's nothing special by any means, but I can provide them.


----------



## Jtvd78

Quote:


Originally Posted by *Damarious25* 
you want specs? its nothing special by any means but can provide

Yup


----------



## AMD SLI guru

Quote:


Originally Posted by *mbreitba* 
I like the chassis for that - I've had a few that are similar, and liked all of them.

Just out of curiosity, what do you use for your media center OS? I've grown accustomed to using XBMC on an original xbox, and love using a wireless game controller to control it. The only drawback that I've found is that I can't play HD content on it (just not fast enough). I've toyed with building a media center PC, but haven't found an interface that I'm really in love with.

I'm using Windows 7 64-bit Home Premium. I also use PS3 Media Server to stream to my PS3 and Xbox 360. I stream my 1080p Blu-ray rips over the network from that computer, and PS3 Media Server uses the CPU to transcode the file formats to work on both my Xbox and PS3. It totally replaced the media center PC I had in my living room.


----------



## Damarious25

Quote:


Originally Posted by *Jtvd78* 
Yup

see post


----------



## Jtvd78

Quote:


Originally Posted by *Damarious25* 
see post

Thanks. Hope that wasn't too much trouble. Lol


----------



## Damarious25

Quote:


Originally Posted by *Jtvd78* 
Thanks. Hope that wasn't too much trouble. Lol

meh... just lazy is all


----------



## Spectre21

Radius:

OS: Ubuntu Linux 9.10 & Windows XP
CPU: Intel i7 920 D0, stock
HDD: 2x Samsung 1TB, 1x WD 640GB & another on the way
GPU: Nvidia 6800GS 256MB
Software: Linux remote software, network backup software, video editing software, and various standard packages (office, etc.)

Purpose:

Backs up both of my machines regularly, media storage, video editing and encoding.


----------



## IBuyJunk

Dell Vostro 200
Core 2 Duo 1.83GHz E6400
5GB DDR2
8GB USB flash drive (OS installed)
160GB HDD - virtual machines
500GB HDD - file storage

OS: Ubuntu Server 10.04


----------



## HighOC

Here is mine. Sorry for the shoddy pics.

Used for torrenting 24/7, and it's my bro's PC.
Intel Atom 1.2GHz
1GB RAM


----------



## portauthority

My old server looked something like that.

It was a Dell Optiplex 755
Core 2 Duo 2.33GHz
4GB DDR2 RAM
160GB Hard Drive
500GB Hard Drive
DVD/CDRW Drive
DVD Drive
Floppy Drive (yes, floppy)
Onboard Broadcom Gigabit and an Intel Gigabit PCI Card
Windows Server 2003 Standard Edition

I threw everything you could imagine on that machine: AD, DNS, DHCP, downloads, file server, Symantec AV console, IIS, etc.

Now I have nothing.

@ComGuards: how much is the energy bill for all that stuff?


----------



## Jtvd78

Quote:


Originally Posted by *Spectre21* 
Radius,

OS: Linux Ubuntu 9.10 & Windows XP
CPU: Intel I7 920 D0 stock
HDD: 2 Samsung 1TB, 1x WD640GB & Another on the way
GPU: Nvidia 6800GS 256MBs
Software, Linux remote software, network backup software and video editing software and various standard packages office etc

Purpose,

Backup both of my machines regularly, storage of media, video editing and encoding

Got any pics?

Quote:


Originally Posted by *Damarious25* 
well. as long as were posting ghetto setups i dont feel so bad throwin these up...

edit. pics had to much personal info as file names... had to rename/repost.

this is the play room... gaming rig in living room dubs as htpc too.
only got a bunch of 1TB drives for now. will build actual file server with a bunch of 2TB drives in raid next year. getting this (very) soon too for school/work/remote access. monochrome printer (FTW) behind dogfood/garbage bag

here it is... nothing special.
OS: win7 ultimate 64
Case: ***
CPU: ***
GPU: ***
Motherboard: ***
Cooling: stock cpu and a few fans
Memory: ***
PSU: ***
OS HDD: ***
HDD: ***

What happened to all the specs and pics?

Quote:


Originally Posted by *IBuyJunk* 
Dell Vostro 200
Core 2 Duo 1.83 E6400
5gb DDR2
8gb USB Flash Drive (os installed)
160gb HD - Virtual Machines
500gb HD - File Storage

OS: Ubuntu Server 10.04


Quote:

Originally Posted by *HighOC* 
Here is mine, Sorry for the Shiitii Picz

used For Torrenting 24/7 And my Bro's PC
Intel Atom 1.2Ghz
1G RAM

Check the first post for the format.


----------



## ComGuards

Quote:


Originally Posted by *portauthority* 
@ComGuards: how much is the energy bill for all that stuff?

Unknown. I live in a condo and the electrical bill is included in the monthly management/maintenance fees, which cover electricity, water, air-con/heat, & digital cable TV.


----------



## kz26

I use my sig rig at college for light web/file serving via HTTPS and FTPS. Also have Subsonic for remote access to my music.

Back home there's a machine equipped with an Athlon X2 4850e, 2GB RAM, 250GB+1TB hard drive used for a Windows 7 Media Center as well as a torrent and storage server.


----------



## wtomlinson

Quote:


Originally Posted by *portauthority* 
My old server looked something like that.

It was a Dell Optiplex 755

That's crazy, it's the exact same thing I have.

OS: Windows Server 2008 Standard (prior WHS)
Case: Dell Optiplex 755
CPU: E4500 2.20GHz
Memory: 2GB DDR2 RAM
PSU: Raidmax 420W
OS HDD: WD 80GB
HDD: 1x WD 500GB, 1x Seagate 500GB

It shares all my media and my printer, and downloads my torrents. Right now I'm playing around with IIS, trying to get it to look like the WHS remote access pages.


----------



## portauthority

The Optiplex PCs are great for this stuff.

@ComGuards: lucky you...
I noticed all six of your NIC ports are used up; what do they do?
P.S. I don't know if you noticed this or not, but your service tag is visible.


----------



## phasezero

This is my windows home server closet setup. I pass all my old gear to it. I use it to backup and serve media to 4 PC's and to stream blu-ray rips to my PS3. It's also a print server to the color ink jet canon printer and a canon laser printer.

*OS*: Windows Home Server

*Case*: Antec SLK3700AMB

*CPU*: AMD Phenom II X2 555

*Motherboard*: MSI 785GTM-E45

*Cooling*: Rocketfish RF-UPCUWR (CM Hyper TX3 Copy)
2 x 120mm Case fans

*Memory*: Crucial Ballistix 2x2gb DDR2 800

*PSU*: Rocketfish RF-700WPS2 (550-watt CWT PSU)

*HDD*: WD 1TB Caviar Black
Samsung 2TB F3EG
Samsung 1TB F2EG x 2


----------



## metallicamaster3

Subbed. Just wait until I get home...


----------



## rpm666

OOOO

AMD X4 630
MSI 785G
8GB DDR3 1600
1x 250GB
1x 750GB
Server 2008 R2

And I suppose pics to come when I get home.

Full SCCM 2007 setup.
Full application virtualization setup.


----------



## nookkin

I converted an old Gateway laptop into my server... it didn't cost me anything, has low power usage and a compact form factor, comes with a built-in uninterruptible power supply (the battery), and has a built-in screen for when I need physical access to it. That makes it perfect for a small server.


(Click for full. Please pardon the low quality of the photo.)

*Specs:*
*OS:* Windows Server 2003
*CPU:* Intel Pentium M @ 1.6 GHz
*RAM:* 512 MB
*HDD:* 40GB

I access it via RDP (Remote Desktop) most of the time, and I've even forwarded it through the router so I can access it outside the house.

My primary uses for it are:

*File server*, obviously for storing files on the network. I set it up with user-level access control, so every user in the house can access only their files; some folders are readable/writable by all users, while others are read-only, and still others are completely inaccessible to all but the authorized users.

*Download and upload server*. Instead of leaving my sig rig on all night, I delegate the downloads and uploads of large files to the server (e.g. downloading a Linux image, uploading a YouTube video, or uploading my off-site backup).

*Demo server via RDP.* I haven't made serious use of this yet, but let's say I design a website for a client and I want them to preview it while minimizing the chance of them stealing the design without paying. I will temporarily create a restricted account for them on the server, and will watch them via Windows Server 2003's "Remote Control" feature. This also works for giving demos of any software I write.

*Mini web/HTTP server*. No, I don't host my blog on it, but I do have an intranet site for the house as well as an internet-accessible site where I post files I want to send to people. Since Pidgin (my IM client) regularly fails to upload a file I'm sharing with a friend, I just put it on the server and give them a link to it.

*Folding rig?* I thought of making this into a folding rig, since it's basically on 24/7, but I don't know how practical it would be. A modest GPU can fold considerably better than this machine. What are your thoughts on this?

I would have set up a print server, but since we have an all-in-one printer/scanner/copier, this would not work. I also thought about setting up a domain controller, but decided against it because it would likely drive my family crazy.
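A mini file-sharing web server like the one described above - drop a file in a folder, hand a friend the link - can be prototyped with nothing but the Python standard library. This is a generic sketch rather than nookkin's actual Windows Server 2003 setup, and the shared-folder path is a hypothetical example:

```python
import threading
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_directory(directory: str, port: int = 0) -> ThreadingHTTPServer:
    """Serve `directory` over HTTP in a background thread.
    port=0 lets the OS pick a free port; read it back from
    server.server_address[1]."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve_directory("shared")  # hypothetical folder of files to hand out
    print("Serving on port", srv.server_address[1])
```

Anyone on the LAN (or, with the kind of router port-forward nookkin mentions using for RDP, anyone on the internet) can then fetch `http://host:port/filename`.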


----------



## Jtvd78

@nookkin: the laptop server seems like a good idea; however, you won't be able to store much data internally...


----------



## nookkin

Yeah, 40GB isn't much, but I can always a) upgrade the internal drive to something bigger or b) add a USB-powered external hard drive.

Of course, a laptop isn't the best idea for a serious server, simply because it was never designed to be under so much stress. That's why you'd need to buy server-grade SCSI/SAS drives, ECC/buffered RAM, etc. if you wanted something to run an office building's Active Directory, for example.


----------



## Jtvd78

I mean it is a good idea for low stress, home use.


----------



## Dilyn

Pics

*(Description)*
It's just a silly little media server I made. I was fiddling around with it, got bored, and decided I would enjoy being able to stream my music everywhere. Used LAMP for it; it seemed pretty easy. I was following a tutorial online, and some of the commands were broken/outdated, or the author was missing info, so I created my own guide for other people to use (finding the original took about two days of searching).
Right now it's out of commission... I didn't feel like updating the DNS address, so it's gone for a while. The backlight is busted, so I need to hook it up to an external monitor to see what I'm doing as well. I don't feel like sinking cash into making it work nicely, as I got it for free because of the broken backlight. It's just for funsies, and so I can get my music wherever.

Going to try a Server Edition later on to make it much better... the GUI just kills it.
I also want to keep it in my basement connected to my hardline... but I don't want to have to go all the way downstairs with a monitor to refresh the damn thing. Oh well, not everything can be amazingly easy.

*OS:* Ubuntu Linux 9.04 Desktop Edition. Looking to retry with the Server Edition though.
*Case:* Just a laptop case
*CPU:* It's a mobile CPU... dual core... thinking it's about 1.6GHz?
*Motherboard:* Stock Dell board.
*Cooling:* Stock fans.
*Memory:* About 1 gig if I remember correctly... can't check right now though.
*PSU:* Umm...
*HDD:* 30GB unformatted. About 25 gigs left after the OS. Just enough for my music!
*Server maker:* Dell

*What you use it for:* Media server.
*Temps / loudness / etc.:* Very quiet, although it does get quite hot.
*Additional software:* Just Linux/Apache/MySQL/Python, as well as DynDNS for internet access. I use another program to get it going, but I forget what it's called.
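The music-streaming idea behind that LAMP-with-Python stack can be boiled down to a very small sketch. This is not Dilyn's actual setup, just an illustration: a bare WSGI app (the kind Apache's mod_wsgi would host) that lists a music folder as download links. `MUSIC_DIR`-style paths here are hypothetical:

```python
import os

def make_app(directory):
    """Return a bare-bones WSGI app that lists `directory` as links."""
    def app(environ, start_response):
        try:
            names = sorted(os.listdir(directory))
        except OSError:
            names = []  # directory missing: serve an empty listing
        body = "\n".join('<a href="/{0}">{0}</a><br>'.format(n)
                         for n in names)
        data = "<html><body>{}</body></html>".format(body).encode("utf-8")
        start_response("200 OK",
                       [("Content-Type", "text/html; charset=utf-8"),
                        ("Content-Length", str(len(data)))])
        return [data]
    return app

# To run it standalone (mod_wsgi under Apache hosts `app` the same way):
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, make_app("/var/music")).serve_forever()
```

Pair it with a dynamic-DNS client, as the post describes, and the links stay reachable from outside even on a home connection.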


----------



## ComGuards

Quote:


Originally Posted by *nookkin* 
Yeah, 40GB isn't much, but I can always a) upgrade the internal drive to something bigger or b) add a USB-powered external hard drive.

Of course, a laptop isn't the best idea for a serious server, simply because it was never designed to be under so much stress. That's why you'd need to buy server-grade SCSI/SAS drives, ECC/buffered RAM, etc. if you wanted something to run an office building's Active Directory, for example.

Add in a second NIC and you could use the system as an iSCSI front-end with an iSCSI SAN....


----------



## the_beast

Or add a couple of eSATA ports via the PC Card slot and run a few 2TB drives directly. If you currently manage with 40GB, 4TB should see you OK for the foreseeable future...


----------



## Damarious25

Quote:


Originally Posted by *Jtvd78* 
What happened to all the specs and pics?

Sorry, thought the thread had died. Was really sad too, because I really like it!!! Will repost in a few hours.

Quote:


Originally Posted by *ComGuards* 
Unknown. I live in a condo and the electrical bill is included in the monthly management/maintenance fees, which includes electricity, water, air-con/heat, & digital cable-TV.

Jerk... (j/k)

Quote:


Originally Posted by *phasezero* 
(pics)

So... clean...?!

Quote:


Originally Posted by *metallicamaster3* 
Subbed. Just wait until I get home
...

tick tock tick tock.

------

Also, all the little laptop file storage systems look great! Great idea!


----------



## bobfig

Well, here is mine... I MacGyvered it, so don't laugh xD

It's an old Compaq case and a proprietary PSU.

Specs:
Intel Atom 230 on a D945GCLF
200GB WD PATA HDD
200GB Maxtor PATA HDD
640GB Seagate 7200.11 SATA
1GB of generic RAM
Win7 OS

I mainly use it for network storage and backups. I have tried running L4D and Vent/TeamSpeak servers on it, but my internet connection sucks major monkey balls.
All in all I like it. I just need to upgrade it to 2GB of RAM some day. Never had any problems, and if you look closely, I used a Molex connector for the 4-pin power plug.

Oh, and BTW, that is a Win ME key on the front.


----------



## TheLaw

Quote:

Originally Posted by *portauthority* 
the Optiplex PCs are great for this stuff

Represent! 2002 Optiplex GX260 still hauling.

----------



## chatch15117

I have my laptop loaded with a bunch of services so I can take my business on the go. HTTP, POP3/SMTP, SQL etc.

Sample Blade of the new Westmere revision

4x Westmere ES chips
8x8GB DDR3 1333
Custom 4 socket board

600GB SAS drives

VMware ESX is taking care of it, with 2008 R2 Datacenter Edition & a *NIX. Not powered on all the time.


----------



## Jtvd78

Quote:


Originally Posted by *chatch15117* 
I have my laptop loaded with a bunch of services so I can take my business on the go. HTTP, POP3/SMTP, SQL etc.

Sample Blade of the new Westmere revision

4x Westmere ES chips
8x8GB DDR3 1333
Custom 4 socket board

600GB SAS drives

VMware ESX is taking care of it, with 2008 R2 Datacenter Edition & a *NIX. Not powered on all the time.

Got any pics?


----------



## IBuyJunk

Quote:


Originally Posted by *bobfig* 
Well, here is mine... I MacGyvered it, so don't laugh xD

It's an old Compaq case with a proprietary PSU.

Specs:
Intel Atom 230 on a D945GCLF board
200GB WD PATA HDD
200GB Maxtor PATA HDD
640GB Seagate 7200.11 SATA
1GB of generic RAM
Win7 OS

I mainly use it for network storage and backups. I have tried running L4D and Vent/TeamSpeak servers on it, but my internet connection sucks major monkey balls.
All in all I like it, just need to upgrade it to 2GB of RAM some day. Never had any problems, and if you look closely I used a Molex connector for the 4-pin power plug.

Oh, and BTW, that is a Win ME key on the front.

Hah! I was going to stick an Atom mini ITX board in my parents' computer which is the same model.


----------



## Dickinson

File server, Stream Music and videos to the other computers, run some virtual machines,ftp, ssh, torrent.

Intel Q6600
Cooling CM Hyper TX2, 2 Enermax White leds in front and 2 antecs behind.
Motherboard Asus p5kpl
HDs Seagate 80 Sata, 80 IDE, 120 IDE, 1000Gb, 2x Wd 1000Gb Green
2 Gb DDR2 667
Radeon 4850
OS Ubuntu
Antec 300
Corsair vx 450w
I build this rig to play games 2 years ago


yeah i know, i need to organize this cables


----------



## MCBrown.CA

The Dell R710 in my signature is not mine. It belongs to the university I attend, but it is my baby for the time being, as I am moving our entire IT program's server setup into a virtual environment. Specs are here. The system is currently hosting 10 servers.


----------



## Jtvd78

^^ pic is broken
EDIT: fixed now


----------



## Darkknight512

Everything server running on a laptop with a broken screen

OS: Win Vista
Case: Laptop
CPU: Pentium M / 1.5GHz
Motherboard: Acer C300 (laptop)
Memory: 768MB
OS HDD: 40GB
Storage HDD: None
Server Manufacturer: Acer

Print, HTTP, Vent, Games, FTP, File server, Teamspeak, Mail server, media server, bit torrent


----------



## chatch15117

Quote:


Originally Posted by *Jtvd78* 
Got any pics?

Westmere *ES* chips... no pix, lol. This revision will be available for purchase next year.


----------



## jibesh

Here are my server specs that I listed in another thread.

http://www.overclock.net/servers/748...l-servers.html


----------



## ndoggfromhell

OS: Windows Server 2008
Case: Thermaltake Armor
CPU: Phenom X4 9150
Motherboard: Asus M3A76-CM
Cooling: Zalman
Memory: 8GB DDR2-800
PSU: Corsair
OS HDD: 20GB Seagate 2.5" SATA (mirrored)
Storage HDD: 4x 500GB SATA (RAID 5 = 1.5TB), 4x 1TB (RAID 10 = 2TB), 2x 1.5TB (RAID 1 = 1.5TB)
Server Manufacturer: Me!

What you use it for: Storing movies, tv shows, etc... Backups for all PC's.
Any additional software that you use: uTorrent for "downloading", also folding.
Pics I'll include later
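The usable sizes in that storage line fall straight out of the RAID levels: RAID 5 sacrifices one drive to parity, RAID 6 sacrifices two, and RAID 1/10 mirror away half the raw capacity. A quick sanity check in Python (the helper is hypothetical, just to show the arithmetic):

```python
def usable_tb(level, n_drives, size_tb):
    """Usable capacity for common RAID levels, in the same unit as size_tb."""
    if level == 5:           # one drive's worth of parity
        return (n_drives - 1) * size_tb
    if level == 6:           # two drives' worth of parity
        return (n_drives - 2) * size_tb
    if level in (1, 10):     # mirrored: half the raw capacity
        return n_drives * size_tb / 2
    raise ValueError("unsupported RAID level")

print(usable_tb(5, 4, 0.5))   # 1.5 -> the 4x 500GB RAID 5 array
print(usable_tb(10, 4, 1.0))  # 2.0 -> the 4x 1TB RAID 10 array
print(usable_tb(1, 2, 1.5))   # 1.5 -> the 2x 1.5TB RAID 1 mirror
```

Same math explains the 8TB-raw / 6TB-usable RAID 6 arrays posted elsewhere in the thread.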


----------



## mrsmoke

OS: Windows 7
Case: Antec
CPU: Q6600 G0 / 2.4GHz / 4 cores
Motherboard: Asus P5Q-Pro
Cooling:
Memory: 4GB
PSU: 300W no-name
OS HDD: 80GB Seagate
Storage HDD: 2TB (4x 1TB / 2 arrays)
Interface: Adaptec 1420SA 4-port SATA II RAID controller
Server Manufacturer: Myself

Use: Backup (movies, music, pictures), FTP file server, website hosting
Additional Software: Cerberus FTP Server, IIS
Pics:


----------



## sofakng

Here's what I'm running:

*CPU*: AMD Phenom II X6 1055T (6 cores @ 2.8 GHz)
*RAM*: 8 GB DDR2 ECC
*STORAGE*: PERC 6i - 8 TB RAID 6 (6 TB usable)
*OS DRIVE*: WD 500 GB
*VHD DRIVE*: WD VelociRaptor 150 GB
*OS*: Windows Server 2008 R2 (Hyper-V) with several virtual machines.

*Purpose*: Storage, streaming, downloading (SABnzbd/SickBeard, uTorrent), hosting (IIS/Apache), development lab.

I'm trying to figure out some other uses, since it's quite powerful and I also have a business internet connection (with five static IP addresses) plus a decent upload.


----------



## Jtvd78

Quote:


Originally Posted by *sofakng* 
Here's what I'm running:

*CPU*: AMD Phenom II X6 1055T (6 cores @ 2.8 GHz)
*RAM*: 8 GB DDR2 ECC
*STORAGE*: PERC 6i - 8 TB RAID 6 (6 TB usable)
*OS DRIVE*: WD 500 GB
*VHD DRIVE*: WD VelociRaptor 150 GB
*OS*: Windows Server 2008 R2 (Hyper-V) with several virtual machines.

*Purpose*: Storage, streaming, downloading (SABnzbd/SickBeard, uTorrent), hosting (IIS/Apache), development lab.

I'm trying to figure out some other uses, since it's quite powerful and I also have a business internet connection (with five static IP addresses) plus a decent upload.

^^^ That's a bit powerful for a home server. You could run a private game/Vent server.


----------



## Darkknight512

Quote:


Originally Posted by *sofakng* 
Here's what I'm running:

*CPU*: AMD Phenom II X6 1055T (6 cores @ 2.8 GHz)
*RAM*: 8 GB DDR2 ECC
*STORAGE*: PERC 6i - 8 TB RAID 6 (6 TB usable)
*OS DRIVE*: WD 500 GB
*VHD DRIVE*: WD VelociRaptor 150 GB
*OS*: Windows Server 2008 R2 (Hyper-V) with several virtual machines.

*Purpose*: Storage, streaming, downloading (SABnzbd/SickBeard, uTorrent), hosting (IIS/Apache), development lab.

I'm trying to figure out some other uses, since it's quite powerful and I also have a business internet connection (with five static IP addresses) plus a decent upload.

By development lab, do you mean running virtual machines for testing, or do you mean an SVN server? Also, have you thought about using ESXi to run all the systems in virtual machines?


----------



## IrDewey

FTP, HTTP, the occasional GMod or CSS server, some folding (for what it's worth), and seedbox.

P4 2GHz
256MB RDRAM
Nvidia GeForce4 MX 420


----------



## Boyboyd

OS: FreeNAS
CPU: Intel Pentium D 820
Motherboard: P5WD2 Premium ICH8R
Cooling: Stock
Memory: 4GB
PSU: ummmm, a noisy one
OS HDD: 80GB SATA
Storage HDD: 500GB RAID 1 / SATA (redundancy is the most important thing for us)

What you use it for: (in no particular order) laser print server, backups from ALL the work PCs, backups of itself, VPN (oh yes, a mighty 100kb/s up speed), NAS.

Temps, loudness, etc: Honestly got no idea about the first, it's very quiet though.

I admit it's not the fastest server in the world (or as fast as we need), but it gets the job done. I can recommend FreeNAS to anyone.


----------



## KG363

I don't even know how to use or employ a server


----------



## MCBrown.CA

Quote:


Originally Posted by *MCBrown.CA* 
The Dell R710 in my signature is not mine. It belongs to the university I attend but it is my baby for the time being as I am moving our entire IT programs servers setup into a virtual environment. Specs are here. The system is currently hosting 10 servers.

Quote:


Originally Posted by *Jtvd78* 
^^ pic is broken

Fixed!

Figure I'll put the specs here to save y'all a click.

CPUs: dual Xeon E5520 (i7) 2.26GHz
RAM: 12GB DDR3 1066MHz ECC RDIMMs
Storage: SAS 6/ir controller: 250GB RAID1 (OS), 1TB RAID1 (VM datastore) with two 1TB hot-spares
PSU: redundant 870W
OS: ESXi 4.0 Advanced running 10+ VMs at the moment


----------



## LoneWolf15

I'm running an HP MediaSmart EX490.

I've swapped the original processor from a Celeron 420 to a Pentium Dual-Core E2140 pulled from a system that was upgraded. I have the original Seagate 7200.10 1TB drive in it, plus three Seagate 7200.11 1.5TB drives. I have a Hitachi 2TB in a Thermaltake external enclosure hooked up via eSATA to serve as a backup drive.

I'm using an APC BackUPS ES 750 for protection, with the GridJunction add-on installed for Windows Home Server, and WHSClamAV for Antivirus. I'm very happy with the box, it's a slick little setup, and due to HP's extra work, it supports Mac too in case I need to do backup work for a friend or colleague.

P.S. Nice Server, MCBrown. I look over a fair amount of Dell server equipment at work as well, though my organization is probably smaller than yours. I've virtualized all but two of our Active Directory servers using VMWare ESX.


----------



## Damarious25

Great photo skills, Wolf... or is that a manufacturer pic???


----------



## AMD SLI guru

OS: Windows 7 64bit Ultimate
Case: Istar Steel w/ 5 bay Raid Cage
CPU: AMD X3 720 @ 3.2 GHz
Motherboard: Gigabyte GA-MA790X-UD4P
Cooling: Stock CPU Cooler
Memory: 4GB DDR2 1066
PSU: Thermaltake 430W
OS HDD: 500GB 7200RPM Western Digital
Storage HDD: 5TB; 5x 1TB Western Digital HDDs, all SATA II, no RAID
Server Manufacturer: ME!

What you use it for: home storage server, media streamer, constant folder with CPU and 9800GT
Temps, loudness: Temps are about 47C at the CPU; honestly it's pretty loud. I don't know the dB, but it's louder than my gaming rig *sig rig*.

Pics


----------



## LoneWolf15

Quote:


Originally Posted by *Damarious25* 
great photo skills wolf... or is that a manufacturer pic???

I ripped that one -- though I can always take a shot of mine and its external drive. The UPS is away from it, on the floor. I'm not nearly that organized.

I do have to say, I'm eagerly awaiting the final release of WHS Vail. Windows Home Server is one of those products that Microsoft has done a downright decent job with.


----------



## chingwilly

OS: XP Pro SP3
Case: Cooler Master 590
CPU: Intel E5200
Motherboard: Abit P43
Cooling: stock HSF, bunch of 120mm fans
Memory: 2GB DDR2-800 OCZ
PSU: Corsair 650
Storage:
(4) 2TB green WD, (8) 1TB green WD
(2) Promise 4-port SATA cards, PCI VGA (just in case I need it)
(3) Cooler Master 4-in-3 HDD bays
Uses: file server and backup, streaming Blu-ray and DVD to the HTPC


----------



## i_ame_killer_2

My random-junk ClearOS server (used as a router), currently under my bed... Works well though.

P4 1.6ghz
512mb ram
10gb HDD


----------



## killabytes

Quote:


Originally Posted by *killabytes* 
*My Web Server:*

*AMD Athlon XP 1800+
2GB PC3200
RAID 1 Array 60GB*

I host 5 sites.

*Game Server:*

*Intel PIII 1GHz
1.5GB PC133 RAM
60GB HDD*

Sits around waiting to be sold now. Did host Battlefield 1942 & Urban Terror for years.

Both systems are old, but work great. I've had them running 24/7 for years.

Replaced the 1800+ with a Barton 2900+ from a member here. Still haven't sold the P3, no big deal.

But the newest member to the family is a Cobalt Raq XTR.

Specs:
733MHz P3
1.25GB of ECC PC133 RAM
4x 30GB 7200RPM in RAID

I just got it and it has the factory ROM/Linux. I'm looking to dump Debian onto it. Seems like it's a bit of a hack though. But I'll try. I attached some pics.


----------



## tombug

Quote:


Originally Posted by *chingwilly* 
OS: XP Pro SP3
Case: Cooler Master 590
CPU: Intel E5200
Motherboard: Abit P43
Cooling: stock HSF, bunch of 120mm fans
Memory: 2GB DDR2-800 OCZ
PSU: Corsair 650
Storage:
(4) 2TB green WD, (8) 1TB green WD
(2) Promise 4-port SATA cards, PCI VGA (just in case I need it)
(3) Cooler Master 4-in-3 HDD bays
Uses: file server and backup, streaming Blu-ray and DVD to the HTPC

Hey, what are those hard drives in, some type of cage? If you can link me to where you got them I would appreciate it; I know that the bottom one is included.


----------



## LoneWolf15

Quote:


Originally Posted by *killabytes* 
But the newest member to the family is a Cobalt Raq XTR.

Cool. I always thought the Cobalt Qube was a really cool piece of equipment.

For comp-history buffs --Wikipedia link


----------



## killabytes

The Qube is pretty badass looking. I'd love to get the case and put some modern hardware inside. Could be a center piece in a man cave, lol.


----------



## chingwilly

Hey, what are those hard drives in, some type of cage? If you can link me to where you got them I would appreciate it; I know that the bottom one is included.

http://www.newegg.com/Product/Produc...-002-_-Product


----------



## Jtvd78

It would be pretty cool if we could get this stickied.


----------



## monogoat

No pics at the moment

OS: Ubuntu (Planning a move to Gentoo)
Case: Cheapo
CPU: (2) Intel Xeon Prestonia LV 1.60GHz w/ HyperThreading
Motherboard: Asus PC-DL Deluxe
Cooling: Intel 1U Server Heatsinks w/ 80mm fans zip-tied on
Memory: 1GB of mismatched sticks (2x 256MB, 1x 512MB)
PSU: ENERMAX MODU82+ 625W
OS HDD: 36GB Raptor SATA
Storage HDD: (4) 300GB, (4) 320GB IDE drives in RAID 5
Server Manufacturer: Me fool!

What you use it for: file server, eventually also a MythTV server.

-----------------------------------------------------------------------------

OS: Ubuntu (Planning a move to Gentoo)
Case: Stock Compaq Tower
CPU: Intel Pentium III 500MHz
Motherboard: Stock Compaq
Cooling: Stock Compaq
Memory: 512MB
PSU: Stock Compaq
OS HDD: 60GB Quantum Fireball
Server Manufacturer: Compaq

What you use it for: Internet gateway, firewall, dhcp and dns server.


----------



## Freightweight

OS: Gentoo
Case: some Rosewill for $20
CPU: AMD Sempron, unlocked 2nd core
Memory: 2GB value RAM
PSU: Antec 400W 80+
Storage: 2x 1.5TB 7200RPM Seagate, 1TB 7200RPM Seagate, 8GB CF card for OS


----------



## zodiacdm

Quote:


Originally Posted by *Freightweight* 
OS: Gentoo
Case: some Rosewill for $20
CPU: AMD Sempron, unlocked 2nd core
Memory: 2GB value RAM
PSU: Antec 400W 80+
Storage: 2x 1.5TB 7200RPM Seagate, 1TB 7200RPM Seagate, 8GB CF card for OS

Hey, did you guys hear that the latest Gentoo distro came with a package containing a *backdoor virus*?

http://www.zdnet.com/blog/bott/linux...06?tag=nl.e550


----------



## e_dogg

*OS:* Windows Home Server
*Case:* Antec 300
*CPU:* AMD Sempron LE-1250
*Motherboard:* Gigabyte GA-MA78GPM-DS2H
*Cooling:* Passive CoolerMaster Gemini II, 2x 120mm Antec Tricools for intake, 1x 120mm Antec Tricool exhaust, 1x 140mm Antec Tricool exhaust
*Memory:* 2x 1GB G.Skill 1066 (formerly Ballistix that died - need to RMA those suckers)
*PSU:* Ultra 400w modular
*Storage HDD:* 2x 750GB Western Digital RE2 Green WD7500AYYS drives, 1x Seagate 500GB 7200.10 (cannibalized from another system)
*Backup HDD:* 1x Western Digital 1TB Green drive in an eSATA dock

*Uses:* PC/laptop backups, file server, remote access (when port forwarding is actually working)

My WHS box is the one on the right. Unfortunately, I have no pics of the guts inside the Antec 300:

Here are the components in their former home (Silverstone SUGO3)

And the disks in their former home


----------



## Damarious25

Bump to this thread. Also, you server gurus, PLEASE check my server upgrade project here.

I'll constantly be relating it here and showing progress.

Thanks folks!


----------



## killabytes

Just picked up a Watchguard Firebox II. I'm going to put m0n0wall onto it. I think I need to make myself a little rack to hold all this stuff.


----------



## ACiD GRiM

Personal use:

The mighty Zeus sits aloft his stronghold as he wields:

Supermicro MBD-X8DTL-iF-O with an IPMI for out of band management
2x Intel Xeon E5620s @ 2.40GHz for 16 logical CPUs
8GB of ECC RAM @ 1066MHz (soon to be upgraded to 16GB)
4x 1TB Western Digital Black SATA HDDs in RAID1+0 in hot swappable trays
(Soon to add an AMD or Nvidia GPU)
6 external 1TB HDDs in RAID1+span for a total volume of 3TB

He is armored in a 3U Supermicro CSE-833T-650B which has 6 hot swappable cooling fans and a 650W PSU.

Zeus runs Fedora 13's KVM (waiting for CentOS 6) to control a myriad of VMs that run services ranging from DNS to DHCP to Apache to various versions of Windows. It's mostly for experimentation: I'm building a virtual Beowulf cluster, and I plan to do some WPA/WPA2 security testing once I decide on a GPU. I also run some useful services such as rtorrent, a print server, and a backup server for my roommates and myself. The external HDDs are for storing media and backups, and the way I have them set up lets me expand the volume indefinitely while keeping redundancy.

Below Zeus I have a Cisco 2960S 24-port Gigabit switch, named nn-s1 after a standard naming convention for networking devices. All ports are 15.4W PoE, but I only use one right now, to power a Cisco Aironet 1130AG access point. Zeus connects via two separate 1Gbps links that are bonded through an LACP EtherChannel for a total throughput of 2Gbps. The rainbow of Ethernet cables was merely for show when I took the picture, but now it's about half full with various IP devices I have.
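For anyone curious, a bond like that is set up on the Linux side with the kernel bonding driver in 802.3ad mode, with the matching switch ports put in an active LACP channel-group. A rough sketch of the era's recipe (interface names and addresses are hypothetical, not Zeus's actual config):

```text
# /etc/modprobe.d/bonding.conf -- load the bonding driver in LACP mode
options bonding mode=802.3ad miimon=100

# Enslave both NICs into bond0 (ifconfig/ifenslave tools of the day)
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Worth noting that LACP balances per flow, so any single TCP stream still tops out at 1Gbps; the 2Gbps figure is aggregate throughput.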

On the next level I've got Helios, a Lenovo nettop, running Nagios and Cacti to watch over my various servers' statuses. It lets me do some experimentation in network reliability and NetFlow monitoring; I used a separate PC instead of a VM on Zeus because I wanted to account for times when Zeus would be off. Next to Helios is nn-g1, my Cisco ASA 5505 firewall/router. It's got IPsec, SSL, and HTTPS VPNs so I can access anything at home from anywhere in the world that has internet. I also do some QoS to ensure my website and Xbox Live aren't ever affected by HTTP or BitTorrent downloads. Nn-g1 also does VLAN separation to keep my administrative network separate from my roommates' resident network and the guest network.

Finally I've got an APC 1500VA UPS to protect my rack from power surges and data from power failures. It connects to Zeus via USB, which apcupsd uses to decide when it needs to shut down the server. Helios also uses apcupsd to connect remotely to Zeus's instance to share UPS status data.
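The master/slave apcupsd arrangement described above is mostly two small config files. The directives below are real apcupsd ones, but the hostnames and the USB cable type are assumptions about this particular setup:

```text
# /etc/apcupsd/apcupsd.conf on Zeus (UPS attached directly over USB)
UPSCABLE usb
UPSTYPE usb
DEVICE
NETSERVER on      # expose UPS status on the NIS port
NISIP 0.0.0.0
NISPORT 3551

# /etc/apcupsd/apcupsd.conf on Helios (reads Zeus's instance over the network)
UPSCABLE ether
UPSTYPE net
DEVICE zeus:3551
```

With that in place, both machines can run their own shutdown logic off the same physical UPS.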

The fan behind the cabinet rack blows the exhaust out a dog door to the outside. There's a weather resistant cover to prevent rain or snow from getting inside. That, coupled with the fans of every device in the cabinet, is audible anywhere within a 20ft radius, even through the metal door to the closet.

-ACiD GRiM


----------



## tehmaggot

I'm using an old Alienware desktop of mine (circa ~2004) as a server for the time being. It'll be my primary PC in my parents' new home down in Florida.

*OS*: XP Pro/CentOS 32-bit
*Case*: mATX Alienware case (not the flashy design)
*CPU*: Pentium 4 3.0Ghz LGA775
*Motherboard*: ASUS P5GD1-VM (considering upgrading this, but who knows. I'd like to do some good old P4 overclocking.)
*GPU*: 1950XTX
*Cooling*: Thermaltake Big Typhoon
*Memory*: 1GB (4x256MB) DDR-200
*PSU*: Random brand 400W PSU
*OS HDD*: WD 320GB SATA
*Storage HDD*: None at the moment. Will likely be moving the storage drive from my desktop into this machine.
*Manufacturer*: Original manufacturer was Alienware, but I've since changed the RAM, GPU and HDD.

As mentioned before, this will be my primary PC when I am at my parents' home in Florida. I won't be able to travel back and forth with my primary PC, so this weak machine will have to be a substitute while I'm there for short bits of time. While it's still at my home in Ohio, it's my 24/7 IRC machine, and it will soon be networked storage.


----------



## Coolman4now

- I built this last week and just finished it today.

- Specs:

OS: Windows 7 32-bit
Case: NZXT Beta EVO
CPU: AMD Athlon II X2 240
Motherboard: Asus M4A78LT-M
Cooling: Thermaltake Contac 29 + CM fans on the top and front
Memory: 2GB Apogee DDR3 1333MHz
PSU: HEC Cougar 700W
OS HDD: Western Digital WD5000AAKS-00V1A0
Storage HDD: 5x WD1000EARS + 1x WD10002EAFX + Maxtor 1TB 7200.12
Interface: SATA

- It's quiet, efficient, and serves my needs well.

- I access it over a 1Gbps network through a Wireless-N router.

- The upper 4 drives sit in a Cooler Master 4-in-3 drive bay.

- I map network drives to access the content and control it via Remote Desktop Connection, as it's headless.

- What do you guys think? Any further suggestions?


----------



## prn1357

Quote:


Originally Posted by *Coolman4now* 
- I built this last week and just finished it today.

- Specs:

OS: Windows 7 32-bit
Case: NZXT Beta EVO
CPU: AMD Athlon II X2 240
Motherboard: Asus M4A78LT-M
Cooling: Thermaltake Contac 29 + CM fans on the top and front
Memory: 2GB Apogee DDR3 1333MHz
PSU: HEC Cougar 700W
OS HDD: Western Digital WD5000AAKS-00V1A0
Storage HDD: 5x WD1000EARS + 1x WD10002EAFX + Maxtor 1TB 7200.12
Interface: SATA

- It's quiet, efficient, and serves my needs well.

- I access it over a 1Gbps network through a Wireless-N router.

- The upper 4 drives sit in a Cooler Master 4-in-3 drive bay.

- I map network drives to access the content and control it via Remote Desktop Connection, as it's headless.

- What do you guys think? Any further suggestions?

That's a pretty sick server. Personally, I would have gone with WHS or Linux, and a 700W PSU is a little overkill for a low-power server. Also, the space between the third and fourth drive is really annoying me. I hate OCD. Other than that, great server; I wish I could have one like that.


----------



## Jtvd78

As the creator of this thread, I should have a server, and now I do. Nothing special, just a P4 with a 500GB WD Green.

OS: Ubuntu
Case: Dell Stock
CPU: P4 / 1 core W/ HT
Motherboard: Dell Stock
Cooling: Dell Stock
Memory: 4 GB
PSU: Dell Stock
OS HDD: 500GB WD Green for OS and storage -SATA
Storage HDD: None
Server Manufacturer: Dell

Currently, I just use it as a NAS around the house. I have separate folders for each computer/user in my house, but my family doesn't really care, and they don't use the server at all. Also, the server is barely audible. I have it connected with Gbit LAN and Wireless N, but none of the computers in the house have Gbit, except for mine.

A pic of the server, nothing special. I control it with remote desktop software.

What the networked folders show up as, in My Computer.


----------



## killabytes

Latest inside look of my server. New RAID 5 setup too.


----------



## JumJum

I have a server MB just sitting on my parts bench: a dual-socket Intel MB with 2 single-core 3.4GHz Xeon CPUs... just never ordered the RAM to get it up and running, lol.


----------



## Citra

Do NAS systems count as servers?


----------



## Jtvd78

Quote:


Originally Posted by *Citra* 
Do NAS systems count as servers?

1. A server is a computer that provides services used by other computers. For example, a web server serves up web pages.

2. A server is a computer program that has an associated client program. This might run on the same computer or on another networked computer. MySQL is an example of a database server program; developers write clients that communicate with the server.

3. In computer networking, a server is simply a program that operates as a socket listener.

4. A server computer is a computer, or series of computers, that links other computers or electronic devices together.

5. Servers often provide essential services across a network, either to private users inside a large organization or to public users.

Yes, a NAS is a server.
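Definition 3 is easy to see in code. A minimal sketch in Python (a hypothetical echo service, just to illustrate the listener/client split):

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Start a one-shot socket listener; return the port it is listening on."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 -> let the OS pick a free port
    srv.listen(1)
    chosen_port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()      # block until a client connects
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return chosen_port

# The "client" side: any networked program that talks to the listener.
port = serve_once()
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"hello")
    print(cli.recv(1024))  # b'echo: hello'
```

Everything from a NAS to a Minecraft server is this pattern with more elaborate handlers.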


----------



## OverSightX

Mine is just one of my old case with some parts off an old build.

Athlon 64 x2 4600+
4GB DDR2
Asus MB
75GB Raptor w/ Server 2003
4TB internal storage
2TB external storage

Used mostly as a media/storage server

At work we have about 40 servers, so I won't be naming those.


----------



## Citra

OS: DSM 3.0 (custom Linux)
Case: DS210j
CPU: ARM / 800MHz / 1 core
Motherboard: Synology proprietary
Cooling: 7cm fan
Memory: 128MB DDR800
PSU: power brick, 25W max consumption
OS HDD: N/A
Storage HDD: Seagate Barracuda 7200.12 1TB SATA
Server Manufacturer: Synology
It's a NAS.

Will upload some pictures later.


----------



## CurlyBrackets

My new systems: (click for bigger pic)
Front side



Back side (mind the cables)


And with the front door open on the top one.


The top one is my webserver that my friends and I use; the bottom one is my pfSense box.

Webserver specs:
Athlon II X4 635
4GB (2x 2GB) G.Skill Ripjaws, 1333MHz 7-7-7-21
Asus M4A78T-E mobo
1x 500GB WD Caviar Black, 1x 500GB Seagate 7200.12 (older than the Black)
620W Seasonic S12II (got it for $62 CAD, I love combo deals)
Norco RPC-470 chassis

Webserver roles:
-HTTP
-FTP
-Ventrilo
-Source servers (gmod will be coming soon)
-Minecraft
-Mediaserver for my home
-Backup/storage for machines around the house
-Anything else it needs to be

I apologize for not having any pictures of its internals; I was in a rush to get everything built and back up. My only complaint thus far is that it needs more RAM; Minecraft hogs a lot of it...

Love to see other people's servers,
~{}


----------



## Citra

Quote:


Originally Posted by *CurlyBrackets* 
My new systems: (click for bigger pic)
Front side



Back side (mind the cables)


And with the front door open on the top one.


The top one is my webserver that my friends and I use; the bottom one is my pfSense box.

Webserver specs:
Athlon II X4 635
4GB (2x 2GB) G.Skill Ripjaws, 1333MHz 7-7-7-21
Asus M4A78T-E mobo
1x 500GB WD Caviar Black, 1x 500GB Seagate 7200.12 (older than the Black)
620W Seasonic S12II (got it for $62 CAD, I love combo deals)
Norco RPC-470 chassis

Webserver roles:
-HTTP
-FTP
-Ventrilo
-Source servers (gmod will be coming soon)
-Minecraft
-Mediaserver for my home
-Backup/storage for machines around the house
-Anything else it needs to be

I apologize for not having any pictures of its internals; I was in a rush to get everything built and back up. My only complaint thus far is that it needs more RAM; Minecraft hogs a lot of it...

Love to see other people's servers,
~{}

Wow mine sucks compared to yours


----------



## CurlyBrackets

Quote:


Originally Posted by *Citra* 
Wow mine sucks compared to yours

Don't worry, your server will eventually become something insane. I started with a system just marginally better than yours 10 months ago. It had a 1.8GHz Socket 939 setup in it, 512MB of RAM, and the 500GB Seagate that's in my main server now. That server is now the pfSense machine chugging away underneath its more powerful brother.

Come to think of it, this is now my fourth major server revision, yay...

~{}


----------



## kremtok

My sig rig is my home file/music/print server. Does that count as a 'server' in the sense being discussed here?

Its primary function is as my gaming computer, but it doesn't see much of that these days. It does fold 24/7, though.


----------



## Citra

Quote:


Originally Posted by *CurlyBrackets* 
Don't worry, your server will eventually become something insane. I started with a system just marginally better than yours 10 months ago. It had a 1.8GHz Socket 939 setup in it, 512MB of RAM, and the 500GB Seagate that's in my main server now. That server is now the pfSense machine chugging away underneath its more powerful brother.

Come to think of it, this is now my fourth major server revision, yay...

~{}

lol i hope so xD


----------



## Coolman4now

Quote:


Originally Posted by *prn1357* 
Thats a pretty sick server. Personally, I would have gone with WHS or Linux. And a 700 Watt PSU is a little overkill for a low power server. And The space between the third and fourth drive is really annoying me. I hate OCD. Other than that. Great sever, I wish I could have one like that.

- Thanks, man.

- I have to stick with Windows, so everyone here can deal with it.

- As for my OCD preventing me from enjoying the server, I'll add 1TB soon to fill that space.


----------



## Jtvd78

Quote:


Originally Posted by *Coolman4now* 
- Thanks, man.

- I have to stick with Windows, so everyone here can deal with it.

- As for my OCD preventing me from enjoying the server, I'll add 1TB soon to fill that space.

WHS = Windows Home Server.


----------



## Coolman4now

Quote:


Originally Posted by *Jtvd78* 
WHS = Windows Home Server.

- Of course, but as I said, I needed something familiar that everyone can handle, from grandma to my little sister.


----------



## portauthority

Just received the hardware, still need to set it up by getting more disks and software.

Model: Dell PowerEdge R510
OS: Unknown, leaning towards Hyper-V ATM
Case: Dell 2U Rack
CPU: 2x Intel Xeon E5620, 2.4GHz - 8 cores 16 threads
Motherboard: Intel 5520 chipset
Cooling: 5 fans inside, surprisingly quiet
Memory: Got 8GB for now but will be switched to 24GB soon
PSU: 2x 750W, 80 PLUS Gold rating
Server Manufacturer: Dell

What you use it for: hosting VMs of all kinds; I haven't determined what they'll all be doing yet. Some will be application servers, others will do networking things.

I was expecting rack servers to be loud but this server is pretty quiet compared to their 1U lineup.


----------



## Pentium-David

Server 1:
OS: Windows Server 2003 R2
CPU: Pentium 4 550 3.4GHz
RAM: 2GB 667
GPU: Nvidia GeForce 7300GT
HDD: 400GB Sata II
Purpose: File Server, DNS, DHCP server

Nothing special. Only sees about 33GB of traffic per month.

--------------------------------

Server 2:
OS: Windows Vista
CPU: AMD Athlon x2 2.8GHz
RAM: 2GB 667
GPU: GeForce 9500GT
HDD: 200GB
Purpose: Seedbox and Folding computer
--------------------------------
I'll upload pics later lol


----------



## Nick7269

*OS:* W7 64
*Case:* eMachine
*CPU:* AMD Athlon II X2 235e dual core
*Motherboard:* Made in China
*Cooling:* 2x 120mm
*Memory:* 6GB
*PSU:* Generic, says 300W max
*OS HDD:* 200GB
*Storage HDD:* 4x 2TB (RAID 10) on a HighPoint RocketRAID 2300
*Server Manufacturer:* eMachine

*What you use it for?* Media Streaming is plan A. Who knows from there where it will go?
*Temps, loudness, etc.* Temps do not seem to be an issue after adding the fans. The machine is surprisingly almost silent.
*Any additional software that you use* I'm playing with Apache and looking at Amahi, teamserver, and other ideas.

I had lots of fun cramming all these drives into this box.

Here are the pictures I posted, and performance:
http://www.overclock.net/servers/848...ver-build.html


----------



## manchesterutd81

This is a server I won during a folding tourney...

A dual-Xeon server... now if I can just get the darn thing to work, lol.

Shot at 2010-10-21


----------



## ComGuards

Quote:


Originally Posted by *manchesterutd81* 
This is a server I won during a folding tourney...

A dual-Xeon server... now if I can just get the darn thing to work, lol.


What's the problem?


----------



## Imrac

Just assembled my newish server!

Zotac Mini-ITX -- GF6100-E-E --- $50 (newegg)
AMD Athlon x2 4050e 45W -------- $25 (OC Marketplace)
2x 1GB Crucial Ballistix DDR2 ------ $35 (OC Marketplace)
Generic 420w PSU ------------------- Free
74gb WD Raptor (Old) -------------- Free
3 x 1TB Samsung F3 ----------------- $180 (newegg)
Windows Server 2008 R2 ------------ Free (Dream Spark)
Total ----------------------------------- $290

I need to get a new PSU because this one says it was manufactured in 2003; I am sure it's super inefficient. I was looking at that 400W Antec Neo PSU. Currently it idles at 62W and load is about 90W.

Currently it's just file serving, but I did install Ventrilo; I just need to port forward. I'll probably also install a small webserver and DHCP.

Crappy cell phone pic:


----------



## IBuyJunk

Quote:


Originally Posted by *portauthority* 
Just received the hardware, still need to set it up by getting more disks and software.

Model: Dell PowerEdge R510
OS: Unknown, leaning towards Hyper-V ATM
Case: Dell 2U Rack
CPU: 2x Intel Xeon E5620, 2.4GHz - 8 cores 16 threads
Motherboard: Intel 5520 chipset
Cooling: 5 fans inside, surprisingly quiet
Memory: Got 8GB for now but will be switched to 24GB soon
PSU: 2x 750W, 80 PLUS Gold rating
Server Manufacturer: Dell

What you use it for: hosting VMs of all kinds; I haven't determined what they'll all be doing yet. Some will be application servers, others will do networking things.

I was expecting rack servers to be loud but this server is pretty quiet compared to their 1U lineup.

Loudest servers I've ever heard: Sunfire V20Z


----------



## killabytes

Quote:


Originally Posted by *IBuyJunk* 
Loudest servers I've ever heard: Sunfire V20Z

You need to hear my Cobalt XTR. Ten 10,000RPM fans.


----------



## Jtvd78

Quote:


Originally Posted by *killabytes* 
You need to hear my Cobalt XTR. 10 10,000 RPM Fans.









Yes. Yes we do.


----------



## cyclist14

I have two 1U cases with 2GB DDR400 / P4 2.8 HT / 20GB IDE drives. In this pic one of them was my DC and the other a file server. I have taken the DC offline and am now just running the file server; the DC will be replaced by a VM on my sig rig. My file server has around 1TB of connected, shared drives and runs BT, Orb media, and PeerBlock 24/7.


----------



## null_x86

Dell XPS 420
OS: Windows Server 2008 R2
CPU: Intel C2Q Q6600
Memory: 4GB DDR2 1066
PSU: 425W Dell
OS HDD: 30GB Vertex SSD
Storage HDD: 2.27TB Formatted - 500GB Caviar Black, 2x 1TB Hitachi DeathStars (pun intended, these are the good models)
GPU: Nvidia GeForce 8600GTS 256MB

What you use it for - Runs File, Print, Web, AD, Folding, and it's my media workhorse (streaming, encoding, ripping, etc.)

Temps - Temps stay under 50C with stock cooling. Loudness isn't all that bad unless the drives get pretty active

Any additional software that you use - uTorrent, TVersity, CCCP, and a few others I can't remember off the top of my head

Guts Pic


----------



## TurboTurtle

My little jack-of-all-trades home server:

OS: Debian
Motherboard: Asus M4A88TDMUSB3
CPU: Athlon II x4 640
Memory: 4GB (4x1GB) DDR3 1333 ECC (my boss gave me these for $10 a piece)
PSU: 600W... forget the make. Had this lying around and it still kicks.
OS HDD: 320GB 7200.10 Seagate. Had this laying around from an old build. Runs like a champ after 5+ years, no bad sectors at all.
Storage HDD: 2x1 TB 7200.12 Seagates. Thinking of changing to a RAID 5 in the near future.
GPU: Onboard.

What you use it for:
File Server / Media Streaming
Printer Server
Newsreader - SABnzbd+ downloads, repairs, unpacks, and sorts my downloads, and discards the archives afterwards.
Ampache Server
Home Surveillance 'Server' - cameras connect to the server and dump their captures there.

Once I get out of my apartment I'll also be using this as an Untangle Router.

Thing is almost dead silent and sits right next to my TV. Only thing you can hear from time to time is the aging 7200.10 drive whenever it starts getting pegged for a while.

Pics to come.


----------



## Jtvd78

Quote:


Originally Posted by *null_x86* 
Dell XPS 420
OS: Windows Server 2008 R2
CPU: Intel C2Q Q6600
Memory: 4GB DDR2 1066
PSU: 425W Dell
OS HDD: 30GB Vertex SSD
Storage HDD: 2.27TB Formatted - 500GB Caviar Black, 2x 1TB Hitachi DeathStars (pun intended, these are the good models)
GPU: Nvidia GeForce 8600GTS 256MB

What you use it for - Runs File, Print, Web, AD, Folding, and its my media work horse (streaming, encoding, ripping, etc)

Temps - Temps stay under 50c with stock cooling, Loudness isnt all that bad unless the drives get pretty active

Any additional software that you use - uTorrent, TVersity, CCCP, and a few others I cant remember off the top of my head

Guts Pic









Do you really need an SSD for a Server?


----------



## ComGuards

Quote:


Originally Posted by *Jtvd78* 
Do you really need an SSD for a Server?

Yes. I have SSDs in two of my servers.









It depends on what you're doing with the server. For me, one server has a slower CPU (Atom 330), so I eliminate the really slow HDD to make things a bit faster. The other server stores a SQL database on the SSD. Works great.


----------



## null_x86

Quote:


Originally Posted by *Jtvd78* 
Do you really need an SSD for a Server?

It all depends on what you're doing. I do a lot of media stuff on mine, so it's nice to have. That, and I have my AD stuff on there, so there's not a whole lot of lag. I'm happy with it. Configuration was a bit odd, since it's only a 30GB, but once I got everything configured, it runs nice and smooth.


----------



## Dizzymagoo

Doesn't everyone have a half rack in their basement? I'm still in the process of setting up all the servers with their iLO and respective IP addresses.


----------



## AMD SLI guru

This is my home server. I built it myself for home use.

(Description)

OS: Windows 7 Ultimate
Case: iStarUSA D400-6SE-SL
CPU: AMD X3 720 / 3 cores / 3.6GHz
Motherboard: GIGABYTE GA-790XTA-UD4
Cooling: Stock air cooler from box
Memory: 8GB
PSU: Thermaltake 500W
OS HDD: Junky SATA 120GB notebook drive
Storage HDD: 1TB x 6 drives / SATA II interface
Server Manufacturer: MEEEEEEEEEEEEEEEEEEEEEEEE!

What you use it for? I use it for backing up my computers, media streaming, and transcoding ( + Folding@Home on the GPU - 9800GT )

All my CPU cores run at 40C max.
The RAID cage keeps all the hard drives really, really cool.


----------



## zouk52

Quote:


Originally Posted by *AMD SLI guru* 
This is my own Home server. I own this and it's all about my own making for home use.
*
(Description)

OS: Windows 7 Ultimate
Case: istarUSA D400-6SE-SL
CPU: AMD X3 720 / 3 cores / 3.6 ghz
Motherboard: GIGABYTE GA-790XTA-UD4
Cooling: Stock Air Cooler from Box
Memory: 8 Gigs
PSU: Thermaltake 500watt
OS HDD: Junky Sata 120 gig Notebook Drive
Storage HDD: 1TB x 6 drives / Sata II inferface
Server Manufacturer: MEEEEEEEEEEEEEEEEEEEEEEEE!

What you use it for? I use it for Backing Up my computers, Media Streaming, and transcoding ( + [email protected] on the GPU- 9800GT )

All my CPU core's are running at 40c max.
The Raid cage keeps all the hard drives really really cool.
*










Is that a 3x5.25" to 5x3.5" drive adapter you have on there? If so, could you tell me what model it is please?


----------



## Jtvd78

Quote:


Originally Posted by *Dizzymagoo* 

















Doesn't everyone have a half rack in their basement? I'm still in the process of setting up all the servers with their iLO and respective IP addresses.

Wait... You have that in your basement... for home use? What could you possibly need to do that requires a half rack?

PS: Nice Rack







(Well, Half rack)


----------



## Dizzymagoo

Haha, my dad works for HP and they were cleaning out a data center at DirecTV, so I asked him to grab me a server or two... and he came home with a half-rack cabinet and a bunch of servers with iLO and a console station and 4 switches... lol

I'm still trying to put them all to use! Haha. But yes, it's in my basement


----------



## the_beast

Quote:


Originally Posted by *Dizzymagoo* 









You're the first person I have seen who has used blanking panels properly on your home rack - good job.

Would have been more impressed if you hadn't spaced the servers out and had them all together at the bottom of the rack (REALLY annoys me when I see this at work), but with blanking panels in place (and for a home system) that's not too much of an issue...


----------



## Marma Duke

Quote:


Originally Posted by *Dizzymagoo* 
Haha my dad works for HP and they were cleaning out a data center at DirectTV so I asked him to grab me a server or two... and he came home with a halfrack cabinet and a bunch of servers with iLo and a console station and 4 switches... lol

Im still trying to put them all to use! Haha. But yes its in my basement









Is that an ML350 G4 on the bottom? I love the sound it makes on powering on.


----------



## this n00b again

Quote:


Originally Posted by *Dizzymagoo* 

Doesn't everyone have a half rack in their basement? I'm still in the process of setting up all the servers with their iLO and respective IP addresses.

To answer your question, it's actually just in my room next to my main PC:










Top one is a server in a rackmount case.
Not pictured: 2x more servers.

All servers configured with 6x 1TB RAID 1+0 & a 200GB OS HDD
CPU: AMD Athlon X3 440
MOBO: Biostar
PSU: Corsair
RAM: 2x2GB G.Skill
OS: Ubuntu, and Windows Server 2003
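As a quick sanity check on that layout: RAID 1+0 mirrors drives in pairs and then stripes across the pairs, so half the raw capacity is usable. A small sketch of the arithmetic:

```python
# usable capacity of a RAID 1+0 array: drives are mirrored in pairs,
# then the pairs are striped, so usable space is half the raw total
def raid10_usable(n_drives: int, drive_tb: float) -> float:
    if n_drives < 4 or n_drives % 2 != 0:
        raise ValueError("RAID 1+0 needs an even number of drives (4 or more)")
    return n_drives * drive_tb / 2

print(raid10_usable(6, 1.0))  # 6x 1TB as above -> 3.0 TB usable
```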


----------



## Damarious25

Quote:


Originally Posted by *this n00b again* 









dem speakers look a little tipsy der buuuuddy...

beautiful storage space though.

EDIT. You'd think it's a recording studio, but with the motherboard poster I'm now confused...


----------



## Dizzymagoo

I spaced them out because I have them categorized by what they do. And yeah, the bottom server is a beast. It's got six 300GB 10k SCSI drives, 8GB ECC Reg, and two Intel Xeon dual cores. It's my file server. Then I have a DHCP server. Another one is a game server. Others are just practice boxes for Unix and Windows OSes.


----------



## this n00b again

Quote:


Originally Posted by *Damarious25* 
dem speakers look a little tipsy der buuuuddy...

beautiful storage space though.

EDIT. you'd think its a recording studio but with the motherboard poster I'm now confused...

lols, that's because it used to be a recording studio. It still is, I just have to set it up. It's just a temp setup for maybe a month more.

The bottom machine runs a q8300 with OSX for Pro Tools. The rest of the studio equipment isn't pictured.

Oh yeah, lol, I have isolator pads for the speakers that weren't in place in the picture. But the speakers have been rearranged more strategically since then.


----------



## Chicken_Lover

CPU - Intel P4 3.2GHz
RAM - 1GB
OS - WHS 2003
GPU - 6600GT
HDD - 320GB/500GB/2x1TB
CASE - ? eBay (x2)... Expansion

For general media (movies/music), backups, file storage.

Servicing 4 laptops, 2 desktops, and a WD Live box for the 50" plasma on a gigabit network.

Upgrading to... E6500/965P/4GB RAM; have parts scattered around somewhere, just have to find the time to put it together.

My Ghetto server corner... will get a proper server cabinet one day.


----------



## Jtvd78

Bump. Let's get some more epic servers in here


----------



## prn1357

Nice setups, everyone


----------



## 3dfxvoodoo

I'd post mine
but it's too much like work


----------



## wtomlinson

I have a couple more, although these were from a couple years ago, and they belonged to work. Please excuse the state of them; I was in a very sandy place far from here.

First, some sort of Dell desktop. No memory of the specs; I just know it used 2 video cards for 3 total monitors. XP Pro.









Second, 3 PowerEdge servers: one dedicated to network monitoring (the nice piece on the wall with SolarWinds running), one a dedicated IRC server, and the 3rd just for regular admin use. All running Server 2003.









4 HP ProLiant (G6, I think). 2 DNS, 2 Exchange. All running Server 2003.









2 PowerEdge servers for DNS, 2 IBM servers for Exchange. Everything on the left was for satellite equipment. All running Server 2003.









4 PowerEdge servers. 2 for DNS, 2 for Exchange. All running Server 2003. Everything on the right is for another satellite setup.


----------



## mbudden

It's so dirty....


----------



## Citra

Second picture looks like a server in the washroom lmao


----------



## wtomlinson

Quote:


Originally Posted by *mbudden* 
It's so dirty....

all were taken in Iraq.

Quote:


Originally Posted by *Citra* 
Second picture looks like a server in the washroom lmao

Actually, you are exactly correct. pictures 2 and 3 were from what used to be a bathroom with a shower stall.


----------



## mbudden

Quote:


Originally Posted by *wtomlinson* 
all were taken in Iraq.

Well that explains it.
How do you deal with all the dust/sand?
I'm sure the temps are like hell inside those rigs.


----------



## wtomlinson

No real way to deal with it. Just used an air compressor A LOT.

Temps were pretty good actually. They (the military) wanted services to stay up, so there was always an A/C on that kept the rooms pretty cold. However, when the generators would go out (which happened a lot in the location of the bathroom), everything had to get shut down within minutes because outside temps were around 130F.

The bathroom had a lot going on in there: a total of 8 servers (one isn't shown because I didn't have a picture of it; it was for testing purposes), 5 UPSes, 6 switches, 2 routers, a rather large NAS (not pictured), and all the satellite equipment. Doesn't sound like much, but when you take into consideration how big those little 2 rooms were (they're joined together), you can imagine how hot it got within minutes of no air.


----------



## Volvo

Let's see, no pictures for now since I'm going to be flying off to Turkey in a while.

But here's my file server.

OS: Windows 7 Professional x86
Case: CoolerMaster 310 Black Edition, 3x AVC DS12025B12H chassis fans
CPU: Model / Speed / Cores: Intel Pentium 4 HT 660 (3.6GHz, 1C2T)
Motherboard: ASUS P5LP-LE
Cooling: CoolerMaster Hyper TX3 w/ Delta AFC0912DE
Memory: (2x 1GB + 2x 512MB) KVR DDR2-667 CL5 Non-ECC
GPU: ATI Radeon X1650SE 256MB
PSU: Seasonic SS-350ES
OS HDD: Western Digital WD800JD 80GB S-ATA
Storage HDD: Western Digital WD20EARS 2.0TB (x2) S-ATA
Server Manufacturer: Original system manufactured by HP, modified by me.

What you use it for: Print server, file server, torrent box.
Temps: 40 deg. C idle (high ambients)
Loudness: Very loud


----------



## Lord Xeb

^^ This is still evolving; I will get more pics up.


----------



## Volvo

LOL, that is like, a whole data centre in your basement.


----------



## Jtvd78

Quote:


Originally Posted by *Lord Xeb* 

















^^ This is still evolving I will get more pics up.

I think I'm jealous..


----------



## null_x86

I think Lord Xeb wins the thread.

Slightly off topic question for those of us who use W2K8 - Do you have to have two NICs to have Hyper-V installed? I always seem to get some sort of network error with Hyper-V and File Server installed...


----------



## Dizzymagoo

Quote:


Originally Posted by *wtomlinson* 
all were taken in Iraq.

Actually, you are exactly correct. pictures 2 and 3 were from what used to be a bathroom with a shower stall.


----------



## Liighthead

bumpage for more servers







been thru every page lol...

might attempt to make a home server... if i can find an HDD haha.. give it a go anyway xD
(( slightly off topic: what's an easy beginner OS for a home server + webserver to host pics/stuff n a website?







(HTML-based website btw lol, school project.. ) ))

anyways xD
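On the OS question above: for a static HTML site, almost any beginner-friendly distro (e.g. Ubuntu Server with Apache) will do. As a sketch of how little is actually needed, Python's standard library alone can serve a page and fetch it back (hypothetical filenames; `directory=` needs Python 3.7+):

```python
# serve a one-page static site from a temp directory and fetch it back,
# all with the Python standard library -- a sketch, not a production setup
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

site = tempfile.mkdtemp()
with open(os.path.join(site, "index.html"), "w") as f:
    f.write("<h1>my school project</h1>")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=site)
with socketserver.TCPServer(("127.0.0.1", 0), handler) as srv:  # port 0 = any free port
    port = srv.server_address[1]
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
    srv.shutdown()

print(body)  # <h1>my school project</h1>
```

For an always-on box reachable from outside, a real web server (Apache/nginx) plus port forwarding on the router is the usual route.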


----------



## trueg50

Here are some of the servers I administer:









3x HP ProLiant BL465 G7 in a group of VMware ESXi servers running a Citrix farm.

Each one has 2x 8-core AMD Magny-Cours CPUs and 16GB of RAM.


----------



## the_beast

Quote:


> Originally Posted by *trueg50;11682481*
> Here are some of the servers I administer:


that's cheating...


----------



## Sodalink

I got the parts to build my server like 3 months ago, and I'm just about to build it next week.

Here are the specs:
2x1TB Deskstars, I think? In RAID 1
500GB with Server 2008 R2 and maybe Win7 dual-booted
9400GT w/ HDMI out, so I might make it an HTPC, too
Corsair 400W
NZXT Hades
2GB DDR2 800
Biostar cheap mobo
Old Athlon X2 2.8GHz


----------



## tiro_uspsss

ooooookkkk...:

empty:

http://public.bay.livefilestore.com/y1pztpZshkkqY85lndqVi1-l-OrzTjLrguWT9kE20JZXVmLVzy0NrEJEwiixBr4y-Vi6rxBRvU5UI_Yt2wUQ-HNFw/IMG_1601b.jpg?psid=1

full:

http://public.bay.livefilestore.com/y1pJmrLLIAq-60YgfahKXcKjlGgIyxrqyM3aSrRyncHkbLdO9Sc7eNKg58yMCRrFCWduAhAShJmVrY4u1S8V4Z6Fg/2010-12-10%2009.23.38.jpg?psid=1

specs: (old as)
4200+ X2
Asus A8N32-SLi
2x1GB Legend ECC
ATi 4350 passive cooling
3x PCI SiliconImage 3114
1x PCIEx1 SiliconImage 3124
1x PCIEx1 SiliconImage 3132
1x LSI 9240-8i
4x WD1600YS
4x RaptorX
1x 36GB Raptor
some random Seagate 80GB for OS (Win7 x64 Ultimate)
2x 500GB WD 'RE / YS'
1x Seagate 1TB ES.2
3x 1TB WD Black
(bar the OS HDD, I don't have/buy anything but 5yr-warranty HDDs)
case: Lian Li PC-201B, modded to hold 28 HDDs (24x 3.5" & 4x 2.5" )


----------



## Jtvd78

Quote:


> Originally Posted by *tiro_uspsss;11780559*
> ooooookkkk...:
> 
> empty:
> 
> http://public.bay.livefilestore.com/y1pztpZshkkqY85lndqVi1-l-OrzTjLrguWT9kE20JZXVmLVzy0NrEJEwiixBr4y-Vi6rxBRvU5UI_Yt2wUQ-HNFw/IMG_1601b.jpg?psid=1
> 
> full:
> 
> http://public.bay.livefilestore.com/y1pJmrLLIAq-60YgfahKXcKjlGgIyxrqyM3aSrRyncHkbLdO9Sc7eNKg58yMCRrFCWduAhAShJmVrY4u1S8V4Z6Fg/2010-12-10%2009.23.38.jpg?psid=1
> 
> specs: (old as)
> 4200+ X2
> Asus A8N32-SLi
> 2x1GB Legend ECC
> ATi 4350 passive cooling
> 3x PCI SiliconImage 3114
> 1x PCIEx1 SiliconImage 3124
> 1x PCIEx1 SiliconImage 3132
> 1x LSI 9240-8i
> 4x WD1600YS
> 4x RaptorX
> 1x 36GB Raptor
> some random Seagate 80GB for OS (Win7 x64 Ultimate)
> 2x 500GB WD 'RE / YS'
> 1x Seagate 1TB ES.2
> 3x 1TB WD Black
> (bar the OS HDD, I dont have/buy anything but 5yr warranty HDDs
> 
> 
> 
> 
> 
> 
> 
> )
> case: Lian Li PC-201B, modded to hold 28 HDDs (24x 3.5" & 4x 2.5" )


Nice server you got there. If I could have any case for a file server, it would definitely be that one, because all the hard drives are housed at the bottom of the case. Too bad they don't sell them anymore.


----------



## Special_K

OS: Windows XP Media Center 32bit
Case: Laptop
CPU: Centrino Duo 1.66 Ghz 2 cores
Motherboard: laptop
Cooling: stock
Memory: 2gb
PSU: Cord
OS HDD: 250gb SATA
Storage HDD:N/A
Server Manufacturer: HP

What you use it for: House Minecraft Server
Sits on a mini fridge in my room.
Any additional software that you use: Remote Desktop

---

OS: Windows XP Media Center 32bit
Case: Acer Eee PC Netbook Laptop
CPU: Intel Atom, 1 core / 2 threads
Motherboard: netbook laptop
Cooling: stock
Memory: 2gb
PSU: Cord
OS HDD: 160gb SATA
Storage HDD:N/A
Server Manufacturer: Acer

What you use it for: Downloading torrents so I can turn off my Main PC and save energy while still letting the downloads finish.
Any additional software that you use: Synergy+

---

OS: Windows 7 x64
Case: 50 Ammo can
CPU: Intel q9650
Motherboard: Zotac 9300 ITX
Cooling: Corsair H50
Memory: 4gb
PSU: 400w
OS HDD: 160gb x2 raid0 SATA
Storage HDD:N/A
Server Manufacturer: Me.inc


















What you use it for: While in my room, my GF games on it. While in the living room, people watch movies on it. Very portable with the handle on top.
Any additional software that you use: Synergy+


----------



## hks85

In regard to the dusty pics,

I spy a Cisco VoIP phone & a spec-A?


----------



## hks85

Quote:


> Originally Posted by *the_beast;11682703*
> that's cheating...


Here are some of the servers I found online


----------



## wtomlinson

Quote:


> Originally Posted by *hks85;11780999*
> in reguard to the dusty pics,
> 
> i spy a Cisco VOIP phone, & a spec-A?


What's a spec-A? And yes, Cisco phones


----------



## tiro_uspsss

Quote:


> Originally Posted by *Jtvd78;11780681*
> Nice server you got there. If i could have any case for a file server, it would definitely be that one, because all the Hard drives are housed at the bottom of the case. Too bad they don't sell them any more.


thanks!








yeah, I miss many 'old skool' Lian Li cases, esp. the V2000/2100 series.. the newer 2010/2110 series just doesn't cut it IMO


----------



## the_beast

Quote:


> Originally Posted by *hks85;11781021*
> Here are some of the servers I found online


That's pretty much what I see every day, covering 2000m² halls...


----------



## Jtvd78

Quote:


> Originally Posted by *the_beast;11783622*
> That's pretty much what I see everyday, covering 2000m2 halls...


Well, why don't you post it then


----------



## the_beast

Quote:


> Originally Posted by *Jtvd78;11784547*
> Well, why don't you post it then


because my contract doesn't allow me to.


----------



## dracotonisamond

I turned excess parts from Firestorm into a server/auxiliary server I use for game servers and other services for my home.

OS: Windows 7 Ultimate
Case: Antec 1200
CPU: Core i7 960
Motherboard: Gigabyte EX58-UD5
CPU Cooling: Corsair H50
Memory: Corsair Dominator 12GB 8-8-8-24 1600MHz
PSU: Antec TPQ-1200
OS HDD: Western Digital Black 1TB
Storage HDD/GS backup: Samsung 2TB
Game Server Drive: 3x WD1500HLFS VelociRaptors in RAID 0

I run a Garry's Mod server and whatever else me and/or my friends are into at the time, which includes Minecraft at the moment.

It usually only runs servers I need running 24/7, because I have no issues running temporary servers on my gaming rig - which is just one of the many reasons I bought a 980X.


----------



## epidemic

Quote:


> Originally Posted by *the_beast;11785665*
> because my contract doesn't allow me to.


Working in a data center does have its down sides...


----------



## Jtvd78

Quote:


> Originally Posted by *dracotonisamond;11785807*
> i turned excess parts from firestorm into a server/auxiliary server i use for game servers and other services for my home.
> 
> OS: Windows 7 Ultimate
> Case: Antec 1200
> CPU: Core i7 960
> Motherboard: Gigabyte EX58-UD5
> CPU Cooling: Corsair H50
> Memory: Corsair dominator 12GB 8-8-8-24 1600MHz
> PSU: Antec TPQ-1200
> OS HDD: Western Digital Black 1TB
> Storage HDD/GS backup: Samsung 2TB
> Game Server Drive 3 WD150HLFS velociraptors raid 0
> 
> i run a garrys mod server and what ever else me and or my friends are into at the time which includes minecraft at the moment.
> 
> it usually only runs any server i need to run 24/7 because i have no issues running temporary servers on my gaming rig. which is just one of the many reasons i bought a 980x.


So your spare parts are better than my sig rig's parts!?!








You mind giving me some?


----------



## trueg50

Quote:


> Originally Posted by *epidemic;11786029*
> Working in a data center does have its down sides...


Nah, I'd take better pay and a good job over being able to show off my servers.

Besides, working in a datacenter can have its upsides. Nice and cool in the summer, and no one can hear you scream


----------



## rmp459

Was told to repost this in this thread... It's a work in progress, but a solid platform, I think.

Got an enclosed rack for free from a public storage box that someone stopped paying for. Thing was brand new, from like 1990... Smoked front glass w/ intake fans on the bottom of the back.

Decided to do some rewiring in the house... to TRY to clean some stuff up... too many devices in this damn place.

Anyways....

The Specs:
- Windows 2008 R2 Enterprise Ed. *(had)*
- Gigabyte EP35C-DS3R Motherboard *(had)*
- 8 GB of Corsair Dominator DDR2 1066mhz *(had)*
- Q9550
- 2x Intel Gigabit Desktop CT Nics
- HP Smart Array p400 w/ 512mb Memory Module and BBU *(had)*
- 2x 32Pin SAS to 4x SATA Cables *(had)*
- 8x Seagate Constellation ES ST32000644NS 2TB 7200 RPM
- 2x Seagate Constellation ES ST3500514NS 500GB 7200 RPM 32MB (Raid1 for OS)
- iStarUSA TC-RPSL20 20" Sliding Rail Kit for Most Rackmount Chassis - OEM
- iStarUSA BPU-230SATA-RED 2x5.25" to 3x3.5" SATA Hot-swap Raid Cage - OEM
- iStarUSA D-410-B10SA Black Zinc-Coated Steel 4U Rackmount Server Case
- Scythe KM02-BK 5.25" Bay Fan Controller
- 2x Scythe DFS123812-3000 "ULTRA KAZE" 120 x 38 mm Case Fan
- Scythe Big Shuriken SCBSK-1000 120mm CPU Cooler
- Corsair AX750 PSU

Switch is a Cisco Small Business (aka Linksys) 24-port managed gigabit switch.

Also have my 2x Tripp Lite 2500W UPSes with extended batteries in the bottom, which power most of the electronics in the house.

Currently running a domain... DHCP and DNS are on the server... Hyper-V is running a WHS "Vail" beta OS which I'm using for torrents... might try a Linux OS for that in the future... still playing around...

Also use tzo.com on my FiOS Actiontec router for dynamic DNS.. have an old IOGEAR 4-bay NAS unit that runs an older Windows 2003 Storage Server... which I might convert into an open-source firewall... not sure yet.


----------



## mbudden

I really like that. I would love to have a server rack.


----------



## tiro_uspsss

I find it odd that so many 'servers' that ppl run don't have ECC RAM :shrug:


----------



## tiro_uspsss

Quote:


> Originally Posted by *mbudden;11804908*
> I really like that. I would love to have a server rack.


I want some darn chunky UPSes








that's another thing a server should have IMO.. I don't have one tho


----------



## ComGuards

Quote:


> Originally Posted by *tiro_uspsss;11804932*
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:


Don't need it for non-mission-critical systems. Besides which, you need a compatible motherboard, which increases costs.


----------



## Jtvd78

Hi everyone! As the maker of this thread, shouldn't I have a server too? Well, I do now.. Kinda

I got all the parts in today, except for the CPU. That shipped today.

OS: WHS PP3
Case: Antec 300
CPU: AMD Athlon II X2 240E (Energy Efficient) 2.8 GHZ
Motherboard: MSI 760GM-E51
Cooling: Stock
Memory: 4 GB Mushkin Silverline
PSU: Antec Neo Exo 400W
OS HDD: WD 500GB green
Storage HDD: 2x 2TB Sata Samsung F4.
Server Manufacturer: Me

I plan on using it for Nightly backups and a file server. I might even run a minecraft server on it.
Temps, loudness, etc. : No Idea as of now.

Yay! The parts are in. (except for the CPU )








Shot of the inside. But what's a computer without a CPU? You also might notice that I am missing some SATA cables. The motherboard only came with 1! Now I have to buy them locally. They're like 3 bucks apiece.








The 300 sucks for cable management








The cat likes it


----------



## bobfig

Whats the knife there for?


----------



## Jtvd78

Quote:


> Originally Posted by *bobfig;11824959*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Whats the knife there for?


Opening the boxes


----------



## tiro_uspsss

Antec 300 sucks for cable management?? I love the 300, still have one, nice case for the price

:shrug:


----------



## phasezero

Quote:


> Originally Posted by *Jtvd78;11824846*
> Hi everyone! As the maker of this thread, Shouldn't I have a server too? Well I do now.. Kinda
> 
> I got all the parts in today, except for the CPU. That shipped today.
> 
> OS: WHS PP3
> Case: Antec 300
> CPU: AMD Athlon II X2 240E (Energy Efficient) 2.8 GHZ
> Motherboard: MSI 760GM-E51
> Cooling: Stock
> Memory: 4 GB Mushkin Silverline
> PSU: Antec Neo Exo 400W
> OS HDD: WD 500GB green
> Storage HDD: 2x 2TB Sata Samsung F4.
> Server Manufacturer: Me
> 
> I plan on using it for Nightly backups and a file server. I might even run a minecraft server on it.
> Temps, loudness, etc. : No Idea as of now.
> 
> Yay! that parts are in. (except for the CPU )
> 
> Shot of the inside. But what's a computer without a CPU. You also might notice that I am missing some SATA cables. the Motherboard only came with 1! Now I have to buy them local. They're like 3 bucks a piece.


Looks like a good setup. Will you be using your server to stream media to anything? I'm curious to know if you can stream 1080p videos without stuttering with the new format 2TB F4's.

Here's a thread on it. http://forum.wegotserved.com/index.php/topic/16349-2tb-samsung-f4eg-hd204ui-drives-question/

I don't remember or notice if the drive affected the speed of the backups or not, but it did affect my video streaming immediately.
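One commonly cited culprit for slow transfers on those F4/EARS-generation drives is Advanced Format sector alignment rather than the drive itself (an assumption here, since the exact cause in the linked thread isn't quoted). The arithmetic behind the usual check is simple:

```python
# Advanced Format drives use 4096-byte physical sectors but report
# 512-byte logical sectors; a partition performs well only when its
# starting LBA is a multiple of 8 (8 x 512 B = 4096 B)
def aligned_4k(start_lba: int) -> bool:
    return start_lba % 8 == 0

print(aligned_4k(63))    # XP-era default start sector -> False (misaligned)
print(aligned_4k(2048))  # modern 1 MiB-aligned default -> True
```

A misaligned partition forces the drive to read-modify-write two physical sectors for many writes, which is why it can show up as stuttering under sustained streaming or backup loads.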


----------



## mbudden

Quote:


> Originally Posted by *tiro_uspsss;11825142*
> antec 300 sucks for cable management?? I love the 300, still have one, nice case for price
> 
> :shrug:


I don't think he really cares much about cable management in his server case lol.


----------



## Balsagna

I don't have pics.. but

X3450, 500GB HDD, 8GB RAM, actually located in a datacenter with Internap bandwidth.

I use it for a gaming community that is basically dead now. Might donate the box to OCN..


----------



## rmp459

Quote:


> Originally Posted by *tiro_uspsss;11804932*
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:


Most home servers are really just serving up files and doing menial hosting tasks... and it is far cheaper to just use older desktop components than to go out of the way to get a server board and ECC RAM.

If we were talking a 24/7 production environment with RAM-intensive applications or databases... that would be a different story.

For the most part, even when using the server as a DC and letting it handle DHCP/DNS for my network, file sharing, and backups, it's barely going through the paces... not really too many places for memory errors.

Also, I've been running this hardware for years and am very confident about what is 100% stable in terms of memory timings/voltages. I've had my server board and RAM paired together since like 2008.


----------



## 115635

Seems like a lot of you have larger machines. I noticed when running a full desktop machine as a server that power usage was pretty abysmal, so I built this:

OS: Windows 7 Ultimate (will transition to ubuntu soon..)
Case: Chenbro ES34069 (4 hot-swap bays)
CPU: Intel Atom N330 / 2.0GHz (OC), 1.60 GHz (stock) / Dual Core
Motherboard: Zotac Ion G-E
Cooling: Stock, Vantec 60mm in front
Memory: 2GB
PSU: Built-in 160W
OS HDD: 320GB Scorpio Blue 2.5"
Storage HDD:

2 x WD Green 2TB (WD20EADS)
2 x WD Black 1TB (WD1000FALS)
Other: SunTech PCI-E SATA and eSATA add on card (mobo only had 4 sata ports)

Uses:

Backup
Media machine (attached to a 42" LCD TV, Plays 1080P)
Print Server
Hub for small file sharing network
Runs miscellaneous long-running scripts
Loudness: Chenbro case fans are a little noisy. Used my Noctua resistors to quiet them down a bit. Will soon replace the case fans and the Vantec up front with quieter alternatives.

Pics:
attached.


----------



## Norse

My work servers that I manage

One on top of the rack cabinet

OS: Server 2003 Standard
Case: Stock Dell PowerEdge R510
CPU: Dual Intel E5620 2.4GHz
Motherboard: Stock
Cooling: Stock
Memory: 4GB
PSU: Stock
OS HDD: RAID 1
Storage HDD: 2x 136GB 10k RPM drives in RAID 1, 3x 450GB drives in RAID 5
Server Manufacturer: Dell
Used for Email/Internet

Top server in the rack cabinet

OS: Server 2003 Standard
Case: HP ML370 G3
CPU: Dual Intel Xeon 3.06GHz
Motherboard: Stock
Cooling: Stock
Memory: 4GB
PSU: Stock
OS HDD: 73GB RAID 5 drive
Storage HDD: 3x 73GB 10k RPM drives in RAID 5, 3x 136GB drives in RAID 5
Server Manufacturer: HP
Used for our Oracle database/software
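For reference, RAID 5 gives up one drive's worth of space to parity, so the usable sizes of the sets above work out as a quick sketch shows:

```python
# RAID 5 distributes one drive's worth of parity across the array,
# so usable capacity is (n - 1) x drive size
def raid5_usable(n_drives: int, drive_gb: int) -> int:
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (n_drives - 1) * drive_gb

print(raid5_usable(3, 450))  # 3x 450GB set -> 900 GB usable
print(raid5_usable(3, 73))   # 3x 73GB set  -> 146 GB usable
```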

Server below the top one (the grey thing)
No idea about specs; it's our phone system, but it's a Nortel BCM400

Server to the right of the rack cabinet on the floor
HP desktop computer with Server 2003 on it, dual-core 2GHz, 2GB RAM, only used for a TS server (soon to be replaced with a dual quad-core, 16GB RAM TS server)










Don't you just love patch panels?









The other junk in our server closet


----------



## the_beast

Quote:


> Originally Posted by *Norse;12088216*


Seriously? You wouldn't get into any of my DCs for a second time...


----------



## pon

http://valid.canardpc.com/show_oc.php?id=1607288


----------



## Norse

Quote:


> Originally Posted by *the_beast;12090776*
> Seriously? You wouldn't get into any of my DCs for a second time...


A lot of it is mess from the previous person who did it. Can't get the time to redo it all, as I'd have to do it when the building is closed and no one is about (so after 9pm), and then make sure certain cables plug into exactly the same place due to them being patched phone lines.


----------



## subassy

I think I'll post info on the servers I work with at the data center. I can't actually post pics or tell you the name of the company because I would be violating any number of NDAs, so take this for whatever it's worth:

72 gigs of memory
1TB x24 hard drives (OS mirrored, data drives mirrored in pairs, one hot spare)
All running 10 Xen VMs (on CentOS)
1Gbps network (using Cat6, of course)
Dual Xeon CPUs at least Quad core each
3ware RAID controller (worth at least $700 ea.)

Temperatures for the CPUs seem to be 60-70C

Mainly either Pogo or Aberdeen as the OEMs (I'm pretty sure I can say that at least)

The servers all look something like this:


----------



## mbudden

Quote:


> Originally Posted by *subassy;12103420*
> ...


All custom servers or they OEM?


----------



## subassy

The company I work for is quite a large customer so there might be some custom specs on the hardware and inside design, I'm not sure. I'm almost positive I remember someone saying something about that. It wouldn't surprise me at least. Maybe I'll eventually post the specs to my personal server.


----------



## CheeseJaguar

OS: Ubuntu Server 10.4 LTS
Case: Apex MI-100
CPU: Intel Atom D510, 2 Cores, 4 Threads @ 1.66GHz
Motherboard: Intel D510MO
Cooling: Single 120mm Silenx case fan
Memory: 2GB DDR2-800
PSU: Case PSU
OS HDD: 320GB SATA 7200rpm drive
Storage HDD: 500GB SATA 7200rpm drive

What you use it for: File server, media server, *nix test platform, shell accounts for friends, torrent box, minecraft server

I'm thinking about dropping in a 64GB SSD for an OS drive, and upgrading RAM to 4GB.


----------



## bobfig

Quote:


> Originally Posted by *CheeseJaguar;12105028*
> OS: Ubuntu Server 10.4 LTS
> Case: Apex MI-100
> CPU: Intel Atom D510, 2 Cores, 4 Threads @ 1.66GHz
> Motherboard: Intel D510MO
> Cooling: Single 120mm Silenx case fan
> Memory: 2GB DDR2-800
> PSU: Case PSU
> OS HDD: 320GB SATA 7200rpm drive
> Storage HDD: 500GB SATA 7200rpm drive
> 
> What you use it for: File server, media server, *nix test platform, shell accounts for friends, torrent box, minecraft server
> 
> I'm thinking about dropping in a 64GB SSD for an OS drive, and upgrading RAM to 4GB.


That's a nice little server there. IMO I wouldn't put an SSD in a server. As for the RAM, that would be fine, because I hear a Minecraft server uses a bunch of RAM depending on how many people are connected.


----------



## killabytes

Yawn, if this turns into a "I work here so I'll post this..." thread...I'm gonna rage!

Let's try to keep this about PERSONAL servers you have at home.


----------



## null_x86

Quote:


> Originally Posted by *pon;12091761*
> http://valid.canardpc.com/show_oc.php?id=1607288


24 Logical Threads

Do Want. What all is it running??


----------



## mbudden

Hopefully Folding@home


----------



## Artikbot

Here's mine!

·AMD Opteron 1214 @2.91GHz w/stock Phenom HSF.
·ASUS M2N SLi Deluxe.
·2x1GB OCZ Platinum XTC CL4 DDR2-800.
·nVIDIA GeForce 8800GT 512MB.
·Barracuda 7200.10 system drive; gonna throw in a couple more of those, retrieve my data from them (they're a lost RAID 0) and make a RAID 0+1 on them.
·LC Power Rev2 550W.
·LG DVD-ROM, IDE.
·ASUS Vento A1 ATX case.

This is it

And this is the wire mess I have behind it. It's not much of a mess, but enough to lose cables in between >_<


----------



## FireMarshallBill

Here is my dedicated server. I use it for hosting my revo-d website and game servers like Garry's Mod and Minecraft. The server is super quiet and usually hangs around 40-50°C when you are playing games, 35°C when idle.

OS: Windows Server 2008 R2
Case: Xion Solaris
CPU: AMD 5400+ / 2.81GHz / Dual Core
Motherboard: M2N E SLI
Cooling: R1 Thermaltake and a Enermax 80mm case fan
Memory: 4GB DDR2 800MHz
PSU: 400W or something not sure
OS HDD: 200GB Seagate

Sorry for the craptacular cell pics lol


----------



## Imrac

Quote:


> Originally Posted by *FireMarshallBill;12137776*
> Sorry for the craptacular cell pics lol


Is bucket man always outside your dwelling?

Consider removing geotagging.
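For anyone wondering what "removing geotagging" actually targets: GPS coordinates live inside the JPEG's EXIF APP1 segment. Here's a minimal pure-Python presence check (a sketch, not a full EXIF/GPS parser):

```python
def has_exif(jpeg: bytes) -> bool:
    """Walk JPEG marker segments looking for an APP1 'Exif' block."""
    if jpeg[:2] != b"\xff\xd8":          # must start with SOI
        return False
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:              # lost sync; bail out
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:               # SOS: compressed data follows
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg[i + 4:i + 10] == b"Exif\x00\x00":
            return True                  # EXIF (possibly with GPS) present
        i += 2 + length                  # skip marker + payload
    return False
```

"Remove geotagging" tools work by re-saving the image without this segment; most phone camera apps also have a toggle to never write it in the first place.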


----------



## mbudden

The creeps on this site that love looking at EXIF data astounds me.


----------



## Imrac

Quote:


> Originally Posted by *mbudden;12139826*
> The creeps on this site that love looking at EXIF data astounds me.


I actually use it to better my photography skills. Since it's as easy as mousing over the photo, I think it's great raising awareness.


----------



## mbudden

And how're you going to better your photography skills from a photo from a phone? You're not. So your point is invalid there.


----------



## Imrac

Like I said, it's as easy as mousing over the photo. When I scroll down the page, my cursor is generally in the center of the screen. When it lands on a photo, the EXIF is overlaid, and you can even mouse over the little "GPS" button and Google Maps will be overlaid.

So anyways, anyone got a recommendation for a new power supply for my server? #122 idles at 60 watts and peaks around 95 during spinup/Prime95. Currently I'm using an 8-year-old generic in it, and I'm a little worried.


----------



## Jtvd78

Quote:


> Originally Posted by *Imrac;12141300*
> Like I said, it's as easy as mousing over the photo. when I scroll down the page, my cursor is generally in the center of the screen. When it lands on a photo, the EXIF is overlaid and you can even mouse over the little "GPS" button and google maps will be overlaid.
> 
> So anyways, anyone got a recommendation of a new power supply for my server? #122 Idles at 60watts and peaks around 95 during spinup/prime95. Currently I am using an 8 year old generic in it, and I am a little worried.


I'd go for the Neo Eco 400W.


----------



## tiro_uspsss

did a case make-over..

before:

http://public.bay.livefilestore.com/y1pYMSWIojX-SyPcBmdbvlxH0Oh6tTMy4TzOL2IqlaHQDjipcMBQXgS9_cd076kkjQhVauSHK-9f1j7B0nBtOcVVA/IMG_1601b.jpg?psid=1

^ space for 24x 3.5" HDDs (with PSU)









after:

http://fyez6q.bay.livefilestore.com/y1p3HQZR4wrK_osJ3Fe5MmyOMp85QnjJV7Gm9xBo3YtCxNu3B6EPRwtdf5OTiCtXRjNuBw4l7qlIV8jK0nScBUtDajNj69ZvcmK/2011-02-03%2017.19.38.jpg?psid=1

^ space for 32 3.5" HDDs......not finished cause I still need to mount the PSU somewhere!


----------



## XxG3nexX

OS: Server 2008 R2 Enterprise Edition
Case: Antec 1200
CPU: Amd Athlon X2 5600 / 2.8Ghz / Dual Core
Motherboard: M2n32 Sli Deluxe
Cooling: Zalman 9700 NT, 5x 120mm Red led fans, Antec 200mm, 
Memory: 4GB
PSU: Antec 550 watt
OS HDD: 80GB Seagate HDD
Storage HDD: Space *(10.9TB total)** = (5x 2TB + Hot spare in raid 5) + (2x 2TB in raid 0)*
Interface: Perc 5/i, (2 SAS to 8 SATA)
Server Manufacturer: n/a

What you use it for (backups, file server, torrents, and nzbs)


----------



## the_beast

Quote:


> Originally Posted by *tiro_uspsss;12243868*
> did a case make-over..
> 
> before:
> 
> http://public.bay.livefilestore.com/y1pYMSWIojX-SyPcBmdbvlxH0Oh6tTMy4TzOL2IqlaHQDjipcMBQXgS9_cd076kkjQhVauSHK-9f1j7B0nBtOcVVA/IMG_1601b.jpg?psid=1
> 
> ^ space for 24x 3.5" HDDs (with PSU)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> after:
> 
> http://fyez6q.bay.livefilestore.com/y1p3HQZR4wrK_osJ3Fe5MmyOMp85QnjJV7Gm9xBo3YtCxNu3B6EPRwtdf5OTiCtXRjNuBw4l7qlIV8jK0nScBUtDajNj69ZvcmK/2011-02-03%2017.19.38.jpg?psid=1
> 
> ^ space for 32 3.5" HDDs......not finished cause I still need to mount the PSU somewhere!


Nice.

Couple of things though - it appears your cooling for your drives won't pull air over the lower disks in your newly added brackets - are you going to add a second fan lower down to cool them?

Also I'd advise you to check your PSU ratings carefully - you may well need to use 2 separate PSUs to run that many drives safely, even if you use a high wattage unit and use staggered spin-up. With 32 drives you will have one hell of a draw on the 5V line, and modern PSUs (even at the 1 kW and above mark) usually have limited power available there as modern CPUs and GPUs are driven from the 12V rails.
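To put rough numbers on that 5V-rail warning, here's a back-of-envelope sketch. The 0.7A-per-drive figure is an assumed typical value for a 3.5" drive's 5V logic draw; check your drives' labels for the real number:

```python
# Back-of-envelope 5V-rail budget for a many-drive array.
# The per-drive current is an assumed typical value, not a spec.
AMPS_5V_PER_DRIVE = 0.7  # typical 3.5" drive 5V draw (assumption)

def five_volt_load(n_drives, amps_per_drive=AMPS_5V_PER_DRIVE):
    """Return (amps, watts) drawn on the 5V line by n_drives."""
    amps = n_drives * amps_per_drive
    return amps, amps * 5.0

amps, watts = five_volt_load(32)
print(f"32 drives: ~{amps:.1f} A ({watts:.0f} W) on the 5V rail")
```

At roughly 22A, 32 drives alone can saturate the 5V rail of many modern units (which commonly rate it around 20-25A), which is exactly why a second PSU or staggered spin-up comes up.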


----------



## tiro_uspsss

Quote:



Originally Posted by *the_beast*


Nice.

Couple of things though - it appears your cooling for your drives won't pull air over the lower disks in your newly added brackets - are you going to add a second fan lower down to cool them?

Also I'd advise you to check your PSU ratings carefully - you may well need to use 2 separate PSUs to run that many drives safely, even if you use a high wattage unit and use staggered spin-up. With 32 drives you will have one hell of a draw on the 5V line, and modern PSUs (even at the 1 kW and above mark) usually have limited power available there as modern CPUs and GPUs are driven from the 12V rails.


Well, temps in the old config were never a problem (~43°C), so we'll see how they go. Keep in mind the fans in the lower section are 120x38mm fans (3K RPM, 45dB, 133CFM).








I will have a look at putting a 92mm fan below the 120mm fan at the rear.

As for the PSU... I don't have 32 HDDs... yet.

*sigh* If a truckload of cash fell into my lap, I'd fill out the entire bottom section with WD Black 2TBs. I 'only' have 17 HDDs atm. The PSU I have atm is a Silverstone 850W, so that should tide me over for a while; once I get more HDDs & $, I'm planning on getting the Silverstone 1500W.


----------



## blupupher

My little home server I just got together:

*OS:* WHS w/PP3
*Case:* Antec 300
*CPU:* BE-2400 for lower power use
*Motherboard:* EPoX EP-MF4-J
*Cooling:* OCZ Vendetta 92mm
*Memory:* 4 gigs (4x1) OCZ Vendetta DDR2 1066
*PSU:* Antec Earthwatts 380 (delta)
*OS HDD:* 80 gig WD IDE
*Storage HDD:* 500 GB WD black SATA (for recorded TV), 2 TB WD Green SATA (x2) and a 640 gig WD green SATA.
*Server Manufacturer:* Self

It does nightly backups of all the computers in the house and holds all our media (ripped DVDs and recorded TV).
I couldn't get an accurate temp at first, since it was sitting in my garage at about 35°F (the CPU temp monitor on the mobo showed 11°C); temps are in the mid-30s °C now.
I plugged in my Kill A Watt to see how much power this thing uses: it sits between 70-80W.
I also plan on using it as a print server for the house; just need to get around to setting it up.
Still getting the whole WHS thing figured out, so not sure what else I will be running on it.
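Once you have a Kill A Watt reading, turning wall watts into a yearly bill is simple arithmetic. A quick sketch (the $0.12/kWh rate is an assumed figure; substitute your local rate):

```python
def annual_cost(watts, rate_per_kwh=0.12, hours_per_year=24 * 365):
    """kWh and cost of a box running 24/7 at a constant wall draw."""
    kwh = watts * hours_per_year / 1000.0
    return kwh, kwh * rate_per_kwh

kwh, dollars = annual_cost(75)  # midpoint of a 70-80 W reading
print(f"{kwh:.0f} kWh/year, about ${dollars:.2f} at $0.12/kWh")
```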

Here are some pics in the Antec 300. I may be swapping it into a Cooler Master Centurion 5; the 140mm fan on the Antec is a little louder than I want (but the Antec does have better airflow overall).

Here is a pic of it before I got it into a case:









*********************************

edit: Here it is with all the drives in it now










80 gig ide - OS

2 TB SATA - DE Storage

2 TB SATA - DE Storage

640 GB SATA - Backup of pics and important files (need to get an external drive for it)

500 GB SATA - Recorded TV

DVD drive is just in there since I don't have a filler plate for it.


----------



## haza1981

PowerEdge 2850
2x 3GHz Xeon processors
4GB RAM
6x 40GB HDDs
Server 2008 R2

Running mainly for studying (maybe the odd Hyper-V movie-hosting desktop)


----------



## SKl

Server owners club:

http://www.overclock.net/member-run-...-club-d-2.html


----------



## null_x86

Ok, I have to nitpick really quickly..

Desktop OS w/ Server features =/= Server

Windows 7 is not a server OS. Just gotta get that off my chest.


----------



## Jtvd78

Quote:


> Originally Posted by *null_x86;12316663*
> Ok, I have to nitpick really quickly..
> 
> Desktop OS w/ Server features =/= Server
> 
> Windows 7 is not a server OS. Just gotta get that off my chest.


Technically.....
Quote:


> In computing, the term server is used to refer to one of the following:
> 
> -a computer program running as a service, to serve the needs or requests of other programs (referred to in this context as "clients") which may or may not be running on the same computer.
> 
> *-a physical computer dedicated to running one or more such services, to serve the needs of programs running on other computers on the same network.*
> 
> -a software/hardware system (i.e. a software service running on a dedicated computer) such as a database server, file server, mail server, or print server.


http://en.wikipedia.org/wiki/Server_(computing)

I think as long as the computer is used as a server, and nothing else, it can be considered a server. If someone uses their main rig as a server on the side, then I don't consider that computer a server.


----------



## bobfig

Well, I got my new server up and running. I went from an Atom with 1GB to an E8400 with 4GB; what a boost in speed. I also have to note I love this new case. I got it for $40 and it has a lot of features that higher-end cases have.

OS: Server 2008 R2 Standard
Case: Diablotek EVO
CPU: E8400 3GHz
Motherboard: Biostar mATX G41-M7
Cooling: XIGMATEK LOKI SD963 92mm + bolt-through kit
Memory: G.Skill 4GB 800MHz
PSU: Corsair VX450
OS HDD: 200GB WD PATA
Storage HDD: 200GB Maxtor PATA, 640GB Seagate SATA
Server Manufacturer: all mine

Uses are still the same: file sharing, torrent/big downloads, backups. I did, however, have a Minecraft server running for when I get bored; maybe another game server when needed.

Any additional software that you use?
Whatever I come across I may use or not.

I plan on getting a PERC 5i/6i eventually and running RAID 5 with 1-terabyte drives.

I know the top and bottom drives are upside down; it made it easier for the PATA cables to get plugged in. Eventually I will be going with SATA to fix this.


----------



## Jtvd78

Bummmmppp!


----------



## bobfig

Mine's still the same with nothing changed :/

I do, however, need to get some more HDDs, as the ones I have now are nearly full.


----------



## Bonz(TM)

Blurry camera phone pics, but whatever.

USAGaming.net Game server:
• AMD Athlon II x4 620
• MSI 785-GT63
• 8GB G.Skill DDR2-800
• 2x WD 160GB - RAID1
• Antec Earthwatts 380w
• Norco RPC-251 2U
• OS: Windows Server 2008 R2










File Server @ Home:
• AMD Phenom II x4 940
• Gigabyte 785GM-US2H
• 8GB G.Skill DDR2-800
• 5x Hitachi 5K3000 2TB
• 2x Seagate LP 1.5TB
• 8x WD Black 1TB
• Supermicro AOC-USAS2-L8i
• 3x SNT SAS3051b Hot Swap caddies
• Corsair TX650
• CoolerMaster CM590
• OS: WHS for now, in a few days either ESXi or OpenIndiana


----------



## The_Punisher

Dell Poweredge 1500SC
OS: FC14 w/Amahi
CPU: 2x Pentium III 1.2Ghz
Cooling: 1 delta fan and one 80mm exhaust. CPUs passively cooled!
Memory: 2x1GB PC133 SDRAM ECC
PSU: stock single (wish I had redundant)
OS HDD: 36.4GB 10K SCSI
Storage HDD(s): None yet. 3Ware 7500-8 IDE RAID card

Uses: DHCP/DNS, DLNA server, storage and backups once I get more HDDs.
Temps and noise are very low. Link to pics of my progress on it in sig.


----------



## agrophel

Patch panel: AMP Netconnect Cat 6
HP Procurve 1800 Switch
Chenbro RM 414B
Norco 4220
APC UPS 2200VA

Chenbro RM 414B
Windows Server 2008
Intel Xeon X3460 2.80GHz
Supermicro X8SIA-F
4GB RAM (ecc)
Areca 1680IX-16
10x 2TB hdd in raid 6

Server being built, before putting the RAID card in.









Norco 4220
Windows Server 2003
10x WD black 1TB in raid 6
HP SAS Expander
4GB ram
Some Intel heatsink and CPU


----------



## DIABLOS

My ickle HP ProLiant MicroServer, a bit of a bargain at £140.

AMD Athlon II Neo N36L / 1.3 GHz
4GB RAM
2x Samsung SpinPoint F4 EcoGreen 2TB
2x Samsung SpinPoint F3 EcoGreen 1.5TB
Windows Server 2008R2

Used for : Fileserver, download box, FTP server


----------



## the_beast

Quote:


> Originally Posted by *DIABLOS;13609110*
> My ickle HP Proliant Microserver bit of a bargain at £140


If they made a 4-bay rackmount version of this I'd be all over it. Still, it's a great unit for the price, and a much better buy (for most people) than a prebuilt NAS device.

How does 2008R2 work out on the little Neo though? Seems like a little overkill to put such an OS on such a little CPU...


----------



## kujon

Quote:


> Originally Posted by *agrophel;13596139*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> patch pannel AMP Netconect Cat 6
> HP Procurve 1800 Switch
> Chenbro RM 414B
> Norco 4220
> APC UPS 2200VA
> 
> Chenbro RM 414B
> Windows Server 2008
> Intel Xeon X3460 2.80GHz
> Supermicro X8SIA-F
> 4GB RAM (ecc)
> Areca 1680IX-16
> 10x 2TB hdd in raid 6
> 
> Server under building, before put raid card in.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Norco 4220
> Windows Server 2003
> 10x WD black 1TB in raid 6
> HP SAS Exapnder
> 4GB ram
> Some intel HK and CPU


between those two cases, which one do you prefer and why?


----------



## Imrac

Snagged three Dell PowerEdge SC 1425. They are just sitting under my bed for now


----------



## agrophel

Quote:


> Originally Posted by *kujon;13615977*
> between those two cases, which one do you prefer and why?


The Chenbro is higher quality and quieter than the Norco when using the original fans.


----------



## kujon

Quote:


> Originally Posted by *agrophel;13621271*
> Chenbro is higher quality and less noise then Norco, when useing orgial fans


I'm researching what case I should get for my next large server. I was thinking about the Norcos, since they have what I need (something around 20-24 bays).


----------



## Pentium-David

Updated mine quite a bit since I last posted it... Very basic-looking, but very, very quiet, and it does everything I want it to.
Celeron D 356 (3.33GHz)
2GB 533 RAM
Windows Server 2003 R2
Antec Earthwatts 380W
2x2TB
Using the onboard and PCI NICs together for more bandwidth...


----------



## 8ight

HP DL580 G3
4x 3.33GHz Xeon MP (HT, x64, VT-x), 12MB cache
4x 36GB 15K Ultra320 SCSI hot-swap drives (battery-backed RAID)
8x 1GB DDR2 667MHz ECC registered RAM (2 of 4 memory boards filled)
2x integrated Intel gigabit LAN ports
HP iLO
2x 1.3kW redundant PSUs
6x redundant Nidec Beta V fans (LOUD!!)
4U DL580 G3 chassis

It does EVERYTHING I could ask a server to.
Always running Linux, but the distro changes based on the project.

Google a pic, they all look the same


----------



## nigelke

some sweet home servers here ^^

I really long for the moment when I live alone and can set up my own servers.


----------



## jibesh

*OS:* Windows Server 2008 R2 Enterprise Edition
*Case:* NORCO RPC-4216 4U Rackmount
*CPU:* Intel i7 950
*Motherboard:* Asus P6X58-E WS (w/ dual Intel onboard NICs)
*Cooling:* Thermaltake Frio CPU Cooler
*Memory:* Gskill 24GB (6x4GB) DDR3 1600
*PSU:* Seasonic 750W
*OS HD:* 2x WD VR 74GB 10K RPM in RAID1
*HDDs:*
8 x Hitachi 2TB 7200RPM in RAID5 - NAS
8 x Hitachi 1TB 7200RPM in RAID10 - VM Storage
*RAID Controllers*:
3ware 9690SA-8i for RAID10 array
3ware 9650SE-8LPML for RAID5 array

Purpose: NAS, Hyper-V VMs, Domain Controller, Windows Deployment Server


----------



## Ooimo

I have an old mail server; it has two Pentium 3s and it's huge. I don't use it for anything though.


----------



## jigglywiggly

OS: Ubuntu 10.04 LTS
RAM: 8 gigs DDR2
CPU: Q6600 @ 3.4GHz
11x 500GB HDDs in RAID 6 under mdadm
1x 500GB for the OS
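For anyone pricing out a similar build: RAID 6 spends two drives' worth of space on parity (and mdadm requires at least 4 members), so usable capacity is (n - 2) x drive size. A quick sketch of that calculation:

```python
def raid6_usable(n_drives, drive_gb):
    """Usable capacity of a RAID 6 array: two drives' worth is parity."""
    if n_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (n_drives - 2) * drive_gb

# The 11x500GB array above: 9 drives of data, 2 of parity.
print(raid6_usable(11, 500), "GB usable")
```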


















BTW, lol @ the Windows server users.


----------



## Deeeebs

Well, at my place of employment this is what I get to work on daily. It's about 40 different servers right now, ranging from DL160 G5s to blades to DL980 G7s, with a few Itanium units mixed in there.

I only have the silver cart and the first two racks next to the cart. I'm trying to take over the last two from another group.










Also in the above pic is my folding behemoth from my sig.

8 Procs, 128 threads, 256GB 10600r ddr3, 24/7 folder.










EDIT:

Almost forgot about my new home server / 24/7 folder.

It's just a lousy Xeon X5675 overclocked to 4.2 with (1) 32GB Samsung SSD loaded with Win 2K8 R2 Enterprise Server, (2) 750GB drives for storage, 6GB Mushkin Radioactive 7-9-7-24, and a Quadro FX4800.


----------



## The_Punisher

Quote:


> Originally Posted by *Ooimo;13686640*
> I have an old mail server, it has 2 pentium 3's and it is huge. I don't use it for anything though


You would be surprised how much you can do with it!


----------



## Cvalley75

Deebs,
What is a Quadro doing in a home server, that's one pricey card. I'm a solidworks user, I'd love to have one of those in my workstation.


----------



## Deeeebs

Quote:


> Originally Posted by *Cvalley75;13703749*
> Jiggly,
> What is a Quadro doing in a home server, that's one pricey card. I'm a solidworks user, I'd love to have one of those in my workstation.


Want to buy it? I also have a Quadro FX5500...


----------



## mbudden

Quote:


> Originally Posted by *Cvalley75;13703749*
> Jiggly,
> What is a Quadro doing in a home server, that's one pricey card. I'm a solidworks user, I'd love to have one of those in my workstation.


I think you mean Deeeeebs.


----------



## Deeeebs

Quote:


> Originally Posted by *mbudden;13703785*
> I think you mean Deeeeebs.


I was just about to edit my post with that...  Ty buddy


----------



## mbudden

Quote:


> Originally Posted by *Deeeebs;13703798*
> I was just about to edit my post with that...  Ty buddy


Quit making everyone jelly


----------



## Deeeebs

Quote:


> Originally Posted by *mbudden;13703816*
> Quit making everyone jelly


Of what? :-/


----------



## Skaterboydale

Rack on the floor is a dual-socket P3 with a 300GB 10K HDD and 4GB of RAM, running CentOS. This is solely my site server; I host a friend's site and my own.

The small black standing one on the right is an Athlon II 240e, I think, with 4GB RAM, 2x 1TB HDDs, 2 Ethernet cards (3 total ports), and a wireless card. This runs Windows Server 2008 and acts as my router, firewall and NAS/video-streaming box.


----------



## cactusS4

Dual e5530s in a Z8NA-D6C
24GB 1333 9-9-9-24 ram at 1066 7-7-7
Corsair TX750
4 7K2000 2TB in RAID5 using mdadm Stuff
2 Maxtor 150GB in RAID1 using mdadm OS
Ubuntu 11.04

Used for Xen, Subsonic, FTP, web, Folding@home


----------



## Cvalley75

Quote:


> Originally Posted by *mbudden;13703785*
> I think you mean Deeeeebs.


Thanks for the correction, too many pics confuse me.

The Quadro cards are a bit too spendy for my blood. I like 'em, but they're crazy expensive.


----------



## Deeeebs

Quote:


> Originally Posted by *Cvalley75;13706635*
> Thanks for the correction, too many pics confuse me.
> 
> The Quadro cards are a bit too spendy for my blood, I like em, but their crazy expensive.


Still doesn't mean you CAN'T buy one...


----------



## 102014

Quote:


> Originally Posted by *DIABLOS;13609110*
> My ickle HP Proliant Microserver bit of a bargain at £140
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD Athlon II Neo N36L / 1.3 GHz
> 4GB RAM
> 2x Samsung SpinPoint F4 EcoGreen 2TB
> 2x Samsung SpinPoint F3 EcoGreen 1.5TB
> Windows Server 2008R2
> 
> Used for : Fileserver, download box, FTP server


Looks like a good deal! My server-ish box started as a torrent slave, then evolved into some form of HTPC with file server duties.

Where did you buy this from?


----------



## Eaglake

OS: FreeNAS 0.7.2
case: Custom made
CPU: Intel Celeron 2.4GHz
MB: Asrock P4i65G
RAM: 2x 512MB Corsair XMS
PSU: Unknown 300W
OS drive: Delock IDE Flash Module 1GB
Storage: 2TB x2 Western Digital Green drives

This is my storage server. It's pretty slow and old, but it's quite silent.
It has two fans: one for the CPU and one 120mm for general cooling.
But I think I'll have to retire this one and make a new one.


----------



## Liighthead

thats pretty sweet


----------



## DIABLOS

Quote:


> Originally Posted by *markp1989;13721953*
> looks like a good deal! my server ish started as a torrent slave then evolved in to some form of HTPC with file server uses
> 
> where did you buy this from?


Ebuyer; it's a very flexible solution.


----------



## Rexel

Here's mine:

OS: WHS v1
CPU: E2200 2.2GHz
Motherboard: Asus P5B SE
RAM: 3GB
HDD OS: WD 400GB
HDD DATA: 3x Samsung 1.5TB (HD154UI)


----------



## Slim Shady

In my home rack I have:
HP ProLiant DL380 G3 (Domino server)
Rackable Systems dual Opteron 250 (terminal server)
2U homebrew AMD Athlon X2 (ISA Server 2006)

All are kept in their rack in the garage. Pics soon.


----------



## Ooimo

Quote:


> Originally Posted by *cactusS4;13704400*
> Dual e5530s in a Z8NA-D6C
> 24GB 1333 9-9-9-24 ram at 1066 7-7-7
> Corsair TX750
> 4 7K2000 2TB in RAID5 using mdadm Stuff
> 2 Maxtor 150GB in RAID1 using mdadm OS
> Ubuntu 11.04
> 
> Used for xen, subsonic, ftp, web, Folding@home


Wait, 24GB of RAM? Why?


----------



## mbudden

Quote:


> Originally Posted by *Ooimo;13737350*
> Wait, 24gb of ram, why?


Virtualization.
http://www.xen.org/


----------



## Baking Soda

OS: Windows Server 2008 R2 x64
case: Custom made
CPU: Intel Core 2 Duo x6800 @3.0ghz
MB: Bad Axe 2
RAM: 2x1gb Corsair DDR2 800mhz
PSU: Macron 250W(prolly going to blow up soon)
OS drive: 80GB WD 7200RPM
Storage: 2x1TB Seagate 7200RPM
Use: Folding@home and backups



Ujelly?


----------



## Yumyums

Yeah, a little bit actually


----------



## Liighthead

Even though your heatsink sits out of the case, nice work.

You made it yourself, I guess?


----------



## Baking Soda

Quote:


> Originally Posted by *Liighthead;13755573*
> even though ur heatsink sits out of the case
> 
> 
> 
> 
> 
> 
> 
> 
> nice work
> 
> you make it i guess?


Yeah, that heatsink is massive, and yes, I built it.


----------



## TheLombax

OS: Ubuntu 11.04 desktop modified to make it headless
Case: HP DC7100 SFF
CPU: Pentium 4 LGA775
Motherboard: DC7100
Memory: 512mb DDR RAM 2x256mb
OS HDD: 1TB WD Caviar Green WD10EARS
Storage HDD(s): Same as above
Server Manufacturer: HP (was a desktop from work)

I use it mostly as a file server, but also as a media server for the PS3 and a central location for backups. 1TB may not be much these days, but I struggle to even fill 500GB at the moment, lol. This machine does its job very well and I am happy with it. If I need more storage I will upgrade to a home-built server running a server OS.

Machine is always running cool and it is silent, it sits under my bed. I can hear the occasional hard disk noise, but it's not much.

I also use it as a PS3 media server and a torrent downloader. It only has a power and ethernet connection, so I use the uTorrent WebUI and remote desktop on the machine.

EDIT: Pics of the machine



















I have also changed the OS since making the post. I replaced XP Pro SP3 with Ubuntu 11.04, using Samba for shares and Transmission for torrenting. I use the RealVNC client for remote desktop and manage Transmission through its web interface to make this machine a proper headless unit.


----------



## ZFedora

OS: Windows Server Standard 2008 R2
Case: Apex Vortex
CPU: Intel Core i5 2300
Motherboard: MSi H61MU-E35 (b3)
Cooling: Stock
Memory: 4GB DDR3
PSU: Corsair TX650W
OS HDD: 3x 1TB (2 are Seagate Barracudas, 1 is a Hitachi 7200)
Storage HDD: 3x 1TB (2 are Seagate Barracudas, 1 is a Hitachi 7200) No RAID
Server Manufacturer: Me


----------



## killabytes

After moving into our new house I wasn't sure what to do with my gear. I was thinking of putting it into the lower basement of our four-level split home so it would stay cool, but since nothing is wired up yet, I went with my office. I'm in the middle of setting up my rack, so forgive the mess; the wife is on my ass about it daily.

The cover is missing from the firewall. My CF card died, so it's running a 2.5" drive now. Not shown is my new 1U socket 775 server; that's sitting on my desk right now while I install what I need. Also not shown is my second ISP connection. Since it's a fiber connection I couldn't have it run into this room; it's load-balanced into the pfSense box.

From the top down it's...

Dell 15" LCD
Rogers SMC DOCSIS 3.0 Modem
Belkin Wireless 'N'
Trendnet 24 Port Gigabit Switch
Watchguard Firebox II; running pfSense
Left is my Ubuntu Web Server
Right is Server 2003 with 10TB of storage. Acts as file, ftp, torrent and other crap server.
More pics and progress to come. And yes, I'm painting soon.


----------



## stolid

Home Server (My old computer, previously OC'd to 2.95Ghz)
OS: Windows 7
Case: Rosewill replica of Thermaltake Tsunami Dream
CPU: AMD Opteron 1212 2Ghz
Motherboard: Abit NF-M2
Cooling: Scythe Infinity
Memory: 1GB G.Skill (would add more, but the Scythe Infinity blocks the slots and I'm lazy)
PSU: Slightly questionable 550W
OS HDD: 60GB Samsung (ATA)
Storage HDD(s): 2x 640GB WD Green (RAID1)
Server Manufacturer: Me
Purpose: File, ventrilo, and personal web server

VPS
OS: Ubuntu Server (under Xen)
CPU: A piece of a Xeon E5620
Memory: 512MB
Storage: 50GB of a RAID10 I think
Purpose: Web server


----------



## joshd

Quote:


> Originally Posted by *DIABLOS;13729482*
> Ebuyer, it's a very flexable solution.


Do you mind sharing the link with us?


----------



## mbudden

Quote:


> Originally Posted by *joshd;14238610*
> Do you mind sharing the link with us?


Google fu.

http://www.ebuyer.com/
http://www.ebuyer.com/product/253305


----------



## joshd

Quote:


> Originally Posted by *mbudden;14238777*
> Google fu.
> 
> http://www.ebuyer.com/
> http://www.ebuyer.com/product/253305


Thanks. Wow, that's a good deal. Only £140 for that!


----------



## brickboiler

Some of you guys have some serious stuff in your homes; it's amazing.
Personally, my power and cooling budget is pretty minimal, so I put this together and it's been running surprisingly well.



It's an Atom-powered Eee PC with 4GB of internal flash storage and a 1TB WD Passport glued to the top. It's running Ubuntu and CrashPlan, and its main purpose in life is to be a remote backup server for some of my friends. It lives in an oddly shaped kitchen cabinet above my microwave, behind all those mugs.


----------



## Liighthead

pics not working D: ^


----------



## SQLinsert

Had to ditch all this gear to eBay when the economy failed.

Looking back, I really don't care, I guess. Would have been nice to have kept my HTPC and some HDDs with important data.

These are from Jan-Feb 2007.

Funny to remember Microsoft shooting itself in the foot with Vista.


----------



## Pentium-David

Quote:


> Originally Posted by *Eaglake;13722120*
> OS: FreeNAS 0.7.2
> case: Custom made
> CPU: Intel Celeron 2.4GHz
> MB: Asrock P4i65G
> RAM: 512Mb x2 Corsair XMS
> PSU: Unknown 300W
> OS drive: Delock IDE Flash Module 1GB
> Storage: 2TB x2 Western Digital Green drives
> 
> This is my storage server. It's pretty slow and old, but it's quite silent.
> It has two fans - one for cpu and one 120mm for general cooling
> But I think I'll have to retire this one and make a new one.


I wouldn't trust an "Unknown 300W" power supply running all the stuff in that server


----------



## SniperXX

I just picked up a free PowerEdge 2950 from a client that went out of business. I'll miss them, really cool people, but their server shall live on. I have new CPUs and 8GB of memory in the mail to give it some new horsepower, then it's off to the datacenter.


----------



## Slim Shady

Quote:


> Originally Posted by *Pentium-David;14400008*
> I wouldn't trust an "Unknown 300W" power supply holding all the stuff on the server


That's why all my servers have redundant, branded PSUs


----------



## Eaglake

Quote:


> Originally Posted by *Pentium-David;14400008*
> I wouldn't trust an "Unknown 300W" power supply holding all the stuff on the server


Maybe I should be concerned, but it's holding up pretty solid, and the stress on that PSU is pretty low, so it shouldn't go boom boom.

Anyhow, I'm saving up for a new HTPC & storage server combo.


----------



## fventura03

My little 10TB server is not worthy of this thread yet, hehehe.


----------



## d4rkf0rm

I just bought two Sun Microsystems Enterprise 420R systems to add to my rack; one has a bad riser card though.


----------



## hojnikb

Well, my server is a tiny Seagate Dockstar running Arch Linux ARM.
I use it for torrents, Samba and FlexGet. It's nice, since it's silent and uses less than 5W of power.

Specs:

1.2GHz ARM SoC
128MB of RAM
1Gbit LAN
16GB USB stick for storage
Passively cooled


----------



## dhenzjhen

Mobo: TYAN 5211
CPU: P4 2.8Ghz
RAM: 1GB x 2
HD: 400GB x 2 raid1
HBA: 3ware
Chassis: old crappy chassis no cover
OS: FreeBSD 7.1
Location: Garage
Uptime: 4+ years and still running

Applications:
DHCP server for LAN, Web, firewall, PXE, Samba, Torrent Web download, openvpn.

Lots of stuff, check it out here: http://rt66.ath.cx


----------



## d4rkf0rm

my rig:


----------



## Eaglake

Quote:


> Originally Posted by *d4rkf0rm;14462978*
> my rig:


What does it do? What is its role?


----------



## fventura03

Quote:


> Originally Posted by *dhenzjhen;14462475*
> Mobo: TYAN 5211
> CPU: P4 2.8Ghz
> RAM: 1GB x 2
> HD: 400GB x 2 raid1
> HBA: 3ware
> Chassis: old crappy chassis no cover
> OS: FreeBSD 7.1
> Location: Garage
> Uptime: 4+ years up and still running
> 
> Applications:
> DHCP server for LAN, Web, firewall, PXE, Samba, Torrent Web download, openvpn.
> 
> Lot of stuff check it out here http://rt66.ath.cx


yeah, i was being nosy but i love how you have the website setup, you can even monitor LAN bandwidth, wow!


----------



## dhenzjhen

Quote:


> Originally Posted by *fventura03;14463810*
> yeah, i was being nosy but i love how you have the website setup, you can even monitor LAN bandwidth, wow!


yeah, with MRTG and SNMP, and you can graph them with RRDtool.
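For anyone curious what MRTG is actually doing under the hood: it polls an SNMP interface counter (e.g. ifInOctets) on a fixed interval and turns the difference between two samples into a rate. This is just an illustrative sketch of that counter math — the function name is made up, and real pollers use net-snmp and rrdtool rather than hand-rolled code like this:

```python
# Sketch of the rate calculation MRTG performs on a 32-bit SNMP
# counter (e.g. IF-MIB ifInOctets), including wraparound handling.
COUNTER32_MAX = 2**32

def octets_per_second(prev, curr, interval_s):
    """Average byte rate between two Counter32 samples taken interval_s apart."""
    delta = curr - prev
    if delta < 0:  # the 32-bit counter wrapped around between samples
        delta += COUNTER32_MAX
    return delta / interval_s

# Typical 300-second MRTG poll, no wrap: 3,000,000 bytes in 300 s
print(octets_per_second(1_000_000, 4_000_000, 300))  # 10000.0 B/s

# Counter wrapped just before the second sample; still a sane rate
print(octets_per_second(4_294_967_000, 704, 300))
```

On a fast link a Counter32 can wrap in minutes, which is why gigabit interfaces are usually polled via the 64-bit ifHCInOctets counters instead.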


----------



## d4rkf0rm

Quote:


> Originally Posted by *Eaglake;14463063*
> What does it do? What is it's role?


it's my day-to-day computer and my gaming computer, but it hasn't seen much action in a while since i've been working on my rack (work in progress....):


----------



## dhenzjhen

Quote:


> Originally Posted by *d4rkf0rm;14463988*
> its my day to day computer and my gaming computer, but it hasnt seen much action in a while since ive been working on my rack (work in progress....):


What applications do you run on that monster? Have you tried playing with clusters? Run a queue like the Sun Grid Engine stuff?


----------



## d4rkf0rm

Quote:


> Originally Posted by *dhenzjhen;14464032*
> What application do you run with that monster? Have you tried playing with clusters? run a que like the sun grid engine stuff?


actually i just got the sun boxes and the rack the same day so my focus is getting all my proliant servers moved out of the closet and mounted on the rack before i crack open the sun machines.

the sun machines have 4x 4 core SPARC processors and 16GB of ram, im going to do some work with zones for starters and go from there


----------



## fventura03

Quote:


> Originally Posted by *dhenzjhen;14463898*
> yeah with MRTG and with SNMP you can graph them with RRDtools.


amazing! was listening to your mp3 server for a little bit.


----------



## d4rkf0rm

imagine another DL 360 g4p on my rack, the kernel just finished compiling and i just installed it on my rack :3


----------



## Paladin Goo

My home server? Well, it also acts as my media center PC, and is hooked up via an HD4870 HDMI to my television, and streams from my main PC wirelessly to my TV...but I also use it as a file server as well. I used to use my PS3 as my HTPC...but that Cinavia crap...pffft. I also use it for rendering when I'm video editing.

Anywho...I also have a dedicated server box that I own and colocate in Atlanta....SO I'll post the specs of both.

Home Server/HTPC:
CPU:AMD Phenom II 1090T @ 4GHz
CPU Cooler: Thermalright TRUE Black (may put old NH-D14 on there soon)
MOBO:ASUS Crosshair IV Formula
RAM:2x4GB OCZ Gold 1333Mhz DDR3
HDD:250GB SATA2 WD Caviar (because it doesn't need much space - as it streams)
CASE: HAF 932
GPU: HIS HD 4870 512MB (reference)
How I stream to it: VLC...looking for better/easier solutions.
OS: Windows 7 Professional x64

Dedicated Server:
http://www.tyan.com/product_SKU_spec.aspx?ProductType=BB&pid=423&SKU=600000179

Right now, it's sitting with a single 8-Core opteron, and 32GB of DDR3 unbuffered CRUCIAL memory. Runs ESXI. Looking to get another CPU to fill the other socket


----------



## Syjeklye

Here's some stuff from work. This is 1 of about 50 servers we have running commercials in cable tv headends.

Mini fridge sized









10 Scsi Drive capable!









Pentium 133mhz cpu board!









Typical Cable TV Headend, unit is at the bottom.









These things run Windows NT 3.51 and proprietary software.


----------



## dhenzjhen

Quote:


> Originally Posted by *fventura03;14464128*
> amazing! was listenign to your mp3 server for a little bit.


haha cool!!


----------



## Eaglake

Quote:


> Originally Posted by *d4rkf0rm;14463988*
> its my day to day computer and my gaming computer, but it hasnt seen much action in a while since ive been working on my rack (work in progress....):


Mmm








Looks nice.
Actually I have always wanted to get myself a rack. To put servers and switches and stuff in there


----------



## ExperimentX

My humble submission... alas, they are all for sale now


----------



## d4rkf0rm

Quote:


> Originally Posted by *ExperimentX;14465023*
> My humble submission... alas, they are all for sale now


thats a very sad story...








why are they for sale? why not keep them?


----------



## dhenzjhen

Quote:


> Originally Posted by *Raven Dizzle;14464624*
> My home server? Well, it also acts as my media center PC, and is hooked up via an HD4870 HDMI to my television, and streams from my main PC wirelessly to my TV...but I also use it as a file server as well. I used to use my PS3 as my HTPC...but that cinevia crap...pffft. I also use it for rendering when I'm video editing.
> 
> Anywho...I also have a dedicated server box that I own and colocate in Atlanta....SO I'll post the specs of both.
> 
> Home Server/HTPC:
> CPU:AMD Phenom II 1090T @ 4GHz
> CPU Cooler: Thermalright TRUE Black (may put old NH-D14 on there soon)
> MOBO:ASUS Crosshair IV Formula
> RAM:2x4GB OCZ Gold 1333Mhz DDR3
> HDD:250GB SATA2 WD Caviar (because it doesn't need much space - as it streams)
> CASE: HAF 932
> GPU: HIS HD 4870 512MB (reference)
> How I stream to it: VLC...looking for better/easier solutions.
> OS: Windows 7 Professional x64
> 
> Dedicated Server:
> http://www.tyan.com/product_SKU_spec.aspx?ProductType=BB&pid=423&SKU=600000179
> 
> Right now, it's sitting with a single 8-Core opteron, and 32GB of DDR3 unbuffered CRUCIAL memory. Runs ESXI. Looking to get another CPU to fill the other socket


Good thing you got the 8236, not the 8230. The 8230 2nd batch sucks!! There's always a problem with the DIMMs on CPU0 getting MCE errors all the time.


----------



## ExperimentX

Quote:


> Originally Posted by *d4rkf0rm;14465169*
> Quote:
> 
> 
> 
> Originally Posted by *ExperimentX;14465023*
> My humble submission... alas, they are all for sale now
> 
> 
> 
> 
> 
> 
> 
> 
> 
> thats a very sad story...
> 
> 
> 
> 
> 
> 
> 
> 
> why are they for sale? why not keep them?

Just some RL stuff, basically liquidating everything that I don't 'need' (except my rig, cause that would just be too depressing).

Any electronics that I haven't touched in a while though will be sold right off. You'll see me posting a lot of stuff in the coming days.


----------



## d4rkf0rm

Quote:


> Originally Posted by *ExperimentX;14465551*
> Quote:
> 
> 
> 
> Originally Posted by *d4rkf0rm;14465169*
> 
> Just some RL stuff, basically liquidating everything that I don't 'need' (except my rig, cause that would just be too depressing).
> 
> Any electronics that I haven't touched in a while though will be sold right off. You'll see me posting a lot of stuff in the coming days.

if i had the extra cash i would take all of your machines, they look like they're in pristine condition. i would be heartbroken to see them go


----------



## riflepwnage

Home Server/HTPC
Basically used for watching netflix on HDTV and some movies and shows on my Hard drives








AMD X2 6000+ AM2
ASUS M4A785-M
ATI HD4200 Onboard with HDMI Out
4GB of RAM Assorted no name
5X 2TB WD Caviar Green RAID 5
Rocket raid 2320 PCI-E Raid Card
Samsung 640GB Spinpoint
Antec 430 Earth Watts PSU
NZXT HUSH 2 case white


----------



## fventura03

nice setup, netflix is so overrated though







.


----------



## herkalurk

Quote:


> Originally Posted by *fventura03;14551056*
> nice setup, netflix is so overrated though
> 
> 
> 
> 
> 
> 
> 
> .


Netflix is going to start losing customers. I like their online service, and will keep that, but will be dropping DVDs. We rarely use the DVDs, forget to send them back, and find tons of things to watch online. Sadly, though, they are now making their customer base choose between services. Before, they were very competitive and pretty much on their own because the Blockbuster experiment kinda tanked; now that the online service is its own service, they have lost that advantage of DVD and streaming for a lower price.


----------



## fventura03

Quote:


> Originally Posted by *herkalurk;14556998*
> Netflix is going to start loosing customers. I like their online service, and will keep that, but will be dropping DVDs. We rarely use them dvds, forget to send them back, and find tons of things to watch online. Sadly though, they are now making their customer base choose between them and other services. Before they were very competitive and were pretty much on their own because the blockbuster experiment kinda tanked, now however since the online service is it's own service, they have lost that advantage of dvd and streaming for a lower price.


i tried the trial, and the selection of new movies they had for streaming was horrible, it wasn't even good quality, i'd rather stick to my "other" means of watching videos.


----------



## mbudden

Quote:


> Originally Posted by *fventura03;14564465*
> i tried the trial, and the selection of new movies they had for streaming was horrible, it wasn't even good quality, i rather stick to my "other" means of watching videos.


Cool story bro.
Let's hope the MPAA slaps you with a good ol' fine.


----------



## hick

OS: Windows 7 x64
Case: Norco 470
CPU: Athlon 5200+
Motherboard: Biostar 780g
Cooling: 5x Emmermax 80mm
Memory: 2x2gb ddr2
PSU: Antec EA500
OS HDD: 1.5tb (shared for random downloads)
Storage HDD(s): ~13 TB (well advertised TB...)
Server Manufacturer: My hand

What you use it for - Media Server
Temps, loudness, etc. - Can't hear it 3' away, all drives around 38 degrees

Top to bottom.. CCTV DVR, 24port patch panel, 16port dlink GIGA switch, UPS's, HTPC, Onkyo rc260, Server
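The "well, advertised TB" aside is the usual decimal-vs-binary units gap: drive makers count a terabyte as 10^12 bytes, while the OS reports binary TiB (2^40 bytes). A quick sketch of the conversion (the helper name is just for illustration):

```python
# Vendor-advertised decimal TB vs. what the OS actually reports.
def advertised_tb_to_tib(tb):
    """Convert decimal terabytes (10**12 bytes) to binary tebibytes (2**40 bytes)."""
    return tb * 10**12 / 2**40

print(round(advertised_tb_to_tib(13), 1))  # 11.8 -- hick's ~13 "advertised" TB
print(round(advertised_tb_to_tib(2), 2))   # 1.82 -- why a 2TB drive shows ~1.82
```

So roughly 7% of every "advertised" terabyte evaporates before filesystem overhead even enters the picture.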


----------



## fventura03

Quote:


> Originally Posted by *mbudden;14564531*
> Cool story bro.
> Let's hope the MPAA slaps you with a good ol' fine.


umad.jpg


----------



## the_beast

Quote:


> Originally Posted by *fventura03;14564769*
> umad.jpg


ustealinbutthinkitsokforeveryoneelsetopaybutyoubro.jpg


----------



## fventura03

nonsense.


----------



## uzer

Dual-Core Xeon "Woodcrest" Servers
2GB ECC Registered Memory


----------



## parityboy

Quote:


> Originally Posted by *mbudden;14564531*
> Cool story bro.
> Let's hope the MPAA slaps you with a good ol' fine.


Heh, now now kittens let's play nice. We aren't our brother's keeper. Live your own life, take your own risks.


----------



## herkalurk

Quote:


> Originally Posted by *fventura03;14564465*
> i tried the trial, and the selection of new movies they had for streaming was horrible, it wasn't even good quality, i rather stick to my "other" means of watching videos.


720P is bad quality?


----------



## joshd

... 720P is really good quality to most people ...


----------



## Imrac

Quote:


> Originally Posted by *herkalurk;14579313*
> 720P is bad quality?


I think he means that most of netflix streaming is not 720p. At least the "blockbuster" titles aren't. That being said, I don't condone his piracy.


----------



## hick

Quote:


> Originally Posted by *herkalurk;14579313*
> 720P is bad quality?


Quote:


> Originally Posted by *joshd;14581687*
> ... 720P is really good quality to most people ...


Quote:


> Originally Posted by *Imrac;14582328*
> I think he means that most of netflix streaming is not 720p. At least the "blockbuster" titles aren't. That being said, I don't condone his piracy.


720p is just a resolution; the quality comes from the bitrate, and Netflix's bitrate is terrible. But if you watch Netflix on a 12" monitor it might look decent.
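One rough way to quantify that point is bits per pixel: divide the bitrate by pixels-per-second and streams at the same resolution become directly comparable. The bitrates below are assumed ballpark figures for illustration, not measured Netflix or Blu-ray numbers:

```python
# Bits-per-pixel: a crude but useful quality metric showing why
# resolution alone says nothing about picture quality.
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average encoded bits spent on each displayed pixel."""
    return bitrate_bps / (width * height * fps)

# Assumed ~3 Mbps 720p stream vs. an assumed ~25 Mbps 1080p Blu-ray, both 24 fps
stream = bits_per_pixel(3_000_000, 1280, 720, 24)
bluray = bits_per_pixel(25_000_000, 1920, 1080, 24)
print(f"stream: {stream:.3f} bpp, blu-ray: {bluray:.3f} bpp")
```

Under those assumptions the disc gets several times more bits per pixel than the stream despite the stream's "HD" label, which lines up with hick's complaint.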


----------



## fventura03

Quote:


> Originally Posted by *herkalurk;14579313*
> 720P is bad quality?


netflix is not 720p... it says it's HD but it must be downgraded a lot because it's pretty bad. I think hulu does a better job in terms of quality of video.


----------



## jameslapc2

no, the fastest is my sig rig


----------



## gsa700

OS: Gentoo Linux

Case: Antec something....

CPU: AMD Phenom II x2 560 Black Edition ( Does unlock to an x4 B60 but I have no need for that on a file server so I'm running it stock.... )

Motherboard: MSI 880GM-E35 ( I got this for $10 in a combo with the cpu at Microcenter ) It's a nice board with 6 SATA 3 connectors.

Cooling: Stock cooler from my 1090t

Memory: 4 GB G.Skill Ripjaws

PSU: Antec 380w

OS HDD: OCZ 60 Gb SSD

Storage HDD(s): 2 x 1 TB RAID 1

What you use it for: Mainly a file server for my LAN, but also a Vent server for chatting with my buds...


----------



## herkalurk

I've had issues with bad quality from Netflix, however the newer shows that were recorded in HD look fine on my 42" 1080p LCD TV. It's not 1080p, but for TV watching it doesn't seem any different.


----------



## cyclist14

Do virtual servers count?

Anyways, right now I am running 5 VM's on my G71 for a testlab, using almost 5GB of RAM. I hope to get myself a dedicated server for ESXi and I am poking around trying to find something in the <$500 range, right now I am looking at a few 1950's on eBay that fit the bill.

I've also got two old 1U servers that I got from a friend with P4 Northwoods, 1GB RAM and 20 GB HDD's that I am trying to find a use for, might give FreeNAS a try on one of these guys.


----------



## herkalurk

My work just got some new VM beauties.....

HP DL 580 G7

4x Intel Xeon X7550, 512GB RAM, plus 12 add-in 1Gb Ethernet ports; onboard it has 2 10Gb copper ports as well. We do everything iSCSI with our VM cluster so we need a boatload of Ethernet anyway.


----------



## skatingrocker17

I just use a Western Digital Green 1.5TB hard drive in an external enclosure connected to a Netgear 3500L for all of my backups.

I used to use a Dell Dimension E520 with Windows Home Server but I stopped using it to save power.


----------



## cyclist14

Quote:


> Originally Posted by *herkalurk;14618691*
> My work just got some new VM beauties.....
> 
> HP DL 580 G7
> 
> 4x intel xeon x7550, 512 GB ram, added on 12 1 gig ethernet ports, on board it has 2 10 gig copper ports as well. We do everything iscsi with our VM cluster so we need a boat load of ethernet anyway.


Wow, that is some serious power. I sent two DL580 G5's out to a remote site a few weeks ago; they were pretty close to base as far as hardware goes, only two sockets filled with 2.26GHz Xeons, 8GB RAM and 6x 73GB SAS drives.

Right now I've got two DL380 G7's sitting in our back room that I installed ESXi 4.1 on. Those are pretty nice: 2x Xeon X5570, 20GB RAM, 4x 500GB 7200RPM drives. We do all of our VMs through NFS exports off of NetApp products, so I can understand where you are coming from as far as Ethernet link aggregation.


----------



## svtfmook

media server built from left over parts
amd 5200
asus m2n deluxe motherboard
1gb ddr2-800 ram
640gb black main drive (os and music, home videos, pictures)
1.5tb samsung secondary media drive (movies)
evga 7900gs video card
diablotek 400w powersupply
coolermaster centurian 5 case
Ubuntu 11.04 64bit (stripped a lot of the programs out, running xfce on it to lighten it up as much as possible)
running twonky server to stream movies to my logitech revue


----------



## Norse

Changing one of my servers over to FreeNAS this weekend; after poking about with VMware I found it'll work fine for my needs

Sempron 140 unlocked to dual core
2GB
Asus M4A78LT-M LE
2x2TB Drives
Its 500GB drive is then used for additional storage


----------



## t-ramp

I don't really have a server at the moment, but once I get my hardware sorted out I'll probably throw something together. Chances are it'll include a Sempron 140, but the rest is mostly up in the air.


----------



## Quantum Reality

OS: Windows XP 32-bit
Case: Black generic
CPU: Pentium Dual Core E2180 @ 2.66 GHz
Motherboard: Asus P5Q
Cooling: Arctic Cooling Freezer 7 Pro on the CPU
Memory: Kingston 2 GB DDR2-800
PSU: OCZ 500 Watt
OS HDD: Western Digital 160 GB IDE drive
Storage HDD(s): 2 x Western Digital 500 GB SATA in RAID 1
Server Manufacturer: Me

What you use it for (Print server, backups, file server, etc.): Web server
Not too loud of a server - has a fan in the back plus the PSU and CPU fans.
Uses XAMPP for the webhosting
Pics: http://www.overclock.net/intel-build-logs/605721-new-webserver-build.html
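For scale, the XAMPP stack above bundles Apache, MySQL and PHP; a bare static file server needs far less. This is just an illustrative stdlib sketch, not a replacement for that setup:

```python
# Minimal static web server using only Python's standard library.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port=8080):
    """Serve files from the current working directory over plain HTTP."""
    return HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)

# To run it:  make_server().serve_forever()
# then browse to http://<server-ip>:8080/
```

Handy for quickly sharing a folder on the LAN, though for anything public-facing you'd still want a real web server like the XAMPP/Apache setup described above.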


----------



## Pentium-David

Bump. I wanna see some more servers


----------



## Jtvd78

Quote:


> Originally Posted by *Pentium-David;15361426*
> Bump. I wanna see some more servers


This


----------



## Sean Webster

Here is my media server/Dad's PC

i3 2100
Asus P8Z68-V
8gb RipjawsX
HAF 912
Antec Neo 520
Crucial m4 64GB
6 Samsung 2TB F4's(only 2 in the pic b/c it is an old pic lol)


IMG_6601.jpg by seanwebster1212, on Flickr


----------



## Liighthead

niceeee ^


----------



## The_Punisher

OS: ESXi 5.0
Case: Rosewill RSV-L4000 4U rackmount case w/ rails
CPU: 2x Xeon E5405 2.0Ghz, 12MB L2, 4C/4T
Motherboard: SuperMicro X7DA3
Memory: 8x2GB DDR2 FB-DIMM ECC

Drives:
4GB USB (ESXi)
320GB SATA (VM's)
2x 160GB IDE drives (redundant firewalls)
3x1TB WD Green drives (Media drives, RAID-Z)
HP SmartArray P400 SAS RAID Card

Cooling: 2x CM Hyper 101
PSU: Antec NEO ECO 620W
Network: 5 gigabit NICs, assorted
Server Manufacturer: Me, using parts stripped from an HP server.

VMs:
pfSense Firewall
pfSense Firewall (failover)
MineOS CRUX (Minecraft server)
FreeNAS 7
OpenSUSE (testing, other game servers, etc.)



















Build log in sig. My first server build, overall a great experience. I've learned a lot since building it. Running next to my desktop it is completely silent, compared to the 1U HP server it came from which sounded like a jet airplane.


----------



## kamyu

@The_Punisher nice build


----------



## Pentium-David

@ The Punisher

I'm jelly







That is a nice build man!


----------



## L0GIC

Picked up an old server to have a play with, ear plugs are a necessity in close proximity.









2 x Intel Xeon 2.4GHz
2GB ECC RAM
5 x 36GB SCSI HDD's

Sorry for poor quality pictures.


----------



## The_Punisher

Quote:


> Originally Posted by *kamyu;15371880*
> @The_Punisher nice build


Quote:


> Originally Posted by *Pentium-David;15387353*
> @ The Punisher
> 
> I'm jelly
> 
> 
> 
> 
> 
> 
> 
> That is a nice build man!


Thanks. It's not even the best server in this thread, but it is pretty powerful I guess. The best part is that the CPUs, RAM and HDDs came out of a server that was given to me for free, so the cost for me to rebuild the whole thing was less than $300


----------



## Arinoth

Here is what I can remember:

OS: Windows Home Server 2008 (probably going to switch this to Windows 7 mainly due to the fact Team Viewer sees it as a corporate license and is now preventing me from using their free home user version)
Case: Norco RPC-450 4U Rackmount Server Case
Processor: Intel Core i3 2100
Motherboard: Gigabyte H67M-D2-B3 mATX
RAM: Corsair 2X2GB DDR3
Power Supply: Seasonic M12II 520W Modular
Hard Drives:
3x2TB WD HDDs
2x1.5TB Seagate HDDs
Raid: Virtual Raid 0 (FlexRaid)


----------



## snazy2000

@Arinoth Nice Setup, and why not just use RDP? or VNC?


----------



## ZFedora

Chassis: Chatsworth 4U Rackmount
CPU: Intel Core i7 2600k
PSU: Corsair GS600
Motherboard: Intel DP67DE
RAM: 12GB G.Skill DDR3 1333
HDD: 3x Seagate 1TB 7200rpm
OS: Windows Server 2008 R2 Standard
Rack: Chatsworth 45U two post open frame


----------



## NateN34

I'll post a picture of my server tomorrow...VERY ghetto lmao.


----------



## killabytes

Quote:


> Originally Posted by *ZFedora;15403860*
> Chasis: Chatsworth 4U Rackmount
> CPU: Intel Core i7 2600k
> PSU: Corsair GS600
> Motherboard: Intel DP67DE
> RAM: 12GB G.Skill DDR3 1333
> HDD: 3x Seagate 1TB 7200rpm
> OS: Windows Server 2008 R2 Standard
> Rack: Chatsworth 45U two post open frame












LOL nice 404 error.


----------



## overclocker23578

Jesus, all you people with your massive racks and 64 cores and stuff.

My server that should be up and running in a few days will have a Pentium D 820, some rubbish mobo, and 2GB of RAM







. Will be used for FTP, VPN, SRCDS. Will post pics when I get it up and running.


----------



## Pentium-David

Quote:


> Originally Posted by *overclocker23578;15412028*
> Jesus, all you people with your massive racks and 64 cores and stuff.
> 
> My server that should be up and running in a few days will have a Pentium D 820, some rubbish mobo, and 2GB of RAM
> 
> 
> 
> 
> 
> 
> 
> . Will be used for FTP, VPN, SRCDS. Will post pics when I get it up and running.


Nice, better than mine







that's perfect for a server. How much hard drive space?


----------



## Freakn

Here's my little helper at the moment

OS- W7
Chassis- Custom Test Bench
Proc- I7 920 @ 4.2
MB- X58 UD5
GPU- ASUS GTX550ti DC1
OS- 80GB WD
Storage- 2x 2TB, 1x 1.5TB, 3x 1TB, 1x 500GB
Optical- Samsung 22x DVDR
PSU - CM 1200 Gold

Main purpose is as a media server for the house, but it was designed with Folding@home in mind as the second purpose.

Sorry bout the crap photo


----------



## Liighthead

nice... seems like organised chaos in there


----------



## JeremyFr

OS: Windows 7 Professional 32bit (too lazy to run a true server OS)
Case: Norco RPC-470
CPU: Intel E2200 2.4Ghz Core2Duo
Motherboard: Asus P5LD2
Cooling: 2x120mm fans, 6x80mm fans
Memory: 4x512MB Kingston DDR2 667Mhz
PSU:Antec Earthwatts 500
OS HDD: 80GB Seagate Barracuda
Storage HDD(s): 2x250GB Seagate Barracudas, 1x320GB Seagate, 1x160GB Seagate, 2x1.5TB Seagates, 2x2TB Samsung F3's
Server Manufacturer: Me

Server is used for File/Print Server. I use Media Center Master for media meta data.

Once we get moved into a house this next year it'll be racked up in a closet with other gear as well, but for now it sits on a bookcase lol


----------



## the_beast

Quote:


> Originally Posted by *Arinoth;15395372*
> Here is what I can remember:
> 
> OS: Windows Home Server 2008 (probably going to switch this to Windows 7 mainly due to the fact Team Viewer sees it as a corporate license and is now preventing me from using their free home user version)


Quote:


> Originally Posted by *snazy2000;15395673*
> @Arinoth Nice Setup, and why not just use RDP? or VNC?


VNC is pretty slow and RDP screws up GPU folding. Both require some setting up to use them securely from outside your home LAN also.

The free version of LogMeIn runs fine with Server OSes though and can be used from anywhere with little in the way of setting up - it's what I use, and it's pretty useful.


----------



## mbudden

VNC is fine if you're doing little things.
But then again, that's just my opinion since I've only used VNC in Linux and not Windows.


----------



## killabytes

I've never had an issue with VNC being slow.

I'll post some updated pictures once I get the rest of my drives in!


----------



## Syjeklye

vnc is great. It only really gets "slow" when you're vnc'ing from a vnc.

just be glad you don't have to use pcanywhere. NT 3.51 can't use vnc and the only remote software I can use is rcmd and pcanywhere.


----------



## The Pook

you guys suck. all of you. i'd kill to be able to have the _room_ to run a server.


----------



## NKrader

Quote:


> Originally Posted by *The Pook;15466858*
> you guys suck. all of you. i'd kill to be able to have the _room_ to run a server.


+1

my girlfriend doesn't much like my current server room.

my bedroom lol.. good thing i run Sossamans so i can have quiet fans


----------



## Freakn

Mine sits in my garage


----------



## bobfig

if you're talking about noise then i can't hear mine unless the room is dead silent or the Maxtor drive is doing something.

as for room, you can make it as small as you want. i ran a small Atom ITX file server for a while and i'm sure you could find a small enough case.


----------



## killabytes

Quote:


> Originally Posted by *The Pook;15466858*
> you guys suck. all of you. i'd kill to be able to have the _room_ to run a server.












I'm currently moving my 1U servers to the basement. I'm tired of hearing the noise while BF3ing.


----------



## joshd

Do you guys think a small Intel Atom should be able to serve, say, three people all at once? I would be running a Linux distro.


----------



## the_beast

Quote:


> Originally Posted by *joshd;15474259*
> Do you guys think a small Intel Atom should be able to say, three people all at once? I would be running a Linux OS Distro.


to do what? Serve media files? Or provide a full productivity suite?

For media you'll be fine - but don't expect the Atom to do much more than that.


----------



## joshd

Yeah, just to serve files, documents, music, films etc.

What about a Celeron 440 @ 2Ghz then?


----------



## the_beast

A P3 can serve media to multiple clients - a 440 would be fine. Power consumption will be a little higher though.


----------



## bobfig

i wouldn't want a single core cpu. it may be fine with file transfers but i feel it wouldn't be able to do any transcoding (if needed) or run a good game server. a good start would be an AMD AM3 socket setup. check out the AMD Athlon II X2 250.


----------



## joshd

Are you sure :s

It's only £35.00, seems too cheap?


----------



## ilhe4e12345

Quote:


> Originally Posted by *jameslapc2;14584370*
> no the fastest is my my sig rig


i think.....i think im in love with your sig-rig......will it marry me?


----------



## joshd

Come on guys. I want to see more *awesome* servers!


----------



## subassy

I just built an iSCSI box. Not awesome in any way though, very boring. I'll have to post pics of both it as well as my two others whenever I get them finished (one is kind of a legacy thing, the other my "VM server").


----------



## ZFedora

Update:

OS: Windows Server 2008 R2 Datacenter
RAM: 12GB G.Skill DDR3 1333Mhz
CPU: Intel Core i7 2600k
HDD: 2x Hitachi 1TB, 2X Seagate 1TB
PSU: Corsair GS600
GPU: PNY Verto 9600GT
Case: Chatsworth 4U
Rack: Chatsworth 45U 2 post open frame

Misc:

Cisco Small Business 16 port switch
Trendnet 16 port patch panel
Netgear FSV318 8 port VPN/Firewall
1500watt APC/Battery backup

I'll post more pictures later


----------



## starwa1ker

Just built this new server over the weekend, here it is:


















Specs:
Intel Atom D510 1.66GHz w/ motherboard
4GB RAM
Corsair CX400 PSU
Lian Li V354B mATX Case
2TB x 2 (Samsung F4 + WD Green)

The whole server cost about $300.

Running:
Windows Home Server 2011
Air Video Server
Streamtome Server
Filezilla Server
Subsonic Server


----------



## joshd

That really is a cool little server, but with a lot of storage space. Do you find that the Intel Atom is generally quick enough?


----------



## starwa1ker

Quote:


> Originally Posted by *joshd;15502202*
> That really is a cool little server, but with a lot of storage space. Do you find that the Intel Atom is generally quick enough?


I was doubtful at first too, but it really surprised me how much the little guy can handle. No problems whatsoever. It's just a little slower turning on and off.


----------



## FiX

http://www.overclock.net/case-mod-work-logs/1155432-matx-board-mitx-case-high-res.html
My soon-to-be home server


----------



## joshd

Do you just fileserve with it, starwa1ker?


----------



## michael_sj123

OS: Windows Home Server
RAM: Corsair Dominator DDR3 1600MHz 4GB CL8
CPU: AMD Phenom II X4 955 @ 3.4GHz
HDD: Western Digital Green 500GB (more to be added once money reaches account lol)
MB: Gigabyte GA-MA770T-UD3
PSU: Corsair 400W something (going to replace it with a Chieftec Nitro+ 500W)
GFX: ASUS GeForce GTX 260

I'm using it as a home server, where it backs up my computer, stores the images, videos and all the other stuff that I download, and acts as a simple status-getter for my gameservers


----------



## mbudden

Quote:


> Originally Posted by *starwa1ker;15502874*
> I was doubtful at first too, but it really surprised me how much the little guy can handle. No problems whatsoever. It's just a little slower turning on and off.


Heck, I'd be impressed by the power consumption alone.


----------



## the_beast

Quote:


> Originally Posted by *michael_sj123;15509003*
> GFX: ASUS GeForce GTX 260


If that GPU isn't busy, have you considered folding on it to aid medical research?


----------



## michael_sj123

Quote:


> Originally Posted by *the_beast;15510171*
> If that GPU isn't busy, have you considered folding on it to aid medical research?


I have considered it, but I do not know how I do it. I thought folding required a CPU, and not a GPU? Or if I can use both, it's just idling right now, if I can help medical research, then I would gladly fold 24/7.


----------



## the_beast

check the link I posted - there are a few guides to getting things set up. If you have any questions at all then head over to the folding section here and we'll help you out - the basic setup is pretty straightforward but there are a few bits that can catch you out, but there are lots of clever & friendly people over there who can help you out if you get stuck.

You can fold on both the CPU and GPU in both your server and your sig rig - between them they're capable of some serious folding!


----------



## starwa1ker

Quote:


> Originally Posted by *joshd;15507066*
> Do you just fileserve with it, starwa1ker?


Yes, plus media streaming.


----------



## Nish

OS: Windows 7 Professionial
CPU: Core2 Duo E8400
Motherboard: DFI P45 T2Rs Jr
GPU: Ati Radeon 4870 x 2
Cooling: Thermalright 120
Memory: 4 x 2gb DDR2
PSU: Some 600w thing
OS HDD: 500gb Samsung
Storage HDD(s): 7 x WD 1Tb Green and 1 x 1.5Tb Green, 1 x 320gb WD IDE








Controller: Highpoint RocketRAID 2320
Server Manufacturer: me









Mainly used as a File and Print server and hosting my guild chat bot for AoC.

HDD Temps are on the high 30's and I can barely hear it over my sig rig.

I will take some pics the next time I pull it out. The case is a Massive Cube thing. I have no idea on brand or maker as I bought it second hand. It is split in 2 parts internally, 1 for the motherboard side and the other for drives. It holds 10 x HDD's on the back and has slots for 5 x internal 5.25" drives on the front.


----------



## Pentium-David

Quote:


> Originally Posted by *joshd;15476400*
> Yeah, just to serve files, documents, music, films etc.
> 
> What about a Celeron 440 @ 2Ghz then?


Pretty sure even a Slot 1 P3 would be perfect for that, and low power consumption....my Linux P3 Dell computer idled at 15W....compared to my main server which is a Celeron D, idles at like 77W...

Just googled this...looks like a slot 1...possibly a P2 lol
http://www.moesrealm.com/img/server_guts.jpg
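Those idle figures are worth putting in yearly terms — the gap between the 15W P3 and the 77W Celeron D adds up. A quick sketch (the $0.12/kWh electricity rate is an assumed example, and 24/7 idle is assumed):

```python
# Annualized cost of the idle-power gap: 15W P3 vs. 77W Celeron D,
# assuming 24/7 uptime and an example rate of $0.12/kWh.
def annual_kwh(watts, hours=24 * 365):
    """Energy in kWh for a constant draw over a year of uptime."""
    return watts * hours / 1000

diff_kwh = annual_kwh(77) - annual_kwh(15)
print(diff_kwh)                    # 543.12 kWh/year
print(round(diff_kwh * 0.12, 2))   # 65.17 dollars/year at the assumed rate
```

In other words, a low-power box can pay for itself over a couple of years of always-on duty, which is much of the appeal of P3/Atom-class file servers in this thread.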


----------



## Liighthead

Quote:


> Originally Posted by *Pentium-David;15528985*
> Pretty sure even a Slot 1 P3 would be perfect for that, and low power consumption....my Linux P3 Dell computer idled at 15W....compared to my main server which is a Celeron D, idles at like 77W...
> 
> Just googled this...looks like a slot 1...possibly a P2 lol
> http://www.moesrealm.com/img/server_guts.jpg


i'd go with like an Atom ;D low power n a bit more speed than a p3


----------



## joshd

P4?


----------



## mbudden

TBH. If it's just a file server or something, a P3 would do just fine.
No sense in having a spec'd out server if it's not going to be doing much.


----------



## joshd

But.. a P4 server is cheaper on Ebay than a P3 server :$.

(according to my research.)


----------



## bobfig

But don't they use tons of power? You might be better off with a 5xx series Atom board for less power consumption.


----------



## joshd

Good plan. I tend not to think of that as my parents pay the bill.


----------



## wizardz

here is a picture of my setup.









basically top left server is an old HP ML150 G2 serving as a pfSense 2.0 firewall (Xeon 2.8, 2GB RAM, 2x 80GB RAID1, 4x 10/100/1000 Intel adapters)

top right is a Windows 2008 R2 fileserver/domain controller (some HP quad core crap with a ghetto RAID setup: 1x 350GB, 1x 500GB, 1x 750GB) that will soon be moved to my "backup site"

bottom left is one of the 2 main 2008 R2 file servers. currently this is an old athlon xp 4800+ with 2GB RAM and an 8x 500GB RAID5 array that holds some media.

bottom right is an Athlon BE-2350 with 8GB RAM that hosts XenServer (some Win2k machines that run domain duties and some trixboxes for VoIP duties)

oh, and we have a fiber trunk that runs between my basement and my neighbour's basement (redundant site), where he hosts another 10TB of storage and some other VMs on his VMware host.

oh and my ghetto rack made of 2x4 leftovers from my patio


----------



## ChRoNo16

Server 1: VMware 5.0, HP DL320 G4, 1x Pentium D 3.2GHz, 8GB DDR2 RAM, 1x 80GB (VMware OS and ISO storage), 1x 500GB (VMs and their drives).

Server 2: VMware 5.0, 1x Xeon X3220 2.4GHz (quad), 6GB DDR3 (would be 8, but dead mobo slot), EVGA 790i SLI FTW board. 1x 500GB for the OS and a few system drives, and a couple of 2.5" drives for more storage (honestly don't use this server right now, don't need as many VMs for a while).

Server 3: Windows Server 2003, AMD quad-core, 2GB RAM. 3U server case, 12x 500GB hot-swap drives. Goes unused; need to find a cheap PCIe video card for it, and a little more RAM.


----------



## Oedipus

I work with a network consulting company that serves primarily small to medium-sized agricultural businesses. Most of our deployments are single-server (DC and DB server) or two servers (DC/DB on one box, RDS/TS on the other), so I was pleased when we finally got to do a three-server install. Yeah, I'm easily excited. I won't name the company, but I'll bet you've eaten fruit that went through this facility.

Server 1:

OS: Windows Server 2008 x64 Enterprise Edition
CPU: Dual Intel Xeon X5650s
Memory: 24GB ECC DDR3 1066
HDD: 8 x 2.5" 10k 146GB SAS
Server Maker: Dell R710

Purpose: Domain Controller, file and print server

Server 2:

OS: Windows Server 2008 x64 Standard Edition
CPU: Dual Intel Xeon X5650s
Memory: 24GB ECC DDR3 1066
HDD: 6 x 2.5" 15k 146GB SAS
Server Maker: Dell R710

Purpose: Oracle database server

Server 3:

OS: Windows Server 2003 x86 Standard Edition
CPU: Intel Xeon X3450
Memory: 4GB ECC DDR3 1066
HDD: 2 x 2.5" 15k 300GB SAS (RAID 1)
Server Maker: Dell R310

Purpose: Application server

Here are a couple of pics:

Server 2 and Server 3, getting staged:










In the rack:










There's an R300 and an Optiplex 360 in there too. Not sure of the specs on either; the Opti controls their sprinklers and the 300 will eventually be brought on as a secondary DC.


----------



## MaroonZ24

OS: Windows XP
Case: Generic Xoxide case
CPU: Intel Celeron 2.7GHz
Motherboard: SuperMicro P4SCI
Cooling: Stock heatsink
Memory: 2.5GB
PSU: Allied 400W
OS HDD: 80GB Seagate (storage also)
Server Manufacturer: Myself

What you use it for: Minecraft Server, File Server.
Temps, loudness, etc: Minimal


----------



## michael_sj123

Updated my server; it's got a new case, a new CPU cooler (stock didn't quite work out with folding...) and a new PSU.


----------



## the_beast

Quote:


> Originally Posted by *MaroonZ24*
> What you use it for: Minecraft Server, File Server, collecting dust bunnies


fixed


----------



## joshd

I hear a Minecraft server has to be pretty well spec'd, with a good connection?


----------



## morgofborg

Quote:


> Originally Posted by *joshd;15559228*
> I hear a Minecraft server has to be pretty well spec'd, with a good connection?


Yeah, it likes RAM, and you need decent upload speed to host. I only have about 2.5Mb upload and can only host 6-8 people before it gets laggy.
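Those numbers work out to roughly a third of a megabit of upload per player. A quick back-of-the-envelope sketch (the per-player figure is an assumption; real usage varies with view distance, activity, and server version):

```python
def max_players(upload_mbps, per_player_mbps=0.35):
    """Rough ceiling on how many Minecraft players a connection can host.

    per_player_mbps is an assumed average upload cost per player,
    picked to roughly match the 6-8 players on 2.5Mb reported here.
    """
    return int(upload_mbps / per_player_mbps)

# 2.5 Mb/s up -> about 7 players, in the middle of the reported 6-8
print(max_players(2.5))   # 7
# 0.25 Mb/s up -> effectively nobody
print(max_players(0.25))  # 0
```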


----------



## joshd

Wow. That really is bad. I have no hope then, with 0.25Mb up :/...


----------



## Warhaven

Here's my wee file server for my site:

Manufacturer: Apple
Model: 2011 Mac Mini
CPU: 2.3 GHz Core i5
RAM: 8 GB DDR3 1333
HDD: 500 GB
OS: OS X 10.6.8 Server

And has the following peripherals:

1TB RAID 1 (2 x 1TB) over FireWire 800
1TB RAID 1 (2 x 1TB) over USB 2.0


----------



## Warhaven

Quote:


> Originally Posted by *wizardz;15549257*
> 
> oh and my ghetto rack made of 2x4 leftovers from my patio


If you want to redo it in style, there are several Ikea tables that happen to be exactly 1U in width.


----------



## joshd

Quote:


> Originally Posted by *Oedipus;15552933*


Wow.


----------



## CaptainBlame

I run a ZFS-based (including ZFS boot) FreeBSD 8-STABLE combined server and media PC.

File/email/NZB services run in a jail, and XBMC runs on the host DE, outputting to my lounge TV. The system boots straight into XBMC and is shut down via XBMC using my Harmony remote. Everything else is done through SSH and tmux, or VNC.

Hardware
Intel E5300, 8GB RAM, Nvidia 9600GT Silent in a Silverstone LC17

2x 750GB mirrored in the root pool, 4x 1TB raidz in the storage pool. The reason I run two pools is more of an AIX UNIXism. All my jails live in the storage pool; in the event of a system failure I can easily import the pool on a fresh FreeBSD install on different hardware, turn the jails back on, and everything is back in business.

For backups I just plug in an external storage drive, as I don't have much to back up. My backup script imports the disk as another ZFS pool and mounts the filesystems. It then rsyncs the data over, and ZFS snapshots give point-in-time restores. Scaling is very easy with ZFS: I could just add a second external disk to the backup pool, or if I ever decide to hoard massive amounts of media I could create an offline archive pool.
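That backup flow (import the external disk as a second ZFS pool, rsync, then snapshot) boils down to a short command sequence. Here's a sketch that just builds the commands; the pool name and paths are made up for illustration:

```python
import datetime

def backup_plan(backup_pool="backup", src="/storage", dest="/backup/storage"):
    """Build the command sequence for the backup flow described above:
    import the external disk as a ZFS pool, rsync the data over, take a
    snapshot for point-in-time restores, then export the pool so the
    disk can be unplugged. Pool/path names are hypothetical examples.
    """
    stamp = datetime.date.today().isoformat()
    return [
        f"zpool import {backup_pool}",
        f"rsync -a --delete {src}/ {dest}/",
        f"zfs snapshot {backup_pool}@{stamp}",
        f"zpool export {backup_pool}",
    ]

for cmd in backup_plan():
    print(cmd)
```

Each dated snapshot then serves as a restore point, and adding a second external disk just means growing the `backup` pool.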


----------



## axipher

Fractal Design Core 1000
Asus M4A78LT M-LE
Athlon II X3 unlocked
4 GB DDR3
120 GB Solid 3 + 12 TB
WHS 2011
3 TB off-site back-up for the server itself.


----------



## Volvo

Here's a humble print server I built for school use.

OS: Windows Vista Enterprise x64
Case: Fractal Core 1000
CPU: Intel Pentium G620
MB: ASRock H61M-HVGS
GPU: ASUS HD4550
Cooling: CM Hyper TX3 w/ Delta EFC0912DE, chassis cooling by NMB 3610KL-04W-B50
Memory: 2x 2GB KVR1333D3N9
PSU: FSP Aurum AU-400
HDD: Western Digital WD800JS
Backup HDD (Windows image backup at midnight): Western Digital WD800JS
Server Manufacturer: Yours truly.

Usage: Print server, media player.


----------



## u3b3rg33k

updated parts edit








Starting from the top:
Netgear gigE switch (running who knows what)

Mac mini server
*OS:* 10.7 Server
*Case:* Mac mini case
*CPU:* Core 2 Duo
*Motherboard:* Mac mini server motherboard
*Cooling:* Mac mini cooling
*Memory:* Crucial 8GB DDR3
*PSU:* Mac mini PSU
*OS HDD:* 500GB 7k drive
*Storage HDD:* 500GB 7k drive
*Server Manufacturer:* Apple
*What you use it for:* web/email/temp storage
*Temps/loudness:* cool, inaudible

Xserve G4
*OS:* 10.4.11 Server
*Case:* Xserve 1U
*CPU:* 1.33GHz G4, 2MB L3
*Motherboard:* Xserve motherboard
*Cooling:* OEM (Al HS, blower)
*Memory:* 2GB DDR
*PSU:* OEM
*OS HDD:* 60GB ATA
*Storage HDD:* 500GB ATA
*Server Manufacturer:* Apple
*What you use it for:* NVR
*Temps/loudness:* coolish, not annoying

Intel SR2300
*OS:* Untangle
*Case:* Intel SR2300 chassis
*CPU:* dual 2.8GHz Xeon
*Motherboard:* SE7501WV2
*Cooling:* OEM (Cu/Al HS, 60x38mm fans)
*Memory:* 7GB ECC DDR
*PSU:* OEM redundant hot-swap 500W
*OS HDD:* RAID 1, 74GB U320 10k
*Server Manufacturer:* Intel
*What you use it for:* UTM/edge router
*Temps/loudness:* cool, not quiet

HP DL380 G5
*OS:* Server 2011 SBS
*Case:* DL380 2U
*CPU:* 2x quad 2.5GHz Core 2 Xeons
*Cooling:* OEM (Al HS, 6x 60x38mm blowers, 130W of fans total)
*Motherboard:* DL380
*Memory:* 12GB ECC DDR2
*PSU:* OEM 850W redundant hot-swap
*OS HDD:* 8x 74GB SAS 2.5" 10k (RAID 6 w/BBU)
*Server Manufacturer:* HP
*What you use it for:* evaluation
*Temps/loudness:* cool, sounds like a jet engine during POST

BigBox
*OS:* Ubuntu Server
*Case:* Antec 4U
*CPU:* E6300
*Motherboard:* P5WDG2 WS Pro
*Cooling:* Intel stock cooler
*Memory:* 8GB DDR ECC
*PSU:* Antec TruePower New 650 (4 rail)
*OS HDD:* 4x 2TB WD RE4 RAID 5, 2x 250GB RAID 1 (9550SXU-12 w/BBU)
*Server Manufacturer:* Me
*What you use it for:* storage
*Temps/loudness:* cold, slightly annoying


----------



## mbudden

Quote:


> Originally Posted by *Volvo;15568840*
> Here's a humble print server I built for school use.
> 
> OS: Windows Vista Enterprise x64
> Case: Fractal Core 1000
> CPU: Intel Pentium G620
> MB: ASRock H61M-HVGS
> GPU: ASUS HD4550
> Cooling: CM Hyper TX3 w/ Delta EFC0912DE, chassis cooling by NMB 3610KL-04W-B50
> Memory: 2x 2GB KVR1333D3N9
> PSU: FSP Aurum AU-400
> HDD: Western Digital WD800JS
> Backup HDD (Windows image backup at midnight): Western Digital WD800JS
> Server Manufacturer: Yours truly.
> 
> Usage: Print server, media player.


How're those new Pentiums?


----------



## Slim Shady

Quote:


> Originally Posted by *Volvo;15568840*
> Here's a humble print server I built for school use.
> 
> OS: Windows Vista Enterprise x64


* Choke, Choke *
Why not get Windows Server for FREE if it's for school?


----------



## axipher

WHS 2011 is only $50, and it has a really great remote web interface that lets you stream music and videos.


----------



## joshd

... if you are a student, get it for free @ www.dreamspark.co.uk


----------



## Pentium-David

Quote:


> Originally Posted by *MaroonZ24;15553970*
> OS: Windows Xp
> Case: Generic Xoxide Case
> CPU:Intel Celeron 2.7Ghz
> Motherboard:SuperMicro P4SCI
> Cooling: Stock Heatsink
> Memory: 2.5Gb's
> PSU: Allied 400Watt
> OS HDD: 80Gb Seagate (Storage Also)
> Server Manufacturer: Myself
> 
> What you use it for: Minecraft Server, File Server.
> Temps, loudness, etc: Minimal
> 
> ----


Duuuuuuuuuuuude, I highly, highly, highly recommend you get a new PSU for your server. Allied PSUs are complete garbage, made with terrible capacitors and no protection for your components. I'm not trying to be rude; I just don't want that junk to fry your computer and make you lose all your data.


----------



## Pentium-David

Here is my Linux box








CPU: Pentium 4 2.8GHz (No HT) socket 478
RAM: 768MB DDR333
Mobo: A-Open MX46-533V
HDD: Western Digital 40GB 7200
PSU: FSP 350W
OS: ClearOS 5.2

This is currently my router and DHCP server. Soon to be used to access my main server through a VPN. I love making use of old hardware!!!


----------



## joshd

Nice server, David. What does a DHCP server do, though?


----------



## Pentium-David

Quote:


> Originally Posted by *joshd;15607747*
> Nice server, David. What does a DHCP server do, though?


Thanks

This mobo/CPU has probably 50,000 hours of use and is still kicking. It's hooked up directly to my internet modem, so this computer hands out addresses to the other computers in the house (around 7), starting with 192.168.1.5
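For anyone wondering what "hands out addresses" means in practice, here's a toy sketch of the allocation side of DHCP. Real DHCP also manages lease times, renewals, and options like the gateway and DNS servers; the pool range just mirrors the 192.168.1.5 start mentioned above, and the MACs are made up:

```python
class TinyDHCP:
    """Toy model of a DHCP server's allocation logic: hand out the
    next free IP from a pool, and let a returning client keep the
    address it already has. Not a real DHCP implementation."""

    def __init__(self, first=5, last=254):
        self.pool = [f"192.168.1.{n}" for n in range(first, last + 1)]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac):
        if mac in self.leases:      # renewing client keeps its IP
            return self.leases[mac]
        ip = self.pool.pop(0)       # next free address in the pool
        self.leases[mac] = ip
        return ip

server = TinyDHCP()
print(server.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.5
print(server.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.6
print(server.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.5 again
```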


----------



## killabytes

Quote:


> Originally Posted by *Pentium-David;15607713*


Very strange. I was just looking at a power supply for my IP phone and saw it's from FSP Group. Then I scroll to your picture... BAM, same thing.

Made my night for some reason.


----------



## Pentium-David

Quote:


> Originally Posted by *killabytes;15608000*
> Very strange. I was just looking at a power supply for my IP phone and saw it's FSP group. I then scroll to your picture...BAM same thing.
> 
> Made my night for some reason.


Haha, well I'm glad it made your night. FSP makes a lot of power products. They're a pretty solid brand, just a little on the pricey side.


----------



## joshd

You don't want an underpowered PSU in a server, mind you...


----------



## axipher

Quote:


> Originally Posted by *joshd;15608763*
> You don't want an underpowered PSU on a server mind you...


I have an OCZ ZS 550 W powering an Athlon II X3 unlocked haha. I think I'm good for power.


----------



## Cyrious

Minecraft/UT99 server

OS: Win7 Ultimate x64
Case: a cheap Rosewill one
CPU: S939 Athlon 64 X2 3800+ @ 2GHz, 1.1V
Motherboard: MS-7093 w/ Gateway BIOS (salvaged out of an HP computer)
Cooling: Phenom II heatsink + 120mm exhaust
Memory: 4GB DDR-333 @ 400MHz (3.5GB usable due to the board being full of derp)
PSU: Antec EarthWatts 380W
OS HDD: 120GB PATA Seagate
Backup HDD: 120GB Fujitsu SATA (gen 1)
Server manufacturer: Me

This server is one I cooked up as a future Minecraft/UT99 server. For the Minecraft part it's getting 2GB of RAM to play with and will host 10 people, myself included. The UT99 part will be for 1v1, 2v2, and 3v3 matches and will get 1GB of RAM. The rest is for the OS, with a nice beefy pagefile.

Temperature-wise it runs pretty warm when the processor is not undervolted (43C idle), but undervolted I get it down to 30C idle. It's nice and quiet, all things considered.
Since it's a "headless" server with Windows installed, I use Remote Desktop to manage it. Until I finally get a job it will remain off, with the ports on my modem/router closed, as the funds needed to cover the power costs are not forthcoming.

And a pic of the guts


----------



## ZFedora

Quote:


> Originally Posted by *Cyrious;15609072*
> -snip-


Great board, using the same one for a home/small business Active Directory server


----------



## Cyrious

Quote:


> Originally Posted by *ZFedora;15609238*
> Great board, using the same one for a home/small business Active Directory server


Except for the fact that when I got this board, 2 of the caps had popped, disabling the SATA ports and turning the onboard video into garbled mush. I replaced them, and about 6 months later (it was my master rig at the time) another cap popped, this one feeding the chipset itself. If I had a pack of Rubycon capacitors or some solid ones I'd re-cap the board with those.


----------



## ZFedora

Quote:


> Originally Posted by *Cyrious;15609347*
> except for the fact that when i got this board 2 of the caps had popped disabling the sata ports and turning the onboard video into garbled mush. I replaced them and about 6 months later (it was my master rig at the time) another cap popped, this one feeding the chipset itself. If i had a pack of rubycon capacitors or some solid ones id re-cap the board with those.


Funny you mention capacitors, I actually ripped a cap off while installing a NIC, rendering a PCI lane useless. Besides that, it's been running great.


----------



## Pentium-David

Quote:


> Originally Posted by *Cyrious*
> 
> except for the fact that when i got this board 2 of the caps had popped disabling the sata ports and turning the onboard video into garbled mush. I replaced them and about 6 months later (it was my master rig at the time) another cap popped, this one feeding the chipset itself. If i had a pack of rubycon capacitors or some solid ones id re-cap the board with those.


What are they, mainly OST's? G-Luxon? (So damn common)


----------



## rickyman0319

3D Blu-ray server:

Asus 990X board
PII 740 (unlocked L3)
1x 3TB
4x 1.5TB
Dell SATA card
750GB (OS HDD)

TV shows/movies server:

Gigabyte 790 board
AMD CPU
5x 1.5TB
2x 2TB
modified H50

I am looking to upgrade the TV shows/movies server into a rack (24 bay).
Both running Win 7 at least.


----------



## Plan9

My file server:
* OS: FreeBSD 8.1
* CPU: AMD64 Phenom(tm) II X3 720 Processor (2812.55-MHz K8-class CPU)
* RAM: 8GB DDR3 (4x 2GB sticks)
* HDD: 1x 80GB (IIRC) HDD (boot disk) - UFS
* HDD: 3x 1TB HDDs - formatted into one ZFS raidz pool

On that I'm also running a few VMs:

Web server (virtual machine):
* OS: CentOS 5.something
* RAM: 250MB
* HDD: 50GB

SSH sandbox (virtual machine):
* OS: FreeBSD 8.1
* RAM: 128MB
* HDD: 2GB

Data IO services (virtual machine):
* OS: ArchLinux
* RAM: 732MB
* HDD: 20GB

I also have a dedicated box in a French data centre (ovh.com) that does various things, from remote backups through to IRC daemons. But that's just a Celeron with 1GB RAM. I mainly use it for the 5TB monthly bandwidth.


----------



## Pentium-David

Quote:


> Originally Posted by *Plan9*
> 
> My file server:
> * OS: FreeBSD 8.1
> * CPU: AMD64 Phenom(tm) II X3 720 Processor (2812.55-MHz K8-class CPU)
> * RAM: 8GB DDR3 (4x 2GB sticks)
> * HDD: 1x 80GB (IIRC) HDD (boot disk) - UFS
> * HDD: 3x 1TB HDDs - formatted into one ZFS raidz pool
> On that I'm also running a few VMs:
> Web server (virtual machine):
> * OS: CentOS 5.something
> * RAM: 250MB
> * HDD: 50GB
> SSH sandbox (virtual machine):
> * OS: FreeBSD 8.1
> * RAM: 128MB
> * HDD: 2GB
> Data IO services (virtual machine):
> * OS: ArchLinux
> * RAM: 732MB
> * HDD: 20GB
> I also have a dedicated box in a French data centre (ovh.com) that does various things from remote back ups through to IRC daemons. But that's a just a Celeron with 1GB RAM. I mainly use it for the 5TB monthly bandwidth


Wow, that is awesome. Lots of Virtual machines, nice


----------



## rickyman0319

question: what is a virtual machine?

what does it do?


----------



## blupupher

Quote:


> Originally Posted by *rickyman0319*
> 
> question: what is a virtual machine?
> 
> what does it do?


It is an operating system running within another operating system, using "virtual" hardware.


----------



## killabytes

Too much text, not enough pictures!


----------



## PathOfTheRighteousMan

OS: Windows 7 Ultimate 64 Bit / Virtual Box running Fedora 15 for occasional web hosting for friends
Case: eSys mATX
CPU: Intel E6300 Conroe-B2
Motherboard: ASUS P5KPL-AM
Cooling: 120mm, 80mm, stock intel cooler
Memory: 2GB Crucial Ballistix DDR2
PSU: CoolerMaster RealPower 520W
OS HDD: Seagate 40GB ATA
Storage HDD(s): 2x 320GB Seagate Barracuda, 2x Maxtor 750GB
Server Manufacturer: Moi.

What you use it for: Hosting for COD4, media sharing across the home network, backing up home laptops
Temps: 53°C top temp at 100% load
Loudness: Quieter than my gaming rig at lowest fan speed

Pics: Later


----------



## Plan9

Quote:


> Originally Posted by *Pentium-David*
> 
> Wow, that is awesome. Lots of Virtual machines, nice


Cheers mate.
Quote:


> Originally Posted by *rickyman0319*
> 
> question: what is a virtual machine?
> what does it do?


It's essentially a whole OS that runs inside another OS, so you can run several servers on one physical machine.
It's also great for trialling things, e.g. testing Windows 8 without having to find a spare HDD. If you want to find out more, look into VirtualBox or VMware; both offer free versions.
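If you want a concrete starting point, VirtualBox's VBoxManage CLI can stand up a trial VM from the command line. Here's a rough sketch that just builds the command sequence; the VM name, memory, and disk size are example values, and attaching the disk to a storage controller is omitted for brevity:

```python
def vbox_commands(name="test-vm", memory_mb=2048, disk_gb=25):
    """Build a simplified VBoxManage command sequence for creating and
    starting a headless trial VM. Name and sizes are just examples;
    a real setup also needs storagectl/storageattach steps to hook
    the disk up to the VM."""
    return [
        f"VBoxManage createvm --name {name} --register",
        f"VBoxManage modifyvm {name} --memory {memory_mb} --nic1 nat",
        f"VBoxManage createhd --filename {name}.vdi --size {disk_gb * 1024}",
        f"VBoxManage startvm {name} --type headless",
    ]

for cmd in vbox_commands():
    print(cmd)
```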








Quote:


> Originally Posted by *killabytes*
> 
> Too much text, not enough pictures!


All my stuff runs headless, and the server hardware is pretty uninteresting to look at. Sorry.


----------



## joshd

Get the pics up.

Woot.


----------



## tiro_uspsss

OS: Windows 7 Ultimate 32-bit
case: Lian Li V2000 + mods
motherboard: Tyan Tiger i7520SD S5365
CPU: 2x Intel Xeon SL8WT (2 dual core 2Ghz)
cooling: CPUs - Dynatron i65G.. all fans are 38mm thick except the 140mm fan which is 25mm
RAM: 6x 512MB DDR2-400 ECC+REG Micron (3GB)
PSU: PC Power & Cooling QS 750W (want to change this to a multi-rail PSU)
OS HDD: 1st gen 74GB Raptor
HDDs: 6x1TB (1 Seagate ES.2, 5x WD Black), 2x 2TB (Samsung F4), 2x 500GB (WD RE2&3), 2x 2nd gen 74GB Raptor
purpose: file server
fairly loud - except the 140mm fan, all fans are running at 5V


----------



## blupupher

Quote:


> Originally Posted by *tiro_uspsss*
> 
> OS: Windows 7 Ultimate 32-bit
> case: Lian Li V2000 + mods
> motherboard: Tyan Tiger i7520SD S5365
> CPU: 2x Intel Xeon SL8WT (2 dual core 2Ghz)
> cooling: CPUs - Dynatron i65G.. all fans are 38mm thick except the 140mm fan which is 25mm
> RAM: 6x 512MB DDR2-400 ECC+REG Micron (3GB)
> PSU: PC Power & Cooling QS 750W (want to change this to a multi-rail PSU)
> OS HDD: 1st gen 74GB Raptor
> HDDs: 6x1TB (1 Seagate ES.2, 5x WD Black), 2x 2TB (Samsung F4), 2x 500GB (WD RE2&3), 2x 2nd gen 74GB Raptor
> purpose: file server
> fairly loud - except the 140mm fan, all fans are running at 5V


That is nice.


----------



## joshd

Quote:


> Originally Posted by *blupupher*
> 
> That is nice.


Yeah that's a real nice server you got there. How much money do you estimate it would cost to build it?


----------



## tiro_uspsss

Quote:


> Originally Posted by *joshd*
> 
> Yeah that's a real nice server you got there. How much money do you estimate it would cost to build it?


the core parts?
the CPUs are *really* cheap: ebay article #: 220892801746
the RAM *must* be the exact following spec: DDR2-400 ECC+REG; also rather cheap (keep in mind the CPUs cannot do x64)
motherboard... this is a little trickier. This platform is known as 'Sossaman'; there is an exhaustive thread @ XS. Only 4 different Sossaman motherboards were ever built: 1 by Intel, 1 by Tyan & 2 by Supermicro. The Tyan & Supermicro boards are very rare & usually very expensive (~$250 *new* on eBay); both the Tyan & Supermicro mobos take the same heatsinks (see my spec list). The Intel mobo is plentiful & cheap: eBay article #250899514010. BUT the Intel mobo _heatsinks_ are very different & VERY rare... & luckily usually only ~$10-15. Having said that, myself & other fellows @ XS have modded heatsinks on; it's not that hard, you just have to be mindful of the mounting pressure as the CPUs are bare die. The other drawback to the Intel mobo is the lack of expansion slots: 1x PCIe x8 & 1x PCI-X 64/133.

the power draw for the bare components is very low; the CPUs are really just Yonah laptop chips, just with the microcode ID changed. Folks @ XS report the following: 2x 2GHz CPUs, Intel mobo, 2x 1GB RAM, a few fans & a HDD take ~100W under load.
my server obviously takes more, as it has 13 HDDs, 2 RAID cards, a sound card & truckloads of fans..... idle: ~220W

if you want to know more, either head over to the XS thread: http://www.xtremesystems.org/forums/showthread.php?200843-Building-a-Sossaman-rig
or ask me more via here or PM


----------



## bobfig

Well, I had to RMA my Corsair VX450 PSU for fan noise, so I have my spare PC P&C Silencer 750W holding my server for now. All I have to say is there is a ton of cables on that thing.


----------



## ComGuards

Posted pictures of the physical server(s) before... Here's a current snapshot of my core VM server environment. Sure as heck beats having physical machines.











And the VMs...



Microsoft Patch Tuesday sure does become a pain though, after a while...


----------



## NKrader

Quote:


> Originally Posted by *tiro_uspsss*
> 
> the core parts?
> the CPUs are *really* cheap: ebay article #: 220892801746
> the RAM *must* be the exact specs as follows: DDR2-400 ECC+REG - also rather cheap (keep in mind the CPUs cannot not do x64
> 
> 
> 
> 
> 
> 
> 
> )
> motherboard... this is a little trickier.. this platform is known as 'Sossaman' - there is an exhaustive thread @ XS. There were only 4 different sossaman motherboards ever built: 1 by Intel, 1 by Tyan & 2 by Supermicro. The Tyan & Supermicro boards are very rare & usually very expensive (~$250 *new* on ebay) - both Tyan & Supermicro mobos take the same heatsinks (see my spec list). The Intel mobo is plentiful & cheap: ebay article #: 250899514010. BUT the Intel mobo _heatsinks_ are very different & VERY rare.. & luckily usually only ~$10-15; having said that, myself & other fellows @ XS have modded heatsinks on - its not that hard, just have to be mindful of the mounting pressure as the CPUS are bare die. The other drawback to the Intel mobo is the lack of expansion slots: 1x PCIEx8 & 1x PCI-X 64/133.
> the power draw for the bare components is very low - the CPUs are really just yonah laptop chips - just the microcode has changed their ID. Folks @ XS say the following: 2x 2ghz CPUs, Intel mobo, 2x1GB ram, few fans & a HDD take ~100W under load.
> my server obviously takes more as it has 13 HDDs, 2 RAID cards, a sound card & truckloads of fans..... idle: ~220W
> if you want to know more, either head over to the XS thread: http://www.xtremesystems.org/forums/showthread.php?200843-Building-a-Sossaman-rig
> or ask me more via here or PM


lol, Sossamans aren't known around these parts. I do envy that Tyan board though; you can use the copper heatsinks









also, one of my rigs with an 80+ PSU pulls 105-110 watts when crunching

people asked all the same questions they asked you when I posted mine up here haha


----------



## stolid

OS: Ubuntu Server 10.10
CPU: Part of a Xeon W3520 under Xen
Memory: 512MB
OS HDD: 50GB

I use this VPS (a virtual machine) to host some websites and to play around with running a Linux server. The bandwidth is nice, and it's pretty cheap too.









A picture? This will have to do:


----------



## joshd

Nice. Got any "real" pics though?


----------



## tiro_uspsss

Quote:


> Originally Posted by *NKrader*
> 
> lol sossamans arent known over around these parts, i do envy that tyan board tho. you can use the copper heatsinks
> 
> 
> 
> 
> 
> 
> 
> 
> also one of my rigs with a 80+ psu pulls 105-110watts when crunching
> people asked all the same questions they asked you when I posted mine up here haha


hey I know you!

yeah I like da Sossamans








I'm waiting on $$$ to complete the original server plan I had, a s1366 build... once that is complete, the Tyan sossy rig is probably going under water

& then it'll be used as a video surveillance rig


----------



## Thynsiia

I've got these lying around:

2x Dell PowerEdge 1850
4x HP ProLiant DL385 G1

I have no idea what to do with them; they make a lot of noise and haven't got that much storage.


----------



## joshd

Bump. Nice servers people.


----------



## herkalurk

Quote:


> Originally Posted by *ComGuards*
> 
> Posted pictures of the physical server(s) before... Here's current snapshot of my core VM server environment. Sure as heck beats having physical machines
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And the VMs...
> 
> Microsoft Patch Tuesday sure does become a pain though, after a while...


OK, why does your larger 32GB RAM server only have 3 NICs compared to the 24GB server? Also, what are you using for storage? A home-built SAN?


----------



## joshd

What OS is that running?


----------



## Slim Shady

VMWare ESX


----------



## killabytes

I'll have to snap some pictures once I get everything all cleaned up but I now have...

*Servers*

8x Sun Microsystems Sun Fire V100
Dell PowerEdge 4400
Supermicro 1U Xeon
Home-built Intel Atom dual-core web server
Home-built Intel Core 2 Duo file server

*Non-Servers*

Symantec VPN 100 Firewall
Watchguard Firebox II, running m0n0wall
Trendnet 10/100/1000 26 port switch

I need to go through my Sun Fires and figure out what works and what doesn't, clean them out, and install Solaris. The Dell PowerEdge is a mess; it's full of dust and needs a good cleaning. I already started to disassemble it, just a matter of time.

I promise to get some pictures soon.


----------



## Murderous Moppet

You guys with your "real" servers. Pfft, you don't need a real server when you have a recycled HP dc7600 slim! Especially when it has 4 whole gigabytes of DDR2, 5TB of internal storage and a 3.2GHz P4 with hyperthreading.


----------



## NKrader

Quote:


> Originally Posted by *Murderous Moppet*
> 
> You guys with your "real" servers. Pfft, you don't need a real server when you have a recycled HP dc7600 slim! Especially when it has 4 whole gigabytes of DDR2, 5tb of internal storage and a 3.2GHz P4 with hyperthreading.


so much win


----------



## Norse

I have 2 of the following; the server mirrors itself to the other one, so it's effectively RAID 1 over the network. I end up with around 4TB of space I can use, and I'm currently using around 2.6TB of said 4TB.

Sempron 140 unlocked to dual core (pondering getting two cheapo X4s and upping the RAM to 4 or 8GB to run 2 ESXi machines)
2GB RAM
2x 2TB, and 1x 500GB OS/secondary mirror
ASUS M4A78LT-M LE
Cheapo but neat case

https://lh6.googleusercontent.com/-MJ2o2qCkRqk/TwtOH9T_s7I/AAAAAAAAAcU/H_XZ-BgGsFw/s640/2011-03-01%25252021.14.13.jpg
https://lh3.googleusercontent.com/-CZ44vwZFpXE/TwtPBpbF5dI/AAAAAAAAAcg/jhjKERQEZ7k/s640/2011-02-26%25252010.44.25.jpg


----------



## _CH_Skyline_

My first server, used for Minecraft only until I upgrade the storage and platform. Spec are in my sig under 'Minecraft Server'.


----------



## Plan9

Quote:


> Originally Posted by *Murderous Moppet*
> 
> You guys with your "real" servers. Pfft, you don't need a real server when you have a recycled HP dc7600 slim! Especially when it has 4 whole gigabytes of DDR2, 5tb of internal storage and a 3.2GHz P4 with hyperthreading.


Wasn't the hyperthreading on the P4 largely rubbish?

Anyhow, real nerds use virtual machines instead of desktops


----------



## joshd

Quote:


> Originally Posted by *Plan9*
> 
> Wasn't the hyperthreading on the P4 largely rubbish?
> Anyhow, real nerds use virtual machines instead of desktops


I like more machines as I love the look of large piles of networking cables.


----------



## 2thAche

Specs:
OS: Started with Windows Server 2008 R2 x64, switched to Win 7 Enterprise x64 (better with media)
CPU: E5300
RAM: 4GB DDR3
PSU: 450W SilentPower
OS HDD: WD 80GB SATA
Data HDDs: 4.2TB RAID 5 (4x 1.5TB WDs in a RAID box)
Case: Antec P180






Most of the used space is taken up by ripped DVDs. I haven't started doing the same with Blu-ray; maybe at some point when I bump the storage. It sits in a small room in the basement, under the stairs, where I routed all the cabling including phone, cable and network.


----------



## joshd

Quote:


> Originally Posted by *2thAche*
> 
> Snip.


Nice! I also like the fact that you wrote on the wall all the I/O and what each cable is, etc. Good work.


----------



## Mr Pink57

pfsense firewall
Opty 146
DFI LANParty RDX200
2x 1gb G.Skillz HZ
36gb Raptor
Enermax Liberty 500w
1x Trendnet NIC (cheapo)
Antec 180b
Cheapo Samsung CD/DVD drive
4x Scythe fans

It sits in the living room next to the home entertainment center, and with everything matching in black I just hooked it up to the TV via VGA. Makes for a clean setup, plus the computer makes zero noise.

Runs:
Squid with cache
Snort


----------



## tiro_uspsss

Quote:


> Originally Posted by *Mr Pink57*
> 
> *Squid with cache*


wish I knew how to run that ^


----------



## cyberbeat

File Server:
Intel Core i3 2100T
Asrock P67 Extreme 6
Dell SAS5 X2
10X WD20EARS 2TB Hdds
1 250GB 2.5" Boot Drive
Corsair AX850
4GB Corsair DDR3
Fractal Designs Define R3
Solaris 11 Express Raidz6










Everything else Server:
Dell PowerEdge 2650
Dual 2.4GHz Xeons
5X 76GB 10K Drives
4GB ECC DDR
VMWare HyperVisor /W XP and Server 2003


----------



## DzillaXx

Supermicro dual-core Atom 330 server; runs 24/7 and only uses 30 watts. Used as a media, file, FTP, VPN, game, etc. server. I also sometimes use it as a VM server (nothing too heavy though). Running Windows Home Server 2011 with only 2GB of RAM, the system shows no signs of slowdown, so I'll continue to use it until 22nm Atoms come out and then replace it with one of those; hopefully they will make a quad-core Atom.


----------



## Mr Pink57

Quote:


> Originally Posted by *tiro_uspsss*
> 
> wish I knew how to run that ^


Not too tough if you run pfSense; you simply enable it along with cache management. It is really nice having a network cache like this; it really speeds up overall web browsing.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Mr Pink57*
> 
> Not too tough if you run pfSense, you simply enable it along with cache mgmt. It is really nice having network cache like this, really speeds up overall web browsing.


I do not know *anything* Linux

esp./incl. anything command line

I know what Squid cache does; a mate of mine showed me once. Loved it & have wanted it since, but alas, me is too stoopid


----------



## cyberbeat

I had it for a while on one of my old PCs; it was great. Might look into seeing if I can run it on Solaris. That would be great.


----------



## Pentium-David

Quote:


> Originally Posted by *Murderous Moppet*
> 
> You guys with your "real" servers. Pfft, you don't need a real server when you have a recycled HP dc7600 slim! Especially when it has 4 whole gigabytes of DDR2, 5tb of internal storage and a 3.2GHz P4 with hyperthreading.


That is AWESOME! 5TB in that thing? That's got WIN written all over it for sure. *bump* Need... more... pics


----------



## Pentium-David

Here is my seedbox









Pentium 3 933MHz
Intel Mobo
512MB PC133
80GB Maxtor ATA133
GeForce2 MX400 32MB
P.O.S. PowerMan 350W PSU

This thing is a little beast







It seeds so many torrents that it's close to 75% usage all the time... Sometimes it uploads 15GB a day; most of the torrents are Linux distros. I like this thing because it has the fastest CPU it can support and the most RAM it can support









Yeah I love keeping old stuff in service







Everything except the HDD and PSU were manufactured in 2000


----------



## joshd

Quote:


> Originally Posted by *Pentium-David*
> 
> 
> 
> Here is my seedbox
> 
> 
> 
> 
> 
> 
> 
> 
> Pentium 3 933MHz
> Intel Mobo
> 512MB PC133
> 80GB Maxtor ATA133
> GeForce2 MX400 32MB
> P.O.S. PowerMan 350W PSU
> This thing is a little beast
> 
> 
> 
> 
> 
> 
> 
> It seeds so many torrents that it's close to 75% usage all the time...Sometimes it uploads 15GB a day, most of the torrents are Linux Distro's. I like this this thing cause it has the fastest CPU it can support and the most RAM it can support
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah I love keeping old stuff in service
> 
> 
> 
> 
> 
> 
> 
> Everything except the HDD and PSU were manufactured in 2000


Wow, excellent use of old gear


----------



## Pentium-David

Quote:


> Originally Posted by *joshd*
> 
> Wow, excellent use of old gear


Oh yeah







I love old hardware, more so than new hardware....I mean yeah I have a Sandy Bridge rig but 90% of the components in my house are P4 or older


----------



## blupupher

Quote:


> Originally Posted by *Pentium-David*
> 
> Oh yeah
> 
> 
> 
> 
> 
> 
> 
> I love old hardware, more so than new hardware....I mean yeah I have a Sandy Bridge rig but 90% of the components in my house are P4 or older


Well, if it does what you need it to, then why not.

I have an old socket 370 coppermine in my garage for surfing the internet in there (like finding details when working on a car and such).


----------



## killabytes

Love the Creative CD Drive. I had an 8x one, got it in 99. Before that I was using a 2x Panasonic.


----------



## joshd

We so need an old but still in service server/pc thread..


----------



## Pentium-David

Quote:


> Originally Posted by *killabytes*
> 
> Love the Creative CD Drive. I had an 8x one, got it in 99. Before that I was using a 2x Panasonic.


That's awesome







dang, 2x. The first one I got was a Creative 4x, it said "Quad speed" on the front haha. Pentium 66MHz, 8MB RAM, lagged with Windows 95








Quote:


> Originally Posted by *joshd*
> 
> We so need an old but still in service server/pc thread..


I agree! That would be awesome. I know fg2chase has a pretty awesome P3 server. I want to do something with my Pentium MMX 200 but the mobo died









I also have a Slot A Athlon 500MHz (First Athlon ever released) but the mobo died in that one too...


----------



## stubass

Quote:


> Originally Posted by *Pentium-David*
> 
> 
> 
> Here is my seedbox
> 
> 
> 
> 
> 
> 
> 
> 
> Pentium 3 933MHz
> Intel Mobo
> 512MB PC133
> 80GB Maxtor ATA133
> GeForce2 MX400 32MB
> P.O.S. PowerMan 350W PSU
> This thing is a little beast
> 
> 
> 
> 
> 
> 
> 
> It seeds so many torrents that it's close to 75% usage all the time...Sometimes it uploads 15GB a day, most of the torrents are Linux Distro's. I like this this thing cause it has the fastest CPU it can support and the most RAM it can support
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah I love keeping old stuff in service
> 
> 
> 
> 
> 
> 
> 
> Everything except the HDD and PSU were manufactured in 2000


Nice one dude, good to see an old rig put to use. My oldest still working is a Pentium D 3.4GHz Prescott. I was going to build something with it, but since I have now ordered a second dual quad-core Xeon server, I have decided to use the old rig as a network end-user machine for part of my study network


----------



## Pentium-David

Quote:


> Originally Posted by *stubass*
> 
> nice one dude, good to see an old rig put to use.. my oldest still working is a Pentium D 3.4GHz prescott i was going to build something with it buy since i now have oreder a second dual quad core Xeon server i have decided to use the old rig as a netowrk end user for part of my study network


Hey at least that Pentium D is still getting some use! Wow, a dual quad core Xeon server







what are you going to do with that thing?! haha. Are they Sandy Bridge or Nehalem?


----------



## stubass

Quote:


> Originally Posted by *Pentium-David*
> 
> Hey at least that Pentium D is still getting some use! Wow, a dual quad core Xeon server
> 
> 
> 
> 
> 
> 
> 
> what are you going to do with that thing?! haha. Are they Sandy Bridge or Nehalem?


I assume they are Nehalems; they are the ones I posted here, and they will for the most part be used for virtualization studies and then maybe other things too:
http://www.overclock.net/t/1206135/2u-rackmount-rackable-systems-server


----------



## Pentium-David

Quote:


> Originally Posted by *stubass*
> 
> i assume the are nehalem's, they are the ones i posted here and will for the most part be used for Virtualization Studies and then maybe other things too
> http://www.overclock.net/t/1206135/2u-rackmount-rackable-systems-server


That's a nice setup! I've always wanted to handle some newer dual-CPU setups, although I do have a dual P3 1.4GHz server. Those Xeons are Core-based, but I bet both of them combined would be on par with a Sandy Bridge i5


----------



## killabytes

Here is, finally, some of my gear. I say some because a lot is missing from these new pictures of my recently _finished_ server cabinet. Progress Seen Here.



More to come....


----------



## Pentium-David

Awesome! Can't wait to see more pics







are those your V100's?


----------



## killabytes

Most of 'em yup. I posted more pictures in my build thread, link in my above post.


----------



## Odel

OS - Ubuntu Server 11.10 (3.0.0-15 server kernel)
CPU - Intel Xeon 3060 (dual-core 2.4GHz)
RAM - 4GB DDR2
HDD - 500GB SATA
PSU - BFG 550W

Runs:
Squid proxy w/cache
Apache
Ftp server
Samba (printer spooling)
Minecraft

Call it ghetto, call it cheap

I got it all for free or had it laying around


----------



## Pentium-David

Quote:


> Originally Posted by *Odel*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> OS - Ubuntu Server 11.10 (3.0.0-15 server kernel)
> CPU - Intel Xeon 3060 (dual core 2.4)
> RAM - 4gb ddr2
> HDD - 500gb sata
> psu - BFG 550w
> Runs:
> Squid proxy w/cache
> Apache
> Ftp server
> Samba (printer spooling)
> Minecraft
> Call it ghetto, call if cheap
> I got it all for free or had it laying around


That's awesome! You call that ghetto?! That thing is nice....I'm running a Celeron D 3.33GHz, 933MHz P3, and a 2.8GHz P4 in my servers. You have a nice server








And is that a Maxtor drive?! That looks like one of the newer ones


----------



## Punjab

Can I join in? These are my render servers. I use them for farming out animations I create in 3D Studio Max.


















They are Dell Precision T5400s, and each runs 2x 3.0GHz quad-core Xeons, 10GB of FB-DIMM ECC memory, and a Quadro FX 250.
However, the Quadros are unnecessary and just came with them.
They run Win XP x64; I don't honestly know if that's the best solution, but it works well for how I use them.
I have since raised them off the floor onto a fashionable piece of IKEA furniture.

I use an old Dell Dimension 8400 for a file server/HTPC. It runs a P4 3.4GHz, 2GB of DDR, some 500GB drives, and a GeForce 7900.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Punjab*
> 
> They are Dell Precision T5400s and each runs 2x 3.0ghz quad-core xeons, 10GB of FB-DIMM ECC memory, and a quadro FX 250.
> However the Quadros are unnecessary and just came with them.
> They run Win XP x64 and I don't honestly know if that's the best solution but it works well for how I use them.
> I have since raised them off the floor on to a fashionable piece of IKEA furniture.
> I use an old Dell Dimension 8400 for a file server/HTPC. It runs a P4 3.4ghz, 2GB of DDR, some 500GB drives, and a GeForce 7900.


I used to love XP64... was my favorite OS till W7 came along








It shares the same kernel as Windows Server 2003 x64, which is why it is so much more stable and quick than XP32







it was really a totally different & far better animal than XP32


----------



## herkalurk

Quote:


> Originally Posted by *Punjab*
> 
> Can I join in? These are my render servers. I use them for farming out animations I create in 3D Studio Max.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> They are Dell Precision T5400s and each runs 2x 3.0ghz quad-core xeons, 10GB of FB-DIMM ECC memory, and a quadro FX 250.
> However the Quadros are unnecessary and just came with them.
> They run Win XP x64 and I don't honestly know if that's the best solution but it works well for how I use them.
> I have since raised them off the floor on to a fashionable piece of IKEA furniture.
> I use an old Dell Dimension 8400 for a file server/HTPC. It runs a P4 3.4ghz, 2GB of DDR, some 500GB drives, and a GeForce 7900.


You may want to get a Linux distro and make a small cloud: just submit a queue of work to it, and then the cluster's master node will divvy up the work and cut through it all. Windows doesn't have a real "cluster", only a failover cluster.
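The "master node divvies up the work" step is simple at heart: split the animation's frame range into contiguous chunks, one per node. A hedged Python sketch of just that piece (the function and names here are illustrative, not any real queue manager's API):

```python
# Illustrative frame-range splitter: the core of what a render-farm
# master does before handing chunks to worker nodes. Real queue
# managers (DrQueue etc.) add retries, node failure handling, and
# load balancing on top of this.

def split_frames(first, last, node_count):
    """Divide frames first..last (inclusive) into node_count
    contiguous chunks, as evenly as possible."""
    total = last - first + 1
    base, extra = divmod(total, node_count)
    chunks, start = [], first
    for i in range(node_count):
        size = base + (1 if i < extra else 0)  # spread the remainder
        chunks.append((start, start + size - 1))
        start += size
    return chunks

# A 250-frame shot across 4 render boxes:
print(split_frames(1, 250, 4))
# -> [(1, 63), (64, 126), (127, 188), (189, 250)]
```

Each worker then renders only its own chunk, and the master collects the finished frames.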


----------



## Mr Pink57

Could the quadros be used to do some media decoding?


----------



## Punjab

Quote:


> Originally Posted by *herkalurk*
> 
> You may want to get a linux distro and make a small cloud, just submit a queue of work to it, and then the cluster's master node will divy up the work and cut through it all. Windows doesn't have a real "cluster", only a failover cluster.


I don't have much experience with Linux. How would this work with all the Windows based software that each machine requires to produce renders? Primarily 3DStudio.
Quote:


> Originally Posted by *Mr Pink57*
> 
> Could the quadros be used to do some media decoding?


I've thought about this as well as folding at home. They do occasionally have some down time. I'll have to look into that more.


----------



## killabytes

Quote:


> Originally Posted by *Mr Pink57*
> 
> Could the quadros be used to do some media decoding?


The 250s are pretty lightweight. I use the 290s at work and they're OK for graphics and multi-desktop display, but they lack the stream processors for heavy CUDA work.


----------



## herkalurk

Quote:


> Originally Posted by *Punjab*
> 
> I don't have much experience with Linux. How would this work with all the Windows based software that each machine requires to produce renders? Primarily 3DStudio.


http://helmer.sfe.se/

He's using different render software, but the concept is there

He was using this queuing software

http://www.drqueue.org/cwebsite/

Also, at a minimum, I would change OS. XP was never a good multi-core OS; it was optimized for 2 cores. Server 2008 or Win 7 would be a better choice. Linux would be best: why have a GUI when they are headless anyway? It's just wasting CPU time that could be used for rendering.


----------



## Punjab

I've read through that Helmer post before and it is intriguing.

I know that many 3D artists utilize linux instead of a windows OS which is why I said I'm not sure that using XP 64 is the best method. I guess compatibility has always been the biggest question mark in my mind.


----------



## axipher

I just got my new web server box up and running:

Athlon X4 B40
4 GB G.Skill
M4A78LT M-LE
500 GB Samsung
OCZ SZ 550 W
CoolIt Eco with 2x Scythe Slipstreams on a Kaze-Q
9800 GTX+ 512 MB
Fractal Design Core 1000

- Installed Windows 8 Dev Preview
- The 9800 GTX+ is folding 24/7
- Installed VirtualBox and Ubuntu 11.10
- Installed Apache2, and learning about all the other wonderful web services I can host for myself

I have a Maximus III Gene coming my way and looking for an i7-870 or 875k to put in it to replace the Athlon.


----------



## djriful

HP with 16 Xeon cores... lol, this is where my VPS is hosted.


----------



## axipher

Quote:


> Originally Posted by *djriful*
> 
> HP with 16 Cores Xeon... lol this is where my VPS hosted.


Well that makes my server seem under-powered now...


----------



## Jayjr1105

My power sipper...

Intel Atom 230
1GB DDR2 667
160GB OS
1TB storage
Server 2003 Enterprise

Main roles... local network storage, FTP, HTTP, & Ventrilo


----------



## Jim McNasty

I recently built my "ghetto" server tower to host my ever expanding movie collection to save space/power on my HTPC.

I started with this sorry-looking machine:


















After gutting it and giving the case a good clean, I collected together all the usable spares from the loft and set about building.
This is what I managed to put together:





































Specs:
AMD X2 3.6GHz
Asus A8N-SLI
NVIDIA 7800 GT
4GB DDR RAM @ 400MHz

4x 2TB WD Black
1x 80GB WD boot drive
2x Sony 52x DVD burners

I'm not one to toot my own horn... but toot toot


----------



## Pentium-David

I hope you aren't using that Premier PSU daily; those things are trash


----------



## Jim McNasty

Haha, hell no, I swapped it out for something a bit better!


----------



## Mootsfox

The rack in all its glory. Main server is a Q6600, 4GB RAM, about 10TB in storage. Under that is a Dell that is becoming the router, and a smaller HP for etc. Above is the Baytech power management unit; it is run off serial and networked for remote management of teh powa. Monitor is a 15" touchscreen.









Looking for a 16/24-port gigabit smart switch to replace the 8-port that is hanging. Below that is the DOCSIS 3.0 modem, and Linksys Router/AP 1 running DD-WRT, chilling on the Cisco 10/100 switch, which makes a lovely shelf. Cisco 6500 with a fiber card; 10/100 or I'd use it :/ I think that is everything remotely interesting.









Biscuit box I made into a patch panel, ran one line to each bedroom (including attic)









Workshop









Scrap pile









Computer scrap pile









Living room; media server is the Dell on the right of the TV. i5 something, 6GB RAM, 1TB HDD, etc. TV tuner card is in this; hooks to the TV via HDMI. Above the games on the right is an Airave. AP2 is upstairs in my office with one of the two networked printers (T630). The other is an inkjet HP. Also Hot Offers.


----------



## killabytes

Doesn't look like much living goes on in that room!


----------



## derickwm

Quote:


> Originally Posted by *Mootsfox*
> 
> The rack and all its glory. Main server is a Q6600, 4GB RAM, about 10TB in storage. Under that is a dell that is becoming the router, and a smaller HP for etc. Above is the Baytech power management, unit, it is run off serial and networked for remote management of teh powa. Monitor is a 15", touchscreen.
> *snip*


The ultimate man cave







I am jelly.


----------



## herkalurk

That dell SCSI disk tray looks lonely, and unused.


----------



## mbudden

Quote:


> Originally Posted by *derickwm*
> 
> The ultimate man cave
> 
> 
> 
> 
> 
> 
> 
> I am jelly.


I wouldn't really call a server rack down in the basement an "ultimate man cave".


----------



## blupupher

Quote:


> Originally Posted by *mbudden*
> 
> I wouldn't really call a server rack down in the basement a "ultimate man cave".


A place to go to get away from everything and do something you like to do? Yeah, that's a man cave.

Plus he may be referring to this:


----------



## derickwm

Quote:


> Originally Posted by *mbudden*
> 
> I wouldn't really call a server rack down in the basement a "ultimate man cave".


I wasn't really referring to the hardware, just the location. The walls are epic


----------



## mbudden

Quote:


> Originally Posted by *derickwm*
> 
> I wasn't really referring to the hardware, just the location. The walls are epic


My bad


----------



## Mootsfox

Quote:


> Originally Posted by *derickwm*
> 
> I wasn't really referring to the hardware, just the location. The walls are epic


Thanks! The house turned 100 last year; not sure if the basement is original or not. Most likely yes, as there are the remains of a coal chute, and well, it looks old








Quote:


> Originally Posted by *killabytes*
> 
> Doesn't look like much living goes on in that room!


We mostly stick to our computers, but social gatherings do happen randomly down there. There is a couch and chairs out of frame.


----------



## mr one

Quote:


> Originally Posted by *blupupher*
> 
> A place to go to get away from everything and do something you like to do, yea, thats a man cave.
> Plus he may be refering to this:


I see a Technics turntable?







Or is it just a mirage?


----------



## Mootsfox

Quote:


> Originally Posted by *mr one*
> 
> i see technics turntable?
> 
> 
> 
> 
> 
> 
> 
> or its just mirage?


Technics onry. My first (and hopefully last) turntable.


----------



## mr one

Quote:


> Originally Posted by *Mootsfox*
> 
> Technics onry. My first (and hopefully last) turntable.


I was looking for one of those back when I had the turntablism disease


----------



## derickwm

I haz a server now


----------



## axipher

Quote:


> Originally Posted by *derickwm*
> 
> I haz a server now


That RAD setup should be cooling your server


----------



## derickwm

I was too lazy to custom-mount blocks







and too cheap... lol


----------



## axipher

Quote:


> Originally Posted by *derickwm*
> 
> I was to lazy in custom mounting blocks
> 
> 
> 
> 
> 
> 
> 
> and to cheap... lol


----------



## derickwm

If the 6174's temperatures actually got to a point where I'd be worried about them, trust me, there'd be blocks. Reaching the 40s and maybe even 50s because my room is warm is not really much of a concern


----------



## axipher

Quote:


> Originally Posted by *derickwm*
> 
> If the 6174 temperatures actually got to a point where I'd be worried about it, trust me there'd be blocks. Reaching 40's and maybe even 50's because my room is warm is not really much of a concern


Since when is water cooling about "need"?


----------



## Odel

I also just swapped in a BFG 550W power supply I got from a friend to replace the generic POS that was in it... I need more disk space though; might get a few TB of green drives








Just gotta decide if I want them as one logical volume or do some mirroring or something...


----------



## beers

Here's my 'server room' (closet).
That thing is really messy..


----------



## mbudden

One thing you forgot: specs.


----------



## Imrac

Quote:


> Originally Posted by *derickwm*
> 
> I haz a server now
> 
> 
> 
> 
> 
> 
> 
> 
> *imaged removed*


I would be careful about keeping a motherboard on top of an anti-static bag. The outside is conductive and could damage the components. Use cardboard or other non-conductive material.

http://en.wikipedia.org/wiki/Antistatic_bag


----------



## derickwm

... wasn't aware of that









Glass would be fine eh?


----------



## mbudden

The inside of the antistatic bag is good. The outside... Not so good.


----------



## ZFedora

Just get a case for it


----------



## killabytes

Working away on my Sun Fire V100's.


----------



## ZFedora

Looks awesome, KB. Love the way those Sun servers look


----------



## beers

Quote:


> Originally Posted by *mbudden*
> 
> One thing you forgot, specs.


Not sure how you could say this given my sig rig..


----------



## D-EJ915

Quote:


> Originally Posted by *derickwm*
> 
> ... wasn't aware of that
> 
> 
> 
> 
> 
> 
> 
> 
> Glass would be fine eh?


I use the motherboard's box or the backing cardboard they put inside calendars (EATX sized) for my naked builds.

Here's my latest server; I got it for MS VMs. I'm running Hyper-V on Server 2008 R2. It's a Fujitsu RX200 S5 with dual E5530s, 48GB RAM, 4x 300GB 10k disks and a Brocade 1860 dual-port 10GbE/16Gb FC adapter.


----------



## herkalurk

Quote:


> Originally Posted by *D-EJ915*
> 
> I use the motherboard's box or the backing cardboard they put inside calendars (EATX sized) for my naked builds.
> Here's my latest server I got for MS VMs. I'm running Hyper-V on Server 2008R2. It's a Fujitsu RX200 S5 with dual E5530, 48GB ram, 4 300GB 10k disks and a Brocade 1860 dual port 10GBE/16GB FC adapter.


And what is the fibre card hooked up to...? A small backing SAN?


----------



## hick

I have probably posted before but my stuff has changed so..




Mobo - Biostar A870U3
CPU - Athlon X2 240
PSU - Antec Neo 520W
RAM - 2x 2GB G.Skill DDR3-1600
HDD - 15TB (manufacturer size, not real)
Tuners - 4x KWorld 435 TV tuners (MediaPortal TV-Server)
SATA cards - 2x cheap 2-port SATA cards

There is a 24-port gigabit D-Link green switch, some cheap Linksys router, a couple of UPSes, a 360, PS3, HTPC, Onkyo receiver, and the server. One day I will actually tidy it up a bit.


----------



## derickwm

Quote:


> Originally Posted by *D-EJ915*
> 
> I use the motherboard's box or the backing cardboard they put inside calendars (EATX sized) for my naked builds.
> Here's my latest server I got for MS VMs. I'm running Hyper-V on Server 2008R2. It's a Fujitsu RX200 S5 with dual E5530, 48GB ram, 4 300GB 10k disks and a Brocade 1860 dual port 10GBE/16GB FC adapter.


Bleh just have it on my glass table top now.


----------



## D-EJ915

Quote:


> Originally Posted by *herkalurk*
> 
> And what is the fibre card hooked up to...? A small backing san?


No, just a simple switch/shelf setup for now, since I got it to learn FC. I got it really inexpensively compared to 4Gb PCI-E HBAs because it was mislabeled.


----------



## herkalurk

Quote:


> Originally Posted by *D-EJ915*
> 
> No just a simple switch/shelf setup for now since I got it to learn FC. I got it really inexpensively compared to 4GB PCI-E HBAs because it was mislabeled.


Well, what do you use for your VM storage then? On-board? No iSCSI storage or anything?


----------



## fb99

Here is my small (compared to the others I saw here) NAS server

OS: Debian
Case: Ugly 4U rack mount
CPU: AMD Athlon(tm) II X2 255
Motherboard: GA-890GPA-UD3H (recycled)
Cooling: stock
Memory: 2GB DDR3 (Kingston)
PSU: Corsair CX400
Storage HDD(s): 5x 2TB Samsung F4 EcoGreen in RAID 5

What you use it for:
file server, backup

Missing some screws here... wiring was not finished either.

I'll put it in that bay :


Switch : netgear gs724tp
APC : Smart-UPS Rack-Mount 750VA LCD 230V

Edit: temps:

fan1: 706 RPM (min = 0 RPM)
fan2: 0 RPM (min = 0 RPM)
fan3: 0 RPM (min = 0 RPM)
fan5: 0 RPM (min = 0 RPM)
temp1: +27.0°C (low = +127.0°C, high = +127.0°C) sensor = thermistor
temp2: +19.0°C (low = +127.0°C, high = +127.0°C) sensor = thermal diode
temp3: +25.0°C (low = +127.0°C, high = +127.0°C) sensor = thermistor
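Several builds in this thread weigh striping against parity (the 5x 2TB RAID 5 above, the RAID 0 and RAID 1 sets elsewhere), and the usable-capacity arithmetic is worth writing down. A quick Python sketch, assuming identical drives and ignoring filesystem overhead and TB-vs-TiB rounding:

```python
# Usable capacity for common RAID layouts built from n identical
# drives of size_tb each. Ignores filesystem overhead and the
# decimal-TB vs binary-TiB gap ("manufacturer size, not real").

def usable_tb(n, size_tb, level):
    if level == "raid0":
        return n * size_tb        # striping: no redundancy at all
    if level == "raid1":
        return size_tb            # everything mirrored: one drive's worth
    if level == "raid5":
        return (n - 1) * size_tb  # one drive's worth lost to parity
    if level == "raid6":
        return (n - 2) * size_tb  # two drives' worth lost to parity
    raise ValueError(f"unknown level: {level}")

# The 5x 2TB Samsung F4 array above, as RAID 5:
print(usable_tb(5, 2, "raid5"))  # -> 8
```

ZFS RAID-Z1/Z2 follows the same per-vdev arithmetic as RAID 5/6, minus some metadata overhead.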


----------



## killabytes

Finally got Solaris 11 Express installed on all my Sun Fire servers... took longer than I expected.


----------



## pvp309rcp

My main computer. Might not be the most efficient but it does its job staying on 24/7 around 500-600w. It's a seedbox most of the time but it does pretty much everything else when needed. Details in the computer signature.


----------



## jibesh

I recently rebuilt my ESXi 5 server.

Specs are:

*Processor:* 2 x Intel Xeon E5520
*Motherboard:* TYAN S7002G2NR-LE Dual LGA 1366
*Memory:* 32GB (8 x 4GB) Samsung DDR3-1333 Registered ECC RAM
*Hard Drives:* 8 x 1TB Hitachi 1TB Deskstar (0F10383) - RAID 10
*Raid Controller:* 3ware 9650SE-8LPML
*PSU:* SeaSonic X750 Gold 750W
*Cooling:* 2 x Corsair H50
*Case:* COOLER MASTER HAF 932
*OS:* VMWare ESXi 5


----------



## darknight670

____________________________________________________________________________________________

*Asaheim - File Server*

*OS:* OpenIndiana 151a2
*Case:* Fractal R2
*CPU:* Phenom II X4 955BE
*Motherboard:* Gigabyte 880GA-UD3H
*Cooling:* 3 x 120mm fans
*Memory:* 16 GB DDR3 ( 1333 Mhz )
*PSU:* Seasonic S12+ ( 430W )
*OS HDD:* 2 x 1TB Samsung F3
*Storage HDD(s):* 6 x 2TB Samsung F4
*Server Manufacturer:* Me

*What you use it for :* Media Files Server, Plex Media Center, Terraria/Minecraft Server, L2TP VPN, Downloading Machine ( Sickbeard + Couchpotato + Sabnzbd ) , Virtualbox machine
*Temps, loudness:* Usually cool and silent. Not so much when spinning drives or transcoding 1080p








*Any additional software that you use:* Napp-it, Virtualbox

____________________________________________________________________________________________


----------



## OC-Guru

-double post-


----------



## OC-Guru

Quote:


> Originally Posted by *fb99*
> 
> Here is my small (compared to the other i saw here) nas server
> OS: Debian
> Case: Ugly 4U rack mount
> CPU: AMD Athlon(tm) II X2 255
> Motherboard: GA-890GPA-UD3H (recycled)
> Cooling: stock
> Memory: 2GB DDR3 (kingston)
> PSU: Corsair cx 400
> Storage HDD(s): 5x 2TB Samsung F4 ecogreen raid 5
> What you use it for:
> file server, backup


Same server case as YouTube star maxxarcade? 









I got the HP ProLiant ML110 G5
Xeon @ 2.3GHz (dual core)
4GB DDR2 ECC RAM
3x 500GB 15kRPM HDDs (RAID 0, ~1.3TB) (SATA)
DVD/RW + LightScribe (SATA)
Big heatsink using the rear case fan as exhaust
Windows Server 2008 R2

I use it as a file server (FTP out-of-home)
Media server (to transfer media to the Xbox 360 & other PCs)
Torrent box
Active Directory
and I use it for WDS (saves time vs. installing Windows from a disc)

picture of my server:


----------



## OC-Guru

-posts messed up-


----------



## Plan9

Quote:


> Originally Posted by *OC-Guru*
> 
> Same server case as youtube star maxxarcade?
> 
> 
> 
> 
> 
> 
> 
> 
> I got the HP ProLiant ML110 G5
> Xeon @ 2.3Ghz (Dual Core)
> 4GB DDR2-ECC RAM
> 3x 500GB 15kRPM HDD's (RAID 0 ~1.3TB) (SATA)
> DVD/RW +LightScribe (SATA)
> Big heatsink using rear case fan as exhaust
> Windows server 2008 R2
> I use it as a File Server (FTP out-of-home)
> Media server (to transfer media to Xbox 360 & other PC's)
> Torrent Box
> Active Directory
> and I use it for WDS (saves time installing windows on a disc)
> picture of my server:


You could at least run SFTP rather than clear text passwords


----------



## OC-Guru

Quote:


> Originally Posted by *Plan9*
> 
> You could at least run SFTP rather than clear text passwords


The clear-text passwords are still pretty complicated... all are over 10 characters long


----------



## Plan9

Quote:


> Originally Posted by *OC-Guru*
> 
> The clear text passwords are still pretty complicated.. all are over 10 characters long


It doesn't matter how complicated your passwords are if you send them in clear text. That's the whole reason I raised the point to begin with


----------



## deathrow9

I'm trying to become more familiar with servers/server software, so I picked this up to play with. It's an HP ProLiant 4U(?) rack mount with 4x 1.4GHz Xeons w/HT for 8 "CPUs". It has 4 gigs of DDR; registered/ECC, not buffered, I think, is what the guy told me. Redundant 800-watt max PSUs. 4x 36GB 10k SCSI drives in RAID 5. Paid $50.

It currently has no purpose other than looking cool (thinking of ideas other than just cloud/data storage). I don't know how easy it is to take apart entirely, but I'll probably do so for fun while I figure out its main purpose. Currently it has CentOS on it; I am a student, so I might put Server 2003 on it (no DVD drive). Other than that I'll probably fold with it as well.

It's also obviously loud as hell, but once it's up it's probably no louder than any Thermaltake products I've had in the past. Even so, I'm going to move it to my closet behind some stuff









Pics!
http://s14.photobucket.com/albums/a304/DEATHROW9/Server/


----------



## herkalurk

Quote:


> Originally Posted by *OC-Guru*
> 
> The clear text passwords are still pretty complicated.. all are over 10 characters long


10 characters is long...?

Try going minimum 20 characters with at least 2 upper, 2 lower, 2 number, 2 symbol in there. That is the standard my work makes me have as an administrator. Just saying, even with that, use encryption for everything. Even if it's a self signed cert you made, it's better in the long run.
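Plan9's point is worth separating from the length advice: length and complexity defend against guessing, while encryption defends against sniffing, and neither substitutes for the other. For what length alone buys, here's the arithmetic as a small Python sketch (assuming the roughly 95 printable ASCII characters):

```python
# Brute-force search space for random passwords drawn from the
# ~95 printable ASCII characters. None of this matters against a
# sniffer if the password crosses the wire in clear text: it gets
# read once, in full, regardless of length.

ALPHABET = 95  # printable ASCII characters

def keyspace(length):
    """Number of possible passwords of the given length."""
    return ALPHABET ** length

ten, twenty = keyspace(10), keyspace(20)
print(f"10 chars: {ten:.2e} combinations")   # on the order of 6e19
print(f"20 chars: {twenty:.2e} combinations")
print(f"doubling the length multiplies the space by {twenty // ten:.1e}")
```

So going from 10 to 20 random characters squares the search space, which is why admin policies push length so hard, but it does nothing for a credential sent over plain FTP.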


----------



## killabytes

Quote:


> Originally Posted by *herkalurk*
> 
> 10 characters is long...?
> Try going minimum 20 characters with at least 2 upper, 2 lower, 2 number, 2 symbol in there. That is the standard my work makes me have as an administrator. Just saying, even with that, use encryption for everything. Even if it's a self signed cert you made, it's better in the long run.


This is what I use for all my systems...

https://www.random.org/passwords/


----------



## Baking Soda

OS: Windows Server 2008 R2
Case: Dell
CPU: Pentium Dual Core
Motherboard: Dell
Cooling: Dell
Memory: 2GB of DDR2
PSU: Antec True Power Trio 550W
OS HDD: 80GB Seagate 7200rpm
Storage HDD: 320GB Seagate 7200rpm (plan on adding more)
Server Manufacturer: Dell (I added some other stuff to it)

What you use it for (print server, backups, file server, etc.):
File server for now.
Temps, loudness, etc.:
Temps are quite low, not that loud.


----------



## axipher

*OS:* Windows Home Server 2011
*Case:* Fractal Design Core 1000
*CPU:* Xeon X3480
*Motherboard:* EVGA P55 Micro SLI
*Cooling:* Coolit Eco
*Memory:* 8 GB Corsair Vengeance (2x 4 GB)
*GPU:* AMD 6870 1 GB
*PSU:* OCZ 550 W ZS-Series
*OS HDD:* 120 GB Solid 3
*Storage HDD:* 2 TB WD Green (Movies, Shows, Music Videos)
*Storage HDD:* 2x 1.5 TB Spinpoint Mirrored in Windows (Docs, Pics, Music, Software, Client back-ups)
*Storage HDD:* 500 GB Hitachi (Torrents in progress)
*Storage HDD:* 320 GB WD Scorpio Blue (Recorded TV)
*Server Manufacturer:* Custom Built

*What you use it for (Print server, backups, file server, etc.):*
- Home server
- Remote media and file access
- Web server
- Client back-up
- Render Guinea Pig
- Folding


----------



## ZFedora

Figured I'd contribute to this


















(From top to bottom)

-somewhat visible chatsworth products logo (45U 2 post open frame rack)
-trendnet 16 port patch panel (1U)
-belkin cable management (2U)
-netgear fsv318 vpn
-cisco 16 port switch

On the patch panel, #9 is patched in for the switch and the rest of my network, #10 is our production webserver, #1 is for Fax/Telephone










As you can see, a Samsung SyncMaster monitor and 2 APC UPSes, one much older than the other, but nonetheless an APC UPS









Descending, there's a 4U server running Windows Server 2008 R2 as an AD DC for our Exchange server

Next, you see 2 towers; the one on the left is the server, the one on the right is my desktop. Both are being held up by a decommissioned 2U rackmount chassis. The server is running Windows Server 2008 R2 Datacenter and has Debian x64 running on top of that (in a VM), which is our webserver (lighttpd). It also runs our Exchange server, and is a writable domain controller in our AD. Pretty advanced for a home network, but I enjoy it









Messy cabling, I know. I'll work on that later


----------



## shadow5555

My server 

cpu: dual Opteron 2.4GHz
motherboard: Tyan S2885
RAM: PC2700 DDR, 4GB (16GB max)
dual RAIDed 250GB hard drives for Windows
os: Windows Home Server 2011
FlexRAID with 2TB for data redundancy
1TB
1.5TB
another 1.5TB will be added soon

Windows Home Server storage pool in use with web access

roles: web access, storage pool, VMware Server 2.0, Openfire Jabber server; in the process of making an email server (hMailServer) and also working on a photo gallery server


----------



## skatingrocker17

Quote:


> Originally Posted by *Baking Soda*
> 
> OS:Windows Server 2008 R2


I'm using this as my picture too because I'm using the same Dell case.

Case: Dell
CPU: Intel Core 2 Duo e(something)
Motherboard: Dell
Cooling: Dell
Memory: 3.5GB, maybe 4, I haven't looked at it since December
PSU: Dell
OS HDD: 80GB WD 7200rpm
Storage HDD: 500GB Seagate and 1.5TB WD Green
Server Manufacturer: Dell and myself
Network connection: 1Gbps to gigabit switch
What you use it for: backups and movie storage
Temps: I don't know, probably pretty low because it's in the basement and it only has one 120mm fan. I also can't hear it because it's in the basement.


----------



## Norse

Currently got two servers at work that I'm mucking about with.

HP ProLiant DL380 G3, 2x 3.2GHz, 4GB RAM and 3x 36GB HDDs in RAID 5 (server not in use)

Dell PowerEdge 2850, 2x 3.2GHz, 4GB RAM and 3x 36GB in RAID 5, 2x 36GB in RAID 1, and a 73GB drive. This server is running ESXi 4.1 for mucking about on. It sounds like a very loud vacuum cleaner, and I've removed a load of the fans too! Removed three of them and it still runs within acceptable temperatures, though it does like to complain a bit and the fans run at 11K RPM.



ESXi whining that I had removed fans and only had one power cable in


----------



## Pentium-David

I'll post pictures later, but 2 days ago the mobo died in my Celeron D server (some bulging caps; I'll just solder on some new ones later). I threw together a new server with a Cedar Mill Pentium 4 @ 3.6GHz, 2GB of 533MHz RAM, and an FSP 400W 80 Plus Gold PSU. Still running Windows Server 2003


----------



## Aestylis

Thought I would leave these here. ESXi setup I am working on getting configured. The CPUs and memory make me weep with joy like a small child.
I would post pics, but we have it housed halfway across the country.










Dual R910s with two 4870s each and 524GB of RAM.

EDIT. had to change photos and remove service tags.


----------



## bobfig

Quote:


> Originally Posted by *Aestylis*
> 
> Thought I would leave these here. esxi setup I am working on getting configured. CPU's and memory make me weep like a small child with joy.
> I would post pics but we have it housed half-way across the country.
> 
> 
> 
> 
> 
> 
> 
> 
> Dual R910's with 2 4870's each and 524GB ram.
> EDIT. had to change photos and remove service tags.


Nice, looks like a lot of fun, but I gotta ask: does it play Minecraft?


----------



## OverK1LL

Some pics of my first server build... Still a work in progress.





Spoiler: More pictures below (to save space on the thread)





^ Still need to re-route the last two. I was surprised how well this worked.



^Ethernet is just temporary. This switch is going to supply each NIC on each server with the fiber channels going to the main switch. The BNC cables are the first pull for the analog security cameras.



^Towards the end of the build. Luckily it was a weekend so the massive mess wasn't a problem.



^Cleaned up but still a long way to go.


----------



## Pentium-David

@ Overk1LL, what are the specs of that monster?


----------



## Aestylis

Quote:


> Originally Posted by *OverK1LL*
> 
> Some pics of my first server build... Still a work in progress.
> 
> 
> 
> 
> 
> Spoiler: More pictures below (to save space on the thread)
> 
> 
> 
> 
> ^ Still need to re-route the last two. I was surprised how well this worked.
> 
> 
> ^Ethernet is just temporary. This switch is going to supply each NIC on each server with the fiber channels going to the main switch. The BNC cables are the first pull for the analog security cameras.
> 
> 
> ^Towards the end of the build. Luckily it was a weekend so the massive mess wasn't a problem.
> 
> 
> ^Cleaned up but still a long way to go.


I love it when I get to say this... "nice rack!"


----------



## stubass

Quote:


> Originally Posted by *OverK1LL*
> 
> Some pics of my first server build... Still a work in progress.
> 
> 
> 
> 
> 
> Spoiler: More pictures below (to save space on the thread)
> 
> 
> 
> 
> ^ Still need to re-route the last two. I was surprised how well this worked.
> 
> 
> ^Ethernet is just temporary. This switch is going to supply each NIC on each server with the fiber channels going to the main switch. The BNC cables are the first pull for the analog security cameras.
> 
> 
> ^Towards the end of the build. Luckily it was a weekend so the massive mess wasn't a problem.
> 
> 
> ^Cleaned up but still a long way to go.


Nice job so far. I take it this is for your work, by the looks? Lucky bugger, I can't wait to build something like this


----------



## OverK1LL

Quote:


> Originally Posted by *stubass*
> 
> nice job so far, i take it this is for your work by the looks? lucky bugger, i cant wait to build something like this


Yup. It's for my business, a wholesale produce company. Wish I had one at my house, but one rack is enough to manage! lol

Quote:


> Originally Posted by *Pentium-David*
> 
> @ Overk1LL, what are the specs of that monster?


Nothing very special. All servers are single socket Xeons with around 16GB-32GB depending on the server. About 15TB of RAID storage combined (most of the storage is for the 2 DVR servers)

Quote:


> Originally Posted by *Aestylis*
> 
> I love it when i get to say this.... "nice rack!"


hahaaha. thanks man.


----------



## Pentium-David

Quote:


> Originally Posted by *OverK1LL*
> 
> Yup. It's for my business, a wholesale produce company. Wish I had one at my house, but one rack is enough to manage! lol
> 
> Nothing very special. All servers are single socket Xeons with around 16GB-32GB depending on the server. About 15TB of RAID storage combined (most of the storage is for the 2 DVR servers)
> 
> hahaaha. thanks man.


hahahahahaha, this guy. "Nothing special." You know, just 32GB of RAM and 15TB of RAID storage







I thought I was cool with 4TB


----------



## OverK1LL

Quote:


> Originally Posted by *Pentium-David*
> 
> hahahahahaha, this guy. "nothing special" You know, just 32GB RAM and 15TB of RAID storage
> 
> 
> 
> 
> 
> 
> 
> I thought I was cool with 4TB


Okay, let me clarify because when you put it that way what I said sounds very snobbish. I meant for a business setup. Personally I think the rack looks more powerful than it is. Almost all the servers have only 16GB, with the exception of the SBS (running exchange) with 32GB. And I do say "only 16GB" because the RAM usage fills up fast.

Don't get me wrong, they are powerful machines but in the realm of small business servers they are pretty bare bones.

If it makes you feel better, I don't even have a server at my house. Wish I had one though, for media purposes and such.


----------



## G3RG

My folding/rendering server 

It's also in my sig. Also very nice set up there Overkill!


----------



## Pentium-David

Quote:


> Originally Posted by *OverK1LL*
> 
> Okay, let me clarify because when you put it that way what I said sounds very snobbish. I meant for a business setup. Personally I think the rack looks more powerful than it is. Almost all the servers have only 16GB, with the exception of the SBS (running exchange) with 32GB. And I do say "only 16GB" because the RAM usage fills up fast.
> 
> Don't get me wrong, they are powerful machines but in the realm of small business servers they are pretty bare bones.
> 
> If it makes you feel better I don't even have a server at my house
> 
> 
> 
> 
> 
> 
> 
> . Wish I had one though, for media purposes and such.


Haha, I didn't think it was snobbish, just funny.

Well, I suggest you throw something old together!!! I've used free systems with Pentium 4s, Athlon XPs, and Celeron Ds. I even have a Pentium 3 server. Anything works


----------



## OverK1LL

After checking out G3RG's wicked folding server, I'm definitely going to build something for my house.

@G3RG, what case are you using for the MEB board? I couldn't find it in your thread...


----------



## staryoshi

If I make it to Microcenter this weekend, I'll be building a 960T-based server. If not, I don't know what I'll do with my extra parts


----------



## G3RG

Quote:


> Originally Posted by *OverK1LL*
> 
> After checking out G3R3's _wicked_ folding server, I'm definitely going to build something for my house.
> 
> @G3R3, What case are you using for the MEB board? I couldn't find it in your thread...


It's sitting on top of one of these, lying on its side:









I may build a wooden motherboard tray to mount the 4P to the wall at some point.


----------



## Moovin

OS: Ubuntu 11.10
Case: HP ProLiant (Gen2)
CPU: 4x Xeon 2.2GHz
Motherboard: Standard HP
Cooling: Tons of 120mm fans
Memory: 4GB of Samsung 1600MHz
PSU: 800W OEM
Storage HDD(s): Various sizes and speeds of SCSI drives totaling 284GB
Server Manufacturer: HP

What you use it for: Game server
Temps, loudness, etc.: Have yet to test. Loudness: very
Any additional software that you use: Source Dedicated Server, Bukkit.
Pics


----------



## NorCa

In my sig! Some leftover parts from my previous build. Still need to buy lots of hard drives. I'm considering picking up a Silverstone LC10-E


----------



## Andrew Colvin

My Fractal Define XL 14tb Unraid Server


----------



## Pentium-David

Quote:


> Originally Posted by *Andrew Colvin*
> 
> 
> My Fractal Define XL 14tb Unraid Server


What CPU is that? And nice SeaSonic


----------



## Andrew Colvin

The Fractal XL build is an Asus H67M-LE with a Pentium G620 and 4GB of G.Skill RAM, plus a Supermicro SAS expander.


----------



## OC-Guru

Quote:


> Originally Posted by *Andrew Colvin*
> 
> The Fractal XL build is an Asus H67 M LE with a Pentium G620, 4GB Gskill Ram. Supermicro SAS expander as well.


That's one sassy server







(xD)


----------



## ramicio

OS: Ubuntu Server 11.10
Case: Norco RPC-3216
CPU: i7-970
Motherboard: MSI X-58 Pro-E
Memory: 6GB (3 x 2GB)
PSU: 700w modular
OS HDD: 32GB SSD
Storage HDDs: 6 x 2TB Hitachi Deskstar 5k3000 in RAID 6 on Areca ARC-1222 w/ cache battery
Server Manufacturer: Me
Use: Fileserver, video encoder, web server




The CPU cooler has been changed; it was improper from the get-go, rated only for quad-core i7s. The wires to the rear fans have also had a connector added, and heat-shrink tubing where the packing tape was. I just had to get the system up and running quickly. It's also been moved to the basement, sitting on a counter instead of a laundry basket.

Future plans are a Dell 62xx switch and the XFP uplink module. I will be getting two XFP NICs, one for my server and one for my desktop. I am in the market for a small server rack to put this in, and to use to store future builds in between buying parts to complete them. I didn't really want to go with a Norco case and use consumer parts, but I needed a server in a hurry. I'll be saving money for a SuperMicro 24-bay 4U case, a dual 1366 board, some HBAs, and 24 2TB or 3TB hard drives (haven't decided yet). I will be going with a ZFS setup that emulates RAID 60.


----------



## ZFedora

Quote:


> Originally Posted by *ramicio*
> 
> OS: Ubuntu Server 11.10
> Case: Norco RPC-3216
> CPU: i7-970
> Motherboard: MSI X-58 Pro-E
> Memory: 6GB (3 x 2GB)
> PSU: 700w modular
> OS HDD: 32GB SSD
> Storage HDDs: 6 x 2TB Hitachi Deskstar 5k3000 in RAID 6 on Areca ARC-1222 w/ cache battery
> Server Manufacturer: Me
> Use: Fileserver, video encoder, web server


Looks awesome! Nice job


----------



## ramicio

Quote:


> Originally Posted by *ZFedora*
> 
> Looks awesome! Nice job


Thanks!


----------



## Syjeklye

Quote:


> Originally Posted by *ramicio*
> 
> OS: Ubuntu Server 11.10
> Case: Norco RPC-3216
> CPU: i7-970
> Motherboard: MSI X-58 Pro-E
> Memory: 6GB (3 x 2GB)
> PSU: 700w modular
> OS HDD: 32GB SSD
> Storage HDDs: 6 x 2TB Hitachi Deskstar 5k3000 in RAID 6 on Areca ARC-1222 w/ cache battery
> Server Manufacturer: Me
> Use: Fileserver, video encoder, web server
> 
> 
> The CPU cooler has been changed. It was improper from the get-go, only for quad-core i7s. The wires to the rear fans have also had a connector added to them and had heat shrink tubing added where the packing tape was. I just had to get the system up and running quickly. It's also been moved the to basement sitting on a counter, not sitting on a laundry basket.
> Future plans are a Dell 62xx switch and the XFP uplink module. I will be getting 2 XFP NICs. One for my server, one for my desktop. I am in the market for a small server rack to put this one, and to use to store future builds in between buying parts to complete them. I didn't really want to go with a Norco case and use consumer parts, but I needed a server in a hurry. Money will be getting saved for a SuperMicro 24-bay 4U case, a dual 1366 board, some HBAs, and 24 2TB or 3TB hard drives (haven't decided yet). I will be going with a ZFS setup that emulates RAID 60.


This is frighteningly similar to a server we use for encoding at my work.


----------



## Arsonx

OS: Windows 7 Pro 32bit
MB: SOLTEK K8AN2E-GR Socket 754
CPU: AMD Athlon 64 2800+ 1.8Ghz
RAM: 2 GB Mushkin DDR400
GPU: GeForce 6200 AGP Passive Cooler
PSU: Thermaltake 430w
CASE: Antec 900
HDD:
120 GB for OS, documents, music, pictures
520GB JBOD (3x 160GB + 1x 80GB)
2 TB for movies and tv shows




Use: Streaming 1080p movies, TV shows, music, and pictures to the TV in the living room and the PS3 in the bedroom, plus file sharing. I would also like to set up an FTP service soon; not enough time.

This was the first computer I ever built, back in '04 for gaming purposes (it had an X1650 Pro though), and it has been one of the most reliable, if not the most reliable, PCs I've had. I could not let that PC go for nothing.


----------



## Texasinstrument

This is a very basic server I use as a NAS for the local Starbucks and McDonalds to use if the users please. WiFi users can connect to it easily and yes, it's actually being used.

OS: Windows 2000 Pro
Case: stock
CPU: Pentium 3 500MHz (upgraded from P2 350)
Motherboard: Intel RC440BX flashed to Intel BIOS from Gateway BIOS
Cooling: stock
Memory: 384MB PC100 SDRAM (upgraded from 64MB)
PSU: a whopping 90W
OS HDD: Quantum Fireball 8GB
Storage HDD(s): 350GB Western Digital 7200RPM HD
Server Manufacturer: Gateway


----------



## Moovin

Quote:


> Originally Posted by *Texasinstrument*
> 
> This is a very basic server I use as a NAS for the local Starbucks and McDonalds to use if the users please. WiFi users can connect to it easily and yes, it's actually being used.
> OS: Windows 2000 Pro
> Case: stock
> CPU: Pentium 3 500mhz (upgraded from P2 350)
> Motherboard: Intel RC440BX flashed to Intel BIOS from Gateway BIOS
> Cooling: stock
> Memory: 384MB PC100 SDRAM (upgraded from 64MB)
> PSU: a whopping 90W
> OS HDD: Quantrum Fireball 8GB
> Storage HDD(s): 350GB Western Digital 7200RPM HD
> Server Manufacturer: Gateway


Wow. How many people are usually on it?


----------



## Texasinstrument

Quote:


> Originally Posted by *Moovin*
> 
> Wow. How many people are usually on it?


Depends on who's using the WiFi at the moment. 1-2 users most of the time


----------



## Moovin

Quote:


> Originally Posted by *Texasinstrument*
> 
> depends on who's using the wifi at the moment. 1-2 users most of the time


Ahh. So it's just for network storage?


----------



## Texasinstrument

Quote:


> Originally Posted by *Moovin*
> 
> Ahh. So its just for network storage?


Yeah, basically that. Mostly documents are on there.


----------



## Moovin

Quote:


> Originally Posted by *Texasinstrument*
> 
> yeah. basically that. mostly documents are on there.


That's pretty cool. How does it handle with all the old tech?


----------



## Texasinstrument

Quote:


> Originally Posted by *Moovin*
> 
> Thats pretty cool. How does it handle it with all the old tech?


NAS isn't demanding at all. A 386 could run NAS duty easily.


----------



## Moovin

Quote:


> Originally Posted by *Texasinstrument*
> 
> NAS isn't demanding at all. a 386 could run NAS duty easily.


Even with all the slower HDDs? My uncle has a server setup in his house for torrenting, with 1Gbps connections in his house for the server. He ends up having slower download times because the drive can't keep up.


----------



## Texasinstrument

Quote:


> Originally Posted by *Moovin*
> 
> Even with all the slower HDD? My uncle has a server setup in his house for torrenting. Has 1Gbps connections in his house for the server. He ends up having slower download times because the drive cant keep up.


The Western Digital HDD isn't slow. It used to just run on the 8GB Quantum Fireball from 1997, and it wasn't slow then.


----------



## Moovin

Quote:


> Originally Posted by *Texasinstrument*
> 
> the Western Digital HDD isn't slow. It used to just run on the 8GB Quantrum Fireball from 1997 and it wasn't slow then.


Ahh, I didn't see the drive speed. I just glanced at it quickly. My fault.


----------



## joshd

Really cool! Think of all the personal information you will have...


----------



## ZFedora

Quote:


> Originally Posted by *Texasinstrument*
> 
> This is a very basic server I use as a NAS for the local Starbucks and McDonalds to use if the users please. WiFi users can connect to it easily and yes, it's actually being used.
> OS: Windows 2000 Pro
> Case: stock
> CPU: Pentium 3 500mhz (upgraded from P2 350)
> Motherboard: Intel RC440BX flashed to Intel BIOS from Gateway BIOS
> Cooling: stock
> Memory: 384MB PC100 SDRAM (upgraded from 64MB)
> PSU: a whopping 90W
> OS HDD: Quantrum Fireball 8GB
> Storage HDD(s): 350GB Western Digital 7200RPM HD
> Server Manufacturer: Gateway


That's awesome! I love to see old hardware repurposed! The oldest server I have running is an AMD Athlon 3800+(?) with 512MB of RAM; not close to a PIII, but still pretty old


----------



## Texasinstrument

Quote:


> Originally Posted by *ZFedora*
> 
> That's awesome! I love to see old hardware re-purposed! The oldest server I have running is an AMD Athlon 3800+(?) with 512MB ram, not close to a PIII but still older


Being only 13, I don't have much of a sense of time for computers, but using that Athlon 3800+ as a server is a waste; it can still play games on lower settings and make a great web-browsing machine.


----------



## Texasinstrument

Is that 3800 a single core or a dual?


----------



## ZFedora

Uh yeah I'd rather not use it as anything but a server.


----------



## joshd

Quote:


> Originally Posted by *ZFedora*
> 
> Uh yeah I'd rather not use it as anything but a server.


What OS?


----------



## ZFedora

Quote:


> Originally Posted by *joshd*
> 
> What OS?


Debian 6


----------



## joshd

Quote:


> Originally Posted by *ZFedora*
> 
> Debian *6*


6? Not thinking about upgrading it for better security and hardware support, etc.?

EDIT: Nvm, I was thinking it was on the same release as Ubuntu.

*Move along folks, nothing happened here!*


----------



## ZFedora

Quote:


> Originally Posted by *joshd*
> 
> 6? Not thinking about upgrading it for better secutiry and hardware support etc?
> EDIT: Nvm, I was thinking it was on the same release as Ubuntu.
> *Move along folks, nothing happened here!*


6 is the newest release Mr. Linux Lobbyist


----------



## joshd

Quote:


> Originally Posted by *ZFedora*
> 
> 6 is the newest release Mr. Linux Lobbyist


Hehe check my edit


----------



## ZFedora

Quote:


> Originally Posted by *joshd*
> 
> Hehe check my edit


I know haha, I was kidding. But yeah, I don't really have a use for the Athlon besides being a server. Everyone else in my family has fairly new laptops/desktops


----------



## Texasinstrument

Quote:


> Originally Posted by *ZFedora*
> 
> I know haha, I was kidding. But yeah, I don't really have a use for the Athlon besides being a server. Everyone else in my family has fairly new laptops/desktops


Being stuck using an ancient 2.4GHz Northwood Pentium 4 and 384MB of RAM because the crappy Corsair PSU in my workstation died, I value that Athlon quite highly at the moment (lol)


----------



## joshd

Quote:


> Originally Posted by *Texasinstrument*
> 
> Being stuck using an ancient 2.4ghz Northwood Pentium 4 and 384MB of RAM because the crappy Corsair PSU in my workstation died, I do value that Athlon very well at the moment (lol)


If you run Linux on it, it should be fine.


----------



## raiderxx

Slightly modded Antec 900
AMD Athlon 64 3200+
ASUS A8N-VM Mobo
2x 512MB RAM
600W Rosewill PSU
LSI SAS 3081E-R controller card
Running Windows Home Server SP3

1x Seagate Barracuda 160 gig for OS
1x Seagate Barracuda 500 gig
2x Seagate Constellation 500 gig
2x Western Digital Black 1TB
1x Western Digital Black 500 gig


----------



## Texasinstrument

Quote:


> Originally Posted by *raiderxx*
> 
> Slightly modded Antec 900
> Athlon 3200+
> ASUS A8N-VM Mobo
> 500 mb RAM
> 600W Rosewill PSU
> LSI SAS 3081E-R controller card
> Running Windows Home Server SP3
> 1x Seagate Barracuda 160 gig for OS
> 1x Seagate Barracuda 500 gig
> 2x Seagate Constellation 500 gig
> 2x Western Digital Black 1TB
> 1x Western Digital Black 500 gig


Is that an Athlon 64 3200+ or an Athlon XP 3200+?


----------



## raiderxx

Quote:


> Originally Posted by *Texasinstrument*
> 
> Quote:
> 
> 
> 
> Originally Posted by *raiderxx*
> 
> Slightly modded Antec 900
> Athlon 3200+
> ASUS A8N-VM Mobo
> 500 mb RAM
> 600W Rosewill PSU
> LSI SAS 3081E-R controller card
> Running Windows Home Server SP3
> 1x Seagate Barracuda 160 gig for OS
> 1x Seagate Barracuda 500 gig
> 2x Seagate Constellation 500 gig
> 2x Western Digital Black 1TB
> 1x Western Digital Black 500 gig
> 
> 
> Is that an Athlon 64 3200+ or an Athlon XP 3200+?
Click to expand...

64. Thanks for clarifying. (Also, forgot that I have two sticks of RAM; updated post.)


----------



## mr one

Quote:


> Originally Posted by *Thynsiia*
> 
> i got these lying around:
> 
> 
> 
> 
> 2 X Dell poweredge 1850
> 4 X hp proliant dl385 g1
> i have no idea what to do with them, they make a lot of noise, and haven't got that much storage


give one for me


----------



## killabytes

Just some updated pictures for the folks. Most of you know me as a server fiend. Sooo, yeah


----------



## hoostie

i7 2600
Mobo: ASRock Extreme4 Gen3 Z68
16GB of Corsair Vengeance DDR3 1600MHz
Intel PCIe gigabit NIC
Corsair Force Series 3 60GB SSD for OS
PC Power and Cooling Silencer 760
HighPoint 2720SGL PCIe x8 mini-SAS card
Cooler Master Hyper 212 cooler
And now for the hard drives; some are new, some are old:
3x Seagate 2TB 5900RPM
1x Hitachi 2TB 7200RPM
2x Seagate 1.5TB 7200RPM
3x WD Green 1TB 5400RPM
1x Seagate 1TB 7200RPM
1x Raptor X 150GB 10K RPM (for VMs)
1x Hitachi 160GB 7200RPM 2.5in (for VMs), had it laying around
1x Seagate 80GB 7200RPM (for nightly image backups of Server 2008)

Using Drive Bender I have 13.6TB of formatted storage
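A quick aside on why the number comes out where it does: drive makers rate capacity in decimal terabytes, while Windows reports in binary units, so this ~15.4TB pile of drives shows up as roughly 14TB even before filesystem overhead. A minimal sketch of the arithmetic (the drive list is transcribed from the post above; the small remaining gap to the reported 13.6TB is formatting/filesystem overhead):

```python
# Decimal TB (as sold) vs. binary TiB (as reported by the OS).
# Drive sizes transcribed from the spec list above, in decimal TB.
DRIVES_TB = [2.0] * 3 + [2.0] + [1.5] * 2 + [1.0] * 3 + [1.0] + [0.15, 0.16, 0.08]

raw_tb = sum(DRIVES_TB)             # total raw capacity, decimal TB
as_tib = raw_tb * 10**12 / 2**40    # the same bytes in binary TiB

print(f"raw: {raw_tb:.2f} TB, reported: {as_tib:.2f} TiB")
```

So about 1.4 "TB" evaporate purely from the unit mismatch, before any pooling software touches the drives.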


----------



## Pentium-David

Quote:


> Originally Posted by *Arsonx*
> 
> OS: Windows 7 Pro 32bit
> MB: SOLTEK K8AN2E-GR Socket 754
> CPU: AMD Athlon 64 2800+ 1.8Ghz
> RAM: 2 GB Mushkin DDR400
> GPU: GeForce 6200 AGP Passive Cooler
> PSU: Thermaltake 430w
> CASE: Antec 900
> HDD:
> 120 GB for OS, documents, music, pictures
> 520 GB JBOD (160 x 3 + 1 80)
> 2 TB for movies and tv shows
> 
> 
> Use : Streaming 1080P movies, tv shows, music, pictures to the TV in the living room, PS3 in the bedroom and file sharing. I would also like to setup a ftp service soon, not enough time.
> This was the first computer I've ever built back in 04 for gaming purpose (it had a x1650 pro tho) and it has been one of the most reliable, if not the most reliable PC I've had. I could not let that PC go for nothing.


Nice server, but yuck, get that POS Thermaltake 430W out of there; it's a crappy HEC unit. Get an 80 Plus unit in there!


----------



## Texasinstrument

Quote:


> Originally Posted by *Pentium-David*
> 
> Nice server but yuck get that POS Thermaltake 430W out of there, it's a crappy HEC unit, get an 80 plus unit in there!


Are "HEC" units bad? I've steered away from them because I don't really know who they are.


----------



## nyxcharon

Something I whipped up today with spare parts laying around. Installed Samba on it and I'm using it as a file server on my home network.
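For anyone wanting to replicate a spare-parts Samba box like this, the share setup is only a few lines of `/etc/samba/smb.conf`. A minimal sketch; the share name and path here are examples, not nyxcharon's actual config:

```ini
[global]
   workgroup = WORKGROUP
   server string = Home file server
   security = user

[storage]                ; example share name
   path = /srv/storage   ; example path
   browseable = yes
   read only = no
```

After editing, add a Samba user with `smbpasswd -a <username>` and restart the smbd service; the share then shows up under the machine's name in Windows Explorer.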


----------



## joshd

Quote:


> Originally Posted by *nyxcharon*
> 
> Something I whipped up today with spare parts laying around. Installed samba on it and using it to as a file server on my home network.


Looks good. What are the specs of it?


----------



## nyxcharon

Quote:


> Originally Posted by *joshd*
> 
> Looks good. What are the specs of it?


Amd Athlon (I think)
2GB DDR2 ram
250 GB HDD
Nvidia gpu -integrated (don't remember)

I just kinda threw it together. Made the heatsink myself








It was just an old plate, heatsink, and fan; did a bit of work to 'em and got it all worked out.


----------



## Moovin

Quote:


> Originally Posted by *nyxcharon*
> 
> Quote:
> 
> 
> 
> Originally Posted by *joshd*
> 
> Looks good. What are the specs of it?
> 
> 
> 
> Amd Athlon (I think)
> 2GB DDR2 ram
> 250 GB HDD
> Nvidia gpu -integrated (don't remember)
> 
> I just kinda threw it together. Made the heatsink myself
> 
> 
> 
> 
> 
> 
> 
> 
> It was just a old plate, heat-sink and fan, did a bit of work to 'em and got it all worked out.
Click to expand...

Not bad! It came out really nice.

Sent from my DROID X2 using Tapatalk


----------



## BodenM

This is my server box that my dad got for me when the business he was working for closed up shop.

OS: Ubuntu 11.10 x86
Case: Stock ProLiant ML570 G2 7RU case
CPU: 4x Intel Xeon Gallatins @ 2.8GHz
Motherboard: Stock ProLiant ML570 G2
Cooling: Stock passive heatsinks + 2x stock air deflectors
Memory: 4GB DDR2
PSU: 3x 600w PSUs (2+1 redundant)
OS HDD: 2x 10kRPM 36.4GB SCSI in RAID 0
Storage HDD(s): see above
Server Manufacturer: Hewlett-Packard

What you use it for: was used for folding, before my parents discovered the power bill; now it just sits in my room, unused. Gonna sell it so I can start buying parts to build my own PC.
temps, loudness, etc: almost deafeningly loud during startup, quiets down during idle. Seems to run pretty cool.
Pic will come later, cause it's nearly 2am here, xD


----------



## diecast

This box was designed to serve out video locally and remotely. The content sources vary so greatly that I needed to be able to transcode and re-encode quickly and smoothly, without interrupting the streams, regardless of where they were being accessed from. For this I've implemented a few Python programs that I wrote, with nginx as the front-end, which assists in determining what type of codec is to be used.

OS: Ubuntu 12.04 LTS (soon to be final)
Case: CoolerMaster HAF 922
CPU: Intel i5 2500K
Overclock: 4.532GHz
Motherboard: ASUS P8Z68-V PRO/GEN3
Cooling: CoolerMaster H212+
Memory: G.SKILL Ripjaws X Series 8GB DDR3 1600 (PC3 12800) F3-12800CL9D-8GBXL
PSU: Corsair HX520W Modular
OS SSD: 2x Crucial M4 64GB - Intel RAID0
Storage HDD(s): (list later when I get home, server is offline right now so I can't access drive info)
Server Manufacturer: Me

What you use it for: File server, backup storage
Temps, loudness:
Any additional software that you use:
Pics
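diecast doesn't share the code, but the codec-decision step he describes can be sketched in a few lines. Everything below is a hypothetical illustration of the idea, not his actual program: the `CLIENT_SUPPORT` table, the client names, and `pick_codec` are all invented for the example.

```python
# Hypothetical sketch of a codec dispatcher behind an nginx front-end:
# the front-end passes along what the client can play natively, and
# the dispatcher decides whether to stream as-is or re-encode.

# Codecs each client type is assumed to handle natively (illustrative).
CLIENT_SUPPORT = {
    "browser": {"h264", "vp8"},
    "ps3":     {"h264", "mpeg2"},
    "mobile":  {"h264"},
}

def pick_codec(source_codec: str, client: str) -> str:
    """Return the codec to deliver: pass through if the client
    supports the source, otherwise fall back to transcoding to H.264."""
    supported = CLIENT_SUPPORT.get(client, set())
    if source_codec in supported:
        return source_codec   # stream untouched
    return "h264"             # re-encode to the common target

print(pick_codec("vp8", "ps3"))   # not supported by the PS3 -> "h264"
```

The real system would hand the "needs re-encoding" case off to a transcoder process rather than just returning a string, but the branch point is the same.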


----------



## overclocker23578

1st Gen ProLiant DL585, only running 2 of 4 CPU + RAM boards atm

OS: Server 2008 R2 Standard, may switch to ESX/ESXi
Case: DL585 case
CPU: 4x dual-core Opteron 875s (2 running atm, don't need all the power)
Motherboard: DL585
Memory: 32GB (running 16 atm)
PSU: 2x 800W hot-plug redundant
OS HDD: 4x 73GB 15K SCSI drives in RAID 5
Storage HDD(s): None
Server Manufacturer: HP

What you use it for: Game server (L4D2, CSS, Minecraft), Web Server (IIS), Domain controller, DNS server, will be doing a lot more in a few weeks (Lots of web hosting, DDNS server)

Temps, loudness, etc: CPUs average 40-50°C under Prime95; jet-engine levels of noise

Pics:


----------



## Pentium-David

Quote:


> Originally Posted by *Texasinstrument*
> 
> Are "hec" units bad? I've steered away from them because I don't really know who they are.


Not all of them, but that one is. Buy this instead; more stable voltages for your hardware, and it uses a lot less electricity: http://www.newegg.com/Product/Product.aspx?Item=N82E16817371033


----------



## ramicio

I ordered and added two more 2TB drives to my RAID 6 array late last week; I was running out of space. That's the limit of what my card can handle.
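For context on the sizes involved: RAID 6 keeps two disks' worth of parity regardless of array width, so usable space is (n - 2) x disk size and the array survives any two simultaneous drive failures. A back-of-envelope check of the array before and after the upgrade; this ignores decimal-vs-binary units and filesystem overhead:

```python
# RAID 6 usable capacity: two disks' worth of space go to parity,
# so usable = (n - 2) * disk_size for n identical disks.

def raid6_usable_tb(n_disks: int, disk_tb: float) -> float:
    if n_disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return (n_disks - 2) * disk_tb

print(raid6_usable_tb(6, 2.0))   # original 6-drive array -> 8.0
print(raid6_usable_tb(8, 2.0))   # after adding two drives -> 12.0
```

So the two new drives add their full 4TB of usable space, since the parity cost was already paid by the first six.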


----------



## joshd

Quote:


> Originally Posted by *ramicio*
> 
> I ordered and added two more 2TB drives to my RAID 6 array late last week. I was running out of space. This is it for what my card can handle.


----------



## Plan9

I've upgraded the hardware on my home server:

*OS:* FreeBSD 8.1-RELEASE (GENERIC)
*CPU:* AMD Phenom(tm) II X3 720 Processor (2812.55-MHz K8-class CPU)
*RAM:* 8GB
*HDD (boot):* 80GB IDE - formatted: UFS
*HDD (storage):* 6x 1TB SATAII - formatted: 1 ZFS pool
*ZFS config:*

Code:

  pool: zprimus
 state: ONLINE
 scrub: scrub completed after 5h8m with 0 errors on Wed Apr  4 06:09:26 2012
config:

 NAME        STATE     READ WRITE CKSUM
 zprimus     ONLINE       0     0     0
   raidz1    ONLINE       0     0     0
     ad10    ONLINE       0     0     0
     ad14    ONLINE       0     0     0
     ad12    ONLINE       0     0     0
   raidz1    ONLINE       0     0     0
     ad2     ONLINE       0     0     0
     ad4     ONLINE       0     0     0
     ad6     ONLINE       0     0     0

Amazingly, even booting from IDE, the system flies. But then I guess most of the OS gets cached, and all of the VMs I run are on the ZFS array, so they're not bottlenecked by IDE.
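For readers parsing the `zpool status` output above: the pool stripes two raidz1 vdevs of three disks each (roughly the result of `zpool create zprimus raidz1 ad10 ad14 ad12 raidz1 ad2 ad4 ad6`), so each vdev donates one disk to parity and can lose one disk. A rough sketch of the resulting usable space, ignoring ZFS metadata overhead and decimal-vs-binary units:

```python
# Usable space of a pool striped across raidz1 vdevs:
# each raidz1 vdev of n disks stores (n - 1) disks' worth of data
# and tolerates one disk failure within that vdev.

DISK_TB = 1.0
vdev_widths = [3, 3]   # two 3-disk raidz1 vdevs, per the zpool status above

usable_tb = sum((n - 1) * DISK_TB for n in vdev_widths)

print(usable_tb)       # 4.0 TB usable out of 6 TB raw
```

The striping across vdevs is also why the layout behaves like "RAID 50 with ZFS checksums": reads and writes spread across both vdevs, but a second failure inside the same vdev loses the pool.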


----------



## axipher

Amazing servers all









I don't have a new server to post yet, still working on one, but I have a question.

My friend is looking into an 8x 2TB solution from QNAP, and I told him I would look into a custom build instead. I started the following thread and was hoping to get some input from all you crazy server fanatics. Any help is greatly appreciated.

http://www.overclock.net/t/1241371/nas-server-build-for-8x-2-tb-raid-6-ssd-boot/0_50


----------



## Plan9

Quote:


> Originally Posted by *hoostie*
> 
> 3x seagte 2tb 5900 rpm
> 1x 2tb hitachi 7200rpm
> 2x seagate 1.5tb 7200rpm
> 3x wd green 1tb 5400rpm
> 1x seagate 1tb 7200rpm
> 1x 150gb raptor x 10k rpm (for vm's)
> 1x 160 gb hitachi 7200rpm 2.5 in (for vm's) had laying around
> 1x seagate 80gb 7200 rpm (for nightly image backups of server 2008
> Using drive bender I have 13.6tb of formatted storage


How does that work? Are your drives pooled into one volume? And if so, what sort of redundancy does this offer you (in terms of HDDs dying)?


----------



## hoostie

Quote:


> Originally Posted by *Plan9*
> 
> How does that work? Are your drives pooled into one volume? And if so, what sort of redundancy does this offer you (in terms of HDDs dying)?


Yep, all of my drives are pooled together. As far as redundancy, I don't have a whole lot. I can turn on duplication for the files that I don't want to lose; it will then put those files on two drives. I do this for photos and documents, but not my media. If I lose a drive, I just lose what's on that drive; the rest of the pool is not affected. My media collection expands quite rapidly, so I just could not see the cost-benefit of going with hardware RAID 5 or 6. This way, if I lose a drive, it's not the end of the world; it won't take me that long to get 1 or 2TB of data back if a drive dies. Maybe some day, if I decide I want to spend the money, I will get a nice card and go RAID 6. Right now I just can't convince myself to go that route.


----------



## Plan9

Quote:


> Originally Posted by *hoostie*
> 
> Yep, all of my drives are pooled into each other. As far as redundancy, I don't have a whole lot. I can turn on duplication for the files that I don't want to loose. It will then put the files on 2 drives. I do this on photos, and documents, but not my media. If I loose a drive I just loose whats on that drive. The rest of the pool is not affected. My media collection expands quite rapidly, so I just could not see the benefit to cost in going with a hardware raid 5 or 6. This way If I loose a drive, it's not the end of the world. It wont take me that long to get 1 or 2tb of data back if a drive dies. Maybe some day if I decide I want to spend the money I will get a nice card and go raid 6. Right now I just can't convince my self to go that route.


Are you absolutely sure you'd only lose the data on that disk? While I've had no experience with the specific software RAID you're running, the general rule is that if a HDD dies and you have no (spare) redundancy disks in that storage pool, then the whole pool dies.

However, even if you are right: if you don't mind losing a few TB of data to hardware failure, then a lot of that data could presumably be deleted anyway, and you could afford to have proper redundancy. So why not do that?

I appreciate this system might work for you, but I'd be waking up in cold sweats if I dared run that kind of set up at home. The whole uncertainty about the resilience of the system - not to mention the guesswork as to where the data is stored - scares me. But then if it works for you, then so be it


----------



## hoostie

Quote:


> Originally Posted by *Plan9*
> 
> Are you absolutely sure you'd only lose the data on that disk? While I've had no experience with the specific software RAID you're running, the general rule is that if a HDD dies and you have no (spare) redundancy disks in that storage pool, then the whole pool dies.
> However, even if you are right: if you don't mind losing a few TB of data to hardware failure, then a lot of that data could presumably be deleted anyway, and you could afford to have proper redundancy. So why not do that?
> I appreciate this system might work for you, but I'd be waking up in cold sweats if I dared run that kind of set up at home. The whole uncertainty about the resilience of the system - not to mention the guesswork as to where the data is stored - scares me. But then if it works for you, then so be it


Yep, I am absolutely sure. I even tested it before I set it all up. Everything is stored in NTFS, and you can pull a drive and the pool is still good. I do have enough space right now that I could duplicate everything, but to be honest it's just media. All of my important files are duplicated and backed up in the cloud. I have a decent internet connection at home, so I can download a lot of stuff quickly, and I keep a list of all my media so I can get it back fast if I need to. Right now every drive is a little less than half full, so at most I lose 900GB, and at the least around 400GB. Not saying this is the ideal situation, but it works for me right now. If I lose some TV shows or movies it's not going to kill me.
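The failure mode described above can be sketched in a few lines. This is a toy simulation (hypothetical file and drive names, not Drive Bender's actual placement logic): each file lives whole on one member drive, duplicated files get a second copy on a different drive, and losing a drive only loses the sole copies stored on it:

```python
# Toy model of a pooled-drive setup with per-file duplication.
# Assumption: round-robin placement; real pooling software decides differently.

def place(files, n_drives):
    """Spread files round-robin; duplicated files also land on the next drive."""
    drives = [set() for _ in range(n_drives)]
    for i, (name, duplicated) in enumerate(files):
        drives[i % n_drives].add(name)
        if duplicated:
            drives[(i + 1) % n_drives].add(name)
    return drives

def survivors(drives, failed):
    """Files still readable after drive `failed` dies."""
    return set().union(*(d for i, d in enumerate(drives) if i != failed))

files = [("photos.zip", True), ("movie1.mkv", False),
         ("docs.tar", True), ("movie2.mkv", False)]
drives = place(files, 3)
alive = survivors(drives, failed=0)   # duplicated files survive a single failure
```

Only the undup­licated files that happened to sit on the dead drive are gone; the rest of the pool stays readable, which matches the behavior described above.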


----------



## Plan9

Quote:


> Originally Posted by *hoostie*
> 
> Yep, I am absolutely sure. I even tested it before I set it all up. Everything is stored in NTFS, and you can pull a drive and the pool is still good. I do have enough space right now that I could duplicate everything, but to be honest it's just media. All of my important files are duplicated and backed up in the cloud. I have a decent internet connection at home, so I can download a lot of stuff quickly, and I keep a list of all my media so I can get it back fast if I need to. Right now every drive is a little less than half full, so at most I lose 900GB, and at the least around 400GB. Not saying this is the ideal situation, but it works for me right now. If I lose some TV shows or movies it's not going to kill me.


The nice thing about a decent RAID 5 is that you're not actually duplicating everything. You're creating repair information via parity and checksums. On my system, for example, I have 6 drives with 2 disks of redundancy (i.e. 2 disks can die and I don't lose any data) and I only lose 1/3 of the storage to account for that - not half the storage (though I'm running ZFS raidz2, not RAID 5 - the principle is similar).

Good to hear that you've checked your pool's resilience already, though. That would have been my biggest worry.

And finally, with all my opinions said and done, it does sound like you've found a good alternative to RAID - even if it's not my personal preference. The diversity of available solutions is (in my opinion at least) one of the great things about working in IT. I love reading how other people have approached the same problems from a completely different angle, so thanks for taking the time to explain your set up.
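The "repair information" idea is easy to demo: RAID 5's parity is essentially an XOR across the stripe, so any one lost block can be rebuilt from the surviving blocks. A toy sketch, not a real RAID implementation:

```python
# RAID 5 stores parity rather than full copies: for each stripe, the parity
# block is the XOR of the data blocks, so any single lost block can be
# rebuilt from the rest.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]     # one stripe across 3 data drives
parity = xor_blocks(data)              # stored on a 4th drive

# "Drive 1" dies; rebuild its block from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

That's why n drives plus one parity drive only costs you 1/n of the space, instead of the 50% that full duplication costs.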


----------



## hoostie

Quote:


> Originally Posted by *Plan9*
> 
> The nice thing about a decent RAID 5 is that you're not actually duplicating everything. You're creating repair information via parity and checksums. On my system, for example, I have 6 drives with 2 disks of redundancy (i.e. 2 disks can die and I don't lose any data) and I only lose 1/3 of the storage to account for that - not half the storage (though I'm running ZFS raidz2, not RAID 5 - the principle is similar).
> Good to hear that you've checked your pool's resilience already, though. That would have been my biggest worry.
> And finally, with all my opinions said and done, it does sound like you've found a good alternative to RAID - even if it's not my personal preference. The diversity of available solutions is (in my opinion at least) one of the great things about working in IT. I love reading how other people have approached the same problems from a completely different angle, so thanks for taking the time to explain your set up.


Oh, don't get me wrong, I would love to go RAID 5 - actually, I would probably go RAID 6. The main reason I haven't is that I have kind of a hodgepodge of disks and wanted to use the stuff I already had. Heck, I just built a 12-disk RAID 6 array with 2TB drives at work; I would love to have that setup at my house, I just don't have that kind of spare money sitting around right now. I love reading about everyone's solutions to problems too. Being in IT, I like learning and reading about what other people do - the more you learn, right? On a side note, I do have a RAID 5 array with 500GB drives in an older server. It houses all the things I can't lose.


----------



## Mygaffer

I have just got most of my components in. It's a Xeon 1230 at 3.2GHz with an Intel barebones kit: chassis, PSU, board, and a RAID module. I still have to get some RAM and then buy some drives down the road. I am really excited to get it going.


----------



## Plan9

Quote:


> Originally Posted by *hoostie*
> 
> Oh, don't get me wrong, I would love to go RAID 5 - actually, I would probably go RAID 6. The main reason I haven't is that I have kind of a hodgepodge of disks and wanted to use the stuff I already had. Heck, I just built a 12-disk RAID 6 array with 2TB drives at work; I would love to have that setup at my house, I just don't have that kind of spare money sitting around right now. I love reading about everyone's solutions to problems too. Being in IT, I like learning and reading about what other people do - the more you learn, right? On a side note, I do have a RAID 5 array with 500GB drives in an older server. It houses all the things I can't lose.


ZFS might be a solution for you then - you could still use your random mix of disks and have them in a redundant pool. (I nearly did this myself.) However, it would mean you can't run Windows on your server. Not sure if that's a deal breaker for you or not
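One caveat worth knowing with a hodgepodge of disks: within a single raidz vdev, each disk only contributes as much space as the smallest member, so a rough usable-space estimate looks like this (illustrative sizes; real ZFS metadata and padding overhead shaves off a bit more):

```python
# Rough raidz usable-capacity estimate for mixed-size disks.
# Assumption: one vdev, where every member is truncated to the smallest disk.

def raidz_usable_tb(sizes_tb, parity=1):
    """Approximate usable TB of a raidz vdev: (n - parity) * smallest disk."""
    n = len(sizes_tb)
    if n <= parity:
        raise ValueError("need more disks than parity drives")
    return (n - parity) * min(sizes_tb)

mixed = [2.0, 2.0, 1.0, 1.0, 0.5]          # a hodgepodge of drives, in TB
usable = raidz_usable_tb(mixed, parity=1)  # raidz1 -> 4 * 0.5 = 2.0 TB
```

So with very uneven disks, a lot of the larger drives' space is simply wasted - which is why people sometimes group similar-sized disks into separate vdevs instead.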


----------



## Aestylis

Finally getting around to posting some pictures, not the greatest though.









Home ESXI server I built from free/near free parts. See system in Sig.

OS: ESXi 5
Case: NZXT Whisper
CPU: 2x Intel Xeon L5320 (quad-core), BSEL modded to 2.33GHz
Motherboard: Intel S5000PSL
Memory: 20GB DDR2-667 FB-DIMMs (the picture was taken before a memory upgrade)
Cooling: Dual Cooler Master Hyper 101s on the CPUs with Thermaltake LGA771 brackets. 6 total 120mm fans, plus several others for the FB-DIMMs and positive airflow out the rear.
PSU: 550W Diablotek PSU (cheap but works great)
OS HDD: 4GB USB flash drive
Storage HDD(s): 8x 250GB WD RE drives with one spare. 750GB WD as a backup drive.
Server Manufacturer: Me.

What you use it for:
(Minecraft server, web server, file server, media server, test environment for certifications, etc.)

Temps, loudness, etc.
Extremely quiet. Haven't gotten the temps out yet.


----------



## ALpHaMoNk

Chassis - Norco 4020
Motherboard - Supermicro X8SIA-F-O LGA 1156
CPU - Intel Xeon X3440 @ 2.53GHz
Cooling - Dynatron K666 60mm 2-ball CPU cooler
Memory - 16GB Kingston (4 x 4GB) 240-pin ECC unbuffered DDR3 1333 (PC3 10600)

Storage

OS drive - Seagate ST380815AS, SATA 3Gb/s, 80GB, 7,200 RPM
Game ISO drive - Seagate ST3750640AS, SATA 3Gb/s, 750GB, 7,200 RPM
Misc drives - 1x WD1001FALS 1TB, 1x WD10EAVS 1TB
RAID 6 array:
Controller - Areca 1680, 8-port, 2GB cache
Intel RES2SV240 SAS expander, 24-port
13x Hitachi 7K2000 & 7K3000 2TB

ODD - Lite-On slim 8X DVD burner, model DS-8A2S-A01
LAN - onboard 2x Intel 82574L (teamed)
PSU - BFG 1000W
OS - Windows Server 2008 R2 Standard

Purpose

File server (mostly HD streaming)
Torrent server (no longer)
Newsgroup downloader
VMware
FTP
AirVideo / Qloud
Movie/TV show metadata fetcher
Print server
Backup server (until I build a dedicated backup server)

-=Still in the works=-


----------



## hoostie

Just redid things a bit. I am using one of my 1TB Seagate 7200RPM drives for WHS 2011 in a VM - it makes backing up PCs pretty easy. I also added a WD 3TB Green drive, so I now have a total of 15.4TB of formatted storage. I also redid some of my cable management.

before


After


----------



## ZealotKi11er

Before I spend some real money to build a $500 file server, I am using some old hardware to see if it benefits me in any way.

Intel Pentium D 805 @ 2.66GHz
1GB + 512MB DDR1
ASUS P5S800-VM
Intel stock cooler or Xigmatek HDT-S1283
ATi Radeon X1650 PRO 512MB AGP
Generic 460W PSU
CM 690 (heavily modified)
1 x 20GB SATA HDD
1 x 120GB IDE HDD
1 x 200GB IDE HDD

Not sure what OS to get for this; whichever I pick, I plan to use it on the real server I build. I know I can't run Linux because I have two programs that require Windows to run.


----------



## ChRoNo16

Depending what your final plans are, you can get a copy of Windows Home Server off newegg for around 50 bucks.

I like version 1.

Make a nice quad core with like 4-8 gigs of RAM - not real expensive - and it should be perfect for backup, media server, whatever you need


----------



## blupupher

Quote:


> Originally Posted by *ChRoNo16*
> 
> Depending what your final plans are, you can get a copy of Windows Home Server off newegg for around 50 bucks.
> I like version 1.
> Make a nice quad core with like 4-8gigs of ram, not real expensive, and should be perfect for backup, media server, whatever you need


No point in getting 8 gigs of RAM if you're using WHS v1 - it is 32-bit only.
WHS 2011 is 64-bit only.


----------



## Plan9

Quote:


> Originally Posted by *ChRoNo16*
> 
> Depending what your final plans are, you can get a copy of Windows Home Server off newegg for around 50 bucks.
> I like version 1.
> Make a nice quad core with like 4-8gigs of ram, not real expensive, and should be perfect for backup, media server, whatever you need


Why? WHS is (in my opinion at least) one of the most pointless OSs Microsoft have ever released (and they've released a lot of crappy OSs)


----------



## axipher

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ChRoNo16*
> 
> Depending what your final plans are, you can get a copy of Windows Home Server off newegg for around 50 bucks.
> I like version 1.
> Make a nice quad core with like 4-8gigs of ram, not real expensive, and should be perfect for backup, media server, whatever you need
> 
> Why? WHS is (in my opinion at least) one of the most pointless OSs Microsoft have ever released (and they've released a lot of crappy OS's)

WHS 2011 is an amazing product. Sure, a Linux OS can provide all the same features for free, but it requires much more setup. For the average user, WHS gives you everything you need with a relatively simple setup:
- Shared folders with automatic back-up
- Automatic client and server back-ups
- Remote web access to files, with on-the-fly transcoding that works on as little as 50 kB/s of upload speed
- Remote management from your smart phone

Normally you can find a copy for $40 every couple of weeks if you keep an eye out. The only downsides I found with WHS are that customizing the web site it provides isn't that easy, and the remote media player requires Silverlight, which makes it inaccessible to most phones.


----------



## Plan9

Quote:


> Originally Posted by *axipher*
> 
> WHS 2011 is an amazing product. Sure a Linux OS can provide all the same features for free, but requires much more setup. For the average user, WHS gives you everything you need and relatively simple setup.
> - Shared folders with automatic back-up
> - Automatic Client and Server back-ups
> - Remote web access to files and on-the-fly transcoding that works on as little as 50 kB/s upload speeds
> - Remote management from your smart phone
> Normally you can find a copy for $40 every couple weeks if you keep an eye out for it. The only downside I found with WHS is customizing the web site it provides isn't that easy and the *remote media player requires Silverlight which makes it inaccessible to most phones*.


This is exactly why I steer away from Microsoft and Apple products. I'd rather spend an extra hour installing my own NAS and have something flexible than have a "one size fits all" solution that refuses to work with competitors' hardware and software. In the end you just sell yourself short, as you end up with a substandard solution.

* steps off his soap box


----------



## axipher

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *axipher*
> 
> WHS 2011 is an amazing product. Sure a Linux OS can provide all the same features for free, but requires much more setup. For the average user, WHS gives you everything you need and relatively simple setup.
> - Shared folders with automatic back-up
> - Automatic Client and Server back-ups
> - Remote web access to files and on-the-fly transcoding that works on as little as 50 kB/s upload speeds
> - Remote management from your smart phone
> Normally you can find a copy for $40 every couple weeks if you keep an eye out for it. The only downside I found with WHS is customizing the web site it provides isn't that easy and the *remote media player requires Silverlight which makes it inaccessible to most phones*.
> 
> 
> 
> This is exactly why I steer away from Microsoft and Apple products. I'd rather spend an extra hour installing my own NAS and have something flexible than have a "one size fits all" solution that refuses to work with competitors' hardware and software. In the end you just sell yourself short, as you end up with a substandard solution.
> 
> * steps off his soap box

This is why I'm currently looking at on-the-fly web transcoding options for Ubuntu - I haven't found any stellar ones, though, that offer the same experience as WHS's option. I do enjoy the automatic client back-up of all the computers on the network. Being able to flawlessly manage back-ups of over 10 PCs in my house has been a godsend, seeing as most of their owners are students with poor back-up habits.


----------



## Plan9

Quote:


> Originally Posted by *axipher*
> 
> This is why I'm currently looking at on-the-fly web transcoding options for Ubuntu - I haven't found any stellar ones, though, that offer the same experience as WHS's option. I do enjoy the automatic client back-up of all the computers on the network. Being able to flawlessly manage back-ups of over 10 PCs in my house has been a godsend, seeing as most of their owners are students with poor back-up habits.


Audio or video?

I use *mpd* for audio (you can mix and match your front end) and *MediaTomb* for video (only over the LAN though, as it's a uPnP media server)

[edit]

also sorry for being all preachy earlier. I just have a bee in my bonnet about companies that are deliberately uncooperative.
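For anyone who wants to try mpd, the whole setup is a single config file - a minimal sketch (the paths and device name below are illustrative; adjust them to your system):

```
# /etc/mpd.conf - minimal example
music_directory     "/srv/music"
playlist_directory  "/var/lib/mpd/playlists"
db_file             "/var/lib/mpd/database"
state_file          "/var/lib/mpd/state"
bind_to_address     "0.0.0.0"
port                "6600"

audio_output {
    type    "alsa"
    name    "Onboard sound"
}
```

Then point whatever front end you like (ncmpcpp, a phone client, etc.) at port 6600 - that mix-and-match is the whole appeal.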


----------



## axipher

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *axipher*
> 
> This is why I'm currently looking at on-the-fly web transcoding options for Ubuntu - I haven't found any stellar ones, though, that offer the same experience as WHS's option. I do enjoy the automatic client back-up of all the computers on the network. Being able to flawlessly manage back-ups of over 10 PCs in my house has been a godsend, seeing as most of their owners are students with poor back-up habits.
> 
> 
> 
> audio or video?
> 
> I use *mpd* for audio (you can mix and match your front end) and *MediaTomb* for video (only over the LAN though, as it's a uPnP media server)
> 
> [edit]
> 
> also sorry for being all preachy earlier. I just have a bee in my bonnet about companies that are deliberately uncooperative.

Both audio and video, and remote access would be nice - although I don't mind setting up multiple programs if some are better than others at certain things.

I'll look into MPD and MediaTomb tonight. I'm thinking of trying out the XBMC and Boxee OSs on my HTPC to replace the Windows 7 I'm using now. I'm assuming MediaTomb's uPnP service should work fine with either of those options.

And no problem, man, I see where you're coming from in regards to companies not being cooperative. At least Silverlight is a little more open than some things.


----------



## Plan9

Quote:


> Originally Posted by *axipher*
> 
> Both audio and video, and remote access would be nice - although I don't mind setting up multiple programs if some are better than others at certain things.
> I'll look into MPD and MediaTomb tonight.


Shout if you need any help








Quote:


> Originally Posted by *axipher*
> 
> I'm thinking of trying out XBMC and Boxee OS's on my HTPC to replace Windows 7 I'm using now. I'm assuming MediaTomb's uPnP service should work fine with either of those options.


Most definitely. They're both essentially the same anyway, as Boxee is based on XBMC (I thought Boxee had discontinued its PC support, though - surprised to hear there's still a Boxee OS floating about). In fact you can use a uPnP client on your phone: on Android at least, when I click a video to play it gives me the option to play on the phone or play on my XBMC media centre. You can also get XBMC remotes (you need the web interface enabled on XBMC), which are apps that run on your phone or tablet and let you use them as a remote control for your media centre.

Personally, I'd recommend you install XBMCbuntu. It's XBMC's official OS installer and (as you've already guessed) it's based on Ubuntu, so the internals should be familiar to what you're used to, plus it's a stress-free way to get XBMC on your media centre (I think it works as a live CD too). Check their site for more details.
Quote:


> Originally Posted by *axipher*
> 
> And no problem, man, I see where you're coming from in regards to companies not being cooperative. At least Silverlight is a little more open than some things.


This is the problem though: Silverlight isn't more open - it only pretends to be. I can't run Netflix on my system because Netflix depends on DRM extensions in Silverlight that MS never released for Linux. So I either have to run XBMC in Windows and lose some of the other Linux features I use (not all that appealing), or lose Netflix. And what's most frustrating is that Netflix offers unlimited bandwidth and an unlimited number of video views - so the DRM is completely pointless, as you'd just stream from the cloud rather than spend all that extra time pratting about ripping the stream and filling up your own storage. So in the movie industry's attempt to reduce piracy, they've instead made it hard to watch things legitimately, thus making the piracy route more attractive. (And now I'm back on my soap box! Maybe I should start a blog lol)

Seriously though, the IT sector is in a serious mess, and it's not all one-sided with the downloaders being at fault. We good guys get screwed left, right and centre.


----------



## axipher

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *axipher*
> 
> Both audio and video and remote access would be nice. Although I don't mind setting up multiple programs if some are better than other at certain things.
> I'll look into MPD and MediaTomb tonight.
> 
> 
> 
> Shout if you need any help
> 
> Quote:
> 
> 
> 
> Originally Posted by *axipher*
> 
> I'm thinking of trying out XBMC and Boxee OS's on my HTPC to replace Windows 7 I'm using now. I'm assuming MediaTomb's uPnP service should work fine with either of those options.
> 
> 
> Most definitely. They're both essentially the same anyway, as Boxee is based on XBMC (I thought Boxee had discontinued its PC support, though - surprised to hear there's still a Boxee OS floating about). In fact you can use a uPnP client on your phone: on Android at least, when I click a video to play it gives me the option to play on the phone or play on my XBMC media centre. You can also get XBMC remotes (you need the web interface enabled on XBMC), which are apps that run on your phone or tablet and let you use them as a remote control for your media centre.
> 
> Personally, I'd recommend you install XBMCbuntu. It's XBMC's official OS installer and (as you've already guessed) it's based on Ubuntu, so the internals should be familiar to what you're used to, plus it's a stress-free way to get XBMC on your media centre (I think it works as a live CD too). Check their site for more details.
> Quote:
> 
> 
> 
> Originally Posted by *axipher*
> 
> And no problem man, I see where you're coming from in regards to companies not being cooperative. At least Silverlight is a little more open then some things.
> 
> 
> This is the problem though: Silverlight isn't more open - it only pretends to be. I can't run Netflix on my system because Netflix depends on DRM extensions in Silverlight that MS never released for Linux. So I either have to run XBMC in Windows and lose some of the other Linux features I use (not all that appealing), or lose Netflix. And what's most frustrating is that Netflix offers unlimited bandwidth and an unlimited number of video views - so the DRM is completely pointless, as you'd just stream from the cloud rather than spend all that extra time pratting about ripping the stream and filling up your own storage. So in the movie industry's attempt to reduce piracy, they've instead made it hard to watch things legitimately, thus making the piracy route more attractive. (And now I'm back on my soap box! Maybe I should start a blog lol)
> 
> Seriously though, the IT sector is in a serious mess, and it's not all one-sided with the downloaders being at fault. We good guys get screwed left, right and centre.

Thanks for all the great info, man!

I started downloading XBMCbuntu, but now I think I'm reading that you say it won't work with Netflix? Well, that's a bummer... I'll try it out anyway; hopefully it works with the Rosewill MCE remote I already have.


----------



## bobfig

Quote:


> Originally Posted by *axipher*
> 
> Thanks for all the great info, man!
> 
> I started downloading XBMCbuntu, but now I think I'm reading that you say it won't work with Netflix? Well, that's a bummer... I'll try it out anyway; hopefully it works with the Rosewill MCE remote I already have.


The problem with Netflix is Silverlight. There is an open-source implementation for Linux (Moonlight), but the problem is that it doesn't support DRM, and Netflix needs the DRM or it won't work.

http://www.go-mono.com/moonlight/faq.aspx


----------



## Plan9

Quote:


> Originally Posted by *bobfig*
> 
> The problem with Netflix is Silverlight. There is an open-source implementation for Linux (Moonlight), but the problem is that it doesn't support DRM, and Netflix needs the DRM or it won't work.
> http://www.go-mono.com/moonlight/faq.aspx


There are Netflix plugins for the Boxee Box (which is Linux-based and uses XBMC source) and the Chromebook (Linux again), as well as various non-x86 devices such as Android handsets and games consoles (none of which would have Silverlight libraries). So I really cannot see the big deal with them releasing a DRM-free player for the more traditional Linux installs as well.

However, there are rumours that Netflix is working on a Chrome plugin - though it's not due until mid to late this year


----------



## Pentium-David

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Before I spend some real money to build a $500 file server, I am using some old hardware to see if it benefits me in any way.
> Intel Pentium D 805 @ 2.66GHz
> 1GB + 512MB DDR1
> ASUS P5S800-VM
> Intel stock cooler or Xigmatek HDT-S1283
> ATi Radeon X1650 PRO 512MB AGP
> Generic 460W PSU
> CM 690 (heavily modified)
> 1 x 20GB SATA HDD
> 1 x 120GB IDE HDD
> 1 x 200GB IDE HDD
> Not sure what OS to get for this; whichever I pick, I plan to use it on the real server I build. I know I can't run Linux because I have two programs that require Windows to run.


Nice server!!! I don't like the sound of a generic power supply, though - what brand is it?
Quote:


> Originally Posted by *Aestylis*
> 
> Finally getting around to posting some pictures, not the greatest though.
> 
> 
> 
> 
> 
> 
> 
> 
> Home ESXI server I built from free/near free parts. See system in Sig.
> OS: ESXI 5
> Case: NZXT Whisper
> CPU: 2x Intel XEON L5320 (quad-core) BSEL Modded to 2.33ghz
> Motherboard: Intel S5000PSL
> Memory: 20gb ddr-2 667 FBDIMM's (the picture was before an upgrade to the memory)
> Cooling: Dual Coolermaster Hyper 101's on the CPU's with Thermaltake LGA771 brackets. 6 total 120mm fans, several others for FBDIMMS and positive airflow out the rear.
> PSU: 550w Diablotek PSU (cheap but works great)
> OS HDD: 4gb USB flash drive
> Storage HDD(s): 8x 250GB WD RE drives with one spare. 750gb WD as a backup drive.
> Server Manufacturer: (Ex: Dell, HP, You?) Me.
> What you use it for:
> (Minecraft server, web server, file server, media server, test environment for certifications, etc.)
> Temps, loudness, etc.
> Extremely quiet. Haven't gotten the temps out yet.


Duuuuuude, NOOOOO. Get that Diablotek out of there ASAP, man!!!! That thing is junk. Not trying to be mean, but get that trash out of there before it takes something out! I'm surprised the 5V rail doesn't go pop on boot-up.


----------



## bigkahuna360

Well its still a WIP but here it is.

OS: Windows Server 2008 (Not purchased yet)
Case: Coolermaster CM 690 II Advanced with a Mass Effect Color Scheme
CPU: Intel i7 960 @ 4GHz
Motherboard: Intel DX58SO
GPU: ATI HD5450
Cooling: H100 (Not purchased yet)
Memory: Corsair 6GB (3x2GB) DDR3 1333
PSU: Silverstone Strider (Not purchased yet)
OS HDD: Hitachi 500GB





I do have all the accessories painted as well; they're just not placed in yet.


----------



## frank anderson

I'd add to this - it's not a real server like a blade or a Dell or HP flavor, but for what I use it for at home on my 300Mbit fiber, it's enough.

*OS:* Windows Server 2003 Enterprise SP2 - I'll upgrade to 2008 eventually, once I'm in the mood.
*Case:* Xigmatek Asgard Pro
*CPU:* 2500K
*Motherboard:* Gigabyte Z68 UD3
*Cooling:* Corsair H70
*Memory:* 16GB Corsair Vengeance
*PSU:* Corsair AX750
*OS HDD:* Crucial M4
*Storage HDD(s):* 4x WD Black 2 TB Raid 5 on LSI Megaraid 9265
*Server Manufacturer:* All me, baby!!

*What you use it for (Print server, backups, file server, etc.)*
Hmm, there's a ton of stuff installed on it, but let's leave it at the core functions... syslog for my VPN firewall, file server / media server, SQL Server 2005, IIS for web serving, VM Workstation, P2P...
*Temps, loudness, etc.*
The system is pretty loud because of the Icy Dock. I originally thought the dock would keep my hard drives cool and quiet while giving me the option of "quick removal" - I was wrong. The hard disks hang at 60°C if the AC is off and the fan is set on low; I have the fan set on medium now and they stay in the low 40s with or without AC.

The LSI MegaRAID hits almost 100°C; 60°C with a side-window fan pointing at it.

CPU- and board-wise, it stays at a comfortable low 30s, 40s under load.
*Any additional software that you use*
I used to host game servers on it, but it has been pretty much idle since then; now it's more for file serving, backups, my blog hosting, and porn... lol









Occasionally I do some DVD ripping and encoding, and various testing in VM environments.
*Pics*

Not the best quality, but here you go..










WD Caviar Black 2TB / Raid 5 / 5.45TB Usable









Icy Dock 4 in 3 SATA Backplane









I had an old H70 laying around, and since the 2500K has a built-in GPU which I will be using, this is perfect.


----------



## tiro_uspsss

Quote:


> Originally Posted by *frank anderson*
> 
> *snip*


how'd you get the LSI to play nice with the GB mobo??


----------



## Pentium-David

That's an amazing server, but I still don't get why people think they need a quad core and 16GB of RAM for a file/backup server. I have Pentium IIIs and Pentium 4s that do that, and they are still overkill for such simple tasks.


----------



## ZFedora

Quote:


> Originally Posted by *Pentium-David*
> 
> That's an amazing server but I still don't get why people think they need a quad core and 16GB RAM for a file/backup server. I have Pentium 3's and Pentium 4's that do that and they are still overkill for such simple tasks.


Exactly - a 16GB, quad-core, 750W file server? You could do the same thing with an ARM processor and use probably 30 watts max. Such a waste of resources.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Pentium-David*
> 
> That's an amazing server but I still don't get why people think they need a quad core and 16GB RAM for a file/backup server. I have Pentium 3's and Pentium 4's that do that and they are still overkill for such simple tasks.


Actually, your P4 probably sucks as much power (I'm referring specifically to the CPU). And RAM? It's so cheap, why not?


----------



## blupupher

Quote:


> Originally Posted by *Pentium-David*
> 
> That's an amazing server but I still don't get why people think they need a quad core and 16GB RAM for a file/backup server. I have Pentium 3's and Pentium 4's that do that and they are still overkill for such simple tasks.


Quote:


> Originally Posted by *ZFedora*
> 
> Exactly, a 16GB, quad core, 750W file server? You could do the same thing with an Arm processor and use probably about 30 watts max. Such a waste of resources..


Did you read the post? Yes, it is overkill for a simple file server, but:
Quote:


> Originally Posted by *frank anderson*
> 
> ...
> *What you use it for (Print server, backups, file server, etc.)*
> hmm there's a ton of stuff installed on it, but lets leave it at the core functions... Syslog for my VPN firewall, file server / Media server, SQL server 2005, IIS for web serving, *VM Workstation*, P2P..
> *Temps, loudness, etc.*
> The system is pretty loud because of the Icy Dock, I originally thought the dock was going to keep my hard drives cool and quiet while giving me the option of "quick removal", I was wrong... The hard disk hangs at 60c if the AC is off and fan is set on low, I have the fan set on medium now and it stays in a low 40's with or without AC.
> The LSI Megaraid hits almost 100c, 60c with a side window fan pointing at it.
> CPU and board wise, stays at a comfortable low 30s, 40s if under load.
> *Any additional software that you use*
> I use to host game servers on it, but since then has been pretty much idle, now it's more for file serve, backup, my blog hosting, and porn... lol
> 
> 
> 
> 
> 
> 
> 
> 
> Occasionally do some *dvd ripping and encoding, various testing on VM environments*.
> ...


----------



## Oedipus

Hey, hardware nazis, my sig is a file server. I propose that y'all deal with it and lay off the man.


----------



## frank anderson

Quote:


> Originally Posted by *tiro_uspsss*
> 
> how'd you get the LSI to play nice with the GB mobo??


I never had a problem with this LSI + Gigabyte board combo from the beginning, I just plugged it in, and it worked.. The only problem I have is the LSI card gets as hot as my GTX580...


----------



## tiro_uspsss

Quote:


> Originally Posted by *frank anderson*
> 
> I never had a problem with this LSI + Gigabyte board combo from the beginning, I just plugged it in, and it worked.. The only problem I have is the LSI card gets as hot as my GTX580...


darn it!







I have no luck what-so-ever getting my 9240-8i working with my GB X58-UD7


----------



## frank anderson

Quote:


> Originally Posted by *tiro_uspsss*
> 
> darn it!
> 
> 
> 
> 
> 
> 
> 
> I have no luck what-so-ever getting my 9240-8i working with my GB X58-UD7


That is strange, are you trying to use that as a boot drive? My LSI is only doing data handling, OS is installed on a Crucial M4 via the Intel chipset, maybe that's why I am not seeing any issues...

You can try the forums over at http://www.storagereview.com/, I know a lot of SAN Storage professionals that hang out there and they are very knowledgeable in this area.


----------



## tiro_uspsss

Quote:


> Originally Posted by *frank anderson*
> 
> That is strange, are you trying to use that as a boot drive? My LSI is only doing data handling, OS is installed on a Crucial M4 via the Intel chipset, maybe that's why I am not seeing any issues...
> You can try the forums over at http://www.storagereview.com/, I know a lot of SAN Storage professionals that hang out there and they are very knowledgeable in this area.


nope









http://www.xtremesystems.org/forums/showthread.php?278016-recommend-a-RAID-card-compat.-with-X58-Gigabyte-mobo..&highlight=

there is a run-down on virtually everything I have tried.. I have tried a few other BIOS settings since that thread - never ever have gotten it to work


----------



## 1rkrage

Here's my new server running WHS 2011













Spoiler: Specs



AMD FX-4100
Asus M5A78L-M LX Plus 760G MB
8GB Samsung low voltage
Fractal Design Define XL

2x 500gb Hitachi Travelstar 7200rpm
1x 1TB Samsung F3
1x 2TB Seagate Barracuda Green
1x 1TB USB WD MyBook





Spoiler: Why I Love Micro Center...







I'll get a PERC 6 in the future and hopefully fill up those bays


----------



## Pentium-David

I wasn't trying to be mean to him or lay into him. I was just thinking, why not save money and buy something cheaper or use something laying around, you know? Wasn't trying to start anything...
Quote:


> Originally Posted by *tiro_uspsss*
> 
> actually.. your P4 probably sucks as much power (I'm referring specifically to the CPU)
> 
> 
> 
> 
> 
> 
> 
> & ram? it's so cheap, why not?


Wasn't talking about power usage, more about capability. And my server has a 65W Cedar Mill P4, it's actually pretty efficient


----------



## ndoggfromhell

My Home Server after transplant. Got the case local for $40
Intel Core2Duo 2ghz
Gigabyte Ep35-DS3R
(4) Seagate 2.5in SATA 500Gb in lower bay (1.5Tb Array)
(4) Seagate 3.5in SATA 500Gb in upper bay (1.5Tb Array)
(4) Seagate 3.5in SATA 1Tb in internal bay (3Tb Array)
(1) Western Digital 2.5 SATA 250Gb above internal drive bays on left side
4Gb DDR2-800
(3) Hardware RAID Adapters
Windows Home Server 2011 w/ Drive Bender software to make all the arrays seen as 1 drive.
BluRay Burner

It's about half full now. Mostly TV eps, some Blu-ray rips, and about 80GB of music.
In the other half of the case I've got the firewall installed. It's a Zotac AM2 board: 2GB memory, 120GB 2.5in SATA drive, dual NICs on a PCI-Express card.
Firewall runs Astaro home version. It does an awesome job and has kept a few visitors from getting spyware/viruses due to poor surfing habits.


----------



## afropelican

That is one HUUGGGE Case. It would look so much better if it was black on outside and inside and had blue leds.


----------



## ndoggfromhell

Quote:


> Originally Posted by *afropelican*
> 
> That is one HUUGGGE Case. It would look so much better if it was black on outside and inside and had blue leds.


God no! it's in my bedroom and I need complete darkness to sleep. I think I've removed every LED fan from every case i've ever owned. LED = Rice anyways. As for the color... black would look nicer, but I'm not painting it. I should also note, it's not that big of a case. It's tall... but shallow. Dimensions are 330 X 360 X 720mm (W x D x H)


----------



## ramicio

Yeah, indicator lights are rice


----------



## bobfig

Quote:


> Originally Posted by *ramicio*
> 
> Yeah, indicator lights are rice


lulz, i don't have lights in my server because i didn't want my dad to know that it's running all the time. got away with it for a couple months till he figured out my little plan. now he doesn't care.


----------



## tiro_uspsss

Quote:


> Originally Posted by *bobfig*
> 
> lulz, i don't have lights in my server because i didn't want my dad to know that it's running all the time. got away with it for a couple months till he figured out my little plan. now he doesn't care.


LOL nice!


----------



## Norse

Quote:


> Originally Posted by *bobfig*
> 
> lulz, i don't have lights in my server because i didn't want my dad to know that it's running all the time. got away with it for a couple months till he figured out my little plan. now he doesn't care.


thats genius







i just made sure my two servers were really quiet


----------



## blooder11181

OS: Windows XP SP3

Case: white OEM ATX

CPU: Xeon 2.8GHz HT, 2MB cache, 800MHz FSB, Socket 604

Motherboard: Asus NCT-D (dual Socket 604)

Cooler: EM dual-fan cooler

Memory: 2x 1GB DDR2-400 ECC

GPU: Nvidia Quadro NVS 290 256MB PCI-Express x16

PSU: 600W (real 335W)

HDD: Hitachi 80GB ATA-100 IDE

From a trade (upgrades to come).
For playing games!!!!!!!!!!!!!


----------



## ramicio

Old people think anything that runs 24/7 is automatically inefficient.


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Old people think anything that runs 24/7 is automatically inefficient.


They would be kind of right though. We don't _need_ to have these servers running 24/7. In fact, most of the time most of these servers sit idle. However, we _prefer_ to have them on 24/7 as it makes our lives easier (scheduling / automated routines, not having to turn the server on / shut it down, etc). We live in a (largely) peaceful developed world and thus have the luxury to afford such things, and we should remember that they are just luxuries. We shouldn't take these things for granted, as the majority of the world's population cannot afford personal computers nor broadband internet, let alone home servers.


----------



## Norse

Quote:


> Originally Posted by *ramicio*
> 
> Old people think anything that runs 24/7 is automatically inefficient.


Old equipment also drinks power compared to new equipment that'll do the same for a fifth the power usage


----------



## Pentium-David

Quote:


> Originally Posted by *Norse*
> 
> Old equipment also drinks power compared to new equipment that'll do the same for a fifth the power usage


But REALLY old equipment doesn't use much. My Pentium 3 seedbox only needs 33 Watts


----------



## Norse

Quote:


> Originally Posted by *Pentium-David*
> 
> But REALLY old equipment doesn't use much. My Pentium 3 seedbox only needs 33 Watts


true but P4 era stuff does like the juice


----------



## axipher

Quote:


> Originally Posted by *Norse*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Pentium-David*
> 
> But REALLY old equipment doesn't use much. My Pentium 3 seedbox only needs 33 Watts
> 
> 
> 
> true but P4 era stuff does like the juice

So does my Bulldozer server... Idle isn't too bad, but when she gets loaded up, power jumps up super high.


----------



## pvt.joker

Quote:


> Originally Posted by *Pentium-David*
> 
> But REALLY old equipment doesn't use much. My Pentium 3 seedbox only needs 33 Watts


that's why i love my atom based seedbox.. max 40w usage, and in a nice 2U chassis..









my file server is a lil more power hungry sadly.. but it still runs soo much cooler than my last server (dual xeon 2.8's, 6x scsi320 15krpm drives in hotswap)


----------



## Pentium-David

Yeah, Atoms are pretty darn efficient. My file server has a Pentium 4 3.6GHz, but it's a 65W Cedar Mill core; the computer idles around 65W, which isn't too bad at all. Plus, depending on your power supply, something a little more power hungry can be good for it, because power supplies like to run at at least ~15% load.
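That 15% figure is just a rule of thumb, but it's easy to sanity-check (a quick sketch; the function name and the 15% guideline are my own illustration, not a PSU spec):

```python
def min_recommended_load(psu_watts, min_load_fraction=0.15):
    """Rough minimum load (W) a PSU should see, per the ~15% rule of thumb."""
    return psu_watts * min_load_fraction

# A 750W unit "wants" roughly 112W on it at all times, so a 33W
# Pentium 3 / Atom box pairs better with a much smaller supply.
print(f"{min_recommended_load(750):.1f}")  # 112.5
print(f"{min_recommended_load(300):.1f}")  # 45.0
```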


----------



## blooder11181

I need some help on my first Xeon rig.
http://www.overclock.net/t/1252326/help-on-xeon-and-psu


----------



## Imrac

I am excited for the 3770T processor. Quad core, 8 thread 45watt TDP, makes me smile.


----------



## ndoggfromhell

Quote:


> Originally Posted by *ramicio*
> 
> Yeah, indicator lights are rice


Fan LEDs are rice. I'm sorry, but let's admit that they serve no performance benefit whatsoever... in fact, LEDs can get warm, so they are technically adding (albeit small amounts of) heat to the system. I still have the drive LEDs and power LED hooked up. Those are necessary, but I can't imagine how blinding that case would be with ten 120mm fans lit up.


----------



## ramicio

The only fan I've ever had with LEDs was the awful stock i7-970 cooler. I can't even stand that they are using ultra-bright LEDs on motherboards as status indicators. I do love bright power and HDD LEDs, though. I liked the older NICs and switches that actually flashed really fast. Now they mostly just blink slowly. I don't like that the Norco case I have has very dim HDD activity LEDs. The green LEDs that indicate that there is a drive in the bay are all ultra-bright water-clear LEDs, while the drive activity LEDs are some dual-color jawns with a milky-white lens and very dim diodes. A light pipe directs both of their outputs through one hole in the front. Even in a dark room you have to get right up in its grill to see drive activity.


----------



## joshd

Bump. More excellent servers please


----------



## tiro_uspsss

I might repost mine... I've been having power issues (drives dropping from hotswap, etc.) - it seems I have it stable now - I added a 2nd PSU, so that's a Gigabyte Odin Pro 1200W + a Hiper 580W


----------



## parityboy

Chances are the 1200W is faulty, if the 580W is the one you added.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *ramicio*
> 
> The only fan I've ever had with LEDs was the awful stock i7-970 cooler. I can't even stand that they are using ultra-bright LEDs on motherboards as status indicators. I do love bright power and HDD LEDs, though. I liked the older NICs and switches that actually flashed really fast. Now they mostly just blink slowly. I don't like that the Norco case I have has very dim HDD activity LEDs. The green LEDs that indicate that there is a drive in the bay are all ultra-bright water-clear LEDs, while the drive activity LEDs are some dual-color jawns with a milky-white lens and very dim diodes. A light pipe directs both of their outputs through one hole in the front. Even in a dark room you have to get right up in its grill to see drive activity.


which Norco case do you have? I have the 4020 first gen, and even though some of the hdd leds are not fully functioning, the rest of them are pretty clear and easy to see..green for drive indicator and blue during activity.


----------



## Imrac

those norco cases turn me on. Nom Nom Nom


----------



## ramicio

Quote:


> Originally Posted by *ALpHaMoNk*
> 
> which Norco case do you have? I have the 4020 first gen, and even though some of the hdd leds are not fully functioning, the rest of them are pretty clear and easy to see..green for drive indicator and blue during activity.


I just got mine a few months ago. Both LEDs shine through a single light pipe. Even so, the activity LEDs are blue and some other color, and they are low-current, low brightness LEDs with a smoky white diffused lens. They have to fight the green LEDs through a single pipe. I'll be getting rid of it sooner than later for a Supermicro chassis and board. I don't like running 24/7 with consumer hardware.


----------



## ALpHaMoNk

Which model case do you have?
I know what you mean about the light pipe; that's pretty much how mine is as well.
I run a Supermicro mobo and Xeon processor. I like the Supermicro cases, but they just don't fit my budget. I have been running my Norco for about 4 years now. Other than the couple of failed LEDs and no SFF-8087 on the backplane (wasn't available at the time), it has served me well. Runs 24/7. The only consumer grade parts that you would really have to worry about are the hard drives, since they tend to have a higher failure rate than older, smaller drives.
If you plan on getting rid of your Norco, let me know what price and model you have.


----------



## ramicio

It's in my signature, I have the 3216. The difference between consumer and enterprise hard drives is the same story as other hardware: features. They aren't using NASA tech. There is nothing in them that makes them more reliable to run 24/7 over consumer drives. I'm fine with a consumer drive running 24/7. What I don't like for 24/7 use are CPUs and RAM. There's a reason why a computer that's been on forever runs better after a restart. Problems of not having ECC evince themselves when doing heavy work for long periods of time. In my instance, that is video encoding, and it's a pain in the ass to have to re-encode a file that takes between 24 and 48 hours per movie just because one bit got flipped and ruined an entire frame.

I think the Supermicro cases are an excellent deal. You have to shell out more at once, so they aren't for builds where you buy pieces here and there. They come with redundant PSUs, so there's a big chunk of change over what you get with the Norcos. They come with rack rails, Norcos don't, and their rails suck. All of the fans are 4-pin PWM and paired with a Supermicro board means you don't have a constantly-loud server. Their cooling is ducted and made to run passive heat sinks. Their backplanes actually have features to them, and aren't just some passive electronics. You get more bang for the buck if you can afford to shell out that much at once. I only went with the Norco because I needed it now, and I didn't even want to deal with trying to adapt the innards of a Supermicro case to deal with consumer hardware.


----------



## ALpHaMoNk

Oh yes, OK, I should have checked; you have the 16-bay case. Nice build as well. No NASA tech in the hard drives, but you will get a longer warranty and they are built to spin 24/7. I run Hitachi consumer drives. In the end it is really a matter of what will suit your needs and what roles your server plays.

I once thought about getting a Supermicro case but then realized that the benefits were not worth the cost for me in a home environment. A redundant PSU is great, but if my PSU were to really die on me, I wouldn't suffer from a day or two of downtime to replace it; it is not the backbone of a home business or anything. The cooling will make the server much louder (which doesn't bother me as much as others); the fans really spin on those cases and do provide great cooling. The backplanes are indeed way better with nice LEDs, but as far as functions, which functions are you looking for? Believe me, I love their cases, but on cost versus needs I just find them too expensive for what most of us would use them for in our homes.
If I had to go with an enterprise grade chassis I would most likely pick this one up


----------



## notyettoday

I haven't taken pictures but the specs are as follows:

XP X64
~2001 Antec SOHO Full Tower
Phenom x4 9650
Asus M3N78-Pro
Rocketfish Best Buy Tower HSF
4x1gb DDR2 800
OS Drive 80gb WD800JD
Storage 2x1tb Seagate 7200rpm Raid 1

I use it for file sharing to my HTPC and Folding of course


----------



## ramicio

Quote:


> Originally Posted by *ALpHaMoNk*
> 
> Oh yes, OK, I should have checked; you have the 16-bay case. Nice build as well. No NASA tech in the hard drives, but you will get a longer warranty and they are built to spin 24/7. I run Hitachi consumer drives. In the end it is really a matter of what will suit your needs and what roles your server plays.
> I once thought about getting a Supermicro case but then realized that the benefits were not worth the cost for me in a home environment. A redundant PSU is great, but if my PSU were to really die on me, I wouldn't suffer from a day or two of downtime to replace it; it is not the backbone of a home business or anything. The cooling will make the server much louder (which doesn't bother me as much as others); the fans really spin on those cases and do provide great cooling. The backplanes are indeed way better with nice LEDs, but as far as functions, which functions are you looking for? Believe me, I love their cases, but on cost versus needs I just find them too expensive for what most of us would use them for in our homes.
> If I had to go with an enterprise grade chassis I would most likely pick this one up


I still disagree about the drives. There's an extreme burden of proof that no one has been able to meet showing that the drives are physically even built differently.

As far as the backplane features, I'm referring to SGPIO and SES-2 type stuff. Indication of failed drives with LEDs, instead of needing to write up labels. Temperature sensing is good.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *ramicio*
> 
> I still disagree about the drives. There's an extreme burden of proof that noone's been able to show that the drives are physically even built differently.
> As far as the backplane features, I'm referring to SGPIO and SES-2 type stuff. Indication of failed drives with LEDs, instead of needing to write up labels. Temperature sensing is good.


I know it is not easy to swallow about the drives, but I am sure they are different; I can't see companies sticking with them if they weren't for real. Also, SGPIO and SES-2 type stuff will only work if SAS drives are present (forgive me if I am wrong), but I do believe that is the case... now we are back to enterprise drives. Temp sensing should also be present through the RAID controller.


----------



## ramicio

Drive temp is a SMART function, but backplane temperature is what I was referring to. I would like to go with SAS...full duplex. WD got rid of their TLER tool because people were buying regular drives, using this tool, and using the drives with RAID cards instead of buying into their enterprise drive scheme. I'm confident that there is nothing different about the drives that make them last longer. If you pay more for a drive, you get a better warranty. It's not that they are backing the product more for free. You pay for it.


----------



## Murlocke

I'll be upgrading my 52TB server soon with faster parts and more bandwidth for parity syncs. Currently getting about 65MB/s during them, and this should allow me to get about 105MB/s, with the drives being the limiting factor.
http://www.overclock.net/t/987494/52tb-unraid-server/0_50

Going to be grabbing these:
Motherboard:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235

Processor:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115065

RAM:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231440

SAS Card (x3):
http://www.provantage.com/supermicro-aoc-sas2lp-mv8~7SUP92PM.htm

SAS Cable (x6):
http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8186&seq=1&format=2

6 SAS cables. 24 drives FTW.
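The cable and drive count works out; here's a quick sketch of the math (assuming each AOC-SAS2LP-MV8 exposes two internal SFF-8087 ports and each forward-breakout cable fans out to four drives), plus rough parity-sync times for a 2TB parity disk at the quoted rates:

```python
cards = 3
ports_per_card = 2            # SFF-8087 connectors per AOC-SAS2LP-MV8
drives_per_cable = 4          # one breakout cable fans out to 4 drives

cables = cards * ports_per_card       # 6 cables
drives = cables * drives_per_cable    # 24 drives

# Time to walk one 2TB parity disk at the old vs. expected rate:
parity_bytes = 2e12
hours_at_65 = parity_bytes / 65e6 / 3600     # ~8.5 hours
hours_at_105 = parity_bytes / 105e6 / 3600   # ~5.3 hours
print(cables, drives, round(hours_at_65, 1), round(hours_at_105, 1))
```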


----------



## ramicio

You might want to check your choice of RAM. Will it even work? I'd just get ECC and call it a day. It's only 4 GB you're getting. It will only add ~$20 to the build, and peace of mind that your parity is being calculated correctly.


----------



## Imrac

Yeah I don't believe that ram will work. The details of the newegg page state:
Supports up to 32 GB DDR3 *ECC Registered memory* (RDIMM) in 6 DIMM sockets Supports up to 16 GB DDR3 *ECC Un-Buffered memory* (UDIMM) in 4 DIMM sockets

And according to the user manual page 2-10 & 2-11:

Code:

X8SIA/X8SIA-F DIMM support (Intel® Xeon® Series Processors)

Non-ECC UDIMM Only               Not Supported
ECC UDIMM Only                   Supported (see Table 1)
RDIMM Only (with ECC)            Supported (see Table 2)
Mixed ECC with non-ECC           Not Supported
Mixed UDIMM/RDIMM                Not Supported
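In code form, the manual's rules boil down to something like this (just a transcription of the table above for the X8SIA/X8SIA-F with Xeon-series CPUs; the dict name is my own):

```python
# DIMM support matrix for the Supermicro X8SIA/X8SIA-F (Xeon-series CPUs),
# transcribed from the manual excerpt above.
X8SIA_DIMM_SUPPORT = {
    "non-ECC UDIMM only": False,
    "ECC UDIMM only": True,
    "RDIMM only (with ECC)": True,
    "mixed ECC with non-ECC": False,
    "mixed UDIMM/RDIMM": False,
}

# Plain desktop (non-ECC) sticks are the one common case that's ruled out.
print(X8SIA_DIMM_SUPPORT["non-ECC UDIMM only"])  # False
```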


----------



## Murlocke

Quote:


> Originally Posted by *Imrac*
> 
> Yeah I don't believe that ram will work. The details of the newegg page state:
> Supports up to 32 GB DDR3 *ECC Registered memory* (RDIMM) in 6 DIMM sockets Supports up to 16 GB DDR3 *ECC Un-Buffered memory* (UDIMM) in 4 DIMM sockets
> And according to the user manual page 2-10 & 2-11:
> 
> Code:
> 
> X8SIA/X8SIA-F DIMM support (Intel® Xeon® Series Processors)
> Non-ECC UDIMM Only               Not Supported
> ECC UDIMM Only                   Supported (see Table 1)
> RDIMM Only (with ECC)            Supported (see Table 2)
> Mixed ECC with non-ECC           Not Supported
> Mixed UDIMM/RDIMM                Not Supported


It will work; that table is only for Xeon processors. I have a Supermicro board currently that says the same thing, and I have non-ECC RAM in it. The board only supports ECC RAM with a Xeon processor from what I read too, and Xeons are much more expensive than an i3.

It says "Supports up to 16 GB DDR3 ECC Un-Buffered memory (UDIMM) in 4 DIMM sockets". That's exactly what non-ECC RAM is, unless I'm mistaken.








Quote:


> Originally Posted by *ramicio*
> 
> You might want to check your choice of RAM. Will it even work? I'd just get ECC and call it a day. It's only 4 GB you're getting. It will only add ~$20 to the build, and peace of mind that your parity is being calculated correctly.


I see 8GB. 8GB (2 x 4GB)


----------



## Imrac

Actually, there is a difference between ECC un-buffered RAM and non-ECC RAM. Here is an article that shows the difference between the U and R counterparts of ECC: http://www.servethehome.com/unbuffered-registered-ecc-memory-difference-ecc-udimms-rdimms/

I could see how using non-ECC RAM may be supported, as the memory controller is on the processor itself. I would be a little wary myself though.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *ramicio*
> 
> Drive temp is a SMART function, but backplane temperature is what I was referring to. I would like to go with SAS...full duplex. WD got rid of their TLER tool because people were buying regular drives, using this tool, and using the drives with RAID cards instead of buying into their enterprise drive scheme. I'm confident that there is nothing different about the drives that make them last longer. If you pay more for a drive, you get a better warranty. It's not that they are backing the product more for free. You pay for it.


Gotcha! Normally if the drive temps are OK, the backplanes should be OK as well... but I get what you were referring to. It sure was crap that WD removed the ability to alter TLER, which made me switch over to Hitachi drives, and now they've acquired Hitachi. My first setup was 8x 1TB WD (FALS) drives, and when I had to RMA one drive, it was locked out of the tool. For home servers, consumer grade drives work well for our setups.

Quote:


> Originally Posted by *Murlocke*
> 
> I'll be upgrading my 52TB server soon with faster parts and more bandwidth for parity syncs. Currently getting about 65MB/s during them, and this should allow me to get about 105MB/s, with the drives being the limiting factor. http://www.overclock.net/t/987494/52tb-unraid-server/0_50
> Going to be grabbing these:
> Motherboard:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235
> Processor:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819115065
> RAM:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820231440
> SAS Card (x3):
> http://www.provantage.com/supermicro-aoc-sas2lp-mv8~7SUP92PM.htm
> SAS Cable (x6):
> http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8186&seq=1&format=2
> 6 SAS cables. 24 drives FTW.


nice unraid build ...really nice work on the cable management!


----------



## Irisservice

In the Works
So wanted to update my server.
Server plays many roles.

ftp server
file server
backup server
Media streaming
media converting
DLT backups

So this is what i came up with..

ASUS P8B WS LGA 1155 Intel C206 ATX Intel Xeon E3 Server/Workstation Motherboard
Intel Xeon E3-1245 Sandy Bridge 3.3GHz LGA 1155 95W Quad-Core Server Processor
Kingston 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) ECC Unbuffered Server Memory
Corsair Force Series GT CSSD-F90GBGT-BK 2.5" 90GB SATA III
COOLER MASTER Hyper 212 EVO
ASUS DRW-24B1ST/BLK/B/AS Black SATA 24X DVD Burner
COUGAR CF-V12HP Vortex Hydro-Dynamic-Bearing 3x
Intel SC5650WS Case
SeaSonic X Series X650 Gold Power Supply

Looking at this for storage.
Anyone have thoughts on the Adaptec RAID 6805?
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103220

looking to run 6x Seagate Barracuda ST3000DM001 3TB 7200 RPM 64MB
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148844

Running in raid 6...

Full Front Shot


Full rear Shot


Inside


Raid


CD\DVD Burner- LTO 4 Tape and SSD Vault


What she is replacing on rack.


Stock photo of rack kit


----------



## Murlocke

Quote:


> Originally Posted by *Imrac*
> 
> Actually there is a difference between ECC un-buffered ram and non-ecc ram. Here is an article that shows the difference between the U and R counterparts of ECC http://www.servethehome.com/unbuffered-registered-ecc-memory-difference-ecc-udimms-rdimms/
> I could see how using non-ecc ram may be supported as the memory controller is on the processor itself. I would be a little weary myself though.


You are right. To use an i3 on that board I need ECC Unbuffered.

Thanks for the catch, I will switch my RAM.


----------



## frank anderson

Quote:


> Originally Posted by *Irisservice*
> 
> snip..


wow, nice server case, may I ask what make and model is the case? The one with the 6 inbuilt drive trays. Thx~


----------



## anthony92

It's a work in progress but here she is.

Roles so far:
- NFS
- PlexMediaServer (Working on getting xbmc ~wip due to no htpc)
- utorrent
- SABnzbd
- Sickbeard
- CouchPotato

Specs:
- OS: W2k8 (would have used ESXi but the mobo wasn't supported and I was too lazy to replace it; all is good though)
- CPU: Q6600
- OS drive: WD Black 1TB
- RAID controller: Areca 1280ML
- RAID drives (RAID 5): 3x 2TB WD Green (this will continue to grow when I start ripping my media)
- Case: Antec 900 v2
- Hot swap bays: Norco 5-in-3
- PSU: GS650


----------



## Irisservice

Quote:


> Originally Posted by *frank anderson*
> 
> wow, nice server case, may I ask what make and model is the case? The one with the 6 inbuilt drive trays. Thx~


Intel chassis SC5650WS... it came with a 1000W PSU but I swapped it for a more efficient unit.
http://ark.intel.com/products/48258/Intel-Workstation-Chassis-SC5650WS


----------



## Plan9

*My home server has changed a little since my last post:*

*My file server:*
* OS: FreeBSD 8.1
* CPU: AMD64 Phenom(tm) II X3 720 Processor (2812.55-MHz K8-class CPU)
* RAM: 8GB DDR3 (4x 2GB sticks)
* HDD: 1x 80GB (IIRC) HDD (boot disk) - UFS
* HDD: 6x 1TB HDDs - formatted into one ZFS raidz pool

On that I'm also running a few VMs:

*Web server (virtual machine):*
I have another couple of dedicated servers for live sites, this is more for personal use (eg server management pages)
* OS: CentOS 5.something
* RAM: 250MB
* HDD: 50GB

*SSH sandbox (virtual machine):*
A bit OTT security, but it means that nobody has direct SSH access into my home network.
* OS: FreeBSD 8.1
* RAM: 128MB
* HDD: 2GB

*Automated Data IO services (virtual machine):*
This is essentially for anything internet related that can be left to run on its own: stuff like downloading podcasts, or checking iPlayer for specific keywords and then notifying me (saves me having to constantly pop on there in case a documentary I might like gets aired), etc.
* OS: ArchLinux
* RAM: 2GB
* HDD: 20GB

Media server (virtual machine):
Mostly just for Subsonic and MediaTomb (though I might ditch it entirely, as I've spent days on it and still can't get transcoding to work properly). I plan to test it for ripping my DVD collection too, but I'm not expecting great things.
* OS: ArchLinux
* RAM: 2GB
* HDD: 20GB

I also have 3 dedicated boxes in various data centres for e-mail, web, IRC daemons and so on


----------



## tiro_uspsss

Quote:


> Originally Posted by *anthony92*
> 
> - OS W2k8 (would have used esxi but mobo wasn't supported and too lazy to replace, all is good though)


why not use Windows Server 08 *R2*?


----------



## joshd

Quote:


> Originally Posted by *Plan9*
> 
> *My home server has changed a little since my last post:*
> *My file server:*
> * OS: FreeBSD 8.1
> * CPU: AMD64 Phenom(tm) II X3 720 Processor (2812.55-MHz K8-class CPU)
> * RAM: 8GB DDR3 (4x 2GB sticks)
> * HDD: 1x 80GB (IIRC) HDD (boot disk) - UFS
> * HDD: 6x 1TB HDDs - formatted into one ZFS raidz pool
> On that I'm also running a few VMs:
> *Web server (virtual machine):*
> I have another couple of dedicated servers for live sites, this is more for personal use (eg server management pages)
> * OS: CentOS 5.something
> * RAM: 250MB
> * HDD: 50GB
> *SSH sandbox (virtual machine):*
> A bit OTT security, but it means that nobody has direct SSH access into my home network.
> * OS: FreeBSD 8.1
> * RAM: 128MB
> * HDD: 2GB
> *Automated Data IO services (virtual machine):*
> This is essentially for anything internet related that can be left to run on it's own. So stuff like downloading podcasts, checks iPlayer for specific keywords then notifies me of them (saves me having to constantly pop on there in case a documentary i might like gets aired) etc
> * OS: ArchLinux
> * RAM: 2GB
> * HDD: 20GB
> Media server (virtual machine):
> Mostly just for Subsonic and Media Tomb (though i might ditch entirely as I've spent days on it and still can't get transcoding to work properly). I plan to test it for ripping my DVD collection too, but I'm not expecting great things.
> * OS: ArchLinux
> * RAM: 2GB
> * HDD: 20GB
> I also have 3 dedicated boxes in various data centres for e-mail, web, IRC daemons and so on


Wow, cool servers!

Is there an Arch Linux server edition?


----------



## Plan9

Quote:


> Originally Posted by *joshd*
> 
> Wow, cool servers!
> Is there an Arch Linux server edition?


There was an unofficial ArchServer distro, but that seems to have died. I just use vanilla Arch. I rarely have any problems with it, and I can always take snapshots / roll back the virtual machine if I am attempting something quite risky.

I probably wouldn't recommend Arch for servers for other people though - at least not unless you know what you're doing and are comfortable with the OS. But then if you do fall into that latter category, you'd probably disregard other people's advice and run what you want anyway (as I had done lol).

I'm thinking of wiping the CentOS server and putting Arch on that too as, quite honestly, CentOS is starting to annoy me.


----------



## joshd

Quote:


> Originally Posted by *Plan9*
> 
> There was an unofficial ArchServer distro, but that seems to have died. I just use vanilla Arch though. I rarely have any problems with it and i can always take snapshots / rollback the virtual machine if I am attempting to do something quite risky.
> I probably wouldn't recommend Arch for servers for other people though - at least not unless you know what you're doing and are comfortable with the OS. But then if you do fall into that latter category then you'd probably disregard other peoples advice and run what you want anyway (as i had done lol).
> I'm thinking of wiping the CentOS server and putting Arch on that too as quite honestly, CentOS is starting to annoy me.


I had another failed install of Arch the other day









Is there like a definitive guide to install it with KDE or something? It looks great, and good fun once you get going with it.


----------



## Plan9

Quote:


> Originally Posted by *joshd*
> 
> I had another failed install of Arch the other day
> 
> 
> 
> 
> 
> 
> 
> 
> Is there like a definitive guide to install it with KDE or something? It looks great, and good fun once you get going with it.


I usually just follow the beginners guide.
What happened to your install?


----------



## joshd

Quote:


> Originally Posted by *Plan9*
> 
> I usually just follow the beginners guide.
> What happened to your install?


I got everything working, but it didn't install a GUI. So I did "pacman -S kde" and got an error, presumably something to do with my internet connection or the repos I added at the start?


----------



## Plan9

Quote:


> Originally Posted by *joshd*
> 
> I got everything working but it didn't install a GUI. So I did "pacman -S kde" and I got an error, presumably something to do with my internet connection or the repos i added at the start?


Did you install Xorg?


----------



## joshd

Quote:


> Originally Posted by *Plan9*
> 
> Did you install Xorg?


At the package selection?


----------



## Plan9

Quote:


> Originally Posted by *joshd*
> 
> At the package selection?


I'm guessing you didn't follow the Beginners' Guide then?

This is an absolute must!
https://wiki.archlinux.org/index.php/Beginners'_Guide

You won't need everything on there, but do go through it; you'll pretty much fail at Arch if you try to install a desktop any differently from the instructions there.


----------



## herkalurk

If you really want to skip all that, just install xorg* x11*









But that would be inefficient, and most people don't need all the x11 packages.
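For what it's worth, the order being hinted at here can be sketched as a dry run that just echoes the commands (the package and group names are my assumption from the Beginners' Guide of that era, not gospel; swap `echo` out of `run()` and run as root to do it for real):

```shell
# Dry-run sketch of the usual Arch desktop install order.
# run() only echoes; replace echo with "$@" alone to actually execute.
run() { echo "$@"; }

run pacman -S xorg-server xorg-xinit   # the X stack comes first
run pacman -S kde                      # only then the desktop group
```

The point is simply that `pacman -S kde` on its own, without X already in place, is where most first installs go sideways.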


----------



## killabytes

Finally got my WatchGuard Firebox II front LEDs to work with pfSense:






Finally I know what's going on!


----------



## tiro_uspsss

gearing up for a rebuild / case transplant + hardware upgrade (finally!)


----------



## Oedipus

I guess I'll post what's in my sig.

OS: Windows Server 2008 R2 Enterprise
CPU: Dual intel Xeon E5-2650's
Memory: 64GB ECC DDR3 1333
HDD: 4 x 2TB Greens, 1 1.5TB Green, and 1 250GB Barracuda
Case: Caselabs TX10-D

Purpose: DC, file server, Hyper-V


----------



## killabytes

^^

Awesome pictures, but uh...outdoors?


----------



## Oedipus

It's hard enough to take pictures of the inside of a black box, let alone trying to do it indoors with crap lighting.

But yes, outdoors.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Oedipus*
> 
> *snip*


the case is...... awesome.

Though let me make a recommendation: check out the castors in my previous post. The castors on my little (little compared to yours!) case are high-quality polyurethane; if you have hardwood floors, take it from a timber floor layer that those plastic castors will rip into the floor's varnish quicker than you can say 'caselabs'.


----------



## Oedipus

Luckily for that reason, I have linoleum.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Oedipus*
> 
> Luckily for that reason, I have linoleum.


ahhh, all good then, all good!


----------



## ramicio

I try to take all my pictures outdoors. All I have is a camera phone, so indoor pictures always look like crap.


----------



## raiderxx

Quote:


> Originally Posted by *ramicio*
> 
> I try to take all my pictures outdoors. All I have is a camera phone, so indoor pictures always look like crap.


Those are great pics from a camera phone. Of course it helps that it is bright and sunny out.


----------



## ramicio

They aren't my pictures; I was just responding because someone seemed to think it was silly to take pictures of stuff outdoors.


----------



## ramicio

Why isn't this thread a sticky?


----------



## killabytes

Quote:


> Originally Posted by *ramicio*
> 
> Why isn't this thread a sticky?


Because no one has PMed a mod.


----------



## chmodlabs

Quote:


> Originally Posted by *wtomlinson*
> 
> i have a couple more, although these were a couple years ago, and they belonged to work. please excuse the cleanliness of them, i was in a very sandy place far from here.
> 
> First, some sort of Dell desktop. no memory of the specs i just know they used 2 video cards for 3 total monitors. XP Pro.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Second, 3 PowerEdge servers. one dedicated for network monitoring (the nice piece on the wall with Solarwinds running), one dedicated IRC server, and the 3rd just for regular admin use. all running Server 2003.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4 HP Proliant (G6 i think). 2 DNS, 2 Exchange. all running Server 2003.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2 PowerEdge servers for DNS, 2 IBM servers for Exchange. everything on the left was for satellite equipment. all running Server 2003.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4 PowerEdge servers. 2 for DNS, 2 for Exchange. all running Server 2003. everything on right is for another satellite setup.


What do you mean by "satellite equipment"? Just curious about its current use to you, lol.

- chmodlabs


----------



## Imrac

Sat internet, I'm guessing. Looks like a military setup somewhere in the Middle East. With all that sand around and a 100+ degree server room, I'm sure you have some fun maintenance stories.

BTW, the post you quoted is almost 2 years old.


----------



## u3b3rg33k

Someone ought to introduce you ^^ to an air compressor...


----------



## Shiveron

Quote:


> Originally Posted by *chmodlabs*
> 
> What do you mean by "satellite equipment" ? Just curios of it's current use to you lol.
> - chmodlabs


Military satellite connection.
Quote:


> Originally Posted by *u3b3rg33k*
> 
> Someone aught to introduce you ^^ to an air compressor...


Someone ought to introduce you to a desert-based server room. There's literally no point blowing it out with an air compressor, because it will all settle back down on the equipment in less than 15 minutes.


----------



## dushan24

I might post some photos of our rack in our co-lo...


----------



## killabytes

Quote:


> Originally Posted by *dushan24*
> 
> I might post some photos of our rack in our co-lo...


Yes.....


----------



## ugotd8

Here's my 10TB home fileserver becoming a 20TB home fileserver. 



First built back in '08, around the time OpenSolaris was released. Now it's running OpenIndiana 151_a5 with a 20TB zpool (2x 5-drive raidz1 vdevs), using the built-in CIFS server.

Can anyone identify the case? Hint: rhymes with 'cracker'. I think I got it back in '05 or something. Still going strong.









Specs:

Mobo: ASUS M2N-LR (dual PCI-X FTW)
CPU: Phenom 9600 Quad
RAM: 8GB of something, I forget
PSU 1 (mobo/cpu): 650W
PSU 2 (drives): 350W
SATA Controllers 2x: SuperMicro AOC-SAT2-MV8
Drives: hodgepodge really, 5x HD204UIs, 4x WD20EARX, 1x ST32000542AS, 2x 250GB for root mirror

And yes, you counted right, there are 22 drives spinning in the pic.







Not bad for a budget fileserver IMHO.
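For the curious, a pool like this (2x 5-drive raidz1 vdevs) is a single `zpool create`. Here's a dry-run sketch that just prints the command; the c#t#d# device names are illustrative Solaris-style IDs (on a real box you'd get yours from `format` and run it as root):

```shell
# Dry-run: print (rather than execute) the zpool create for a
# 2x 5-drive raidz1 layout. Device names are illustrative only.
vdev1="c7t4d0 c7t2d0 c6t2d0 c6t7d0 c7t0d0"
vdev2="c6t1d0 c7t1d0 c6t0d0 c7t3d0 c6t3d0"
echo "zpool create mp2 raidz1 $vdev1 raidz1 $vdev2"
```

With 2TB drives that's 8 data + 2 parity disks, so roughly 16TB usable out of 20TB raw, and the pool can grow a whole vdev at a time later with `zpool add`.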


----------



## Oedipus

cm stacker, baby


----------



## TheImperial2004

My home NAS :

Norco 4224 :










Power LED (Blue) & Activity LED (Green - not shown) :










Cable management isn't one of my best traits :










120mm fan bracket with 3 CoolerMaster 2000 RPM fans (reasonably quiet) :










Used 2 Mini-SAS-to-SATA REVERSE cables to connect the backplane to the MB SATA ports :










i3 2120 (3.1 GHz) + 16 GB Vengeance Blue RAM DIMMS :










SilverStone Strider (Fully Modular - 1000w) was a big help :










Crucial m4 64GB SSD for the OS (Ubuntu Server w/ ZFS) :










Molex Hell :










Storage :

4x Hitachi 4TB Deskstar (I believe). Total = 12TB usable (RAID-Z1).


----------



## ugotd8

^^^ Love that case. Wife said drives or case, take your pick.  I'm gonna get one of those next time though.


----------



## tycoonbob

Spoiler: NUS (Network Unified Storage) Server -- Work in Progress, build log link in sig












Spoiler: Hyper-V Host 01













Spoiler: Hyper-V Host 02



Photos soon.


----------



## Gangsta Hotdog

It's an old HP g3001 a mate gave me. I upgraded the RAM, installed Windows Server 2003, and boom, had a Minecraft server.








OS: Windows Server 2003
Case: Stock.
CPU: Pentium D @ 3.00GHz (forgot the model number)
Motherboard: Stock.
Cooling: Stock.
Memory: 2 sticks of the cheap 1GB Kingston DDR2 667MHz stuff.
PSU: Stock.
OS HDD: A 120GB 7200RPM SATA Seagate drive, not sure of the model number.
Server Manufacturer: HP, technically, I suppose.

What you use it for: Currently it's sat under the bed doing nothing, but until a couple of months ago I had it running a 24/7 12-slot Minecraft server. It could have handled around 20 from what I've seen, but my internet speed was the limiting factor; I set it a few players lower than my connection could actually handle, just to leave some breathing room. It was also running an Apache server hosting a single page that redirected the user to my website, freely hosted by some web-server-renting company (to save bandwidth for the Minecraft server); I forget their name though.
Sorry for the lack of pics; haven't got any, and figured it would be a waste of time to get it out and dust it off considering it isn't anything special or even an actual server case. Not even sure why I bothered to post this, lol!
Also, first post! Yay!


----------



## Irisservice

Quote:


> Originally Posted by *tycoonbob*
> 
> Spoiler: Hyper-V Host 01
what case is this?
can we see a pic with front open?


----------



## tycoonbob

Quote:


> Originally Posted by *Irisservice*
> 
> what case is this?
> can we see a pic with front open?


Sure. I started to take one of those last night when I took the rest...but decided against it because it needs the dust cleaned out.








It's a Rosewill RSV-L4000...which comes with 2 HDD areas...each holding 4 drives, non-hot swap. This is also the version that comes with the fan bar with 3 120mm fans...and there is also a 120mm fan in front of each HDD area, and 2 80mm in the back. Fairly basic, roomy, cheap (~$100), and quiet. I currently have 3 of these chassis...one for each of my Hyper-V Hosts, and one that my PC is in. People complain because it doesn't have hot-swap, or that it doesn't come with rails ($35 extra)...but for a $100 4U rackmount chassis, you are crazy if you expect all that. It's a great case for a home environment, or SMB...as long as you don't need Hotswap.

I will take some photos of the front, opened and add them to my original post...after I finish my Cinnamon Pretzel and take the GSD for a walk. Stay tuned!

EDIT:
2 new photos have been uploaded. The server on the bottom is the exact same chassis, but it's actually my PC. I currently have everything lying around my office while I build my mDC (miniDataCentre) out in the garage, which will happen after I finish up my NUS server. I will get my other server uploaded soon.


----------



## ikem

I have been slowly building my server up.

I re-added the second HDD cage and moved the existing drives to it. Planning on running RAID 5 with 3x 2TB drives in the spare HDD rack.

Hardware consists of:

Lian Li V1200
AMD FX 8150 (won here on OCN)
Gigabyte 990FXA-UD5
8GB Kingston HyperX 1600
Gigabyte HD 6870 (TC folding)
Seagate 500GB - OS drive
2x Seagate 1TB - one currently for backups, one for data
Blu-ray reader for ripping down the road
Corsair TX650 - MDPC sleeving

This is mainly used for video streaming to my ITX rig, TC folding, NAS, and backups.

I'm planning on expanding its use later.


----------



## tonyjones

my storage pod finally complete


----------



## tycoonbob

Quote:


> Originally Posted by *tonyjones*
> 
> my storage pod finally complete


Looks like one of those Backblaze pods.


----------



## reezin14

Quote:


> Originally Posted by *tycoonbob*
> 
> Looks like one of those Backblaze pods.


It does. tonyjones, how much storage is in there?


----------



## giga_hertz

Hi folks,

My machines almost always run Linux, so they are all servers in their own right.









As server-role-only PCs I have two file-backup servers with 4 Hitachi 1TB HDDs each. Old stuff: one is an AMD dual-core Athlon, the other a P4.
They both boot from USB flash; one runs Ubuntu 9 and the other OpenSUSE 11.3.

Both flash installs are quite different, but they do the same thing: mount a simple NFS share on the Hitachi HDDs.

They are almost always shut down, so I only use them to back up my data once a week, sometimes once a month.

The trick is that both machines have encrypted hard drives AND software RAID 10.
So after booting either machine, I log in over SSH and mount the encrypted RAID 10 filesystem.

This is the safest setup from both a hardware and a software standpoint.

But what really brings me to this discussion is my usual file / BitTorrent / FTP / image-sharing server:

http://www.excito.com/

I have the old Bubba Two, which is an amazing machine!

Their new Bubba 3 server is awesome!

I consider it the best home file server on the market today.
Solutions like the Pico server are not as good, as they rely on external USB HDDs.

The Bubba Two is now out of production, but I still run it, sometimes 24/7 for weeks in a row.
Never had a single failure.

The new Bubba 3, however, is simply awesome. The only downside is the price.
But if you live in an area where electricity is very expensive, it can pay for itself in a single year just on the energy savings compared to a high-powered PC-like build.

And the way it looks is simply jaw-dropping! Excellent! It is only a bit bigger than an external 3.5'' HDD case, but much better looking.

So if someone is looking for an excellent low-power, almost completely silent solution... Bubba is the way to go.

Regards.


----------



## Onions

Quote:


> Originally Posted by *tonyjones*
> 
> my storage pod finally complete


45 drives.... wow, imagine if they're all 4TB drives


----------



## tycoonbob

Quote:


> Originally Posted by *Onions*
> 
> 45 drives.... wow image if there all 4tb drives


My only complaint with a setup like this, and the Backblaze, is that they are not designed to run standalone. They use slow SATA port multipliers instead of any kind of RAID controller. Backblaze designed these pods for replication between pods, not redundancy between drives inside a pod. Yes, you can do UnRAID or something and be just fine...but I'm more of a hardware RAID kind of guy.


----------



## ramicio

The main beef I have with a case like that is that you have to put it on a rail (and quite a strong one) to slide the whole thing out just to change a failed drive. Cases with hot-swap bays will never die. The drives also look too close together to be cooled properly. And the cases seem way too expensive for just being a piece of sheet metal; there are no hot-swap trays or anything, so they are way overpriced.


----------



## tycoonbob

Quote:


> Originally Posted by *ramicio*
> 
> The main beef I have with a case like that is that you have to put them on a rail (and quite a strong one) to be able to slide the whole thing out just to change a failed drive. Cases with hot-swap bays will never die. The drives look too close together to be cooled. The cases seem way too expensive for just being a piece of sheet metal. There are no hot-swap trays or anything, so they are way overpriced.


Exactly! It's not practical in my opinion. If you have that much data, you've got to care about it...heat issues are one thing I never want to worry about.


----------



## herkalurk

If you need as many drives as possible, I'd invest in some good ol' SuperMicro....

http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm

Don't forget an extra 4U of disks attached via SAS cables....









http://www.supermicro.com/products/chassis/4U/847/SC847E26-RJBOD1.cfm


----------



## Paratrooper1n0

Quote:


> Originally Posted by *Murderous Moppet*
> 
> You guys with your "real" servers. Pfft, you don't need a real server when you have a recycled HP dc7600 slim! Especially when it has 4 whole gigabytes of DDR2, 5tb of internal storage and a 3.2GHz P4 with hyperthreading.


Why did you steal from my school?


----------



## chmodlabs

Quote:


> Originally Posted by *Imrac*
> 
> Sat internet I am guessing. Looks like a military setup somewhere in the middle east. With all that sand around and 100+ degree server room, I am sure you have some fun maintenance stories.
> BTW that post you quotes was almost 2 years old.


lol just realized that.

- chmodlabs


----------



## Bonn93

My file server can beat every other one posted here.

HP Dx5150 Microtower.

AMD Athlon 64 1.99GHz single-core CPU
2GB DDR2(?)
160GB 7200RPM SATA
320GB 5400RPM IDE
80GB 2.5-inch SATA

Running Windows Server 2008 R2 64-bit with no problems at all...

Currently running as a torrent server, media streaming server, and NTFS/SMB (NFS, from memory?) shares.

The 80GB laptop drive is the boot drive.
The 160GB SATA is my local PC backup drive, from an Acronis image.
The 320GB IDE is movies, music, etc.

Used to have an awesome wireless network at home and gigabit switches; since moving to Sydney it's running a 100Mb connection to a shoddy Telstra ADSL Thomson modem/router with the worst 802.11 specs I've ever seen.









Currently trying to fund a new box and wireless network.

i3 2120T (low power), 4GB DDR3, Intel server board, SSD boot drive, and 4+ 1TB drives.

Linksys or Cisco dual-band, 450Mbps+ wireless....


----------



## jrl1357

OK, so my dad got this 'workstation' server from his work ~6 years ago. He didn't need it, so he gave it to me. I had it running as a folding rig for a while, but then the PSU died. So my plan now is to upgrade it and convert it into a home server. Current config:

CPU: Intel Pentium 4 2.8GHz
Motherboard: Dell ATX, socket 775, 3x SATA I, 1x IDE, JBOD, RAID 0, 1, 5, 10
RAM: 2x 256MB DDR (at least one stick is failing)
Storage: none at the moment
Case: Dell
PSU: Dell 250W, dead.
OS: -

Planned config:

CPU: Intel Pentium D 3.4GHz ($20, eBay)
Motherboard: Dell ATX, socket 775, 3x SATA I, 1x IDE, JBOD, RAID 0, 1, 5, 10
RAM: 1x 512MB ($10, Newegg)
Storage: 1x Seagate Barracuda 1TB (already owned as an external; just need to bust open the casing), 1x Western Digital SE 250GB (owned), 1x Samsung 250GB (owned)
Case: Dell
PSU: Seasonic 300W ($40, Newegg)
OS: most likely Debian; may also consider CentOS, FreeBSD, and OpenBSD (feel free to suggest anything). On the Debian side I'm thinking of Debian/kFreeBSD just for kicks.

This is a sucky phone pic of a sucky tablet pic of the front bezel and case: http://www.overclock.net/gallery/image/view/id/785395/album/636778 (if anyone knows how to get the img URL for something in your gallery, please say so)

So what do you guys think of the home server upgrade? Waste of time or good idea? I had a couple of drives around and thought: why not create a home server, so I can back them up easily and stream to different computers all around my house without taking anything from my current rig? Everything would cost $70. The one thing I'm unsure about is the CPU. Would a P4 be enough? Or is the PD worth $20?


----------



## S3phro

My old dev server got pwned by a power surge last week, so it gave me an excuse to upgrade!

I rely heavily on having a dev environment at hand, not only to test new products (Server 2012, etc.) but also packages and deployments using SCCM, so I need a fair bit of room to run multiple machines and build a realistic environment.

So my new DEV Box is as follows!

CPU: Intel Xeon 1230 V2
Mobo: Gigabyte Z68P-DS3
RAM: 4x 8GB G.Skill ARES 1600MHz (32GB total)
HDD: 2x 2TB in RAID 1
OS: vSphere (ESX) 5.0

I only built it today, but I've already built my domain; just implementing my SQL server now, and then hopefully my SCCM 2007 server before I go to bed.









I'll be migrating the machines off of the RAID 1 array and onto my SAN when it's built, then replacing the two 2TB drives with two SSDs for vSphere to run on its own.

Gah, Photobucket is hating me so I can't get a screenie up; I'll make another post later of the vSphere interface if anyone's interested.


----------



## Boyboyd

Just a cheap NAS for my blu-rays and TV series. Serving to 2 clients at a time, max.

4x 2TB
5x 1TB
1x 250GB drive (OS)

In software RAID 4 (FlexRAID)



When the drives are idling they sit at pretty much ambient temperature.



They load at around 30°C, with some very low-speed 120mm fans pushing air over them.

And it sits in a Lack rack.


----------



## Farih

Got a simple home server.

AMD Fusion E-350 (2x 1.8GHz), 4GB DDR3 1333MHz.

500GB OS, 2TB storage and 640GB Back-up.

It's slow but does everything I need:
-Feeds 2 media players.
-Backs up 3 computers.
-Plays a radio stream 24/7 so I can listen to my tunes anywhere in the world.
-Runs a 24/7 download hub [DC++].
-Passive cooling.
-VNC.
-Pulling just about 15 to 20 watts doing what it does [max load at around 35 to 40 watts].

Using this board; one of the best for a Fusion E-350 in a home server, if you ask me.

A bit low on SATA ports, but there's room to expand.


----------



## Shiveron

Ahh the good ol' lack rack


----------



## Boyboyd

Quote:


> Originally Posted by *Shiveron*
> 
> Ahh the good ol' lack rack


It's ok, but the legs aren't the right width apart anymore, so you need to move them out by 14mm which is hard to do with the materials it's made of, lol.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *herkalurk*
> 
> If you need as many drives as possible, I'd invest in some good ole super micro....
> http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm
> Don't forget an extra 4U more of disks attached via SAS cables....
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.supermicro.com/products/chassis/4U/847/SC847E26-RJBOD1.cfm


If I ever reach the point of needing that many drives in a single case, that's my first choice.....my dream chassis!


----------



## Boyboyd

You need a forklift truck to rack it though. I'm deadly serious, they make those.


----------



## ramicio

If you ever need that many drives, I would venture that you're rich as hell, and you should just move to cases that use 2.5" bays.


----------



## Murlocke

Quote:


> Originally Posted by *ramicio*
> 
> If you ever need that many drives, I would venture that you're rich as hell, and you should just venture into cases that use 2.5" bays.


I have almost that many drives and I'm considered low-to-middle income.









50 slots would be $7500 at $150 per 3TB drive, and that's buying 50 drives without a bulk discount. If you browse around and find a good bulk deal on 50 drives, you can probably get about $100 per drive, which makes it $5000. You aren't required to fill the server immediately, though; you could buy it, put a single drive in, and expand when needed.

Really not that bad considering hardly anyone (non-business) would fill the server up right when they got it. $5000-$7500 over 5-10 years? Lots of people can afford that, and it's way more future-proof.

Though I believe that case comes with enterprise-class hardware in it... if it's the same case I was looking at, it's like $7000 without the drives. A little absurd for home servers, since you can get 2x Norco 4224, which will fit 48 drives, for $800.
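The arithmetic, for anyone checking (numbers straight from the paragraph above):

```shell
# Per-drive cost times slot count, at retail vs. a hypothetical bulk price.
slots=50
retail=150   # $/3TB drive at retail
bulk=100     # $/drive with a bulk deal
echo "retail: \$$((slots * retail))"   # retail: $7500
echo "bulk:   \$$((slots * bulk))"     # bulk:   $5000
```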
Quote:


> Originally Posted by *Boyboyd*
> 
> You need a forklift truck to rack it though. I'm deadly serious, they make those.


Yeah, I can't move my 24-drive cases when they have all slots full. I have to remove the drives, then move the case. They get pretty heavy, and I'm not a very strong guy.


----------



## ramicio

How do you figure you spread that over 5-10 years? Technology changes too rapidly; you're not future-proofing anything. Only if you pumped that much money into an instant, fully-stocked build would it be future-proof. Lower-middle-class people do not find $7,000 anywhere near affordable. That's a quarter of their income, so either you live with mommy and daddy, or you're upper middle class.


----------



## DigitalSavior

Quote:


> Originally Posted by *ramicio*
> 
> How do you figure you spread that over 5-10 years? Technology changes that rapidly, so you're not future-proofing anything. Only if you pumped that much money into an instant, fully-stocked build, would it be future-proof. Lower middle class people do not find $7,000 to be anywhere near affordable. That's a 4th of their income, so you either live with mommy and daddy, or are upper middle class.


Why have all of my storage available from day one if it's not going to be utilized? I can start small with a couple of drives and expand as my needs grow, spreading the cost over time.


----------



## ramicio

Because expanding many times like that puts the data at risk... There's no guarantee that you will even have access to the same disks a year later. I built my server about a year ago, and now the drives I used are completely gone thanks to WD monopolizing. If I want to expand, I not only have to buy a new RAID card (I'm out of ports), I also have to buy an entire set of different drives and transfer the data to a new array. Had I just bought 16 drives at once, I would not have this problem.


----------



## tycoonbob

Quote:


> Originally Posted by *ramicio*
> 
> Because expanding many times like that puts the data at risk...


How?


----------



## Aestylis

Quote:


> Originally Posted by *ramicio*
> 
> Because expanding many times like that puts the data at risk... There's not even a guarantee that you will have access to the same disks even a year later. I built my server about a year ago, and now the drives I used are completely gone thanks to WD monopolizing. Now if I want to expand, I not only have to buy a new RAID card (out of ports), but I now have to buy an entire set of different drives and transfer the data to a new array. Had I just bought 16 drives at once I would not have this problem.


You know it isn't necessary to keep all your drives identical, right? Think of it this way: let's say I purchase 8x 2TB drives, all one brand, all fairly similar date codes, and all the same firmware and hardware revision. One year later half of them fail, and come to find out it was an issue with that specific date/hardware code. If I had staggered them with different makes/models (same size, similar features), this wouldn't have happened.

Sure, you may take a hit in seek times or read/write times, but if you were going to expand slowly over time anyway....

Edit:
If you plan right, you can also pre-purchase multiple RAID controllers and span your array across them.


----------



## axipher

Quote:


> Originally Posted by *tycoonbob*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ramicio*
> 
> Because expanding many times like that puts the data at risk...
> 
> 
> 
> How?
Click to expand...

Depending on the method used to pool them, you would have to rebuild when adding drives, plus you now have drives of varying usage and age.


----------



## tycoonbob

Quote:


> Originally Posted by *axipher*
> 
> Depending on the method used to pool them, would have to rebuild when adding drives, plus you now have drives of varying usage and age.


Chances are, if you buy a chassis like that, you have a plan. Chances are, if you have a plan and you aren't buying drives to fill it up at once, you've got a RAID controller that allows online migration, and you are using it for that. Chances are you are also not putting all those drives in one giant array; you are going to do a RAID 5, 6, or 10, which (depending on the controller) allows for online expansion.

Ever heard of Gigabyte Boundaries?

If you don't have a plan, you probably went with something like UnRAID...which can mix and match drives all day.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *Boyboyd*
> 
> You need a forklift truck to rack it though. I'm deadly serious, they make those.


There is a user who runs this case at home (not from these forums, though), and his build is amazing!
Quote:


> Originally Posted by *ramicio*
> 
> If you ever need that many drives, I would venture that you're rich as hell, and you should just venture into cases that use 2.5" bays.


You don't have to be rich to own that many drives. Like they said, you expand as your needs grow; the case will most likely be your biggest up-front cost.

Quote:


> Originally Posted by *Murlocke*
> 
> I have almost that many drives and i'm considered low-middle wage for income.
> 
> 
> 
> 
> 
> 
> 
> 
> 50 slots, that would be $7500 at $150/3TB drives, and that's if you buy 50 drives without getting a bulk discount. Assuming you browse around try to get a good bulk deal, with 50 drives you can probably get a bulk deal of about $100 per which would make it $5000. You aren't required to fill the server immediately though, you could buy it and put a single drive in it and expand when needed.
> Really not that bad considering hardly anyone (non-business) would fill the server up right when they got it. $5000-$7500 over 5-10 years? Lots can afford that and It's way more future proof.
> Though, I believe that case comes with enterprise class hardware in it.. if it's the same case I was looking at it's like $7000 without the drives. A little absurd for home servers since you can get 2x Norco 4224 that will fit 48 drives for $800.
> Yeah, I can't move my 24 drive cases when they have all slots full. I have to remove the drives then move it. They get pretty heavy, and i'm not a very strong guy.


Many will say you can't future-proof with today's tech, but when it comes to having the right case with the right number of drive bays up front, yes you can, as you and I did by going with a Norco case right from the jump. I have upgraded my internals, and my case only once: when I purchased the Norco.

Quote:


> Originally Posted by *ramicio*
> 
> How do you figure you spread that over 5-10 years? Technology changes that rapidly, so you're not future-proofing anything. Only if you pumped that much money into an instant, fully-stocked build, would it be future-proof. Lower middle class people do not find $7,000 to be anywhere near affordable. That's a 4th of their income, so you either live with mommy and daddy, or are upper middle class.


If you pumped it all into a fully stocked build, you would lose out over time: the warranty would run out, there would be unneeded wear on drives spinning while not being used, plus the cost of electricity to keep them all turning. Expand drives as you need them; 3-5 drives at a time tends to work well for me.

Quote:


> Originally Posted by *ramicio*
> 
> Because expanding many times like that puts the data at risk... There's not even a guarantee that you will have access to the same disks even a year later. I built my server about a year ago, and now the drives I used are completely gone thanks to WD monopolizing. Now if I want to expand, I not only have to buy a new RAID card (out of ports), but I now have to buy an entire set of different drives and transfer the data to a new array. Had I just bought 16 drives at once I would not have this problem.


Data will always be at some kind of risk; how much risk depends on the OS and hardware. We will all need to expand at some point as our collections grow. I have expanded on HW RAID 5 maybe two times, and once more when I expanded and moved to HW RAID 6. Many are using unRAID, FlexRAID, and other ways of pooling their data, so the risks will vary.


----------



## ugotd8

Quote:


> Originally Posted by *Aestylis*
> 
> You know that it isn't necessary to keep all your drives identical right?. Think of it this way, Let's say I purchase 8x 2tb drives, all one brand, all fairly similiar date codes, and all the same firmware and hw revision. One year later half of them fail, come to find out it was an issue with that specific date/hardware code. Now if I had staggered them with different makes/models, same size with similiar features, this wouldn't have happened.
> Sure you may take a hit in seek times, or read/write times, but if you were going to expand slowly over time....
> Edit..
> If you plan right, you can also pre-purchase multiple raid controllers and span your array across them.


Good post. Here it is in practice, using 2 PCI-X SuperMicro 8-port sata controllers:

Code:



# zpool status mp2
  pool: mp2
 state: ONLINE
  scan: resilvered 775G in 3h55m with 0 errors on Mon Jul 16 18:37:10 2012
config:

        NAME        STATE     READ WRITE CKSUM
        mp2         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0

# ./sdisks.pl mp2
Pool Name: mp2
DISK    VDEV     MODEL               FIRMWARE    PORC    LCC     TEMP
============================================================================
c7t4d0  raidz1-0 HD204UI             1AQ10001    0       146     22
c7t2d0  raidz1-0 WD20EARX-00PASB0    51.0AB51    48      654     27
c6t2d0  raidz1-0 HD204UI             1AQ10001    0       144     22
c6t7d0  raidz1-0 WD20EARX-00PASB0    51.0AB51    14      54      27
c7t0d0  raidz1-0 ST32000542AS        CC95                        29
c6t1d0  raidz1-1 HD204UI             1AQ10001    0       145     21
c7t1d0  raidz1-1 WD20EARX-00PASB0    51.0AB51    49      638     27
c6t0d0  raidz1-1 HD204UI             1AQ10001    0       143     21
c7t3d0  raidz1-1 WD20EARX-00PASB0    51.0AB51    48      629     26
c6t3d0  raidz1-1 HD204UI             1AQ10001    0       148     22

# iostat -zxcn
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.1    0.0    1.6    0.7  0.0  0.0    2.7    2.1   0   0 c3t1d0
    0.6    1.6   61.8  183.9  0.0  0.0   15.6    2.4   1   1 c6t0d0
    0.6    1.6   61.7  183.9  0.0  0.0   15.8    2.5   1   1 c6t2d0
    0.7    1.6   76.9  183.9  0.0  0.0   15.7    2.5   1   1 c6t3d0
    0.8    1.7   76.6  183.9  0.0  0.0    3.5    8.9   0   0 c6t7d0
    0.7    1.6   76.6  183.9  0.0  0.0    3.8   10.8   0   0 c7t0d0
    0.7    1.6   62.2  183.9  0.0  0.0    4.4   11.1   0   0 c7t2d0
    0.6    1.6   61.6  183.9  0.0  0.0    3.8   11.4   0   0 c7t3d0
    0.8    1.6   77.7  183.9  0.0  0.0   11.9    2.1   0   0 c7t4d0
    0.1    0.0    1.7    0.7  0.0  0.0    3.1    2.2   0   0 c3t0d0
    0.7    1.6   76.9  183.9  0.0  0.0   15.7    2.5   1   1 c6t1d0
    0.7    1.6   77.1  183.9  0.0  0.0    4.5   11.1   0   0 c7t1d0


----------



## S3phro

How do you justify needing that much space at home though?

In an enterprise environment I can understand it, but no way would you be using one controller for that many drives. Sure, you'd have redundancy on the drives, but if the hardware in the chassis fails you have one huge single point of failure..

I've come to the conclusion if you need that much storage at home you're into some seriously kinky stuff..


----------



## ugotd8

Quote:


> Originally Posted by *S3phro*
> 
> How do you justify needing that much space at home though?
> In an enterprise environment I can understand but no way would you be using one controller for that many drives, sure you'd have redundancy on the drives but if the hardware in the chassi fails you have one huge single point of failure..
> I've come to the conclusion if you need that much storage at home you're into some seriously kinky stuff..


Well, for one, I don't have to justify it. Two, get your mind out of the gutter.


----------



## S3phro

Quote:


> Originally Posted by *ugotd8*
> 
> Well, for one, I don't have to justify it. Two, get your mind out of the gutter.


I was referring to AlphaMonk not you sorry..


----------



## ugotd8

Quote:


> Originally Posted by *S3phro*
> 
> I was referring to AlphaMonk not you sorry..


Oops. Back to Defcon 1.


----------



## dushan24

Stop flaming, more servers...

This is a good thread, don't ruin it with pointless arguments.

People can spend their money on whatever they want, who cares.


----------



## ramicio

Quote:


> Originally Posted by *S3phro*
> 
> I've come to the conclusion if you need that much storage at home you're into some seriously kinky stuff..


I guess Blu-ray movies, entire TV series, and all music in FLAC are kinky...

Go ahead and use different drives, don't come crying when something weird happens and you lose your data. If you can't understand why expanding little by little puts your data at risk, you shouldn't even be messing with such hardware.


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Because expanding many times like that puts the data at risk... There's not even a guarantee that you will have access to the same disks even a year later. I built my server about a year ago, and now the drives I used are completely gone thanks to WD monopolizing. Now if I want to expand, I not only have to buy a new RAID card (out of ports), but I now have to buy an entire set of different drives and transfer the data to a new array. Had I just bought 16 drives at once I would not have this problem.


That doesn't really make a whole lot of sense, because you're suggesting that you cannot mix and match drives, which is not only possible but recommended practice when running large volumes of consumer-grade HDDs.
Quote:


> Originally Posted by *ramicio*
> 
> Go ahead and use different drives, don't come crying when something weird happens and you lose your data. If you can't understand why expanding little by little puts your data at risk, you shouldn't even be messing with such hardware.


Care to educate us?

Everything I've read to date has suggested it's best not to buy multiple drives from the same batches. This is the first time I've ever heard anyone recommend the exact opposite, so if there's any merit to your claim then I'd genuinely love to be educated on this matter.


----------



## ramicio

This is why normal people stress test the drives first and weed out the DOAs. Nice try. I'm not going to spoon feed you the information. If you don't understand how expanding works and how writing entire drives at a time is a huge risk, you won't grasp that this sentence in itself is an explanation. I never said you can't mix different drives, I just said it's ******ed, risky, and only hurts performance. My philosophy is "go big or go home."


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> This is why normal people stress test the drives first and weed out the DOAs. Nice try.


DOAs aren't the issue. It's defects that manifest themselves 6/12 months down the line. If there's a common fault then the whole batch dies. In fact this kind of scenario is disappointingly common.
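To put that batch argument in rough numbers, here is a toy model. All the figures in it (a 3% independent annual failure rate, a 1% chance of a batch-wide defect, an 8-drive array tolerating 2 failures) are purely illustrative assumptions, not data from this thread:

```python
from math import comb

# Toy model, illustrative numbers only: chance of losing an array that
# tolerates 2 simultaneous failures (RAID-6 style), comparing drives from
# mixed batches vs. a single batch that might share a common-mode defect.
p_indep = 0.03         # assumed chance a given drive fails in year one
p_batch_defect = 0.01  # assumed chance a whole batch shares a fatal defect
n = 8                  # drives in the array

def p_array_loss(p, n):
    """P(at least 3 of n independently failing drives die) -> array loss."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))

mixed = p_array_loss(p_indep, n)
# Single batch: either the shared defect takes out the whole batch at once,
# or the drives fail independently as before.
same_batch = p_batch_defect + (1 - p_batch_defect) * p_array_loss(p_indep, n)

print(f"mixed batches: {mixed:.5f}")
print(f"single batch:  {same_batch:.5f}")
```

Even a small chance of a common defect dominates the loss probability, which is the whole case for staggering purchases.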
Quote:


> Originally Posted by *ramicio*
> 
> I'm not going to spoon feed you the information. If you don't understand how expanding works and how writing entire drives at a time is a huge risk, you won't grasp that this sentence in itself is an explanation.


So basically you don't actually know what you're talking about and would rather berate and bluff your way out?
Quote:


> Originally Posted by *ramicio*
> 
> I never said you can't mix different drives, I just said it's ******ed, risky, and only hurts performance.


Cannot / should not. The context was pretty clear on both our parts.
Quote:


> Originally Posted by *ramicio*
> 
> My philosophy is "go big or go home."


Yeah, because bigger capacity drives are less error-prone than smaller capacity drives...









If you actually have any scientific merit to your claims then I suggest you make them known. A number of us on here would genuinely be interested (and happily accept we were wrong). However, the way you're carrying on, you'll just be brushed off as yet another internet anon who talks big but can't substantiate their claims.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> If you actually have any scientific merit to your claims then I suggest you make them known. A number of us on here would genuinely be interested (and happily accept we were wrong). However the way you're carrying on, you'll just be brushed off as yet another internet anom who talks big but can't substantiate their claims.


x2


----------



## Aestylis

Quote:


> Originally Posted by *tycoonbob*
> 
> x2


X3'ed.


----------



## Onions

So I think this is relevant:







http://ncix.com/products/index.php?sku=74459&vpn=0S03363&manufacture=HGST&promoid=1261


----------



## Shiveron

Quote:


> Originally Posted by *Aestylis*
> 
> X3'ed.


X4'ed


----------



## ALpHaMoNk

Quote:


> Originally Posted by *S3phro*
> 
> How do you justify needing that much space at home though?
> In an enterprise environment I can understand but no way would you be using one controller for that many drives, sure you'd have redundancy on the drives but if the hardware in the chassi fails you have one huge single point of failure..
> I've come to the conclusion if you need that much storage at home you're into some seriously kinky stuff..


Like many of us on here with home servers, mine too is used to house all my movies (HD and not, but mainly HD), plus TV shows, music and more. As for which hardware fails: I am using a RAID card with an expander. If the expander dies I can get another one; if the RAID card dies I have a spare of the same model. Data would still be intact, I would just have to swap out the failed part. As for the kinky stuff, none of that is on my server lol... it's stored on my old drives in a box.








Quote:


> Originally Posted by *ramicio*
> 
> This is why normal people stress test the drives first and weed out the DOAs. Nice try. I'm not going to spoon feed you the information. If you don't understand how expanding works and how writing entire drives at a time is a huge risk, you won't grasp that this sentence in itself is an explanation. I never said you can't mix different drives, I just said it's ******ed, risky, and only hurts performance. My philosophy is "go big or go home."


I test all drives that I buy... I understand your concern ramicio, but expanding itself doesn't put the data at risk; it is failed hardware that does, so we are always at some level of risk. If a batch of drives dies during an expansion then yes, there will be trouble, but if a batch of drives dies at idle there is still trouble. If data is such a high concern, there needs to be a backup plan in place. My server is not backed up; it is just not practical for me to back up the entire 30-plus TBs, but I do have a folder that is backed up locally and offsite for my most valuable data. Buying and building a server with all drives populated at once just makes no sense, even if you have the cash. Again, the downsides of doing it all in one shot:

1) A higher electricity bill for drives not in use but spun up.
2) Premature wear on drives that are spun up but not being used or filled.
3) Wasted warranty on drives that are spinning but not being used.
4) Price changes on drives; you could save a few hundred by spreading purchases out over time as you need them.
5) As already mentioned, the chance of landing drives from a bad batch.

As for drive models no longer being available: this is true, but not a problem. The newer version of the drives will most likely be cheaper, and the array will only perform as well as your weakest drive anyway.
My setup started with 2TB Hitachi 7K2000 drives. When I needed to expand I could no longer find 7K2000s, so I moved on to 7K3000 drives with no performance issues. They are all still 2TB drives, and all still Hitachi.
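The electricity point (1 above) is easy to put in ballpark numbers. The figures here are assumptions for illustration only: roughly 5 W idle per 3.5" drive, ten unused drives, and $0.12/kWh:

```python
# Back-of-envelope electricity cost of keeping unused drives spun up.
# All figures are illustrative assumptions, not measurements.
idle_watts_per_drive = 5.0  # assumed ~5 W idle for a 3.5" drive
drives_unused = 10          # drives bought up front but sitting empty
kwh_price = 0.12            # assumed electricity price in $/kWh
hours_per_year = 24 * 365

kwh = idle_watts_per_drive * drives_unused * hours_per_year / 1000
print(f"{kwh:.0f} kWh/year -> ${kwh * kwh_price:.2f}/year")
```

Not ruinous, but it is real money spent spinning empty platters, on top of the wear and warranty points.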


----------



## jrl1357

I'm going to have to x5 this. I have never lost an array to expanding, but I lost a 6+1-drive RAID 5 to a bad batch. It's better to keep backups on completely different drives when you start the array and then expand the array slowly. If the model isn't available, choose the closest one; all that happens is the faster drive is stunted at the slower one's performance, unless your controller is from like 1980.


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Expanding over and over DOES put the data at risk, ******* *******. You are taking an entire drive, reading it, and writing it back. That whole path through the controller is also risk to the data. It's very most possible you hit the URE expanding little by little like this. You ******* are blindly assuming that everything operates in a perfect world without stray particles. But go ahead, expand your multi-TB array 2 drives at a time over and over again. Just don't come crying when you have files that are corrupt from doing so. Cheap people who like to ****** rig stuff are the ones who like to mix drives.


You've still not actually stated why it's dangerous.

From what little information you've posted, it sounds like your argument is largely focused on the dangers of adding fresh drives to worn drives - but that's less dangerous than having an entire array of worn drives (as you would if you bought your entire pool from the start). If I've misunderstood you, then please enlighten me, but the dumbed-down version you've stated here really doesn't explain much at all.

As for the "stray particles" argument: that's going to happen regardless of whether you buy all your gear up front or expand over time, so your argument is moot. Plus, random failures like that are why there's a whole plethora of read and write checks, from file-system-based checksums through to real-time memory checks for write holes.

So you're not really making a coherent argument here, and everything you've stated would be a risk regardless of how you built your storage pool. What's worse is that given the supposed superiority of your understanding, you've still failed to provide any technical explanation aside from crudely proclaiming Heisenberg's Uncertainty Principle.
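The file-system checksum point can be sketched in a few lines. This is a toy illustration of the principle (not how any particular filesystem implements it): a hash is stored with the data, and a single flipped bit is caught on read because the hash no longer matches.

```python
import hashlib

# Toy sketch of checksummed storage: keep a hash alongside each block; on
# read, a single flipped bit (the "stray particle" case) is detected because
# the recomputed hash no longer matches the stored one.
block = bytearray(b"some file data" * 64)
stored_checksum = hashlib.sha256(block).digest()

block[100] ^= 0x01  # simulate one bit of silent corruption in storage

corruption_detected = hashlib.sha256(block).digest() != stored_checksum
print("corruption detected:", corruption_detected)  # -> True
```

A filesystem like ZFS does this per block and, with redundancy available, repairs the bad copy rather than just flagging it.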


----------



## Boyboyd

Quote:


> Originally Posted by *ramicio*
> 
> Expanding over and over DOES put the data at risk, ******* *******. You are taking an entire drive, reading it, and writing it back. That whole path through the controller is also risk to the data. It's very most possible you hit the URE expanding little by little like this. You ******* are blindly assuming that everything operates in a perfect world without stray particles. But go ahead, expand your multi-TB array 2 drives at a time over and over again. Just don't come crying when you have files that are corrupt from doing so. Cheap people who like to ****** rig stuff are the ones who like to mix drives.


There's no reason to use language like that or be offensive, even if it is censored.


----------



## tycoonbob

Quote:


> Originally Posted by *ramicio*
> 
> Expanding over and over DOES put the data at risk, ******* *******. You are taking an entire drive, reading it, and writing it back. That whole path through the controller is also risk to the data. It's very most possible you hit the URE expanding little by little like this. You ******* are blindly assuming that everything operates in a perfect world without stray particles. But go ahead, expand your multi-TB array 2 drives at a time over and over again. Just don't come crying when you have files that are corrupt from doing so. Cheap people who like to ****** rig stuff are the ones who like to mix drives.


I kinda understand where you are going with the URE argument, but when using a real raid controller, the drives are being scrubbed in the background. When you expand an array, that's similar to a rebuild in how the drive performance will be lessened, etc. When doing a rebuild of a Raid 5, and you encounter a URE, rebuild would stop and kill the array. However, when using a quality controller, UREs will be discovered well before the rebuild, due to background scrubbing.

A typical consumer grade SATA HDD is rated at 1x10^14 URE (1 bit in about 12TB), with enterprise grade HDDs rated at 1x10^15 or 1x10^16. This shows that when buying quality parts, you lessen the risk. Also, it depends on how large your array is, and how large it is going to be. Most importantly, background scrubbing can prevent any UREs in the first place. Don't go cheap on your controller for this reason alone.
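The 1x10^14 figure works out as follows. Assuming the spec means one unrecoverable error per 10^14 bits read, a full 12 TB pass (roughly a rebuild across an array of consumer drives) gives:

```python
# Worked version of the URE numbers above: a 1-in-1e14 bit error rate and a
# 12 TB read, e.g. a full rebuild pass across consumer-grade drives.
ure_per_bit = 1e-14
bits_read = 12e12 * 8          # 12 TB expressed in bits

expected_ures = bits_read * ure_per_bit
p_at_least_one = 1 - (1 - ure_per_bit) ** bits_read  # ~= 1 - exp(-0.96)

print(f"expected UREs over the read: {expected_ures:.2f}")
print(f"P(at least one URE):         {p_at_least_one:.2f}")
```

So at the consumer spec a single full pass is close to a coin flip for hitting at least one URE, which is exactly why scrubbing beforehand, and the 10x-100x better enterprise ratings, matter.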


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> I kinda understand where you are going with the URE argument, but when using a real raid controller, the drives are being scrubbed in the background. When you expand an array, that's similar to a rebuild in how the drive performance will be lessened, etc. When doing a rebuild of a Raid 5, and you encounter a URE, rebuild would stop and kill the array. However, when using a quality controller, UREs will be discovered well before the rebuild, due to background scrubbing.
> A typical consumer grade SATA HDD is rated at 1x10^14 URE (1 bit in about 12TB), with enterprise grade HDDs rated at 1x10^15 or 1x10^16. This shows that when buying quality parts, you lessen the risk. Also, it depends on how large your array is, and how large it is going to be. Most importantly, background scrubbing can prevent any UREs in the first place. Don't go cheap on your controller for this reason alone.


I went cheap on my controller but only because I run ZFS which does all that stuff in software.

In fact I've been really impressed with ZFS as I've tried my damnedest to break my pool and no matter what I threw at it, ZFS recovered the data (to the point where I was removing the parity drive mid-write then forced a kernel panic and everything still popped up fine).

I do some pretty nasty stuff to my file server, so I keep a close check on the performance and reliability of the drives - thus far I've had memory, motherboards and even the mouse fail, yet the HDDs are still performing. I really couldn't ask for better.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> I went cheap on my controller but only because I run ZFS which does all that stuff in software.
> In fact I've been really impressed with ZFS as I've tried my damnedest to break my pool and no matter what I threw at it, ZFS recovered the data (to the point where I was removing the parity drive mid-write then forced a kernel panic and everything still popped up fine).
> I do some pretty nasty stuff to my file server so keep a close check on the performance and reliability of the drives - thus far I've had memory, motherboards and even the mouse fail yet the HDDs are still performing. I really couldn't ask for better.


I personally am not a fan of software raid, but yes... when using software raid the controller isn't really a controller. It's more of an HBA or expander. My previous comment was aimed directly at hardware RAID.

Another thought: in an enterprise setting (and I see several different environments, being an IT Consultant), their SANs, NASs, DASs, and/or NUSs are also expanded based on need. Does that mean that these corporations are doing their storage wrong?


----------



## jrl1357

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tycoonbob*
> 
> I kinda understand where you are going with the URE argument, but when using a real raid controller, the drives are being scrubbed in the background. When you expand an array, that's similar to a rebuild in how the drive performance will be lessened, etc. When doing a rebuild of a Raid 5, and you encounter a URE, rebuild would stop and kill the array. However, when using a quality controller, UREs will be discovered well before the rebuild, due to background scrubbing.
> A typical consumer grade SATA HDD is rated at 1x10^14 URE (1 bit in about 12TB), with enterprise grade HDDs rated at 1x10^15 or 1x10^16. This shows that when buying quality parts, you lessen the risk. Also, it depends on how large your array is, and how large it is going to be. Most importantly, background scrubbing can prevent any UREs in the first place. Don't go cheap on your controller for this reason alone.
> 
> 
> 
> I went cheap on my controller but only because I run ZFS which does all that stuff in software.
> 
> In fact I've been really impressed with ZFS as I've tried my damnedest to break my pool and no matter what I threw at it, ZFS recovered the data (to the point where I was removing the parity drive mid-write then forced a kernel panic and everything still popped up fine).
> 
> I do some pretty nasty stuff to my file server so keep a close check on the performance and reliability of the drives - thus far I've had memory, motherboards and even the mouse fail yet the HDDs are still performing. I really couldn't ask for better.
Click to expand...

ZFS? You're running BSD right? I'm considering Debian GNU/kFreeBSD over Linux just for ZFS support. You think it's worth it?

@ramicio there is no reason whatsoever to use that kind of language. Even censored it's A) being, well, what you said and B) against the ToS.


----------



## ugotd8

Quote:


> Originally Posted by *Plan9*
> 
> I went cheap on my controller but only because I run ZFS which does all that stuff in software.
> In fact I've been really impressed with ZFS as I've tried my damnedest to break my pool and no matter what I threw at it, ZFS recovered the data (to the point where I was removing the parity drive mid-write then forced a kernel panic and everything still popped up fine).
> I do some pretty nasty stuff to my file server so keep a close check on the performance and reliability of the drives - thus far I've had memory, motherboards and even the mouse fail yet the HDDs are still performing. I really couldn't ask for better.


+1 I've done some of the same. Plus how cool is ZFS export/import ?

ZFS_is_da_bawls.









@jrl1357 - Try the OpenIndiana build 151_a5. Totally worth it.


----------



## Plan9

Quote:


> Originally Posted by *jrl1357*
> 
> ZFS? Your running bsd right? I considering debian gnu/kfreebsd over linux just for ZFS support. You think its worth it?
> @ramicio there is no reason what so ever to use that kind of language. Even censored its A. Being, well, what you said and B. agianst tos.


Yeah, FreeBSD.
Personally I'd recommend you use FreeBSD rather than some weird GNU/BSD hybrid. While there are a few minor differences in the common userland (e.g. different command switches for _ps_), FreeBSD really is a fantastic OS to work with. I'd definitely recommend at least trying it first anyway.
Quote:


> Originally Posted by *ugotd8*
> 
> +1 I've done some of the same. Plus how cool is ZFS export/import ?
> ZFS_is_da_bawls.
> 
> 
> 
> 
> 
> 
> 
> 
> @jrl1357 - Try the OpenIndiana build 151_a5. Totally worth it.


There's too many cool ZFS features to mention, but snapshotting is one of my favourite.

How is OpenIndiana? I was largely disappointed with the crappiness of OpenSolaris despite being a Solaris advocate (or maybe because I'm a Solaris advocate?)


----------



## jrl1357

I'm already trying FreeBSD; it's one of three OSes on my main rig right now. I'm just better with Debian, but that's from experience, and I'm gaining in FreeBSD. I'll give OpenIndiana a shot too.


----------



## ugotd8

Quote:


> Originally Posted by *Plan9*
> 
> Yeah, FreeBSD.
> Personally I'd recommend you use FreeBSD rather than some weird GNU/BSD hybrid. While there area few minor differences in the common userland (eg different command switches for _ps_), FreeBSD really is a fantastic OS to work with. I'd definitely recommend at least trying it first anyway.
> There's too many cool ZFS features to mention, but snapshotting is one of my favourite.
> How is OpenIndiana? I was largely disappointed with the crappiness of OpenSolaris despite being a Solaris advocate (or maybe because I'm a Solaris advocate?
> 
> 
> 
> 
> 
> 
> 
> )


I think both OpenIndiana AND OpenSolaris are great. I have been a Solaris admin for, well, a long time.









Agreed OpenSolaris wasn't as good as it could have been, but believe me the developers worked down the hall and it was a pretty cool time, although Sun was dying and on it's last breath just about the time OpenSolaris was becoming popular so some mistakes were made developer & corporate wise. Can you tell I'm an SA by that massive run-on sentence ?









Anyway, OpenIndiana has been great so far. I run my server headless and therefore just used the text installer which doesn't install any of the GUI tools. Looking forward to playing with the auto_snapshot feature of the new version of ZFS.

EDIT: props to all my unix brethren here, didn't expect to find much more than 16-year old gamers here on OCN.


----------



## vpadro

And where are the post your server entries?

Please stay on topic. =)


----------



## Plan9

Quote:


> Originally Posted by *ugotd8*
> 
> I think both OpenIndiana AND OpenSolaris are great. I have been a Solaris admin for, well, a long time.
> 
> 
> 
> 
> 
> 
> 
> 
> Agreed OpenSolaris wasn't as good as it could have been, but believe me the developers worked down the hall and it was a pretty cool time, although Sun was dying and on it's last breath just about the time OpenSolaris was becoming popular so some mistakes were made developer & corporate wise. Can you tell I'm an SA by that massive run-on sentence ?
> 
> 
> 
> 
> 
> 
> 
> 
> Anyway, OpenIndiana has been great so far. I run my server headless and therefore just used the text installer which doesn't install any of the GUI tools. Looking forward to playing with the auto_snapshot feature of the new version of ZFS.
> EDIT: props to all my unix brethren here, didn't expect to find much more than 16-year old gamers here on OCN.


I hadn't realised OpenIndiana could run headless - it was one of the reasons I opted for FreeBSD over OpenSolaris.
Also, I thought Oracle stopped releasing the source for ZFS after v28. Is OpenIndiana still getting updates (eg encryption)?
Quote:


> Originally Posted by *vpadro*
> 
> And where are the post your server entries?
> Please stay on topic. =)


This is, kind of


----------



## ugotd8

Quote:


> Originally Posted by *Plan9*
> 
> I hadn't realised OpenIndiana could run headless - it was one of the reasons I opted for FreeBSD over OpenSolaris.
> Also, I thought Oracle stopped releasing the source for ZFS after v28. Is OpenIndiana still getting updates (eg encryption)?
> This is, kind of


There is discussion of encryption coming to OI here. There is a project page for encryption at illumos, but no activity to speak of.

Last time I looked into it, it looked like ZFS + encryption + open-source = non-existent. If you are willing to live without open-source, Solaris 11 express has encryption and is free.

Captain obvious warning: choose carefully, once you do the zfs upgrade to your pool these days it's like getting married to a branch.


----------



## Plan9

Quote:


> Originally Posted by *ugotd8*
> 
> There is discussion of encryption coming to OI here. There is a project page for encryption at illumos, but no activity to speak of.
> Last time I looked into it, it looked like ZFS + encryption + open-source = non-existent. If you are willing to live without open-source, Solaris 11 express has encryption and is free.
> Captain obvious warning: choose carefully, once you do the zfs upgrade to your pool these days it's like getting married to a branch.


Yeah, this is why I stuck with FreeBSD.

I was tempted to go with pure Solaris, but IIRC the licence doesn't cover home servers as that's considered production use (or something).


----------



## ugotd8

Quote:


> Originally Posted by *Plan9*
> 
> Yeah, this is why I stuck with FreeBSD.
> I was tempted to go with pure Solaris, but IIRC the licences doesn't cover home servers as that's considered production use (or something).


To my mind, "home use" and "production" are mutually exclusive.


----------



## wtomlinson

Quote:


> Originally Posted by *chmodlabs*
> 
> What do you mean by "satellite equipment" ? Just curios of it's current use to you lol.
> - chmodlabs


http://www.marcorsyscom.usmc.mil/sites/cins/Fact%20Books/NSC/SATCOM/2010%20SWAN%20Fact%20Sheet.pdf

Quote:


> Originally Posted by *Imrac*
> 
> Sat internet I am guessing. Looks like a military setup somewhere in the middle east. With all that sand around and 100+ degree server room, I am sure you have some fun maintenance stories.
> BTW that post you quotes was almost 2 years old.


Yes, you are correct. Iraq. And yes, I have some interesting maintenance stories. I tried to suppress them though.









Quote:


> Originally Posted by *u3b3rg33k*
> 
> Someone aught to introduce you ^^ to an air compressor...


Please see the post below. This video wasn't from me ( I was in the Marines), but it's the same stuff over there. 



Quote:


> Originally Posted by *Shiveron*
> 
> Military satellite connection.
> Someone ought to introduce you to a desert based server room. There's literally no point in air compressor cleaning it, because it will all settle back down on the equipment in less than 15 minutes.


This +1000. Using an air compressor over there more than once a week would be the equivalent of trying to shovel snow in a blizzard.


----------



## swat565

What type of containers are those servers held in, wtomlinson? Were they retrofitted/ghetto-rigged Pelican cases, or purpose-built for rack-mount hardware?


----------



## parityboy

Quote:


> Originally Posted by *ALpHaMoNk*
> 
> There is a user that uses this case at home...not from these forums though....his build is amazing!


That's *treadstone* over on hardforum.com, and yes his build is amazing.


----------



## ALpHaMoNk

Quote:


> Originally Posted by *parityboy*
> 
> That's *treadstone* over on hardforum.com, and yes his build is amazing.


That's exactly who I was referring to. Many great builds over on [H].


----------



## wtomlinson

Quote:


> Originally Posted by *swat565*
> 
> What type of containers are those servers held in wtomlinson? Were the retrofitted/ghetto rigged pelican cases or purpose build for rack-mount hardware?


Are you talking about the Proliants in the green cases? We got those like that, so I assume they were retrofitted at some point (probably by a government contractor like General Dynamics). They had lids to go to the front and back. 98% of our IT equipment was COTS (commercial off the shelf) gear, so it pretty much has to be forced into cases like those. So in essence, it was built for the stuff, but sort of ghetto rigged, if you want to look at it that way.


----------



## swat565

Quote:


> Originally Posted by *wtomlinson*
> 
> Are you talking about the Proliants in the green cases? We got those like that, so I assume they were retrofitted at some point (probably by a government contractor like General Dynamics). They had lids to go to the front and back. 98% of our IT equipment was COTS (commercial off the shelf) gear, so it pretty much has to be forced into cases like those. So in essence, it was built for the stuff, but sort of ghetto rigged, if you want to look at it that way.


Yeah, just looking at what you guys did to make 'em portable like that. I'm looking for a cheap way to move servers around to different locations in a portable case.


----------



## Theloudtrout

Quote:


> Originally Posted by *Boyboyd*
> 
> It's ok, but the legs aren't the right width apart anymore, so you need to move them out by 14mm which is hard to do with the materials it's made of, lol.


Isn't that just because you are using server rails ?


----------



## Boyboyd

Yeah it fits perfectly if you just put some screws in the front of it, but I wouldn't. The legs are hollow.


----------



## Theloudtrout

Quote:


> Originally Posted by *Boyboyd*
> 
> Yeah it fits perfectly if you just put some screws in the front of it, but I wouldn't. The legs are hollow.


Ah, that's a shame. I just ordered a Lack coffee table for this exact purpose. Looks like I'll be shoving wood down the centers.

Thanks for the info man.


----------



## Boyboyd

No problem. 2"x2" wood works if you sand it a little.


----------



## wtomlinson

Quote:


> Originally Posted by *swat565*
> 
> Yeah just looking at what you guys did to make em portable like that. Looking at cheap way to move servers around to different locations in a portable case.


Yea it definitely made moving things easier. We had UPS, switches, routers, servers, kvms... basically anything that was rackable.


----------



## ugotd8

Craigslist FTW !



Whee... got me a proper chassis


----------



## Plooto




----------



## Irisservice

Quote:


> Originally Posted by *ugotd8*
> 
> Craigslist FTW !
> 
> Whee... got me a proper chassis


Nice! How much?


----------



## ugotd8

Quote:


> Originally Posted by *Irisservice*
> 
> Nice How much..


Not an epic deal, but > 300, < 400.


----------



## Boyboyd

Quote:


> Originally Posted by *ugotd8*
> 
> Craigslist FTW !
> 
> Whee... got me a proper chassis


That looks like it has redundant PSUs installed. Does it? Great find anyway. You going to put it on rails?


----------



## ramicio

Between $300 and $400 isn't an epic deal?


----------



## ugotd8

Quote:


> Originally Posted by *Boyboyd*
> 
> That looks like it has redundant PSUs installed. Does it? Great find anyway. You going to put it on rails?


No, just the one power supply and no rails.
Quote:


> Originally Posted by *ramicio*
> 
> Between $300 and $400 isn't an epic deal?


Well, poor choice of words on my part I suppose. I sure am happy with it tho.









Anyone know the pinout on the Supermicro front panel connector cable?


----------



## overclocker23578

HP XW6400 coming some time this week, £200 on ebay

2x E5335 2.0GHz Quads
4GB DDR2 FBDIMM, will be upgraded to 8 or 12

Will be used as a render slave


----------



## ugotd8

Quote:


> Originally Posted by *ugotd8*
> 
> No, just the one power supply and no rails.
> Well, poor choice of words on my part I suppose. I sure am happy with it tho.
> 
> 
> 
> 
> 
> 
> 
> 
> Anyone know the pinout on the supermicro front panel connector cable ?


nm, Found it.


----------



## ramicio

Just look at any [somewhat recent] Supermicro motherboard manual and it will tell you the pinout. Their motherboards and cases are made to work together and to be plug and play.


----------



## cdoublejj

Quote:


> Originally Posted by *tycoonbob*
> 
> 
> 
> Spoiler: Hyper-V Host 01


What case is that?


----------



## tycoonbob

Quote:


> Originally Posted by *cdoublejj*
> 
> What case is that?


Rosewill RSV-L400 ($89.99)

It's a cheapo, but it's a great case if you are on a budget.


----------



## Imrac

Quote:


> Originally Posted by *tycoonbob*
> 
> Rosewill RSV-L400 ($89.99)


Oh man... I think I just found the new case for my server build. Could you do me a favor and see how much room is in there for a CPU cooler? I just purchased a Dark Knight CPU cooler for my ESXi server and would love to shove it in that case.


----------



## tycoonbob

Quote:


> Originally Posted by *Imrac*
> 
> Oh man.. I think I just found my new case for my server build. Could you do me a favor and see how much room is in there for a CPU cooler? I just purchased a Dark Night CPU cooler for my esxi server and would love to shove it in that case.


Well, that cooler is 120mm tall, which is about 4.72 inches. The case itself is actually 4.25 rack units tall (yes, it's a little over 4U, lol). 4U is 7 inches, and measuring from the bottom of the case to the top it's right under 7 inches (accounting for the thickness of the case). From the CPU socket to the top there is about 5.75 inches for a cooler, leaving .25-.5 inches of clearance from the top of the case. That cooler (height-wise) should fit without a problem.


Spoiler: Case Measurement Photos



This is from the bottom of the case (below motherboard).


This is from the top of the motherboard.




Be sure to click the photos for a larger image. Let me know if this answers things for you!
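The clearance math above is easy to double-check with a throwaway unit-conversion script. The 120mm cooler height and ~5.75" socket-to-lid measurement come from the post; everything else is just arithmetic:

```python
MM_PER_INCH = 25.4

def cooler_fits(cooler_height_mm, socket_to_lid_in):
    """Convert the cooler height to inches and compare it against the
    measured socket-to-lid distance. Returns (fits, clearance_in)."""
    cooler_in = cooler_height_mm / MM_PER_INCH
    clearance_in = socket_to_lid_in - cooler_in
    return clearance_in > 0, clearance_in

# Numbers from the post: 120mm cooler, ~5.75" from CPU socket to case top.
fits, clearance = cooler_fits(120, 5.75)
print(fits, round(clearance, 2))  # True 1.03
```

By this rough check there's about an inch to spare, which agrees with the conclusion that the cooler fits.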


----------



## ugotd8

Wow, looks like you could fit a 4-in-3 or 5-in-3 enclosure on the right side there, under the switch panel in the front. If so, that would give you 13 drives in a sub-$100 case. Nice find!


----------



## Doomtomb

Too much talk, not enough server pics.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *mbreitba*
> 
> I'm a 10% co-owner of this company and an employee, does that make it partially my equipment?
> 
> http://nosupportlinuxhosting.com/images/NSLH_DC_Pic.jpg
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Starting in the middle (2nd visible rack)with the Promise iSCSI arrays
> 
> Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10
> 
> Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10
> 
> Promise m610i - 8TB RAW capacity - ~4TB formatted in RAID10
> 
> Bladecenter - right to left
> 
> Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
> 
> Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
> 
> Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
> 
> Dual Xeon 5420's w/32GB RAM + mirrored 250GB SATA HDD + 20Gbit InfiniBand
> 
> Blank
> 
> Blank
> 
> Blank
> 
> Pic is blank - have added single Xeon 5506 w/ 2GB RAM and mirrored 250 SATA HDD - this is a control system for our InfiniBand network
> 
> Dual Xeon 5620's w/ 48GB RAM + mirrored X25-V SSD + 20Gbit InfiniBand
> 
> Dual Xeon 5520's w/ 48GB RAM + 3 32GB 10krpm SAS HDD + 20Gbit InfiniBand
> 
> 16 port KVM
> 
> 8 port KVM + 17" LCD/Keyboard/touchpad
> 
> Promise Vtrak M500i - 6TB RAW - ~ 5TB formatted in RAID5 - backup volumes
> 
> Promise Vtrak M300i - 3.8TB RAW - ~ 1.9TB formatted in RAID 10
> 
> 6x - Tyan Transport - Dual Opteron 270's w/ 4GB RAM - dedicated hosting solution for one of our customers
> 
> Not seen second rack bottom - Dual APC 3000VA Rack mount UPS's & TrippLite 4500VA UPS
> 
> Third rack (to the right)
> 
> Very top - Dell Poweredge 350 - P3 850 w/ 512MB RAM - Firewall for our network - runs pfsense
> 
> First box - ZFSBuild.com project box - Xeon 5504, 12GB RAM, 2x Intel X25V (boot) 2x Intel X25-E (Write cache 32GB) 2x Intel X25-MG2 (160GB read cache) 20x Western Digital RE3 1TB drives. Dual port Mellanox Infinihost III EX 20Gbit Infiniband card.
> 
> Promise M610I - 16TB RAW - ~8TB formatted capacity RAID 10
> 
> Spare bladecenter
> 
> 2x PowerWare 9025 5000VA 208 Volt UPS's
> 
> First rack - mostly unseen
> 
> Dell PowerEdge 350 - P3 850 - 512MB RAM - Load Balancer for SpamAssassin filtering
> 
> 6x mix of Tyan and Supermicro systems - Dual opteron varying speed - 4GB RAM - Ubuntu systems running SpamAssassin and ClamAV for virus filtering
> 
> Dell Poweredge 2540 or something like that - dual P3 1133's 1GB RAM - used to be MSSQL server, now just runs WhatsUpGold
> 
> Another Poweredge - similar specs connected to powervault 220S - Tape backup library used for some critical backups - needs to be upgraded because it doesn't have enough capacity without rotating tapes constantly.
> 
> Somewhere in this rack exists an Areca SATA->SCSI unit w/ 12 500GB SATA HDD's that we use as a backup staging system. All backups go to this system, then are spooled off to tape.
> 
> Also not pictured - APC 1200VA and APC 2200VA UPS


Please tell me you run Folding@Home on that puppy.


----------



## tycoonbob

Quote:


> Originally Posted by *ugotd8*
> 
> Wow, looks like you could fit a 4x3 or 5x3 enclosure on the right side there under the switch panel in the front. If so, that would give you 13 drives in a sub $100 case. Nice find !


Yup, it has 3 5.25" bays there, which can easily hold another 5 drives with THIS.


----------



## ugotd8

Quote:


> Originally Posted by *tycoonbob*
> 
> Yup, it has 3 5.25" bays there, which can easily hold another 5 drives with THIS.


Hehe, or in the ultimate irony, you could use this.


----------



## pvt.joker

Guess I need to get a pic of my server and seedbox up... so here we go:











Top one in the 2U case is my Atom-based seedbox.
Bottom one in the 4U case is my Phenom II file server, ~7TB currently, but I'm looking to swap out 4-5 drives for 2TB each.


----------



## airbozo

Quote:


> Originally Posted by *ugotd8*
> 
> No, just the one power supply and no rails.
> Well, poor choice of words on my part I suppose. I sure am happy with it tho.
> 
> 
> 
> 
> 
> 
> 
> 
> Anyone know the pinout on the supermicro front panel connector cable ?


If you want a set of rails, I have a set you can have for the price of shipping... I'm just going to put them in the scrap bin...

EDIT: Thought I had pics of my servers. Will get them up this weekend.


----------



## cdoublejj

Quote:


> Originally Posted by *tycoonbob*
> 
> Rosewill RSV-L400 ($89.99)
> It's a cheapo, but it's a great case if you are on a budget.


Cheapo? 90 bucks? Gosh dang, rob me blind! I certainly like the look; it has a good amount of fans and no shortage of standoff holes, so I'm gonna guess it supports almost every mobo under the sun. Maybe that includes proprietary Dell or HP mobos?


----------



## tycoonbob

Quote:


> Originally Posted by *cdoublejj*
> 
> Cheapo? 90 bucks? Gosh dang, rob me blind! I certainly like the look; it has a good amount of fans and no shortage of standoff holes, so I'm gonna guess it supports almost every mobo under the sun. Maybe that includes proprietary Dell or HP mobos?


Yeah, I would guess that it could handle just about any motherboard you could buy online...maybe proprietary ones as well. For $90, it has 5 120mm fans and 2 80mm fans...very quiet, cool, and spacious. The only thing that could improve it is if it had hot swap bays, which would obviously jack the price up a bit to add the backplanes...so for $90 it's awesome the way it is. Can even be had on sale for $70 at times.


----------



## ugotd8

Quote:


> Originally Posted by *airbozo*
> 
> If you want a set of rails, I have a set you can have for the price of shipping... I'm just going to put them in the scrap bin...
> EDIT: Thought I had pics of my servers. Will get them up this weekend.


Hey thanks for the generous offer, but I will have to pass. No use for rails in my basement, I'll never have a rack in there.


----------



## cdoublejj

Quote:


> Originally Posted by *tycoonbob*
> 
> Yeah, I would guess that it could handle just about any motherboard you could buy online...maybe proprietary ones as well. For $90, it has 5 120mm fans and 2 80mm fans...very quiet, cool, and spacious. The only thing that could improve it is if it had hot swap bays, which would obviously jack the price up a bit to add the backplanes...so for $90 it's awesome the way it is. Can even be had on sale for $70 at times.


I'd just need a place to put it. I probably shouldn't even be looking, because I have a number of Asus Elan Vital T10s at my disposal. I should probably post some pictures of my recycled craptastic server I pieced together from an old Asus PC-DL build.


----------



## culexor

Parts are on the way! I'm building a new server for my gaming community & (small) hosting business. It will be used primarily for TF2 & Minecraft. I'll post pics once the parts arrive and I start assembling everything. Once it's all put together and I do some testing, it's off to Chicago!



http://imgur.com/UpABG




http://imgur.com/qrnWi


----------



## rust0r

I'll keep it short, see my mini-build log/final roundup here: 26 Drive 27TB Tower Server - Running Disparity

*26 Drive, 27TB Tower Server:
*





Currently running 16 drives, room for 10 more (total of 26). I wanted something with little cable mess, and ease of access without going the Norco route

Thanks for all who have posted their systems and given me crazy ideas


----------



## culexor

Just put this together. Waiting on another 16GB of memory before I ship it off to the datacenter.



http://imgur.com/sVuSA


E3-1230V2
Supermicro X9SCM-F Motherboard
16GB DDR3 1600 ECC (soon to be 32GB)
500GB WD RE4
128GB Samsung 830
Supermicro SC512F-350B Chassis

The integrated IPMI feature is amazing!


----------



## ZFedora

Quote:


> Originally Posted by *rust0r*
> 
> I'll keep it short, see my mini-build log/final roundup here: 26 Drive 27TB Tower Server - Running Disparity
> *26 Drive, 27TB Tower Server:
> *
> 
> 
> Currently running 16 drives, room for 10 more (total of 26). I wanted something with little cable mess, and ease of access without going the Norco route
> Thanks for all who have posted their systems and given me crazy ideas


Looks awesome! Nice job!


----------



## rust0r

Quote:


> Originally Posted by *ZFedora*
> 
> Looks awesome! Nice job!


Thank you !


----------



## tiro_uspsss

Quote:


> Originally Posted by *culexor*
> 
> Just put this together. Waiting on another 16GB of memory before I ship it off to the datacenter.
> 
> 
> http://imgur.com/sVuSA
> 
> E3-1230V2
> Supermicro X9SCM-F Motherboard
> 16GB DDR3 1600 ECC (soon to be 32GB)
> 500GB WD RE4
> 128GB Samsung 830
> Supermicro SC512F-350B Chassis
> The integrated IPMI feature is amazing!


interesting rig!








what is it going to be used for?


----------



## culexor

I run a gaming community and rent a few servers out to some friends. Mostly Source (TF2, CSS, etc.) server hosting as well as Minecraft.

I decided not to go with a RAID array because to me it wasn't worth the extra few hundred bucks for a proper RAID card and additional drives. I'm sending an additional drive to the data center so they have one on hand in case of a failure. I can afford a few hours of downtime, so the extra cost to reduce the downtime (by buying a RAID card + drives) isn't really worth it to me.


----------



## ZFedora

Quote:


> Originally Posted by *culexor*
> 
> I run a gaming community and rent a few servers out to some friends. Mostly Source (TF2, CSS, etc.) server hosting as well as Minecraft.
> I decided not to go with a RAID array because to me it wasn't worth the extra few hundred bucks for a proper RAID card and additional drives. I'm sending an additional drive to the data center so they have one on hand in case of a failure. I can afford a few hours of downtime, so the extra cost to reduce the downtime (by buying a RAID card + drives) isn't really worth it to me.


I thought I heard in another thread you decided on Chicago, what datacenter did you plan on?


----------



## culexor

Quote:


> Originally Posted by *ZFedora*
> 
> I thought I heard in another thread you decided on Chicago, what datacenter did you plan on?


Continuum Data Centers


----------



## ZFedora

Quote:


> Originally Posted by *culexor*
> 
> Continuum Data Centers


Ah, nice datacenter, toured it before, about half an hour out of the city.


----------



## cr4z

I don't have any pictures, but here are the specs for my system at home.

Case-Antec 900
MB-Gigabyte GA MA790X
CPU-AMD Phenom 9950
RAM-8GB G Skill
PSU-Corsair 650w
OS HDD-WD 320GB w/ ESXi (datastore)
Storage HDD-Various WD 1TB-3TB (total usable is 5TB, with the 3TB as parity)
I have a 1300va UPS and it will last about 30 mins.

I originally built the system with a GTX GPU for gaming, back in 2008. I realized earlier this year that I just use it to browse the internet, and my system was a bit overkill. So I decided to virtualize: I installed ESXi, and my virtual machines are:

Ubuntu for my file processing
WinXP for mySQL DB
UnRAID for my file server

I use the file server mainly for storage of movies and tv shows being shared to my multiple XBMC devices through out the house.

As far as what I play with at work... that's a bit too much to say. I'll just tease with this: the data room cost ~180M.
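The UPS runtime figure above (a 1300VA unit carrying the box for ~30 minutes) can be sanity-checked with a back-of-the-envelope model. Runtime is set by battery energy and load, not the VA rating alone; the watt-hour and load numbers below are illustrative guesses, not specs for any particular UPS:

```python
def runtime_minutes(battery_wh, load_w, inverter_eff=0.85):
    """Rough UPS runtime: battery energy delivered through the inverter,
    divided by the load. Real discharge curves are nonlinear at high load,
    so treat this as an order-of-magnitude estimate only."""
    if load_w <= 0:
        raise ValueError("load must be positive")
    return battery_wh * inverter_eff / load_w * 60

# Hypothetical numbers: ~100Wh of battery feeding a ~170W server.
print(round(runtime_minutes(100, 170)))  # 30
```

So a ~30-minute figure is plausible for a modest battery pack under a light virtualization load.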


----------



## tiro_uspsss

Quote:


> Originally Posted by *cr4z*
> 
> I have a 1300va UPS and it will last about 30 mins.


I like info like this


----------



## Blindsay

OS: Server 2008 R2
Case: Rosewill RSV-L4000
CPU: i5 3570k
Motherboard: Asrock Z77 Pro3
Cooling: Stock
Memory: 2x 4GB Microcenter branded DDR3 1333
PSU: Antec High Current Gamer 900W
OS HDD: 120GB Agility 3
Storage HDD(s): 10x 2TB WD Green Power on a Dell SAS6 card (well, 8 on the card, 2 on the mobo)
Server Manufacturer: me









What you use it for (Print server, backups, file server, etc.): Used to run Exchange 2010 and serves as my domain controller and my file server.

You can also see my roommates minecraft server on the lower shelf


----------



## tycoonbob

Quote:


> Originally Posted by *Blindsay*
> 
> OS: Server 2008 R2
> Case: Rosewill RSV-L4000
> CPU: i5 3570k
> Motherboard: Asrock Z77 Pro3
> Cooling: Stock
> Memory: 2x 4GB Microcenter branded DDR3 1333
> PSU: Antec High Current Gamer 900W
> OS HDD: 120GB Agility 3
> Storage HDD(s): 10x 2TB WD Green Power on a Dell SAS6 card
> Server Manufacturer: me
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What you use it for (Print server, backups, file server, etc.): Used to run Exchange 2010 and serves as my domain controller and my file server.
> You can also see my roommates minecraft server on the lower shelf


Looks great! I love that chassis, and I actually have 3 of them myself! One idea comes to mind, that I have been thinking of doing. I'm sure you don't turn your server off/on too often, but seeing how you probably have to pull the case around to reach the power button, I have thought about taking something like a case intrusion switch, and mounting it in one of the PCI bay fillers, and wiring it into the motherboard switch, that way you can access it from back there. I want to do it myself, but haven't got around to it. Just an idea, but yeah...box looks great.


----------



## Blindsay

Quote:


> Originally Posted by *tycoonbob*
> 
> Looks great! I love that chassis, and I actually have 3 of them myself! One idea comes to mind, that I have been thinking of doing. I'm sure you don't turn your server off/on too often, but seeing how you probably have to pull the case around to reach the power button, I have thought about taking something like a case intrusion switch, and mounting it in one of the PCI bay fillers, and wiring it into the motherboard switch, that way you can access it from back there. I want to do it myself, but haven't got around to it. Just an idea, but yeah...box looks great.


I never shut it off unless I lose power, but my arm is just long enough that I can stand on something and reach the front of it to turn it on.


----------



## ugotd8

In case you missed it, these are an insane deal. I ordered a couple, should be here tomorrow.


----------



## tycoonbob

Quote:


> Originally Posted by *ugotd8*
> 
> In case you missed it, these are an insane deal. I ordered a couple, should be here tomorrow.


If I had $300 laying around, I would buy one just to have it. Dual Socket F with 16 DIMMs? Can load up dual hexacores, and up to 128GB of RAM...and use it for virtualization, and build a virtual SAN box to access all 24 drives. That chassis alone is worth more than double the price tag! Not to mention the redundant 900w PSU, and the SATA Controllers.


----------



## pvt.joker

holy monkeys! I sooo want that.. And here I was thinking I couldn't afford to go to a 24bay chassis..
might have to call (as much as I hate calling places, soo much easier to just order online) and see if there are any left for that price.. hard to pass up.


----------



## dushan24

That's amazing, I assume this is in North America?

Any info regarding shipping?

PS: To ship to Australia would probably not be worth it, but still...


----------



## ugotd8

Quote:


> Originally Posted by *dushan24*
> 
> That's amazing, I assume this is in North America?
> Any info regarding shipping?
> PS: To ship to Australia would probably not be worth it, but still...


My shipping was $65, but that was from Utah to Colorado. No idea about international. Paid on Tuesday, got them today.


----------



## ugotd8

Fun action shot, more to come later:



The top cover looks scratched/beat but that's just the plastic covering, it's like new underneath.


----------



## ramicio

I wish this new line of cases with the platinum PSUs were out there used already. I can find plenty of deals on the 800s and 900s, but they are never 80+ rated, and they usually have plain gross SATA backplanes or expander backplanes.


----------



## swat565

Thought I should post up my setup; mind the messy wiring, as it's not finalized.

Top to bottom:
-Catalyst 3550 switch
-Cisco 3725 with 48-channel voice module + 12.4 Advanced Enterprise IOS
-Catalyst 4006 chassis with 48-port 10/100 module
-Dell SC1435
2x dual-core Opterons @ 1.7GHz
16GB of RAM
150GB VelociRaptor drive
500GB WD Black drive


----------



## ugotd8

My new server:

Chassis: SC846TQ (24 bay SATA hotswap)
Motherboard: SuperMicro H8DME-2
IPMI: Supermicro AOC-SIM1U(+) and AOC USB2RJ45 (just awesome with IPMIView: remote power control, remote console, and sensor view)
CPU: (2) Opteron 2212 HE
RAM: 16GB DDR2-667 ECC
Controllers: (3) AOC-SAT2-MV8
Disks: (2) 250GB SATA for OS, HD204UI & WD20EARX for zfs pool
OS: OpenIndiana 151_A5

Took these pics before the HW reconfig and right after unboxing...


----------



## jlcpcnc

OS: Windows 7 Ultimate 64Bit / Virtual Box Windows Server 2008 R2 Standard
Case: Zalman Z9
CPU: Xeon 3220
Motherboard: Intel Corporation DP43BF (XU1)
Cooling: 220CFM Delta AC Fan - 2 120MM
Memory: 6GB
PSU: 1200w
OS HDD: Raid 5 3x250GB
Storage HDD(s): 1TB
Server Manufacturer: I made this!
What you use it for - file, Printer, Media, Web, Email, Backup
Temps, loudness, etc. --- It's a server; it's loud and hot

Any additional software that you use -
I love Love LOVE - EASEUS Todo Pro backup software!
WAMP
MyDefrag
Advanced SystemCare Pro


----------



## CSCoder4ever

OS: Windows Home Server 2011 x64
Case: Powerup executive mid-tower case
CPU: Intel Core i3 2100
Motherboard: ECS h61h2-m2
Cooling: Stock + Cougar 12CM 1200RPM silent fan
Memory: Patriot Memory gamer 2 8GB (2 x 4GB) PC3 10666 @ 1333
PSU: Ultra xfinity 500W
Storage HDD(s): Seagate Barracuda 1TB 7200 RPM (it will grow, also used as the boot)
Server Manufacturer: I assembled it.
Uses: File, media, print, local web, local game, and backup

It is the quietest system in my rig collection.

The specs don't seem like the best, but it was built rather recently, so give it time.




thought I'd share it.


----------



## bobfig

UPDATE!!!!

I just got my PERC 5i installed, and with 3x 1TB HDDs it's running nicely. Just have to wait for it to finish initializing and all will be good.


----------



## johnvosh

Here's my IBM server that I picked up for $50; I just had to add a hard drive and an OS.

OS: Windows XP Pro w/ SP3
Case: IBM eServer Xseries 225
CPU: Dual Xeon 3.06GHz dual core (Prestonia)
Motherboard: IBM
Cooling: stock
Memory: 2.5GB DDR ECC
PSU: Dual redundant 350 watt
OS HDD: 2x IBM 73.4GB USCSI 10K RPM, raid
Storage HDD(s): None yet
Server Manufacturer: IBM
Other: DDS4 SCSI Tape drive, floppy drive, dual port 100/1000 network card
Video: Radeon 7500 64MB AGP

What you use it for: Nothing yet. Still need to set it up (add more HDDs), but it will eventually be used for music/video streaming and file backup.
This is also the loudest system I have!

Pics


----------



## dklic6

OS: FreeNAS
Case: NZXT Source 210
CPU: 2500k
Motherboard: Asus p8z68-v pro
Cooling: Zalman Max and random fans
Memory: 8gb Kingston hyper-x 1600
PSU: Corsair 650w
OS HDD: USB drive
Storage HDD(s): Not pictured: 2x 1TB Seagate Barracuda, 1x 500GB WD
Server Manufacturer: Self
Without drives









I use it as a NAS right now. I'm new to the game and I'm trying to set up SSH/FTP and streaming video/music.

I know this thing is overkill, but the mobo and processor are on eBay right now. I plan on replacing them with this: *http://www.newegg.com/Product/Product.aspx?Item=N82E16813128452*. I really would like some advice on a decent controller for RAID 5. As I said, I'm very new to the server game.

I'm starting a custom case and I'm planning on doing a build log for it. Here's a tease:


----------



## tycoonbob

Quote:


> Originally Posted by *dklic6*
> 
> OS: FreeNAS
> Case: NZXT Source 210
> CPU: 2500k
> Motherboard: Asus p8z68-v pro
> Cooling: Zalman Max and random fans
> Memory: 8gb Kingston hyper-x 1600
> PSU: Corsair 650w
> OS HDD: (If you have one) USB drive
> Storage HDD(s): Not pictured: 2x 1tb seagate baracuda 1x WD 500gb
> Server Manufacturer: Self
> Without drives
> 
> 
> 
> 
> 
> 
> 
> 
> I use it for an NAS right now. I'm new to the game and I'm trying to set up SSH/FTP and Streaming Video/Music
> I know this thing is overkill, but the mobo and processor are on ebay right now. I plan on replacing them with *http://www.newegg.com/Product/Product.aspx?Item=N82E16813128452* that. I really would like some advice on a decent controller for RAID5. As I said, I'm very new to the server game.
> I'm starting a custom case and I'm planning on doing a build log for it. Here's a tease:


If you are looking for a real hardware RAID controller for RAID 5 (also consider RAID 6 as well as RAID 10), check out LSI cards. Most hardware controllers are based on LSI chips, which is why I recommend LSI MegaRAID controllers. If you are looking for the best bang for the buck, I would definitely say that would be a Dell PERC 6/i.
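For anyone weighing that RAID 5 / 6 / 10 suggestion, the usable-capacity trade-off is simple arithmetic. These are the standard textbook formulas, not anything specific to a particular controller, and they ignore filesystem/formatting overhead:

```python
def usable_capacity(level, n_drives, drive_size):
    """Usable array capacity for common RAID levels, ignoring formatting
    overhead. drive_size can be in any unit (GB, TB, ...)."""
    if level == 5 and n_drives >= 3:        # one drive's worth of parity
        return (n_drives - 1) * drive_size
    if level == 6 and n_drives >= 4:        # two drives' worth of parity
        return (n_drives - 2) * drive_size
    if level == 10 and n_drives % 2 == 0:   # striped mirrors, half the raw space
        return n_drives // 2 * drive_size
    raise ValueError("unsupported level/drive-count combination")

# e.g. six 1TB drives:
for lvl in (5, 6, 10):
    print(lvl, usable_capacity(lvl, 6, 1))  # 5, 4, 3 TB respectively
```

RAID 10 gives up the most space but rebuilds fastest; RAID 6 survives two drive failures at the cost of a second parity drive.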


----------



## bobfig

Well, I don't know how well the Atom 525 would do with video streaming, especially if you need to transcode for the end device. Just get a nice mATX board and a good dual core like the Pentium G620, and keep your motherboard. I know my first setup ran an Atom 230 for file sharing and it sucked; it couldn't even stream a video correctly to my PS3, and that was a single core with HT. At the moment I have my old E8400 in my server and it's been awesome. I've also been messing with a PERC 5i that I got off of eBay for $55 and it's OK so far. Speeds are alright, and it's definitely worth what I paid for it.


----------



## Xylene

delete me


----------



## dklic6

I hear what you're saying about the lack of power from that Atom. Are there any ITX-sized boards that would do well for what I'm looking for? I was also looking at going this route:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819116399
The combo would save me a couple bones.

Those Dell controllers are right at my price point on eBay. Does anybody else have suggestions? Much appreciated so far.


----------



## bobfig

That's the CPU I linked. If you lived near a Microcenter in Houston or Dallas, you could get the Intel Pentium G630 for $50 instead of $68 on Newegg.









As for an ITX motherboard, I looked and the only ones I could recommend are $80+. Here is one I like: link

Now, if you could fit an mATX (which I like better because of more slots for cards, like a TV card so it can also be a DVR), it could save you a couple of bucks.

For the RAID card, take a look over the Dell PERC thread: http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
If you need any pics, I'll get some of mine for you.


----------



## dklic6

Thanks for the advice, bob. I live about 15 mins away from a Microcenter. I'm not too worried about a tuner card yet. The server case that is half done is constructed for an ITX mobo, but I could make adjustments at this point.


----------



## AMD SLI guru

Howdy everybody! I wanted to post my "servers" and add to the community's knowledge of what is out there. Please keep in mind that I was using spare hardware and none of these systems are complete by my standards. They might need a new case, RAM, more hard drives, a SAS expander with an LSI card, power supplies, fans, and CPU coolers. Once I get home, I'll update this post with pictures.

Server #1)
Processor: AMD Phenom 1090T
Motherboard: ASUS ROG Crosshair IV Formula
RAM: 8 gigs of Corsair DDR3 1600
Case: Rosewill RSV-L4000
RAID Cage: iStar 3-to-5
Hard Drives: 4x 2TB Western Digital Green, 750GB Seagate, 120GB 7200RPM 2.5" boot drive
Video Card: BFG 9800GT
OS: FreeNAS 8.2
Uses: Backups of the movie collection I own. I use a program called JRiver to stream the content to my HTPC.

Server #2)
Processor: AMD Phenom 955
Motherboard: something... I have no idea
RAM: 4 gigs DDR2 800
Case: iStar... no idea on the model #, but it's solid steel
Hard Drives: 5x 1TB mix-and-matched drives, 120GB 7200RPM 2.5" boot drive
OS: FreeNAS 8.2
Uses: Backups of the TV show collection I own. I use JRiver to stream it to my HTPC.



Server #3) Shipped and on its way to me right meow
SATA I/O Board: Supermicro SAS826TQ
Motherboard: Supermicro X7SBE
Processor: Intel Core 2 Duo E8400 3.0GHz
RAM: Kingston 4GB
(1) Supermicro SAT2-MV8
(2) 800W PSUs (redundant)
Once I get my hands on this, we shall see what this bad boy can do. Just have to replace the PSU and fans, but for 200 bucks it's hard to beat.


----------



## DizzlePro

Hey Guys, I got a quick question to ask, is this server any good?

http://www.ebay.co.uk/itm/ws/eBayISAPI.dll?ViewItem&item=251093589524&clk_rvr_id=380902992552#ht_4386wt_1163


----------



## Norse

Quote:


> Originally Posted by *DizzlePro*
> 
> Hey Guys, I got a quick question to ask, is this server any good?
> http://www.ebay.co.uk/itm/ws/eBayISAPI.dll?ViewItem&item=251093589524&clk_rvr_id=380902992552#ht_4386wt_1163


It's not bad for the price, but it will be really noisy. The main thing going for it is the huge number of DIMM slots, if you were going to do memory-intensive tasks, i.e. virtualisation.


----------



## ndoggfromhell

Be cautious using the Dell PERC cards... the older 5 and 6 series won't see 3TB drives. I ended up getting an Intel (rebranded LSI) card for the project server I'm building.


----------



## dklic6

Quote:


> Originally Posted by *ndoggfromhell*
> 
> Be cautious using the Dell PERC cards... the older 5 and 6 series won't see 3Tb drives. I ended up getting an Intel (rebranded LSI) in my project server I'm building.


Crud. I just bought one on ebay for $50 and the pass through cables for $10. Three 1TB drives should be plenty for my needs anyway though.


----------



## AMD SLI guru

I actually got one of those $189 Supermicro 2U servers. First and foremost, it's LOUD. The 3x 80mm fans are hella loud and pump out HUGE amounts of CFM; it was actually shocking how much air those fans pushed through the case. Even after replacing the fans, the dual server PSUs are loud. Totally not for a living-room environment.

I ended up taking out the PSUs and running a normal 600W consumer PSU instead. Keep in mind the PSU won't fit inside the case, so I have it supported on the back side of the case. Considering I'm not going to be moving this rig, I couldn't care less about how the PSU is set up.

As it stands, it's quiet/silent and works perfectly for my FreeNAS 8.2 uses. The only thing I would do after all the modding is get more RAM; it only ships with 4 gigs and I would prefer 8.


----------



## utnorris

I will get some PICS up soon, but for now here are the specs:

(Description)

OS: Windows 7 x64
Case: CaseLabs M8
CPU: AMD 960t (unlocked to 6 cores)
Motherboard: Asus Sabertooth 990FX
Cooling: CPU is water cooled with a Koolance 370 CPU block, the rest are air cooled
Memory: 16Gb of Gskill Sniper low voltage 1600Mhz ram
PSU: Corsair AX750
OS HDD: OCZ Vertex Agility 3 60Gb
Storage HDD(s): 12 drives for a total of 22TB (FlexRAID), plus 4x 1TB drives in RAID 5 for the important stuff
Server Manufacturer: Self Built

What you use it for (Print server, backups, file server, etc.) - Print server, Streaming, recording TV, video conversions
Temps, loudness, etc. - Louder than I want due to the HD rack fans
Any additional software that you use - FlexRAID
Pics - Coming soon


----------



## ChRoNo16

Yo SLI guru, where did you pick up that server?


----------



## AMD SLI guru

Picked it up by contacting them via this AVS Forum thread: http://www.avsforum.com/t/1412640/are-you-looking-for-a-less-expensive-norco-4220-4224-alternative

It was 271 bucks with shipping and the rails from Utah to me. I really like it so far. I currently have 6x 1TB and 6x 2TB drives loaded in mine and it runs like a champ for FreeNAS.

Like I said, it's hella loud stock, so you're going to need to mod it to get things nice and quiet.


----------



## dushan24

Quote:


> Originally Posted by *AMD SLI guru*
> 
> picked it up by contacting them at the http://www.avsforum.com/t/1412640/are-you-looking-for-a-less-expensive-norco-4220-4224-alternative
> 
> it was 271 bucks with shipping and the rails from Utah to me. I really like it so far. I currently have 6x 1tb and 6x2tb drive loaded in mine and it runs like a champ for freenas.
> 
> like i said, it's hella loud stock so you're going to need to mod it to get things nice and quiet.


How did you contact, by emailing [email protected] ?


----------



## Shmerrick

I would post pics of our server room, but then I would go to jail...


----------



## ugotd8

Quote:


> Originally Posted by *dushan24*
> 
> How did you contact, by emailing [email protected] ?


I got the quickest response from [email protected]


----------



## AMD SLI guru

I actually just called them. It's easier to speak to somebody about what they're offering than to email and wait.

They accept PayPal or CC over the phone, and shipping was fast.

*Warning: you also need the tiny screws for mounting the hard drives in the drive bays. They are not included and have to be bought separately. You cannot mount the hard drives in the slots WITHOUT these screws.*

What I did was save all the screws I collected from removing the PSUs, and I had just enough to mount the drives in the bays. I'm not joking about these screws: the normal screws that come with standard 3.5-inch drives WILL NOT WORK. The case has steel guards that the standard screw heads won't clear, so the drive can't slide back into the case.

I'll take photos tonight and show you how. I'll also make a new post here showing how to mod the case and install the drives.


----------



## Plooto

So I want a server and a router. I'll probably use the server as a file/media server and a print server, plus anything else I can think of. Could I also use the same machine as a router, or would I need a separate one? I'd just buy a consumer router, but I'd rather do this, and I need loads of Ethernet ports. I don't want to spend more than £300. Noise isn't too much of a problem, but nothing leaf-blower-like.


----------



## ugotd8

Quote:


> Originally Posted by *AMD SLI guru*
> 
> I actually just called them. It's easier to just speak to somebody about what they were offering than to email and wait.
> 
> They accept paypal or CC over the phone and had fast shipping.
> 
> *Warning: you need to also get the tiny screws for mounting the hard drives in the hard drive bays. These are not included and will need to be bought as well. You can not mount the hard drives in the slots WITHOUT these screws.*
> 
> What I did was I saved all the screws I collected from taking out the PSU's and I had enough to mount the drives in the bays. I'm not joking about these kinds of screws. normal screws that come with normal 3.5 inch drives WILL NOT WORK. The case has steel guards that prevent the drive from sliding back in the case.
> 
> I'll take photo's tonight and show you how. I'll also make a new post here to show how to mod and install these drives and such.


Good tip on the HDD screws. I got 3 bags of 100 from Provantage for $4 a bag IIRC. Looked it up, cheaper than that actually:

Supermicro accessory screw bag, 100 pcs, for 24x hot-swap 3.5" HDD trays (part *MCP-410-00005-0N*): 3 bags @ $3.36 = $10.08 subtotal.


----------



## Gunfire

Here's mine









Mainly for streaming music

P4-640
2GB DDR
750GB PATA

It's quiet, low-wattage, and I got it for free at work, so it's good enough for me.


----------



## Blindsay

Quote:


> Originally Posted by *Gunfire*
> 
> Here's mine
> 
> 
> 
> 
> 
> 
> 
> 
> Mainly for streaming music
> P4-640
> 2Gb DDR
> 750GB PATA
> It's quiet, low wattage, and I got it for free at work so it's good enough for me


I'm not sure you can put "low wattage" and "P4" together


----------



## Gunfire

Quote:


> Originally Posted by *Blindsay*
> 
> im not sure that you can put low wattage and p4 together


Well for the price I paid, I'm not complaining


----------



## AMD SLI guru

Quote:


> Originally Posted by *ugotd8*
> 
> Good tip on the HD screws. I got 3 bags of 100 from provantage for $4 a bag IIRC. Looked it up, cheaper than that actually:
> Supermicro Accessory Screw BAG100PCS Label For 24x Hotswap 3.5HDD Tray Retail
> *MCP-410-00005-0N* 3 3.36 10.08
> Subtotal: 10.08


Good to know! :-D Yeah, I didn't have any on hand, and I didn't really look around online for them.


----------



## Jtvd78

I guess, as the starter of this thread, I should post my server too









This is my main server; I use it for backups and as a file server. WHS claims it has 5.13TB of usable space.
In a few days I'm installing a new NIC (HP NC360T).

OS: WHSv1
Case: Antec 300
CPU: AMD Athlon II X2 240E
Motherboard: MSI 760GM-E51
Memory: 4GB
PSU: Antec Neo Eco w00W
OS HDD (If you have one): WD Green 500GB
Storage HDD(s): 2x Samsung F4 2TB, 1x WD 640GB, 1x WD 500GB IDE
Server Manufacturer (Ex: Dell, HP, You?): Me


































This is my router build. I'm going to run pfSense for firewall, VPN, routing, etc. It's not done yet; I'm waiting for the NIC (HP NC360T).

OS: pfSense
Case: Mini-Box M350
CPU: Atom N2800
Motherboard: Intel D2800MT
Memory: 4GB
PSU: Some Laptop PSU
OS HDD (If you have one): Crucial m4 32GB mSata
Storage HDD(s): N/A
Server Manufacturer (Ex: Dell, HP, You?): Me


----------



## CSCoder4ever

Quote:


> Originally Posted by *Jtvd78*
> 
> I guess, as the starter of this thread, I should post my server too
> 
> 
> 
> 
> 
> 
> 
> 
> This is my main server, I use it for backups and as a file server. WHS claims it has 5.13TB of usable space.
> In a few days, I'm installing a new NIC (HP NC360T)
> OS: WHSv1
> Case: Antec 300
> CPU: AMD Athlon II X2 240E
> Motherboard: MSI 760GM-E51
> Memory: 4GB
> PSU: Antec Neo Eco w00W
> OS HDD (If you have one): WD Green 500GB
> Storage HDD(s): 2x Samsung F4 2TB, 1x WD 640GB, 1x WD 500GB IDE
> Server Manufacturer (Ex: Dell, HP, You?): Me
> 
> 
> Spoiler: images
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This is my router build. Im going to run pfSense for Firewall, VPN, Routing, etc. Its not done right now. I'm waiting for the NIC (HP NC360T)
> OS: pfSense
> Case: Mini-Box M350
> CPU: Atom N2800
> Motherboard: Intel D2800MT
> Memory: 4GB
> PSU: Some Laptop PSU
> OS HDD (If you have one): Crucial m4 32GB mSata
> Storage HDD(s): N/A
> Server Manufacturer (Ex: Dell, HP, You?): Me
> 
> 
> Spoiler: images


Nice servers! Out of curiosity, in what circumstances would it be a good idea to build a custom router?


----------



## Jtvd78

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Nice servers! In what circumstance would it be a good idea to make a custom router out of curiosity?


I wanted a firewall for my whole network, so I decided to build this guy. It sits right after the modem and right before the switch, so any traffic that comes in is filtered (for the most part).
With a custom router you can set custom rules and even add extra functionality. Another big plus is being able to VPN into my network: when I'm on vacation, I get an encrypted, firewalled tunnel to the internet, and I can securely access my home computer and server.

I'm also debating putting a web cache on it, so the router would cache the most-requested content and speed up page loads.

There are several advantages to it. You can obviously do more with it; those are just the features I'm using.

EDIT: I'm also going to VPN my phone to my home network, so my phone's internet is firewalled too, and I can stream videos and music from my server.
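For a concrete picture of what a box like this does: pfSense builds its ruleset from the web GUI, but underneath it's FreeBSD's pf. A hand-written sketch of roughly equivalent rules might look like the fragment below (the interface names and the OpenVPN port are illustrative assumptions, not details from the build above):

```
# Interface macros -- em0/em1 are illustrative names
wan = "em0"
lan = "em1"

# NAT everything from the LAN out the WAN address
nat on $wan from $lan:network to any -> ($wan)

# Default deny inbound, allow outbound with state
block in all
pass out all keep state

# Let LAN clients reach the internet
pass in on $lan from $lan:network to any keep state

# Allow OpenVPN in from the road (UDP 1194, the default port)
pass in on $wan proto udp from any to ($wan) port 1194 keep state
```

On stock FreeBSD you'd load this with `pfctl -f /etc/pf.conf`; in pfSense you'd express the same thing through the Firewall rules pages and the OpenVPN wizard instead.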


----------



## D-EJ915

The nice thing about having a dedicated firewall/router is that it lessens the load on your wireless, etc., compared to an all-in-one. I haven't had to reboot my pfSense system since I installed it.

Uhh, for a picture... here's my current VM host. Going from 20GB to 32GB of RAM in a hot minute.


----------



## ramicio

Why are server room pictures so taboo? It's as ridiculous as people who censor out their license plates on the pictures they post of their cars.


----------



## u3b3rg33k

This might be a good place to ask - I want to build a new box to run untangle on, to replace my dual 2.8 xeon (socket 604) box. I'm leaning towards a DP atom box, as the 130W+ average draw of the xeon rig is getting old.


----------



## Plooto

^^ Same here but my post has been overtaken loads,


----------



## AMD SLI guru

Quote:


> Originally Posted by *u3b3rg33k*
> 
> This might be a good place to ask - I want to build a new box to run untangle on, to replace my dual 2.8 xeon (socket 604) box. I'm leaning towards a DP atom box, as the 130W+ average draw of the xeon rig is getting old.


That's actually what my router is. I'm running a dual-core Atom with Untangle and I hardly hit 5% CPU use. It runs 24/7 and doesn't make a sound.

Totally worth it, IMO. I found it on sale on Newegg for $230 for the Supermicro 1U dual-core Atom box; just add some RAM and an HDD and you're good to go.


----------



## Manyak

I wanted to downsize my server from the CM Stacker I was using, but was having trouble finding a decent mid-tower for the job. Enter the NZXT Tempest 410. Why is it perfect?

- 12 HDD bays, 8 of them hot-swappable
- Good airflow for all HDDs
- Built in dust filters
- Side fan to keep RAID and network cards cool
- Fits a 120mm tower heatsink
- Lots of fans at low speeds = QUIET!

As far as the hardware itself:
- Xeon E3110
- 8GB RAM
- 4x 1TB Caviar Black
- 8x 2TB Caviar Green
- LSI 9280-16i4e RAID card
- Intel Quad Port PT Gigabit NIC
- Seasonic X650 Gold PSU


----------



## Jtvd78

Quote:


> Originally Posted by *u3b3rg33k*
> 
> This might be a good place to ask - I want to build a new box to run untangle on, to replace my dual 2.8 xeon (socket 604) box. I'm leaning towards a DP atom box, as the 130W+ average draw of the xeon rig is getting old.


Check out my post. I just built an Atom router for pfSense, and it should be perfect. I'm going to get it running today after I install the NIC.

My Post


----------



## Jtvd78

Quote:


> Originally Posted by *Manyak*
> 
> I wanted to downsize my server from the CM Stacker I was using, but was having trouble finding a decent mid-tower for the job. Enter the NZXT Tempest 410. Why is it perfect?
> - 12 HDD bays, 8 of them hot-swappable
> - Good airflow for all HDDs
> - Built in dust filters
> - Side fan to keep RAID and network cards cool
> - Fits a 120mm tower heatsink
> - Lots of fans at low speeds = QUIET!
> As far as the hardware itself:
> - Xeon E3110
> - 8GB RAM
> - 4x 1TB Caviar Black
> - 8x 2TB Caviar Green
> - LSI 9280-16i4e RAID card
> - Intel Quad Port PT Gigabit NIC
> - Seasonic X650 Gold PSU
> 
> [Snip]


I have to say, I am very jealous.


----------



## ugotd8

Very nice, but where are the hot-swap bays? I don't see a SATA backplane on this model.


----------



## Jtvd78

The NC360Ts just came in








I'll post pictures of the updated servers soon.

EDIT: The installation:

Problem: How do you get this...









To fit into this....









Solution: a little bit of sanding solved my little problem
















It fits perfectly!









Here is the card installed:









And here is a shot from the back:









Currently installing pfSense


----------



## Manyak

Quote:


> Originally Posted by *ugotd8*
> 
> Very nice, but where are the hotswap bays ? I don't see a sata backplane on this model.


Well, it's not _exactly_ hot-swap, but with enough cable slack you can just slide the HDDs out the front and swap them out all the same.


----------



## ugotd8

Quote:


> Originally Posted by *Manyak*
> 
> Well it's not _exactly_ hotswap, but with enough cable slack you can just slide the HDDs out the front and swap them out all the same.


Ah OK, wasn't sure if some pics were missing. Nice server; I just retired my Stacker too. Sort of bittersweet, I used that baby for years.


----------



## Manyak

Quote:


> Originally Posted by *ugotd8*
> 
> Ah ok, wasn't sure if there was some pics missing. Nice server, I just retired my stacker too. Sort of bittersweet, used that baby for years.


LOL, it's funny you say that, because I actually hesitated at the curb when throwing mine out


----------



## Carlitos714

Description / Usage: Backups, seedbox, file server, folding sometimes, and HTPC

OS: Windows 7 Ultimate 64 bit

Case: Lian Li PC-A70B

CPU: i7-920

Motherboard: EVGA X58 Classified E760

Memory: 3 x 2GB Corsair Dominator 1600MHz

PSU: Corsair HX1000

OS HDD (If you have one): Seagate 250GB

Storage HDD(s): Dell PERC 5/i w/ 6x Samsung F4 HD204UI 2TB (RAID 5)

Server Manufacturer (Ex: Dell, HP, You?): That would be ME
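A quick aside on sizing arrays like the one above: RAID 5 dedicates one drive's worth of capacity to parity, so usable space is (n − 1) × drive size; the 6x 2TB set here yields roughly 10TB before filesystem overhead. A minimal sketch (the helper function is mine, just for illustration):

```python
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 5 array: one drive's worth of space goes to parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_tb

# 6x 2TB drives, as in the build above
print(raid5_usable_tb(6, 2.0))  # prints 10.0
```

Real-world numbers come in a bit lower once you account for the decimal-vs-binary gigabyte difference and filesystem metadata.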


----------



## Doomtomb

Quote:


> Originally Posted by *Manyak*
> 
> I wanted to downsize my server from the CM Stacker I was using, but was having trouble finding a decent mid-tower for the job. Enter the NZXT Tempest 410. Why is it perfect?
> 
> - 12 HDD bays, 8 of them hot-swappable
> - Good airflow for all HDDs
> - Built in dust filters
> - Side fan to keep RAID and network cards cool
> - Fits a 120mm tower heatsink
> - Lots of fans at low speeds = QUIET!
> 
> As far as the hardware itself:
> - Xeon E3110
> - 8GB RAM
> - 4x 1TB Caviar Black
> - 8x 2TB Caviar Green
> - LSI 9280-16i4e RAID card
> - Intel Quad Port PT Gigabit NIC
> - Seasonic X650 Gold PSU


Well done. A proper server but how on earth do you people afford the RAID cards? That card is $810 on Newegg. Which servers at work do you steal em from?


----------



## FiX

I would post my home server, but it's nothing special (a Dell Optiplex GX620 with some extra RAM and a new HDD). Just give me a while and I'll get pics of my new home server up.
Specs of the new home server (the servers I'll be colocating in New York, probably with ColoCrossing, will have the same build):
CPU: Xeon E3-1230v2
RAM: 32GB ECC RAM (probably Kingston)
Motherboard: SUPERMICRO MBD-X9SCL-O
HDDs: 2x Samsung F3 1TB in RAID1 (probably software RAID)
HDDs (2): Crucial M4 128GB
Case: Rack-mountable 2U (undecided)
PSU: Undecided
It'll be used as a general-purpose home server, and a few more at work will probably be general-purpose game servers / Xen HVM or PV boxes (I own a hosting company).


----------



## GrimNights

Quote:


> Originally Posted by *Jtvd78*
> 
> I guess, as the starter of this thread, I should post my server too
> 
> 
> 
> 
> 
> 
> 
> 
> This is my main server, I use it for backups and as a file server. WHS claims it has 5.13TB of usable space.
> In a few days, I'm installing a new NIC (HP NC360T)
> OS: WHSv1
> Case: Antec 300
> CPU: AMD Athlon II X2 240E
> Motherboard: MSI 760GM-E51
> Memory: 4GB
> PSU: Antec Neo Eco w00W
> OS HDD (If you have one): WD Green 500GB
> Storage HDD(s): 2x Samsung F4 2TB, 1x WD 640GB, 1x WD 500GB IDE
> Server Manufacturer (Ex: Dell, HP, You?): Me
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This is my router build. Im going to run pfSense for Firewall, VPN, Routing, etc. Its not done right now. I'm waiting for the NIC (HP NC360T)
> OS: pfSense
> Case: Mini-Box M350
> CPU: Atom N2800
> Motherboard: Intel D2800MT
> Memory: 4GB
> PSU: Some Laptop PSU
> OS HDD (If you have one): Crucial m4 32GB mSata
> Storage HDD(s): N/A
> Server Manufacturer (Ex: Dell, HP, You?): Me
> 
> 
> Spoiler: Warning: Spoiler!


You, sir, have posted what I wanted to do with a home server, and then some.


----------



## Manyak

Quote:


> Originally Posted by *Doomtomb*
> 
> Well done. A proper server but how on earth do you people afford the RAID cards? That card is $810 on Newegg. Which servers at work do you steal em from?


lol, nowhere, I bought it myself


----------



## Oedipus

Just got this bad boy in at the office. It's not mine, seeing as how it was $20k plus another $4k for RDS and server CALs, but it's still interesting.



















Dell Poweredge T620

Dual Xeon E5-2665s
64GB DDR3 1600
4 x 2.5" 300GB 15K SAS RAID 10 via PERC H710
Server 2008 R2 Enterprise

This will be replacing an old Citrix box for ~35 remote users.


----------



## george_orm

Quote:


> Originally Posted by *Doomtomb*
> 
> Well done. A proper server but how on earth do you people afford the RAID cards? That card is $810 on Newegg. Which servers at work do you steal em from?


You can pick them up for 100 bucks or less on eBay.


----------



## Manyak

Quote:


> Originally Posted by *george_orm*
> 
> U can pick them up for 100 bucks or less on eBay,


Link one then. A 9280 16i4e.


----------



## parityboy

^^ lol, I think he was referring to a Dell PERC 5/i or PERC 6/i. They're the only $100 RAID cards you'll see on eBay.


----------



## blooder11181

Quote:


> Originally Posted by *Oedipus*
> 
> Just got this bad boy in at the office. It's not mine, seeing as how it was $20k plus another $4k for RDS and server CALs, but it's still interesting.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Dell Poweredge T620
> Dual Xeon E5-2665s
> 64GB DDR3 1600
> 4 x 2.5" 300GB 15K SAS RAID 10 via PERC H710
> Server 2008 R2 Enterprise
> This will be replacing an old Citrix box for ~35 remote users.


Can you add 2 GPUs in that and run Metro 2033 or Battlefield 3 on ultra settings?


----------



## parityboy

*@Oedipus*

Is it me, or are the PSU(s) missing from that photo?


----------



## ZFedora

Quote:


> Originally Posted by *Oedipus*
> 
> Just got this bad boy in at the office. It's not mine, seeing as how it was $20k plus another $4k for RDS and server CALs, but it's still interesting.
> ---
> Dell Poweredge T620
> Dual Xeon E5-2665s
> 64GB DDR3 1600
> 4 x 2.5" 300GB 15K SAS RAID 10 via PERC H710
> Server 2008 R2 Enterprise
> This will be replacing an old Citrix box for ~35 remote users.


Oh god, if a server could be sexy, the new Poweredges would be it


----------



## Oedipus

Quote:


> Originally Posted by *parityboy*
> 
> *@Oedipus*
> Is it me or is/are the PSU(s) missing from that photo?


They're behind the motherboard.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Oedipus*
> 
> Just got this bad boy in at the office. It's not mine, seeing as how it was $20k plus another $4k for RDS and server CALs, but it's still interesting.


please don't tell me that the 1st CPU blows its hot air into the second??


----------



## tycoonbob

Quote:


> Originally Posted by *tiro_uspsss*
> 
> please don't tell me that the 1st CPU blows its hot air into the second??


And no exhaust fans... strange. I'm sure that fan bar has some loud, powerful fans, but still... slap some Deltas in the exhaust bay!


----------



## Buzzin92

VPSes (two of them)

*New York*
2x Xeon E3-1270 V2 (Turbo up to 3.9GHz)
32GB ECC Buffered RAM
50GB RAID 0 SSD Allocated storage
Gigabit networking/internet

*Germany*
1x Xeon E3-1230 (Turbo up to 3.5GHz)
32GB ECC Buffered RAM
75GB RAID 0 SSD Allocated storage
Gigabit networking/internet

And my latest home server:
Pentium G620
8GB DDR3 1600MHz CL8
32GB SSD OS
4 x 1500GB Storage drives

The image isn't of the actual server; this is a client's build. But it looks pretty much identical (same case, PSU, motherboard, etc.).


----------



## Oedipus

Quote:


> Originally Posted by *tiro_uspsss*
> 
> please don't tell me that the 1st CPU blows its hot air into the second??


More or less. There's a big shroud that guides the air from the fanbar through to the back of the case.


----------



## parityboy

Quote:


> Originally Posted by *Oedipus*
> 
> They're behind the motherboard.


Ahhh, on my PE1900 they're behind some shrouding at the "end" of the motherboard, going from front to back.


----------



## ikem

Kind of a flashy server/workstation.

2x E5-2650
16GB non-ECC RAM
Crucial M4 64GB
RAID 5, 3x 1TB
Single 2TB Seagate


----------



## Gunfire

Quote:


> Originally Posted by *ikem*
> 
> kinda a flashy server/workstation.
> 2x E5-2650s
> 16gb non ECC Ram
> Crucial M4 64gb
> Raid 5 3x1tb
> SIngle 2tb Seagate.
> 
> 
> Spoiler: Warning: Spoiler!


That is beautiful









What is it used for??


----------



## Boyboyd

OS: Server 08
Case: Poweredge 2850
CPU: 2x dual-core Xeons
Motherboard: Poweredge
Memory: 2GB ECC DDR2
PSU: Redundant
OS HDD (If you have one): 2 x 74GB 15k SCSI drives (raid 1)
Storage HDD(s): 2 x 146GB 15k SCSI drives (raid 1)
Server Manufacturer (Dell, HP, You?):

Roles: domain controller, file server, remote access.

It has some impressive hardware; shame it's kind of old. The storage card is amazing, I think it's a PERC 4. This is a hardware RAID 0 array I made when I was messing about with it.























































That's not its final resting place; it's going to work when I'm done configuring it. Also planning to add more RAM before it becomes impossible to find.


----------



## parityboy

*@BoyBoyd*

I knew you were in the UK from the look of your radiators.







I then checked your location to confirm.


----------



## Blindsay

Quote:


> Originally Posted by *Boyboyd*
> 
> OS: Server 08
> Case: Poweredge 2850
> CPU: 2 x Dual core xeons
> Motherboard: Poweredge
> Memory: 2GB ECC DDR2
> PSU: Redundant
> OS HDD (If you have one): 2 x 74GB 15k SCSI drives (raid 1)
> Storage HDD(s): 2 x 146GB 15k SCSI drives (raid 1)
> Server Manufacturer (Dell, HP, You?):
> Roles: domain controller, file server, remote access,
> It has some impressive hardware, shame it's kind of old. The storage card is amazing. I think it's a PERC 4. This is a hardware raid 0 array i made when I was messing about with it.
> 
> That's not it's final resting place. It's going to work when i'm done configuring it. Also planning to add more RAM before it becomes impossible to find.


Pretty nice 4K result there for a non-SSD


----------



## Boyboyd

Yep, classic design.
Quote:


> Originally Posted by *Blindsay*
> 
> pretty nice 4k result there for a non ssd


Thanks. That was my first thought too. The RAID card with 512MB of RAM really helps, though. Unfortunately it's going to waste, seeing as it'll only host 2 databases; the rest will be photos and Office documents, plus the Active Directory stuff.


----------



## utnorris

Quote:


> Originally Posted by *parityboy*
> 
> ^^ lol, I think he was referring to a Dell PERC 5/i or PERC 6/i. They're the only $100 RAID cards you'll see on eBay.


Actually, you can find PERC 5/i cards for under $50 and PERC 6/i for under $100, but the cards people really like are the M1015 (LSI 9240) at ~$120, or the Dell H200 (LSI 9240) at ~$100 or less. If you want cache and a BBU, then the M5014 or M5015 at ~$200. Any of the rebadged LSI 9240 cards can be paired with the Intel RES2SV240 expander to get 20 ports; the expander runs around $250, so with an M1015 you'd be looking at $350-$400 total.

http://forums.servethehome.com/showthread.php?148-Intel-RES2SV240-24-port-SAS2-Expander-Wiki

This is a great place to find out what OEM cards are what:
http://forums.servethehome.com/forumdisplay.php?19-RAID-Controllers-and-Host-Bus-Adapters


----------



## dushan24

Quote:


> Originally Posted by *Oedipus*
> 
> More or less. There's a big shroud that guides the air from the fanbar through to the back of the case.


It's called a thermal zone.


----------



## ikem

Quote:


> Originally Posted by *Gunfire*
> 
> That is beautiful
> 
> 
> 
> 
> 
> 
> 
> 
> What is it used for??


Folding, currently.

File/backup server, numerous game servers.

For LANs and gaming. It really isn't that big.


----------



## reezin14

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *ikem*
> 
> kinda a flashy server/workstation.
> 2x E5-2650s
> 16gb non ECC Ram
> Crucial M4 64gb
> Raid 5 3x1tb
> SIngle 2tb Seagate.






@ikem, that is a nice server you've got there. I guess we kind of fall into the same category (flashy), except you have way more power. BTW, wasn't *42-174* in CPU mag?









OS: WHS 2011
Mobo: Asus E-350
Mem: 8GB
Storage: 3x 3TB (FlexRAID), 2x 2TB (external), 1x 160GB (OS)
Uses: backups, file server, remote access, media streaming. Still need to get a modular PSU & mod the HDD cage so I can fit up to 10 hard drives.


----------



## ikem

Quote:


> Originally Posted by *reezin14*
> 
> @ikem that is a nice server you got there.I guess we kinda fall into the same catagory(flashy),except you have way more power. BTW wasn't *42-174* in cpu mag?
> 
> 
> 
> 
> 
> 
> 
> 
> OS: WHS11
> Mobo: Asus E-350
> Mem: 8GB's
> Storage: x3 3TB(flexraid), x2 2TB(external)
> Uses: backups,file server,remote access,media streaming.Still need to a modular psu & mod the hdd cage so that I can have up to 10 hard-drives.


yep it was in the August Edition.


----------



## CSCoder4ever

Quote:


> Originally Posted by *dushan24*
> 
> It's called a thermal zone.


Quote:


> Originally Posted by *dushan24*
> 
> Uncalled for...
> I'm just trying to share my knowledge with other people who share my interests...


I appreciate it! I am pretty much new to the server world, and wanting to learn more.


----------



## Volvo

Got a couple of servers.

Print server:

- Intel Pentium Dual Core E2180
- Zotac G41-ITX WiFi
- Sapphire HD6450 LP Passive
- 2x 2GB DDR2 800MHz KVR
- 80GB WD 3.5" 7,200RPM HDD
- InWin BP671 Silver
- FSP250-60GHT TFX PSU
- Foxconn Orb Cooler w/ Copper Core + Delta AFB0912VH Fan
- Windows 7 Professional

Storage server/HTPC

- Intel Core 2 Duo E8400
- Asus P5G41T-M LX
- Reference AMD Radeon HD5770
- 2x 4GB DDR3 1333MHz G.Skill NT
- 80GB WD 3.5" 7,200RPM HDD
- InWin BK644 Black
- FSP450-60GHS(85) SFX PSU
- Scythe Samurai Z Rev. B + Delta AFC0912DE Fan
- 2x 60x25mm Nidec exhaust fans
- Server 2008 R2


----------



## Boyboyd




----------



## dushan24

Quote:


> Originally Posted by *Boyboyd*


Nice, my PowerEdge 1950s are about that loud.

Though if you want seriously loud, we took an old 2P Opteron box out of the datacentre recently; the thing is deafening.


----------



## CSCoder4ever

Quote:


> Originally Posted by *Boyboyd*


That is impressively loud. I actually wonder if my server's doing it right; it's dead silent, even when booting.


----------



## Oedipus

9th-gen and earlier PowerEdges are loud. The 12th-gen servers are particularly quiet.


----------



## tiro_uspsss

*edited*


----------



## Boyboyd

Quote:


> Originally Posted by *dushan24*
> 
> Nice, my PowerEdge 1950's are about that loud.
> Though if you want seriously loud, we took an old 2P Opteron box out of the datacentre recently, thing is deafening.


Quote:


> Originally Posted by *CSCoder4ever*
> 
> That is impressively loud. I actually wonder if my server's doing it right, it's dead silent, even when booting


Quote:


> Originally Posted by *Oedipus*
> 
> Gen 9 and earlier poweredge's are loud. The 12th gen servers are particularly quiet.


It's only that loud on a cold boot, which hardly ever happens when it's in use. I think it's to try and clear out the dust (though there isn't any, even after 6+ years of 24/7 use).

TBH, noise isn't an issue, as it's on a different floor to the offices. Given the choice I'd still rather have a current-gen server, but I work with what I've got.


----------



## CSCoder4ever

Quote:


> Originally Posted by *Boyboyd*
> 
> It's only that loud on a cold boot. Which hardly ever happens when it's in use. I think it's to try and clear out the dust (though there isn't any, even after 6+ years of 24/7 use).
> TBH noise isn't an issue, as it's on a different floor to the offices. But given the choice i'd still rather have a current gen server. But i work with what i've got.


Are there advantages to manufactured servers? I'd think that even with servers, custom is the way to go; at least then you can optimize for silence if you so desired.


----------



## jibesh

Quote:


> Originally Posted by *CSCoder4ever*
> 
> is there advantages to manufactured servers? I'd think that even with servers custom is the way to go, At least then you can optimize the silence factor if you so desired.


Typically, servers are housed in a server room or a datacenter so noise is not a factor. Companies buy manufactured servers because of their warranty, reliability / compatibility of components and support.


----------



## TheBirdman74

Not trollz but why do gamers and overclockers need da Servers for?

I'm serious, what use is it to you?


----------



## CSCoder4ever

Quote:


> Originally Posted by *TheBirdman74*
> 
> Not trollz but why do gamers and overclockers need da Servers for?
> I'm serious, what use is it to you?


I find networking a very interesting subject, personally, so I see a server as a learning opportunity, as well as a personal NAS that can do more than just be a NAS.


----------



## Boyboyd

Quote:


> Originally Posted by *TheBirdman74*
> 
> Not trollz but why do gamers and overclockers need da Servers for?
> I'm serious, what use is it to you?


At work. And at home I have a 7TB NAS.


----------



## JQuantum

Quote:


> Originally Posted by *TheBirdman74*
> 
> Not trollz but why do gamers and overclockers need da Servers for?
> I'm serious, what use is it to you?


I don't think mine is necessarily a server, but some people could find virtualization particularly interesting; for me it's a way to mess with different environments.

Main purpose: gaming desktop - sig rig 2012...
Current main VM: dev web server - Ubuntu Server 12.04
Other VMs: Ubuntu Desktop 12.04, to run Eclipse and mess with... I prefer Fedora though
- Win 7: just a Windows safe room >.> I don't really plan to use it, but if a program doesn't run on 8 it's an option.
- CentOS 6 minimal - more of a crash machine matching my VPS of similar specs.

Reboots to OS X 10.8.1.

I really want more RAM, but 16GB seems to be OK for now. My CPU struggles sometimes while running a VM and playing a game, though :S At least I'm guessing it's the CPU, not the GPU, causing the FPS drops.


----------



## dushan24

Quote:


> Originally Posted by *Boyboyd*
> 
> It's only that loud on a cold boot. Which hardly ever happens when it's in use. I think it's to try and clear out the dust (though there isn't any, even after 6+ years of 24/7 use).


I was under the impression it was because the various temperature sensors need a short time to give accurate readings, so the fans run at full speed just to be safe until the temperatures are confirmed and they can spin down.

Though thermistors (if that's what's used) would give a near-instantaneous reading (from the change in potential difference), so I'm not sure...


----------



## ramicio

I think it ramps the fans through their full range to calibrate the speed control. How loud are the 3U Supermicro servers when they run PWM with actual Supermicro boards?


----------



## Boyboyd

Quote:


> Originally Posted by *dushan24*
> 
> I was under the impression it was because the various temperature sensors required a small amount of time to give accurate readings
> And the fans ran at full speed just to be safe, until the temperature was confirmed and they could spin down?
> Though thermistors (should they be used) would give an instantaneous reading (due to the fluctuating potential difference), so I'm not sure...


That's also a possibility. I was just going on how my old fat PS3 operated. I've never actually put our server under enough load to raise the fan speed noticeably. But I have seen that when you remove a hot-swap fan, the rest increase to 100% to compensate, even for a few minutes after you put it back in.


----------



## tycoonbob

Quote:


> Originally Posted by *TheBirdman74*
> 
> Not trollz but why do gamers and overclockers need da Servers for?
> I'm serious, what use is it to you?


http://www.overclock.net/t/1303032/uses-for-your-home-server


----------



## swat565

Quote:


> Originally Posted by *TheBirdman74*
> 
> Not trollz but why do gamers and overclockers need da Servers for?
> I'm serious, what use is it to you?


I think what you're getting confused about is thinking all of us just play games and overclock. Many members on this forum are in the IT field; having servers to work with on your own time is more valuable to your career than anything that can be measured with dollar signs.


----------



## Oedipus

Quote:


> Originally Posted by *Boyboyd*
> 
> It's only that loud on a cold boot. Which hardly ever happens when it's in use. I think it's to try and clear out the dust (though there isn't any, even after 6+ years of 24/7 use).
> TBH noise isn't an issue, as it's on a different floor to the offices. But given the choice i'd still rather have a current gen server. But i work with what i've got.


Yes, I know. I have dealt with my fair share of PowerEdges. Relatively speaking, the newer-generation servers are much quieter than the older ones (including the 1950, though I think the 2600-2800 series is the most rambunctious), be it at cold boot or at idle/load. The spinup noise on the newer models is much more muffled, more of a "whoosh" than the jet-engine sound the older servers have.

At idle, a T620 (for example) is barely louder than an Optiplex 7010.


----------



## dushan24

Quote:


> Originally Posted by *swat565*
> 
> I think what your getting confused about is thinking all of us just play games and overclock. Many members on this forum are in the IT field, having servers to work with on your own time is more valuable to your career than what can be measured with dollar signs


And it's fun


----------



## Boyboyd

Quote:


> Originally Posted by *Oedipus*
> 
> Yes I know. I have dealt with my fair share of Poweredges. Relatively speaking, the newer generation servers are much quieter than the older ones (including the 1950, though I think the 2600-2800 series is the most rambunctious), be it at cold boot or at idle/load. The spinup noise on the newer models is much more muffled, more of a "whoosh" sound than the jet engine sound that the older servers have.
> At idle, a T620 (for example) is barely louder than an Optiplex 7010.


I really do want a newer server. But before that I want a fully gigabit network.


----------



## Norse

Quote:


> Originally Posted by *Boyboyd*
> 
> I really do want a newer server. But before that I want a fully gigabit network.


It doesn't cost much to upgrade to gigabit.


----------



## Boyboyd

Quote:


> Originally Posted by *Norse*
> 
> it doesnt cost much to upgrade to gigabit


Yeah, hardly anything. But it's the planning, plus having to re-wire the building. I'm hoping it will be done by the end of the year though. It's just not important enough to most of the staff. It 'works' now, but none of them seem to care that it would be much, much better on gigabit.


----------



## tycoonbob

Quote:


> Originally Posted by *Boyboyd*
> 
> Yeah hardly anything. But it's the planning + having to re-wire the building. I'm hoping it will be done by the end of the year though. It's just not important enough to most of the staff. It 'works' now, but none of them seem to care that it would be much much better on gigabit.


Typical. Why fix what isn't broken, and why invest in IT since it isn't revenue-generating? I deal with that crap at all my clients... they have no idea how much time and money could be saved by upgrades such as going gigabit.


----------



## Manyak

Quote:


> Originally Posted by *tycoonbob*
> 
> Typical. Why fix what isn't broken, and why invest in IT since they aren't revenue generating. Deal with that crap at all my clients...they have no idea how much time, and money could be saved by upgrades such as going gigabit.


Sooooo true.


----------



## Norse

Quote:


> Originally Posted by *Boyboyd*
> 
> Yeah hardly anything. But it's the planning + having to re-wire the building. I'm hoping it will be done by the end of the year though. It's just not important enough to most of the staff. It 'works' now, but none of them seem to care that it would be much much better on gigabit.


Yeah, that's true. Luckily, where I work moved offices before I needed to rewire half the old office to give us more ports. I'm running into the same "it works" attitude here when it comes to the ancient servers we use... with no high availability or redundancy.


----------



## swat565

Quote:


> Originally Posted by *dushan24*
> 
> And it's fun


Shhh, don't give away the secret!
If we call it "work" we get paid for it


----------



## CSCoder4ever

Quote:


> Originally Posted by *swat565*
> 
> Shhh, Don't give away the secret!
> If we call it "work" we get payed for it


You're not supposed to have fun at work?


----------



## Fir3Chi3f

Quote:


> Originally Posted by *Buzzin92*
> 
> VPS' (2 of them)
> *New York*
> 2x Xeon E3-1270 V2 (Turbo up to 3.9GHz)
> 32GB ECC Buffered RAM
> 50GB RAID 0 SSD Allocated storage
> Gigabit networking/internet
> *Germany*
> 1x Xeon E3-1230 (Turbo up to 3.5GHz)
> 32GB ECC Buffered RAM
> 75GB RAID 0 SSD Allocated storage
> Gigabit networking/internet
> And my latest home server:
> Pentium G620
> 8GB DDR3 1600MHz CL8
> 32GB SSD OS
> 4 x 1500GB Storage drives
> The image isn't the actual server, this is a clients build. But it looks pretty much identical (Same case, PSU, Motherboard etc)
> 
> 
> Spoiler: Warning: Spoiler!


VPSs are cheating









I had no clue you actually got your home server together tho! Looks good!


----------



## dushan24

I agree that VPSes are cheating 

IMO a VPS should only count if you have customized it in some special way or are leasing dedicated hardware.


----------



## Master__Shake

OS: Windows 7 64-bit Ultimate
Case: Norco 4020
CPU: Intel Core i3 2120
Motherboard: Sapphire Pure Black P67 Hydra
Memory: 8 Gb Patriot Viper DDR3 1600
PSU: OCZ ZT 750
OS HDD (If you have one): Corsair Force Series GT 120 gb
Storage HDD(s): 12x Seagate ST2000DM001 2TB drives in RAID 6
RAID Card: LSI 9260-4i
Expander: Intel RESCV240
Server Manufacturer (Ex: Dell, HP, You?): Me


----------



## suicidegybe

Here's my server:
OS: Windows Server 2008 R2
CPU: Intel X3430 Quad core
RAM: 16GB Kingston ECC Registered 1066MHz, upgrading to 32GB for RAM caching of the RAID array
Dual Ceton InfiniTV 4, 8 tuners
Adaptec 3405 running 6 TB RAID 10
240GB OCZ Agility 3 VM Drive for WHS 2011
120GB OCZ Agility 3 OS Boot Drive
Cooling: Corsair H70
HDD: WD Green 2TB EARS x2, Samsung 1TB, WD Black 640GB
I run Bitcasa for off-site backup of my 4TB+ media collection. I use Server 2008 to host WHS 2011 in Hyper-V, and I am also going to add Windows 7 Home Premium for Media Center extender support. I have eliminated all cable boxes from my house, except the one I get free, and I don't use that one. I use the Cetons to stream live TV throughout the house. Other than that, I am learning and adding new services all the time. I'm looking into surveillance DVR services.


----------



## hambone96

Here is my FreeNAS 7 server that I use to back up my laptop and stuff.

OS: FreeNAS 7
Case: Dell Dimension 2400
CPU: 2.8GHz / 1MB / 800FSB Socket 478 P4
Motherboard: Dell Dimension 3000
Memory: 2x 1GB Patriot DDR400
PSU: 250W Dell
OS HDD (If you have one): 2GB PNY Flash Drive
Storage HDD(s): 3x 400GB in software RAID 5
Server Manufacturer (Ex: Dell, HP, You?): Me and Dell.






Think it needs to be cleaned?


----------



## dushan24

I can see your COA haha.


----------



## herkalurk

My 2 servers



*Bottom, self made*
OS: CentOS 6.3 x86_64
Case: Ultra X-Blaster
CPU: Phenom x4 9500 2.2 GHz Quad core
Motherboard: Asus M3N78 Pro
Memory: 4x1 GB Corsair XMS 2 DDR2 800
PSU: Ultra 600W
OS HDD (If you have one): 300 GB WD Velociraptor
Storage HDD(s): 2x 1TB (Samsung, Hitachi)
Server Manufacturer (Ex: Dell, HP, You?): Me
Roles: Web server (mostly PHP stuff), hlstats host, TS3, central log storage, SABnzbd (etc.), my Linux playground

*TOP HP N40L*
OS: Windows Server 2008 R2
Case: HP
CPU: AMD Turion II Neo 1.5 GHz Dual Core
Motherboard: HP
Memory: 1X 2GB HP DDR3
PSU: Low Power HP
OS HDD (If you have one): HP Preinstalled 250 GB Seagate 7200 RPM Sata
Storage HDD(s): 3x 2TB Samsung in software RAID 5
Server Manufacturer (Ex: Dell, HP, You?): Mostly HP
Roles: Local Caching DNS, DHCP, Primary File server, email server (imap/smtp), my windows playground


----------



## joshd

Quote:


> Originally Posted by *herkalurk*
> 
> My 2 servers
> 
> *Bottom, self made*
> OS: CentOS 6.3 x86_64
> Case: Ultra X-Blaster
> CPU: Phenom x4 9500 2.2 GHz Quad core
> Motherboard: Asus M3N78 Pro
> Memory: 4x1 GB Corsair XMS 2 DDR2 800
> PSU: Ultra 600W
> OS HDD (If you have one): 300 GB WD Velociraptor
> Storage HDD(s): 2x1TB (samsung, hitatchi)
> Server Manufacturer (Ex: Dell, HP, You?): Me
> Roles: Webserver (mostly php stuff), hlstats host, TS3, central log storage, my linux playground
> 
> 
> 
> 
> 
> 
> 
> , SABnzbd( and etc )
> *TOP HP N40L*
> OS: Windows Server 2008 R2
> Case: HP
> CPU: AMD Turion II Neo 1.5 GHz Dual Core
> Motherboard: HP
> Memory: 1X 2GB HP DDR3
> PSU: Low Power HP
> OS HDD (If you have one): HP Preinstalled 250 GB Seagate 7200 RPM Sata
> Storage HDD(s): 3x 2TB Samsung in software RAID 5
> Server Manufacturer (Ex: Dell, HP, You?): Mostly HP
> Roles: Local Caching DNS, DHCP, Primary File server, email server (imap/smtp), my windows playground


Cool case on the bottom! Looks really server-ish


----------



## dushan24

Quote:


> Originally Posted by *joshd*
> 
> Cool case on the bottom! Looks really server-ish


Yeah, those little HP server cases are nice. There are a few other ones too (albeit slightly bigger); you can Google for them.

You can't buy them direct though; they come with parts inside :-(


----------



## ZFedora

Quote:


> Originally Posted by *dushan24*
> 
> Yeah, those little HP server cases are nice, there are a few other ones too (albeit slightly bigger), you can Google for them.
> You can't buy them direct though, they come with parts inside :-(


Here's a somewhat similar design: http://www.newegg.com/Product/Product.aspx?Item=N82E16811123173&name=Server-Chassis


----------



## Oedipus

hot swap bays make me wet


----------



## ZFedora

Quote:


> Originally Posted by *Oedipus*
> 
> hot swap bays make me wet


----------



## dushan24

Quote:


> Originally Posted by *ZFedora*


Meh, I can top that, give me 5 min to find the pic.


----------



## dushan24

I've blurred the service tag and other labels for obvious reasons...


----------



## dushan24

We also have three Dell R810s and a lot of other cool stuff


----------



## dushan24

Quote:


> Originally Posted by *ZFedora*
> 
> Here's a somewhat similar design: http://www.newegg.com/Product/Product.aspx?Item=N82E16811123173&name=Server-Chassis


Nice find, I like that one.

Probably going to get this for my next build (I was very impressed by this case)
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219040&name=Server-Chassis


----------



## hambone96

Quote:


> I can see your COA haha.


If you can read it, you can have it


----------



## ZFedora

Quote:


> Originally Posted by *dushan24*
> 
> Nice find, I like that one.
> Probably going to get this for my next build (I was very impressed by this case)
> http://www.newegg.com/Product/Product.aspx?Item=N82E16811219040&name=Server-Chassis


I've read online that Norco hot-swap bays are pretty shoddy, made of cheap plastic. It looks awesome though; I love their cases.


----------



## herkalurk

Quote:


> Originally Posted by *Oedipus*
> 
> hot swap bays make me wet


That is not a cheap server....

We have an older one at work using 3.5 in. drives, not the 2.5 in., and even then it's not cheap.


----------



## Disturbed117

Quote:


> Originally Posted by *Oedipus*
> 
> hot swap bays make me wet


That's a sexy looking machine.


----------



## Oedipus

Quote:


> Originally Posted by *dushan24*
> 
> We also have three Dell R810's and a lot of other cool stuff


Nothing bigger than 710s in our environment, unfortunately.


----------



## hambone96

Wow! I wish I had that much money!


----------



## Imrac

*Description / Usage:* This is my VM/file storage/backup server. My backups are replicated to a friend's server with similar storage capacity off-site, and he replicates to mine. The file storage VM is running OpenIndiana with the IBM M1015 passed through. This also occasionally hosts game servers.

*OS:* ESXi 5.0
*Case:* RaidMax with iStarUSA hotswappable 4 in 3 HDD bays.
*CPU:* Intel i7 3770s
*Cpu Cooling:* Dark Knight II
*Motherboard:* Asrock Z77 Extreme-m
*Memory:* 32GB Corsair Vengeance
*HBA:* IBM M1015 flashed to IT mode
*PSU:* 400w Earth Watts
*OS HDD:* 2gb Flash Drive
*Storage HDD(s):* 1x 76GB Raptor, 1x 500GB WD, 4x 1TB Samsung F3 and 4x 2TB WD Greens
*Server Manufacturer:* Me


----------



## Junior82

Just picked up this rack for $100 off Craigslist; sure beats the wire shelving that everything was on before.
Servers in the rack are:

Server on the Left:
OS: ESXi 5.0
Case: HP
CPU: Intel Xeon E3 1230
Cpu Cooling: HP
Motherboard: HP
Memory: 12GB Kingston DDR3 ECC
PSU: HP
OS HDD: 8GB Kangaru Flash Drive
Storage HDD(s): 2x Seagate 1TB, x1 Seagate 2TB, x1 Seagate 500GB
Server Manufacturer: HP (ProLiant ML110 G7)
This server is running several VM's

Server on the Right:
OS: Untangle
Case: Dell
CPU: Pentium 4 3.2GHz w/ HT
Cpu Cooling: Stock
Motherboard: Dell
Memory: 1GB
PSU: Dell
OS HDD: 80GB 2.5" 7200rpm
Server Manufacturer: Dell (Dimension 4700)
Running Untangle

Sitting on top of the Dell box is an Iomega StorCenter ix2 2TB NAS used for storage/backups.

OS: Windows Server 2003
Case: Compaq
CPU: Intel Pentium III 933
Cpu Cooling: passive heatsink
Motherboard: Compaq
Memory: 2GB
PSU: Compaq
OS HDD: 17.3GB in RAID 1
Server Manufacturer: Compaq (DL360)
Server is used for: Sabnzbd+, couch potato, sickbeard

Just ordered a Netgear GS724T to replace the 16-port Netgear gigabit switch. Also ordering another server, a Dell 2950, in a couple of weeks.


----------



## herkalurk

Quote:


> Originally Posted by *Junior82*
> 
> Dell 2950 in a couple weeks.


That 2950 would be a good replacement ESXI host.


----------



## jackbrennan2008

Quote:


> Originally Posted by *Imrac*
> 
> *Description / Usage:* This is my VM/FileStorage/Backup server. My backups are replicated to a friends server with similiar storage capacity off site, and he replicates to mine. The file storage VM is running OpenIndiana with the IBM M1015 passed through. This also occasionally hosts game servers.
> 
> *OS:* ESXi 5.0
> *Case:* RaidMax with iStarUSA hotswappable 4 in 3 HDD bays.
> *CPU:* Intel i7 3770s
> *Cpu Cooling:* Dark Knight II
> *Motherboard:* Asrock Z77 Extreme-m
> *Memory:* 32GB Corsair Vengeance
> *HBA:* IBM m1015 flash with IT
> *PSU:* 400w Earth Watts
> *OS HDD:* 2gb Flash Drive
> *Storage HDD(s):* 1x 76GB Raptor, 1x 500GB WD, 4x 1TB Samsung F3 and 4x 2TB WD Greens
> *Server Manufacturer):* Me


Nice server. I was also impressed by the 'swap with a friend' approach to off-site redundancy









Good job!

Sent from a local cell tower.


----------



## Matrixvibe

unRAID Media Server

OS: unRAID Pro
Case: Antec Three Hundred
CPU: Pentium 4 640 HT 3.2GHz
Motherboard: Asus P5N73-AM + TP-link gigabit Ethernet Card
Memory: 1.5GB Kingston DDR2 667
PSU: Corsair CX430 v2
Storage HDD(s): 3x 2TB WD Red (1x Parity, 2x Storage), 1x 320GB WD Caviar Blue (Cache drive)
Server Manufacturer (Ex: Dell, HP, You?): Myself

Only things I bought were the OS license and the WD Red drives. The rest I already had lying around as spare parts. Looking to upgrade the motherboard and CPU in the future, and to add a Supermicro AOC-SAS2LP-MV8 and more drives in the near future.










Drilled a couple of holes for cable management




Added APC UPS


----------



## Junior82

Quote:


> Originally Posted by *herkalurk*
> 
> That 2950 would be a good replacement ESXI host.


Just ordered the 2950 should be here by the end of the week.


----------



## evermooingcow

How do many of you deal with the noise of boxes like the 2950? Do you not care? Dedicate a room to it? Colo?


----------



## Oedipus

I would imagine they are located in a semi-dedicated room.


----------



## Boyboyd

Quote:


> Originally Posted by *evermooingcow*
> 
> How do many of you deal with the noise of boxes like the 2950? Do you not care? dedicate a room to it? colo?


Mine is far, far away in a room all by itself with a fax machine. There's no way I could cope if it was in my office.










Safe...


----------



## dushan24

That's a precarious position...

Why not make a LackRack?


----------



## Boyboyd

Quote:


> Originally Posted by *dushan24*
> 
> That's a precarious position...
> Why not make a LackRack?


I have an unused one at home, but the rails I have don't fit. Admittedly, I could just buy some more rails.

It's not as precarious as it looks, and it's not in a high-traffic area, but I'll admit it's less than ideal.


----------



## dushan24

Quote:


> Originally Posted by *Boyboyd*
> 
> I have one un-used at home but the rails i have don't fit. Admittedly I could just buy some more rails.
> It's not as precarious as it looks, and it's not in a high traffic area, but i'll admit it's less than ideal.


Haha, fair enough.


----------



## jackbrennan2008

I sifted through all my old stuff and managed to put together this little server running ESXi 5.1. The only thing I had to buy was a new CPU (got a 3570K) for my sig rig, seeing as I donated the 2600K to the server below.

CPU: 2600K
RAM: 16GB Corsair Vengeance
Mobo: Gigabyte GA-P67A-UD7-B3
HDD: 1x 1.5TB WD Black and 1x 500GB
Network datastore: 500GB NAS storage (2 disks in RAID 1)
GPU: Geforce 210 passive

This will be running my webserver as well as my VPN and Windows Server 2008 R2.

I've just had a new fiber line installed with an upload speed of around 200Mbps, which will be nice for a few of my public-facing servers.

Not bad for a Frankenstein server.

I'll try to take some pictures tomorrow.

Sent from my mobile phone.


----------



## 100PARIK

Hello fellas!

My server is as follows:

*CPU:* Q6600 (G0)
*GPU:* BFG GTX280
*RAM:* Patriot PDC22G8500ELK 4gb PC2-8500
*PSU:* Xigmatek Tauro 750W
*MOBO:* ASUS Striker Extreme
*CASE:* Norco RPC-430 (modified to fit GTX280)


----------



## blooder11181

Can you use a less power-hungry GPU?


----------



## tiro_uspsss

Quote:


> Originally Posted by *100PARIK*
> 
> Hello fellas!
> My server is as follows:
> *CPU:* Q6600 (G0)
> *GPU:* BFG GTX280
> *RAM:* Patriot PDC22G8500ELK 4gb PC2-8500
> *PSU:* Xigmatek Tauro 750W
> *MOBO:* ASUS Striker Extreme
> *CASE:* Norco RPC-430 (modified to fit GTX280)


What precisely is it serving (as)?


----------



## pm40elys40

Storage server for music and movies.

Windows 7 Ultimate x64
Casetronic/Travla TE-1160 1U
Intel Core 2 Duo T7250
MSI FUZZY GME965 IPC Board
4096MB DDR2-800
Seasonic SS-250M1U (Travla rebadged)
OCZ Vertex2E 120GB
WD3200BEKT
4x WD20EARS
Silicon Image SiI3124 controller

Manufactured by: Me!


----------



## 100PARIK

Quote:


> can you you use a less power hungry gpu?


Yes sir, I want to use a different GPU when I get a chance to buy it 
Just like most of us here, I threw this server together from old/existing parts.


----------



## Kosire

*CPU:* Intel Xeon E3-1225V2 8 MB
*Cooler:* Noctua NH-L12
*Motherboard:* ASUS P8H77-I
*RAM:* Kingston HyperX blu 2 x 8 GB
*GPU:* Intel HD Graphics P4000
*HDD:* Seagate Barracuda 7200.14 3000GB 64MB SATA
Seagate Barracuda 7200.14 3000GB 64MB SATA
Seagate Barracuda 7200.14 3000GB 64MB SATA
Seagate Barracuda 7200.14 3000GB 64MB SATA
Seagate Barracuda 7200.11 1500GB 32MB SATA
*SSD:* Samsung 830 Series 128 GB SSD
*PSU:* Corsair Builder Series CX430 V2
*Case:* Lian Li PC-Q25B


----------



## 100PARIK

Quote:


> Quote:
> Originally Posted by 100PARIK
> 
> Hello fellas!
> My server is as follows:
> CPU: Q6600 (G0)
> GPU: BFG GTX280
> RAM: Patriot PDC22G8500ELK 4gb PC2-8500
> PSU: Xigmatek Tauro 750W
> MOBO: ASUS Striker Extreme
> CASE: Norco RPC-430 (modified to fit GTX280)
> 
> what is it precisely serving (as)?



It serves as an HTPC, file share server, and occasional gaming rig.


----------



## Kosire

Quote:


> Originally Posted by *100PARIK*
> 
> 
> 
> 
> 
> 
> 
> 
> It serves as HTPC, file share server, and occasional gaming rig.


You know you can edit your posts, right?


----------



## VirtualFido

No pictures, but it's not pretty.

*OS:* Ubuntu Server 12.04 LTS
*CPU:* Intel Core i5 2500K
*Cooler:* Scythe Shuriken Rev.B SCSK-1100
*Motherboard:* ASUS P8H77-I
*RAM:* Corsair Vengeance DDR3 1600MHz 2 x 8GB
*GPU:* Intel HD Graphics 3000
*SSD:* Kingston SSDNow S100 SSD 16GB
*HDD:* Seagate Barracuda 7200.14 3000GB 64MB SATA
Seagate Barracuda 7200.14 3000GB 64MB SATA
Samsung 2000GB F4 EcoGreen 32MB
Samsung 2000GB F4 EcoGreen 32MB
*PSU:* Corsair Builder Series CX430 V2
*Case:* Random no-name


----------



## 100PARIK

Quote:


> Originally Posted by *Kosire*
> 
> You know you can edit your posts right?


Yes, I know that. I wrote the previous post from my phone... editing requires too much touching, though


----------



## dushan24

Quote:


> Originally Posted by *Kosire*
> 
> 
> *CPU:* Intel Xeon E3-1225V2 8 MB
> *Cooler:* Noctua NH-L12
> *Motherboard:* ASUS P8H77-I
> *RAM:* Kingston HyperX blu 2 x 8 GB
> *GPU:* Intel HD Graphics P4000
> *HDD:* Seagate Barracuda 7200.14 3000GB 64MB SATA
> Seagate Barracuda 7200.14 3000GB 64MB SATA
> Seagate Barracuda 7200.14 3000GB 64MB SATA
> Seagate Barracuda 7200.14 3000GB 64MB SATA
> Seagate Barracuda 7200.11 1500GB 32MB SATA
> *SSD:* Samsung 830 Series 128 GB SSD
> *PSU:* Corsair Builder Series CX430 V2
> *Case:* Lian Li PC-Q25B


That's clean, nice one!


----------



## Jtvd78

So, I see a lot of you are running VMs. Why exactly do you need to have 5 VMs running? And how does the performance of virtualizing, like in VMware, compare to just installing the OS directly?


----------



## dushan24

Quote:


> Originally Posted by *Jtvd78*
> 
> So, i see a lot of you are running VMs. Why do you exactly need to have 5 VMs running? And how is the performance of virualizing, like in VMware, compared to just installing the OS directly?


Are you asking me?

A true hypervisor (Xen, ESXi, etc.) should have practically equal performance to a bare-metal install.

Paravirtualisation (Virtuozzo, etc.) will perform worse due to the overhead of the host OS and the fact that it doesn't get hardware pass-through.

We run a ~200-server-strong VMware infrastructure at work...

The reasons to run many VMs are:
Segregating work - bad idea for one server to do too much.
Redundancy - multiple VMs across multiple hosts in case one goes down.
Load balancing - multiple VMs doing the same thing and sharing a heavy load (usually web/SQL).
For home setups - trying things in different OSes and experimenting.

Many other reasons too...
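As an aside on the bare-metal-vs-hypervisor point: on x86 Linux you can usually tell whether the OS is inside a VM at all, because hypervisors set a `hypervisor` CPUID flag that shows up in /proc/cpuinfo. A rough sketch (the flag itself is standard for hardware-virtualised guests, but not every container/paravirt setup exposes it, so treat this as a heuristic):

```python
def looks_virtualised(cpuinfo: str) -> bool:
    """True if the 'hypervisor' CPUID flag appears in a /proc/cpuinfo dump."""
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            # flags line looks like: "flags\t\t: fpu vme ... hypervisor ..."
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# Usage on a live Linux box:
#   with open("/proc/cpuinfo") as f:
#       print(looks_virtualised(f.read()))
```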


----------



## FiX

Quote:


> Originally Posted by *dushan24*
> 
> Are you asking me?
> A true hypervisor (Xen, ESXi etc.) should have practically equal performance to a bare metal install.
> Paravirtualisation (Virtuozzo etc.) will be less due to overheads etc. of the host OS and the fact it doesn't get hardware pass through etc.
> We run a ~200 server strong VMWare infrastructure at work...
> The reason to run many VM's are:
> Segregating work - Bad idea for one server to do too much.
> Redundancy - Multiple VM's accross multiple hosts incase one goes down.
> Load balancing - Multiple VM's doing the same thing and sharing a heavy load (usually Web/SQL)
> For home setups - Trying things in different OS's and experimenting.
> Many other reasons too...


Xen has two modes - PV and HVM.
PV (paravirtualisation) performs better, but the domU knows it's being virtualised.
HVM (hardware virtualisation) doesn't perform as well and can't run on all hardware, but the domU doesn't know it's being virtualised. This mode supports Windows, whereas PV doesn't.
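To FiX's point, a Linux domU can usually tell which mode it's in: modern kernels expose the hypervisor and guest type under /sys/hypervisor. A small sketch (the exact sysfs files can vary by kernel version, so treat the paths as an assumption; the `read` callback is only there to make it testable without a Xen box):

```python
def xen_guest_type(read=lambda p: open(p).read()):
    """Return the Xen guest type (e.g. 'PV' or 'HVM'), or None if not a Xen guest."""
    try:
        if read("/sys/hypervisor/type").strip() != "xen":
            return None  # bare metal, or a different hypervisor
        return read("/sys/hypervisor/guest_type").strip()
    except OSError:
        return None  # sysfs entries absent -> almost certainly not under Xen

# On a real Xen domU:  print(xen_guest_type())
```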


----------



## BadDad62

Usage: General storage / downloader / media streaming. Not quite finished; a few little things left to change.

OS: Win 7
Case: TJ07-E
CPU: i3 2130
Mobo: ASRock Z77 Pro M
RAM: 8GB G.Skill 1600MHz
GPU: GTX 260
PSU: Tt Toughpower 775W
OS HDD: 64GB SSD
+ Watercooled








Storage HDD(s): WD 2Tb x 8
Server Manufacturer: BadDad62


----------



## ZealotKi11er

Just built my first server.

AMD A8-3870K @ 3.0Ghz w/HD 6550
Corsair H60
Biostar TA75A+
8GB G.Skill DDR3-1600
Antec 302
Antec EA-650

HDDs:
1x Seagate 500GB for the OS; will probably replace it with a small SSD.
2x Seagate 2TB.
Will keep adding more HDDs in the future.

This way I freed up my gaming PC and don't have to run it 24/7.


----------



## BiscuitHead

Hey guess what guys....



My server sucks. Still in my first semester of a Windows Administration program, and I was able to get this GX280 for free from my dad. Once I got my MSDN AA license, I put server 2008 on it and have just been playing around with it. Eventually I'll add more HDDs to it and actually put it to good use.


----------



## RogueRage

=========================== NAS ===========================
Case:.....LIAN LI PC-V354B Micro ATX
Mobo:.....ASUS M5A88-M AM3+ AMD 880G HDMI SATA 6Gb/s USB 3.0 Micro ATX AMD Motherboard
RAM:......G.SKILL Sniper Series 8GB 240-Pin DDR3 SDRAM DDR3 1600 Low Voltage Desktop
CPU:.......AMD FX-8150 FX 8-Core Black Edition Processor Socket AM3+ - FD8150FRGUBOX
Cooler:....CORSAIR CWCH60 Hydro Series H60 High Performance Liquid CPU
PSU:.......KINGWIN Lazer Platinum Series LZP-550 550W ATX SLI Ready
HD:.........Western Digital Caviar Green WD20EARX 2TB 64MB Cache SATA 6.0Gb/s (x4) Raid5
SSD:.......Intel 520 Series Solid-State Drive 120 GB SATA 6 Gb/s 2.5-Inch
Fan:........(120mm x120mm x 15mm) CoolerMaster Blade Master XtraFlo 120 Slim Case Fan
BD-RW:....Home Surplus

http://www.overclock.net/t/1315802/silent-nas-build


----------



## parityboy

Quote:


> Originally Posted by *dushan24*
> 
> Are you asking me?
> A true hypervisor (Xen, ESXi etc.) should have practically equal performance to a bare metal install.
> Paravirtualisation (Virtuozzo etc.) will be less due to overheads etc. of the host OS and the fact it doesn't get hardware pass through etc.
> We run a ~200 server strong VMWare infrastructure at work...
> The reason to run many VM's are:
> Segregating work - Bad idea for one server to do too much.
> Redundancy - Multiple VM's accross multiple hosts incase one goes down.
> Load balancing - Multiple VM's doing the same thing and sharing a heavy load (usually Web/SQL)
> For home setups - Trying things in different OS's and experimenting.
> Many other reasons too...


- *Stability*. Virtualised "hardware" will always be the same, no matter what the underlying physical hardware is.
- *Stability*. Each application stack can run on a known "hardware"/OS combination. Upgrades can be tested in a contained environment, away from a production installation and without having to buy new hardware.


----------



## dushan24

Quote:


> Originally Posted by *parityboy*
> 
> - *Stability*. Virtualised "hardware" will always be the same, no matter what the underlying physical hardware is.
> - *Stability*. Each application stack can run on a known "hardware"/OS combination. Upgrades can be tested in a contained environment, away from a production installation and without having to buy new hardware.


Good additions









Though depending on the hypervisor and whether or not it has a compatibility mode there is a bit of a but to point 1.


----------



## parityboy

That's a very good point actually: with the likes of KVM and ESXi, how transparent is the PCI pass-through mode? Can the guest see a PCIe card's BIOS for example?


----------



## Sodalink

I'm almost done with my server and can post pics now.

I use it for storage, streaming, as a bedroom HTPC, and some gaming too on high/mid. At some point I will use it as a DVR server for security cameras.

Specs:
AMD A6-3500 triple-core 2.1GHz APU
ASUS FM1 Motherboard
Patriot 16GB DDR3 1600 1.5v
Corsair 430CX v2
NZXT H2 White case
Asus Blu-Ray drive
OCZ 90GB Agility 3 SSD
Hitachi 5x2TB 7200rpm drives in Raid 5 (8TB storage)
Zerotherm Nirvana CPU Cooler


----------



## dushan24

Quote:


> Originally Posted by *parityboy*
> 
> That's a very good point actually: with the likes of KVM and ESXi, how transparent is the PCI pass-through mode? Can the guest see a PCIe card's BIOS for example?


We use ESXi 4.1 at work and run all the hosts in Enhanced vMotion Compatibility (EVC) mode, which means the VMs are presented an emulated CPU that the hypervisor then maps onto the physical CPU.

No direct BIOS access, though you can copy strings from the host to the VM via ESXi.

This allows vMotioning machines onto any host running the same EVC mode, regardless of the underlying hardware.
We have 3 generations of Intel servers in our environment so it saved us a lot.

I run Xen at home, and I believe it is the same.
But since I have only one host, there is no sense in using compatibility mode, so I can't tell you for sure.


----------



## Peanuthead

Quote:


> Originally Posted by *dushan24*
> 
> We use ESXi 4.1 at work and run all the hosts in Enhanced VMWare Compatibility Mode, which means the VM's are presented an emulated CPU that is then passed by the hypervisor to the physical CPU.
> 
> This allows VMotioning machines onto any host running the same EVC mode regardless of underlying hardware.


You are correct. ESXi presents the guests a CPU that is the lowest common denominator of all of the hosts' CPUs.


----------



## parityboy

Stupid question: why would the *guest* CPU be passed to the *host*? Does the host actually see it as a CPU (and treat it as part of an SMP setup) or as some kind of kernel process?


----------



## Peanuthead

It's not really passing the CPU so much as cloaking the actual CPU in each host as the LCD (lowest common denominator) CPU found among all of the hosts. Does that make more sense?
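The LCD idea is easy to picture as a set intersection over each host's CPU feature flags: the baseline presented to guests is whatever every host supports, which is what lets a VM vMotion to any host in the cluster. A toy sketch (host names and flag sets are made up for illustration):

```python
# Hypothetical cluster: three generations of hosts with shrinking feature sets.
host_features = {
    "esx1": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "aes"},  # newest gen
    "esx2": {"sse2", "sse3", "ssse3", "sse4_1"},                   # middle gen
    "esx3": {"sse2", "sse3"},                                      # oldest gen
}

# The EVC-style baseline is the intersection: only features ALL hosts share.
baseline = set.intersection(*host_features.values())
print(sorted(baseline))  # ['sse2', 'sse3']
```

A guest masked to this baseline never sees `aes` or `sse4_2`, so it can land on the oldest host without noticing a difference.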


----------



## parityboy

Yes.


----------



## dushan24

Sorry, I didn't explain it perfectly.


----------



## dushan24

Quote:


> Originally Posted by *Peanuthead*
> 
> You are correct. ESXi presents the CPU to all of the hosts using the lowest common denominator of all of the CPUs of the hosts.


Only if you tell it to


----------



## Peanuthead

Quote:


> Originally Posted by *dushan24*
> 
> Only if you tell it to


Correct. If someone is in there configuring it, then I am presuming they will be setting the LCD.


----------



## dushan24

Quote:


> Originally Posted by *Peanuthead*
> 
> Correct. If someone is in there then I am making the presumption that you will be setting the LCD.


That's what we do in our environment.

8 hosts, all running ESXi 4.1
3 generations of Intel CPUs

EVC mode set to the LCD.

PS: We're upgrading to ESXi 5 real soon


----------



## Ryanb213

1 down, 5 to go.


----------



## dushan24

I like how everything's at the front on Rackables









@Ryanb213, what will they be doing?


----------



## killabytes

Quote:


> Originally Posted by *Fir3Chi3f*
> 
> VPSs are cheating
> 
> 
> 
> 
> 
> 
> 
> 
> I had no clue you actually got your home server together tho! Looks good!


I wouldn't consider a VPS cheating. They're renting it, it's in their name. Now, people posting servers from work or school, that's cheating. Obviously those are going to be nicer than what folks have at home.


----------



## Boyboyd

Quote:


> Originally Posted by *killabytes*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fir3Chi3f*
> 
> VPSs are cheating
> 
> 
> 
> 
> 
> 
> 
> 
> I had no clue you actually got your home server together tho! Looks good!
> 
> 
> 
> I wouldn't consider a VPS cheating. They're renting it, it's in their name. Now, people posting servers from work or school. That's cheating. Obviously they're going to be nicer than what folks have at home.
Click to expand...

What if we're the server administrator at our work?


----------



## nerdalertdk

Quote:


> Originally Posted by *Boyboyd*
> 
> What if we're the server administrator at our work?


If you bought the server with your own money, then yeah; if not, then no


----------



## 72bluenova

HP ProLiant DL380 G5 Server 2×Xeon Quad-Core 3.0GHz + 16GB RAM + 8×73GB 10K SAS for home. I should be getting it during this week. Will post pictures once I get it.

I really don't have much space inside the house, and I doubt the wife will let me have this box screaming with the fans either.









So, during the wait for the box to arrive, it is wiring time. Will have it running in the garage and I need to get a UPS for it as well.

Main purpose for the box is to install ESXi, then some Windows 2008 servers on it to study for the MCITP


----------



## dushan24

Quote:


> Originally Posted by *Boyboyd*
> 
> What if we're the server administrator at our work?


Doesn't count; if work servers counted then I'd be listing our 3 new R810s


----------



## Oedipus

I say post them anyway as long as you don't claim they're yours.


----------



## dushan24

Perhaps, we have a whole cabinet of cool stuff.

The R810's are simply the most powerful.


----------



## Ecstacy

Quote:


> Originally Posted by *dushan24*
> 
> Perhaps, we have a whole cabinet of cool stuff.
> The R810's are simply the most powerful.


Pics?


----------



## dushan24

Quote:


> Originally Posted by *Ecstacy*
> 
> Pics?


Perhaps, I posted some in another thread, forget where.


----------



## shadow5555

Here is my office room, which is my network/server/bench working area. Let me know what you think



Black computer is my WHS 2011 box
dual Opteron 2.4s with 10GB ECC RAM
500GB Windows drive
7TB of storage

16-port business-class Cisco gigabit switch

White computer is my dedicated Untangle firewall box
crappy P4 build with dual gigabit NICs



Hiding in the corner is my ESX 3.5 HP ProLiant G3 server

dual Opteron 2.8 with 8GB ECC RAM and 6x 73GB SCSI drives

I know using an outdoor chair as my office chair doesn't work well, but it's what I have at the moment. My ex decided to break my office chair; don't ask, long story lol.


----------



## killabytes

Quote:


> Originally Posted by *shadow5555*
> 
> Here is my office room, which is my network/server/bench working area. Let me know what you think
> 
> Black computer is my WHS 2011 box
> dual Opteron 2.4s with 10GB ECC RAM
> 500GB Windows drive
> 7TB of storage
> 16-port business-class Cisco gigabit switch
> White computer is my dedicated Untangle firewall box
> crappy P4 build with dual gigabit NICs
> 
> Hiding in the corner is my ESX 3.5 HP ProLiant G3 server
> dual Opteron 2.8 with 8GB ECC RAM and 6x 73GB SCSI drives
> I know using an outdoor chair as my office chair doesn't work well, but it's what I have at the moment. My ex decided to break my office chair; don't ask, long story lol.


We're case bros!


----------



## Mootsfox

Do you use the tape drive?


----------



## killabytes

Quote:


> Originally Posted by *Mootsfox*
> 
> Do you use the tape drive?


Not so much anymore. I use it for system state only. Tis old.









EDIT:

Here ya go


----------



## Mootsfox

Quote:


> Originally Posted by *killabytes*
> 
> Not so much anymore. I use it for system state only. Tis old.
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT:
> Here ya go


When I was about 10, I got to go through a building that was closing down, and they had a huge tape library (shut down) that I got to walk through. I've been awed by them since


----------



## spice003

That pic looks kind of like Google's tape backup room, with the arm and everything


----------



## dushan24

Quote:


> Originally Posted by *spice003*
> 
> that pic looks kinda like google's tape back up room , with the arm and everything


Many places have them. Google's actually aren't that big compared to some (I assume you're referring to the newly released Google data centre photos)

Back on topic, those huge tape libraries are beastly.


----------



## 72bluenova

So the server arrived.










I am not very happy with the outside; the eBay auction said it was in excellent condition. I have not powered the box on yet, as unfortunately I had no time to do the wiring.



















The one thing that really has me irritated is that the front bezel is broken. You have to hold it shut to be able to press the power button.


----------



## tiro_uspsss

*snip*


----------



## killabytes

Quote:


> Originally Posted by *Mootsfox*
> 
> When I was about 10, I got to go through a building that was closing down, and they had a huge tape library (shut down) that I got to walk through. I've been awed by them since


The data center I work at has an IBM unit that holds about 2,000 LTO tapes. One of my main duties is backup and restore.


----------



## dushan24

Quote:


> Originally Posted by *killabytes*
> 
> The data center I work at has an IBM unit that holds about 2,000 LTO tapes. One of my main duties is backup and restore.


LTO5?

That's what we have in ours, though it's a MUCH smaller unit.


----------



## killabytes

Quote:


> Originally Posted by *dushan24*
> 
> LTO5?
> That's what we have in ours, though it's a MUCH smaller unit.


Yup 5. It's smaller than the picture posted, but still large.


----------



## VictorB

Here is my cheap but quality ZFS file server

Case: Fractal Design Arc Midi
Motherboard: Asus F1A75-M
APU: AMD A6-3500 2.1GHz triple core
Memory: Corsair 16GB kit (2x8GB) 1333MHz
Power supply: Corsair GS500
SSD: Samsung 830 128GB
Storage: 5x 3TB Seagate Barracuda 7200.14
NIC: HP NC364T quad-port gigabit

In this video I explain my build






















I also modded a noisy 1u 48port switch for home use with a 12cm fan



Its running NAS4FREE


----------



## dushan24

Quote:


> Originally Posted by *VictorB*
> 
> Here is my cheap but quality ZFS file server
> Case: Fractal Design ARC midi
> Motherboard: Asus F1A75-M
> APU: AMD A6 3500 2.1ghz triple core
> Memory: Corsair 16gb kit 2x8gb 1333mhz
> Powersupply: Corsair GS500
> SSD: Samsung 830 128gb
> Storage: 5x 3TB Seagate Barracuda 7200.14
> Nic: HP NC364T Quad port gbit
> In this video i explain my build
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I also modded a noisy 1u 48port switch for home use with a 12cm fan
> 
> 
> 
> Its running NAS4FREE


Nice man REP


----------



## VictorB

Thanx Man









(Why did you quote the whole message when it's in the post right above yours?)


----------



## beers

Quote:


> Originally Posted by *killabytes*
> 
> We're case bros!


Dude I have that same case and Evercool unit...

Edit:
Also, a not especially spectacular view while replacing a dead motherboard:


----------



## frank anderson

Just adding some recently taken pics.
Quote:


> Usage: whatever I decide to throw at it including the kitchen sink...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Windows Server 2003 Ent Ed, SP2 (I'm too lazy to upgrade)
> Xigmatek Pretorian (I think)
> i5 2500K (stock)
> Gigabyte GA-Z67-UD3
> 16GB - Corsair Vengeance CML16GX3MA1600C9 (running at 1333 stock)
> Corsair AX750
> Crucial M4 240GB
> WD2002FAEX x4 Raid 5 via LSI
> built by ME


----------



## 72bluenova

Quote:


> Originally Posted by *frank anderson*


What controller is that?


----------



## beers

Quote:


> Originally Posted by *72bluenova*
> 
> What controller is that?


I'd guess http://www.neweggbusiness.com/Product/Product.aspx?Item=N82E16816118160


----------



## herkalurk

Quote:


> Originally Posted by *killabytes*
> 
> The data center I work at has an IBM unit that holds about 2,000 LTO tapes. One of my main duties is backup and restore.


Nice. I also get the joy of managing backups, though my workplace isn't as large. We have a Dell PowerVault ML6030 LTO5 library; it holds 195 tapes with 4 drives currently. Still fun to watch the wee robot whiz around though. Just a question: what backup software does your company use (if it's OK to divulge)?


----------



## killabytes

Quote:


> Originally Posted by *herkalurk*
> 
> Nice. I also get the joy of managing backups, though my workplace isn't as large. We have a Dell PowerVault ML6030 LTO5 library; it holds 195 tapes with 4 drives currently. Still fun to watch the wee robot whiz around though. Just a question: what backup software does your company use (if it's OK to divulge)?


It's okay.

The IBM unit uses Tivoli, while our remote sites use Symantec.


----------



## Oedipus

Backup Exec 2012 is horrendous to navigate, and the way it integrates with cartridges versus tapes is nothing short of stupefying. I'm looking at other options for our next server installs.


----------



## tycoonbob

System Center 2012 Data Protection Manager. Awesome software.


----------



## Oedipus

Speaking of which, I've been working on getting SCCM 2012 installed on my sig system. What a pain. After battling all of the prereqs (IIS this, BITS that, etc.) I've gotten to the point in the install where it fails to create the SQL cert, at which point I find out that SQL 2012 doesn't work with SCCM 2012.

Do you run SQL 2008? I've used SCCM 2007 before and it's cool enough to warrant all this effort.


----------



## killabytes

Quote:


> Originally Posted by *Oedipus*
> 
> Backup Exec 2012 is horrendous to navigate, and the way it integrates with cartridges versus tapes is nothing short of stupefying. I'm looking at other options for our next server installs.


Yes, I hate it. Sadly we use it for remote sites. So it's even worse when they load up the wrong tape for the day. Nothing but phone tag.
Quote:


> Originally Posted by *tycoonbob*
> 
> System Center 2012 Data Protection Manager. Awesome software.


Just did a quick Google, looks nice. I wish I had the position of choice.


----------



## herkalurk

Quote:


> Originally Posted by *killabytes*
> 
> It's okay.
> The IBM unit uses Tivoli, while our remote sites use Symantec.


How do you like TSM? We use it as well. Most people I've talked to aren't very happy with it.


----------



## tycoonbob

Quote:


> Originally Posted by *Oedipus*
> 
> Speaking of which, I've been working on getting SCCM 2012 installed on my sig system. What a pain. After battling all of the prereqs (IIS this, BITS that, etc.) I've gotten to the point in the install where it fails to create the SQL cert, at which point I find out that SQL 2012 doesn't work with SCCM 2012.
> Do you run SQL 2008? I've used SCCM 2007 before and it's cool enough to warrant all this effort.


I have done about 12 production System Center 2012 Configuration Manager installs (and countless lab installs) and I always use SQL Server 2008 R2. To be more specific, it requires SQL Server 2008 R2 SP1 CU4, but you can just install SQL Server 2008 R2 SP2 and be good. SQL Server 2012 won't be supported until Service Pack 1 for System Center 2012 is released early next year.

If you ever have any questions about it, let me know!


----------



## killabytes

Quote:


> Originally Posted by *herkalurk*
> 
> How do you like TSM? We use it as well. Most people I've talked to aren't very happy with it.


Very powerful. I enjoy the exposure. A lot of it is over my head right now, new to the job. My previous job used Windows Backup, lol.


----------



## ElectroGeek007

Here is my "server," though it's nothing compared to some of the awesomeness in this thread...









OS: Windows 7 Pro x64
Case: Dell Dimension 2400 case







(it's actually a decent little case after a bit of modding)
CPU: Intel Core 2 Duo E6750 (2.66 GHz)
Motherboard: Biostar G41D3C
Memory: 4GB GSkill Ripjaws
PSU: Antec EarthWatts Green 430w
OS HDD (If you have one): Western Digital 60GB








Storage HDD(s): coming soon...
Server Manufacturer (Ex: Dell, HP, You?): Pretty much me...

Used mainly to host a Minecraft server (on a RAMDisk), and as a file server with an external drive attached to it. I hope to soon buy an actual storage drive (or perhaps 2 to RAID?).
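Running a world off a RAM disk is quick but volatile, since the contents vanish on reboot or power loss. A minimal sketch of the copy-back idea (all paths are invented examples; the RAM disk itself would be a tmpfs, or ImDisk on Windows, set up separately):

```python
# Sketch: snapshot a RAM-disk Minecraft world back to real storage on a
# schedule, so a crash only loses the last interval. Paths are examples.
import pathlib
import shutil

def snapshot(src: pathlib.Path, dst: pathlib.Path) -> None:
    """Copy src to a temp dir, then swap it into place as dst."""
    tmp = dst.with_name(dst.name + ".tmp")
    if tmp.exists():
        shutil.rmtree(tmp)          # clear any half-finished copy
    shutil.copytree(src, tmp)
    if dst.exists():
        shutil.rmtree(dst)          # drop the previous backup
    tmp.rename(dst)                 # swap the fresh copy into place

# Example: run something like this from cron or a scheduler every few minutes:
# snapshot(pathlib.Path("/mnt/ramdisk/world"), pathlib.Path("/srv/mc/world.bak"))
```

Copying into a temp directory first means an interrupted snapshot never leaves you with a half-written backup in the destination path.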


----------



## gc8dc95

Server: Dell PowerEdge 2950 III
OS: ESXi 5.1
CPU: 2x Intel Xeon quad-cores @ 2.33GHz
Memory: 16GB
OS HDD: HP 16GB USB drive
Storage HDD(s): 4x 146GB 15K SAS in RAID 10

This is my first actual server and has been fun to play around with. I am going to use it to run several VMs for Microsoft and Linux learning.


----------



## VictorB

So part2 of my build is online!












Part1




For more info scroll back in this thread


----------



## Rexel

*OS:* unRAID Server Plus 5.0-rc5
*Case:* Cooler Master Silencio 550
*CPU:* Intel Core i7-920
*Motherboard:* ASUS P6T SE
*Memory:* 6GB OCZ (3x2GB) DDR3 PC3-10666
*PSU:* Cooler Master M 750
*OS HDD (If you have one):* Kingston USB drive
*Storage HDD(s):* 3x 1.5TB data, 1x 2TB parity
*Server Manufacturer (Ex: Dell, HP, You?):* I built it myself

The server is mainly used to store documents, family photos, movies and music. Besides that, it also streams movies to the TVs and is used for downloading.

http://postimage.org/image/6zedq4wq5/

http://postimage.org/image/4w3yigwx9/


----------



## Blindsay

Quote:


> Originally Posted by *ElectroGeek007*
> 
> Here is my "server," though it's nothing compared to some of the awesomeness in this thread...
> 
> 
> 
> 
> 
> 
> 
> 
> OS: Windows 7 Pro x64
> Case: Dell Dimension 2400 case
> 
> 
> 
> 
> 
> 
> 
> (its actually a decent little case after a bit of modding)
> CPU: Intel Core 2 Duo E6750 (2.66 GHz)
> Motherboard: Biostar G41D3C
> Memory: 4GB GSkill Ripjaws
> PSU: Antec EarthWatts Green 430w
> OS HDD (If you have one): Western Digital 60GB
> 
> 
> 
> 
> 
> 
> 
> 
> Storage HDD(s): coming soon...
> Server Manufacturer (Ex: Dell, HP, You?): Pretty much me...
> Used mainly to host a Minecraft server (on a RAMDisk), and as a file server with an external drive attached to it. I hope to soon buy an actual storage drive (or perhaps 2 to RAID?).


How well does Minecraft work on a RAMDisk? Is it noticeably better? Any guide on setting that up? Right now my Minecraft server is just on an SSD.


----------



## VictorB

18TB gross and 12TB usable, plus a 2x500GB mirrored boot pair. The 3TB Seagate disks are on the IBM M1015 controller. There is a 128GB Samsung SSD in the 5.25" drive bays as an L2ARC ZFS cache.
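Those figures are consistent with the six 3TB disks forming a single RAIDZ2 vdev (two disks' worth of parity); a quick back-of-the-envelope check, with the RAIDZ2 layout being my assumption:

```python
# Back-of-the-envelope RAIDZ capacity: usable space is roughly the raw
# capacity minus the parity disks. Real ZFS loses a bit more to metadata,
# slop space, and the TB-vs-TiB difference.

def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Usable capacity in vendor terabytes, ignoring filesystem overhead."""
    return (disks - parity) * disk_tb

raw = 6 * 3.0                        # 18 TB gross
usable = raidz_usable_tb(6, 3.0, 2)  # RAIDZ2: two disks' worth of parity
print(raw, usable)  # → 18.0 12.0
```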


----------



## Sodalink

Quote:


> Originally Posted by *VictorB*
> 
> IHer is my cheap quality ZFS fileserver
> Case: Fractal Design ARC midi
> Motherboard: Asus F1A75-M
> APU: AMD A6 3500 2.1ghz triple core
> Memory: Corsair 16gb kit 2x8gb 1333mhz
> Powersupply: Corsair GS500
> SSD: Samsung 830 128gb
> Storage: 5x 3TB Seagate Barracuda 7200.14
> Nic: HP NC364T Quad port gbit
> In this video i explain my build
> 
> Its running NAS4FREE


Mine is similar to yours...
Quote:


> Originally Posted by *Sodalink*
> 
> I'm almost done with my server and can post pics now.
> I use it as storage, streaming, a bedroom HTPC, and some gaming too on high/mid. At some point I will use it as a
> DVR server for security cameras.
> Specs:
> AMD A6-3500 x3 2.1GHz APU
> ASUS FM1 motherboard
> Patriot 16GB DDR3 1600 1.5v
> Corsair 430CX v2
> NZXT H2 White case
> Asus Blu-ray drive
> OCZ 90GB Agility 3 SSD
> Hitachi 5x2TB 7200rpm drives in RAID 5 (8TB storage)
> Zerotherm Nirvana CPU cooler


How are you liking that APU? I personally love it for what I paid for it. Also, do you have any idea how much energy it uses? I'm a bit hesitant to leave it on 24/7, so I only turn it on when I need it, even though I feel like that might cause some wear on the drives.


----------



## VictorB

I did some testing today with my APU:

APU + motherboard + PSU + USB stick + memory: 33W idle
+3 fans: 10W
+quad NIC: 7W
+M1015 disk controller: 7W
+2x Samsung 500GB hard disks: 23W (strangely high compared to the Seagates; the Samsung disks also run hotter)
+6x 3TB Seagate and 128GB SSD: 36W idle

Total: 110W idle now

I really like the APU; it's cheap and has given me no problems. The idle could be a bit lower, but it's OK.


----------



## axipher

Quote:


> Originally Posted by *VictorB*
> 
> I did some testing today with my APU
> 
> APU Motherboard psu usb stick mem 33watt idle
> +3 Fans 10watt
> +Quad nic 7 watt
> +M1015 diskcontroller 7Watt
> +2x samsung 500gb harddisk 23 watt <- strange so much in compare to the seagates the samsung disks run also hotter
> +6x 3tb seagate and 128gb ssd 36watt idle
> 
> total 110 watt idle now
> 
> I really like the APU is cheap and without any problem. The idle could be a bit lower but its ok


I may have missed it but what APU?

Have you tried under-volting it and under-clocking it, maybe even disabling a module?


----------



## VictorB

A6-3500, 2.1GHz triple core. I didn't do any undervolting or underclocking, but under ZFSguru it clocks back to 100MHz at idle.


----------



## beers

Here's a casual shot from when I was changing cases into a new (and nicely cheap at $30 shipped) Three Hundred.
It got pretty messy afterwards, though.







Idles at ~90W.
Quote:


> AMD Sempron 145
> Asus M5A78L-M LX+
> 4 GB OCZ Platinum
> 64 GB Samsung 830
> 8x 750 GB RAID 5
> Corsair CX430
> CentOS 6.3


----------



## CiBi

I use it mostly for downloading torrents but it also serves as a print and fileserver.

*OS:* Windows XP, but I'm going to install Windows Server 2003 32-bit soon
*Case:* some Medion crap case
*CPU:* Intel Pentium 4 550 (1 core, 2 threads @ 3.40GHz)
*Motherboard:* Microstar MS-7091
*Memory:* 4 DIMMs of 512MB DDR (2GB total @ 200MHz)
*PSU:* stock Medion crap PSU
*OS HDD (If you have one):* Western Digital Raptor 36GB (10,000rpm)
*Storage HDD(s):* external WD drives
*Server Manufacturer (Ex: Dell, HP, You?):* It started out as a Medion PC, but hardly anything is left of it except the case, PSU and motherboard

Pics (click for high res):


This is from before I added memory, but the rest is the same.


----------



## DJEndet

With all the nice and shiny servers you guys have here, it kind of makes me ashamed to post mine.









Used for torrent downloads, media playback to our PS3, and some file backups.

OS: Windows XP
Case: HP case
CPU: AMD Athlon XP 3200+ 2.2GHz
Motherboard: HP OEM mobo
Memory: 2 DIMMs of 512MB DDR (1GB, 166MHz)
PSU: stock HP PSU
OS HDD (If you have one): 200GB IDE 5400RPM oldie
Storage HDD(s): 2x 160GB IDE 5400RPM oldies
Server Manufacturer: HP, even though it's just a desktop.










Sorry for the blurry picture, I took it with my phone while performing maintenance on it. It hadn't seen use in 3 years until today! Also, I have no idea why it's sideways xD


----------



## Jtvd78

Keep em coming, guys. Great servers!


----------



## AMD SLI guru

a few little goodies :-D



Router:

OS: Untangle

CPU: Intel Atom dual core with HT

Ram: 8gigs SO-DIMM

HDD: 120gig 5400 laptop drive

HTPC:

OS: Windows 7 64bit

CPU: Intel 2600K

Ram:16gigs DDR3 1600mhz

HDD: 250gig Laptop drive

GPU: Nvidia GTX550ti

Freenas Rig:

OS: Freenas

CPU: Core2duo @ 3ghz

Ram: 8gigs ECC DDR2 667

HDD: 6x 1TB drives & 6x 2TB drives

Cyberpower UPS's in the bottom. I'm still working on some folding rigs to install but it's pretty much the basic setup.


----------



## dushan24

Quote:


> Originally Posted by *AMD SLI guru*
> 
> a few little goodies :-D
> 
> 
> 
> 
> Router:
> OS: Untangle
> CPU: Intel Atom dual core with HT
> Ram: 8gigs SO-DIMM
> HDD: 120gig 5400 laptop drive
> 
> HTPC:
> OS: Windows 7 64bit
> CPU: Intel 2600K
> Ram:16gigs DDR3 1600mhz
> HDD: 250gig Laptop drive
> GPU: Nvidia GTX550ti
> 
> Freenas Rig:
> OS: Freenas
> CPU: Core2duo @ 3ghz
> Ram: 8gigs ECC DDR2 667
> HDD: 6x 1TB drives & 6x 2TB Drives
> 
> Cyberpower UPS's in the bottom. I'm still working on some folding rigs to install but it's pretty much the basic setup.


Very nice man, how much was the rack?

Most of my stuff is in towers, I'd love to get it into some 2U's


----------



## Oedipus

Dell rack = good times


----------



## EchoGecko

OS: Ubuntu 12.04 LTS + XBMC + Webmin
Case: Custom
CPU: AMD Athlon X2 4450e Brisbane 2.3GHz, 2x 512KB L2 cache, Socket AM2, 45W dual-core
Motherboard: ASUS M2N-SLI Deluxe AM2 NVIDIA nForce 570 SLI MCP ATX AMD
Memory: 2GB (512MBx4)
PSU: 850 watt. Yes, very much overkill, but it is more energy efficient than the 450W I was using, and since I plan on adding 1 hard drive a month it will do.
OS HDD (If you have one): 1TB, for OS and BT downloads
Storage HDD(s): 2.0TBx8 + 1.5TBx4, 20.74TB usable, going to add another 1.5TB soon
Server Manufacturer (Ex: Dell, HP, You?): ME

Speeds:
2TBx8 (onboard Sata + PCIE) Write 98 MBps, Read 96 MBps (Streaming Media)
1.5TBx4 (3Ware 9500s 12 port PCI-X in PCI slot) Write 70 MBps, Read 74-78 MBps (Backups)

SATA Ports
6 -3.0 Gbps -onboard MCP55 (1TB OS)(2TBx5)
2 -3.0 Gbps -PCI-E JMB362 (2TBx2)
2 -3.0 Gbps -PCI-E Sil 3132 (2TBx1)(16.0TB LVM2)
12 -1.5 Gbps -PCI-X in PCI slot 3ware 9500s (1.5TBx4)(6.0TB LVM2)

The setup shown here is my old one, it used drives in the following config
1x750GB OS+BitTorrent drive
4x1.5TB Media drive (LVM2)
7x1TB Backup drive in Raid 6

Yes, it's in a speaker box, as it's meant to be an HTPC, with XBMC running most of the time as a front end. I control Transmission through the web client or remote client, it has an RF adapter at the top for a remote, and thanks to the slow 120mm fans it is quiet and cool.

Power usage, measured at a few different levels:
-240 watts, startup, first 60 or so seconds
-85 watts, idle, all drives except OS drive spun down, downloading torrents
-125 watts, all drives spun up, backing up main computer to the system
-139 watts, all drives spun up, XBMC playing a 1080p movie

Drive temps: 32-38°C, depending on the day/summer/winter
CPU temps: 38-40°C, uses a SilentFlux bubble cooler, which is sort of like an all-in-one water cooling system without the need for a pump. It's silent, which is what I like.

The whole rig (minus drive cost) back in late 2010 was $316; that covers CPU, RAM, case build, RAID cards, PSU, motherboard, fans, and cables.

Now for the pictures.

This case has been damaged (dropped on a corner and cracked) and is no longer in use, so RIP. I will have some pics of my new rig soon.
The flash whited out the black screen; it shows the air filter that I installed to keep the dust out.


----------



## CrazyMonkey

My FTP Server (running Filezilla)

Case: OEM Case
PSU: LCPower 550W V2.2
Mobo: DFI Lanparty NF3 Ultra D
CPU: Opteron 170
RAM: OCZ 2x512MB DDR 500
VGA: MSI 6800GT 256MB
Storage: 1TB Seagate 32MB, 2x320GB Seagate SP 7200.11, 400GB WD
OS HDD: 160GB Maxtor SP
OS: Windows XP SP3 Pro


----------



## AMD SLI guru

Quote:


> Originally Posted by *dushan24*
> 
> Very nice Nice man, how much was the rack?
> Most of my stuff is in towers, I'd love to get it into some 2U's


I got the rack used for 500 bucks locally. Just searched around and found it. The model is a Dell 4220.

Quote:


> Originally Posted by *Oedipus*
> 
> Dell rack = good times


couldn't agree more


----------



## Jtvd78

New build coming up


----------



## blooder11181

What's the GeForce G210 for?


----------



## rockosmodlife

Quote:


> Originally Posted by *blooder11181*
> 
> the geforce g210 its for what?


His new build, duh!


----------



## blooder11181

The HD 6450 is better all the way.


----------



## Oedipus

Quote:


> Originally Posted by *AMD SLI guru*
> 
> I got the rack used for 500 bucks locally. just searched around and found it. The model # is a Dell 4220.
> 
> couldn't agree more


Black Box racks are sweet, too. The ones I deal with are 45U so they are really imposing, but I like how airy the Dells are.


----------



## Iris

Quote:


> Originally Posted by *Oedipus*
> 
> Black Box racks are sweet, too. The ones I deal with are 45U so they are really imposing, but I like how airy the Dells are.


I also like how they look. Some of the server guys I work with don't really care about looks, but there's nothing like a set of nice looking racks in a row.


----------



## Oedipus

Awww yeah. I believe that caring about the aesthetics of a system deployment is correlated to how much pride you take in your work. Ultimately, I understand why they think looks don't matter, but the aesthetics represent more than what meets the eye.


----------



## axipher

I think when it comes to server racks, or any electrical equipment in an industrial environment (my work), you are never going for the looks of the case or the hardware, but rather a clean installation. So things like cable management, panel layout, uniformity between racks and panels, etc.

Those are the things that are beautiful when you leave "consumer-space".


----------



## CSCoder4ever

Quote:


> Originally Posted by *Jtvd78*
> 
> New build coming up
> 
> 
> 
> 
> 
> 
> 
> 
> 


Nice, Another server I hope?


----------



## Iris

Quote:


> Originally Posted by *axipher*
> 
> I think when it comes to server racks, or any electrical equipment in an industrial environment (my work), you are never going for the looks of the case or the hardware, but rather a clean installation. So things like cable management, panel layout, uniformity between racks and panels, etc.
> 
> Those are the things that are beautiful when you leave "consumer-space".


That's partially what I meant. Dell racks look nice, but I've seen very poor cable management, and it looks terrible and is hell to troubleshoot. Even an ugly setup can look nice with organization and cable management.


----------



## Oedipus

Quote:


> Originally Posted by *axipher*
> 
> I think when it comes to server racks, or any electrical equipment in an industrial environment (my work), you are never going for the looks of the case or the hardware, but rather a clean installation. So things like cable management, panel layout, uniformity between racks and panels, etc.
> 
> Those are the things that are beautiful when you leave "consumer-space".


Absolutely. That's what I was referring to when I said "aesthetics," not like hot pink racks with red patch cables and switches with heart-shaped port covers.


----------



## Iris

Hot pink racks, that's an idea.


----------



## CSCoder4ever

Quote:


> Originally Posted by *Oedipus*
> 
> hot pink racks with red patch cables and switches with heart-shaped port covers.


Quote:


> Originally Posted by *Iris*
> 
> Hot pink racks, that's an idea.


That would be very amusing! lol


----------



## CloudX

Cool thread! Here is my little guy hanging out in the garage









OS: Windows Server 2008 Standard
Case: Asus EssentioCM5770
CPU: Intel Core2Quad Q8200 @ 2.33Ghz
Motherboard: Asus
Memory: 6GB DDR2 800
PSU: 350 watt
OS HDD (If you have one): 320GB Seagate 7200rpm
Storage HDD(s): 2x500GB WD 7200RPM
Server Manufacturer (Ex: Dell, HP, You?): Asus
Duties: sFTP, Air Playit Server, http, media hub

Sorry for the crappy pic, but that's all it is. I do have a small desk with a monitor and essentials on it. Never really sit there and use it though. Sometimes I'll play music with it when I'm working in the garage. That's why I got some old speakers out there


----------



## dushan24

Enough said...


----------



## Jtvd78

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Nice, Another server I hope?


YEAHH! Of course! Actually, I'm just upgrading my current server. I'm finally moving on to ESXi








The old parts are going to my mom's computer
Quote:


> Originally Posted by *blooder11181*
> 
> the geforce g210 its for what?


The motherboard doesn't have integrated graphics. I need it to install the OS.

EDIT: The rest of the parts are in. Though, I can't start the build until my current server is done backing up


----------



## Iris

This is the nicest data center I've been in: the Switch SuperNAP here in Vegas. I had to do data sanitization work for clients there. Just beautiful.


----------



## Jtvd78

Quote:


> Originally Posted by *Iris*
> 
> This is the nicest data center i've been in. The Switch SuperNAP here in Vegas. Had to do data sanitization work for clients there. Just beautiful.


Do you mind informing me on what data sanitization is?


----------



## Iris

Quote:


> Originally Posted by *Jtvd78*
> 
> Do you mind informing me on what data sanitization is?


Data wiping / formatting. Usually DoD 7-pass, depending on the request. We also crush and degauss hard drives.
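The multi-pass idea can be sketched in a few lines; this is a toy file-level illustration only (real sanitization runs against the raw block device, and the pass patterns here are simplified, not the literal DoD sequence):

```python
# Toy multi-pass overwrite: cycle through zeros, ones, and random data,
# syncing each pass to disk, then unlink the file. File-level wiping is
# NOT a substitute for wiping the raw device (or degaussing/crushing).
import os

def wipe_file(path: str, passes: int = 7) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            if i % 3 == 0:
                data = b"\x00" * size      # zeros pass
            elif i % 3 == 1:
                data = b"\xff" * size      # ones pass
            else:
                data = os.urandom(size)    # random pass
            f.seek(0)
            f.write(data)
            f.flush()
            os.fsync(f.fileno())           # force each pass out of the cache
    os.remove(path)
```

Even then, wear-leveling on SSDs and remapped sectors on spinning disks are why physical destruction stays in the toolbox.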


----------



## pm40elys40

Current:
Zotac NM10-E-E-ITX (Atom D525)
Kingston KVR800D2N5/2G
Fortron FSP150-50TNF
Sandisk Extreme3 8.0GB
WD3200BEVT
Intel Centrino Advanced-N 6200
Industrial 1U case
UbuntuLinux-1210-AMD64

No fans
Power requirements: 30W

MiniMedia Server


----------



## HandGunPat

My current server started off as a surveillance computer; I swapped out the motherboard and case, and added a better heatsink, more RAM, and a GPU!

- Windows XP SP3 32-bit
- Intel Pentium E2140 @ 1.60 GHz
- Xigmatek HDT-S1283 w/ Bolt Through Kit
- 2GB DDR2 800 MHz Kingston RAM
- ECS P45T-A Motherboard
- XFX HD 4350 1GB
- Corsair CX430
- NZXT Source 210
- 2x 250GB Samsung Drives

I will be upgrading to a HP Desktop that will actually become a server!

- HP Pro 3000MT
- Intel C2Q Q9550 @ 2.83 GHz
- Stock HP Motherboard
- Dual Gigabit Intel NIC
- Corsair H50 (Have it laying around)
- 3x2TB WD RED Drives in Raid 5
- 4x2GB DDR3-1600 Crucial RAM
- Corsair CX430
- Server 2008 R2 or WHS (Haven't decided yet)
- Still have to find a RAID card.

This server will run basic network drives and be a torrent box, as well as a media box serving media to the various devices around the house. It will also back up the PCs in the house. I might run folder redirection from the local PCs on the network back to the server; I'm unsure though. Might play with some IIS stuff in the future also!

I will post pictures when I get home.


----------



## vipergtrdj

My file/torrent/media server is an older Dell that my work upgraded away from last year. They upgrade every 3 years.
Intel Core 2 Duo 2.6GHz (I think)
7GB DDR2 RAM
Galaxy GT 430 1GB
500GB SATA Western Digital HD
Gigabit NIC
Windows 8 Enterprise

Not much... but it serves its purpose. I also play games on it occasionally since it's connected to the TV.









I am thinking of setting up a small web server where I can upload files when I am on the go; right now I have it limited to within the LAN.
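For the on-the-go upload idea, a tiny standard-library endpoint is enough to prototype with. This sketch has no auth or TLS, so it shouldn't be exposed beyond the LAN as-is; the port and upload directory are arbitrary choices:

```python
# Minimal HTTP upload endpoint: PUT /name stores the request body as a file.
# No authentication or TLS; LAN prototyping only.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

UPLOAD_DIR = "uploads"

class UploadHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        name = os.path.basename(self.path)       # strip any path trickery
        length = int(self.headers.get("Content-Length", 0))
        os.makedirs(UPLOAD_DIR, exist_ok=True)
        with open(os.path.join(UPLOAD_DIR, name), "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)                  # 201 Created
        self.end_headers()

# To serve on the LAN:
#   HTTPServer(("0.0.0.0", 8080), UploadHandler).serve_forever()
```

Uploading is then just `curl -T file.txt http://server:8080/file.txt`.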


----------



## CloudX

I didn't know we could post work servers!! I have a cab in two very nice facilities in LA, very similar to that Vegas pic a couple posts back. Getting registered felt like joining the Secret Service; I almost thought they were going to ask for my blood.









Looking forward to deploying and running a whole floor of servers for my employer one day!


----------



## Jtvd78

The new server is complete!

Before:

After:


OS: ESXi Host
Case: NZXT Source 220
CPU: AMD FX-8320
Motherboard: GIGABYTE GA-990FXA-UD3
Memory: G.SKILL Sniper Gaming Series 32GB 1866
PSU: SeaSonic M12II 620W
OS HDD (If you have one): Mushkin Enhanced Ventura Plus 8GB USB 3.0
Storage HDD(s): 3xSAMSUNG EcoGreen F4 2TB, 2xWD 640GB Green
Server Manufacturer (Ex: Dell, HP, You?): MEEE!!!!!

This is pretty much a major upgrade from my old server. The only hardware that I kept was the CPU cooler and the hard drives.
I haven't actually set up anything yet (I just finished building), but it's gonna run:
Backups
Mumble server
SFTP server
NAS
Minecraft server
Terraria server
Seedbox
News downloader


----------



## HandGunPat

I love the NZXT Source 2xx Series cases. I will move the HP Desktop out of its case into a Source 220.


----------



## Jtvd78

Quote:


> Originally Posted by *HandGunPat*
> 
> I love the NZXT Source 2xx Series cases. I will move the HP Desktop out of its case into a Source 220.


It's a great case. I did a build with one for my brother's computer, and I love it.
I especially like it for a server build because it's the only case its size with that many 3.5" bays


----------



## Ecstacy

Quote:


> Originally Posted by *HandGunPat*
> 
> I love the NZXT Source 2xx Series cases. I will move the HP Desktop out of its case into a Source 220.


Newegg has the Source 210 White for $29.99 after MIR.


----------



## pvt.joker

was in the main data center for work a couple weeks back and snuck this one.. 1 of about 30 rows of HP blades.. I was upgrading about 300 of em. This is 1 of 3 data centers, and this is only 1 room of 3 at this location..







I love my job..


----------



## killabytes

Blargh, stop posting work photos people.

This is for personal systems! Let's keep it that way!


----------



## CSCoder4ever

Quote:


> Originally Posted by *killabytes*
> 
> Blargh, stop posting work photos people.
> This is for personal systems! Let's keep it that way!


Agreed! Though sometimes it's fun looking at data centers - but at least show your own personal server too!


----------



## killabytes

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Agreed! though sometimes it's fun looking at data centers, but at least show your own personal Server too!


I look at one all day. Booooooring!


----------



## CSCoder4ever

Quote:


> Originally Posted by *killabytes*
> 
> I look at one all day. Booooooring!


Someday I'll work in a place with data centers... I'll have to enjoy my own server for a while... lol


----------



## killabytes

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Someday I'll work in a place with data centers... I'll have to enjoy my server for the while... lol


Keep at it. It will come. I'm on my second job based inside a DC. With each step it gets better!


----------



## Stige

Just what I rent for no apparent reason lol

http://stigez.com/phpsysinfo/index.php?disp=dynamic


----------



## CSCoder4ever

Quote:


> Originally Posted by *killabytes*
> 
> Keep at it. It will come. I'm on my second job based inside a DC. With each step it gets better!


Alright, but since this is a "POST YOUR SERVER!!!" thread, I'll share a pic of what I added on to my server



Though I am thinking about replacing the case, anyone want to recommend a good case that doesn't collect a whole lot of dust?


----------



## killabytes

Quote:


> Originally Posted by *Stige*
> 
> Just what I rent for no apparent reason lol
> http://stigez.com/phpsysinfo/index.php?disp=dynamic


*sigh*

I find myself doing the same. I rent for a few months then snap out of it thinking...why?

I have had a VPS for some time from ChicagoVPS

2GB RAM
50GB HDD
1Gbps Internet
2TB Bandwidth
2 IPV4 addresses

$7/month

Love it









Quote:


> Originally Posted by *CSCoder4ever*
> 
> Alright, but since this is a "POST YOUR SERVER!!!" thread, I'll share a pic of what I added on to my server
> 
> Though I am thinking about replacing the case, anyone want to recommend a good case that doesn't collect a whole lot of dust?


That's clean looking. Post up more!


----------



## beers

Quote:


> Originally Posted by *Stige*
> 
> Just what I rent for no apparent reason lol
> http://stigez.com/phpsysinfo/index.php?disp=dynamic


phpsysinfo still looking nice.
Quote:


> Originally Posted by *killabytes*
> 
> *sigh*
> I find myself doing the same. I rent for a few months then snap out of it thinking...why?
> I have had a VPS for sometime form ChicagoVPS
> 2GB RAM
> 50GB HDD
> 1Gbps Internet
> 2TB Bandwidth
> 2 IPV4 addresses
> $7/month
> Love it
> 
> 
> 
> 
> 
> 
> 
> 
> Thats clean looking. Post up more!


I <3 the ChicagoVPS VPSes. I bought two of them in the Black Friday sale for $30/year apiece with around the same specs; they might have been cut down in bandwidth and IPs though.


----------



## CSCoder4ever

Quote:


> Originally Posted by *killabytes*
> 
> Thats clean looking. Post up more!


Alright, request granted - and you didn't notice the Prodigy next to it?









Also, these were taken on an older 3.5 megapixel camera. lol

anyways, a couple more:



not as good, but it's clear enough. lol



The 2nd part of the first picture: since the H61 doesn't support AHCI natively... this little card does the hot-plug work for it.

I'll get to dusting it soon enough, though the air compressor I have is broken, so it's going to take a while until I can actually clean it... but it will happen soon!

And that's part of the reason why I want to replace this case with something more dust-proof... any recommendations?


----------



## wholeeo








Was about to overkill it with an H60 but decided not to.


----------



## CSCoder4ever

Quote:


> Originally Posted by *wholeeo*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> Was about to overkill it with a H60 but decided not to,


Is that 16GB of memory you have on there? If you're using WHS 2011, you're only able to use 8GB of it, just to let ya know.


----------



## wholeeo

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Is that 16GB of memory you have on there? if you are using WHS 2011 you are only able to use 8GB of them, just to let ya know.


Yeah, I know. I had some extra RAM laying around from an RMA that I wouldn't make much reselling, so I decided to spread the 24GB amongst my HTPC & server. I'll be switching the OS over to Windows Server 2008 R2 E once I get a cheap SSD for the server.


----------



## CiBi

Quote:


> Originally Posted by *wholeeo*
> 
> 
> Was about to overkill it with a H60 but decided not to,


at least slap a Hyper 212+ on there


----------



## CSCoder4ever

Quote:


> Originally Posted by *CiBi*
> 
> at least slap a Hyper 212+ on there


Does that go for my server too?


----------



## CiBi

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Does that go for my server too?


Yes, and LOTS of dust filters! I don't know why but stock CPU coolers make me feel a little sick...


----------



## CSCoder4ever

Quote:


> Originally Posted by *CiBi*
> 
> Yes, and LOTS of dust filters! I don't know why but stock CPU coolers make me feel a little sick...


Alright, I'll get that cooler at some point then. And that's just it - any cases you'd highly recommend with a good number of dust filters?


----------



## CiBi

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Alright, I'll get this cooler at some point then, and that's just it, any cases you'd highly recommend with a good deal of dust filters?


The Corsair 900D will have lots of magnetic dust filters, even on top of the case... But it will cost between $300 and $400, and I guess that's more than you are willing to spend on a server case.

You could also just keep your current case and buy some magnetic dust filters...


----------



## killabytes

Dust is an ongoing battle for me as well. The old Antec server case I used had removable filters, always had to keep them clean. Now I'm using a Thermaltake Matrix, full mesh front. Same deal.

Joys of having servers.


----------



## FiX

Quote:


> Originally Posted by *Stige*
> 
> Just what I rent for no apparent reason lol
> http://stigez.com/phpsysinfo/index.php?disp=dynamic


I had both the hard drives in my EX4S RAID1 array go bad








(Assuming that that's a dedicated server from Hetzner like I think it is)


----------



## Stige

Quote:


> Originally Posted by *FiX*
> 
> I had both the hard drives in my EX4S RAID1 array go bad
> 
> 
> 
> 
> 
> 
> 
> 
> (Assuming that that's a dedicated server from Hetzner like I think it is)


Yeah it is. By far the cheapest ones out there, and I know a friend who used to use them as well - the only reason I actually rented from them.


----------



## Whisenhunter

Server that I currently have for personal use:

Dell PowerEdge 1900
Single 3.00GHz Intel Xeon X5365
10GB Fully Buffered ECC SDRAM
Single 800 Watt PSU
Dell XM771 Perc 5i SAS Raid Controller
Dual 1TB Seagate Barracuda Hard Drives in a RAID1 Configuration
Single Broadcom BCM5708 Ethernet Card

VMWare ESXi 5.1.0
Virtual Machines Running on Host:
- Windows Server 2003R2
- Windows 7 Ultimate

I also have a bunch of HP DL385 G1s lying around with dual 2.4GHz AMD Opteron processors and 16GB of memory each. Looking to repurpose these for something or sell them; right now they're used as test servers for miscellaneous projects I'm given.

Pictures soon.


----------



## swat565

OK finally getting to posting up my setup:



From top to Bottom (left to right):

OpenIndiana based ZFS storage, Perc 6i/r flashed to IT firmware with two pools:
-x4 147GB Seagate Cheetah15k Drives in Raidz1
-x4 250GB 7.2k Seagate Barracuda Drives in Raidz1

Dell Dimension 3000 acting as router running PFsense
Cisco Catalyst 4006 with Supervisor Engine II, 1 48 port 10/100 module and 1 24 port gigabit module
x3 Poweredge 1950's with Dual Xeon 5050 and 8GB of Ram
x2 Poweredge 2950 (currently powered off and used just for test use when needed)
x1 Poweredge SC1435 with Dual Opteron 2380 and 16GB of Ram

At the moment I'm getting ready to set these up in an ESXi High Availability cluster. Nothing is up and running yet, as the Xeon 5000 series isn't supported for HA (keeping my eyes peeled for Xeon 5100s, which are). I have about 6 other 1950's sitting in the corner that are missing CPU/RAM that I might add in later on.

Another upgrade I'm tempted to do is get a Supervisor Engine II+ for my Catalyst 4006, as then I won't have to deal with CatOS and can use IOS, which makes managing my setup a lot easier.

Also, my 42U rack is currently in my shed; hopefully this summer, once the snow is gone, all this equipment will be on it.

And before anyone asks I do own all of the hardware


----------



## wholeeo

Quote:


> Originally Posted by *CiBi*
> 
> at least slap an Hyper 212+ on there


At the moment my server is in my basement which is pretty cold.



Perhaps in Spring/Summer I'll need to slap something else on it.


----------



## CloudX

Yes it's cold! hah


----------



## Ecstacy

Quote:


> Originally Posted by *wholeeo*
> 
> At the moment my server is in my basement which is pretty cold.
> 
> Perhaps in Spring/Summer I'll need to slap something else on it.


Core 0 and Core 1 are 11 degrees apart... You might need to re-seat your heatsink.


----------



## wholeeo

Quote:


> Originally Posted by *Ecstacy*
> 
> Core 0 and Core 1 are 11 degrees apart... You might need to re-seat your heatsink.


I'm not going to re-seat the Intel stock heatsink. From what I've come to learn around these forums, the further away temps are from TjMax the more inaccurate they are. It could also be due to Intel's choice of using pigeon poop for thermal paste between the die and IHS.


----------



## Oedipus

Quote:


> Originally Posted by *wholeeo*
> 
> I'm not going to re-seat the Intel stock heatsink. From what I've come to learn around these forums, the further away temps are from TjMax the more inaccurate they are. It could also be due to Intel's choice of using pigeon poop for thermal paste between the die and IHS.


Or the HSF isn't on there correctly.


----------



## wholeeo

Quote:


> Originally Posted by *Oedipus*
> 
> Or the HSF isn't on there correctly.


It's on there correctly; there's only one way it can be put on. It's not my first time around the block. Besides, there's no way in Santa's workshop that my basement is 6C, so what does that tell you?


----------



## crust_cheese

There's a shocking lack of UNIX in this thread.

OS: OpenBSD 5.1 (preparing to make the move to 5.2 as we speak)
Case: Dell OEM case
CPU: 1.6 GHz Pentium 4
Motherboard:
RAM: 512 MB (DDR [as in DDR 1], I think)
PSU: Whatever generic crap came with the box.
OS HDD: 100 GB IDE drive. Not sure what manufacturer.
Server manufacturer: Dell.

To be honest, it's a generic Dell box I stole off the side of the road one evening.
It's currently set up as a router for my room network and as a web server (for the heck of it).
It's interesting to accumulate some networking experience.


----------



## Oedipus

Those clamshell cases are the worst. Luckily Dell didn't stick with that design for very long.


----------



## CloudX

Side of the road server ftw!!


----------



## NKrader

mmmm building right now









powder-coated Lian Li PC-A77FB; old case I couldn't sell, so now it's the server case








SuperMicro CSE-M35T-1B, 5x hotswap bay from old build








Dual-CPU Supermicro H8DMi-2 plus 4GB of ECC RAM - got that for $40.00
2x AMD Opteron 2419 EE six-core 1.8GHz - $10 for those

now to find a power supply and hard drives and heatsinks... the most expensive part LOL

pretty stoked for this one, just wanted to share with you guyssssssss


----------



## BiscuitHead

Quote:


> Originally Posted by *Oedipus*
> 
> Those clamshell cases are the worst. Luckily Dell didn't stick with that design for very long.


QFT


----------



## caraboose

I'm in the middle of moving my main server right now, so only outside pictures for the time being...


OS: Server 2008 R2
Case: Norco RPC-4116 (one hdd power LED is dead)
CPU: 2x Opteron 6128
Motherboard: Asus KGPE-D16
Memory: 8x4GB Kingston
PSU: Corsair HX1000 (for the time being)
OS HDD (If you have one): Patriot Inferno 60GB
Storage HDD(s): 4x2TB Seagate M001 in RAID 5, 4x1TB WD Blue in RAID 10, 8x500GB WD Black in RAID 50. Two RAID controllers: an Areca ARC-1222 and a Perc 5/i (Perc being replaced by another ARC-1222 or similar)
Server Manufacturer: Me


----------



## Norse

OS: ESXI
Case: N/A
CPU: 4x Quad core 8347HE 1.9ghz
Motherboard: DL585 G2
Memory: 32GB (16x2GB)
PSU: Redundant
OS HDD (If you have one): USB Stick
Storage HDD(s): 3x500GB Raid 5
Server Manufacturer (Ex: Dell, HP, You?): HP

HP DL585 G2, upgraded to quad-cores (G2, G5 and G6 are identical). Used for virtualisation training; it currently has 18 VMs on it simulating a whole network (3 servers, 14 desktops and 1 "wallboard" PC), which are all exact copies of some of the things at work.

Will in the future be moved into my rack at work, where I run my own servers for testing and mucking about on (as well as into an air-conned room, as it generates quite a bit of heat).

Power usage is 285 watts idling (was 335 or so with 4x dual-core 2.6GHz and 16GB (8x2GB)). Not sure what it draws at 100% CPU/memory load, but I'm sure it'll scare my bank balance.

Pics are from before I upgraded the memory and CPUs (stock was 4x 2.6GHz dual-core and 8x 2GB DIMMs); the new thermal paste is MX5, so it should result in nice temps.


----------



## swat565

Quote:


> Originally Posted by *Norse*
> 
> OS: ESXI
> Case: N/A
> CPU: 4x Quad core 8347HE 1.9ghz
> Motherboard: DL585 G2
> Memory: 32GB (16x2GB)
> PSU: Redundant
> OS HDD (If you have one): USB Stick
> Storage HDD(s): 3x500GB Raid 5
> Server Manufacturer (Ex: Dell, HP, You?): HP
> HP DL585 G2, upgraded to quad cores (G2, G5 and G6 are identical), Used for virtualisation for training, currently have 18 VM's on it simulating a whole network (3 servers, 14 desktops and 1 "wallboard" PC) which are all exact copies of some of the things at work
> Will in the future be moved into my rack at work where i run my own servers for testing and mucking about on (aswell as into an airconned room as it generates quite a bit of heat)
> Power usage is 285 watts idling (was 335 or so with 4xdual core 2.6ghz and 16GB (8x2GB), not sure when 100% CPU/Memory load but im sure it'll scare my bank balance
> Pics are before i upgraded the memory and CPU's (stock was 4x2.6ghz dual core and 8x2GB dimm), new thermal paste is MX5 so should result in nice temps


Very nice. How much CPU usage are you seeing for what you're running?


----------



## Norse

Quote:


> Originally Posted by *swat565*
> 
> Very nice, how much cpu usage are you seeing for what your running?


About 5-10%, as it's just idling; none of the virtual servers really do anything intensive (Exchange, Active Directory, file shares, etc.). It's mostly just so I have a borderline copy of the office and can test things - e.g. defragging the Exchange DB completely FUBAR'ed it, and because it was in the test environment... I didn't get put up against the wall, blindfolded and shot.


----------



## Cyrious

AMD Phenom II x4 940 (Underclocked to 2ghz)
ASUS M3A78-EM Micro-ATX motherboard
2x2GB DDR2-800 (plan on upgrading to 8 gigs once my main rig gets switched over to a DDR3 board)
Integrated Radeon HD 3200 graphics (ewww)
160GB, 120GB, and 40GB HDDs (40 is OS, rest is storage)
Win7 Ultimate 32-bit (will upgrade to 64-bit in the future)

Not a true server, as I use it directly on a regular basis as a web browsing machine, but it does perform some minor file server functions, and in the future I intend to run a gameserver or two on it. No pix of the guts because frankly it's *horrid* inside, and it doesn't help that it's in a re-used HP case. The largest fan the case supports is 80mm, and officially only in one spot. I had to ghetto-rig in 2 more 70mm fans, one right next to the processor (intake) and another on the expansion slots (exhaust), so the processor is not continually recycling the same air, heating up, and becoming a noisy little bastard (the CPU fan maxed does 5500 RPM; the push/pull fans do 6500, made worse by 13 blades instead of 7). This machine IS right next to my head.


----------



## VictorB

Part3 is online now!









I did some hardware changes, so now I have 6x 3TB in raidz2 and a 2x 500GB mirror for boot and torrents. I also put in an IBM M1015 for more SATA channels.

The OS is back to ZFSguru. I love the disk management system. And the speeds are great: 110+ MB/s over NFS and 100+ over Samba!

Also, I measured the idle power consumption of every single part, so there is a nice overview of the power consumption.

I also ran some benchmarks and show my UPS

















Part1! 



Part2!


----------



## killabytes

Finally all cleaned up!

Specs:

Intel Core 2 Quad Q6600
8GB DDR2 RAM
x2 200GB SATA in RAID 1 for OS
x4 3TB SATA in RAID 5 for data
OS: Windows Server 2008 R2








I'm currently working on getting a 1U case to put my Intel Atom board in for a new pfSense system. It's time to replace my WatchGuard Firebox II.


----------



## Pip Boy

Quote:


> Originally Posted by *VictorB*
> 
> 
> 
> 
> 
> Part3 is online now!
> 
> 
> 
> 
> 
> 
> 
> 
> I did some hardware changes so now i have 6x 3tb in raidz2 and 2x 500gb mirror for boot and torrent. I also put in a IBM M1015 for more sata channels.
> The OS is back to ZFSguru. I love the disk management system. And the speeds are great 110+ over nfs and 100+ over samba!
> Also i measured the idle power consumption of every single part. So there is a nice overview in power consumption.
> I made some benchmarks and show my UPS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Part1!
> 
> 
> 
> Part2!


hi Victor, great setup









I posted on part 1 of your vid a while back mentioning that I have an almost exact setup but use an A10-5800K (got it cheap).

I'm interested in the 2x 500GB mirror - is it set up as a kind of redundancy for the OS? And if so, what stopped you from simply taking an offline bare-metal backup (using something like Redo Backup) to the other drive every few months, or before an update? That way any errors are not mirrored.

thanks


----------



## killabytes

Quote:


> Originally Posted by *phill1978*
> 
> hi Victor, great setup
> 
> 
> 
> 
> 
> 
> 
> 
> i posted on part 1 of your vid a while back mentioning that i have an almost exact setup but use an A10-5800k (got it cheap)
> Im interested in the 2 x 500 gb mirror? is this setup as a kind of redundancy for the O/S ? and if so what stopped you from simply taking an offline baremetal backup (using something like redo backup) every few months or before an update to the other drive? this way any errors are not mirrored
> thanks


Forgive me for answering for someone else, but... RAID isn't a backup. It's for redundancy - it's there to keep servers/PCs running while a drive has failed. It has little or nothing to do with the data itself.


----------



## Pip Boy

Quote:


> Originally Posted by *phill1978*
> 
> hi Victor, great setup
> 
> 
> 
> 
> 
> 
> 
> 
> i posted on part 1 of your vid a while back mentioning that i have an almost exact setup but use an A10-5800k (got it cheap)
> Im interested in the 2 x 500 gb mirror? is this setup as a kind of redundancy for the O/S ? and if so what stopped you from simply taking an offline baremetal backup (using something like redo backup) every few months or before an update to the other drive? this way any errors are not mirrored
> thanks


Quote:


> Originally Posted by *killabytes*
> 
> Forgive me for answering for someone else but...RAID isn't a backup. It's for redundancy. It's to keep the servers/PCs running while a drive has failed. Has little or nothing to do with data itself.


I can't forgive the fact you didn't read what I posted









which was "is the setup a kind of redundancy for the O/S?" and "what stopped you taking an offline backup?"

I never said RAID was a backup; I just asked if the O/S itself was in a mirror, and if so, why that was chosen over an offline O/S backup. I know the difference between a backup and redundancy.

the reason I asked is because I have been following his build, and he was using a thumb drive for the server OS before


----------



## Lord Xeb

PowerEdge 1950
2x Core 2 Quad Xeons @ 1.6GHz
16GB of ram
2 300GB 10k SAS drives RAID 0 using Perc 5/i
Redundant PSUs.
ESXi

Got a second one coming.

I believe the new one will be
2 2GHz quads
4GB ram (pulling 8 from the above one)
3 300GB 3k SAS w/ perc 5/i RAID 0
ESXi


----------



## xinel

HP N36L
4GB usb OS hdd
4x2TB hdds
8GB ram
freenas box, file server and torrents

whitebox
atom n450
2GB ram
500GB hdd
ubuntu server, ntp, openssh, openvpn, iptables etc


----------



## caraboose

Server rack finally got its new home.
Nothing is wired up yet. Just got it all moved today. Tomorrow, the fun part.




Edit: well, for some reason it rotated the last two pictures..


----------



## VictorB

Quote:


> Originally Posted by *phill1978*
> 
> hi Victor, great setup
> 
> 
> 
> 
> 
> 
> 
> 
> i posted on part 1 of your vid a while back mentioning that i have an almost exact setup but use an A10-5800k (got it cheap)
> Im interested in the 2 x 500 gb mirror? is this setup as a kind of redundancy for the O/S ? and if so what stopped you from simply taking an offline baremetal backup (using something like redo backup) every few months or before an update to the other drive? this way any errors are not mirrored
> thanks


It's just a simple mirror







The OS is simple to set up; I don't need a full backup of it.
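For anyone wondering what a mirror like that looks like under the hood, this is roughly it with the ZFS command-line tools (just a sketch - the pool name `boottank` and the `/dev/ada*` device names are placeholders, and an appliance OS like ZFSguru normally does the bootable parts for you):

```shell
# Create a two-way mirror from two whole disks (placeholder device names).
zpool create boottank mirror /dev/ada0 /dev/ada1

# Check that both halves of the mirror are ONLINE.
zpool status boottank

# If one disk dies, swap in a new one and let ZFS resilver the mirror.
zpool replace boottank /dev/ada0 /dev/ada2
```

Both disks hold a full copy of the pool, so the box keeps running if either one fails - which is exactly redundancy, not backup.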


----------



## Pip Boy

Quote:


> Originally Posted by *VictorB*
> 
> Its just a simple mirror
> 
> 
> 
> 
> 
> 
> 
> The OS is simple to setup don't need a full backup of it.


I hadn't realized that ZFS could mirror the actual OS! Does the OS contain ZFS or pool information that might be critical should the OS die, or is that only on the physical storage?


----------



## Dream Killer

Quote:


> Originally Posted by *VictorB*
> 
> 
> Part3 is online now!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did some hardware changes so now i have 6x 3tb in raidz2 and 2x 500gb mirror for boot and torrent. I also put in a IBM M1015 for more sata channels.
> 
> The OS is back to ZFSguru. I love the disk management system. And the speeds are great 110+ over nfs and 100+ over samba!
> 
> Also i measured the idle power consumption of every single part. So there is a nice overview in power consumption.
> 
> I made some benchmarks and show my UPS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Part1!
> 
> 
> 
> Part2!


RAID5 is such a terrible thing. I'm glad someone else sees the light in ZFS here.

ZFS is so easy to use that a front end for a couple of zpools is unnecessary. If you're comfortable with the CLI, you should just do a bare illumos install to remove the overhead of ZFS Guru.
Quote:


> Originally Posted by *phill1978*
> 
> Quote:
> 
> 
> 
> Originally Posted by *VictorB*
> 
> Its just a simple mirror
> 
> 
> 
> 
> 
> 
> 
> The OS is simple to setup don't need a full backup of it.
> 
> 
> 
> i hadn't realized that zfs could mirror the actual os ! Does the os contain ZFS or pool information that might be critical should the os die or is that only on the physical storage?

ZFS stores information in metadata inside the zpool's drives. You can yank out all the members of the zpool, transplant them into a different OS that supports ZFS, or even a different architecture, and just type 'zpool import'. The zpool automagically restores itself - smb/cifs shares and all.
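As a rough sketch of that move between machines (the pool name `tank` is a placeholder):

```shell
# On the old box: cleanly detach the pool first (optional but tidy).
zpool export tank

# Move the disks to the new machine, then scan and import.
zpool import        # with no arguments, lists pools found on attached disks
zpool import tank   # brings the pool (and its datasets) back online
```

If the old system died without a clean export, `zpool import -f tank` forces the import.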


----------



## Pip Boy

Quote:


> Originally Posted by *Dream Killer*
> 
> RAID5 is such a terrible thing. I'm glad someone also sees the light in ZFS here.
> ZFS is so easy to use that a front end for a couple of zpools is unnecessary. If you're comfortable with the CLI, you should just do a bare illumos install to remove the overhead of ZFS Guru.
> ZFS stores information in the metadata inside the zpool's drives. You can yank out all the members of the zpool, transplant them in a different os that supports ZFS or even a different architecture and just type 'zpool import'. The zpool automagically restores itself - smb/cifs shares and all.


nice









Do you still use the thumb drive for anything? I have ZFS guru running off a thumb drive attached to a USB 3.0 internal header (overkill, as it's 64GB).


----------



## Jeci

I thought you guys would be the best people to ask, what drives would you all recommend for a 4 x 2TB RAID5 setup? I'm going to assume you're all going to say steer clear of WD greens/blues?


----------



## killabytes

Quote:


> Originally Posted by *Jeci*
> 
> I thought you guys would be the best people to ask, what drives would you all recommend for a 4 x 2TB RAID5 setup? I'm going to assume you're all going to say steer clear of WD greens/blues?


I personally prefer Hitachi. I remember a study, perhaps by Google, that found they have a better MTTF. I "think" it's the drive of choice for them as well.

With that said, almost any enterprise level drive would be better than consumer.

I have zero experience with the new WD Reds.


----------



## tycoonbob

Quote:


> Originally Posted by *killabytes*
> 
> I personally prefer Hitachi. I remember a study that, perhaps Google, did that found they have a better MTTF. I "think" it's the drive of choice for them as well.
> With that said, almost any enterprise level drive would be better than consumer.
> I have zero experience with the new WD Reds.


I think you mean "MTBF", hehe. But I agree...I am a huge Hitachi fan, and since Hitachi's consumer line is now owned by Toshiba...I am a big fan of Toshiba (for consumer drives). The DT01ACA300 drives are 3TB, 64MB cache, 7200RPM...2-year warranty, around $150. They have CCTL (which is their version of WD's TLER, which is otherwise only available on WD enterprise drives or the newer WD Reds).

So if you are looking for 2TB drives...I would highly recommend this:
Toshiba DT01ACA200 - 2TB, 7200RPM, 64MB Cache, SATA III...2 year warranty, $109.99

(Newegg has the warranty information listed wrong. On Toshiba's site they clearly state their DT01ACAx00 drives have a 2 year warranty)


----------



## Citra

Quote:


> Originally Posted by *tycoonbob*
> 
> I think you mean "MTBF", hehe. But I agree...I am a huge Hitachi fan, and since Hitachi is now owned by Toshiba...I am a big fan of Toshiba (for consumer drives). The DT01ACA300 drives are 3TB, 65MB Cache, 7200RPM...2 year warranty, around $150. They have CCTL (which is their version of WD's TLER, which is only available in WD enterprise drives, or the newer WD REDs).
> So if you are looking for 2TB drives...I would highly recommend this:
> Toshiba DT01ACA200 - 2TB, 7200RPM, 64MB Cache, SATA III...2 year warranty, $109.99
> (Newegg has the warranty information listed wrong. On Toshiba's site they clearly state their DT01ACAx00 drives have a 2 year warranty)


Hitachi is WD.


----------



## Dream Killer

Quote:


> Originally Posted by *phill1978*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> RAID5 is such a terrible thing. I'm glad someone also sees the light in ZFS here.
> ZFS is so easy to use that a front end for a couple of zpools is unnecessary. If you're comfortable with the CLI, you should just do a bare illumos install to remove the overhead of ZFS Guru.
> ZFS stores information in the metadata inside the zpool's drives. You can yank out all the members of the zpool, transplant them in a different os that supports ZFS or even a different architecture and just type 'zpool import'. The zpool automagically restores itself - smb/cifs shares and all.
> 
> 
> 
> nice
> 
> 
> 
> 
> 
> 
> 
> 
> 
> do you still use the thumb drive for anything? i have ZFS guru running on an usb 3.0 internal header with a thumb drive attached (overkill as its 64gb)

I use an internal flash drive for initial boot into the OS. I store Zones, KVMs and SMB shares inside the ZFS pool.
Quote:


> Originally Posted by *Jeci*
> 
> I thought you guys would be the best people to ask, what drives would you all recommend for a 4 x 2TB RAID5 setup? I'm going to assume you're all going to say steer clear of WD greens/blues?


Companies saw the trend in SOHO RAID setups and are making products targeting such people. It's really sad because it wasn't that long ago when Seagate made a good, well-rounded drive like the 7200.11 at cheap prices and slapped a 5 year warranty on it. Don't worry about what a company wants you to buy, do your research and look at what you expect out of your array.

Some questions to ask yourself:

*What kind of load am I going to put through drives?*
Most SOHO drives spend most of their lives idle.

*What kind of environment are the drives subjected in?*
Cool the drives properly and make sure they're mounted in a way to isolate and reduce vibration from the other drives. Don't move your server while it's live, yatta yatta yatta.

*When (NOT 'IF'!) a drive fails, can I afford an extra one?*
You need 4x for the array, get 5. Sometimes the cheaper drive makes more sense than the more durable, expensive one.

I personally went with the cheapest WD Green drives. They have lower MTBF, don't tolerate vibration as much and are a bit slower. However their price point allowed me to buy more of them so it's cheaper to replace the failed disks. I currently have 2 brand new ones in caddies ready to go should drives fail on the live array. After all, we chose redundancy because we expect failure, right?

*When a controller fails, will I find another one?*
Controllers fail just as badly as drives - and it's never graceful. Finding a cheap PERC6i on eBay is nice, but it has most likely been pulled from a hard-working server that's been running 24/7 for the past 6 years. RAID layouts also differ per controller, so it would be near impossible to recover your data if another controller can't be found. Buy two.

I chose ZFS because it's not controller dependent. One less device to fail and replace.
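Putting that advice together in ZFS terms - buy the extra drive now and let the pool hold it as a hot spare (a sketch only; `tank` and the `/dev/sd*` device names are placeholders):

```shell
# Six data disks in raidz2: the pool survives any two simultaneous failures.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# ...plus the spare you bought for "when", not "if".
zpool add tank spare /dev/sdh

# With a fault-management daemon running, ZFS can swap the spare in
# automatically when a disk faults; autoreplace also covers same-slot swaps.
zpool set autoreplace=on tank
```

There's no RAID controller in the data path, so there's no second controller to stockpile either.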


----------



## killabytes

Quote:


> Originally Posted by *tycoonbob*
> 
> I think you mean "MTBF", hehe. But I agree...I am a huge Hitachi fan, and since Hitachi is now owned by Toshiba...I am a big fan of Toshiba (for consumer drives). The DT01ACA300 drives are 3TB, 64MB Cache, 7200RPM...2 year warranty, around $150. They have CCTL (which is their version of WD's TLER, which is only available in WD enterprise drives, or the newer WD REDs).
> So if you are looking for 2TB drives...I would highly recommend this:
> Toshiba DT01ACA200 - 2TB, 7200RPM, 64MB Cache, SATA III...2 year warranty, $109.99
> (Newegg has the warranty information listed wrong. On Toshiba's site they clearly state their DT01ACAx00 drives have a 2 year warranty)


I do. For some reason my phone decided that MTTF was a word.


----------



## tycoonbob

Quote:


> Originally Posted by *Citra*
> 
> Hitachi is WD.


Yes and no.









Hitachi was bought out by Western Digital Company, but to satisfy FTC regulations about the purchase (and to avoid a duopoly -- WD and Seagate), they sold the 3.5" consumer division of Hitachi (including all IP, facilities, and patents) to Toshiba...ergo, Toshiba now owns the Hitachi Deskstar line.

WD owns the Hitachi Ultrastar line (enterprise SATA, NL-SAS, and SAS).

Heck, the new Toshiba DT01ACA200 and DT01ACA300 clearly state on the drive label that it was made by Hitachi. It actually says:
MADE IN CHINA BY Hitachi Global Storage Products (Shenzhen) Co., Ltd. CN

You can see it on the product pictures on Newegg at the link I posted above, but in case you want better proof...here is a photo of MY Toshiba DT01ACA300 (3TB):


Here are some links as well that were posted during this acquisition period:
http://www.techspot.com/news/46396-western-digital-cleared-to-buy-hitachi-gst-with-conditions.html
http://www.tomshardware.com/news/wd-toshiba-hdd-hard-drive,14858.html

Boy, I'm getting tired of answering this question.


----------



## Pip Boy

ok then server people. I have a really dumb question









How do you know which drive has failed out of the many, i.e. physically, which SATA port? What is your method? I would physically label each drive and work out which one corresponds to sdb/sdc/sdd etc. on the system. Is this correct? *This is if you're running off the onboard SATA and have no lights or obvious signs.


----------



## beers

Quote:


> Originally Posted by *phill1978*
> 
> How do you know which drive has failed out of the many, i.e. physically, which SATA port? What is your method? I would physically label each drive and work out which one corresponds to sdb/sdc/sdd etc. on the system. Is this correct? *This is if you're running off the onboard SATA and have no lights or obvious signs.


I usually assign the drives in sequence (sda is on the first SATA port and in the first drive bay, and so forth). That makes it trivial to spot a failed array drive and pull the right one, the first time, without any effort. You can also match up the model and serial numbers and then run `lshw -class disk` to verify where everything is virtually before assigning mount points by UUID.
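The serial-matching bookkeeping above can be sketched in a few lines. The bay labels and serials below are hypothetical stand-ins for what you'd record at install time and later read back from `lshw -class disk` or smartctl output:

```python
# Recorded when the drives were installed (bay label -> serial on the drive sticker).
bays = {"bay1": "WD-WCC4E0001111", "bay2": "WD-WCC4E0002222", "bay3": "WD-WCC4E0003333"}

# What the OS reports for each device node (device name -> serial number).
devices = {"sda": "WD-WCC4E0001111", "sdb": "WD-WCC4E0002222", "sdc": "WD-WCC4E0003333"}

def bay_of(failed_dev: str) -> str:
    """Find the physical bay holding a failed device by matching serial numbers."""
    serial = devices[failed_dev]
    for bay, recorded in bays.items():
        if recorded == serial:
            return bay
    raise LookupError(f"no bay recorded for serial {serial}")

print(bay_of("sdb"))  # -> bay2
```

The win is that the mapping survives device-name reshuffles across reboots: the serial on the sticker never changes, even when sdb doesn't mean what it did yesterday.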


----------



## killabytes

Quote:


> Originally Posted by *phill1978*
> 
> ok then server people. I have a really dumb question
> 
> 
> 
> 
> 
> 
> 
> 
> How do you know which drive has failed out of the many, i.e. physically, which SATA port? What is your method? I would physically label each drive and work out which one corresponds to sdb/sdc/sdd etc. on the system. Is this correct? *This is if you're running off the onboard SATA and have no lights or obvious signs.


Are they connected to a RAID controller or just onboard SATA?

My RAID card allows me to flash each LED to identify each HDD.


----------



## Citra

Quote:


> Originally Posted by *tycoonbob*
> 
> Yes and no.
> 
> 
> 
> 
> 
> 
> 
> 
> Hitachi was bought out by Western Digital Company, but to satisfy FTC regulations about the purchase (and to avoid a duopoly -- WD and Seagate), they sold the 3.5" consumer division of Hitachi (including all IP, facilities, and patents) to Toshiba...ergo, Toshiba now owns the Hitachi Deskstar line.
> WD owns the Hitachi Ultrastar line (enterprise SATA, NL-SAS, and SAS).
> Heck, the new Toshiba DT01ACA200 and DT01ACA300 clearly state on the drive label that it was made by Hitachi. It actually says:
> MADE IN CHINA BY Hitachi Global Storage Products (Shenzhen) Co., Ltd. CN
> You can see it on the product pictures on Newegg at the link I posted above, but in case you want better proof...here is a photo of MY Toshiba DT01ACA300 (3TB):
> 
> Here are some links as well that were posted during this acquisition period:
> http://www.techspot.com/news/46396-western-digital-cleared-to-buy-hitachi-gst-with-conditions.html
> http://www.tomshardware.com/news/wd-toshiba-hdd-hard-drive,14858.html
> Boy, I'm getting tired of answering this question.


Rep+


----------



## Pip Boy

Quote:


> Originally Posted by *killabytes*
> 
> Are they connected to a RAID controller or just onboard SATA?
> My RAID card allows me to flash each LED to identify each HDD.


onboard

+ thanks 4 tip beer


----------



## Dream Killer

Quote:


> Originally Posted by *phill1978*
> 
> ok then server people. I have a really dumb question
> 
> 
> 
> 
> 
> 
> 
> 
> 
> How do you know which drive has failed out of the many, i.e. physically, which SATA port? What is your method? I would physically label each drive and work out which one corresponds to sdb/sdc/sdd etc. on the system. Is this correct? *This is if you're running off the onboard SATA and have no lights or obvious signs.


Get a label maker on sale at Staples for $10, or a permanent marker; it will save lives as far as data is concerned. Solaris-based systems make sense in identifying hard drives: for example, the drive c2t3d0 means controller 2, target 3 (port 3 on the controller), disk 0 (the first disk in the target - useful with port multipliers). Linux is weird as heck, and FreeBSD is even weirder.

Also be aware of the whole zero-vs-one thing. RAID controllers number starting from zero and chassis number starting from one, so when a controller tells you disk #2 has failed in your RAID 5 array and you pull out #2 in the chassis, you're actually pulling out the controller's disk #1 and you've just lost a ton of data.
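The c2t3d0 naming and the zero-vs-one trap can both be illustrated with a small sketch (the helper names are mine, not any real tool):

```python
import re

def parse_ctd(name: str) -> dict:
    """Split a Solaris-style device name like 'c2t3d0' into its parts."""
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)", name)
    if not m:
        raise ValueError(f"not a ctd name: {name}")
    controller, target, disk = map(int, m.groups())
    return {"controller": controller, "target": target, "disk": disk}

def chassis_slot(controller_disk: int) -> int:
    """Controllers count from 0, chassis labels from 1: controller disk #2 is slot #3."""
    return controller_disk + 1

print(parse_ctd("c2t3d0"))   # {'controller': 2, 'target': 3, 'disk': 0}
print(chassis_slot(2))       # controller disk #2 lives in chassis slot #3
```

The `+ 1` is the whole point: translate once, on paper or in code, before you pull anything.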


----------



## Junior82

Top to Bottom

Patch Panel
ActionTec V1000H in transparent bridge mode
Next to that is a Buffalo N300, a separate network/wireless AP
Netgear GS724T v2 24port smart switch
Brother Printer

Dell PowerEdge 2950
Specs:
x2 Intel Xeon E5335 2.0GHz
14GB RAM
x2 73GB 15K SAS drives
x2 1TB Seagate 7200rpm
x2 250GB Seagate (going to be replacing these with x2 2TB drives)
Has DRAC card and Perc 5/i raid controller
Running ESXi 5.0 update 2 latest patch
Has 5 VMs currently on it, a combination of CentOS & Windows Server 2008R2; running mail server, SQL server, vCenter server, web server, newznab server

Compaq DL360
Specs:
x2 Pentium III 930MHz
2GB RAM
x2 18.2GB 10K SCSI drives
Running Windows Server 2003 (web/torrent server)

HP ProLiant ML110 G7
Specs:
Intel Xeon E3 1230 3.2GHz
10GB RAM (upgrading to 16GB hopefully soon)
Looking at getting a raid controller for this host in the future Adaptec 2405 and 1 more 2TB drive
x2 1TB Seagate
x1 2TB Seagate
x1 500GB Seagate
Running ESXi 5.0 update 2 latest patch
Has 5 VMs currently on it (Windows Server 2008R2, Win 7, Ubuntu): a Domain Controller, a PPTP server, and a secondary DC.

Dell Dimension 4700
Running Untangle (looking to replace this machine for something that draws less power and in a 1u Case)
Untangle

Sitting on top of the Dell 4700 is an Iomega iX2 NAS used for storage, connected to one of the servers via iSCSI




Here are some work photos


----------



## Ecstacy

Quote:


> Originally Posted by *Junior82*
> 
> 
> 
> 
> 
> 
> Top to Bottom
> Patch Panel
> ActionTec V1000H in transparent bridge mode
> Next to that is a Buffalo N300, a separate network/wireless AP
> Netgear GS724T v2 24port smart switch
> Brother Printer
> Dell PowerEdge 2950
> Specs:
> x2 Intel Xeon E5335 2.0GHz
> 14GB RAM
> x2 73GB 15K SAS drives
> x2 1TB Seagate 7200rpm
> x2 250GB Seagate (going to be replacing these with x2 2TB drives)
> Has DRAC card and Perc 5/i raid controller
> Running ESXi 5.0 update 2 latest patch
> Has 5 VMs currently on it, a combination of CentOS & Windows Server 2008R2; running mail server, SQL server, vCenter server, web server, newznab server
> Compaq DL360
> Specs:
> x2 Pentium III 930MHz
> 2GB RAM
> x2 18.2 10K SCSI drives
> Running Windows Server 2003 (web/torrent server)
> HP ProLiant ML110 G7
> Specs:
> Intel Xeon E3 1230 3.2GHz
> 10GB RAM (upgrading to 16GB hopefully soon)
> Looking at getting a raid controller for this host in the future Adaptec 2405 and 1 more 2TB drive
> x2 1TB Seagate
> x1 2TB Seagate
> x1 500GB Seagate
> Running ESXi 5.0 update 2 latest patch
> Has 5 vm's currently on it, Windows Server 2008R2, Win 7, Ubuntu. Domain Controller, PPTP Server, Secondary DC.
> Dell Dimension 4700
> Running Untangle (looking to replace this machine for something that draws less power and in a 1u Case)
> Untangle
> Sitting on top of the Dell 4700 is an Iomega iX2 NAS used for storage, connected to one of the servers via iSCSI
> 
> 
> Here are some work photos


Nice setup, I'm planning on building myself a server in a couple months once I can afford it. What do you do for work if you don't mind me asking?


----------



## Junior82

Quote:


> Originally Posted by *Ecstacy*
> 
> Nice setup, I'm planning on building myself a server in a couple months once I can afford it. What do you do for work if you don't mind me asking?


Thanks. I would suggest that when you build your server you look into some kind of virtualization: VMware ESXi, Hyper-V, etc. As for what I do for work, I am a Network Engineer/IT Consultant, and I love what I do!


----------



## mgdev

My personal playground

EDIT: I don't know why the second picture is sideways


----------



## Ecstacy

Quote:


> Originally Posted by *Junior82*
> 
> Thanks. I would suggest that when you build your server you look into some kind of virtualization: VMware ESXi, Hyper-V, etc. As for what I do for work, I am a Network Engineer/IT Consultant, and I love what I do!


That's what I was planning on doing. I was hoping to put together a cheap Sandy Bridge build and use virtualization to run pfSense or Untangle for a firewall, ZFSguru or FreeNAS for a file server, and a couple of VMs to test things out and learn. I can't afford multiple servers.









I'm in my last year of high school and was thinking of getting into computer engineering or network engineering. If you wouldn't mind telling me: what's the job like (the good and the bad), what kind of schooling did you do, and do you have any advice for someone like me interested in that field? Thanks!


----------



## afropelican

Quote:


> Originally Posted by *mgdev*
> 
> 
> My personal playground
> EDIT: I don't know why the second picture is sideways


WOahhhh what do you use this much server equipment for at home????


----------



## denton_12

Quote:


> Originally Posted by *afropelican*
> 
> WOahhhh what do you use this much server equipment for at home????


He's downloading the internet.


----------



## NKrader

Quote:


> Originally Posted by *denton_12*
> 
> He's downloading the internet.


I've always wanted to do that


----------



## pvt.joker

here's a more updated pic of my home setup (since I moved and added the 24 port switch)



dunno why it's posting the pic rotated.. oh well..


----------



## Oedipus

Not sure why so few people in this thread know how to rotate pictures.


----------



## NKrader

Quote:


> Originally Posted by *Oedipus*
> 
> Not sure why so few people in this thread know how to rotate pictures.


It's tough; it's not like every photo upload site / Windows Explorer has a rotate feature


----------



## pvt.joker

Quote:


> Originally Posted by *Oedipus*
> 
> Not sure why so few people in this thread know how to rotate pictures.


The pic is normal on my Dropbox, phone, and PC... so something happened in the upload process. It never used to do that to pics I uploaded.


----------



## killabytes

Finally had some time to myself today, so I moved some gear into my homemade rack.




Finally I have everything centralized. From the top down...

5 Port gigabit switch from TP-Link
26 Port Trendnet switch
WatchGuard Firebox II running pfSense
AMBX customized 1U server running Ubuntu Server w/Fluxbox GUI
4x Sun Microsystems Sun Fire V100s
The KVM setup
The left tower is the future 64-bit pfSense machine
The right is my 14TB file server, now running WHS 2011.


----------



## dushan24

Haha, you did paint it orange!

Nice rack by the way, I'm looking to build one too.


----------



## jibesh

Quote:


> Originally Posted by *killabytes*
> 
> Finally had some time to myself today, so I moved some gear into my homemade rack.
> 
> 
> 
> 
> Finally I have everything centralized. From the top down...
> 
> 5 Port gigabit switch from TP-Link
> 26 Port Trendnet switch
> WatchGuard Firebox II running pfSense
> AMBX Customized 1U server running ubuntu server w/Fluxbox GUI
> 4 Sun MIcrosystems Sunfire V100s
> The KVM setup
> Left tower is the future 64Bit pfSense machine
> RIght is my 14TB file server now running WHS 2011.


*horrified* Is that an Actiontec router I see at the top of the rack?


----------



## killabytes

Quote:


> Originally Posted by *jibesh*
> 
> *horrified* Is that an Actiontec router I see at the top of the rack?


Yes and no. It's there but only for the HPNA portion of it. I've bypassed it for my Internet usage using pfSense. But since I have TV through fibre I needed to use it for HPNA.


----------



## CloudX

I was going to say its for FIOS!







Need that guy for the TV even if you use another router.


----------



## killabytes

Quote:


> Originally Posted by *CloudX*
> 
> I was going to say its for FIOS!
> 
> 
> 
> 
> 
> 
> 
> Need that guy for the TV even if you use another router.


Not FiOS, being Canadian we have FibreOP. Sort of the same.

I was able to remove the use of the router by spoofing the MAC address and placing the WAN on a VLAN. I could fully get rid of the Actiontec, but there's a twist. Number 1, I'd have to wire my house with Ethernet to use it without HPNA. Number B, I'd have to switch to a router OS that supports priority tagging.

I have no plans to do either.


----------



## Dream Killer

To be fair, the Actiontec routers Verizon gives out are very good.


----------



## killabytes

Quote:


> Originally Posted by *Dream Killer*
> 
> To be fair, the Actiontec routers Verizon gives out are very good.


Out of all the mass-market routers I've seen from ISPs, yes, these are fairly good. If only ISPs didn't disable 90% of the features...


----------



## Dream Killer

Quote:


> Originally Posted by *killabytes*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> To be fair, the Actiontec routers Verizon gives out are very good.
> 
> 
> 
> Out of all the mass routers I've seen from ISPs, yes these are fairly good. Only if ISPs didn't disable 90% of the features...

The only thing I've found that it lacks is SNMP. QoS actually works better on it than on pfSense, and I'm currently using the Rev. I in place of my x86 router. It performs very well with a large number of states, and its QoS works better than even HFSC.


----------



## Citra

Verizon really does slap their logo on everything...


----------



## jibesh

Quote:


> Originally Posted by *CloudX*
> 
> I was going to say its for FIOS!
> 
> 
> 
> 
> 
> 
> 
> Need that guy for the TV even if you use another router.


Only if you have the STBs.


----------



## killabytes

I found I was able to bring it to its knees quite easily.

Having 4 HD PVRs, countless wireless devices, over 5 websites, 4 game servers and rsync running non-stop...plus...torrents.

While it's a nice router, it had to go.


----------



## Dream Killer

My network load isn't light either so I guess it depends on which MI424 revision. In my experience the Rev. I totally spanked my 3800 x2 / 2GB pfSense 2.0.1 in performance. I also don't like where pfSense is heading so I can't use pfSense anymore. I may compile my own m0n0wall to get a larger state table than the 30k the kernel is hardcoded with on the regular distro and use that instead.

Or spring for a Juniper router and get some peace of mind.


----------



## killabytes

Quote:


> Originally Posted by *Dream Killer*
> 
> My network load isn't light either so I guess it depends on which MI424 revision. In my experience the Rev. I totally spanked my 3800 x2 / 2GB pfSense 2.0.1 in performance. I also don't like where pfSense is heading so I can't use pfSense anymore. I may compile my own m0n0wall to get a larger state table than the 30k the kernel is hardcoded with on the regular distro and use that instead.
> 
> Or spring for a Juniper router and get some peace of mind.


I'll admit, pfSense is going...different. I do miss m0n0wall.

I eyed some old Juniper and SonicWall devices that my last job was decommissioning. Trash anyway, I guess.


----------



## Dream Killer

I'm also eyeing the ZyWALL USG1000.

But at the same time I don't really need a mega-router anymore, since I moved OpenVPN and the RRD stuff into a VM (they were the main reason I used pfSense). I'll probably keep using the Actiontec until I find it unfit to do its job.


----------



## swat565

As a fairly new user of pfSense, how do you feel it has changed direction / gone downhill?


----------



## Dream Killer

Quote:


> Originally Posted by *swat565*
> 
> As a fairly new user of PFsense, how have you felt its changed directions/gone down hill?


In the wrong direction, yes. Downhill? Not exactly. pfSense is still great and I don't discourage people from using it, because it works as advertised. To me a firewall/router just needs to be a router and a firewall, nothing more and nothing less. pfSense is moving towards more of an all-in-one UTM platform rather than just a front-end GUI for pf. The problem with that is that a more complex package makes the OS more open and vulnerable to attacks, so it's less secure.

I've been toying with the idea of making a custom all-in-one network box using VMs, so the appliance OSes are securely contained from each other. Something along the lines of running a box with SmartOS, virtualizing m0n0wall through Zones, running Untangle inside KVM within the same box in bridged mode, a Linux OS for OpenVPN, another Linux OS for RRD/SNMP graphs, and a virtualized switch OS like the Cisco Nexus 1000V. This way I can get the best of all worlds with one physical box running isolated and secure instances of each network appliance.

The only thing holding me back is the power I would need for a box. I need something that supports VT-d for illumos' KVM so I'd need a Nehalem-class CPU in a 1U rack - expensive.


----------



## CloudX

Quote:


> Originally Posted by *Dream Killer*
> 
> In the wrong direction, yes. Downhill? not exactly. PfSense is still great and I don't discourage people from using it because it works as advertised. To me a firewall/router just needs to be a router and firewall, nothing more and nothing less. PfSense is moving towards more of an all-in one UTM platform rather than just a front-end GUI for pFilter. The problem with that is a more complex package makes the OS more open and vulnerable to attacks so it's less secure.
> 
> I've been toying with the idea with making a custom all-in-one network box by using VMs so the appliance OSes are securely contained from each other. Somewhere along the lines of running a box with SmartOS, virtualize m0n0wall through Zones, run Untangle inside a KVM within the same box in bridged mode, a Linux OS for OpenVPN, another Linux OS for RRD/SNMP graphs and run a virtualized switch OS like the Cisco 1000v. This way I can get the best of all worlds with one physical box running isolated and secure instances of each network appliance.
> 
> The only thing holding me back is the power I would need for a box. I need something that supports VT-d for illumos' KVM so I'd need a Nehalem-class CPU in a 1U rack - expensive.


That's awesome though. You've given me some ideas..


----------



## swat565

Quote:


> Originally Posted by *Dream Killer*
> 
> In the wrong direction, yes. Downhill? not exactly. PfSense is still great and I don't discourage people from using it because it works as advertised. To me a firewall/router just needs to be a router and firewall, nothing more and nothing less. PfSense is moving towards more of an all-in one UTM platform rather than just a front-end GUI for pFilter. The problem with that is a more complex package makes the OS more open and vulnerable to attacks so it's less secure.
> 
> I've been toying with the idea with making a custom all-in-one network box by using VMs so the appliance OSes are securely contained from each other. Somewhere along the lines of running a box with SmartOS, virtualize m0n0wall through Zones, run Untangle inside a KVM within the same box in bridged mode, a Linux OS for OpenVPN, another Linux OS for RRD/SNMP graphs and run a virtualized switch OS like the Cisco 1000v. This way I can get the best of all worlds with one physical box running isolated and secure instances of each network appliance.
> 
> The only thing holding me back is the power I would need for a box. I need something that supports VT-d for illumos' KVM so I'd need a Nehalem-class CPU in a 1U rack - expensive.


Just a thought (and something I've been trying to do): couldn't you do that with VLANs/trunking? Have your physical switch with the WAN coming in on, say, VLAN 100, then have the NIC on a VM running a router distro like pfSense NAT the traffic to VLAN 1?


----------



## Dream Killer

Quote:


> Originally Posted by *swat565*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> In the wrong direction, yes. Downhill? not exactly. PfSense is still great and I don't discourage people from using it because it works as advertised. To me a firewall/router just needs to be a router and firewall, nothing more and nothing less. PfSense is moving towards more of an all-in one UTM platform rather than just a front-end GUI for pFilter. The problem with that is a more complex package makes the OS more open and vulnerable to attacks so it's less secure.
> 
> I've been toying with the idea with making a custom all-in-one network box by using VMs so the appliance OSes are securely contained from each other. Somewhere along the lines of running a box with SmartOS, virtualize m0n0wall through Zones, run Untangle inside a KVM within the same box in bridged mode, a Linux OS for OpenVPN, another Linux OS for RRD/SNMP graphs and run a virtualized switch OS like the Cisco 1000v. This way I can get the best of all worlds with one physical box running isolated and secure instances of each network appliance.
> 
> The only thing holding me back is the power I would need for a box. I need something that supports VT-d for illumos' KVM so I'd need a Nehalem-class CPU in a 1U rack - expensive.
> 
> 
> 
> Just a thought (and something I've been trying to do), couldn't you do that with Vlan/trunking? have your physical switch with WAN coming in, on say vlan 100, then your NIC on a VM running Router distro like pfsense nat in the traffic to vlan 1?

Yes, you can, but that requires a physical switch to manage the VMs' network. My solution requires just two real Ethernet interfaces: WAN & LAN. The network the VMs communicate through is virtual and handled internally in the box. Think of it as a pipeline with packets going from one VM to the next.

The Cisco 1000v is essentially a software-based managed switch. The box I have in mind would have multiple quad/dual gigabit NICs. The number of ports I need makes that solution unreasonable, though (see the post about the VZ router), so I do need a real managed switch.

EDIT: here's a diagram of what it should look like:
Router 4.0


----------



## NKrader

mmmmm

dual six core arrived today



and its new home


I've got the heatsinks mounted and it's in the case; tested with a PSU tester and got it to boot. Now to wait for more money to buy a PSU and hard drives.

Best $50 ever spent.


----------



## CloudX

Nice!


----------



## pvt.joker

Quote:


> Originally Posted by *NKrader*
> 
> mmmmm
> 
> dual six core arrived today
> 
> Best $50 ever spent.


That's a smokin deal for $50!


----------



## andymiller

Anyone know where I can get a Rosewill L4000 in the UK??


----------



## Ecstacy

Quote:


> Originally Posted by *andymiller*
> 
> Anyone know where I can get a Rosewill L4000 in the UK??


Rosewill is Newegg's in-house brand, but I've also seen their stuff on Buy.com. You might be able to get one on eBay.


----------



## Thynsiia

I have been looking at the Rosewill case as well; it costs a lot to ship to the EU. Anybody know a good alternative (preferably something I can buy in the EU)?


----------



## bobfig

Quote:


> Originally Posted by *Thynsiia*
> 
> I have been looking at the Rosewill case as well; it costs a lot to ship to the EU. Anybody know a good alternative (preferably something I can buy in the EU)?


http://www.xcase.co.uk/X-Case-RM-420-Hotswap-4u-p/case-rm420.htm


----------



## andymiller

I didn't want that many hot-swap bays and don't have that budget.

But on the other hand, I found this 4U rackmount which, going by the pics and the pics from the Norco site, is a rebranded RPC-470. It doesn't have hot-swap bays but has space for 2 hot-swap cages, also on the Xcase site.


----------



## famous1994

OS: Windows Server 2012 Standard 64-Bit
Case: Thermaltake Armor A60 Leo Edition
CPU: AMD Athlon 64 X2 BE-2350 (Brisbane) @ 2.1GHz
Motherboard: ECS 780 GM-A V1.1 AM2+
Memory: G.Skill 8GB DDR2 6400 @800MHz
PSU: Diablotek PHD 350W
HDD: Seagate Barracuda 500GB 7200RPM HDD (OS/Storage)
HDD: Maxtor DiamondMax 200GB 7200RPM HDD (Storage)
HDD: Seagate Barracuda 160GB 7200RPM HDD (Storage)
HDD: Hitachi Travelstar 160GB 5400RPM HDD (Storage)
Server Manufacturer: Me


----------



## Kaboooom2000uk

Here is Bessie, the wonder server...

OS: FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)
Case: Generic freestanding 19" 4U case with feet.
CPU: 2x Intel Xeon E5345 LGA 771
Motherboard: Tyan Tempest i5400pw
Memory: 8GB, 8x 1GB fully buffered PC2-5300F 667
PSU: 700W
HDD: 3x IBM 146GB 10,000RPM 2.5" SAS hot-swap drives (42C0252)
HDD: 1x 2TB SATA III drive
HDD: 2x 80GB SATA II disks in RAID 0
Raid controller: Dell Perc 5i PCIe
Network: 2x Gigabit LAN
Server Manufacturer: Me



and closeup of the perc 5i



Not the tidiest, but it's kind of a work in progress at the moment.


----------



## Norse

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> Here is Bessie, the wonder server...
> 
> OS: FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)
> Case: Generic freestanding 19" 4U case with feet.
> CPU: 2x Intel Xeon E5345 LGA 771
> Motherboard: Tyan Tempest i5400pw
> Memory: 8GB, 8x 1GB fully buffered PC2-5300F 667
> PSU: 700W
> HDD: 3x SAS Disk = IBM 146GB SAS IBM 10000RPM 2.5" Hot-Swap 42C0252
> HDD: 1x 2000GB sata III drive
> HDD: 2x 80GB sata ii disks in raid 0
> Raid controller: Dell Perc 5i PCIe
> Network: 2x Gigabit LAN
> Server Manufacturer: Me
> 
> 
> 
> and closeup of the perc 5i
> 
> 
> 
> not the tidiest but its kind of work in progress at the moment.


You don't seem to have additional cooling on the PERC? I just have a PCI-slot blower fan on mine.


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *Norse*
> 
> You don't seem to have additional cooling on the PERC? I just have a PCI-slot blower fan on mine.


Hi, yeah, I've seen the mods you can do to the stock cooler on the board; some people add fans to the stock heatsink, but I didn't notice much heat build-up so I've kept it stock for now.

However, I did notice that those SAS disks get very hot, which was a little bit worrying.

I hope to get a much faster LSI controller that supports 6Gb/s (600MB/s) soon, but I was going to test out some SSDs on it first, and if it's any good, use it in my gaming rig.

I think I'll probably need additional cooling then, when it's doing some real work. Right now it's for home networking and doesn't get much heavy use, except when moving a video file or an ISO image around.


----------



## Boyboyd

You guys should start posting the function(s) of your servers too. It's not immediately obvious from the hardware / os.


----------



## Plan9

Quote:


> Originally Posted by *Boyboyd*
> 
> You guys should start posting the function(s) of your servers too. It's not immediately obvious from the hardware / os.


It's been pretty obvious for most of the posts in this thread:
Quote:


> OS: FreeNAS-8.2.0


Answer: file server
Quote:


> OS: Windows Server 2012 Standard 64-Bit
> HDD: Seagate Barracuda 500GB 7200RPM HDD (OS/Storage)
> HDD: Maxtor DiamondMax 200GB 7200RPM HDD *(Storage)*
> HDD: Seagate Barracuda 160GB 7200RPM HDD *(Storage)*
> HDD: Hitachi Travelstar 160GB 5400RPM HDD *(Storage)*


Answer: file server

Hardly rocket science.


----------



## Boyboyd

Quote:


> Originally Posted by *Plan9*
> 
> It's been pretty obvious for most of the posts in this thread:


Ok then Mr. Smartypants. What does this one do?
Quote:


> Originally Posted by *NKrader*
> 
> mmmmm
> 
> dual six core arrived today
> 
> and its new home
> 
> I've got the heatsinks mountedand its in the case, tested with tester psu and got it to boot, now to wait formore money to buy psu and harddrives
> 
> best 50$ ever spent


That's the post that made me post that.


----------



## Plan9

Quote:


> Originally Posted by *Boyboyd*
> 
> Ok then Mr. Smartypants. What does this one do?
> That's the post that made me post that.


Blu-ray ripping.


----------



## wtomlinson

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> Here is Bessie, the wonder server...
> 
> OS: FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)
> Case: Generic freestanding 19" 4U case with feet.
> CPU: 2x Intel Xeon E5345 LGA 771
> Motherboard: Tyan Tempest i5400pw
> Memory: 8GB, 8x 1GB fully buffered PC2-5300F 667
> PSU: 700W
> HDD: 3x SAS Disk = IBM 146GB SAS IBM 10000RPM 2.5" Hot-Swap 42C0252
> HDD: 1x 2000GB sata III drive
> HDD: 2x 80GB sata ii disks in raid 0
> Raid controller: Dell Perc 5i PCIe
> Network: 2x Gigabit LAN
> Server Manufacturer: Me
> 
> 
> 
> and closeup of the perc 5i
> 
> 
> 
> not the tidiest but its kind of work in progress at the moment.


How much did that board run you? I see those CPUs going for pretty cheap on fleabay. I would assume getting ahold of a lot of DDR2 would be pricey.

I ask because I'm at work and can't browse ebay right now.


----------



## dhrandy

*ROKU SPECS*
Roku HD

*Channels I Use the Most*
Plex - Streams from media server.
Hulu Plus - Streaming Hulu.
Twonky Beam - Beam things from Smartphone to TV.
Slacker Radio - Music streaming.

*MEDIA SERVER*
*Specs:*
Motherboard - Foxconn M61PMV AM2+/AM2 NVIDIA GeForce 6100 Micro ATX AMD Motherboard
Case - Rosewill R6426-P BK ATX Mid Tower Computer Case
Power Supply - Antec earthwatts EA380 380W Continuous Power ATX12V v2.0 80 PLUS Certified Active PFC Power Supply
Processor - AMD Athlon X2 BE-2300 Brisbane 1.9GHz Socket AM2 45W Dual-Core Processor ADH2300DOBOX
Memory - 2 Gigs
Hard Drives - 2 500GB WD Green Drives and 2 500GB old drives - Total of 2TB. I plan on expanding.

*Software:*
OS - Windows Server 2008
Emit - Stream to smartphone and tablet. If I can get it to work.
Plex Server - Stream TV shows and movies to Roku, smartphone and tablet.
Couchpotato
Sickbeard
uTorrent
Google Music Uploader
PC Monitor - Can monitor my server from any browsers or Android app. Has push notifications for updates to the server.
Goodsync - Used to backup pictures and music to a different hard drive
Growler for Windows - Sends notifications to smartphone via Squealer Android app and sends notification to other desktop.

More Information


----------



## Norse

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> Hi, yeah I seen the mods you can do to the stock cooler on the board, some people add fans to the stock heat-sink, but I didn't notice much heat build-up so kept it stock for now.
> 
> However, I did notice that those SAS disks get very hot, which was a little bit worrying.
> 
> I hope to get a much faster LSI controller, which can support 600Mbps soon, but was going to test out some SSD's on it and if its any good use it on my gaming rig.
> 
> I think I'll probably need additional cooling then when its doing some work. right now its for home networking and doesn't get much heavy use, except when moving a video file or a iso image around.


I have 4x 2TB x2 in RAID 5 on my Perc 5; it gives me about 300MB/s, which is waaaaay more than I need due to only having 1Gbps Ethernet (~100MB/s real-world)


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> Blu-ray ripping.


Could be a SQL Server? Hypervisor?


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *Boyboyd*
> 
> You guys should start posting the function(s) of your servers too. It's not immediately obvious from the hardware / os.


Sorry, mine is, as mentioned earlier, simply a file server with some random disks and some Samba shares. I occasionally FTP into it remotely if I need a file while out of the country.

For now it's just a place to dump my files while I experiment with other Linux distros and system builds.

I also have a couple of IBM System x3850s, which are awesome bits of kit, and a 1U IBM eServer which runs pfSense. In addition to this, I have an HP ProLiant DL350 G5 which needs a role, but I feel it is overkill to use as a mere firewall...

My main gaming rig is based on a server board, an Asus Z8PE-D18, which I got relatively cheaply.

At some point I want to have all the servers and workstations running CX4 interconnects, so that I won't be limited to gigabit speeds when moving files.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> Could be a SQL Server? Hypervisor?


oh it could be anything. I just hoped if I said something confidently enough people might believe me


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *Norse*
> 
> i have 4x2TB x2 Raid 5 on my perc 5, gives me about 300 MB/s which is waaaaay more than i need due to only having 1gbps ethernet (100MB/S)


Exactly, you are right.









I ran into this too during an office update job ages ago, which I recall fondly...

At the time I was doing some routine in-house IT work, and the company didn't want to pay for a contractor, so they got me to run some tests on the LAN, since the workers were reporting long waits when moving Word documents around. Back then we had some Packard Bell boxes with 10/100 LAN, and we were running about 8 machines, including a server, through a hub (the server had a 56K modem in it for internet). Yes, it was dire...

Normally, people on that type of LAN don't really need to move large files around, 150MB max but nothing massive, with maybe the mild exception of databases... I swear I'll never go back to that level. Those speeds were torturously slow.

The biggest problem was that all the computers in the office at the time were using IDE drives that didn't manage more than 60MB/s to begin with; throw in a really piss-poor LAN and abysmal cables, and it is easy to see why it took about 20 minutes to move a 100MB file between machines. Even mailbox synchronization was a daunting task, easily overloading the server at peak times. (_In the end, I finally convinced them to upgrade to gigabit LAN hardware and get some better machines, which turned out to be a vast improvement: virtually an instant increase in performance._)

Now I'm just thankful this was spotted when it was, otherwise they would never have updated. In the end we got new machines with SATA drives and gigabit Ethernet on board, and it's been a lot more stable.









I've also seen situations like yours where the drives outperform the LAN, but that's fine; it's good to have headroom.


----------



## NKrader

Quote:


> Originally Posted by *Boyboyd*
> 
> Ok then Mr. Smartypants. What does this one do?
> That's the post that made me post that.


looks at sig,

crunch fileserver

will be Cruncher/NAS
Quote:


> Originally Posted by *Plan9*
> 
> Blu-ray ripping.


close, will prob encode and store bluray rips







but wont have bluray drive
Quote:


> Originally Posted by *tycoonbob*
> 
> Could be a SQL Server? Hypervisor?


Nope, not that smart


----------



## jibesh

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> At some point I want to be able to have all the servers and workstations running CX4 interconnects that way i wont be limited to gigabit speeds when moving files.


What exactly are you going to connect the CX4 cables to? 10Gb ethernet, Infiniband, etc?


----------



## Oedipus

Why not use 10GBASE-T?


----------



## Norse

Quote:


> Originally Posted by *Oedipus*
> 
> Why not use 10gbase-t?


It's not exactly cheap


----------



## Boyboyd

The cable is, the switches and routers *are not*.


----------



## Plan9

Quote:


> Originally Posted by *NKrader*
> 
> *close*, will prob encode and store bluray rips
> 
> 
> 
> 
> 
> 
> 
> but wont have bluray drive


haha excellent!
Quote:


> Originally Posted by *Boyboyd*
> 
> The cable is, the switches and routers are not.


10gig cables aren't much use without supporting NICs


----------



## Oedipus

Quote:


> Originally Posted by *Norse*
> 
> Its not exactly cheap


And CX4 hardware is?


----------



## the_beast

Quote:


> Originally Posted by *Oedipus*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Norse*
> 
> Its not exactly cheap
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And CX4 hardware is?
Click to expand...

It's not bad; you can get dual-port Mellanox InfiniBand cards on eBay for very reasonable prices. Switches are another matter though...


----------



## pvt.joker

the majority of the fiber i deal with at work is all 10gb.. so nice..









i'd be happy runnin 4 or 8gb fiber at home..


----------



## tycoonbob

For home, and for server-to-server connections, I also highly recommend IB. You can get 20Gbps speeds for like $150 or so.

E.g.:
Storage box to backup storage box
Storage box to hypervisor
etc


----------



## Ecstacy

Quote:


> Originally Posted by *tycoonbob*
> 
> For home, and from a server to server connection, I also highly recommend IB. You can get 20Gbps speeds for like $150 or so.
> 
> I.E.
> Storage box to backup storage box
> Storage box to hypervisor
> etc


What's IB?


----------



## pvt.joker

IB = InfiniBand

We run IB between most of our storage clusters.. 50ft cables can be a bit of a bear to manage sometimes..


----------



## tycoonbob

Quote:


> Originally Posted by *pvt.joker*
> 
> IB = InfiniBand
> 
> We run IB between most of our storage clusters.. 50ft cables can be a bit of a bear to manage sometimes..


Oh I'm sure, lol. I'm thinking more like a ~3 ft cable, which still runs about $60. But the HBAs can also be found for ~$50, so it's really not a bad deal if you need the speed between two boxes. Throw in an IB switch and then you should start considering 10Gbit ethernet instead.


----------



## jibesh

Quote:


> Originally Posted by *tycoonbob*
> 
> Oh I'm sure, lol. I'm thinking more like a ~3 ft cable, which still runs about $60. But the HBAs can also be found for ~$50, so it's really not a bad deal if you need the speed between two boxes. Throw in an IB switch and then you should start considering 10Gbit ethernet instead.


Can InfiniBand be deployed without a switch? or can HBAs be directly connected?


----------



## the_beast

Quote:


> Originally Posted by *jibesh*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tycoonbob*
> 
> Oh I'm sure, lol. I'm thinking more like a ~3 ft cable, which still runs about $60. But the HBAs can also be found for ~$50, so it's really not a bad deal if you need the speed between two boxes. Throw in an IB switch and then you should start considering 10Gbit ethernet instead.
> 
> 
> 
> Can InfiniBand be deployed without a switch? or can HBAs be directly connected?
Click to expand...

Yes. You can even use dual-port cards - so have a fileserver with a dual-port card in it, connected to single-port cards in your workstation and backup or VM server. Simple layout and cheap to implement.

Whether it's actually worth it for home use is another matter - 99% of the time you won't notice any difference unless you regularly do large sustained transfers and have the hardware at each end to support it all (no point being able to throw things across the network at 500MB/s if your storage can't keep up!). And if you do require large transfers then you might be better placed reconfiguring how your storage is set up in the first place, so you're keeping things where they're used.

InfiniBand has its place, but really that place is connecting SANs, where the low latency really shines (10G Ethernet is poor in comparison for latency). It's by no means trivial to set up and can cause annoying headaches where things stop working for no apparent reason (cheap cables are often a cause of problems). You also need a spare PCIe x8 slot for most cards, which can be hard to accommodate in all your systems alongside your graphics & RAID cards. That's why I've never bothered...
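For anyone curious what the switchless layout looks like in practice, here's a rough sketch of bringing up a direct-connected IPoIB link on Linux. The interface name `ib0`, the addresses, and the assumption that your distro's OFED stack provides `opensm` are all illustrative, not gospel:

```shell
# One node on the fabric must run a subnet manager, since there is no
# switch to provide one. opensm -B forks it into the background.
modprobe ib_ipoib        # load the IP-over-InfiniBand module
opensm -B

# On each machine, give the ib0 interface an address on a private subnet
# (use .1 on one box, .2 on the other).
ip addr add 192.168.50.1/24 dev ib0
ip link set ib0 up

# Optional: connected mode usually gives better throughput than datagram.
echo connected > /sys/class/net/ib0/mode

# Verify the link from the .1 box
ping -c 3 192.168.50.2
```

Config fragment only; it obviously needs the actual cards and cables to do anything.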


----------



## jibesh

Quote:


> Originally Posted by *the_beast*
> 
> yes. You can even use dual port cards - so have a fileserver with a dual port card in it, connected to single port cards in your workstation and backup or vm server. Simple layout and cheap to implement.
> 
> Whether it's actually worth it for home use is another matter - 99% of the time you won't notice any difference unless you regularly do large sustained transfers and have the hardware at each end to support it all (no point being able to throw things across the network at 500MB/s if your storage can't keep up!). And if you do require large transfers then you might be better placed reconfiguring how your storage is set up in the first place, so you're keeping things where they're used.
> 
> Infiniband has it's place, but really that place is connecting SANs where the low latency really shines (10G ethernet is poor in comparison for latency). It's by no means trivial to set up and can cause annoying headaches where things stop working for no apparent reason (cheap cables are often a cause of problems). You also need a spare PCIe x8 slot for most cards, which can be hard to accommodate in all your systems alongside your graphics & RAID cards. That's why I've never bothered...


Excellent because I just picked up 4x Mellanox InfiniBand Dual port cards for $200 lol.


----------



## tycoonbob

Quote:


> Originally Posted by *jibesh*
> 
> Excellent because I just picked up 4x Mellanox InfiniBand Dual port cards for $200 lol.


I'm jealous!

I want to implement IB between my storage box and my 2 hypervisors.


----------



## Norse

Quote:


> Originally Posted by *jibesh*
> 
> Excellent because I just picked up 4x Mellanox InfiniBand Dual port cards for $200 lol.


I am intrigued by this. Please can you tell me more about the config and how they see each other? Do you just plug them into each other and they see each other, or does it require a load of settings?


----------



## megawatz

*OS:* Windows Server 2003
*Case:* HP's Case
*CPU:* 4 x Intel Xeon 2.4GHz
*Motherboard:* HP's
*Memory:* 8 x 256MB RAM
*PSU:* 3 x HP PSUs
*OS HDD (If you have one):* 1 x 36.6GB (OS)
*Storage HDD(s):* 11 x 18.2GB (Storage)
*Server Manufacturer (Ex: Dell, HP, You?):* HP


----------



## Deeeebs

Quote:


> Originally Posted by *megawatz*
> 
> *OS:* Windows Server 2003
> *Case:* HP's Case
> *CPU:* 4 x Intel Xeon 2.4Ghz
> *Motherboard:* HP's
> *Memory:* 8 x 256MB RAM
> *PSU:* 3 x HP PSUs
> *OS HDD (If you have one):* 1 x 36.6GB (OS)
> *Storage HDD(s):* 11 x 18.2GB (Storage)
> *Server Manufacturer (Ex: Dell, HP, You?):* HP


DL580 G3?


----------



## jibesh

Quote:


> Originally Posted by *Norse*
> 
> I am intrigued by this, please can you tell me more config on how they see each other etc do you just plug them into each other and they see each other? or does it require a load of settings etc


I'll let you know in a few days when I get them.


----------



## megawatz

Quote:


> Originally Posted by *Deeeebs*
> 
> DL580 G3?


570 G2. I have no clue what kind of processors they are, except that they're 2.4GHz Xeons. HP support doesn't show any cores or anything like that.


----------



## Xyro TR1

Mine's not really a server in the sense that it uses desktop components, but here it is regardless...

*OS:* Windows 7 Professional x64
*Case:* Silverstone LC17-B
*CPU:* Intel i5-2300
*Motherboard:* ASUS P8Z68-V LX
*Memory:* 16GB G.Skill DDR3-1600 1.35v
*PSU:* Corsair VX550W
*OS HDD:* 90GB OCZ Agility 3
*Storage HDD(s):* 2x WD Green 2TB RAID1, 2x Samsung F3EG 2TB RAID1
*Server Manufacturer:* Me!








*Server Usage:* Web hosting, game servers, local file server, Shoutcast web radio server

Also, just for kicks, my network gear:
-- CheckPoint UTM-1 Edge N
-- Motorola NIM100 (TV)
-- EnGenius WAP300
-- Verizon FiOS 150/75 WAN

All equipment is on a 1500VA UPS; real-world runtime is 90 minutes with the server powered, 4 hours without.


----------



## Cyrious

Well, I thought I'd provide some pics of my gear. Try not to gag at the rats' nest in my bedside computer, cause it's kinda bad. The case has no cable management at all and almost no room to mod some in.

Anyways, here's The Stack, as I call it:


The future firewall/fileserver is the top one while my bedside rig is the bottom one

Now for gut shots and specs!

Firewall/Fileserver
*CPU*: Pentium M 760 @ 2GHz (yes, this is a mobile chip; it used to be in my laptop before it disintegrated)
*Cooler*: Zalman CNPS7000B-ALCU (59C load temps with the fan off and almost no airflow, 45 with the fan on at 50%)
*Ram*: 1GB DDR-333 @ 354MHz single channel (stupid chipset)
*Motherboard*: AOpen i855GMEM-LFS S479 (got as a freebie)
*OS*: WinXP Professional for now; may change once I get the hardware I need
*HDDs*: 40GB Hitachi Death- I mean DeskStar PATA, 2x Samsung Spinpoint 40GB SATA (RAID 0; will eventually switch them out for a pair of 500s or better and RAID 1 them)
*Case*: Salvaged old HP slimline desktop case (cramped, but gets the job done fairly well. Built like a tank too)
*PSU*:Stock 250W Bestec


Bedside Rig/Fileserver
*CPU*: Phenom II X4 940BE @ 2GHz (if I ever really need power I kick it up to 3.4, but I don't leave it there long)
*Cooler*: AM3 Phenom II stock cooler w/ modified fan assembly (knocks 4 degrees off load temps and gets more air where it's needed)
*Ram*: 2x2GB Gskill DDR2-800 + 2x2GB OCZ DDR2-800 (8GB total)
*Motherboard*: ASUS M3A78-EM
*OS*: Win7 Ultimate 64-bit
*HDDs*: 40GB Samsung Spinpoint SATA, 120GB Fujitsu laptop SATA, 160GB Seagate laptop SATA
*Case*: Rosewill R102-P-BK (I love this case yet I hate it. Love it because I have yet to find a microATX case that matches it in terms of features/price; hate it because there's no cable management at all)
*PSU*: Antec Earthwatts 380W

Both machines are more or less built to be energy efficient, with the Pentium M computer staying under 40W full load (if that; the previous owner said it never topped 20W) and the Phenom II rig staying below 100W full load (it definitely gets higher once I kick the processor up to 3.4, though).

The plan for the firewall/fileserver is to eventually set it up so all data entering or leaving my room has to pass through it first (thank you, dual integrated Gig-E), so I can keep track of what's going in or out, and maybe keep attackers out once I get some game servers up. I kinda need to get a gigabit switch before I can implement it good and proper though.


----------



## NKrader

whooohooo finally found a place where I can order Foxconn blue sata cables! I love the quality of these cables so much!


----------



## dushan24

Quote:


> Originally Posted by *megawatz*
> 
> *OS:* Windows Server 2003
> *Case:* HP's Case
> *CPU:* 4 x Intel Xeon 2.4Ghz
> *Motherboard:* HP's
> *Memory:* 8 x 256MB RAM
> *PSU:* 3 x HP PSUs
> *OS HDD (If you have one):* 1 x 36.6GB (OS)
> *Storage HDD(s):* 11 x 18.2GB (Storage)
> *Server Manufacturer (Ex: Dell, HP, You?):* HP


Dude, that would suck power.

Nice old box though, I used to pick stuff up that the local universities left on the street.


----------



## BodenM

Quote:


> Originally Posted by *megawatz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Deeeebs*
> 
> DL580 G3?
> 
> 
> 
> 570 G2. I have no clue what kind of processor they are, except they're Xeon 2.4Ghz. HP support doesn't show any cores or anything like that.
Click to expand...

I have the *exact* same server as you do (except mine has 4x1GB RAM sticks and only 2x 36.6GB HDDs). The procs are single-core Netburst-arch Xeons (Gallatin code name) with HT and no 64-bit support.


----------



## Deeeebs

Quote:


> Originally Posted by *BodenM*
> 
> I have the *exact* same server as you do (except mine has 4x1GB RAM sticks and only 2x 36.6GB HDDs). The procs are single-core Netburst-arch Xeons (Gallatin code name) with HT and no 64-bit support.


Those sound like some amazing chips


----------



## Oedipus

Quote:


> Originally Posted by *dushan24*
> 
> Dude, that would suck power.
> 
> Nice old box though, I used to pick stuff up that the local universities left on the street.


It's funny because his name is megawatz.


----------



## megawatz

Quote:


> Originally Posted by *dushan24*
> 
> Dude, that would suck power.
> 
> Nice old box though, I used to pick stuff up that the local universities left on the street.


Bought these a while back (when they were new).

Now I have 4 in total that I don't use, plus a DL530; they're all useless paperweights in my office.


----------



## NKrader

Quote:


> Originally Posted by *dushan24*
> 
> Dude, that would suck power.
> 
> Nice old box though, I used to pick stuff up that the local universities left on the street.


haha my 16core uses like 200watts at full load.

my power bill thanks me


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *jibesh*
> 
> Can InfiniBand be deployed without a switch? or can HBAs be directly connected?


It can be indeed









Sorry for my late reply guys, I've been working...

My first experiments with CX4:
I bought 2x Intel NE020 CX4 PCIe x8 cards and a 15M CX4 cable on eBay; this came to about £150 altogether. All parts were sealed and new. (The cable wasn't that flexible.)

My initial thought was, "Can this be used without a switch, between two computers?", and I found that indeed I could use it just like a normal Ethernet card: assign each card its own IP address on the same subnet, and just plug the cable into each NIC. I was pleasantly delighted by the speeds that I got from the benchmark.

My *biggest* problem was finding suitable drivers.
Unless you are running a server OS, you will have problems. If you want to use my particular card on your desktop, only the Server 2003 driver seems to work on XP, and it is OK with it. I had problems trying to get Windows 7 to recognize it, as it kept telling me that the driver isn't for that OS.







I didn't try any server OSes, as I assume they will work fine.

This news alone has allowed me to go ahead and buy the Brocade CX4 switch, which has 48 gigabit ports and 4x CX4 ports. I plan to get a second one and team up 10 gigabit lanes, and then it will be possible for me to get CX4-class speeds over 100m.

watch this space
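On Linux the back-to-back setup described above is just a couple of commands; the interface name `eth2` and the 10.10.10.x subnet here are placeholders for whatever your CX4 NIC actually shows up as:

```shell
# Box A: static address on a subnet used only for this link
ip addr add 10.10.10.1/24 dev eth2
ip link set eth2 up

# Box B: same thing with the other address
ip addr add 10.10.10.2/24 dev eth2
ip link set eth2 up

# From box A, confirm the point-to-point link is alive
ping -c 3 10.10.10.2
```

No switch, no gateway; the two NICs just auto-negotiate the link and route to each other directly.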


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *Norse*
> 
> I am intrigued by this, please can you tell me more config on how they see each other etc do you just plug them into each other and they see each other? or does it require a load of settings etc


Luckily the ones I have behave just like modern Ethernet adapters, in that they seem to support auto-negotiation, so all you need is two cards and a cable to do 10Gbps speeds.

Obviously, as mentioned earlier, your disks need to be able to keep up to fully saturate that link; approximately 1200MB/sec is what one 10Gbps link can handle, so a DVD image moves in a few seconds.









Setting up a 10GB RAM disk at each end will be a good way of saturating the link, as then you're pretty much limited only by the cable and the system bus speed.

Also, yes, I did look at 10GBASE-T, but those cards were like £400+ each, and switches were £1000+. I saw many cheap (£100) CX4 cards available on eBay, and I also spied the Brocade CX4 switch, which has 4 CX4 ports and 48 standard gigabit ports as well, for about £180; plus it's managed, so it's possible to team up some of the gigabit ports.

I'd say the advantage of CX4 is that it's pretty cheap, which justifies upgrading to it, since I can't fit 10x gigabit cards in a single PCIe x8 slot.

Fiber seems very delicate to me, although it can potentially go many kilometres; I think I just like the robustness of copper for now.
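A sketch of the RAM-disk saturation test suggested above, assuming a Linux box at each end; the tool choice (`iperf3`) and mount points are my own illustration, not from the post:

```shell
# tmpfs RAM disk on each machine, so disk speed can't be the bottleneck
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=10G tmpfs /mnt/ramdisk

# Measure raw link speed first: iperf3 server on one box...
iperf3 -s
# ...then from the other box, 4 parallel streams for 30 seconds
iperf3 -c 10.10.10.1 -P 4 -t 30

# A real file copy between the RAM disks (over NFS/SMB) should then
# land near the same figure if the rest of the stack is healthy.
```

If the iperf number is near line rate but the file copy isn't, the protocol or CPU is the limit rather than the wire.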


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *wtomlinson*
> 
> How much did that board run you? I see those CPUs going for pretty cheap on fleabay. I would assume getting ahold of a lot of DDR2 would be pricey.
> 
> I ask because I'm at work and can't browse ebay right now.


I managed to get that board very cheaply; it was a one-of-a-kind listing, you have to keep your eyes peeled! At the time it was an epic board: 128GB of PC2-5300F RAM capacity. I must have paid only £100 for it, and it was used; I had to find a lot of parts before I could use it. (Those CEK springs were the hardest to find in the UK.) Yeah, the CPUs are ten a penny now! Nobody wants 771s anymore









Saying that, I managed to acquire a much better server board, which handles 2x socket 1366 6-core CPUs and up to 192GB of DDR3 RAM: an Asus Z8PE-D18. I actually use this for my desktop.

I would love to test drive the new Z9PE series, but I can't seem to find one below £400...







Anyway, keeping my eyes peeled. That one supports 2 of those new socket 2011 CPUs.


----------



## Norse

Quote:


> Originally Posted by *NKrader*
> 
> haha my 16core uses like 200watts at full load.
> 
> my power bill thanks me


My DL585 G2 (with 4x quad-core 8347HE) uses about 300 watts now, but it has 32GB of memory, and I'm sure the RAID controller is prolly juicy


----------



## diesel678

Dual node Quad Xeon, 6TB of storage, 64GB ram!!


----------



## mrsmoke

Quote:


> Originally Posted by *diesel678*
> 
> 
> 
> Dual node Xeon server, 6TB of storage, 64GB ram!!


That's pretty awesome! Would love to get my hands on one of those for a VM server.


----------



## NKrader

Quote:


> Originally Posted by *Norse*
> 
> my D585 G2 (with 4xquad core 8347HE) uses about 300 watts now but it has 32GB of memory and im sure the raid controller is prolly juicy


I had those with an Arima board; it was a beast on wattage. I doubt it's the RAID controller.


----------



## Norse

Quote:


> Originally Posted by *NKrader*
> 
> i had those with arima board. was a beast on wattage, doubt its the raid controller.


Well, I don't know about the HP P400, but I know the Dell Perc 5 uses about 60 watts just by itself, so...


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *diesel678*
> 
> 
> 
> Dual node Xeon server, 6TB of storage, 64GB ram!!


Looking nice! Put ESXi on it and you'll be flying with the fairies!


----------



## Norse

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> Luckily the ones I have behave just like modern Ethernet adapters, in that they seem to support auto negotiation, so all you need is two cards, and a cable to do 10Gbps speeds.
> 
> Obviously as mentioned earlier your disks need to be able to keep up to fully saturate that link, 1200MB/sec approx is one 10Gbps link can handle. a DVD image in a few seconds.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Setting up two 10GB ram-disks each end will be a good way of saturating the link, as then your pretty limited only by the cable and the system bus speed.
> 
> Also yes, I did look at 10GBASE-T but those cards were like £400+ each. and switches were like over £1000+. I saw many cheap (£100) CX4 cards available on eBay, and I also spied on the Brocade CX4 switch which had 4 CX4 ports and 48 standard gigabit ports as well, for about £180, plus its managed so its possible to team up some of the gigabit ports.
> 
> Id say the advantage of CX4 is its pretty cheap and it justifies upgrading to it, since I can't fit 10X gigabit cards in a single 8x PCIe slot.
> 
> Fiber to me seems very delicate although it can go potentially many KM, I think I just like the robustness of copper for now.


I am wondering: if you found a card that works on Server 2008, will it also work on 7? I can't seem to find many cheap CX4 cards, and no Intel NE020s at all, on eBay.


----------



## ZFedora

Some updated pictures:



Top to bottom:

Ortronics 48 port patch panel
Cisco Catalyst 2950
Cisco SF-100-16
Belkin 2U cable management
TP-Link TL-SG1024
2x Trendnet TC-P16C5E

On the shelf I have a random assortment of older PCs running as file servers for my home network and a small webserver.

Over the top of the shelf & rack is a ladder rack that Fry's happily sold me for $0.58. Keeps everything a bit more organized.

A better look at the rack switches & patch panels:


----------



## dushan24

Quote:


> Originally Posted by *ZFedora*
> 
> Some updated pictures:
> 
> 
> 
> Top to bottom:
> 
> Ortronics 48 port patch panel
> Cisco Catalyst 2950
> Cisco SF-100-16
> Belkin 2U cable management
> TP-Link TL-SG1024
> 2x Trendnet TC-P16C5E
> 
> On the shelf I have a random assortment of older PCs running as file servers for my home network and a small webserver.
> 
> Over the top of the shelf & rack is a ladder rack that Fry's happily sold me for $0.58
> 
> 
> 
> 
> 
> 
> 
> . Keeps everything a bit more organized.
> 
> A better look at the rack switches & patch panels:


Dude, that's sexy; just clean it up a bit...

TP-Link switches are nice too

/offtopic: what is https://budgetno.de/? It's in your sig but the URL doesn't resolve...


----------



## Oedipus

That's cleaner than a lot of the setups I've seen, regardless of size.


----------



## ZFedora

Thanks guys, I'll be sure to clean it up a bit and post some updates!

And budgetno.de was a past project of mine; it's not doing much anymore. I actually forgot I had it in my sig haha


----------



## Deeeebs

Quote:


> Originally Posted by *ZFedora*
> 
> Thanks guys, I'll be sure to clean it up a bit and post some updates!
> 
> And budgetno.de was a past project of mine, not doing much anymore. I actually forgot I had it in my sig haha


I think you should sleeve all your cables.


----------



## ZFedora

Quote:


> Originally Posted by *Deeeebs*
> 
> I think you should sleeve all your cables.


That'd take a pretty long time haha









Forgot to mention the rack itself. It's a 2 post open frame CPI rack. Same with the ladder as well, it's a CPI.


----------



## Deeeebs

Quote:


> Originally Posted by *ZFedora*
> 
> *That'd take a pretty long time haha
> 
> 
> 
> 
> 
> 
> 
> *
> 
> Forgot to mention the rack itself. It's a 2 post open frame CPI rack. Same with the ladder as well, it's a CPI.


But you know how badass it would look in the end...


----------



## ZFedora

Quote:


> Originally Posted by *Deeeebs*
> 
> But you know how badass it would look in the end...


Very true


----------



## ZFedora

Here's an update guys:


----------



## BiscuitHead

Quote:


> Originally Posted by *ZFedora*
> 
> Here's an update guys:
> 
> 
> Spoiler: Warning: Picture!


That looks great!


----------



## jibesh

Quote:


> Originally Posted by *Norse*
> 
> Quote:
> 
> 
> 
> Originally Posted by *jibesh*
> 
> Excellent because I just picked up 4x Mellanox InfiniBand Dual port cards for $200 lol.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I am intrigued by this, please can you tell me more config on how they see each other etc do you just plug them into each other and they see each other? or does it require a load of settings etc
Click to expand...

Well I installed 2x IB cards (MHGH28-XTC) in my storage server and the other 2 in my VM host servers. I was able to configure them for 10Gb Ethernet and they work great with Windows Server 2012 and MPIO.

I can't seem to figure out how to run them as IB, however. I'm not really sure how to configure them properly, because when I enable IB the status of the cards changes to "Network cable unplugged". Obviously the CX4 cables are good, since 10GbE works over them.

I would really love to run them as 20Gbps InfiniBand (the specs for the cards and cables both show 20Gbps capable) so if anyone has any ideas or suggestions, please let me know.
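One thing worth checking (an educated guess, not a confirmed fix): an IB port reports its link as down until a subnet manager is running somewhere on the fabric, while in Ethernet mode the cards don't need one, which would explain exactly the symptom above. Mellanox's WinOF package ships OpenSM as a Windows service (service name may differ by driver version), so something like this on one node may bring the links up:

```shell
:: Run on ONE machine only - a single subnet manager serves the whole fabric
sc config opensm start= auto
net start opensm

:: Then re-check the adapter status; the IB links should leave the
:: "unplugged" state once the subnet manager has swept the fabric.
```

Direct-connected (switchless) fabrics are the classic case for this, since there's no managed switch to provide the subnet manager for you.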


----------



## ZFedora

Quote:


> Originally Posted by *BiscuitHead*
> 
> That looks great!


Thanks, I appreciate it! I really enjoy racking and re-racking my network equipment. It keeps me busy


----------



## megawatz

Getting rid of Windows Server 2003. It's time to install something worthy on my backup/Minecraft/folding server.

Hello Ubuntu


----------



## wtomlinson

Quote:


> Originally Posted by *megawatz*
> 
> Getting rid of Windows Server 2003. Its time to install something worthy for my backup/minecraft/folding server.


After spending the last 2 years managing nothing but 2008 R2 boxes at work, I had to jump on a 2003 box yesterday to do some troubleshooting. This is how I felt navigating around:









Then I remembered how happy I was when I no longer had to mess with 2003.


----------



## CiBi

I use it mostly for downloading torrents, but it also serves as a print and file server.

*OS:* Windows Server 2003 32bit
*Case:* Zalman Z9
*CPU:* Intel Pentium 4 550 (1 core 2 threads @ 3.40GHz)
*Motherboard:* MSI MS-7091
*Memory:* 4 dimms of 512MB DDR memory (2GB total @ 200MHz)
*PSU:* Stock OEM crap PSU
*OS HDD (If you have one):* 2x Western Digital Raptor 36GB (10,000rpm)
*Storage HDD(s):* External WD drives
*Server Manufacturer (Ex: Dell, HP, You?):* me


----------



## wtomlinson

Quote:


> Originally Posted by *CiBi*
> 
> I use it mostly for downloading torrents but it also serves as a print and fileserver.
> 
> *OS:* Windows Server 2003 32bit
> *Case:* Zalman Z9
> *CPU:* Intel Pentium 4 550 (1 core 2 threads @ 3.40GHz)
> *Motherboard:* MSI MS-7091
> *Memory:* 4 dimms of 512MB DDR memory (2GB total @ 200MHz)
> *PSU:* Stock OEM crap PSU
> *OS HDD (If you have one):* 2x Western Digital Raptor 36GB (10.000rpm)
> *Storage HDD(s):* External WD drives
> *Server Manufacturer (Ex: Dell, HP, You?):* me


Here I was thinking I was the only one to use those Raptors, let alone 2 of them.








They have lasted me a while.


----------



## tiro_uspsss

Quote:


> Originally Posted by *wtomlinson*
> 
> Here I was thinking I was the only one to use those Raptors, let alone 2 of them.
> 
> 
> 
> 
> 
> 
> 
> 
> They have lasted me a while.


I have:

2x 36GB
1x 74 (might have another 2 lying around somewhere)
1x 80GB
2x RaptorX

all still alive & kicking!


----------



## wtomlinson

I'm waiting on my new 80GB to arrive. It's a refurbished WD800HLFS that I picked up for $30-ish shipped (geeks.com) to replace my 2 x 36GB. They've been running strong for a while in RAID0 on my Linux server (Ubuntu Server 12.04), but I am in the process of going low-power, so I'm dropping down to one drive that is SATA II. Between different Windows and Linux setups running 24/7, they've been going strong since 2010.

Plus, I can hear those 2 things chattering over 6 x 500GB drives whenever something is reading from them.







2 x 120mm fans in the front, 2 x 92mm (one on the TX3 and one in front of the PERC), and 2 x 80mm in the rear, and I can STILL hear them seeking.


----------



## Kaboooom2000uk

Quote:


> Originally Posted by *Norse*
> 
> I am wondering if you found a card that works on Server 2008 then it will also work on 7? i cant seem to find many CX4 cards cheap and no Intel NE020 at all on ebay.


I attempted to use the Server 2008 driver with Win 7 ultimate x64 but it did not work.







I also had no joy trying the server 2012 drivers either... it seemed to proceed further into the installation than it did when I was trying the 2008 driver but it moans and doesn't play ball.

I don't have a server OS up and running at this time to test my Intel NE020, however my thoughts are it should be fine.

I have also just ordered some really, really cheap Infiband cards, which should work as LAN adapters. I will see how easy they are to work with.

Ebay item number 251044070879

I also need more cables now my Brocade CX4 switch has arrived


----------



## nerdalertdk

My new server room (work in progress)


----------



## yanksno1

Quote:


> Originally Posted by *nerdalertdk*
> 
> My new server room (work in progress)


Which rack?


----------



## NKrader

Quote:


> Originally Posted by *yanksno1*
> 
> Which rack?


One of those awesome 24U racks that costs more than a 48U...

I wants


----------



## nerdalertdk

Quote:


> Originally Posted by *yanksno1*
> 
> Which rack?


Dell
http://blog.theninjabay.dk/2012/07/01/rack-version-2-0-beta/

Quote:


> Originally Posted by *NKrader*
> 
> One of the awesome 24 u that costs more than a 48u..
> 
> I wants


Hehe, actually this one costs about the same as a 42U. It's 16U high.


----------



## NKrader

Quote:


> Originally Posted by *nerdalertdk*
> 
> Dell
> http://blog.theninjabay.dk/2012/07/01/rack-version-2-0-beta/
> Hehe, actually this one costs about the same as a 42U. It's 16U high.


yeah I wanted that one till I saw the price

picked up two more of these, for a total of three in my file server












soon to look like this


----------



## yanksno1

@nerdalertdk: Very nice job on modifying that rack. Looks really nice. Wish I had those building skills haha.

Can't decide, when I do build a server, whether I want a rack or a desktop version. I do like the look of using hot swap trays (would probably do 3 too) with a desktop. Was thinking about maybe the Antec Three Hundred for that.

@NKrader: For more sata ports, which cards are you going to use for the new trays?


----------



## NKrader

Quote:


> Originally Posted by *yanksno1*
> 
> @NKrader: For more sata ports, which *card*s are you going to use for the new trays?


Narrowed it down to a single Adaptec SAS RAID controller, not sure which model yet; 3x SAS ports, hopefully 512MB of RAM. Eyeing a few.


----------



## nerdalertdk

Quote:


> Originally Posted by *yanksno1*
> 
> @nerdalertdk: Very nice job on modifying that rack. Looks really nice. Wish I had those building skills haha.


Well, it was just 4 cuts with an angle grinder and wuup, you have a small killer rack.


----------



## megawatz

Taking servers apart and putting pieces together. Upgrading my ML570 G2 that has 2x 2.4 Xenons and adding 2x 3.0 Xenons and another 4g of ram.
















Sent from my Transformer Prime TF201 using Tapatalk 2


----------



## tiro_uspsss

that's *Xeons* *XEONS*







how dare you misspell that in the _server_ thread!


----------



## dushan24

Haha, my friend says Xenon too when we're talking, it really annoys me.


----------



## megawatz

Quote:


> Originally Posted by *tiro_uspsss*
> 
> that's *Xeons* *XEONS*
> 
> 
> 
> 
> 
> 
> 
> how dare you misspell that in the _server_ thread!


It wasn't me. My TF Prime likes to "correct" correctly spelled words. I need to disable it, but Swyping is SO MUCH FUN!


----------



## NKrader

got me a few hot swap Bays


----------



## CloudX

Very nice. It's looking good!


----------



## Theloudtrout

Quote:


> Originally Posted by *NKrader*
> 
> got me a few hot swap Bays


Sweet server, man. What hot swap bays are they?


----------



## mrsmoke

I also would like to know what kind of bays those are. Is there a power and HDD activity light for each individual slot?


----------



## broadbandaddict

Quote:


> Originally Posted by *Theloudtrout*
> 
> Sweet server man. What Hot swap bays are they ?


Quote:


> Originally Posted by *mrsmoke*
> 
> I also would like to know what kind of bays those are. Is there a power and HDD activity light for each individual slot?


They are SuperMicro CSE-M35T-1B as listed in his sig.

Newegg link.


----------



## NKrader

Quote:


> Originally Posted by *mrsmoke*
> 
> Is there a power and HDD activity light for each individual slot?


Yes; not a power light, but an activity light, without needing extra cables.


----------



## Kaboooom2000uk

That's a very sexy looking server you have there, NKrader. Are those 6Gbps drives in those bays, and what controller are you planning to use?


----------



## NKrader

Quote:


> Originally Posted by *Kaboooom2000uk*
> 
> That's a very sexy looking server you have there, NKrader. are those 6gbps drives in those bays & What controller are you planning to use?


It's just passthrough.

I'm getting an Adaptec card with 3-4 SAS ports.


----------



## mrsmoke

I have two 2-port SATA to PCIe cards for 40 bucks each, much cheaper than that card. I run FreeNAS so I don't need hardware RAID. If I did, that card would be sweet!


----------



## NKrader

Quote:


> Originally Posted by *mrsmoke*
> 
> I have two 2 port sata to pci-e card for 40 bucks each. much cheaper than that card. I run FreeNAS so i don't need hardware raid. If i did, that card would be sweet!


That rig crunches, so software RAID would cut into points. I also like hardware RAID, since I'm running Win Server 2008.


----------



## hartofwave

Guys... i just got given a HP PROLIANT ML370 G5........ what do i do with such a thing?


----------



## the_beast

Quote:


> Originally Posted by *hartofwave*
> 
> Guys... i just got given a HP PROLIANT ML370 G5........ what do i do with such a thing?


sell it unless you're deaf already or would like to be driven insane by the noise.


----------



## hartofwave

cool, ok


----------



## wtomlinson

Learn something new with it. By that I'm talking about VMware, Hyper-V, Linux, etc... something that you're unfamiliar with.


----------



## blooder11181

send me for free


----------



## lordhinton

Quote:


> Originally Posted by *blooder11181*
> 
> send me for free


beat me to it


----------



## Norse

Quote:


> Originally Posted by *the_beast*
> 
> sell it unless you're deaf already or would like to be driven insane by the noise.


I found the ML370s relatively quiet, apart from boot-up when it sounds like a bloody Chinook.


----------



## the_beast

Quote:


> Originally Posted by *Norse*
> 
> Quote:
> 
> 
> 
> Originally Posted by *the_beast*
> 
> sell it unless you're deaf already or would like to be driven insane by the noise.
> 
> 
> 
> I found the ML370s relatively quiet, apart from boot-up when it sounds like a bloody Chinook.

They are amongst the quietest enterprise servers, but I think your use of the word 'relatively' sums it up well. In a DC they're inaudible, especially compared to a high density blade enclosure for example, but in a home environment they're still way too loud, for my taste at least.


----------



## Oedipus

All of the 11th and 12th generation Dell PowerEdges are very quiet, with the likely exception of the 4-CPU offerings (820, 910, 815, etc.). At moderate load, a T420 (for example) is barely louder than an Optiplex 7010.


----------



## Atomfix

Quote:


> Originally Posted by *hartofwave*
> 
> Guys... i just got given a HP PROLIANT ML370 G5........ what do i do with such a thing?


I'd love a chance to own something like that. If it's too loud, just replace the fans; it's useful for learning new things.

Hell, make it as power friendly as you can, and it could be a good file storage server for your network.


----------



## the_beast

Quote:


> Originally Posted by *Atomfix*
> 
> Quote:
> 
> 
> 
> Originally Posted by *hartofwave*
> 
> Guys... i just got given a HP PROLIANT ML370 G5........ what do i do with such a thing?
> 
> 
> 
> I'd love to have a chance to own something like that, if it's too loud, then just replace the fans, it's useful to learn new things with it
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hell!, Make it as power friendly as you can, it could be a good File Storage server for your network

If you change out the fans it won't POST...


----------



## shadow5555

update to my office/ server setup


----------



## shadow5555

Another small update. Changed out the keyboard setups on the comps, and put in a cable box for TV and a bigger monitor for the middle setup.


----------



## particleman

Here's my server. The primary goal of my server was to minimize noise and power consumption since it would be run 24/7 at home. When I finished, I measured the power consumption using a kill a watt and it came in at 16 watts during normal use, the only time it increases is if Plex is doing some realtime transcoding. Considering my old Asus WL-500W router consumed 13 watts once I plugged in a usb stick for storage, I am pretty happy with the end result.

The server is used for the following:
-File & Print server
-Router
-Wireless Access Point
-Plex Media Server
-utorrent downloader with webui
-pyload
-Simple DNS plus

The Specs:
Intel i5 3470s w/ Thermaltake Slim X3 HSF
DQ77KB ITX motherboard
1TB Western Digital Blue 2.5 inch hdd
8GB DDR1333 ram
Mini PCIe Wireless N Adapter
Minibox M350
Windows Server 2003 x64
DD-WRT x86 in a Virtual Machine

(forgive my crappy camera)


----------



## Ecstacy

Quote:


> Originally Posted by *particleman*
> 
> Here's my server. The primary goal of my server was to minimize noise and power consumption since it would be run 24/7 at home. When I finished, I measured the power consumption using a kill a watt and it came in at 16 watts during normal use, the only time it increases is if Plex is doing some realtime transcoding. Considering my old Asus WL-500W router consumed 13 watts once I plugged in a usb stick for storage, I am pretty happy with the end result.
> 
> The server is used for the following:
> -File & Print server
> -Router
> -Wireless Access Point
> -Plex Media Server
> -utorrent downloader with webui
> -pyload
> -Simple DNS plus
> 
> The Specs:
> Intel i5 3470s w/ Thermaltake Slim X3 HSF
> DQ77KB ITX motherboard
> 1TB Western Digital Blue 2.5 inch hdd
> 8GB DDR1333 ram
> Mini PCIe Wireless N Adapter
> Minibox M350
> Windows Server 2003 x64
> DD-WRT x86 in a Virtual Machine
> 
> (forgive my crappy camera)


That's an awesome build, can you post pictures? Also, how do you have that set up? Is everything running in its own virtual machine, or...? I was thinking about building a mITX all-in-one server like this for a file server, router/firewall, torrenting box, media server, small web server, and HTPC use.

Also, this might be a bit extreme, but this guy is using the same motherboard and got the power consumption down to 5.9 watts by modding the motherboard.


----------



## dushan24

Yeah, more pics, open the case.

On another note, I'm doing basically the same thing right now (mATX low power firewall build), parts arriving today, will probs do a build log and post here.


----------



## Jtvd78

I can post a few pics of my router/firewall if anyone's interested. I posted it a while back in the thread. It uses the same case as above.


----------



## Ecstacy

Quote:


> Originally Posted by *Jtvd78*
> 
> I can post a few pics of my router/firewall if anyones interested. I posted it a while back in the thread. It uses the same case as above


Can you link us to the post or post some pictures? Thanks.


----------



## thenk83

Basically just an ESXi 5.1 box with Windows 2012 serving up files, iTunes, Air Video, UPS management, and RAID management. Then a CentOS 6.3 VM for web and database stuff. More VM's to come though.









OS: VMWare ESXi 5.1
Case: LianLi A04B
CPU: FX-8150
Motherboard: Gigabyte GA-78LMT-USB3
Memory: G.Skill 4GB (x4)
PSU: Corsair CX500
OS HDD (If you have one): Western Digital Black 320GB
Storage HDD(s): Western Digital Red 2TB (x4)
Server Manufacturer (Ex: Dell, HP, You?): Whitebox


----------



## particleman

I will try to post some more pictures with it opened up soon; unfortunately, to do so I need to schedule a bit of downtime. I have read about the fluffy2 build, and his power numbers are pretty amazing. His default power consumption before mods was 11.6 watts. I attribute the difference from his default numbers to: him using an mSATA drive vs my mechanical drive (I don't think solid state would last long with constant torrents), my network ports being active at all times since I use mine as a router, and perhaps a slightly more efficient power supply in his build. The rest of the difference down to 5.9 watts is his mods, but soldering on the motherboard is a bit too extreme for me. Also, since mine is a server, I don't want to risk stability.

I am running Windows Server 2003 x64 as the primary OS and VMware with DD-WRT as a VM. I considered using ESXi and running both as VMs, but my thinking was that the I/O redirection overhead of putting the file server's storage inside a VM image would make it more efficient to run Windows Server 2003 as the primary OS. Pretty much all of my applications are Windows based, so it made the most sense, plus DD-WRT is so compact that it can run completely in RAM. Windows also has slightly better driver support, so the idle power numbers are slightly lower with Windows vs ESXi.
Quote:


> That's an awesome build, can you post pictures? Also how do you have that setup? Is everything running in it's own virtual machine or...? I was thinking about building a mITX all-in-one server like this for a file server, router/firewall, torrenting box, media server, small webserver, and use it as a HTPC.
> 
> Also, this might be a bit extreme, but this guy is using the same motherboard and got the power consumption down to 5.9 watts by modding the motherboard.


----------



## dushan24

Quote:


> Originally Posted by *Jtvd78*
> 
> I can post a few pics of my router/firewall if anyones interested. I posted it a while back in the thread. It uses the same case as above


Do it


----------



## Mactox

Since I've hardly played any video games on the PC for the last 2 months, I decided to change my gaming station into a server dedicated to the few roles I DID still use it for.
She'll be used for streaming media to my PS3 through PS3 Media Server (not the best; if any better alternatives with support for .mkv's exist, I'm happy to hear them), and to serve as a download/torrent station with central storage for my network.
Coming from a powerful SR-2 build, I'm a bit sad I sold it, because now I want to toy around with multiple VMs again :'( but for now this will do (plus it uses a lot less power, especially once I get rid of the GTX580).

Hardware:
CPU: i5 2500k
MB: Asus Maximus IV Extreme ( 2x gigabit LAN ports, 4x PCI-e 2.0 x16)
RAM: 16GB Corsair Vengeance DDR3-1600 CL8
GPU: Asus GTX580DCII
PSU: Antec HCP-1200
Case: Cooler Master HAF X

OS Drive:
- Vertex 3 120GB SSD

HDDs:
- 2x 2TB Samsung Spinpoint F4E
- 2x 1.5TB WD Caviar Green
- 2x 500GB 7200rpm Seagate

Roles

PS3 Media Server (stream/transcode on the fly)
Fileserver
Torrent client

For now I just want to get rid of the power consuming GTX580; I already sold one and am now trying to get rid of the 2nd. I'll probably get a cheap GT610 or so, since I can't use the CPU's internal GPU.
The case is more than enough for now; when I need more space for HDDs I have a brand new Xigmatek Elysium standing in the closet.
Future plans are to expand the storage through 5-in-3 frames for easy access.

I haven't decided yet what OS to run. I want to stick to Windows since that's what I'm used to: either Server 2008 R2, or have a try with Server 2012... or WHS?
I only have experience with Server 2003/2008 so far. Does software RAID pose any problem with the eco HDDs?
The 7200rpm drives are there for backup/applications, as I want to keep the SSD dedicated to the OS only.


----------



## Killbuzzjrad

This was my ESXi whitebox project that I abandoned for a Windows Server 2012 w/ Hyper-V setup. Right now it's just serving up my files on a RAID 10 array. I have a RAID 1 array being used to store the VM images for all the VMs that I play with. The SSD is running Windows 7, Windows 8, and BT5R3 VMs.

Case: Thermaltake Level 10 GTS Snow Edition
CPU: Intel Xeon E3-1245 V2 Ivy Bridge 3.4GHz
Motherboard: ASRock Z77 Extreme4
Memory: Patriot Viper 16GB (2 x 8GB) DDR3 1600
Heatsink: Noctua NH-D14
Boot Drive: Samsung 840 250GB SSD
Data Storage: Seagate Barracuda 2TB x4 RAID 10
Additional VM Drives: Seagate Barracuda 1TB x2 RAID 1
PSU: PC Power and Cooling Silencer MK III 400W Modular 80PLUS Bronze





http://www.overclock.net/t/1351256/build-log-esxi-whitebox


----------



## particleman

Here are a few more photos of my micro server with the case lid off like a couple of you asked for:




Quote:


> Originally Posted by *particleman*
> 
> Here's my server. The primary goal of my server was to minimize noise and power consumption since it would be run 24/7 at home. When I finished, I measured the power consumption using a kill a watt and it came in at 16 watts during normal use, the only time it increases is if Plex is doing some realtime transcoding. Considering my old Asus WL-500W router consumed 13 watts once I plugged in a usb stick for storage, I am pretty happy with the end result.
> 
> The server is used for the following:
> -File & Print server
> -Router
> -Wireless Access Point
> -Plex Media Server
> -utorrent downloader with webui
> -pyload
> -Simple DNS plus
> 
> The Specs:
> Intel i5 3470s w/ Thermaltake Slim X3 HSF
> DQ77KB ITX motherboard
> 1TB Western Digital Blue 2.5 inch hdd
> 8GB DDR1333 ram
> Mini PCIe Wireless N Adapter
> Minibox M350
> Windows Server 2003 x64
> DD-WRT x86 in a Virtual Machine
> 
> (forgive my crappy camera)


----------



## Gunfire

My god. Want.


----------



## NKrader

Quote:


> Originally Posted by *particleman*
> 
> Here are a few more photos of my micro server with the case lid off like a couple of you asked for:
> 
> 
> Spoiler: Warning: Spoiler!


pretty awesome


----------



## pvt.joker

So I came into a pair of Xeon X5550s, and of course I had to cough up for the rest of the parts to get my server upgraded...









Should have my new motherboard, ram, and heatsinks tomorrow. Just need the time to get all my projects done..









server once upgraded should be:
dual x5550's
Supermicro MBD-X8DTL-iF-O
32gb ram
32gb ssd boot
adaptec 31205
9x1tb raid 5
3x3tb raid 5

Probably going to run Server 2012 and Hyper-V to get it all done.
Might actually post a build log if I have the time.


----------



## particleman

Thanks, this is actually my 3rd iteration of my low power micro server.

I started with a single core atom, about 3 years ago. It was OK for file and print serving, but woefully inadequate for everything else, even virtualized routing stressed the CPU. The power consumption of the Atom wasn't that great either, it was 26 watts.

Next I tried it with a Zotac IONITX P-E which used a dual core ultra low voltage core 2 duo, this could handle the routing with ease, but couldn't handle transcoding 1080p in realtime, it also only had one network port so I had to use a USB network adapter for the second port. Power consumption of this system was 22 watts.

Finally, I put this one together, which I think I am going to stick with for the foreseeable future. It is more than fast enough for everything I need, and I don't think I can save much more power, at least not enough to offset the cost of new hardware. Rumors are Haswell will be 20x more efficient at idle, but from my research Ivy Bridge (the CPU itself) already only uses 2 watts when idle, so even if Haswell is 20x more efficient at idle, it is still only saving 2 watts, and I'm not sure how useful a deep sleep state would be on a server. I think Haswell will be more important for phones and tablets, where saving 2 watts and deep sleep states are useful, and there aren't other components that use a lot of power like harddrives. I will be sad to see Intel leave the motherboard market though, their motherboards are the most energy efficient I've come across.
Quote:


> Originally Posted by *NKrader*
> 
> pretty awesome


----------



## hartofwave

Me again (with the ProLiant). I've had a poke around in it: it has 2x 146GB 10k RPM SAS drives, a 1.66GHz Xeon, 2GB of RAM, and what looks like a tape drive (for backing stuff up?). Would it be worth trying to upgrade it and adding sound dampening?


----------



## VictorB

Dumpster find











Asus RS120-E4 server!

http://www.asus.com/Commercial_Server_Workstation/RS120E4PA2/

I added some DDR2 and it POSTs without errors! And I had a quad NIC on the shelf, perfect for this box. The CPU is a Xeon 3050 dual core @ 2.13GHz.

I'm gonna make a video about it for my YouTube channel later!

www.youtube.com/user/victorbart


----------



## NKrader

Quote:


> Originally Posted by *hartofwave*
> 
> Me again,(with the proliant) I have had a poke around in it and it has 2 146gb 10k rpm sas drives, 1.66 Xeon, 2gb of ram and what looks like a tape reader thing (backing up stuff?) Would it be worth trying to up grade it and add sound damperning?


what size tape? sell it toooooo meeee


----------



## hartofwave

It is a hp storageworks ultrium 460


----------



## tycoonbob

Quote:


> Originally Posted by *hartofwave*
> 
> It is a hp storageworks ultrium 460


Which is LTO-2. Natively each tape can store 200GB, roughly double that with compression.


----------



## hartofwave

Quote:


> Originally Posted by *tycoonbob*
> 
> Which is LTO-2. Natively, each tape drive can store 200GB but can double that if compressed.


ok, good to know


----------



## hartofwave

So how effective would sound damping be on something like this?


----------



## the_beast

Sound dampening is, as a rule, pretty much worthless. If it's not fairly quiet to begin with, no amount of damping material is going to make it quiet. And it's VERY hard to make something quiet without affecting its cooling as well; things get very big and expensive if you try.

Either find somewhere it can live that is far enough away, or with thick enough walls, that it doesn't annoy you, or sell it and use the proceeds to build something quiet to begin with.


----------



## hartofwave

Fair enough. Should I try eBay, or is there a better place for used hardware? (Can't be here, I don't have the rep.)


----------



## broadbandaddict

Quote:


> Originally Posted by *hartofwave*
> 
> Fair enough. Should I try eBay, or is there a better place for used hardware? (Can't be here, I don't have the rep.)


Can you sell it locally on Craigslist or something? I'd hate to have to ship something like that, I'm guessing it's pretty heavy.


----------



## hartofwave

Quote:


> Originally Posted by *broadbandaddict*
> 
> Can you sell it locally on Craigslist or something? I'd hate to have to ship something like that, I'm guessing it's pretty heavy.


It's an absolute tonne. I'm also in the UK; I don't think we have Craigslist here.

Edit: yes we do.


----------



## LBGreenthumb

(Not current)

"Dell Inspiron 530"
Windows Home Server 2011
Antec EA650W
Intel Core 2 Duo E8500 3.16GHz
Foxconn G33M02 (Dell BIOS)
Intel Gigabit CT PCI-E Network Adapter
4 GB DDR2 667
WD Black Caviar 1TB HDD (2 on the way)
Cooler Master 370 Elite


----------



## suicidegybe

So my home server rack is coming along nicely. Here are the specs:

StarTech 12U two-post desktop server rack
NETGEAR GS724T 24-port gigabit smart switch
2U patch panel
1U Compaq 4-port KVM switch
TP-Link WDR4300 gigabit router (used as a wireless AP only)
SuperMicro MBD-X7SPE-HF-D525-O w/ 4GB flash drive running pfSense
SuperMicro X8SIA-F, Intel Xeon X3430, 16GB Kingston buffered ECC RAM, Adaptec 2405 RAID 5 w/ 4x 3TB WD Reds, OCZ 120GB Agility 3 boot drive, OCZ 240GB Agility 3 VM drive, 2x Ceton InfiniTV 4s, Server 2008 R2, WHS 2011 in a VM (upgrading to Server 2012 and Server 2012 Essentials once all data migration and backups are complete; I have another SuperMicro X8SIL-V w/ an Intel i3 530 as a spare/backup, so I'm moving everything to that and upgrading this rig to Server 2012 and running some testing before I commit)




I have a 1U SuperMicro server case coming for the pfSense box. After that I plan to put the Xeon rig in a 4U Norco 450B and add some hot swap bays, but I have to wait a little while for that. My setup streams live TV, music, and my 500+ DVD collection (all digital) to all rooms of the house via wired Cat 6; I also have 750Mb WiFi coverage through the entire house. Still working on a sound-dampened door for the rack area, which is in an insulated attic crawl space. I do have UPS protection for everything via a CyberPower 900W and a 450W consumer UPS; they give me about 25 min of runtime under normal usage. But I will have to figure out some additional cooling for the summer; the space is insulated, but like any upper floor of a house it can get warm. More to come.


----------



## Citra

Nice setup!


----------



## Junior82

Some updates to my rack. Finally got rid of my old Untangle router, and in with the new (well, used; new to me): a Dell R200 with an E7400 and 4GB RAM, plus a dual-port Intel NIC I added. While I was at it I reconfigured my main network. Waiting for another set of dual-port NICs for my ESXi servers; they should be here Monday.
Hoping to pick up a Dell C6100 within the next couple of months.


----------



## jerry1234

OK, just setting this up:

Dell T110 II
16GB of ECC RAM
Two WD Black drives, one 1TB, the second 500GB
LG Blu-ray burner drive
Tandberg RDX drive with (at this time) one 640GB Imation cartridge
Gigabit Ethernet PCIe card

I have installed Slackware Linux 14.0 on this fine machine, which will provide the following services ( when I get it all working )

* Routing between the Internet and the local net.
* My own extra-paranoid firewall
* mailserver
* Web server, serving dynamic content in support of my business, generated by approximately 100,000 lines of Perl. All secured by HTTPS.
* Samba server for Windows filesharing. All personal and business data is kept on the server. My motto: never put anything on a Windows machine that you can't buy at the store!
* Two webmail clients.
* OpenVPN server for remote access.
* Regular backup/mirroring over the local net

All of this already works on the existing server. Cutting over is not fun; it takes a LONG time, during which the new server is kept updated by an rsync script. I have an ISP account with 8 public IP addresses, so to cut over I will just change the DNS records and my domain will go to the new machine.

- jerry1234


----------



## akshep

Just got my first real server in the mail today. A friend of mine didn't want it anymore so I got it for free.



It's a Dell PowerEdge SC1435
2x AMD Opteron 2218s, I believe @ 2.6GHz
8GB of ECC Dell RAM
1TB Hitachi 7.2K RPM HDD
2 gigabit Ethernet cards
Windows Server 2012 Datacenter Edition

I have no idea what I am going to do with it; any suggestions? My friends want me to use it to host a Minecraft server, but I don't believe this is the best use for it. And that's not its final home btw, I just have no room in my dorm.


----------



## selectstriker2

Quote:


> Originally Posted by *akshep*
> 
> Just got my first real server in the mail today. A friend of mine didn't want it anymore so I got it for free.
> 
> 
> 
> Its a Dell Poweredge SC1435
> 2 AMD Opteron 2218's I believe @ 2.6Ghz
> 8GB of ECC Dell RAM
> 1 TB Hatachi 7.2K RPM HDD
> 2 Gigabit Ethernet Cards
> Windows Server 2012 Database Edition
> 
> I have no idea as to what I am going to do with it, any suggestions? My friends want me to use it to host a minecraft server but i dont believe this is the best solution for that. And thats not its final home btw I just have no room in my dorm.


That's a nice freebie. It would do just fine as a minecraft server


----------



## akshep

Quote:


> Originally Posted by *selectstriker2*
> 
> That's a nice freebie. It would do just fine as a minecraft server


You don't think it's overkill for 10-ish players?


----------



## x_HackMan

Quote:


> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?


Doesn't have to be all it does.


----------



## selectstriker2

Quote:


> Originally Posted by *x_HackMan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?
> 
> 
> 
> Doesnt have to be all it does.

This and you can run something like the dynmap plugin for bukkit


----------



## Ecstacy

Quote:


> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?


It's overkill, but you can have it run other tasks as well.


----------



## wtomlinson

Quote:


> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?


Nothing is overkill here.


----------



## Norse

Quote:


> Originally Posted by *hartofwave*
> 
> It's an absolute tonne. I'm also in the UK; I don't think we have Craigslist here.
> 
> Edit: yes we do.


Part it out: take the server apart and just try to sell the bits. It might take longer, but you'll get more money than selling it as a whole, especially if you won't post the thing.


----------



## NKrader

Quote:


> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?


Nope, I have a 16-core NAS; nothing is ever overkill.


----------



## Blindsay

Quote:


> Originally Posted by *NKrader*
> 
> Nope, I have a 16-core NAS; nothing is ever overkill.


yup this is overkill.net err overclock.net


----------



## Jtvd78

Quote:


> Originally Posted by *akshep*
> 
> You dont think its overkill for 10ish players?


try some of these things


----------



## JayXMonsta

Description / Usage: Home File Server

Music Streaming (to all my computers via iTunes home share, and to media center)
Streaming old TV shows (from MC to Xbox MCE & computers; I automatically move TV shows off the media center after they air, convert them into MP4 files, remove the commercials, and keep them organized on this server for streaming whenever I want them)
Daily Backups

*OS*: Windows Home Server 2011
*Case*: Dell Mini Tower Optiplex
*CPU*: Core2Duo 3.0GHz
*Motherboard*: Optiplex Mobo
*Memory*: 2GB
*PSU*: 250 Watts
*OS HDD*: 60GB partition (starting to think this is stupid for reformatting; think I'm gonna get a 64GB flash drive for the OS moving forward)
*Storage HDD(s)*: 2TB (RAID 1)
*Server Manufacturer*: Dell


----------



## Darkcyde

Here is my simple low power media/file server. Specs are in my sig.


----------



## Shev7chenko

Quote:


> Originally Posted by *Darkcyde*
> 
> Here is my simple low power media/file server. Specs are in my sig.


Nice.


----------



## akshep

Quote:


> Originally Posted by *Jtvd78*
> 
> try some of these things


Thanks for the link. I'll look into some of these things.


----------



## broadbandaddict

Finally got around to updating my server.










Spoiler: Pictures













2.6GHz Pentium G620, 8GB DDR3 1333, MSI Z68A-GD65, Antec 550w Platinum PSU, Rosewill Server Chassis with 12 hotswap bays.

I've got three 3TB drives right now (working on getting that Seagate out in favor of another WD RED) and one 2TB in the array which gives me 8TB of space. The 1TB is a cache drive, everything is written to it first and moved over once a week to the array. The 500GB Blue is an "application drive" that I run Transmission off of.
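That weekly cache-to-array move can be sketched as a small "mover" script. This is a generic illustration of the pattern, not broadbandaddict's actual setup; the temp directories stand in for hypothetical /mnt/cache and /mnt/array mount points.

```shell
# Sketch of a "mover": shift everything off the fast cache drive onto the
# big array, preserving the directory layout. Demo dirs stand in for the
# real (hypothetical) /mnt/cache and /mnt/array mounts.
set -eu
CACHE=$(mktemp -d)
ARRAY=$(mktemp -d)
mkdir -p "$CACHE/tv"
printf 'episode\n' > "$CACHE/tv/show.mkv"

cd "$CACHE"
# A real weekly cron job would add an age filter (e.g. -mtime +7) so
# freshly written files stay on the faster cache drive for a while.
find . -type f | while IFS= read -r f; do
    dest="$ARRAY/${f#./}"
    mkdir -p "$(dirname "$dest")"
    mv "$f" "$dest"
done
```

The same loop works unchanged whether the array is one big filesystem or a pooled mount, since it only relies on plain `find`, `mkdir -p`, and `mv`.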


----------



## fishy0689

Quote:


> Originally Posted by *broadbandaddict*
> 
> Finally got around to updating my server.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Pictures
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2.6GHz Pentium G620, 8GB DDR3 1333, MSI Z68A-GD65, Antec 550w Platinum PSU, Rosewill Server Chassis with 12 hotswap bays.
> 
> I've got three 3TB drives right now (working on getting that Seagate out in favor of another WD RED) and one 2TB in the array which gives me 8TB of space. The 1TB is a cache drive, everything is written to it first and moved over once a week to the array. The 500GB Blue is an "application drive" that I run Transmission off of.


How do you like the Rosewill case? I've been considering buying it for my FreeNAS server, but I've seen mixed reviews on Newegg.

Here is my mess. Bottom is my FreeNAS rig using an AMD E-450, 8GB RAM, with 4x 2TB drives in RAID-Z. It gives backup space to the household, plus network media storage for a PS3 and a WDTV Live box. The other two are just a C2D and a C2Q, running BOINC 24/7 on CPU.



Spoiler: Warning: Spoiler!


----------



## broadbandaddict

Quote:


> Originally Posted by *fishy0689*
> 
> How do you like the rosewill case? I've been considering buying it for my freenas server, but I've seen mixed reviews on newegg.
> 
> Here is my mess. bottom is my freenas rig using an amd e450, 8gb ram, with 4x2tb drives in raid z. it gives backup space to the household, and media network storage for a ps3 and wdtv live box. The other two are just a c2d and c2q, running boinc 24/7 on cpu.
> 
> 
> 
> Spoiler: Warning: Spoiler!


It's a pretty good case. Occasionally the hotswap bays need a drive reseated before it shows up, but I don't swap drives all that often so it works out OK. The fans that come in them are junk; plan on replacing them. There is also a plastic insert behind the front cover that holds a filter on; I cut, sanded and painted mine down to just the outer frame so that it would stop making noise. I'll see if I have a picture on here. They're pretty spacious though, and very sturdy: mine weighs about 40 or 50 pounds fully loaded and you can carry it by the bar on top with little to no flex. For the money I think they're pretty great; I got mine on sale for about $120, I think. If you aren't going to swap drives very much you'd be better off getting one of the cheaper ones though; I've got one of them too and it works well.

Before:

After:


----------



## shadow5555

small update:

white box which is my untangle box got upgraded last night

old:
p4 2.8 with 1.5gig ddr1
40gig hd
2nd gig nic

new:
core 2 duo 2.4
2gig ddr2
40gig hd
2nd gig nic

Seems to run things a lot better now.

I might be getting an i5 build later today, not sure yet. I'm currently at work.


----------



## driftingforlife

I'm using 2 old HDDs to test and set up. WD Reds next month









Its role is a file server and download box that I can leave on 24/7, as my internet sucks









MSI B75A-G41
Intel Pentium G2020
Corsair Memory XMS3 4GB DDR3 1333 Mhz
Highpoint RocketRAID RR2720SGL 6Gb/s 8 Port RAID Controller
Codegen 4U Rackmount 600mm Deep Server Case
Corsair TX650w


----------



## Jtvd78

Quote:


> Originally Posted by *broadbandaddict*
> 
> Finally got around to updating my server.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Pictures
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2.6GHz Pentium G620, 8GB DDR3 1333, MSI Z68A-GD65, Antec 550w Platinum PSU, Rosewill Server Chassis with 12 hotswap bays.
> 
> I've got three 3TB drives right now (working on getting that Seagate out in favor of another WD RED) and one 2TB in the array which gives me 8TB of space. *The 1TB is a cache drive, everything is written to it first and moved over once a week to the array*. The 500GB Blue is an "application drive" that I run Transmission off of.


What effect does the cache drive have on the storage array, speed-wise?


----------



## broadbandaddict

Quote:


> Originally Posted by *Jtvd78*
> 
> What effect does the cache drive make to the storage array speed wise?


I went from 35-45MB/s without a cache to 110MB/s with it. The Red drive slows down a bit after ~800GB of data, to about 65-75MB/s. I have the server set to sync the data weekly to the array.


----------



## i_ame_killer_2

Bought a 22U rack from HP







I do not own any server computers yet but planning on buying some. I have a ReadyNAS Pro 4 and I love it, it welcomed me to the server world


----------



## CloudX

Nice rack!

lol


----------



## driftingforlife

Server is in place. Getting the HDDs next month


----------



## NKrader

Quote:


> Originally Posted by *i_ame_killer_2*
> 
> Bought a 22U rack from HP
> 
> 
> 
> 
> 
> 
> 
> I do not own any server computers yet but planning on buying some. I have a ReadyNAS Pro 4 and I love it, it welcomed me to the server world


I wanted one of those so bad...

unfortunately they are crazy expensive new and are rarely seen used


----------



## tycoonbob

Dell PowerEdge C1100
CPU: Dual Quad Core Xeon L5520 (2.26 Ghz with HyperThreading -- 16 Threads)
RAM: 36GB DDR3-1066R ECC
HDD: x1 160GB Hitachi 7200RPM for OS; x2 3TB Toshiba DT01ACA300 in Mirrored Storage Space for VM storage
NIC: Dual Gigabit + 10/100 Management NIC
PSU: Single 350W Delta

Purpose:
Running Windows Server 2012 with Hyper-V. Currently holding 9 VMs, with 16GB of RAM free. CPU utilization is rarely above 30%.


Spoiler: Pictures


----------



## jibesh

Quote:


> Originally Posted by *tycoonbob*
> 
> Dell PowerEdge C1100
> CPU: Dual Quad Core Xeon L5520 (2.26 Ghz with HyperThreading -- 16 Threads)
> RAM: 36GB DDR3-1066R ECC
> HDD: x1 160GB Hitachi 7200RPM for OS; x2 3TB Toshiba DT01ACA300 in Mirrored Storage Space for VM storage
> NIC: Dual Gigabit + 10/100 Management NIC
> PSU: Single 350W Delta
> 
> Purpose:
> Running Windows Server 2012 with Hyper-V. Currently holding 9 VMs, which 16GB of RAM free. CPU utilization is rarely above 30%
> 
> 
> Spoiler: Pictures


How loud is it?


----------



## tycoonbob

Quote:


> Originally Posted by *jibesh*
> 
> How loud is it?


Full fan power (i.e., startup) is probably around 60 dBA.
With 9 or so VMs running, and the server uptime over 20 days, it is probably running around 40 dBA or so. It is actually a little quieter than my storage server:

Norco RPC-4224
3 x 120mm bgears b-Blaster 120 (35 dBA each)
2 x 80mm bgears b-Blaster 80 (39 dBA each)
Xeon 1220 v2
8GB DDR3 ECC RAM
LSI MegaRAID 9261-8i
HP SAS Expander

I have it running in my home office right now and the fan behind me is louder. I don't think you could comfortably run it in a bedroom though. It will soon be in my server rack in the garage, once I finish that up.


----------



## NKrader

no more noisy server grade sanace fans for my hotswap bays


----------



## i_ame_killer_2

Quote:


> Originally Posted by *NKrader*
> 
> I wanted one if those so bad..
> 
> unfortunately they are crazy expensive new and are rarely seen used


It was just an accident that I found it on an auction site. Really like it so far. Heavy as hell







We had to push it all the way home through town and on trams etc., as I do not have a car (studying).







Well worth it though. Looking for some Atom microservers for a firewall, and then some heavier gear for VMs etc.


----------



## tiro_uspsss

Quote:


> Originally Posted by *i_ame_killer_2*
> 
> We had to push it all the way home trough the town and on trams etc as I do not have a car (studying).
> 
> 
> 
> 
> 
> 
> 
> Well worth it thought.










I can so imagine you Swedish lads doing that, ROFL! Nonetheless... PICS OR IT NEVER HAPPENED!


----------



## i_ame_killer_2

Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> I can so imagine you Swedish lads doing that ROFL! none the less.................. PICS OR IT NEVER HAPPENED!


Hahah, took us about 2 hours to get it on 1 tram and then going about 3km through the inner city to where we live









On the tram. We had to ask for help to lift it off


----------



## CloudX

Quote:


> Originally Posted by *i_ame_killer_2*
> 
> Hahah took us about 2 hours to get it on 1 tram and then going about 3km thought the inner city to where we live
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On the tram. We had to ask for help to lift it off


That just makes the score even sweeter! Haha good times.


----------



## driftingforlife

My server temps. It is in the conservatory


----------



## Jeci

Quote:


> Originally Posted by *driftingforlife*
> 
> My server temps. It is in the conservatory


Haha nice temps - wouldn't moisture be a potential issue if it's a in a conservatory?


----------



## u3b3rg33k

So long as it's warmer than the surrounding air, it should be GTG.


----------



## tiro_uspsss

Quote:


> Originally Posted by *i_ame_killer_2*
> 
> Hahah took us about 2 hours to get it on 1 tram and then going about 3km thought the inner city to where we live
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On the tram. We had to ask for help to lift it off


awesome!

now where's the video!?

jk!


----------



## techx86

I looked through all 140 pages and enjoyed every post, so I thought I'd post mine









Intel S5000PSL Motherboard
Dual Intel Xeon E5405 Quad-Core, Socket LGA771 @ 2.0GHz (3.0GHz once I have enough to upgrade) w/ 2x SuperMicro SNK-P0034AP4
8GB DDR2 PC5300 FB-DIMM Samsung RAM (Adding another 8GB soon) W/ G.SKILL FTB-3500C5-D Cooler
1x 60GB OCZ Vertex SSD
7x 1TB SATA-II Hard Drives (For storage)
1x 250GB IDE (For iTunes only, being removed soon)
SuperMicro AOC-SAT2-MV8 8-Port SATA2 Controller (Used for JBOD)
An internally-mounted Raspberry-Pi for "firewall" purposes (Similar to the one on the desk in front)
And a Hauppauge PCI WinTV-PVR-350 (Cheap and effective)

My OS of choice is Ubuntu 11.10 x64, but I also run a few XP VMs for the things that aren't made for Ubuntu.
Used mostly for TS3, Minecraft, VirtualBox, ownCloud, Samba (storage), a Steam server, and whatever else I need.


----------



## tiro_uspsss

Quote:


> Originally Posted by *techx86*
> 
> Dual Intel Xeon E5405 Quad-Core Socket LGA771 @ 2.0Ghz (3.0Ghz once i have enough to upgrade) W/ 2x SuperMicro SNK-p0025p


What heatsinks are you using? The ones in your pic look different from the ones you mentioned.








nice box btw!









Edit: it seems you may have the incorrect part number; I think your heatsinks are SNK-P0034AP4


----------



## Naz

Here's mine:

OS: Windows 8 Pro x64
Case: Coolermaster Elite 310
Mobo: ASUS F1A75-M LE
Daughterboard: IOCrest 4 port Sata III expansion
CPU: AMD A4 3400
CPU Cooler: Coolermaster GeminII M4
RAM: OCZ DDR3 PC3-12800/ 1600MHz / Lynnfield Memory / Fatal1ty Edition / Dual Channel
PSU: FSP Aurum 400W Gold
Storage HDD(s): 4 x 3TB WD Green EZRX (Parity Storage Space) / 4 x 250GB Hitachi Travelstar (2-way mirror Storage Space)
OS HDD: Intel X25-M G2 80GB
Server Manufacturer: Me!





Full specs in sig. I use it for:
1.) Plex server (inc transcoding)
2.) Print server
3.) File server
4.) Seed box
5.) Local backups
6.) Personal cloud


----------



## dushan24

Quote:


> Originally Posted by *techx86*
> 
> I looked through all 140 pages and enjoyed every post. So i thought i'd post mine
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Intel S5000PSL Motherboard
> Dual Intel Xeon E5405 Quad-Core Socket LGA771 @ 2.0Ghz (3.0Ghz once i have enough to upgrade) W/ 2x SuperMicro SNK-p0025p
> 8GB DDR2 PC5300 FB-DIMM Samsung RAM (Adding another 8GB soon) W/ G.SKILL FTB-3500C5-D Cooler
> 1x 60GB OCZ Vertex SSD
> 7x 1TB SATA-II Hard Drives (For storage)
> 1x 250GB IDE (For iTunes only, being removed soon)
> SuperMicro AOC-SAT2-MV8 8-Port SATA2 Controller (Used for JBOD)
> An internally-mounted Raspberry-Pi for "firewall" purposes (Similar to the one on the desk in front)
> And a Hauppauge PCI WinTV-PVR-350 (Cheap and effective)
> 
> My OS of choice is Ubuntu 11.10 x64, But i also run a few XP VMs for the things that aren't made for Ubuntu.
> Used mostly for TS3, Minecraft, VirtualBox, Owncloud, Samba (Storage), Steam Server, and Whatever else i need.


What FW distro are you running on the Pi?

And how do you get the 2 NICs?

I assume one on the board and a USB adapter...


----------



## uberdum05

Mine are pretty terrible, but you know, old hardware with normally no uses...

This one just runs my domain and file server once I get it set up, and it will do for playing with some stuff and learning from.
CALSRV:
OS: Server 2008
Case: Stock HP one
CPU: Pentium 4 HT (cannot OC with current motherboard...)
Motherboard: Some Lite-On one with the brand not even stamped on the board
RAM: 4x DDR or DDR2 sticks - 2x 256MB, 2x 512MB
PSU: Cheap nasty chinese one, 400W
OS HDD: Maxtor 10GB. God knows how old
Storage HDDs: 500GB Seagate, 40GB Maxtor (all drives are IDE)
Server manufacturer: HP (HP Compaq DX2000)

Just runs my home automation web interface, DLNA client (endpoint?) and web server (apache w/ PHP5 and mySQL)
PiSRV:
OS: Raspbian
Case: Custom built
CPU: Broadcom

Will get pics tomorrow.


----------



## techx86

SNK-P0034AP4 is indeed what I am using; no idea where P0025P came from









The Raspberry Pi is just running Raspbian. It's more of a gateway than a full firewall. For whatever reason, TS3 or Minecraft attract a lot of SSH login attempts (mostly from China), so I direct all SSH traffic to the Pi and access my network from there. This is only used when I need to modify something from work or my phone.
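That flood of brute-force attempts is easy to quantify from sshd's log. A small sketch (the log lines in the test are made-up examples; real auth.log formats vary slightly by distro):

```python
import re
from collections import Counter

# Matches typical sshd "Failed password" lines; exact format varies by distro.
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(log_lines) -> Counter:
    """Count failed SSH login attempts per source IP address."""
    hits = Counter()
    for line in log_lines:
        m = FAILED_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits
```

Feed it the contents of `/var/log/auth.log` and the top few entries of the counter are usually the bots worth blocking.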

The Pi is powered by the internal USB port on the S5000PSL. I am using a USB power-only cable so as not to confuse my server or the Pi with strange devices. Also, the Ethernet cable is running out of an old Wi-Fi PCI bracket with a rubber grommet. It may not be ideal (or pretty), but it works.

I have no idea who makes the case (since it doesn't say). I got it off craigslist for $40 and it supports EATX and it will fit in a standard rack so i can't really complain.

I'll post a better pic later when i get home. I didn't realise how hard it was to see everything


----------



## herkalurk

My lovely server resides in my basement, so the temps are AMAZING right now, 'cause it's 20°F outside.

Code:


/dev/sda: WDC WD3000HLFS-01G6U1: 17°C
/dev/sdb: Hitachi HDT721010SLA360: 22°C
/dev/sdc: ST1000LM024 HN-M101MBB: 17°C
/dev/sdd: ST1000LM024 HN-M101MBB: 19°C
/dev/sde: SAMSUNG HD103SJ: 18°C
/dev/sdf: WDC WD3000HLHX-01JJPV0: 16°C
/dev/sdg: OCZ-SOLID3: 30°C

That Hitachi drive has always run warm...


----------



## tiro_uspsss

Quote:


> Originally Posted by *herkalurk*
> 
> My lovely server resides in my basement, so the temps are AMAZING right now, cause it's 20 F outside.
> 
> Code:
> 
> /dev/sda: WDC WD3000HLFS-01G6U1: 17°C
> /dev/sdb: Hitachi HDT721010SLA360: 22°C
> /dev/sdc: ST1000LM024 HN-M101MBB: 17°C
> /dev/sdd: ST1000LM024 HN-M101MBB: 19°C
> /dev/sde: SAMSUNG HD103SJ: 18°C
> /dev/sdf: WDC WD3000HLHX-01JJPV0: 16°C
> /dev/sdg: OCZ-SOLID3: 30°C
> 
> That Hitachi drive has always run warm...


The weird symbols are confusing me a little, but if those are all Celsius, then those drives are way too cold, sir! Best to have them around 35-45°C. Google, I believe, did some research on optimal temps for HDDs, and IIRC that's where those figures come from. Those kinds of temps are good for _electronics_ but not for _mechanics_.
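If you want to check your own drives against that band, here's a small sketch that parses hddtemp-style output like the post above (the 35-45°C range is the rule of thumb cited here, not a hard spec):

```python
import re

# One hddtemp-style line per drive, e.g. "/dev/sda: WDC WD3000HLFS-01G6U1: 17°C"
LINE_RE = re.compile(r"^(/dev/\w+): (.+): (\d+)°C$")

def parse_hddtemp(output: str) -> dict:
    """Map device path -> reported temperature in °C."""
    temps = {}
    for line in output.strip().splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            temps[m.group(1)] = int(m.group(3))
    return temps

def outside_band(temps: dict, low: int = 35, high: int = 45) -> dict:
    """Drives whose temperature falls outside the low-high comfort band."""
    return {dev: t for dev, t in temps.items() if not low <= t <= high}
```

Note that an SSD (like the OCZ in the list) doesn't have spinning mechanics, so the band argument doesn't really apply to it.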


----------



## Killbuzzjrad

This was going to be an ESXi whitebox, but I didn't want to buy a RAID card atm. So now it's just running Windows Server 2012 with Hyper-V. I haven't been able to mess with it much, and I probably won't until I get a RAID card and finally run ESXi; I don't want to get my virtual environment all set up just to switch over to ESXi. So right now it's running a couple of VMs to mess around with, and it's my file server.

Specs:
CPU: Intel Xeon E3-1245 V2 Ivy Bridge 3.4GHz
Motherboard: ASRock Z77 Extreme4
Memory: Patriot Viper 16GB (2 x 8GB) DDR3 1600
Heatsink: Noctua NH-D14
Boot Drive: Samsung 840 250GB SSD
Data Storage: Seagate Barracuda 2TB x4 RAID 10
VM Storage: Seagate Barracuda 1TB x2 RAID 1
PSU: PC Power and Cooling Silencer MK III 400W Modular 80PLUS Bronze
Case: Thermaltake Level 10 GTS Snow Edition


----------



## spikes746

Wow nice PC room, lots of room for extra rigs! I especially like that desk/tabletop


----------



## blooder11181

the dog is sad "where is my bed here?"


----------



## lowfat

Celeron G555
P8H77-I
LSI 9211-8i HBA
8GB KVR 1333
40GB X25-V boot drive
25TB Drive Bender storage pool.
450W Silverstone
Lian Li PC-Q25B





network transfer speed










----------



## TrueTroop

They are not much anymore, but it is a lot harder to optimize such old systems, and that's part of the reason I do it instead of just building a "typical" server with new components. It's not the amount of RAM you have that matters but how you use it









Dell Desktop (top):
-Ubuntu Server 12.04 LTS
-512MB RAM, 1GHz Pentium 3 CPU, 40GB IDE HDD
-gigabit NIC
-used for linux things (like web server, file server, vpn, etc)

Generic 1U (bottom):
-Windows Server 2003 R2
-2GB RAM, 2.8GHz Pentium 4 CPU, 1 x 120GB IDE HDD, 1 x 80GB IDE HDD
-gigabit NIC
-used for windows things (like gaming servers mainly in my case)



I received both systems free from work; they're sitting noisily in the corner of my closet...


----------



## Dark-Asylum

What do you guys think of buying pre-owned (by businesses) HP ProLiant servers versus building your own? I'm thinking of getting an ML350 G5 on eBay with redundant PSUs and 2x quad-core Xeon processors. Also, is ECC RAM really necessary for a home server build if I go custom?


----------



## Gunfire

Quote:


> Originally Posted by *Dark-Asylum*
> 
> what do you guys think of buying pre-owned(by businesses)HP proliant servers versus building your own?? I'm thinking of getting a ML350 G5 on ebay with redudant PSUs and 2x Quad core Xeon processors. Also is ECC ram really necessary for a home server build if I go custom?


Noise. Expect a lot of fan noise.


----------



## driftingforlife

Quote:


> Originally Posted by *lowfat*
> 
> network transfer speed
> 
> 
> 
> 
> 
> 
> 
> 
> http://hostthenpost.org


Are you using port trunking or something else?


----------



## tycoonbob

Quote:


> Originally Posted by *driftingforlife*
> 
> Are you using port trunking or something else?


Has to be. That's the speed of 2 gigabit links, so he would have to be using MPIO or something similar.
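The math behind that: a gigabit link is 125 MB/s on the wire, and after Ethernet/IP/TCP overhead roughly 94% of that is usable for file transfer (the efficiency figure here is an approximation, not a measured value):

```python
def link_throughput_mb_s(links: int, gbit_per_link: float = 1.0,
                         efficiency: float = 0.94) -> float:
    """Approximate usable file-transfer rate (MB/s) over N aggregated links.
    1 Gbit/s is 125 MB/s raw; roughly 6% goes to framing/protocol overhead."""
    return links * gbit_per_link * 125.0 * efficiency
```

One link tops out near 117 MB/s; two links come to roughly 235 MB/s, which is about what a dual-gigabit SMB multichannel transfer shows.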


----------



## driftingforlife

That's what I was thinking.


----------



## dushan24

EDIT: Posted in wrong thread


----------



## lowfat

Quote:


> Originally Posted by *driftingforlife*
> 
> Are you using port trunking or something else?


Sort of. I am using LACP on my workstation but not on the server. But Windows 8 supports SMB multichannel so it splits traffic on its own without any trunking.


----------



## mitchtaydev

Quote:


> Originally Posted by *lowfat*
> 
> Sort of. I am using LACP on my workstation but not on the server. But Windows 8 supports SMB multichannel so it splits traffic on its own without any trunking.


I don't understand. If you are using LACP on your workstation but not the server, wouldn't the bandwidth still be limited by the network connection at the server end? I would have thought you needed link aggregation on both sides to see speeds like that?


----------



## tycoonbob

Quote:


> Originally Posted by *mitchtaydev*
> 
> I don't understand. If you are using LACP on your workstation but not the server wouldn't the bandwidth still be limited by the network connection at the server end? I would have though you needed link aggregation on both sides to see speeds like that?


He has to have at least 2 NICs on the server, or a 10Gbps NIC (along with a switch that supports 10GigE).


----------



## lowfat

Quote:


> Originally Posted by *mitchtaydev*
> 
> I don't understand. If you are using LACP on your workstation but not the server wouldn't the bandwidth still be limited by the network connection at the server end? I would have though you needed link aggregation on both sides to see speeds like that?


I do have two connections on the server. But they aren't teamed.


----------



## CloudX

Quote:


> Originally Posted by *lowfat*
> 
> I do have two connections on the server. But they aren't teamed.


Windows Server 2012?


----------



## lowfat

Quote:


> Originally Posted by *CloudX*
> 
> Windows Server 2012?


Win 8 Pro.







But it was $15.


----------



## CloudX

That's what both machines are running?


----------



## lowfat

Quote:


> Originally Posted by *CloudX*
> 
> That's what both machines are running?


Yes.


----------



## Irisservice

Quote:


> Originally Posted by *lowfat*
> 
> Yes.


TYTYTYTYTY









Uninstall Intel teaming... let Windows handle it, and bam, 50% increase in network speed


----------



## lowfat

Quote:


> Originally Posted by *Irisservice*
> 
> TYTYTYTYTY
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Uninstall intel Teaming...let windows handle and bam 50% increase in network speed


For some reason I have had to use teaming on my workstation. Speeds just don't scale at all when both are independent. Haven't figured it out but I can't complain since it works now.


----------



## CloudX

And so many people cry about Windows 8...


----------



## Oedipus

Quote:


> Originally Posted by *CloudX*
> 
> And so many people cry about windows 8....


It's fine once you get Start8 or Classic Shell installed.


----------



## CloudX

Quote:


> Originally Posted by *Oedipus*
> 
> It's fine once you get Start8 or classic shell installed.


At first I didn't alter Windows 8 at all; gave it a chance, I guess. I've had it installed since a little after launch. Last week I grabbed Start8, and I will admit that I'm glad to have the Windows 7 start menu back. It was not the end of the world without it, though. I like keyboard shortcuts, so I wasn't as bothered by the new Start menu, which I never use, I guess.


----------



## G33K

Quote:


> Originally Posted by *Oedipus*
> 
> It's fine once you get Start8 or classic shell installed.


It's fine without either installed. I might try one of them anyway though.


----------



## Chicken_Lover

Unfinished server project... getting there. I've been working on my shed, which has been cut in half: front half workshop, back half computer/home theatre room. If you're going to have a doghouse (somewhere to get away from the missus), might as well make it a comfy one.

The server is running an old P4, 1GB RAM, Windows 2003. Serving up movies, data, backups, print server, etc.

Soon to be updated with a Core 2 Duo, 4GB RAM, and maybe Windows 2011.

Need to install some more black metal plates and a row of fans up top to exhaust the hot air, plus a bit of cable management.

More pics soon.


----------



## ez12a

Currently sitting unused.


----------



## Snyderman34

Here's my server:



Bought it barebones on Newegg. Running an AMD E-350 APU (with the HD 6310 built in), 4GB RAM, and a 500GB HDD I had laying around. Running Windows 7 on it right now, with XBMC handling the video and music sharing. Also have a private Minecraft server running on it at the same time. For a little thing, it runs surprisingly well. Hoping I'll get to do another build soon though, and build myself a proper server


----------



## VictorB

I made a ZFS server hardware tutorial to use older hardware!


----------



## pm40elys40

Just revamped our little home server with a fanless PSU.


----------



## tycoonbob

Quote:


> Originally Posted by *pm40elys40*
> 
> Just revamped our little home server with a fanless PSU.


Loving the case. Specs?


----------



## pm40elys40

Quote:


> Originally Posted by *tycoonbob*
> 
> Loving the case. Specs?


Case is a Travla TE1160 which is not manufactured anymore, 1U modified with fully vented cover and external Scythe SY1225SL12L fan. 4x3.5" front hot swap bays, 2x2.5" internal, mITX, 2xFH/HL expansion bays, slim ODD, IDE CF reader (disabled).
Intel Core 2 Duo T7250 running fanlessly on a MSI FUZZY GME965 mobo, all copper passive 1U heatsink.
2x2048MB Kingston KVR800D2N5/2G.
Optiarc AD7590A slim DVD-RW.
Boot drive OCZ Vertex2 120GB (25nm).
Power supply FSP 150W fanless.
Storage controller Silicon Image SI3124 PCI.
HDD: 2xWD20EARS, 2xWD30EFRX.
Dual Intel Gigabit LAN (PHY+PCIE).
Working front USB/RS232.
Internal Hauppauge DVB-T stick to turn it into a terrestrial TV server.

Works well; it was derived from my first 1U HTPC build in 2009. It consumes 45W, so the FSP150-50TNF is more than enough, makes no noise, and there are no crappy fans to replace every now and then.


----------



## dushan24

Quote:


> Originally Posted by *pm40elys40*
> 
> Case is a Travla TE1160 which is not manufactured anymore, 1U modified with fully vented cover and external Scythe SY1225SL12L fan. 4x3.5" front hot swap bays, 2x2.5" internal, mITX, 2xFH/HL expansion bays, slim ODD, IDE CF reader (disabled).
> Intel Core 2 Duo T7250 running fanlessly on a MSI FUZZY GME965 mobo, all copper passive 1U heatsink.
> 2x2048MB Kingston KVR800D2N5/2G.
> Optiarc AD7590A slim DVD-RW.
> Boot drive OCZ Vertex2 120GB (25nm).
> Power supply FSP 150W fanless.
> Storage controller Silicon Image SI3124 PCI.
> HDD: 2xWD20EARS, 2xWD30EFRX.
> Dual Intel Gigabit LAN (PHY+PCIE).
> Working front USB/RS232.
> Internal Hauppauge DVB-T stick to turn it in a terrestrial TV server.
> 
> Works well, was derived from my first 1U HTPC build in 2009, consumes 45W so the FSP150-50TNF is more than enough, makes no noise and no crappy fans to replace every now and then.


What tasks does it perform, recording TV...

Anything else?


----------



## pm40elys40

5.0TB NAS and TV recording.


----------



## suicidegybe

Update: This is my home server rack. From top to bottom:
24-port patch panel
Netgear GS724T 200NAS v4
1U Supermicro MBD-X7SPE-HF-D525-O, 4GB RAM-800, mini 65W PSU, pfSense 2.0.2 running from 8GB flash; hosts OpenVPN for remote access to the local network
1U Compaq 4-port KVM
Eee PC 900HD running PIAF for 2-line phone service via Google Voice and a Linksys PAP2T-NA phone adapter
Dell Inspiron 530s case with Supermicro MBD-X8SIL-O, Intel i3-530, 16GB Kingston ECC unbuffered RAM-1066, 120GB OCZ Agility 3, 2x Ceton InfiniTV 4s, 4x WD Green 3TB in Storage Spaces parity for backups. OS: Windows Server 2008 R2 host with a Windows Storage Server 2012 VM. (I need Server 2008 R2 for the Cetons since they won't work with Server 2012, and I needed to save my Storage Spaces pool, hence the VM.)
4U Rosewill RVS4000, 2x Supermicro 5-bay hotswap, Supermicro MBD-X8SIA-F-O, Intel Xeon X3430, 16GB Kingston ECC buffered RAM-1066, OCZ Agility 3 120GB, 4x WD Red 3TB in RAID 5 on an Adaptec 3405, WD Black 640GB documents drive, Samsung 1TB client backups, 2x WD Green 2TB recorded TV & server backups; Windows Server 2012 host with a Server Essentials 2012 VM with 5 clients.








My KVM has a wall jack connection that I hook up to the spare input on my desk monitor; that way, if I need to log in locally, I just switch my monitor input and have local access.
I plan to move the backup server to my brother's house once he closes on it, and do my backups via VPN, for offsite data redundancy.


----------



## CloudX

That's cool man!


----------



## dushan24

Quote:


> Originally Posted by *CloudX*
> 
> That's cool man!


Very


----------



## Pawelr98

OS: Windows 7 Home Premium x64 (I use it as an HTPC)
Case: Old Tracer one (from my first home PC)
CPU: Athlon II X2 250 @ 1.25V Vcore, 1.0V CPU/NB
Motherboard: Crosshair V Formula (got it for $25)
Memory: 4GB of GoodRAM memory (1x 4GB)
PSU: OCZ CoreXtream 500W
OS HDD (If you have one): 40GB OEM PS3 HDD
Storage HDD(s): same as above (that 40GB HDD is the only one I have)
GPU: Asus HD5450 Silent 1024MB DDR3
Server Manufacturer: Me

At the moment it's working as a Minecraft server. It's a very silent server (this machine doesn't have any fans in the case). I use this old CRT monitor for watching movies sometimes, but for gaming I use TeamViewer for quick access. I built this system mainly for testing purposes, but I turned it into a server about 3 days ago.


----------



## famous1994

Cleaned up the inside of my file/media server, added 2 more HDDs (5 total now) and a Zalman CNPS9000 cooler.


----------



## EchoGecko

Wow, that Zalman CNPS9000 seems like overkill. And Maxtor drives?? Well, at least they last a long time


----------



## famous1994

Quote:


> Originally Posted by *EchoGecko*
> 
> wow, that Zalman CNPS9000 seems like over kill, and Maxtor drives?? well at lest they last a long time


It's better than the stock cooler I had, and overkill is good. Also, the 2 Maxtor drives have a little more than 5 hours of use on them. They'd been sitting in their boxes for the last 5 or 6 years; I figured it's better to have as much storage as possible, so I threw them in with my other 3 HDDs.


----------



## Master__Shake

Case: Norco RPC-450
PSU: Cooler Master RS 550
Motherboard: Gigabyte EP35-DS4
CPU: Intel Core2Quad Q6600
RAM: 4x1gb Ballistix
GPU: NONE
RAID Controller: LSI 8888ELP
OS Drive: OCZ Vertex 2 60gb
Storage Drives: 5x Toshiba 2TB in RAID 0 (I have a RAID 6 backup on a different server), ~10TB usable.
Operating System: Windows Home Server 2011


----------



## CloudX

Nice Master_Shake. Those Norco cases are pretty decent without breaking the bank too bad.


----------



## Master__Shake

Quote:


> Originally Posted by *CloudX*
> 
> Nice Master_Shake. Those Norco cases are pretty decent without breaking the bank too bad.


Thanks,
Kinda wish I'd splurged for the 470; it's got the mid plate with the fans on it.


----------



## tiro_uspsss

my humble little NAS box (one of 4 servers)

specs:

1x Intel Xeon 'Sossaman' SL8WT 2Ghz DC (heatsink is Dynatron i65G) http://ark.intel.com/products/27222/Intel-Xeon-Processor-LV-2_00-GHz-2M-Cache-667-MHz-FSB
4x 2GB DDR2-400 ECC+REG
Tyan Tiger i7520SD S5365 http://www.tyan.com/archive/products/html/tigeri7520sd.html
AMD HD4350 (PCIE x8 slot cut open)
Creative X-Fi XtremeMusic
Silicon Image 3132 (PCIE -> 2x SATAII)
Intel 330 120GB (OS)
2x WD Red 3TB ('puresync'd')
Sony IDE DVDRW
floppy... yes, a floppy
Hyper 560W PSU
Lian Li V1100 Plus
Windows Server 2008 Standard SP2 x86


----------



## Boyboyd

Quote:


> Originally Posted by *CloudX*
> 
> Nice Master_Shake. Those Norco cases are pretty decent without breaking the bank too bad.


Yeah that looks like the same as mine, but mine was re-badged to be some other brand. They are great. I sort of wish we could get Norco cases easily here. I want the 24 bay 4u unit that murlock uses.


----------



## gosties

X-Case in the UK do the same cases but use the X-Case 4224 name rather than Norco.


----------



## the_beast

Quote:


> Originally Posted by *gosties*
> 
> X- Case in the UK do the same cases but use the X-Case 4224 term rather than Norco.


they're also a lot more expensive - probably because of the shipping...


----------



## gosties

Unfortunately it's a lot more expensive for almost all computer equipment in the UK than in the United States.


----------



## shodan

PSU: *Corsair HX620W*
Motherboard: *Asus p5Q*
CPU: *Intel Core2Quad Q8400 2.66GHZ*
RAM: *4x2gb Kingston*
GPU: *Ati 4550*
RAID Controller: *NONE using ICH* (I should buy a hardware RAID controller but I have no problem with the ICH)
SCSI Controller: *Adaptec SCSI 39160*
Tape Drive: *IBM LTO2* (Should get an LTO4 but they are expensive!!)
OS Drive: *OCZ Agility 3 60gb*
Storage Drives: *5x Seagate 1TB RAID 5*
Operating System: *Windows Server 2008*


----------



## herkalurk

Quote:


> Originally Posted by *shodan*
> 
> PSU: *Corsair HX620W*
> Motherboard: *Asus P5Q*
> CPU: *Intel Core 2 Quad Q8400 2.66GHz*
> RAM: *4x2GB Kingston*
> GPU: *ATI 4550*
> RAID Controller: *none, using ICH* (I should buy a hardware RAID controller but I have no problem with the ICH)
> SCSI Controller: *Adaptec 39160*
> Tape Drive: *IBM LTO-2* (should get an LTO-4 but they are expensive!!)
> OS Drive: *OCZ Agility 3 60GB*
> Storage Drives: *5x Seagate 1TB, RAID5*
> Operating System: *Windows Server 2008*


YAY LTO2 backup


----------



## Boyboyd

Quote:


> Originally Posted by *gosties*
> 
> X-Case in the UK sell the same cases but call them the X-Case 4224 rather than Norco.


Damn, that's the kind of thing I'll be buying next for my home NAS, once I reach 10 drives.

Very expensive though.


----------



## Quasimojo

Finally got her up and running:

Server: Dell PowerEdge C1100 (big thanks again to tycoonbob for the guidance)
CPU: Dual Quad Core Xeon L5520 2.26GHz (16-threads of goodness)
RAM: 36GB DDR3 ECC
VM Drive: Plextor M5P Extreme 512GB SSD
Storage Drives: pending
Hypervisor: Xen Cloud Platform (XCP) 1.6
Rack: Tripp Lite SR4POST13 open rack
Switch: TP-LINK TL-SG1016 10/100/1000Mbps 16-Port Gigabit

I'm now busying myself with getting comfortable with the management tools (Dell Remote Management Controller and Citrix XenCenter) and setting up my various server VMs (Apache web server, PostgreSQL database server, whatever else strikes me). I see many, many hours holed up in my office ahead of me.









(I know, I know - I need to work on filling that rack up, lol)


----------



## tycoonbob

Quote:


> Originally Posted by *Quasimojo*
> 
> Finally got her up and running:
> 
> Server: Dell PowerEdge C1100 (big thanks again to tycoonbob for the guidance)
> CPU: Dual Quad Core Xeon L5520 2.26GHz (16-threads of goodness)
> RAM: 36GB DDR3 ECC
> VM Drive: Plextor M5P Extreme 512GB SSD
> Storage Drives: pending
> Hypervisor: Xen Cloud Platform (XCP) 1.6
> Rack: Tripp Lite SR4POST13 open rack
> Switch: TP-LINK TL-SG1016 10/100/1000Mbps 16-Port Gigabit
> 
> I'm now busying myself with getting comfortable with the management tools (Dell Remote Management Controller and Citrix XenCenter) and setting up my various server VM's (Apache web server, PostgreSQL database server, whatever else strikes me). I see many, many hours holed up in my office ahead of me.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (I know, I know - I need to work on filling that rack up, lol)


That looks great!

What you need to do next is build yourself a storage server so you can play around with iSCSI and backups. My C1100 has a 60GB SSD for the boot drive and will soon have a 500GB SSD for VMs, and that is all the drives it will have in there.


----------



## dushan24

Quote:


> Originally Posted by *tycoonbob*
> 
> That looks great!
> 
> What you need to do next is build you a storage server so you can play around with iSCSI, and backups. My C1100 has a 60GB SSD for the boot drive, and will soon have a 500GB SSD for VMs and that is all the drives it will have in there.


+1 to a SAN, lots of fun.


----------



## bryce

Some of these servers make me so jealous =/.

I'm so broke right now, and the rigs I have are total **** and wouldn't be able to run anything remotely useful at all.

An AMD Sempron rig and an AMD Duron rig.

It does make me want to make my Raspberry Pi a file server, if only there were a cheap way to hook about 7 hard drives up to it.


----------



## CaptainBlame

Quote:


> Originally Posted by *bryce*
> 
> Some of these servers make me so jealous =/.
> 
> I'm so broke right now and the rigs I have are total **** and wouldn't be able to run anything remotely useful at all.
> 
> AMD Sempron rig and a AMD Duron rig.
> 
> It does make me want to make my raspberry pi a file server, if only there were a cheap way to hook about 7 hard drives up to it.


Just out of curiosity, how much processing power do you think you need to run a file server?


----------



## the_beast

Quote:


> Originally Posted by *bryce*
> 
> Some of these servers make me so jealous =/.
> 
> I'm so broke right now and the rigs I have are total **** and wouldn't be able to run anything remotely useful at all.
> 
> AMD Sempron rig and a AMD Duron rig.
> 
> It does make me want to make my raspberry pi a file server, if only there were a cheap way to hook about 7 hard drives up to it.


Your Duron rig will be a similar speed to the P3 fileserver I ran for a long time. It could stream multiple HD streams at once and never skipped a beat.

The only reason I upgraded was that I needed to use drives over 1TB, and the PCI-X controllers I had (in PCI slots on a standard consumer mobo) couldn't take the bigger disks.


----------



## cdoublejj

I think I might upgrade from a full-size dual early/first-gen Pentium 4 Xeon box on a 400MHz bus (100MHz) to a Socket 939 dual-core mATX build, with a few cooling tweaks. I was thinking about getting an SSD, but am not sure whether to replace the OS drive with the SSD or the Minecraft/Tekkit drive. Then again, I'm not even sure that's smart, since 939 supposedly doesn't have AHCI?


----------



## NKrader

crunchy crunchy crunchy,

stats in sig


----------



## Pawelr98

^^^
You should get something like a Celeron G550. If the server runs 24/7, get an SSD for the OS. Then set up a ramdisk (for Tekkit) which saves its data to RAID 0 HDDs. You need a UPS when using a ramdisk, though.
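A minimal sketch of that ramdisk-plus-HDD idea on Linux, assuming a tmpfs mount and a periodic rsync back to the array; the paths, size, and interval below are made up for illustration:

```shell
# Mount a 2GB ramdisk for the Tekkit world (size is an example)
mount -t tmpfs -o size=2G tmpfs /srv/tekkit-ram

# Seed it from the last on-disk copy (path on the RAID 0 array is hypothetical)
rsync -a /srv/tekkit-hdd/ /srv/tekkit-ram/

# Cron entry: flush the ramdisk back to the HDDs every 5 minutes, so a
# power cut (hence the UPS) costs at most a few minutes of progress:
# */5 * * * * rsync -a --delete /srv/tekkit-ram/ /srv/tekkit-hdd/
```

You'd also want one final rsync in a shutdown script, since tmpfs contents vanish on reboot.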


----------



## NKrader

Quote:


> Originally Posted by *Pawelr98*
> 
> ^^^
> You should get something like a Celeron G550. If the server runs 24/7, get an SSD for the OS. Then set up a ramdisk (for Tekkit) which saves its data to RAID 0 HDDs. You need a UPS when using a ramdisk, though.


who what? use quotes


----------



## cdoublejj

Quote:


> Originally Posted by *Pawelr98*
> 
> ^^^
> You should get something like a Celeron G550. If the server runs 24/7, get an SSD for the OS. Then set up a ramdisk (for Tekkit) which saves its data to RAID 0 HDDs. You need a UPS when using a ramdisk, though.


Dang, that's way more than I can afford.


----------



## NKrader

Quote:


> Originally Posted by *cdoublejj*
> 
> Dang, that's way more than i can afford.


Not that that stuff costs much more than what you're talking about spending.

UPSes aren't cheap though; I still can't rationalize getting one...

True story: get what you can. You don't need the most efficient gear out there; if it runs, it works fine.


----------



## bloodfury

Ok, well.... here is mine

Cisco CDE-200
4400 hrs on the 12 drives

The server is used for video streaming / video conversion / file server / backup server.
12x 500GB WD RE2 drives in two RAID 5 arrays.

ATM I am reinstalling Windows; somehow it got corrupted...



I also have a Cisco Catalyst 3750 switch


----------



## cdoublejj

Quote:


> Originally Posted by *NKrader*
> 
> not that that stuff costs that much more than you are talking about spending,
> 
> although UPS aren't cheap, I still can't rationalize getting one..
> 
> true story, get what you can, you don't need the most efficient gear out there if it runs it works fine


No, I'd be getting the dual 939 for almost free. The PSU and maybe the CPU are all I would be paying for, so maybe 65 bucks TOPS. Now, the SSDs or Raptor drive, on the other hand, I would be paying for.


----------



## NKrader

Quote:


> Originally Posted by *cdoublejj*
> 
> No, I'd be getting the dual 939 for almost free. the psu and maybe the cpu is all i would be paying for so maybe 65 bucks TOPS. Now the SSDs or Raptor drive on the other had i would be paying for.



Like I said, whatever makes you happy.

I spend on gear I think is fun. I could get cheaper gear that does more, but I do what I can with a little of what I want.

I wanna see this dual 939.


----------



## cdoublejj

I'll try and post pics of both machines. The dual Xeon is a true workstation/server-bred machine.


----------



## DaveLT

OS: Fedora
Case: no idea ... it's just some tiny case with 2 Silverstone FN181 fans on top
CPU: 2x L5520
Memory: 8GB DDR3 ECC RAM
PSU: Seasonic S12II-520W
HDDs: 500GB boot disk
8x 1TB HDDs in RAID5 (external disk array)
Server manufacturer: no idea. The shop built it, not me.

I actually stole the picture from the shop, because mine is in a dark room and I can't be bothered to pull it out.
I use it for my torrent server, file server and a "router".








Actually, I routed 2 of its other LAN ports to 2 other switches.


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> OS : Fedora
> Case : I have no idea ... it's just some tiny case with 2 silverstone FN181 fans on top
> CPU : 2 X L5520
> Memory : 8GB DDR3 ECC RAM
> PSU : Seasonic S12II-520W
> HDDs : 500GB bootup disk
> 8 x 1TB HDDs in RAID5 (External disk array)
> Server manufacturer ... No idea. The shop did it not me
> 
> I actually stole the picture from the shop because it's in a dark room and i can't be bothered to pull it out
> I use it for my torrent server, file server and a "router"
> 
> 
> 
> 
> 
> 
> 
> Actually i routed 2 of it's other LAN ports to 2 other switches


Where did you get that from? It looks like it uses the motherboards from the Dell C6100, along with those awesome Xeon L5520s. That's a nice, small, POWERFUL server!


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> Where did you get that from? It looks like it uses the motherboards from the Dell C6100, along with those awesome Xeon 5520s. That's a nice small POWERFUL server!


It is indeed a C6100 motherboard, the same ones you have; I got the whole thing really cheap. I just like L5520s for some strange reason (I kid, I kid). It's only a smidgen larger than a Prodigy, I reckon, but I don't like small cases.

Its tiny thermal footprint is always a bonus: it's got some real crunching power while drawing much less power. How's your "insane" server*s* setup going? Actually, those passive heatsinks are no more; last month I swapped them out for 2U active heatsinks from CM (the horizontal ones with heatpipes sticking out the side) just to test, and I took it down last week to put the passive heatsinks back. Don't need the extra noise.

I forgot to mention there's another 2x 3TB HDDs in RAID1 inside the case. The external array just got upgraded with extra Hitachi 2TBs ... they took forever to come into stock.

I also have a picture from the seller of another unit I'm considering buying. It's also got L5520s, but obviously more HDD capacity without an extra array. God, old servers are fun!

Because of this addiction, I'll be picking up a Socket 940 dual-proc motherboard with 2 Opteron 880s for next to nothing, and also a Socket F board with an Opteron 2220.

What do you reckon? My tiny server isn't virtualized. I might virtualize the next server I get!


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> It is indeed a C6100 motherboard, the same ones you have; I got the whole thing really cheap. I just like L5520s for some strange reason (I kid, I kid). It's only a smidgen larger than a Prodigy, I reckon, but I don't like small cases.
> 
> Its tiny thermal footprint is always a bonus: it's got some real crunching power while drawing much less power. How's your "insane" server*s* setup going? Actually, those passive heatsinks are no more; last month I swapped them out for 2U active heatsinks from CM (the horizontal ones with heatpipes sticking out the side) just to test, and I took it down last week to put the passive heatsinks back. Don't need the extra noise.
> 
> I forgot to mention there's another 2x 3TB HDDs in RAID1 inside the case. The external array just got upgraded with extra Hitachi 2TBs ... they took forever to come into stock.
> 
> I also have a picture from the seller of another unit I'm considering buying. It's also got L5520s, but obviously more HDD capacity without an extra array. God, old servers are fun!
> 
> Because of this addiction, I'll be picking up a Socket 940 dual-proc motherboard with 2 Opteron 880s for next to nothing, and also a Socket F board with an Opteron 2220.
> 
> What do you reckon? My tiny server isn't virtualized. I might virtualize the next server I get!


L5520s are awesome. They may have a low clock speed (2.26GHz), but being quad-core with HT they are great for home virtualization. Used, those procs run like $50.

Please PM me the link! I want one of those tiny driveless models like what you have!


----------



## Boyboyd

vmware exsi (or esxi, i always forget) is free, and fun to learn. Do it.


----------



## tycoonbob

Quote:


> Originally Posted by *Boyboyd*
> 
> vmware exsi (or esxi, i always forget) is free, and fun to learn. Do it.


VMware vSphere Hypervisor (ESXi)

Or you could use Hyper-V Server 2012, or XCP (Xen Cloud Platform), which has an API compatible with the Citrix XenServer products, such as XenCenter.


----------



## DaveLT

Quote:


> Originally Posted by *Boyboyd*
> 
> vmware exsi (or esxi, i always forget) is free, and fun to learn. Do it.


Ah, thanks for the info. I've always used Oracle VirtualBox; is it a pile of crap?


----------



## Boyboyd

Quote:


> Originally Posted by *DaveLT*
> 
> Ah thanks for the info. I always used Oracle Virtualbox, is it a pile of crap?


It's not crap, it's just different. VirtualBox is more for workstations and desktops; proper hypervisors like ESXi and the others that tycoonbob mentioned are more for servers.


----------



## Pip Boy

Quote:


> Originally Posted by *Boyboyd*
> 
> It's not crap, it's just different. Virtualbox is more for workstations and desktops, proper hypervisors like esxi and the others that tycoonbob mentioned are more for servers.


^ This. But then again, running a proper filesystem like ZFS benefits from a dedicated install, even over Dom0.


----------



## Quasimojo

Quote:


> Originally Posted by *tycoonbob*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Boyboyd*
> 
> vmware exsi (or esxi, i always forget) is free, and fun to learn. Do it.
> 
> 
> 
> VMware vSphere Hypervisor (ESXi)
> 
> Or you could use Hyper-V Server 2012, or you could use XCP (XenCloud Platform) which as an API for any Citrix XenServer products, such as XenCenter.
Click to expand...

I've not done anything particularly complex with my virtualization yet, but I will say that Xen Cloud Platform has proven itself to be very intuitive, even for this n00b server admin. The Citrix XenCenter tool is very easy to use. I've got a bad habit of borking Linux servers with all my bungling about, and the snapshot functionality looks to be just what I need (knock on wood).









I'm already planning a separate storage server of some sort - probably iSCSI - which looks to be a piece of cake to set up in XCP.
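For what it's worth, attaching an iSCSI LUN as a storage repository in XCP/XenServer comes down to a couple of xe CLI calls. A rough sketch; the target IP, IQN, and SCSIid below are placeholders you'd swap for your own:

```shell
# Probe the target first to discover its IQNs and LUN SCSIids
# (the command deliberately fails with an XML listing you can read)
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.50

# Create a shared LVM-over-iSCSI SR on the pool (all values are examples)
xe sr-create name-label="iSCSI storage" shared=true type=lvmoiscsi \
  device-config:target=192.168.1.50 \
  device-config:targetIQN=iqn.2013-05.local.san:storage \
  device-config:SCSIid=36001405abcdef
```

Once created, the SR shows up in XenCenter like any other storage and VMs can be placed on it directly.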


----------



## DaveLT

Quote:


> Originally Posted by *Boyboyd*
> 
> It's not crap, it's just different. Virtualbox is more for workstations and desktops, proper hypervisors like esxi and the others that tycoonbob mentioned are more for servers.


Was just kidding with you. I never really delved deep into VirtualBox because my VM usage was very light; I never did much apart from light-usage webservers.


----------



## Boyboyd

Quote:


> Originally Posted by *Quasimojo*
> 
> I've not done anything particularly complex with my virtualization yet, but I will say that Xen Cloud Platform has proven itself to be very intuitive, even for this n00b server admin. The Citrix XenCenter tool is very easy to use. I've got a bad habit of borking Linux servers with all my bungling about, and the snapshot functionality looks to be just what I need (knock on wood).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm already planning a separate storage server of some sort - probably iSCSI - which looks to be a piece of cake to set up in XCP.


Honestly, I've never tried anything other than ESXi. I know that Xen is one of the industry standards, though.
Quote:


> Originally Posted by *DaveLT*
> 
> Was just kidding with you. I never really delved so deep into virtualbox because my usage was very light for the VMs, never really did much work apart from light usage webservers


Well, you got me lol.

I should have an update to my home NAS soon. I'm in the process of moving my data from my FlexRAID server to my new unRAID server. Only 5 hours remaining.


----------



## cdoublejj

Quote:


> Originally Posted by *DaveLT*
> 
> OS : Fedora
> Case : I have no idea ... it's just some tiny case with 2 silverstone FN181 fans on top
> CPU : 2 X L5520
> Memory : 8GB DDR3 ECC RAM
> PSU : Seasonic S12II-520W
> HDDs : 500GB bootup disk
> 8 x 1TB HDDs in RAID5 (External disk array)
> Server manufacturer ... No idea. The shop did it not me
> 
> I actually stole the picture from the shop because it's in a dark room and i can't be bothered to pull it out
> I use it for my torrent server, file server and a "router"
> 
> 
> 
> 
> 
> 
> 
> Actually i routed 2 of it's other LAN ports to 2 other switches


Duuuude!

As promised, I took some pics. I took them with a Sony Mavica that uses floppy disks.

Asus PC-DL
Dual Xeon (Pentium 4 era; will update when I get a chance to run CPU-Z) {400MHz bus}
3GB DDR1 (mixed speeds)
80GB? IDE OS drive
400GB IDE game server drive
Windows Server 2008 R1

More or less my first computer _that got me into PC gaming_:

Asus A8N-LA OEM 939 board
AMD Athlon 64 X2 (2GHz)
2GB DDR1 400
80GB IDE OS drive
Nvidia 6150 LE (onboard)

I installed a taller/bigger North Bridge heatsink, as well as a homemade velocity stack on the CPU fan made out of a dead 80mm PSU fan.
If I use this server, I'll take some of the RAM out of the dual Xeon to bump me up to 3.5 or 4GB.

The wire management isn't as bad as it looks. Some of the wires probably got loose in transport. The PC shop just moved, hence the mess.


----------



## DaveLT

Quote:


> Originally Posted by *cdoublejj*
> 
> Duuuude!


What was that for?

I think your Athlon could be a lot faster than the P4 server (if it was a 3GHz) even without the extra CPU, and anyway it uses like 1/3 the power. I guess they are probably Northwoods, judging by the size of those heatsinks. But if it's not 3GHz, then ... man, you figure it out.

What is the loose RAM sitting on the bottom of the case for?


----------



## cdoublejj

Quote:


> Originally Posted by *DaveLT*
> 
> What was that for
> 
> 
> 
> 
> 
> 
> 
> 
> I think your Athlon could be alot faster than the P4 (If it was a 3GHz) server IF the extra CPU isn't there but anyway it uses like 1/3 the power so
> 
> 
> 
> 
> 
> 
> 
> but i guess they are probably northwoods judging by the size of those heatsinks
> But if it's not 3GHz then ... Man. You figure it out
> 
> 
> 
> 
> 
> 
> 
> What is the loose RAM sitting on the bottom of the case for?


They're pre-Northwood; they're Prestonias. They're not 3GHz; that's a 400MHz combined bus speed. The "Duuuude!" was because that's a pretty cool-looking server.


----------



## spice003

Quote:


> Originally Posted by *DaveLT*
> 
> OS : Fedora
> Case : I have no idea ... it's just some tiny case with 2 silverstone FN181 fans on top
> CPU : 2 X L5520
> Memory : 8GB DDR3 ECC RAM
> PSU : Seasonic S12II-520W
> HDDs : 500GB bootup disk
> 8 x 1TB HDDs in RAID5 (External disk array)
> Server manufacturer ... No idea. The shop did it not me
> 
> I actually stole the picture from the shop because it's in a dark room and i can't be bothered to pull it out
> I use it for my torrent server, file server and a "router"
> 
> 
> 
> 
> 
> 
> 
> Actually i routed 2 of it's other LAN ports to 2 other switches


Can you post more pics of this setup and tell me a little about how you run it? I looked on eBay and the front of the board looks like it plugs into something. I think I might get something like this. Also, where can I buy that case? It looks nice.


----------



## DaveLT

Quote:


> Originally Posted by *cdoublejj*
> 
> They are pre Northwood, they are Prestonia. They are not 3 gigahertz, that is 400 mhz combined bus speed. The "duuuude!" was because that's a pretty cool looking server.


I see; I was just shooting in the dark there. If they were 3GHz each, it could possibly be a tiny bit faster than your Athlon, but no chance, man.

As for my server ... I just picked something that was cheap and full of power; I didn't want my desktop to have more horsepower than the server does. But yes, it looks pretty cool. It's plain as hell, but it looks better than the alien-ish Phantom cases.

Quote:


> Originally Posted by *spice003*
> 
> can you post more pics of this setup and tell me a little about how you run it, because i looked on ebay and the front of the board looks like it plugs in to something. i think i might get something like this. also where can i buy that case, it looks nice.


I seriously have no idea. I bought it from 8838.com. It's based off a C6100 blade and it plugs into a C6100 rack board.
At the prices they charge you are better off with a C1100, but if you want the sheer look of the case, I seriously have no idea; they claim it's a "custom job".
http://8838.com/show.php?tid=351


----------



## spice003

Thanks for the link! Those cases are sexy; wonder if they ship to the US.


----------



## cdoublejj

Quote:


> Originally Posted by *DaveLT*
> 
> I see; I was just shooting in the dark there. If they were 3GHz each, it could possibly be a tiny bit faster than your Athlon, but no chance, man.
> 
> As for my server ... I just picked something that was cheap and full of power; I didn't want my desktop to have more horsepower than the server does. But yes, it looks pretty cool. It's plain as hell, but it looks better than the alien-ish Phantom cases.
> 
> I seriously have no idea. I bought it from 8838.com. It's based off a C6100 blade and it plugs into a C6100 rack board.
> At the prices they charge you are better off with a C1100, but if you want the sheer look of the case, I seriously have no idea; they claim it's a "custom job".
> http://8838.com/show.php?tid=351


That's a whole lot of server. Do you have a decent workload for it?


----------



## NKrader

Quote:


> Originally Posted by *DaveLT*
> 
> OS : Fedora
> Case : I have no idea ... it's just some tiny case with 2 silverstone FN181 fans on top
> CPU : 2 X L5520
> Memory : 8GB DDR3 ECC RAM
> PSU : Seasonic S12II-520W
> HDDs : 500GB bootup disk
> 8 x 1TB HDDs in RAID5 (External disk array)
> Server manufacturer ... No idea. The shop did it not me
> 
> I actually stole the picture from the shop because it's in a dark room and i can't be bothered to pull it out
> I use it for my torrent server, file server and a "router"
> 
> 
> 
> 
> 
> 
> 
> Actually i routed 2 of it's other LAN ports to 2 other switches


I want that case.


----------



## Plan9

dup post


----------



## Plan9

Quote:


> Originally Posted by *Boyboyd*
> 
> vmware exsi (or esxi, i always forget) is free, and fun to learn. Do it.


I use Proxmox, which is also free and has been just as powerful and stable as VMware. Plus, Proxmox supports OS containers, which are more efficient than virtual machines. AFAIK, ESXi is just virtualisation.


----------



## DaveLT

Quote:


> Originally Posted by *cdoublejj*
> 
> That's a whole lot of server. do you have a decent work load for it?


Not ATM. I use it as a router (2 ports to 2 gigabit switches, though I'm still on WiFi for this rig because I haven't wired in LAN), file server, and torrent server, but that is pretty much it. If I ran a private server, maybe, but even a 1000+ player, badly coded private server (yes, I actually started coding Java on MapleStory) would not use half its potential.
I guess I really need to host many VMs ... Maybe if I get my bloody internet fixed (I can't connect from outside), I will host my blog over here! The bloody free webhost is a POS (000webhost, BTW): sometimes it doesn't load my webpages, sometimes it's slow as hell, but it's mostly problematic.
Also, I still haven't finished setting up a new blog for my new-found love of IT ... it's called Built Up To Crumble.
Right now I just post stuff about my computers from time to time on what I call "The 4th Pin". Yeah, I used to post electronics-related stuff and my issues about Singapore.


----------



## dushan24

Quote:


> Originally Posted by *tycoonbob*
> 
> L5520s are awesome. They may have a low clock speed (2.26GHz), but being quad-core with HT they are great for home virtualization. Used, those procs run like $50.
> 
> Please PM me the link! I want one of those tiny driveless models like what you have!


PM me too, please.


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> I use Proxmox, which is also free and has been just as powerful and stable as VMWare. Plus Proxmox supports OS containers which are more efficient than virtual machines. AFAIK, esxi is just virtualisation.


It is.


----------



## tycoonbob

Quote:


> Originally Posted by *dushan24*
> 
> It is.


Not to start a Proxmox conversation, but the way it uses containers: isn't that like using Citrix XenServer with Provisioning Services? Or is that different?
Quote:


> Originally Posted by *dushan24*
> 
> PM me too please


For those interested in that cool case with the Dell C6100 motherboard, here is the only information that we can find:
http://www.buychina.com/items/dell-c6100-diy-workstation-server-l5520-1-4g-graphics-rendering-desktop-computers-ywwpuorpmqp

Looks like it's a Chinese seller on Taobao, so you could get it through a broker such as BuyChina or similar, for around $300. They say the case costs about $50, but I have no idea how to just buy the case. A good friend of mine has a lot of CNC, drafting, and metalwork experience, so I sent him the pictures and we are gonna try to come up with a 3D drawing and see if there is anywhere we can get one made.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> Not to start a ProxMox conversation, but the way it uses containers, isn't that like using Citrix XenServer with Provisioning Services? Or is that different?


Containers are a different thing to virtualisation. The guest OSes share the host's kernel, so the guests are effectively running on bare metal, but they're sandboxed just as they would be with virtualisation (are you familiar with the UNIX command *chroot*? It's a bit like that, but on steroids).

Containers are a much underrated tool: they work out just as secure as virtualisation (in fact I've read of fewer documented attacks against containers) and have greater performance, while still retaining many of the killer features that draw people to virtualisation (e.g. snapshots). The downside is that your options for the guest OS are limited. With Proxmox, because it's Debian-based, you can only run Linux guests. But in most setups, the same-kernel limitation isn't much of an issue anyway (e.g. when building a web farm, you'd likely pick Linux / BSD / Solaris and roll that out across all of your web and database servers). And if you really wanted to run (for example) a Windows VM as well as some Linux guests, you can mix and match: in one of my setups I ran a Solaris host OS with a couple of Solaris containers plus a couple of virtual machines (Linux and ReactOS, IIRC) for the odd bit of software that couldn't run on Solaris.
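To make the chroot analogy concrete, here's roughly what a bare-bones "container" looks like with nothing but stock Linux tools (needs root; the rootfs path is hypothetical, and real container systems like Proxmox's add resource limits and proper isolation on top of this):

```shell
# Unpack a minimal guest userland somewhere (path is an example);
# populate it with debootstrap or an OS template tarball.
mkdir -p /srv/guests/debian-rootfs

# Enter it: give the guest its own mount, PID, and hostname namespaces,
# then chroot into its filesystem. The guest shares the host kernel -
# that's why it's fast, and why only same-kernel (Linux) guests work.
unshare --mount --pid --fork --uts \
  chroot /srv/guests/debian-rootfs /bin/sh
```

Everything the shell spawns inside sees its own process tree and filesystem root, yet there's no hypervisor anywhere in the path.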


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> Not to start a ProxMox conversation, but the way it uses containers, isn't that like using Citrix XenServer with Provisioning Services? Or is that different?
> For those interested in that cool case with the Dell C6100 motherboard, here is the only information that we can find:
> http://www.buychina.com/items/dell-c6100-diy-workstation-server-l5520-1-4g-graphics-rendering-desktop-computers-ywwpuorpmqp
> 
> Looks like it's a Chinese seller on Taobao, so you could get it through a broker such as BuyChina or similar, for around $300. They say the case costs about $50, but I have no idea how to just buy the case. A good friend of mine has a lot of CNC, drafting, and metalwork experience, so I sent him the pictures and we are gonna try to come up with a 3D drawing and see if there is anywhere we can get one made.


SWEET! Hook me up too. I'm gonna need more of these cases


----------



## Pawelr98

The server now has fanless CPU cooling.

A Contac 29 BP.
2 hours of 100% load: 52°C core temperature.
The heatsink is damn hot (you can't touch it).
Undervolted to 1.15 Vcore.


----------



## NKrader

Quote:


> Originally Posted by *tycoonbob*
> 
> Not to start a ProxMox conversation, but the way it uses containers, isn't that like using Citrix XenServer with Provisioning Services? Or is that different?
> For those interested in that cool case with the Dell C6100 motherboard, here is the only information that we can find:
> http://www.buychina.com/items/dell-c6100-diy-workstation-server-l5520-1-4g-graphics-rendering-desktop-computers-ywwpuorpmqp
> 
> Looks like it's a Chinese seller on Taobao, so you could get it through a broker such as BuyChina or similar, for around $300. They say the case costs about $50, but I have no idea how to just buy the case. A good friend of mine has a lot of CNC, drafting, and metalwork experience, so I sent him the pictures and we are gonna try to come up with a 3D drawing and see if there is anywhere we can get one made.


Is that site legit?

What do you get for $300?

I want one; it would make a killer HTPC.


----------



## DaveLT

US$240. Only 4GB, but an L5520 + C6100 single-node motherboard, and that's all ... no chassis either.
A 3-HDD rack and a power adapter cable so you don't have to hunt for one ...
http://www.buychina.com/items/the-dell-c6100-streaking-computer-dual-xeon-l5520-8-core-16-thread-16g-mac-upsusqskipp I bought my L5520 server AND my sig rig's L5520 from them.
The smaller chassis, available for $30, is an open chassis (duh) and holds 140mm fans.


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> 240 USD Only 4GB but L5520 + C6100 single node motherboard and that's all ... no chassis either
> 3 HDD rack and a power adapter cable so you don't have to hunt for one ...
> http://www.buychina.com/items/the-dell-c6100-streaking-computer-dual-xeon-l5520-8-core-16-thread-16g-mac-upsusqskipp I bought my L5520 server AND my sig rig L5520 from them
> The smaller chassis which is available for 30$ is a open chassis ( DUH ) and holds 140mm fans


How did you go about getting the chassis, though? That is what I think most of us are interested in!


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> How did you go about getting the chassis though, that is what I think most of us are interested in!


I think I'll copy what I told you just now:

Quote:

> It doesn't by default; I had to tell the agent I wanted the case and add 300 yen, or 190 yen for the smaller open chassis that accommodates 140mm fans


----------



## tycoonbob

Yeah, thanks for clearing that up. I think it's an amazing case and I would love to see someone in the US do something like that.


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> Yeah, thanks for clearing that up. I think it's an amazing case and I would love to see someone in the US do something like that.


Absolutely. Why hasn't anyone thought of it? ... Oh wait, if someone did, it wouldn't be cheap :\
If CaseLabs thought of such a shape, it would probably fit 8 nodes ...


----------



## NKrader

Quote:


> Originally Posted by *tycoonbob*
> 
> How did you go about getting the chassis though, that is what I think most of us are interested in!


I'm interested in the case + the hardware that fits.


----------



## Darylrese

Here's mine (well, sort of - I manage it at work) haha





24TB SAN, 3 ESXi hosts


----------



## MikhailV

Here is mine, it serves a dual purpose as development workstation and as a test bed.



Specs:
Lian-Li PC-A75
2x Xeon E5-2640s
64GB of Kingston ECC Ram
2x Intel 520 60GB SSDs
4x 500GB WD Velociraptors
1x PNY GTX 660

Doesn't break 52C at full load. Idles at 31/32C.


----------



## fishy0689

Bit of an update to my old post. Got a 24U Skeletek a few weeks ago; so far it's looking ghetto fabulous.









have the freenas rig on the bottom, i5 desktop on the left, c2q boincing w/gpu on the right, raspberry pi in the homemade organizer below the switch, homemade rack ears for the dlink switch, and a pfsense router in the 2u case up top.

http://smg.photobucket.com/user/fishy0689/media/20130504_223228.jpg.html

Ignore the mess, I intend to clean wiring and everything up once I get the two desktop rigs in rackmount cases. I also need to find a decent ups one of these days, but shipping to canada suuuuucks.


----------



## tiro_uspsss

my... 9th







Lian li case... a PC-A77F. It is my disc burner server!













specs:

Intel Xeon W3520 with TRUE Black
Supermicro X8STE
6x 2GB Hynix ECC
Nvidia 7300LE
10x Samsung DVDRW
2x Silicon Image 3114
Intel 330 120GB @ OS @ Windows Server 2012
Intel 520 60GB @ iso read/write dump
Gigabyte Odin Pro 1200W (yeah I know overkill, but thats what I had lying around!)


----------



## Jakeey802

Quote:


> Originally Posted by *tiro_uspsss*
> 
> my... 9th
> 
> 
> 
> 
> 
> 
> 
> Lian li case... a PC-A77F. It is my disc burner server!


LOL HOLY ****


----------



## Plan9

Quote:


> Originally Posted by *Darylrese*
> 
> Here's mine (well sort of, i manage it at work) haha
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 24TB SAN,3 ESXi hosts


If we're posting server racks then I could fill this thread up just with the hardware from one of our buildings








Quote:


> Originally Posted by *tiro_uspsss*
> 
> my... 9th
> 
> 
> 
> 
> 
> 
> 
> Lian li case... a PC-A77F. It is my disc burner server!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> Intel Xeon W3520 with TRUE Black
> Supermicro X8STE
> 6x 2GB Hynix ECC
> Nvidia 7300LE
> 10x Samsung DVDRW
> 2x Silicon Image 3114
> Intel 330 120GB @ OS @ Windows Server 2012
> Intel 520 60GB @ iso read/write dump
> Gigabyte Odin Pro 1200W (yeah I know overkill, but thats what I had lying around!)


Why on earth would you need such a machine? If you're writing that many discs at a time then you're better off paying for small batches from pressing plants. Assuming you own the copyright for the content you're burning, it would be just as cheap getting them pressed and you'd have a better quality finish too.


----------



## maarten12100

Quote:


> Originally Posted by *tiro_uspsss*
> 
> my... 9th
> 
> 
> 
> 
> 
> 
> 
> Lian li case... a PC-A77F. It is my disc burner server!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> Intel Xeon W3520 with TRUE Black
> Supermicro X8STE
> 6x 2GB Hynix ECC
> Nvidia 7300LE
> 10x Samsung DVDRW
> 2x Silicon Image 3114
> Intel 330 120GB @ OS @ Windows Server 2012
> Intel 520 60GB @ iso read/write dump
> Gigabyte Odin Pro 1200W (yeah I know overkill, but thats what I had lying around!)


and not a single cable was managed that day


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> If we're posting server racks then I could fill this thread up just with the hardware from one of our buildings


Please do, this thread is stagnating a bit...


----------



## cookiesowns

Quote:


> Originally Posted by *DaveLT*
> 
> Not ATM, i use it as a router (2x ports to 2 gigabit switches but i'm still on WiFi for this rig because i haven't wired in LAN) , file server, torrent server but that is pretty much it. If i run a private server yes probably but even a 1000+ player badly coded (Yes i actually started coding java in MapleS ...) private server would not even bring full potential or half
> I guess i really need to host many VMs ... Maybe if i get my bloody internet fixed (I can't connect from outside) then i will host my blog over here! Bloody free webhost is a POS ... (000webhost BTW) Sometimes doesn't load my webpages sometimes slow as hell but mostly problematic
> And also i still haven't finished setting up a new blog for my new-found love for IT ... it's called Built Up To Crumble
> Right now i'm just time to time posting some stuff about my computers on what i call "The 4th Pin" Yeah, i used to post electronics related stuff and my issues about singapore


I have a CPanel WebServer laying around in Dallas not being used. If you want a free webhost package let me know and I'll set one up for you.

Will post pictures of my server setup shortly.


----------



## DaveLT

Quote:


> Originally Posted by *tiro_uspsss*
> 
> my... 9th
> 
> 
> 
> 
> 
> 
> 
> Lian li case... a PC-A77F. It is my disc burner server!


OMG. But i did see such a setup at a local store though








How many devices are there attached to the motherboard?


----------



## spice003

Quote:


> Originally Posted by *tiro_uspsss*
> 
> my... 9th
> 
> 
> 
> 
> 
> 
> 
> Lian li case... a PC-A77F. It is my disc burner server!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> Intel Xeon W3520 with TRUE Black
> Supermicro X8STE
> 6x 2GB Hynix ECC
> Nvidia 7300LE
> 10x Samsung DVDRW
> 2x Silicon Image 3114
> Intel 330 120GB @ OS @ Windows Server 2012
> Intel 520 60GB @ iso read/write dump
> Gigabyte Odin Pro 1200W (yeah I know overkill, but thats what I had lying around!)


i want that case







but i cant afford it.


----------



## tiro_uspsss

Quote:


> Originally Posted by *maarten12100*
> 
> and not a single cable was managed that day


oh hush!







the SATA cables are short-ish & the case is big!








Quote:


> Originally Posted by *DaveLT*
> 
> How many devices are there attached to the motherboard?


not sure what you mean... there are 10 DVD drives & 2 SSDs attached like so:

4 DVD @ silicon image 3114 #1
4 DVD @ silicon image 3114 #2
2 DVD @ ICH10R
2 SSD @ ICH10R

only other extra is the video card.

answered?








Quote:


> Originally Posted by *spice003*


I did buy it second hand, but it is my most expensive case ever!


----------



## DaveLT

Quote:


> Originally Posted by *tiro_uspsss*
> 
> not sure what you mean... there are 10 DVD drives & 2 SSDs attached as so:


I mean the ICH







But why only 4 devices attached?


----------



## EchoGecko

Piracy is not the only reason to have so many drives. I recall a few years ago at a place I worked, everything was backed up each day to 6-7 DVD-R discs and mailed to a sister site (we had prepaid shipping envelopes and everything). To speed things up, a custom case with 6 DVD-RW drives was set up, and it took 45 minutes from start to finish to burn everything to disc and drop it off in the mail pickup. The real pain came when we had to restore those files: since a full backup was only made once a month and incremental backups were made daily, I had to restore about 25 days of incremental backups. That was 125+ discs I had to put in, wait 10 minutes for each to copy, and repeat. On the plus side, it worked, and I did get some nice overtime.
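The restore pain above is easy to put numbers on: one monthly full plus a stack of daily incrementals adds up fast. A rough sketch of the arithmetic — the 7-disc full and ~5 incremental discs per day are my guesses fitted to the "125+ discs" figure, not exact numbers from the post:

```python
def restore_effort(full_discs, incr_discs_per_day, days_since_full, minutes_per_disc=10):
    """Count discs (and swap-and-copy minutes) to restore a full backup
    plus every daily incremental taken since it."""
    discs = full_discs + incr_discs_per_day * days_since_full
    return discs, discs * minutes_per_disc

# assumed scenario: 7-disc full, ~5 incremental discs/day, 25 days since the full
discs, minutes = restore_effort(7, 5, 25)
print(discs)    # 132 discs to feed in
print(minutes)  # 1320 minutes (22 hours) of copying
```

It also shows why restore windows, not backup windows, are what usually kill optical-media schemes.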


----------



## ramicio

...or they could be a person needing to duplicate a lot of their own material en masse, such as someone in a band without a label to pay a factory to do the task.

Doesn't 99.99999% of the world own a computer to commit some act of "piracy?" No one from the middle class could afford to buy all of the content they watch and listen to. Only your wealthy people who build dedicated theater rooms can afford such luxuries. The stuff is expensive and it's because the wealthy will always pay the price to do something the easy way and the most legal way. They have something to lose. Your ordinary middle class person has nothing to be sued for.


----------



## Plan9

Quote:


> Originally Posted by *EchoGecko*
> 
> Piracy is not the only reason to have so many drives, I recall a few years ago at a place i worked out every thing was backed up that day to 6-7 DVD-R disk and mailed to a sister site


I'm aware of such reasons, but this is a home server.

Quote:


> Originally Posted by *ramicio*
> 
> ...or they could be a person needing to duplicate en masse a lot of their own material, such a person in a band without a label to have a factory do such a task.


I'd already accounted for that when I made my comment about CD pressing being cheaper and better quality than building a 10 bay burning server.
Quote:


> Originally Posted by *ramicio*
> 
> Doesn't 99.99999% of the world own a computer to commit some act of "piracy?" No one from the middle class could afford to buy all of the content they watch and listen to.


If he's burning pirated content on that sort of scale, then it's to sell on. I have no qualms with people who download an album to "try before they buy", but I _do_ have a problem with people who like to profit from piracy.
Quote:


> Originally Posted by *ramicio*
> 
> Only your wealthy people who build dedicated theater rooms can afford such luxuries. The stuff is expensive and it's because the wealthy will always pay the price to do something the easy way and the most legal way. They have something to lose. Your ordinary middle class person has nothing to be sued for.


That's as ridiculous as the other extreme of the arguments people make when they claim the movie industry is going out of business because of piracy and that it funds terrorism.
Why can't people make sane arguments when discussing stuff online instead of exaggerating to make a point?


----------



## DaveLT

Man ... 10 bay servers are cheap to build, actually. $10-20 per drive and you could just use cheap hardware ... For the SATA cards, cheap ones will do
The case is a problem, but i have seen cheap 10 bay cases for like what, $100? lol


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Man ... 10 bay servers are cheap to build. Actually. 10-20bucks per drive and you could just use cheap hardware ... For the sata cards cheap ones will do
> The case is a problem but i have seen cheap 10 bay cases for like what 100 bucks? lol


Yes, and CD pressing is _cheaper_.

It's also significantly more professional than having a green or blue tinted disc that was clearly burnt in the back room of someone's house. Plus if you're selling band albums then not only would pressed CDs look better, but they would last longer too.

[edit]

though I will say, there's still a chance that he's only doing a very small run (say 50 discs at a time), which would make paying for the glass master cost-ineffective. Personally I couldn't see the point of a 10 bay server for such a small batch, but maybe he does.


----------



## Zeus

Here's my media / gaming capture server....

Full spec in sig (NAS).

The LSI RAID controller cost more than the rest of the system (excluding the 4 x 3TB HDDs)


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> Yes, and CD pressing is _cheaper_.
> 
> It's also significantly more professional than having a green or blue tinted disc that was clearly burnt in the back room of someones house. Plus if you're selling band albums then not only would pressed CDs look better, but they would last longer too.
> 
> [edit]
> 
> though I will say, there's still a chance that he's only doing a very small run (say 50 discs a time) which would render cutting the glass press cutting cost ineffective. Personally I couldn't see the point of a 10 bay server for such a small batch, but maybe he does.


I totally don't see the point at all ...








I just added 8x3TB HDDs into my SAN in RAID6. The performance is spectacular (LSI controller here, onboard RAID lulz)
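Capacity figures for arrays like this are quick to sanity-check: striped parity gives up one drive's worth of space for RAID5 and two for RAID6. A minimal sketch of that rule of thumb (the function name is mine, not from any vendor tool, and this ignores filesystem overhead and TB-vs-TiB marketing):

```python
def usable_tb(drives, drive_tb, parity_drives):
    """Usable capacity of a striped-parity array: total minus the parity drives."""
    if drives <= parity_drives:
        raise ValueError("need more drives than parity drives")
    return (drives - parity_drives) * drive_tb

# 8 x 3TB in RAID6 (two drives' worth of parity), as in the post above
print(usable_tb(8, 3, 2))  # 18 TB usable
# the same drives in RAID5 (one parity drive)
print(usable_tb(8, 3, 1))  # 21 TB usable
```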


----------



## NKrader

Quote:


> Originally Posted by *Plan9*
> 
> If we're posting server racks then I could fill this thread up just with the hardware from one of our buildings
> 
> 
> 
> 
> 
> 
> 
> 
> Why on earth would you need such a machine? If you're writing that many discs at a time then you're better off paying for small batches from pressing plants. Assuming you own the copyright for the content you're burning, it would be just as cheap getting them pressed and you'd have a better quality finish too.


those drives are like $20 each, not that much to build that rig.. most people here build servers for fun and rationalize due to a small necessity


----------



## tiro_uspsss

Quote:


> Originally Posted by *Plan9*
> 
> Going by your lack of reply to my other post, I'm guessing this build is for piracy after all.


Incorrect. It's because I thought your post was both rather presumptuous & thus daft.








You don't know if the material is copyrighted or not: it's NOT
You don't know how often or how many DVDs I write: frequency - not too often, and when I do it's anywhere between 20 to 100 discs & usually isn't a single ISO, it's usually 5-15 different ISOs.
I also highly doubt you know the cost of CD/DVD pressing, so it isn't the viable alternative you seem to think it is
Quote:


> Originally Posted by *Plan9*
> 
> Yes, and CD pressing is _cheaper_.
> 
> It's also significantly more professional than having a green or blue tinted disc that was clearly burnt in the back room of someones house. Plus if you're selling band albums then not only would pressed CDs look better, but they would last longer too.
> 
> [edit]
> 
> though I will say, there's still a chance that he's only doing a very small run (say 50 discs a time) which would render cutting the glass press cutting cost ineffective. Personally I couldn't see the point of a 10 bay server for such a small batch, but maybe he does.


I'm not a business. The people who get (note the word get, not buy) CDs/DVDs off me know I'm not a business. They are a-ok with it being burns, & not presses.

edit: as for people not seeing the point..... clearly they lack imagination.... let me help: imagine 15 different ISOs that are part of a video series. Someone wants 6 sets. That's 90 DVDs. Burning an ISO takes ~10mins. Most folks have 1 DVD burner. That's 900mins spent burning alone. Get the picture? All I do is dump the 15 ISOs (single layer) on the 520 60GB SSD, & write from there. I'll spend about 90mins - not 900 - doing the 90 discs. All clear?
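The arithmetic above is just parallel scheduling; a tiny sketch of it, assuming the ~10 minutes per single-layer disc stated in the post:

```python
import math

def batch_burn_minutes(discs, drives, minutes_per_disc=10):
    """Wall-clock time to burn a batch when each drive burns one disc at a time.

    The batch takes ceil(discs / drives) sequential rounds of burning,
    so the last round may leave some drives idle."""
    return math.ceil(discs / drives) * minutes_per_disc

print(batch_burn_minutes(90, 1))   # one burner: 900 minutes
print(batch_burn_minutes(90, 10))  # ten burners: 90 minutes
```

Staging the ISOs on an SSD first matters too: ten concurrent 10x-ish DVD burns is a sustained sequential read load that a single spinning source disk could struggle to feed without buffer underruns.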


----------



## jibesh

DVDs and DVD drives still exist!??

My time travel machine works...I must be in the past!!


----------



## Plan9

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Incorrect. Its because I thought your post was both rather presumptuous & thus daft.


The original post wasn't presumptuous. I was asking what you used it for. My presumptions were only after you ignored me








Quote:


> Originally Posted by *tiro_uspsss*
> 
> You don't know if the material is copyright or not: its NOT


What is it for then? (and to be fair, part of the remit for posting servers on here is providing a short description of the usage of the box - which everyone else has been good enough to comply with)
Quote:


> Originally Posted by *tiro_uspsss*
> 
> I also highly doubt you know the cost of CD/DVD pressing, so it wouldn't be a viable alternative as you seem to think it is


Now who's being presumptuous? I used to be a producer and would frequently be sending out demos.
Quote:


> Originally Posted by *tiro_uspsss*
> 
> I'm not a business. The people who get (note the word get, not buy) CDs/DVDs of me know I'm not a business. They are a-ok with it being burns, & not presses.


I know you're not a business, I made that point at the start of my post.
Quote:


> Originally Posted by *tiro_uspsss*
> 
> edit: as for people not seeing the point..... clearly they lack imagination.... let me help: imagine 15 different ISOs that are part of a video series. Someone wants 6 sets. Thats 90 DVDs.. burning an ISO takes ~10mins. Most folks have 1 DVD burner. Thats 900mins spend burning alone. Get the picture? All I do is dump the 15 ISOs (single layer) on the 520 60GB SSD, & write from there. I'll spend about 90mins - not 900, 90mins doing the 90 discs. All clear?


I see. So you're a non-professional that manages to film 15 DVDs' worth of video in your spare time -per project!!!- to a high enough quality that someone would want 6 sets of them, and regularly enough that you needed a dedicated server? For the sake of not wanting to come across as a troll, I'll take your word for it. But can you at least see why that might sound a little implausible? Hell, there are people who produce less content than that and class themselves as a business, yet you're just doing this for free in your spare time and still manage to generate more business than many self-made businesses (and I know this from having mates who are hobbyist film makers; though sadly I've lost touch with most of them in recent years)

But as I said, I'm not here to troll; things just didn't add up, so I only meant to ask you about them purely out of curiosity. You've still failed to really explain the point of the box (aside from vaguely discussing batch processing of some videos), so it's pretty obvious that you don't really want to go into more details (which in itself is pretty odd as well, because most people who work in the industry - even just for fun - tend to want to shout about their work). So in the interest of keeping things civil, I'll take your word for it. But hopefully you can now see my perspective as well and understand why I had/have* doubts









( *I'm not really sure the correct tense that should be applied here)


----------



## tiro_uspsss

Quote:


> Originally Posted by *Plan9*
> 
> The original post wasn't presumptuous. I was asking what you used it for. My presumptions were only after you ignored me


Your first post wasn't presumptuous?? You used the word assume itself. I am in no way obliged to respond to you.
Quote:


> Originally Posted by *Plan9*
> 
> What is it for then?


I thought you would be able to presume that, you seem to have a knack for it!








Quote:


> Originally Posted by *Plan9*
> 
> Now who's being presumptuous? I used to be a producer and would frequently be sending out demos.


So stop posting what you are or used to be & post figures - I'm actually interested, both the cost figure & if you are correct.
Quote:


> Originally Posted by *Plan9*
> 
> I know you're not a business, I made that point at the start of my post.
> I see. So you're a non-professional that manages to film 15 DVD's worth of video in your spare time -per project!!!- to a high enough quality that someone would want 6 sets of them and regularly enough that you needed a dedicated server? For the sake of not wanting to come across as a troll, I'll take your word for it. But can you at least see why that might sound a little implausible? Hell, there's people who produce less content that class themselves as a business and yet you're just doing this for free and in your spare time and still manage to generate more business than many self-made businesses.


Again with the guess work..so much..dribble








let's break it down:
correct! I am not a professional








incorrect! I have not (yet) recorded 15 DVDs' worth of material.
incorrect! It is not the quality of the video/audio that makes the material so desirable.
You'll have to take my word for it, won't you? Seeing as I haven't told you what the material is.
The _content_ of the material has such worth to it (in the opinion of myself & others that want it) that I & others would rather distribute it for free.


----------



## Plan9

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Your first post wasn't presumptuous?? You used the word assume itself. I am in no way obliged to respond to you.


I said _"assuming you own the copyright"_. The assumption I made in that post was favorable to you!!! Would you rather I had said "_I'm not sure if you're pirating or not...._" and just out right called you a criminal? Because that's how you're now starting to come off with how vague and defensive you're being.
Quote:


> Originally Posted by *tiro_uspsss*
> 
> So stop posting what you are or used to be & post figures - I'm actually interested, both the cost figure & if you are correct.


I nearly did in my last post, but then spotted that you're Australian so I couldn't recommend my usual place (in Britain)








Quote:


> Originally Posted by *tiro_uspsss*
> 
> Again with the guess work..so much..dribble


I was only reiterating what you said (or at least implied). It's hardly my fault if you decide to discuss the purpose of your server in the form of riddles.
Quote:


> Originally Posted by *tiro_uspsss*
> 
> lets break it down:
> correct! I am not a professional
> 
> 
> 
> 
> 
> 
> 
> 
> incorrect! I have not (yet) recorded 15 DVDs worth of material.
> incorrect! It is not the quality of the video/audio that makes the material so desirable.
> You'll have to take my word for it, won't you? Seeing as I haven't told you what the material is.
> The _content_ of the material has such worth to it (the opinion of myself & others that want it) that I & others rather distribute it for free.


You know what, I don't even care any more. You're either trolling me or a pirate. In either case I've lost all respect for you. Sorry mate, but what's the point of posting a server if you're then just going to be cryptic about it when people show an interest?


----------



## maarten12100

Cut the crap, I mean who cares, this is a server topic.
Besides, having a lot of DVD drives doesn't make you a pirate.


----------



## dushan24

Quote:


> Originally Posted by *maarten12100*
> 
> Cut the crap I mean who cares this is a server topic.
> Besides having a lot of dvd drive doesn't make you a pirate.


This. Seriously, who cares what he uses it for? It's obvious anyway - burning discs. The semantics are irrelevant, as the purpose of the server has been established.


----------



## Plan9

Quote:


> Originally Posted by *maarten12100*
> 
> Cut the crap I mean who cares this is a server topic.


I was only ever asking him questions about his server. So cut the crap yourself








Quote:


> Originally Posted by *maarten12100*
> 
> Besides having a lot of dvd drive doesn't make you a pirate.


Have you actually read this argument from the beginning? Because I actually never said he was. Quite the opposite in fact; I assumed he wasn't and asked why he didn't get his stuff pressed instead. He was the one who got defensive. And let's be honest, the way he's since conducted himself has been pretty weird (again, not an accusation, just a statement)


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> This, seriously, who cares what he uses it for.


*One of the mandates for posting in this sodding thread is a description of what it's used for.* Check page one of this thread if you don't believe me. So don't give me that crap that nobody cares, because that's the whole sodding point of this thread.

Furthermore, everyone else managed to follow those rules yet the other guy hadn't which is why I asked him (and then he got all defensive like a guilty child). I mean what's the bloody point in posting a server in a "show your server" thread without a description of what the server is used for?
Quote:


> Originally Posted by *dushan24*
> 
> Also, he didn't initially say what the server is for because it's obvious, burning discs


I might as well just post a server and say it's for _"serving stuff"_ then, seeing as this thread has now degenerated into playschool "show and tell" except with even less informative content


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> One of the mandates for posting in this sodding thread is a description of what it's used for. Everyone else managed to follow those rules yet the other guy hadn't which is why I asked him (and then he got all defensive like a guilty child). I mean what's the bloody point in posting a server in a "show your server" thread without a description of what the server is used for?
> I might as well just post a server and say it's for _"serving stuff"_ then, seeming as this thread has now degenerated into playschool "show and tell" except with even less informative content


I actually edited that post just after posting it and before your comment as I realised it didn't come out right. I agree that people need to say what the server is for...


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> I actually edited that post just after posting it and before your comment as I realised it didn't come out right. I agree that people need to say what the server is for...


Indeed







Quote:


> Originally Posted by *dushan24*
> 
> the semantics are irrelevant as the purpose of the server has been established.


Not really. "burning stuff" is about as descriptive as saying a VM server "virtualises stuff". People on here would equally then go on to ask "_what VM's are you thinking of running?_".

Also, let's get one thing clear, I never asked him to elaborate on what he's burning; not initially. I only asked why he didn't press discs instead, given it appears he's burning them on a large scale. He ignored that question, answered all the non-server related questions and started to behave erratically in this thread, so it's hardly surprising that I jumped to the assumption that he had something to hide (and clearly he has. I mean just look at the way he's behaving: post a box, then refuse to answer any questions about what it's used for. That's not weird in the slightest.)

But the crux of the matter is, if he didn't want us to talk about the purpose of the server in any detail then he really shouldn't have posted the server to begin with (it's the same reason why I've not posted my work infrastructure. I'd love to show it off, but I couldn't really talk about it in detail so there's little point in posting any pictures of the racks). So with all this arguing and making me out to be the bad guy here, I was only asking questions that most people might have asked in this thread on any other day (or at least most people who work with servers every day and have an interest in servers).

Anyway, I think this topic has been done to death now. You've said your piece, I've said mine, and tiro_uspsss has made it clear that he would rather not talk about the server that he's just publicised. So matter closed?


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> Indeed
> 
> 
> 
> 
> 
> 
> 
> 
> Not really. "burning stuff" is about as descriptive as saying a VM server "virtualises stuff". People on here would equally then go on to ask "_what VM's are you thinking of running?_".


I disagree, there is a limited domain of burning tasks. Simply saying "I'm burning stuff" is sufficient for me (and then people can extrapolate as they wish)

Though for virtualising, there is a huge amount more that you can do so certainly more justification would be needed in that case.

FWIW, I too really like this thread and enjoy reading the descriptions in all their detail
Quote:


> Originally Posted by *Plan9*
> 
> Also, let's get one thing clear, I never asked him to elaborate what he's burning, I only asked why he didn't press discs instead given it appears he's burning them on a large scale. He ignored that question, answered all the non-server related questions and started to behave erratic in this thread (as well as others pointing out the legitimacy of such a box in situations that the burn-server guy wasn't applicable). So if you guys are fed up with the direction this thread has taken, you're all just a guilty as the burn-server guy and myself for turning it into a piracy debate.


Yes I did notice what your initial question was, he seemed to have taken it in an accusatory way.

Though perhaps your comments did have an accusatory connotation to them. However, personally I would not have taken them as such.


----------



## dushan24

But anyway, I have no desire to argue with anyone about anything, more servers!


----------



## ramicio

Quote:


> Originally Posted by *Plan9*
> 
> I'd already accounted for that when I made my comment about CD pressing being cheaper and better quality than building a 10 bay burning server.


Do you know how manufacturing works? It would never make fiscal sense for a small band to get a CD pressed versus building some tower with 10 drives.


----------



## tycoonbob

I don't care if it's a burnt CD or pressed (or whatever), as long as it's not just MP3 tracks on there. If I can extract FLACs from the CD, I will never touch the physical copy again.

Take that conversation elsewhere please, and stick to showing off servers here!


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Do you know how manufacturing works? It would never make fiscal sense for a small band to get a CD pressed versus building some tower with 10 drives.


I've already said that I've worked in the industry and used to send out demos. But honestly, who gives a toss when internet randoms like you can chip in with their worthless opinions









So how about we get back on topic now?


----------



## ramicio

Yep, it's cheaper to get a factory to setup and press a few hundred discs versus just burning them


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Yep, it's cheaper to get a factory to setup and press a few hundred discs versus just burning them


Actually it's about the same. But you don't have the cost of pc parts and you have a significantly better quality finish.

That's why I used to get larger batches (in terms of the business I was generating) pressed and save the burning for prototyping (so to speak).

But keep trolling mate. I mean, real life experience is clearly worthless on the internet


----------



## ramicio

Except that hardware is a physical asset. A fee to set up tooling is money lost.


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> Except that hardware is a physical asset.


It's still a cost - regardless of how you're now trying to argue it. It's not as if we're talking about using an existing desktop to burn CDs - this is a dedicated build. So in that regard it's no different to paying for the glass cuts.
Quote:


> Originally Posted by *ramicio*
> 
> A fee to set up tooling is money lost.


The places I've booked with absorbed that cost into the cost of running the batch. But I can't speak for every plant.

Anyway, all of this is moot because the guy in question isn't burning CDs and isn't bulk burning the same ISO (which was why I asked the original question - curiosity about what his workload was like that made this more advantageous than pressing). But as usual, you're more interested in rehashing the same tired (and often misinformed, if we're completely honest; I've seen your rants in the Linux forums) argument until everyone gets utterly fed up and either gives up out of boredom or the thread gets locked. So please let's not drag this thread the same way.


----------



## ramicio

So add me to your block list so you don't have to see my posts.


----------



## Plan9

In the interest of getting this thread back on topic, here's one of my personal boxes. (It's not one I host myself though, so I can't post any pictures):

Code:

root@primus:~# uname -a
Linux primus.h4ck.in 2.6.32-11-pve #1 SMP Wed Apr 11 07:17:05 CEST 2012 x86_64 GNU/Linux

It runs Debian and a number of OpenVZ containers on top of that (basically virtualisation but at the OS level). Currently I only have 3 VMs running on it (though each with their own dedicated WAN IP):

A live web server
A development web server
And an IRC server


----------



## Xyro TR1

Whatever you use your server for is your own business. Arguing that point serves no one.

Now then, post away!


----------



## Norse

Haven't got it yet due to the expense, but I'm slowly building this beast over the next month. It's not technically a server, though hardware-wise it is one.

OS: Win 7 Pro
Case: Silverstone TJ09
CPU: 2x AMD Opteron 6272 (2.1GHz, 16 cores) with Noctua NH-U12DO A3 heatsinks
Motherboard: Asus KGPE-D16
Memory: 8x4GB 1333MHz DDR3, totalling 32GB
PSU: Corsair Professional AX860 Modular
OS HDD (If you have one): 2x3TB RAID 1
Gaming Drive: Partition of ^
Graphics: EVGA 2GB GTX 680
Server Manufacturer (Ex: Dell, HP, You?): meeeeeeeeeeeeee with a little help.....okay £2k help from my bank


----------



## wtomlinson

Quote:


> Originally Posted by *Norse*
> 
> Not got it yet due to the expense but slowly building this beast over the next month and its not......technically a server though it is hardware wise one
> 
> OS: Win 7 Pro
> Case: Silverstone TJ09
> CPU: 2x AMD Opteron 6272 (2.1ghz 16 Cores) with Noctua NH-U12DO A3 Heatsinks
> Motherboard: Asus KGPE-D16
> Memory: 8x4GB 1333mhz DDR3 totalling 32GB
> PSU: Corsair Professional AX860 Modular
> OS HDD (If you have one): 2x3TB Raid 1
> Gaming Drive: Partition of ^
> Graphics EVGA 2GB GTX 680
> Server Manufacturer (Ex: Dell, HP, You?): meeeeeeeeeeeeee with a little help.....okay £2k help from my bank


Just used for gaming (going off the part where you said "gaming drive")?


----------



## Zeus

Quote:


> Originally Posted by *Zeus*
> 
> Here's my media / gaming capture server....
> 
> Full spec in sig (NAS).
> 
> The LSi Raid controller cost more than the rest of the system (excluding the 4 x 3TB HDD's)


Just an update: I'll be adding another 4 x 3TB drives to the RAID5 this week, so I'll have approx. 21TB of storage







I might also play with the RAID config to see if I can get a faster write speed.
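For anyone following along, the capacity math behind arrays like that is simple enough to sketch. Here's a rough calculator (the function name is made up for illustration; it ignores filesystem overhead and TB-vs-TiB marketing differences). It also hints at why RAID5 writes tend to be slower than RAID10: RAID5 pays a parity read-modify-write penalty on every small write, which is what tuning the config is usually fighting against.

```python
def usable_tb(drives, size_tb, level, hot_spares=0):
    """Rough usable capacity for common RAID levels.

    Ignores filesystem overhead and the TB/TiB distinction;
    hot spares sit idle and contribute no capacity.
    """
    n = drives - hot_spares
    if level == 0:
        return n * size_tb            # striping: no redundancy
    if level == 1:
        return n * size_tb / 2        # mirroring: half the raw space
    if level == 5:
        return (n - 1) * size_tb      # one drive's worth of parity
    if level == 6:
        return (n - 2) * size_tb      # two drives' worth of parity
    if level == 10:
        return n * size_tb / 2        # striped mirrors: half the raw space
    raise ValueError(f"unsupported RAID level: {level}")

# Zeus's planned array: 8 x 3TB in RAID5 -> (8 - 1) * 3 = 21TB
print(usable_tb(8, 3, 5))  # 21
```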


----------



## Norse

Quote:


> Originally Posted by *wtomlinson*
> 
> Just used for gaming (going off the part where you said "gaming drive")?


75% gaming, but also then rendering and BOINC


----------



## Jakeey802

Got an IBM 346 and 2 IBM Desktops set-up as web hosts for my own personal company








Also have a remote location managed by the Co-Owner in SA









IBM Desktop Specs:
i5-670
8GB Ram
Dual NIC's
5TB HDD in RAID
Dual 280W PSU
Integrated Graphics Adapter
One runs Win7 the other runs Ubuntu Server

Can't remember the IBM 346 specs, but it's noisy as hell xD
Also awaiting a free rack from my school <3

Will get some pics up soon


----------



## DaveLT

Your servers aren't complete without these


----------



## Jakeey802

Quote:


> Originally Posted by *DaveLT*
> 
> Your servers aren't complete without these
> 
> 
> 
> 
> 
> 
> 
> 
> -snip-


Those little fans...........
I want to burn every one of them, they're so loud xD


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> Your servers aren't complete without these


Pfft, you think that FFB1212VHE can push some air? Check out the AFB1212GHE-CF00! 240CFM, 62dBA. I have 3 of them waiting to go in my storage box once I get my server closet built. The HDDs will be too cold to touch.









All kidding aside, those are some serious fans.


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> Pfft, you think that FFB1212VHE can push some air? Check out the AFB1212GHE-CF00! 240CFM, 62dBa. I have 3 of them waiting to go in my storage box once I get my server closet built. HDDs will be too cold to touch.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All kidding aside, those are some serious fans.










PFC1212DE will blow your brains clean off! I got them for $2-3 each ... what a steal


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> 
> 
> 
> 
> 
> 
> 
> PFC1212DE will blow your brains clean off! I got them for 2-3$ each ... what a steal


$2-3/each? Where did you get them!


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> $2-3/each? Where did you get them!


Just some shop in China; the Nidec fans I received happened to be absolutely unused. What a win







Pm me if you want the link


----------



## stl drifter

Hey guys, is this a good price for this server? Will I be able to run VMs on here as well as use it as a storage server?

For $349.99
Case is 24 Bay Supermicro SC846 with caddies
Motherboard: H8DME-2
Procs: Qty 1 AMD Opteron Quad Core 2346HE @ 1.8GHz
Ram: 8GB 4x 2GB, 12 empty slots
IPMI Card: Kira 100
Qty 3 SAT2-MV8 Raid cards
Qty 2 Ablecom PWS-902-IR Power supplies
No hard drives included
**We are doing a temporary sale on these servers, until the end of May we are dropping the price to $319.99**


----------



## DaveLT

Hmm, the case alone is worth all the money, lol. VM + storage? I wouldn't count on that. Get another CPU and you're good to go


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Hmm, the case alone is worth all the money, lol. VM + Storage? I wouldn't count on that. Get another cpu and you are good to go


Erm yeah. It's more powerful than my VM and storage server and that performs perfectly well.


----------



## stl drifter

I'm a noob to servers and networks. I'm trying to set up a little home lab to experiment and learn with. I was just wondering if the price-to-performance was there. I can't afford to go to school right now, so I figured this is the next best thing.


----------



## Plan9

Quote:


> Originally Posted by *stl drifter*
> 
> Im a noob to servers and networks. Im trying to set up alittle home lab to experiment and learn with. Was just wondering if price per performance was there. I cant afford to go to school right now . So , I figured this is the next best thing.


Buy yourself some Raspberry Pis. They're only $25, will run Linux perfectly fine, and are a great introduction to servers and networks for just pocket money


----------



## jibesh

Thought I would post my home lab since I have finished it for now (I hope).



Top to bottom

*Atom Server - CentOS running OpenSM*

*Voltaire 9024D 24-Port Infiniband Switch*

*HP ProCurve 1800-24G 24-Port GbE Switch*

*Linux VM Host:*

*Case:* SUPERMICRO CSE-825TQ-563LPB 2U
*Motherboard:* SUPERMICRO MBD-X9SCM-F (LGA1155)
*CPU:* Intel Xeon E3-1230
*Memory:*Kingston 32GB (4 x 8GB)
*PSU:* SuperMicro 560W
*NIC(s):*
1 x Intel 82579LM (onboard)
1 x Intel 82574L (onboard)
1 x Intel EXPI9402PT PRO/1000 PT (2 x 1GbE)
1 x Mellanox MHGH28-XTC Infiniband HCA
*RAID Controller:* 3Ware 9650SE-8LPML
*Hard Drives:*
1 x OCZ Vertex 3 120GB SSD
1 x WD 150GB 3.5" VelociRaptor
8 x WD 1TB BLUE (RAID 10)
*Fans:* 3 x Supermicro 5000rpm 80mm
*OS:* VMWare ESXi 5.1
*Purpose:* Various Test VMs

*Windows VM Host 1 :*

*Case:* SuperMicro 833T-650B 3U
*Motherboard:* SUPERMICRO MBD-X9DR7-LN4F (2 x LGA2011)
*CPU:* 2 x Intel E5-2620
*Memory:*Samsung 64GB (16 x 4GB)
*PSU:* Supermicro 650W
*NIC(s):*
Intel I350-AM4 (4 x 1GbE)
1 x Mellanox MHGH28-XTC Infiniband HCA
*RAID Controller:*
3Ware 9690SA-4I
3Ware 9650SE-8LPML
*Hard Drives:*
4 x Seagate 146GB 15K 3.5" SAS (RAID 10)
6 x Hitachi 500GB - 0F10381 (RAID 10)
*Fans:* 6 x Supermicro 5000rpm 80mm
*OS:* Windows Server 2012 Datacenter
*Purpose:* Secondary Domain Controller (W2K12) / Various Test VMs

*Windows VM Host 2:*

*Case:* Norco RPC-450TH 4U
*Motherboard:* P8P67 EVO (LGA1155)
*CPU:* Intel Xeon E3-1240
*Memory:*G.Skill 32GB (4 x 8GB)
*PSU:* SeaSonic 650W
*NIC(s):*
1 x Intel 82579
1 x Intel EXPI9301CT
1 x Realtek 8110SC
1 x Mellanox MHGH28-XTC Infiniband HCA
*RAID Controller:*
3Ware 9690SA-8I
3Ware 9650SE-8LPML
*Hard Drives:*
2 x WD 150GB 3.5" VelociRaptor (RAID 1)
8 x Hitachi 2TB - 0F10311 (RAID 6)
8 x WD 150GB 2.5" VelociRaptor (RAID 10)
*Fans:* 4 x Scythe 80mm (SP0825FDB12H)
*Misc:* 2 x ICY DOCK MB994SP-4S
*OS:* Windows Server 2012 Datacenter
*Purpose:* NAS (W2K12) / Various Test VMs

*Storage Server:*

*Case:* Norco 4216 4U
*Motherboard:* ASUS P6X58-E WS (LGA1366)
*CPU:* Intel i7 950
*Memory:* G.Skill 24GB (6 x 4GB)
*PSU:* SeaSonic 650W
*NIC(s):*
2 x Intel 82574L (onboard)
2 x Mellanox MHGH28-XTC Infiniband HCAs
*RAID Controller:* 3Ware 9650SE-24M8
*Hard Drives:*
2 x 72GB WD 3.5" VelociRaptor (RAID 1)
16 x Hitachi 1TB - 0F10383 (RAID 10)
*Fans:*
3 x Scythe 120mm (SP1225FDB12H)
2 x Scythe 80mm (SP0825FDB12H)
*OS:* Windows Server 2012 Std
*Other:* Starwind iSCSI SAN
*Purpose:* iSCSI SAN storage for VM Servers



*Netgear MoCA to Ethernet Bridge (MCAB1001)*

*Network Appliance VM Host:*

*Case:* SuperMicro SYS-5017C-LF
*Motherboard:* SuperMicro X9SCL-F
*CPU:* Intel i3 2100
*Memory:*Kingston 8GB (4 x 2GB)
*PSU:* SuperMicro 200W
*NIC(s):*
1 x Intel 82579LM (onboard)
1 x Intel 82574L (onboard)
1 x Intel EXPI9402PT PRO/1000 PT (2 x 1GbE)
*RAID Controller:* none
*Hard Drives:*
1 x OCZ Vertex 2 120GB
1 x WD 150GB 3.5" VelociRaptor
*Fans:* 2 x Supermicro 8500rpm 40mm
*OS:* VMWare ESXi 5.1
*Purpose:* Router (pfSense) / Primary Domain Controller (W2K12) / RDP JumpBox (W2K12)

*HP ProCurve V1910-24G 24-Port GbE Switch*



*Windows VM Host 3:*

*Case:* COOLER MASTER HAF 932
*Motherboard:* GIGABYTE GA-990FXA-UD3 (AM3+)
*CPU:* AMD FX-8350
*Memory:*Crucial 32GB (4 x 8GB)
*PSU:* Antec 800W
*NIC(s):* 1 x Intel EXPI9402PT PRO/1000 PT (2 x 1GbE)
*RAID Controller:*
3Ware 9650SE-4LPML
3Ware 9650SE-8LPML
*Hard Drives:*
2 x OCZ Vertex 2 60GB (RAID 1)
4 x WD 600GB 3.5" VelociRaptor (RAID 10)
8 x Seagate 3TB - ST3000DM001 (RAID 6)
*Fans:*
2 x Corsair 120mm
2 x Noctua 120mm
3 x Cooler Master 230mm
*Misc:* 2 x 4 - 3.5" bays in 5.25"
*OS:* Windows Server 2012 Datacenter
*Purpose:* Backup Server (W2K12) / Various Test VMs


----------



## CloudX




Quote:


> Originally Posted by *jibesh*
> 
> Thought I would post my home lab since I have finished it for now (I hope).
> 
> *Snip*






Wow, I've worked for large businesses and pretty high end clients who don't have server closets like that! Good lord!

Excellent setup man! That must be fun to have for personal use


----------



## xAdam

Wow! That's crazy! What do you do to need that much power?!


----------



## tycoonbob

Quote:


> Originally Posted by *xAdam*
> 
> Wow! thats crazy! What do you do to need that much power?!


Why not?









I have around the same amount of power, just fewer drives.


----------



## dushan24

Quote:


> Originally Posted by *jibesh*
> 
> Thought I would post my home lab since I have finished it for now (I hope).
> 
> *Snip*


It will never be finished 

But seriously man, amazing setup indeed.

More details please, and why so many DC's?


----------



## jibesh

Quote:


> Originally Posted by *CloudX*
> 
> 
> Wow, I've worked for large businesses and pretty high end clients who don't have server closets like that! Good lord!
> 
> Excellent setup man! That must be fun to have for personal use


Lol well working for a large enterprise, you don't really get to play around with equipment until something breaks, which is rare where I work.

Quote:


> Originally Posted by *xAdam*
> 
> Wow! thats crazy! What do you do to need that much power?!


This is a lab. Gotta have multiple systems to test out new and large configurations and learn.








Quote:


> Originally Posted by *dushan24*
> 
> It will never be finished
> 
> But seriously man, amazing setup indeed.
> 
> More details please, and why so many DC's?


Thanks. Haha, yeah, probably not. It'll probably be an ongoing project; it's already been one for years.

Should only be 2 DCs. One was a copy and paste error.


----------



## Zeus

Updated my NAS server, added 3 more 3TB drives to the RAID5 and 1 hot spare







So the spec is now: -

AMD A4-5300 APU
16GB RAM
2 x Crucial M4 64GB SSD in RAID1
2 x Seagate 500GB 2.5" in RAID0
7 x Seagate 3TB in RAID5 + 1 hot spare


----------



## Plan9

Quote:


> Originally Posted by *stl drifter*
> 
> hey guys is this a good price for this server. Will I be able to run vm on here as well as use it as a storage server.
> 
> For $349.99
> Case is 24 Bay Supermicro SC846 with caddies
> Motherboard: H8DME-2
> Procs: Qty 1 AMD Opteron Quad Core 2346HE @ 1.8GHz
> Ram: 8GB 4x 2GB, 12 empty slots
> IPMI Card: Kira 100
> Qty 3 SAT2-MV8 Raid cards
> Qty 2 Ablecom PWS-902-IR Power supplies
> No hard drives included
> **We are doing a temporary sale on these servers, until the end of May we are dropping the price to $319.99**


where did you find that sale?


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> where did you find that sale?


Yea I wouldn't mind getting one of those either


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> where did you find that sale?


I think it may have something to do with this:
http://www.avsforum.com/t/1412640/are-you-looking-for-a-less-expensive-norco-4220-4224-alternative

He has a few posts on the last page of that thread. I remember reading about these several months ago. Great deal if there are any left.


----------



## stl drifter

Yep, that's exactly where I found it.


----------



## ramicio

It looks like in that thread all they did was downgrade an enterprise-grade product with consumer-grade gear.


----------



## Plan9

Quote:


> Originally Posted by *ramicio*
> 
> It looks like in that thread all they did was downgrade an enterprise-grade product with consumer-grade gear.


It was the case and RAID controllers that I was mostly interested in. Plus my current home server is just a desktop, so I don't have an issue with using consumer-grade hardware on a home server (personally I think enterprise-grade gear is completely unnecessary for home servers - but to each their own)


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> It was the case and RAID controllers that i was mostly interested in. Plus I current home server is just desktop, so I don't have an issue with using consumer grade hardware on a home server (personally I think enterprise grade gear is completely unnecessary for home servers - but each to their own)


Yep, indeed. To each their own. My server has absolutely overkill fans, but I like overkill


----------



## NameUnknown

I'd show you pictures of a glorious room here at work and at my old site....BUT....I'm pretty sure I would get fired. If I go back to my old site, though, I'll ask, because I'm sure you would love to see the inside of a P&G data center, no? So would I









But really, the chances of me being allowed in as a lowly tech are pretty much slim to none.


----------



## DaveLT

Quote:


> Originally Posted by *NameUnknown*
> 
> I'd show you pictures of a glorious room here at work and at my old site....BUT....I'm pretty sure I would get fired. If I go back to my old site though I'll ask though because I'm sure you would love to see the inside of a P&G Data Center no? So would I
> 
> 
> 
> 
> 
> 
> 
> 
> 
> But really, the chances of my being allowed in as a lowly tech are pretty much slim to none.


I have been to a datacenter before


----------



## NameUnknown

Quote:


> Originally Posted by *DaveLT*
> 
> I have been to a datacenter before


Hehe, that's good because it would take all but a miracle for me to get into it lol.


----------



## CloudX

Quote:


> Originally Posted by *NameUnknown*
> 
> Hehe, that's good because it would take all but a miracle for me to get into it lol.


Our colo houses BofA, Xbox Live, and other gov agencies and stuff. I have to almost give blood in order to get to our humble cage!


----------



## pvt.joker

I'm in and out of a datacenter several times a week (was just there this morning babysitting while HP came to fix some stuff). It's fun walking through and seeing all the different hardware configs: how some people/companies care and take the time to manage cables, and how some others are just complete rat's nests..


----------



## NameUnknown

Quote:


> Originally Posted by *pvt.joker*
> 
> I'm in and out of a datacenter several times a week (was just there this morning babysitting while HP came to fix some stuff.) It's fun walking through and seeing all the different hardware configs and how some people/companies care and take the time to manage cables, and how some others are just complete rats nests..


Our local server room here where I am is in beautiful condition. All the cables are run and zip-tied together, then they go under the floor the same way, as it's a raised floor. But then, ironically, you go into the network closets on various floors and you are bound to find wires that were unplugged but left tangled in the other in-use wires. Scraps of wire, insulation, braided wire, and even food wrappers and pop bottles can be found in many of them as well.


----------



## EpicAMDGamer

Please excuse the horrible iPod picture; I'll take better pictures when I'm done with this.



So in the basement a single wi-fi router connects directly to the modem. The first wi-fi router (on the top left) is some sort of Motorola with DD-WRT, and it is just being used as a bridge from that first router's wi-fi to ethernet. That ethernet then goes into a wired Linksys router (top middle), where I filter out all of the first router's DHCP and whatnot and pretty much separate them from myself. That router is to be replaced with a much larger rackmount wired router in the future. Also at the top is my personal access point, a Netgear WGR614v10 running 150Mbps wireless-N.

First thing in the rack (it's facing backwards) is a nice rackmountable power distribution unit.
Second, you see my 3Com Baseline 24-port 10/100 hub (I'm getting a 24-port switch soon, and surprisingly the performance isn't too bad with a hub).
Then sitting on my server is a keyboard and mouse (the mouse never gets used) and an old IBM LCD monitor.
The server is a Tangent SFF computer case with an ECS motherboard from an eMachines PC, an AMD Athlon 64 X2 4200+, and 4GB of DDR2 RAM, running Proxmox VE 2.0 as my main server (and only server at this moment; however, I will make a single backup server with an identical case later).

And it's all sitting in a Quest 20U network rack I got new on eBay a year ago for $80.

Feel free to ask questions and also remember I will post more details about the server itself as its finished.


----------



## wholeeo

Seeing this thread reminded me I had one..









Speaking of which, what's the longest you guys have gone without logging onto your servers? I'm going on about 3 months now. I do connect to the network drives, but that's about it.


----------



## EpicAMDGamer

Quote:


> Originally Posted by *wholeeo*
> 
> Seeing this thread reminded me I had one..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Speaking of which, whats the longest you guys have went without logging onto your servers? I'm going on about 3 months now. I do connect to the network drives but that's about it.


I haven't had a server up for a while, but the last time I had everything running smoothly I didn't log in for probably a month or so, and I think when I did it was because of a power outage and having to manually start a service.


----------



## DaveLT

Quote:


> Originally Posted by *wholeeo*
> 
> Seeing this thread reminded me I had one..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Speaking of which, whats the longest you guys have went without logging onto your servers? I'm going on about 3 months now. I do connect to the network drives but that's about it.


I've had to log in every single day ...


----------



## tiro_uspsss

Quote:


> Originally Posted by *wholeeo*
> 
> Seeing this thread reminded me I had one..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Speaking of which, whats the longest you guys have went without logging onto your servers? I'm going on about 3 months now. I do connect to the network drives but that's about it.


While I don't have to log on, I do so just for the sake of it & to check out upload stats


----------



## tycoonbob

Quote:


> Originally Posted by *wholeeo*
> 
> Seeing this thread reminded me I had one..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Speaking of which, whats the longest you guys have went without logging onto your servers? I'm going on about 3 months now. I do connect to the network drives but that's about it.


Using something like OpenNMS to manage/monitor everything, I only log into one of my servers if I need to fix/upgrade/add something. Otherwise they run and do their task as requested.


----------



## Plan9

I'm constantly logged into my servers. But then I tend to use my laptop as a thin client these days and write my software on the servers (better for backups, and it also means I can access all my project files from anywhere in the world any time I want them).
Quote:


> Originally Posted by *tycoonbob*
> 
> Using something like OpenNMS to manage/monitor everything, I only log into one of my servers if I need to fix/upgrade/add something. Otherwise they run and do their task as requested.


With the number of VMs you have, you must still end up logging in regularly, if just to run Windows updates.


----------



## Onions

Quote:


> Originally Posted by *ramicio*
> 
> ...or they could be a person needing to duplicate en masse a lot of their own material, such a person in a band without a label to have a factory do such a task.
> 
> Doesn't 99.99999% of the world own a computer to commit some act of "piracy?" No one from the middle class could afford to buy all of the content they watch and listen to. Only your wealthy people who build dedicated theater rooms can afford such luxuries. The stuff is expensive and it's because the wealthy will always pay the price to do something the easy way and the most legal way. They have something to lose. Your ordinary middle class person has nothing to be sued for.


Normally I just browse for some insights and pics, but this caught my eye. And sir, I'm sorry to disappoint, but I do in fact pay for all of my music and movies now. I'm clearly in the middle class, as I'm a student living at home with a part-time job, and I can afford it... just saying


----------



## Quasimojo

Quote:


> Originally Posted by *Onions*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ramicio*
> 
> Doesn't 99.99999% of the world own a computer to commit some act of "piracy?" No one from the middle class could afford to buy all of the content they watch and listen to. Only your wealthy people who build dedicated theater rooms can afford such luxuries. The stuff is expensive and it's because the wealthy will always pay the price to do something the easy way and the most legal way. They have something to lose. Your ordinary middle class person has nothing to be sued for.
> 
> 
> 
> normally i just browse for some insights and pics but this caught my eye. And sir im sorry to disapoint but i do infact pay for all of my music and movies now. im clearly in the middle class as im a student living at home with a part time job and i can afford it... just saying

They like to think that everyone does it, so that makes it OK for them to do it themselves.


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> you must still end up logging in regularly if just to run Windows updates.


Windows Updates can be automated and pushed in bulk to multiple hosts (such that they don't require any user intervention).

http://en.wikipedia.org/wiki/Windows_Server_Update_Services
http://en.wikipedia.org/wiki/Windows_Deployment_Services


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> Windows Updates can be automated and pushed in bulk to multiple hosts (such that they don't require any user intervention).
> 
> http://en.wikipedia.org/wiki/Windows_Server_Update_Services
> http://en.wikipedia.org/wiki/Windows_Deployment_Services


Yes, I'm aware of that. But it's very bad practice to automate updates, particularly on Windows, where it often causes downtime (reboots). Even that aside, updates sometimes cause issues or require some level of manual intervention; that last part is particularly true for Linux. In fact I don't even trust automated updates on Linux, and many distros have a much more streamlined update process than Windows does.

But that's just my experience from managing mission critical systems. I guess it's less of an issue on home servers.


----------



## DaveLT

Automatic updates on certain Linux distros sometimes tend to break compatibility with certain programs.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> with the amount of VMs you have, you must still end up logging in regularly if just to run Windows updates.


You trying to take a stab at Microsoft's update cycle? I actually use Microsoft System Center 2012 Configuration Manager to manage all updates in my environment, as well as antimalware definitions with Microsoft System Center Endpoint Protection. Updates are installed nightly, and my boxes do an automated rolling reboot on a weekly schedule, if needed. All automated and hands off.

I do have a Server 2012 VDI instance built out now, with a Windows 7 and Windows 8 image. I seem to log into those from my laptop more than I use my actual OS on my laptop. Just easier, and everything is stored safely on my storage box. I also have RD Gateway configured, so I can RDP into any box on my network over HTTPS. On top of that, I also have OpenVPN connectivity on my EdgeRouter Lite, if I need to use VPN.
Quote:


> Originally Posted by *dushan24*
> 
> Windows Updates can be automated and pushed in bulk to multiple hosts (such that they don't require any user intervention).
> 
> http://en.wikipedia.org/wiki/Windows_Server_Update_Services
> http://en.wikipedia.org/wiki/Windows_Deployment_Services


While WSUS is the underlying technology used for automating updates, SCCM makes it even better. WDS, on the other hand, has nothing to do with updates; it's for OSD (via PXE or bootable media). WDS is something else SCCM utilizes to make it work even better.
Quote:


> Originally Posted by *Plan9*
> 
> Yes, I'm aware of that. But it's very bad practice to automate updates. Particularly on Windows where it often causes downtime (reboots). But even that aside, updates do sometimes cause issues or require some level of manual intervention; that last part is particularly true for Linux. In fact I don't even trust automated updates on Linux and many distros have a much more streamlined update process than Windows does.
> 
> But that's just my experience from managing mission critical systems. I guess it's less of an issue on home servers.


Automatic updates aren't necessarily bad practice. Not updating at all is a worse practice, of course. I work extensively with SCCM as a consultant, so this is a topic I assist a lot of enterprise companies with. Typically, updates are automatically deployed to a pilot group on a schedule, and a production deployment is also set on a schedule. If something breaks in the pilot during the testing period, that update is removed from the production deployment before it goes out, or the production deployment is put on hold. You can configure GPOs to prevent automatic reboots. Regardless, this applies 99% to workstations and not servers. While SCCM is often utilized to patch Windows servers in a production setting, it is not an automatic thing.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> You trying to take a stab at Microsoft's update cycle?


No, I'm saying you have a lot of VMs and thus a lot of systems that need to be kept up to date. The same comment would apply if you were running Linux, Solaris, FreeBSD or whatever.
Quote:


> Originally Posted by *tycoonbob*
> 
> Automatic updates isn't necessarily a bad practice. Not updating at all is a worse practice, of course.


Well durr. We could be here all night if we're just going to list off things that are worse than automatic updates.







However, just because you're comparing automatic updates to worse things (which I clearly wasn't advocating), it doesn't make them good practice.
Quote:


> Originally Posted by *tycoonbob*
> 
> I work extensively with SCCM as a consulting, so this is a topic I assist a lot of enterprise companies with. Typically, updates are automatically deployed to a pilot group on a schedule, and a production deployment is also set on a schedule. If something breaks in the pilot during the testing period, that update is removed from the production deployment before it goes out, or the production deployment is put on hold. You can configure GPOs to prevent automatic reboots. Regardless, this applies 99% only to workstations and not servers. While SCCM is often utilized to patch Windows servers in a production setting, it is not an automatic thing.


That's good. Sounds like a pretty neat solution in fact


----------



## ledzeppie

Somewhat surprised more people here don't use their servers as a web cache with squid. I just set mine up and damn it makes some sites so much faster to load (I'm looking at you NHL.com and NCIX.)


----------



## tiro_uspsss

Quote:


> Originally Posted by *ledzeppie*
> 
> Somewhat surprised more people here don't use their servers as a web cache with squid. I just set mine up and damn it makes some sites so much faster to load (I'm looking at you NHL.com and NCIX.)


I've wanted to do this for aaaaaaaages









What OS do you need to run it on? I thought it was a *nix app. Is there any CLI involved in setting it up? I always assumed there was, and since I know zero CLI and can't be bothered to learn, I've never used or installed Squid.


----------



## driftingforlife

If I knew how to set up a web cache I would; my server is already on 24/7 as a DNS server.


----------



## stumped

Finally got my server up and running. It's an SSH, Samba, and torrent box, currently running Arch Linux ARM.

here's the photo album:
https://plus.google.com/photos/116785772663475241450/albums/5883397723217860897?authkey=CPidxqLd3_PTXg


----------



## DaveLT

Quote:


> Originally Posted by *driftingforlife*
> 
> If I knew how to set up a web cache I would; my server is already on 24/7 as a DNS server.


It's well worth setting up a local DNS server. It's really annoying to use those faux "routers" as DNS servers (as they always end up being); they have slow processors and limited memory, and they often corrupt my local machine's DNS cache.
At least that's what I did.
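For anyone curious, a minimal sketch of that with dnsmasq on a Debian-ish box. The LAN IP and upstream resolver below are placeholders; adjust for your own network:

```shell
# Minimal caching DNS forwarder with dnsmasq (Debian/Ubuntu family).
# 192.168.1.2 (the server's LAN IP) and 8.8.8.8 are placeholders.
sudo apt-get install dnsmasq

# The handful of /etc/dnsmasq.conf lines that matter:
#   listen-address=127.0.0.1,192.168.1.2   # where to answer queries
#   server=8.8.8.8                         # upstream resolver(s)
#   cache-size=10000                       # far bigger cache than any home router

sudo service dnsmasq restart
dig @192.168.1.2 overclock.net   # once this resolves, point your clients at the box
```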







I actually have a larger Socket F dual hexas (4U) that is acting as my switch now







I use many Intel quad gigabit cards to do just that


----------



## herkalurk

Quote:


> Originally Posted by *driftingforlife*
> 
> If I knew how to set up a web cache I would; my server is already on 24/7 as a DNS server.


The point of a server is to be on 24/7; both of mine only shut down for updates or hardware reconfigs. Not to mention there are a lot of automated processes that I would rather have run at night, using resources when I don't want them for myself. Also, services like DNS or DHCP are nothing really demanding.
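Those overnight jobs are just cron entries, for what it's worth. A rough sketch (the script paths are made up, obviously; use your own):

```shell
# Example crontab lines for overnight jobs (edit with: crontab -e).
# Fields are: minute  hour  day-of-month  month  day-of-week  command
# The two scripts below are hypothetical placeholders.

0  2  *  *  *  /usr/local/bin/backup-shares.sh   # 02:00 every night
30 3  *  *  0  /usr/local/bin/scrub-raid.sh      # 03:30 every Sunday
```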


----------



## ledzeppie

I'm using Linux Mint Debian Edition (side question: anyone have any comments on this distro for server use? I found I liked it better than standard Debian, but I'm assuming it will be very similar in stability).

The CLI is pretty easy. I'm by no means an advanced CLI guy, but if you sit down with it you'll pick it up pretty fast. Even then, there isn't a ton of CLI stuff involved.

In fact, here's a quick and dirty guide:
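Roughly this, on a Debian-family box. The cache size and port are just examples (mine is the 10GB cache mentioned later); read the squid.conf comments before trusting any of it:

```shell
# Quick-and-dirty Squid cache on Debian/Mint (sketch; sizes are examples).
sudo apt-get install squid

# The few /etc/squid/squid.conf lines that matter:
#   http_port 3128
#   cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache
#   cache_mem 256 MB                              # in-memory hot objects
#   acl localnet src 192.168.1.0/24               # your LAN range
#   http_access allow localnet

sudo service squid restart
# Then point each browser's proxy setting at <server-ip>:3128
```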


----------



## DaveLT

Jeez, luckily I had my DNS server active. But it still relied on my ISP's DNS server, and that went down just now.







For 2 hours.
That's the value of a DNS server. I'd better convert my DNS box into a full-time server.


----------



## Quasimojo

Quote:


> Originally Posted by *ledzeppie*
> 
> I'm using Linux Mint Debian edition (side thread, anyone have any comments on this distro for server use? I found I liked it better than standard debian, but I'm assuming it will be very similar in stability)


I'm not sure I understand the point of using Mint, if you're just going to work from the command line, anyway. If you *are* using the Mint desktop, then I would say it's just unnecessary overhead.


----------



## ledzeppie

When I tried to install Debian it didn't really seem to want to work on my computer, but Mint Debian Edition went on flawlessly (weird, I know)... I'm also not a big Ubuntu fan, especially how they sort of sold out to Amazon. On my next setup I'll probably use Debian (and try harder to make it work), but again, it didn't want to work this time around.

Also, I don't just work from the command line (I'm not THAT good lol).


----------



## ledzeppie

Had some weird stuff going on with my server last night, so I was like "ahh *** why not reinstall Linux and redo some partitions while I'm at it." Anyway, I went for a minimalist Debian install with a KDE desktop. Worked this time around. I've got it set up perfectly now <3

But I've got to say, if you're having any qualms about using command-line Linux, it is actually one of the most satisfying computing experiences I've had. You actually feel like you're in control of what you do, and there is so much documentation and community around it that a Google search will fix your problems 99% of the time.

Anyway, my 10GB Squid cache is flying! It's worth setting up (which isn't hard). I wonder how fast it would be with an SSD....hmmm......


----------



## Plan9

Quote:


> Originally Posted by *ledzeppie*
> 
> Had some weird stuff going on with my server last night, so I was like "ahh *** why not reinstall linux and redo some partitions while I'm at it." Anyways I went for a minimalist Debian install with a KDE desktop. Worked this time around. I've got it set up perfectly now <3
> 
> But I got to say, if you're having any qualms about using command line linux, it is actually one of the most satisfying computing experiences I've had. You actually feel like you're in control of what you do and there is so much documentation and community around it that a google search will fix your problems 99% of the time.
> 
> Anyways my 10GB squid cache is flying! It's worth setting up (which isn't hard). I wonder how fast it would be with an SSD drive....hmmm......


I live on the command line, so I know exactly what you mean there. Though you lost me with the bit where you said "_minimalist Debian install_" followed by "_KDE desktop_", which is easily the most bloated bit of software you can run on Linux.







(and this is coming from someone who uses KDE as his primary desktop environment)


----------



## ledzeppie

LOL. Yeah, I was kind of conflicted using that term and KDE together, but what I meant was zero additional packages installed.

KDE is damn gorgeous though.


----------



## DaveLT

Looks better than Win7 or Win8 for that matter! Anyday








Have you seen Fedora's new UI yet?


----------



## lowfat

Quote:


> Originally Posted by *ledzeppie*
> 
> Anyways my 10GB squid cache is flying! It's worth setting up (which isn't hard). I wonder how fast it would be with an SSD drive....hmmm......


I'm going to have to look into this, it seems. I'm in the process of building a new 2P LGA2011 home server and am always looking for more uses for it. I wonder how well it would work on a Fusion-io ioXtreme.


----------



## Imrac

Current uptime of my FreeNAS VM







The host has a couple more days than that.


----------



## dushan24

Quote:


> Originally Posted by *DaveLT*
> 
> Looks better than Win7 or Win8 for that matter! Anyday
> 
> 
> 
> 
> 
> 
> 
> 
> Have you seen Fedora's new UI yet?


That's just GNOME 3. It's not unique to Fedora.


----------



## Plan9

Quote:


> Originally Posted by *Imrac*
> 
> Current uptime of my FreeNAS VM
> 
> 
> 
> 
> 
> 
> 
> . The host has a couple more days than that.


Glad you're getting along with FreeNAS, but that uptime isn't impressive for a UNIX box. In fact my home server has more than that and I reboot that thing regularly (powercuts, etc):

Code:

$ uname -a; uptime
FreeBSD Primus.monkey-spank 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19 02:36:49 UTC 2010     [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64
10:48AM  up 108 days, 14:03, 2 users, load averages: 0.23, 0.26, 0.25

(I really need to get round to upgrading that actually....)

At work, some of our internal UNIX systems have uptimes of 2 to 3 years (and they were only rebooted because the server room was being relocated and refitted):

Code:

$ uptime
 11:03am  up 850 day(s),  2:46,  1 user,  load average: 1.08, 1.08, 1.09

(picking a Solaris box at random)

The only reason our Linux boxes don't have the same uptimes is because they're public facing so it's preferable to keep the kernel up to date.
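Side note: on the Linux boxes you can grab the day count straight from /proc/uptime instead of parsing `uptime`'s output. A quick sketch; the optional file argument is just there so you can point it at canned input:

```shell
# Day count from /proc/uptime (first field = seconds since boot).
# Accepts an alternate file as $1 for testing against known values.
days_up() {
    awk '{ printf "%d\n", $1 / 86400 }' "${1:-/proc/uptime}"
}

days_up   # e.g. a box up 108 days and change prints 108
```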


----------



## Imrac

Quote:


> Originally Posted by *Plan9*
> 
> Glad you're getting along with FreeNAS, but that uptime isn't impressive for a UNIX box. In fact my home server has more than that and I reboot that thing regularly (powercuts, etc):
> 
> Code:
> 
> $ uname -a; uptime
> FreeBSD Primus.monkey-spank 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19 02:36:49 UTC 2010     [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64
> 10:48AM  up 108 days, 14:03, 2 users, load averages: 0.23, 0.26, 0.25
> 
> (I really need to get round to upgrading that actually....)
> 
> At work, some of our internal UNIX systems have uptimes of 2 to 3 years (and they were only rebooted because the server room was being relocated and refitted):
> 
> Code:
> 
> $ uptime
> 11:03am  up 850 day(s),  2:46,  1 user,  load average: 1.08, 1.08, 1.09
> 
> (picking a Solaris box at random)
> 
> The only reason our Linux boxes don't have the same uptimes is because they're public facing so it's preferable to keep the kernel up to date.


Pfft updates....









I'm just impressed with it since it's not on a UPS or anything... It's actually on the same circuit as a pretty beefy shredder that dims the lights when in use. The virtual host is idle most of the time, but it houses my 12TB FreeNAS server, a couple of Linux flavors, a couple of Windows 2008 R2 servers, and my virtualized media PC running as my DVR with a CableCARD tuner. The Win7 HTPC VM does get rebooted nightly, though.... Why? Because Windows....


----------



## DaveLT

Quote:


> Originally Posted by *Imrac*
> 
> Pfft updates....
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I am just impressed with it since it's not on a UPS or anything... It's actually on the same circuit as a pretty beefy shredder that dims the lights when in use... the Virtual Host is idle most of the time, but it houses my 12TB FreeNAS server, couple of linux flavors, couple of windows 2008 R2 servers and my Virtualized media PC that I have running as my DVR with a cable card tuner. Although the Win7 HTPC VM gets rebooted nightly.... Why? Because Windows....


Jeez. Windows. It annoys the hell out of me if I leave it running for a couple of days.
I have looked at options to get another cheap server to act as a switch. Maybe I can really do this cheaply if I try:
Opteron 285 x2 : 6$ each
Tyan S2882-D : 40$
3Ware 9550SX-8LP : 24$
2U chassis (it's a Unisign 2U) : 32$
4x Intel 82546EB : 8$ each
1GB x 8 DDR400 Reg ECC : 3$ each

I'm going too far into server hardware ... Thankfully it's old gear I'm interested in.








I will be putting "Zalman-ish" heatsinks on them, the ones with 50mm height. Should be enough for the 88mm-height case.
I have also been looking for a way to use my Caviar RE IDE drives ... hopefully this build will use them.

Does anyone know anything about the 3ware 9550SX? How does it stack up against an Adaptec AAR-2610SA?


----------



## dushan24

I don't see why you would want to run a server as a switch.

I'd say an 8 or 16 port, gigabit layer 3 switch could be had for less.

More power efficient, less overhead and specifically geared for, you know... Switching


----------



## DaveLT

Quote:


> Originally Posted by *dushan24*
> 
> I don't see why you would want to run a server as a switch.
> 
> I'd say an 8 or 16 port, gigabit layer 3 switch could be had for less.
> 
> More power efficient, less overhead and specifically geared for, you know... Switching


Just for fun.







Switches are seriously expensive here. Also, some extra storage will live on this server: 240GB of IDE storage (2 drives) and roughly 2TB of SATA.







Takes me back to 2003, rofl.
Switches start from 200$ here, but I can also do more things with a server like that, even if it's from 2004.


----------



## DaveLT

So yeah, I had to hunt for a low-power, EPS-sized PSU for the 68W Opteron 285 rig I'm building (noise reasons; it's a 2U that puts the PSU up front, and I'm trying to save money) ... It didn't go well. Everything was >500W, and by the time I found a suitable one it was a Delta 650W ...
Then I looked at 1U units and finally found an FSP 350W worth buying. 80 PLUS, and it's cheap.
And finally one that has an 8-pin plug for my rig. Surely 288W is good enough for half-load optimization, right? I'm stuffing in the two 120GB IDE drives that never got used in like 7 years (they're Caviar REs), plus maybe a couple of SATA HDDs ... 1TB at most. I'm certain SATA I would be a bottleneck for 2TB hard disks anyway.
What do you guys think? I can get it for like 30$ flat.


----------



## Artikbot

Go for it. I'm not generally picky about PSUs when they're rated way over the load they'll see, as long as they're minimally good. And if it passes 80 PLUS certification you can be fairly certain it's not a random generic piece of crap.


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> Go for it. I'm not generally picky of PSUs when they run way over specs, as long as they are minimally good. And if it passes 80Plus certification you're rather certain it's not a random generic piece of crap.


Ah, well. I'll do it then







Since an FSP Hexa 500W costs roughly 60$ and would be way less efficient at full load than the 1U unit.
Correction: the FSP 350W 1U costs 30$.


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> I don't see why you would want to run a server as a switch.
> 
> I'd say an 8 or 16 port, gigabit layer 3 switch could be had for less.
> 
> More power efficient, less overhead and specifically geared for, you know... Switching


Good switches are expensive.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> Good switches are expensive.


Especially ones that are "fast enough" and "won't crash under heavy loads"
I'm sick and tired of my router crashing under heavy loads .... And it's a Cisco!


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> Especially ones that are "fast enough" and "won't crash under heavy loads"
> I'm sick and tired of my router crashing under heavy loads .... And it's a Cisco!


IKR...

I went through SEVEN routers in a 5-year timeframe. All of them ended up with blown Ethernet ports because of the load I put on them.


----------



## jibesh

Quote:


> Originally Posted by *Artikbot*
> 
> IKR...
> 
> I went through SEVEN routers in a 5-year timeframe. All of them ended up with blown Ethernet ports because of the load I put on them.


Supermicro Atom 330 barebone 1U server + 2GB DDR2 running pfsense + HP ProCurve1800-24G switch = stable home network for 4+ years


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> IKR...
> 
> I went through SEVEN routers in a 5-year timeframe. All of them ended up with blown Ethernet ports because of the load I put on them.


What the ... holy lord? In this world? Oh ... I don't know the innards of a router anyway.








But anyway, I'm all "NO WAY" to any router serving my LAN. That's why I don't have a LAN ... for now.







Quote:


> Originally Posted by *jibesh*
> 
> Supermicro Atom 330 barebone 1U server + 2GB DDR2 running pfsense + HP ProCurve1800-24G switch = stable home network for 4+ years


It's just much better







I can't believe how long a D-Link DVG-N5402SP takes to start up. Sadly I have to use it for my home telephone, or else I can't make calls (VoIP reasons).








When the contract dries up I'm switching to another ISP. A 4-minute startup for a box that only serves as a VoIP router and primary gateway is unacceptable (the fiber gateway is hella strange).


----------



## Artikbot

I know... I'm in the process of getting a half-decent 8-port gigabit switch. But for now, I will hold onto this ZyXEL router my ISP gave me, which seems to hold up pretty well (doesn't crash _too_ often)...

I don't think I've ever posted pics of my server in The Wooden Cheese case, my first scratch-built case.









I modified it to house a six-drive array in the back, as opposed to the initial version that only held two drives in the front. I stopped there because I ran out of aluminium... it could potentially hold 36 drives with a properly made rack.


----------



## CloudX

That's pretty cool!


----------



## DaveLT

Cool as it is it's a very good idea to use wood







Easy to cut holes


----------



## dushan24

Quote:


> Originally Posted by *Plan9*
> 
> Good switches are expensive.


Trust me, I know.

But I still think you could get a decent one for less than the cost of building a server.

Or get a good one 2nd hand


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> Trust me, I know.
> 
> But I still think you could get a decent one for less than the cost of building a server.
> 
> Or get a good one 2nd hand


That would depend on whether you basically want a "smart hub" or whether you need managed vlans, bonded pairs and all the other smart networking stuff that I get frustrated with and end up delegating to other people to set up


----------



## DaveLT

Ah well, take it from me: servers are fun to play with. Endless fun for people like me and bob.


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Cool as it is it's a very good idea to use wood
> 
> 
> 
> 
> 
> 
> 
> Easy to cut holes


Not so easy to disperse the heat though (and fans are noisy







)


----------



## Artikbot

Quote:


> Originally Posted by *Plan9*
> 
> Not so easy to disperse the heat though (and fans are noisy
> 
> 
> 
> 
> 
> 
> 
> )


Shenanigans. That case (with its panels closed of course) performs much better than any other case I've ever had









It also runs as silent as it can be, and if I swap the 8800GT with a X1600Pro I've got, it becomes virtually silent (unless you load the CPU... then that fan goes tornado XD)

Concerning switches, I'm the type of guy who prefers unmanaged switches with a proxy server to control the traffic. In most cases you have a server anyway, and unless your network is rather large and needs very strict rules to avoid packet loops, traffic filtering from the proxy server itself is usually good enough.


----------



## Plan9

Quote:


> Originally Posted by *Artikbot*
> 
> Shenanigans. That case (with its panels closed of course) performs much better than any other case I've ever had


well yeah, you have a whole boat load of beefy fans on that case









My point is that wood is a better insulator than metal, so you need to be strict about adding additional fans. Though I will admit I hadn't realised you could get fans as quiet as yours.
Quote:


> Originally Posted by *Artikbot*
> 
> It also runs as silent as it can be, and if I swap the 8800GT with a X1600Pro I've got, it becomes virtually silent (unless you load the CPU... then that fan goes tornado XD)


Oh nice. How much were your fans?
Quote:


> Originally Posted by *Artikbot*
> 
> Concerning switches, I'm the type of guy that prefers unmanaged switches with a proxy server to control the traffic. In most of the cases you have a server anyway, and unless your network is rather large and needs very strict rules to avoid packet loops, traffic filtering from the proxy server itself is usually good enough.


None of the examples I gave for needing a managed switch are possible just via a proxy server.


----------



## Artikbot

Quote:


> Originally Posted by *Plan9*
> 
> well yeah, you have a whole boat load of beefy fans on that case
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My point is that wood is a better insulator than metal, so you need to be strict about adding additional fans. Though I will admit I hadn't realised you could get fans as quiet as yours.
> Oh nice. how much were your fans?


Yeah, but you don't get a whole lot of heat escaping your case from the metal, usually. A few watts at most... In my case it is offset by the excessive airflow









They are NZXT FN series (5.60€ each), run through the custom fan controller you can see under the DVD drive. I run them slightly above their minimum startup voltage, and they are very quiet. The small fan in the bottom used to cool the old HDD rack that was there; now it serves no purpose. The one on the back is a very quiet Xilence 2000RPM fan I got years ago for 4€; it cools the new drive rack.









Quote:


> None of the examples I gave for needing a managed switch are possible just via a proxy server.


Indeed, I just wanted to chime in and say what I do









Bear with me, I sometimes want to say stuff, no matter the situation


----------



## tycoonbob

Used Dell PowerConnect switches are the best. The PowerConnect 5324 is a 24-port gigabit managed switch; I consider it L2.5, since it's an L2 switch with SOME L3 capabilities. You can get these for around (or under) $100 used. The PowerConnect 5424 is another nice switch, but costs more like $250-300.

I really don't think you can find anything better than a used PowerConnect 5324 for under $100. The CLI is a lot like Cisco's, so there's not much of a learning curve. The web interface is basic, but still allows for essential configuration such as VLANs, LACP, portfast, etc.


----------



## DaveLT

I'll consider that. Meanwhile, I really don't know why people would discourage me from buying a server.


----------



## TheNegotiator

Here's my home server rack:


From top to bottom:

Game Hosting/Web Server

OS: Windows Server 2008
Case: HP dl385 stock
CPU: 2x AMD Opteron 2218
Motherboard:
Memory: 4GB
PSU: 1x 1000w
OS HDD: WD Velociraptor 80GB
Storage HDD(s): 2x WD Blue 500GB 2.5"
Server Manufacturer: HP

Currently unused

OS: N/A
Case: Dell PowerEdge 1950 III stock
CPU: 2x Intel Xeon E5450
Motherboard:
Memory: 8GB
PSU: 2x 670w
OS HDD: 1x 72GB 15k SAS
Storage HDD(s): N/A
Server Manufacturer: Dell

DC/DHCP/DNS/Print Server

OS: Windows Server 2012 Standard
Case: Dell PowerEdge 2950 III stock
CPU: 2x Intel Xeon E5450
Motherboard:
Memory: 8GB
PSU: 2x 750w
OS HDD: 2x 72GB 15k SAS (RAID 1)
Storage HDD(s): 1x WD10EARS 1TB
Server Manufacturer: Dell

Storage/Media/Backup Server

OS: Windows Server 2012 Standard
Case: Dell PowerEdge 2900 II stock
CPU: 2x Intel Xeon 5160
Motherboard:
Memory: 8GB
PSU: 2x 960w
OS HDD: 1x 146GB 15k SAS
Storage HDD(s): 4x WD WD20EARX 2TB
Server Manufacturer: Dell

(Not shown) Untangle Router/Firewall

OS: Untangle
Case: Dell PowerEdge 1950 II stock
CPU: 1x Intel Xeon 5130
Motherboard:
Memory: 2GB
PSU: 1x 670w
OS HDD: 1x 72GB 10k SAS
Storage HDD(s): N/A
Server Manufacturer: Dell


----------



## EpicAMDGamer

Quote:


> Originally Posted by *cmgunn*
> 
> Here's my home server rack:
> 
> 
> From top to bottom:
> 
> Game Hosting/Web Server
> 
> OS: Windows Server 2008
> Case: HP dl385 stock
> CPU: 2x AMD Opteron 2218
> Motherboard:
> Memory: 4GB
> PSU: 1x 1000w
> OS HDD: WD Velociraptor 80GB
> Storage HDD(s): 2x WD Blue 500GB 2.5"
> Server Manufacturer: HP
> 
> Currently unused
> 
> OS: N/A
> Case: Dell PowerEdge 1950 III stock
> CPU: 2x Intel Xeon E5450
> Motherboard:
> Memory: 8GB
> PSU: 2x 670w
> OS HDD: 1x 72GB 15k SAS
> Storage HDD(s): N/A
> Server Manufacturer: Dell
> 
> DC/DHCP/DNS/Print Server
> 
> OS: Windows Server 2012 Standard
> Case: Dell PowerEdge 2950 III stock
> CPU: 2x Intel Xeon E5450
> Motherboard:
> Memory: 8GB
> PSU: 2x 750w
> OS HDD: 2x 72GB 15k SAS (RAID 1)
> Storage HDD(s): 1x WD10EARS 1TB
> Server Manufacturer: Dell
> 
> Storage/Media/Backup Server
> 
> OS: Windows Server 2012 Standard
> Case: Dell PowerEdge 2900 II stock
> CPU: 2x Intel Xeon 5160
> Motherboard:
> Memory: 8GB
> PSU: 2x 960w
> OS HDD: 1x 146GB 15k SAS
> Storage HDD(s): 4x WD WD20EARX 2TB
> Server Manufacturer: Dell
> 
> (Not shown) Untangle Router/Firewall
> 
> OS: Untangle
> Case: Dell PowerEdge 1950 II stock
> CPU: 1x Intel Xeon 5130
> Motherboard:
> Memory: 2GB
> PSU: 1x 670w
> OS HDD: 1x 72GB 10k SAS
> Storage HDD(s): N/A
> Server Manufacturer: Dell


Excellent rack!

Very very nice hardware you've got there, probably loud too but that's just fine.

What's up with the "Dell ML6000" (I zoomed in on the picture to see what it was)??


----------



## TheNegotiator

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> Excellent rack!
> 
> Very very nice hardware you've got there, probably loud too but that's just fine.
> 
> Whats up with the "Dell ML6000" (I zoomed in on the picture to see what it was)??


Thanks! The noise isn't too much of a problem, the rack is in a climate controlled room separate from the house.

The ML6000 is a tape backup library. A company I worked for retired it and let me take it. I currently have 8TB of backup space on it, it'll do 32TB when all slots are filled.


----------



## killabytes

Since I'm known as the server guy around here, I always get PMs asking for more info about my gear. Well, 99% of my stuff has been sold off. I only have a single server left, my 14TB storage box.

This is all that remains, yes even the rack is sold.

Full shot of the server. Runs Microsoft Windows Home Server 2011. Little change for me.


My spares and soon to be installed drives:


My newish 5.25 to 4 2.5 drive bay:


Most of you have seen this, this is my RAID controller output. I made this out of some basic plexi-glass and LEDs:


I do plan on using all of those drives. I have a Dell Perc 5i and a HP P400 ready to go, just waiting for cables.


----------



## jibesh

Quote:


> Originally Posted by *killabytes*
> 
> Since I'm known as the server guy around here I always get PMs asking for more info about my gear. Well 99% of my stuff has been sold off. I only have a single server that is my 14TB storage.
> 
> This is all that remains, yes even the rack is sold.


Why is it all gone?


----------



## killabytes

Quote:


> Originally Posted by *jibesh*
> 
> Why is it all gone?


I spend most of my spare time working on my house and yard, and I'll be a father in 6 weeks.

I've spent the past 4 months tearing apart the bedrooms and updating them for my wife and me and for our baby.

Also, I spend 12 hours a day, 4 days a week, with some of the most powerful servers you can think of.


----------



## Callist0

So I just picked up a Dell PowerEdge 850 from work and got it up and running with Debian Wheezy. However, I don't have a lot of space in my house for a full rack-type mount, so I was wondering if anyone had any suggestions on how to mount the thing... preferably sideways against the wall?

Thanks


----------



## DaveLT

That should do








Make sure there are no vents on the bottom though.


----------



## killabytes

Quote:


> Originally Posted by *Callist0*
> 
> So I just picked myself up a Dell PowerEdge 850 from work and got it up and running with Debian Wheezy. However I don't have a lot of space in my house for a full rack type mount so was wondering if anyone had any suggestions on how to mount the thing..preferably sideways against the wall?
> 
> Thanks


Sideways like, bottom flat against the wall?

Get some screws, go through the metal case and into the studs.


----------



## NKrader

Quote:


> Originally Posted by *Callist0*
> 
> So I just picked myself up a Dell PowerEdge 850 from work and got it up and running with Debian Wheezy. However I don't have a lot of space in my house for a full rack type mount so was wondering if anyone had any suggestions on how to mount the thing..preferably sideways against the wall?
> 
> Thanks


http://www.newegg.com/Product/Product.aspx?Item=N82E16816129040&nm_mc=KNC-GoogleAdwords&cm_mmc=KNC-GoogleAdwords-_-pla-_-Server+Accessories-_-N82E16816129040&gclid=CLfGl5j42LcCFehAMgodhRAAkA


----------



## DaveLT

Old it definitely is. But this sucker is only 160$







and comes with 8GB of DDR400 RAM (lol)








Somehow, when I was removing the heatsinks to give the CPUs a new coat of thermal paste, the top CPU came out along with the heatsink ... yeah, the paste had become THAT sticky .... It took a flat-head screwdriver to get it off, but luckily there were no bent pins and no damaged CPU. And yes, the socket was in the levered (open) position.


----------



## TheNegotiator

Quote:


> Originally Posted by *DaveLT*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Old it definitely is. But this sucker is only 160$
> 
> 
> 
> 
> 
> 
> 
> and comes with 8GBs of DDR400 RAM (lol)
> 
> 
> 
> 
> 
> 
> 
> 
> Somehow when i was removing the heatsinks to give it a new "Coat" of thermal paste, the top CPU came out along with the heatsink ... yeah it became THAT sticky .... Took a flat head screwdriver to get it off it but luckily no bent pin or damaged CPU and yes, the socket is in a levered position


Is that a DL380 G4?


----------



## DaveLT

Quote:


> Originally Posted by *cmgunn*
> 
> Is that a DL380 G4?


DL385 G1 i think, yeah G1.


----------



## DaveLT

Jeez, found out the seller lied about the CPUs. He said 2.2GHz QUAD CORE (which presumably means 2x 270s), but it's actually 2x 252s!
BUMMER.


----------



## dushan24

Quote:


> Originally Posted by *DaveLT*
> 
> Jeez, found out the seller lied about the CPU, he said 2.2GHz QUAD CORE ( which presumably means 2x 270s) but it's actually 2x 252s!
> BUMMER.


Appeal it. Where did you get it, eBay or TaoBao or somewhere else?


----------



## DaveLT

Quote:


> Originally Posted by *dushan24*
> 
> Appeal it, where did you get it eBay or TaoBao or something else...


eBay, from a local seller. The seller said maybe the guy who did the checking f'ed up or something, but they're sending two 275s to me now.


----------



## Matt-Matt

Quote:


> Originally Posted by *DaveLT*
> 
> Ebay from a local seller. Seller said maybe the guy who did the checking f'ed up or something but are sending two 275s to me now


Nice!


----------



## bayarea757

Used for MCSA studying, to let my friend hack into it for his studies, storage, and media sharing.

OS: VMware ESXi 5.1 with Server 2008 R2 Enterprise and FreeNAS VMs
Case: Dell PowerEdge 2950 III
CPU: 2x quad-core Xeon L5420 2.5GHz
Motherboard: Dell
Memory: 16GB ECC
PSU: Redundant 750W
OS HDD: 2x 147GB SAS 10k RAID 0
Storage HDD: 4x 500GB SATA 7.2k RAID 5, 750GB SATA 7.2k, and 320GB SATA 7.2k
Server Manufacturer: Dell


----------



## The_Rocker

http://s83.photobucket.com/user/Tom_418/media/20130617_111631.jpg.html

This is my development, or 'pre-production', platform. It's got 5 Dell CS23-SHs in it, each with a pair of quad-core Xeon L5410s and 16GB RAM. The Dell 2950 has a pair of Xeon E5430s with 16GB RAM and a PERC5i. It is loaded with 6x 1TB SATA drives and is running Openfiler to act as an iSCSI SAN for the ESXi hosts.

This is mainly for work. It lives in a side room in the small office I work in. The 'production' system is a Dell M1000e blade chassis loaded with 16 blades, each with dual quad-core Xeon X5560s and 48GB RAM, all connected to an EMC VNXe SAN.

Switches are some cheap Dell PowerConnects. 1Gbps.


----------



## bspree45

Hosting ArmA 2 and 3 servers, Minecraft, and at times a few others.

Occasionally acting as fileserver


----------



## jibesh

Quote:


> Originally Posted by *bspree45*
> 
> 
> 
> Hosting ArmA 2 and 3 servers, Minecraft, and at times a few others.
> 
> Occasionally acting as fileserver


Specs?


----------



## bspree45

In my sig, but I'll share anyway.

E7400 C2D 2.8GHz with Peltier cooling, running at <5°C
4GB Mushkin (2x 2GB)
2x 320GB HDDs
1x 64GB Crucial M4
9500GT 512MB
Leftover case from Velocity Micro (manufactured by Lian-Li)


----------



## DaveLT

Choices, choices.
Dayum, I see 10k 73GB 2.5" disks for 15$ (or 15k ones for 20$ lol ...)
BUT
their throughput is definitely worrying ... I can buy a new WD 2.5" Black 750GB for 50$, and that will definitely slaughter them in throughput, and in storage density of course.
For those wondering why I'm thinking of using laptop drives (the thought of it ...), the answer is simple: I will be using two single-bay 4x 2.5" hot-swap cages in a 2U, into which I'll be putting 5148s and eventually L5420s, or maybe just 5420s.


----------



## bonami2

Toshiba Tecra A3 notebook with:

Pentium M 1.6GHz
1GB RAM (2x 512MB)
Onboard graphics
Windows XP Pro

For the last 6 months it ran a Minecraft server; now it's a file server.


----------



## beers

Recently upgraded mine from:

Athlon II 260u
Asus M5A78L-M LX+
4 GB Samsung 30nm
Corsair CX430

to

Intel i7 4770K
Asus Sabertooth Z87
32 GB Crucial Ballistix LP
500w Rosewill SilentNight

Power-wise it went from ~100W at idle to ~72W as measured from my UPS. Definitely a lot faster; it's even hugely noticeable in negotiating SSH sessions and other minor tasks. The RAID speed got an extra bump as observed by hdparm, breaking 700 MB/sec:








Quote:


> [[email protected] ~]$ sudo hdparm -t /dev/md127
> 
> /dev/md127:
> Timing buffered disk reads: 2116 MB in 3.00 seconds = 704.96 MB/sec


----------



## ledzeppie

Gonna be getting a couple of 3TB Barracudas soon I think.
Will make my setup have:

Desktop:
2TB Green for documents, music, photos
120GB SSD for boot

Server
1TB Barracuda for boot as well as Squid Cache, photo and music backup.
3TB Barracuda for movies

Offline in Enclosure:
2TB Green to backup the one in the desktop
3TB Barracuda to backup the one in the server.

That leads me to thinking, does anyone else think RAID is slightly overrated? Personally, if I were to buy 2 drives, I'd keep one of them offline in case of power surges, viruses, accidental deletion, etc. Everyone seems to just throw **** into a RAID array these days and assume it's safe. I definitely understand the idea of having things on two hard drives immediately after writing them, which for things like media work can definitely be useful, although I guess I'm talking pretty specifically about RAID1 here. I guess if you're willing to pay the price it's great, but people act like it's an alternative to offline backups, which I don't think it is.


----------



## DaveLT

1) RAID is very useful for LVM (in fact, about the only reason you'd ever use LVM ...)
2) You don't want a single drive bringing down a whole LVM group
3) Redundancy reasons ... it's obvious


----------



## ledzeppie

I'm not saying it's not useful, just that people often use it as a replacement for offline backup when it's not.


----------



## Zankza

Got two of the below:

OS: Win 2012
Case: N/A
CPU: 5645s
Motherboard: SuperMicro
Memory: 96GB
OS HDD (If you have one): Intel 520 Series
Storage HDD(s): RevoDrive 3 x2
Server Manufacturer (Ex: Dell, HP, You?): Hybrid









It's being used for Hyper-V, and here's the only picture I have.

http://minus.com/lpzcOLQoa1WZn


----------



## DaveLT

Quote:


> Originally Posted by *ledzeppie*
> 
> I'm not saying it's not useful, just that people often use it as a replacement for offline backup when it's not.


That said,


----------



## blooder11181

No more servers for me.


----------



## bonami2

RAID is nice, but what if the PSU blows and takes both drives?

That's why I prefer external drives for backup.


----------



## beers

Quote:


> Originally Posted by *ledzeppie*
> 
> That leads me to thinking, anyone else think RAID is slightly overrated? Personally if I were to buy 2 drives I'd keep one of them offline in case of power surges, viruses, accidental deletion, etc. Everyone seems to just throw **** into a RAID array these days and assume its safe.


It's not really the end-all solution, but it is a cost-effective way to retain data in the event of a minor hardware failure.

Most backups are budget-limited; it often comes down to 'how much is my data worth?'. The data on my server isn't anything irreplaceable, so something like RAID5 + a UPS makes sense. If I had something more worthwhile, it'd certainly be worth pursuing alternative means, but after a few degrees of paranoia the cost quickly outweighs the benefit.
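The cost side of that trade-off is easy to put numbers on. Here is a quick sketch of the usable-capacity arithmetic for the common RAID levels (generic math for illustration, not beers' actual array):

```python
# Usable capacity and fault tolerance for common RAID levels,
# given n identical drives of size_tb each. Plain arithmetic only.
def raid_usable(level, n, size_tb):
    if level == 0:
        return n * size_tb, 0          # stripe: all the space, no redundancy
    if level == 1:
        return size_tb, n - 1          # mirror: one drive's worth of space
    if level == 5:
        return (n - 1) * size_tb, 1    # one drive's worth of parity
    if level == 6:
        return (n - 2) * size_tb, 2    # two drives' worth of parity
    raise ValueError(f"unhandled RAID level: {level}")

for level in (0, 1, 5, 6):
    usable, failures = raid_usable(level, n=4, size_tb=1)
    print(f"RAID{level} over 4x 1TB: {usable}TB usable, survives {failures} failure(s)")
```

With 4x 1TB drives, RAID5 gives 3TB usable at the cost of one drive, and it only protects against a drive dying, not deletion or a virus, which is exactly the distinction being drawn here.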


----------



## EpicAMDGamer

Quote:


> Originally Posted by *Zankza*
> 
> Got two of below
> 
> OS: Win 2012
> Case: N/A
> CPU: 5645s
> Motherboard: SuperMicro
> Memory: 96GB
> OS HDD (If you have one): Intel 520 Series
> Storage HDD(s): RevoDrive 3 x2
> Server Manufacturer (Ex: Dell, HP, You?): Hybrid
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's being used for hyper-v and here's only one picture i have.
> 
> http://minus.com/lpzcOLQoa1WZn


That is an absolute BEAST!

What in the world are you running on those VMs to require that sort of horsepower?


----------



## bonami2

RevoDrives are like $1500 each, no?


----------



## tycoonbob

Quote:


> Originally Posted by *bonami2*
> 
> Revodrive are like 1500$ each no?


Can be, but no. Depends on the size. The 240GB version is $640, and the 480GB version is $999. The 110GB version is like $100, which is what I had about 2 years ago.


----------



## bonami2

Ok


----------



## Zankza

Quote:


> Originally Posted by *tycoonbob*
> 
> Can be, but no. Depends on the size. *The 240GB version is $640*, and the 480GB version is $999. The 110GB version is like $100, which is what I had about 2 years ago.


Correct. With Hyper-V's VHD differencing, I made a master copy and made the rest differencing disks, saving an incredible ****ton of space. Yet the VMs run/boot faster than on any SSD.
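Roughly how the space saving works out: one parent VHD shared by every VM, plus a small differencing disk per VM. A toy model with made-up sizes (the figures below are illustrative assumptions, not Zankza's numbers):

```python
# Toy model of differencing-disk space savings. All sizes are assumptions.
BASE_GB = 20    # one master/parent VHD shared by all VMs
DIFF_GB = 3     # average per-VM differencing disk
FULL_GB = 20    # size of a full, independent VHD per VM
VMS = 10

full_clones = VMS * FULL_GB               # every VM gets its own full copy
differencing = BASE_GB + VMS * DIFF_GB    # one base + small per-VM deltas
print(f"full clones: {full_clones}GB vs differencing: {differencing}GB")
```

That's 200GB vs 50GB for ten VMs in this sketch, and the saving grows with the VM count.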


----------



## tycoonbob

Quote:


> Originally Posted by *Zankza*
> 
> Correct. With Hyper-V's VHD differencing, I made a master copy and made the rest differencing disks, saving an incredible ****ton of space. Yet the VMs run/boot faster than on any SSD.


RevoDrives are awesome for Server 2012 Pooled VDI infrastructures! I wouldn't recommend differencing disks for a production environment, but for a test/lab/dev environment, you can't really get much better for performance or space.


----------



## Imrac

Quote:


> Originally Posted by *ledzeppie*
> 
> That leads me to thinking, anyone else think RAID is slightly overrated? Personally if I were to buy 2 drives I'd keep one of them offline in case of power surges, viruses, accidental deletion, etc. Everyone seems to just throw **** into a RAID array these days and assume its safe. I definitely understand the idea of having things on two hard drives immediately after writing them, which for things like media work can definitely be useful. Although I guess I'm also talking pretty specifically about RAID1 here as well. I guess if you're willing to pay the price it's great, but people act like it's an alternative to offline backups, which I don't think it is.


Raid =/= Backup or Data integrity, Raid = High Availability.


----------



## lowfat

Still a work in progress.


----------



## Mugen87

What bench case is that


----------



## lowfat

It is a Technofront HWD. All the drive cages are from various Lian Lis.

The rest of the hardware is:
AMD 6128
Supermicro H8SGL-F
64GB 1333MHz registered ECC
80GB FusionIO ioXtreme
2x20Gbps Infiniband


----------



## DaveLT

Those VRMs need some fan cooling (60+CFM please) on them, or they will get hot!


----------



## lowfat

Quote:


> Originally Posted by *DaveLT*
> 
> Those VRMs need some fan cooling (60+CFM please) on them, or they will get hot!


There will be a Gentle Typhoon AP-15 directly in front of the VRMs and PCIe cards. The Infiniband card runs extremely hot too.


----------



## DaveLT

Quote:


> Originally Posted by *lowfat*
> 
> There will be a Gentle Typhoon AP-15 directly in front of the VRMs and PCIe cards. The Infiniband card runs extremely hot too.


That's good then







I didn't know InfiniBand cards run extremely hot, on the other hand ...


----------



## Zankza

Quote:


> Originally Posted by *lowfat*
> 
> It is a Technofront HWD. All the drive cages are from various Lian Lis.
> 
> The rest of the hardware is:
> AMD 6128
> Supermicro H8SGL-F
> 64GB 1333MHz registered ECC
> 80GB FusionIO ioXtreme
> 2x20Gbps Infiniband


I don't quite see the logic of 2x 20Gbps InfiniBand for home use?


----------



## lowfat

Quote:


> Originally Posted by *Zankza*
> 
> I don't quite see the logic of 2x 20Gbps InfiniBand for home use?


Super fast access to storage and ram drive from my gaming PC.









In all honesty I went w/ it because it was significantly cheaper than moving to 10GbE.


----------



## Zankza

Quote:


> Originally Posted by *lowfat*
> 
> Super fast access to storage and ram drive from my gaming PC.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In all honesty I went w/ it because it was significantly cheaper than moving to 10GbE.


Do you have a thread about this setup? I am curious.


----------



## beers

Quote:


> Originally Posted by *lowfat*
> 
> Super fast access to storage and ram drive from my gaming PC.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In all honesty I went w/ it because it was significantly cheaper than moving to 10GbE.


What kind of IPoIB performance do you get for that platform?


----------



## jibesh

Quote:


> Originally Posted by *beers*
> 
> What kind of IPoIB performance do you get for that platform?


Basically, it will only be limited by the speed of your storage devices. A RAM disk will be able to max out a 10 or 20Gbps IB link very easily, though.

To max out a 20Gbps link on those cards using standard storage devices, you would need about 5+ SATA3 SSDs or 2+ Revo X3 drives in RAID0.

Between my file server with an 8-disk RAID6 array and my desktop PC, I get about 400 to 440 MB/s (3.2 to 3.5 Gbps) using a 10Gbps IB card.
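That sizing math can be sanity-checked in a few lines. The per-device throughput figures below are rough assumptions for illustration, not benchmarks from this thread:

```python
import math

# How many devices in RAID0 does it take to fill an IB link?
LINK_GBPS = 20
LINK_MB_S = LINK_GBPS * 1000 / 8   # ~2500 MB/s, ignoring protocol overhead

# Assumed sequential throughput per device, in MB/s:
devices = {"SATA3 SSD": 500, "RevoDrive 3": 1000}

for name, mb_s in devices.items():
    needed = math.ceil(LINK_MB_S / mb_s)
    print(f"~{needed}x {name} in RAID0 to saturate a {LINK_GBPS}Gbps link")
```

That lines up with the "5+ SATA3 SSDs" figure; real IPoIB overhead pushes the requirement a little higher still.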


----------



## lowfat

Quote:


> Originally Posted by *beers*
> 
> What kind of IPoIB performance do you get for that platform?


I'll let you know in about 2 weeks when I get the rest of it together.


----------



## Shipw22

I have two retired ones... I re-purposed them to the below specs.

A "Frankenstein" - Compaq Socket 7 Motherboard, 384MB PC100 RAM, AMD K6-2 CXT 530MHz, Seagate Barracuda ATA IV 40GB HDD (ST340016A), and integrated Trident Video Accelerator Blade 3D/MVP4.

An old gamer - ASUS P4V8X-X Motherboard, 1GB PC3200 DDR RAM, Intel Pentium 4 2.8 GHz, Western Digital Caviar Blue 120GB HDD (WD1200BB), and PowerColor ATI Radeon X1650 Pro AGP 8x 256 MB video card.


----------



## WroLeader

File server:

OS: ArchLinux
Case: Antec VSK-3000E
CPU: Intel Pentium IV @ 1.8GHz
Motherboard: Intel Desktop Board D915GAV
Memory: 2GBs DDR2-667
PSU: Antec 300
HDD: Seagate Barracuda 7200 - 500GBs
Server Manufacturer: Me

Configured for FTP access using an Intel PRO/1000 GT card.


----------



## ledzeppie

What do you guys think of generic/standard onboard NICs vs dropping 25-35 bucks on an Intel one (for home use, nothing super special)? What's the main advantage? I understand the Intel ones are rock solid; I've just never really had too much of a problem with the one on my cheapo server motherboard.


----------



## PCSarge

i5 750 @ stock
EVGA P55 Classified 200
4GB Patriot Viper II
6850 and 2x 5770s
6 3TB WD greens (1 OS w/ windows server 2008, rest in raid 5)
850W PSU
caseless as you can see

Used to run Bitcoin on the 3 cards 24/7, and as a file and backup server for my main rig.


----------



## dushan24

Quote:


> Originally Posted by *ledzeppie*
> 
> What do you guys think of generic/standard onboard NICs vs dropping 25-35 bucks on an Intel one (for home use, nothing super special). What's the main advantage? I understand the Intel one's are rock solid, I've just never really had too much of a problem with the one on my cheapo server motherboard.


The Intel ones are vastly superior in terms of hardware performance, quality, features and drivers.

Intel NICs will work a lot better with hypervisors such as ESXi or Xen than some cheap onboard one (which sometimes is not even compatible).

You will get better throughput and a myriad of other things.

I think Intel makes the best NICs, full stop.


----------



## DaveLT

Quote:


> Originally Posted by *ledzeppie*
> 
> What do you guys think of generic/standard onboard NICs vs dropping 25-35 bucks on an Intel one (for home use, nothing super special). What's the main advantage? I understand the Intel one's are rock solid, I've just never really had too much of a problem with the one on my cheapo server motherboard.


As far as I know: high throughput, solid stability, and unmatched ability to drive long runs of LAN cabling.


----------



## tiro_uspsss

Quote:


> Originally Posted by *ledzeppie*
> 
> What do you guys think of generic/standard onboard NICs vs dropping 25-35 bucks on an Intel one (for home use, nothing super special). What's the main advantage? I understand the Intel one's are rock solid, I've just never really had too much of a problem with the one on my cheapo server motherboard.


If it's an actual server mobo (Supermicro etc.) it'll very likely have an Intel inbuilt NIC already.


----------



## dushan24

Quote:


> Originally Posted by *DaveLT*
> 
> As far as i know, high throughput, solid stability, unmatched capability of driving long distances of LAN cabling.


That too 
Quote:


> Originally Posted by *tiro_uspsss*
> 
> if its an actual server mobo (supermicro etc) it'll highly likely have intel as the inbuilt NIC already


Not true for all (example: these days Dell uses Broadcom controllers for their integrated NICs)


----------



## DaveLT

Quote:


> Originally Posted by *dushan24*
> 
> That too
> Not true for all (Example: These days Dell use Broadcomm controllers for their integrated NIC's)


Are they cheaping out even on PowerEdge now?!


----------



## Plan9

Quote:


> Originally Posted by *WroLeader*
> 
> File server:
> 
> OS: ArchLinux
> Case: Antec VSK-3000E
> CPU: Intel Pentium IV @ 1.8GHz
> Motherboard: Intel Desktop Board D915GAV
> Memory: 2GBs DDR2-667
> PSU: Antec 300
> HDD: Seagate Barracuda 7200 - 500GBs
> Server Manufacturer: Me
> 
> Configured for FTP access using an Intel PRO/1000 GT card.


I used to run ArchLinux as my file server. In fact that was a P4 as well (not nearly that much RAM though)


----------



## dushan24

Quote:


> Originally Posted by *DaveLT*
> 
> Are the cheaping out even on poweredge now?!


Seems that way; every PowerEdge we've gotten over the past two years has had Broadcom NICs onboard.

3 x R810
1 x R610
2 x R510


----------



## ledzeppie

So basically it might be worth dropping 25 bucks to get an Intel NIC, lol.


----------



## Oedipus

It seems like a quad i350 is about a $400 upgrade over a quad Broadcom.


----------



## DaveLT

If you still have PCI-X slots you can buy an 8494MT for cheap on the used market.


----------



## tiro_uspsss

Quote:


> Originally Posted by *dushan24*
> 
> That too
> Not true for all (Example: These days Dell use Broadcomm controllers for their integrated NIC's)


+1, hence why I said _likely_, the implication being some won't. On that note, are Broadcom any good?


----------



## DaveLT

Good, yes, but not nearly as good as Intel ... but for the price, if your requirements aren't that high, go Broadcom instead.


----------



## dushan24

Quote:


> Originally Posted by *tiro_uspsss*
> 
> +1. hence why I said _likely_, implication being some won't .. on that note, are broadcom any good?


They aren't bad but they're nowhere near as good.

Sorry for the one line answer, I'm busy.

Let me know if you want more verbosity...

My view, if you just need a single port, get this http://www.scorptec.com.au/computer/33919-expi9301ctblk


----------



## jibesh

Quote:


> Originally Posted by *dushan24*
> 
> They aren't bad but they're nowhere near as good.
> 
> Sorry for the one line answer, I'm busy.
> 
> Let me know if you want more verbosity...
> 
> My view, if you just need a single port, get this http://www.scorptec.com.au/computer/33919-expi9301ctblk


This would be a better deal...http://www.ebay.com/itm/EXPI9402PTBLK-Intel-PRO-1000-PT-DP-Server-Adapter-/350819669171

Dual port Intel 1GbE NIC for $35.


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> This would be a better deal...http://www.ebay.com/itm/EXPI9402PTBLK-Intel-PRO-1000-PT-DP-Server-Adapter-/350819669171
> 
> Dual port Intel 1GbE NIC for $35.


But there's also a chance it's just a cheap Chinese fake (I avoid eBay for new PC parts).


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> But there's also a chance it's just a cheap Chinese fake (I avoid ebay for new PC parts)


Never had that problem before but might have just been lucky...


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> Never had that problem before but might have just been lucky...


I might just be paranoid, and most of the stuff on there might be fine. But given how cheap some legitimate retail shops are, I'd sooner buy through them and have more guarantees.


----------



## dushan24

Quote:


> Originally Posted by *jibesh*
> 
> This would be a better deal...http://www.ebay.com/itm/EXPI9402PTBLK-Intel-PRO-1000-PT-DP-Server-Adapter-/350819669171
> 
> Dual port Intel 1GbE NIC for $35.


Indeed, assuming 2nd-hand is OK and there is a free PCIe x4 or greater slot.

I was only considering new parts.


----------



## MikhailV

Quote:


> Originally Posted by *tycoonbob*
> 
> Can be, but no. Depends on the size. The 240GB version is $640, and the 480GB version is $999. The 110GB version is like $100, which is what I had about 2 years ago.


How was your experience with RevoDrives? I want to get one and stick it into my WS/server to store critical files. I'm a backup fiend: I make backup DVDs, store them on my NAS, and have a server in a co-location.


----------



## herkalurk

Quote:


> Originally Posted by *MikhailV*
> 
> How was your experience with revodrives? I want to get one and stick it into my WS/Server to store critical files. I'm a backup fiend, I make backup DVDs, store it in my NAS, and have server in a co-location.


To be honest, using expensive media for backup is a horrible idea. If the point of that drive is backup, then use cheaper media, like a 1TB drive. If the server is in colo, then you're limited by the bandwidth between your home and the colo, so what is your ISP speed? 25 Mbit? Even 100 Mbit is only about 11.4 MB/s, and any modern SATA drive can fulfill that stream. A RevoDrive is a great boot drive or database drive, but not backup. I'm also a backup fiend: at work we have a 195-tape library full of LTO5 tapes (585 TB compressed on tape). We restore everything from tape, and there is no reason we can't wait the 90 seconds for a tape to mount.
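The link-speed arithmetic above is worth a quick check (the ~9% framing-overhead factor below is an assumption for illustration):

```python
# Convert an ISP link speed in Mbit/s into usable MB/s.
def usable_mb_s(mbps, efficiency=0.91):
    # efficiency approximates TCP/IP framing overhead (assumed value)
    return mbps / 8 * efficiency

for mbps in (25, 100, 1000):
    print(f"{mbps:>4} Mbit/s -> ~{usable_mb_s(mbps):.1f} MB/s")

# Even a gigabit line tops out around ~114 MB/s usable, so any modern
# SATA drive can keep up with an offsite backup stream.
```

Which is the point: the RevoDrive's extra throughput is wasted on a WAN-bound backup target.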


----------



## MikhailV

Thought so. I just don't feel like running a tape library, even though I could get a written-off Dell PowerVault from work, so I guess I'll stick with my current method. Honestly, I don't even care about transfer speeds when FTP'ing files to my server; I'm just happy I have backups everywhere.

But when it comes to the files I work with, I don't care how expensive the media is; those files and projects pay my salary.

I may have gone crazy when I decided to get a commercial chassis with redundant PSUs. As I said, it is extremely crucial for me to have backups no matter what.


----------



## DaveLT

But what the hell is the point of an SSD for backups? There's no point, bro. Seriously.

It's the inverse of using a tape drive as a boot drive.

I would rather get storage that costs 1/10th of an SSD per gig, which means hard disks. While RAM prices are through the roof now, 2-3TB hard disks don't command much more money than a 256-512GB SSD, which is essentially nearly double the price.


----------



## MikhailV

Almost none lol. I just have excess income. At least for my age group.


----------



## CloudX

Save it!


----------



## herkalurk

I have excess income as well, but I don't spend it on an SSD for backup. Invest in 2x 1TB drives in a RAID 1; it will be less $$ than the Revo, and more reliable for a backup copy. Either that, or invest in CrashPlan. I have LTO-2 tapes at home, and CrashPlan backing up my personal data which can not be recreated (family pictures, for example). The only SSDs in my stuff are the boot drives on my desktop, my wife's laptop, and my HTPC. Other than that, everything is spindle, even my servers.


----------



## MikhailV

@herkalurk I will not be doing that; the idea just popped into my head, and it's too expensive and impractical. Instead I will focus on putting together another NAS in RAID 5. This will cost me essentially nothing, as I have a plethora of decommissioned servers to choose from. I don't need the whole servers, just the salvageable parts, such as RAID cards, HDDs and so on. You wouldn't believe how many servers are still boxed up, at least where I work.


----------



## herkalurk

Quote:


> Originally Posted by *MikhailV*
> 
> @herkalurk I will not be doing that, the idea just popped into my head, too expensive and impractical. Instead I will focus on putting together another NAS in RAID 5. This will cost me technically nothing as I have a plethora of decommissioned servers to choose from, don't need to use them, but rather salvageable parts, such as RAID cards, HDDs and such. You wouldn't believe how many servers are still boxed up, at least where I work.


I don't have that luxury, sadly. My work's servers are a little too overpowered for what I would use them for anyway. My boss has a 6TB iSCSI appliance, a little old, but if he got rid of it I'd love to have it.


----------



## DaveLT

The other day we were talking about Broadcom NICs and realized the NICs in the PE2970 aren't exactly weak, as they have built-in TOEs.
Since I can find an Intel 9702PT at the same price as a BCM5709, it now boils down to which is better.

Does anyone know?
I am probably leaning towards Intel, since I have such a huge bias towards "everything Intel does is better" (luckily I don't think that way about their CPUs).


----------



## DaveLT

Aye guys, I noticed Supermicro has the X8DTH-iF board ... holy crap, all 7 slots are PCIe x8!


----------



## tiro_uspsss

Quote:


> Originally Posted by *DaveLT*
> 
> Aye guys i noticed supermicro has the x8dth-if board ... holy crap all 7 slots are PCI-E x8!


:/ So? That mobo has been around for years - you only just discovered it? :/
They have 7x PCIe 3.0 x8 mobos too!









http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRH-iTF.cfm


----------



## ZFedora

The Datacenter



Needs some cable work but nothing much I can do since it's a leased rack.


----------



## WroLeader

Quote:


> Originally Posted by *Plan9*
> 
> I used to run ArchLinux as my file server. In fact that was a P4 as well (not nearly that much RAM though)


I only have that much RAM because it was the only DDR2 module I could find.

Kind of excessive for just a simple file server which is only used through TFTP, though.


----------



## Master__Shake

From the top down

PFSense Box
Windows Home Server 2011 Box (10TB)
Miscellaneous Box (Rushing water, OSX VM, PXE server VM, print server VM (coming soon))
File Server (20TB usable)


----------



## Plan9

Quote:


> Originally Posted by *WroLeader*
> 
> I only have that much RAM because it was the only DDR2 module I could find
> 
> 
> 
> 
> 
> 
> 
> 
> 
> kind of excessive for just a simple file server which is only used through TFTP, though.


Why TFTP? Why not SMB or NFS? Or are you just using this for stuff like PXE booting?
Quote:


> Originally Posted by *Master__Shake*


Those fans look like they're burning up


----------



## The_Rocker

Some more pics of my set up.

http://s83.photobucket.com/user/Tom_418/media/s3252.jpg.html

http://s83.photobucket.com/user/Tom_418/media/s3254.jpg.html

http://s83.photobucket.com/user/Tom_418/media/s3246.jpg.html

This lives in the office of the small company I work for. The blades will shortly be going into a datacentre down the road.

There are 5 Dell CS23-SH's in the rack, each with 2 Xeon L5410's and 16GB RAM, as well as a PowerEdge 2950 with 2 Xeon E5430's and 16GB RAM. These are my ESXi development platform. The PE2950 is running Openfiler and presenting storage to the ESXi hosts via iSCSI.

The blades are Dell M610's, all 16 of them. Each has a pair of Xeon X5560's and 48GB DDR3 RAM. Some of these, along with an EMC VNXe, will make up the production ESXi hosted environment.

The spare capacity is mine to play with :-D


----------



## SuperMudkip

My first server, so no hate!

OS: Windows 7 Ultimate
Case: 4U Generic Rackmount Case (Got it for $10 off of craigslist)
CPU: AMD Athlon II X3 455 @ 3.2GHz (will be undervolting)
Motherboard: ASUS M4A88TD-V EVO/USB3
Memory: 4GB
PSU: Enermax NXAN 550W PSU
OS HDD (If you have one): Western Digital Caviar Black 750GB (partitioned also for storage)
Storage HDD(s): 750GB Maxtor external hard drive (USB 2.0)
Server Manufacturer: ME!
Purpose: Storage(Media for my Dad and GIMP Shop Storage for my Sister), Folding, VM, and experiments.




Will be moving this to a more secluded spot. Still working out software packages and stuff like that.


----------



## parityboy

*@The_Rocker*

How many units is that rack?


----------



## DaveLT

Hey guys,
Saw an IBM X3550 (L5420) and an HP DL360 G5 (L5420) for the same price ($20 difference, with the IBM being more).
Which one do you guys think is better? Both have only very basic RAID cards.
Noise is one of my largest considerations; the X3550 has far fewer fans with a bigger heatsink, while the DL360 G5 has a smaller heatsink but a massive array of fans.


----------



## The_Rocker

Quote:


> Originally Posted by *parityboy*
> 
> *@The_Rocker*
> 
> How many units is that rack?


18U


----------



## parityboy

Quote:


> Originally Posted by *The_Rocker*
> 
> 18U


Hmmmm







...cheers.


----------



## levontraut

My Server:

CPU: AMD 1055T
Mobo: ASRock 990FX Extreme3
RAM: 4x 2GB DDR3 1333 (8GB total)
PSU: Coolermaster 450
GPU: EVGA 550Ti
OS HDD: WD Black 1TB
HDDs: 3x 1TB WD Green
HDDs: 3x 500GB WD Black
1x RAID card
Case: Coolermaster Elite 350
OS: Server 2008 R2 Std
Add-ons: Coolermaster 3-in-4 HDD cage
1x extra NIC - for VMs

pics to come later

*use of my server.*

media/storage server
FTP
backup server
game hosting - trackmania / cod / ghost recon
teamspeak server
vm machines and testing


----------



## EpicAMDGamer

Quote:


> Originally Posted by *DaveLT*
> 
> hey guys,
> Saw a IBM X3550 (L5420) and a HP DL360 G5 (L5420) for the same price (20$ difference with IBM being more)
> Which one do you guys think is better? Both have only very basic RAID cards
> Noise-wise is one of my largest considerations, the x3550 has much less fans with a bigger heatsink while the dl360 g5 has a smaller heatsink but a massive array of fans


Not sure which is better, but be warned: the DL360s are notorious for being extremely loud.

I have a DL360 G3 and I can confirm it is very loud.


----------



## killabytes

I thought this would be funny. Since I sold off all my servers, except my single file server, here is what my homemade server rack has become:


----------



## jibesh

Quote:


> Originally Posted by *DaveLT*
> 
> hey guys,
> Saw a IBM X3550 (L5420) and a HP DL360 G5 (L5420) for the same price (20$ difference with IBM being more)
> Which one do you guys think is better? Both have only very basic RAID cards
> Noise-wise is one of my largest considerations, the x3550 has much less fans with a bigger heatsink while the dl360 g5 has a smaller heatsink but a massive array of fans


Considering the noise and the power draw for these servers, are they really worth it?


----------



## Obakemono

My server:
HP proliant N54L microserver.

What I have ready for it is the following:
2x 2TB WD Greens (storage)
2x 1TB WD Greens (storage)
Toshiba 750GB 2.5" (network drive)
WD Scorpio Black 320GB 2.5" (OS, WHS 2011)
Koultech 4-port RAID card
4GB of 1333 ECC memory
TSST DVD burner
Orico single external drive with a WD 500GB Green and 320GB Black

Further upgrade plans:
LG Blu-ray burner for hard backups, and 2 Adata 64GB SSDs for my network media drive (RAID 0)

I'll post pics later this week when I finish it up.


----------



## DaveLT

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> Not sure which is better but be warned, the Dl360's are notorious for being extremely loud.
> 
> I have a DL360 G3 and I can confirm it is very loud.


Hell, even their DL38x series is loud as hell. The X3550 has fewer HDD bays, not that I care, since I was going to use a 2U box with 2x L5420 and 12 3.5" slots for storage, but I still don't really know what I want.
Quote:


> Originally Posted by *jibesh*
> 
> Considering the noise and the power draw for these servers, are they really worth it?


I don't see why they aren't worth it.

I will most probably run 2-4 VMs per 1U node for my web servers.


----------



## Quasimojo

Quote:


> Originally Posted by *DaveLT*
> 
> hey guys,
> Saw a IBM X3550 (L5420) and a HP DL360 G5 (L5420) for the same price (20$ difference with IBM being more)
> Which one do you guys think is better? Both have only very basic RAID cards
> Noise-wise is one of my largest considerations, the x3550 has much less fans with a bigger heatsink while the dl360 g5 has a smaller heatsink but a massive array of fans


I don't know what kind of costs or specs you're looking at on those, but I can say that I'm thrilled with the Dell PowerEdge C1100 I picked up on eBay. They're going for $309 with free shipping (the box is huge, heavy and extremely well-packed). For dual quad-core low-voltage Xeons (16 threads) and 24GB of memory, it's a fantastic deal. PM me for a link.
Quote:


> Originally Posted by *jibesh*
> 
> Considering the noise and the power draw for these servers, are they really worth it?


I guess it depends on whether you need (want, this is OCN after all) a server in the first place or not. The fans on my C1100 are always idling at low speed, so it's not very loud at all. I've had PC's that were nearly as loud. The low-voltage Xeons (max 60W TDP each) rarely hit anything close to 100% utilization. That said, it's raised the temps in my small office a few degrees, so it *is* consuming some power. I haven't run it through a meter, but I wouldn't guess it to be much.


----------



## The_Rocker

Quote:


> Originally Posted by *Quasimojo*
> 
> I don't know what kind of costs or specs you're looking at on those, but I can say that I'm thrilled with the Dell PowerEdge C1100 I picked up on eBay. They're going for $309 with free shipping (box is huge, heavy and extremely well-packed). For dual quad-core low-voltage Xeons (16-threads) and 24GB of memory, it's a fantastic deal. PM me for a link.
> I guess it depends on whether you need (want, this is OCN after all) a server in the first place or not. The fans on my C1100 are always idling at low speed, so it's not very loud at all. I've had PC's that were nearly as loud. The low-voltage Xeons (max 60W TDP each) rarely hit anything close to 100% utilization. That said, it's raised the temps in my small office a few degrees, so it *is* consuming some power. I haven't run it through a meter, but I wouldn't guess it to be much.


I have a few CS23's which are similar to the C1100. At idle / normal utilization you are looking at around 0.4 amps. At full load 0.8-0.9 amps.


----------



## Citra

Quote:


> Originally Posted by *Quasimojo*
> 
> I don't know what kind of costs or specs you're looking at on those, but I can say that I'm thrilled with the Dell PowerEdge C1100 I picked up on eBay. They're going for $309 with free shipping (box is huge, heavy and extremely well-packed). For dual quad-core low-voltage Xeons (16-threads) and 24GB of memory, it's a fantastic deal. PM me for a link.
> I guess it depends on whether you need (want, this is OCN after all) a server in the first place or not. The fans on my C1100 are always idling at low speed, so it's not very loud at all. I've had PC's that were nearly as loud. The low-voltage Xeons (max 60W TDP each) rarely hit anything close to 100% utilization. That said, it's raised the temps in my small office a few degrees, so it *is* consuming some power. I haven't run it through a meter, but I wouldn't guess it to be much.


Sadly that amazing deal is only available in the US.


----------



## tycoonbob

Quote:


> Originally Posted by *The_Rocker*
> 
> I have a few CS23's which are similar to the C1100. At idle / normal utilization you are looking at around 0.4 amps. At full load 0.8-0.9 amps.


The measurements I took from my C1100 were not that low. Idle, I was looking at around 1.16A and about 145W. Max that I could push my C1100 (all CPUs at 100%) was 1.61A and 205W. Check out this thread which shows all the numbers I got from it:
Dell C1100 Power Consumption


----------



## DaveLT

Quote:


> Originally Posted by *Quasimojo*
> 
> I don't know what kind of costs or specs you're looking at on those, but I can say that I'm thrilled with the Dell PowerEdge C1100 I picked up on eBay. They're going for $309 with free shipping (box is huge, heavy and extremely well-packed). For dual quad-core low-voltage Xeons (16-threads) and 24GB of memory, it's a fantastic deal. PM me for a link.
> I guess it depends on whether you need (want, this is OCN after all) a server in the first place or not. The fans on my C1100 are always idling at low speed, so it's not very loud at all. I've had PC's that were nearly as loud. The low-voltage Xeons (max 60W TDP each) rarely hit anything close to 100% utilization. That said, it's raised the temps in my small office a few degrees, so it *is* consuming some power. I haven't run it through a meter, but I wouldn't guess it to be much.


The ones I see start from $300 for the LGA1366 ones; the IBMs I'm quoting are sub-$200.








Aw damn, I forgot about my C6100-based server... A new one will cost me less than the X3550 or the DL360 G5. I'll be going that route for a new one, then.
Quote:


> Originally Posted by *Citra*
> 
> Sadly that amazing deal is only available in the US.


Absolutely.
Quote:


> Originally Posted by *The_Rocker*
> 
> I have a few CS23's which are similar to the C1100. At idle / normal utilization you are looking at around 0.4 amps. At full load 0.8-0.9 amps.


Is that at 220V? Sounds like a very power-efficient server. Sadly, the CS line is always low-end.
Quote:


> Originally Posted by *tycoonbob*
> 
> The measurements I took from my C1100 were not that low. Idle, I was looking at around 1.16A and about 145W. Max that I could push my C1100 (all CPUs at 100%) was 1.61A and 205W. Check out this thread which shows all the numbers I got from it:
> Dell C1100 Power Consumption


As if they can be THAT low







Anyway, the seller who sold me the previous C6100 (which had that larger chassis, but they don't have it anymore) is selling a new version for $200 (that's in local dollars!), but that only includes a single proc. No problem, an extra proc costs $50 lol


----------



## darwing

Okay, so it has to be said... why do ANY of you require an entire server stand with tons of HDDs in your house??? How much data could you possibly be downloading at home that requires a dedicated server??

You can buy 3x 4TB HDDs, toss them into a regular mid-size case, and you have 12TB of storage without having a whole server rack! LOL


----------



## DaveLT

Quote:


> Originally Posted by *darwing*
> 
> okay so it has to be said... why do ANY of you require an entire server stand with tons of HD's in your house??? how much data could you possibly be downloading in your local home that requires a dedicated server for your house??
> 
> you can buy 3 x 4 TB HD's toss it into a regular mid size case and you have 12TB of storage without having a whole server rack! LOL


The grand question is: why the Fn not?!








Seriously, I'm running out of storage on 2TB HDDs quickly, and I need a proper server with higher-density storage, not just a single ATX-case computer doing the job.


----------



## darwing

Quote:


> Originally Posted by *DaveLT*
> 
> The grand question is : Why the Fn not?!
> 
> 
> 
> 
> 
> 
> 
> 
> Seriously, i'm running out of storage on 2TB hdds quickly and i need a proper server along with higher density storage, not just a single ATX case computer doing the job.


You have got to be joking me!

1 - Why not?

Because it's ridiculous. It's a single household; what could you possibly need it for?

2 - Storage

You can get 4TB for $100; if you want, you can get 8x 4TB drives and toss them into a full ATX case. *12 hard drive bays!!!* How on earth do you need more than 12 HDD bays?

3 - Efficiency

The space your servers take up is needless, and the noise from the server fans is insane... you can toss all of this into one simple case for storage...
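One thing the raw drive math above leaves out is redundancy: once you dedicate drives to parity (which is usually why people want more bays in the first place), usable space shrinks. A minimal sketch; the parity counts here are the standard RAID5/RAID6 figures, and the drive counts are illustrative, not anything the posters specified:

```python
def usable_tb(drives, size_tb, parity_drives=0):
    """Usable capacity once parity drives are set aside.
    RAID5 reserves 1 drive's worth of parity, RAID6 reserves 2."""
    return (drives - parity_drives) * size_tb

print(usable_tb(8, 4))                   # -> 32 (raw TB, no redundancy)
print(usable_tb(8, 4, parity_drives=2))  # -> 24 (TB usable under RAID6)
```

So an "8 bays is plenty" argument quietly assumes you're fine losing everything when a drive dies.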


----------



## The_Rocker

Quote:


> Originally Posted by *tycoonbob*
> 
> The measurements I took from my C1100 were not that low. Idle, I was looking at around 1.16A and about 145W. Max that I could push my C1100 (all CPUs at 100%) was 1.61A and 205W. Check out this thread which shows all the numbers I got from it:
> Dell C1100 Power Consumption


This was at 230V, with 2 L5410s, 16GB of RAM, and a single HDD. The PSU is the 600W Delta unit.

Each of my blades pulls 1.3-ish amps idle and 2.2 under full load, but they are a lot more powerful.
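For anyone comparing the amp figures across these posts: current draw only translates to watts once you factor in mains voltage (and power factor, which is close to 1 on active-PFC server PSUs). A quick sketch using the numbers quoted above; the 125 V value is inferred from tycoonbob's 145 W at 1.16 A, not stated in his post:

```python
def real_power_w(volts, amps, power_factor=1.0):
    """Real power at the wall: W = V * A * PF.
    Active-PFC server supplies run near PF = 1, so VA roughly equals W."""
    return volts * amps * power_factor

# The_Rocker's CS23 idle reading, 0.4 A on 230 V mains:
print(round(real_power_w(230, 0.4)))   # -> 92 (watts)

# tycoonbob's C1100 idle reading: 145 W / 1.16 A implies ~125 V mains
print(round(real_power_w(125, 1.16)))  # -> 145 (watts)
```

So the two idle readings are closer than the raw amp numbers suggest; most of the gap is the mains voltage difference.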


----------



## jibesh

Quote:


> Originally Posted by *darwing*
> 
> you have got to be joking me!
> 
> 1 - Why not?
> 
> Because its ridiculous, its a single household what could you possibly need it for?
> 
> 2 - Storage
> 
> you can get 4TB for $100, if you want you can get 8 x 4TB drives and toss them into a full ATX case. *12 hard drive bays!!!* how on earth do you need more than 12 HD bays?
> 
> 3 - efficiency
> 
> the space your servers take up is needless, and the noise is insane with the server fans.. you can toss this all into one simple case for storage...


It's more than just storage. A lot of us do this for a living, so we'd like performance and reliability in our servers as well. We also need servers in different configurations for testing and learning.


----------



## jibesh

Quote:


> Originally Posted by *DaveLT*
> 
> The ones i see start from 300$ for the LGA1366 ones, the IBMs i'm quoting are sub 200$
> 
> 
> 
> 
> 
> 
> 
> 
> *Aw damn, i forgot my C6100-based server ... A new one will cost me less than the X3550 or the DL360 G5. I'll be going that route then for a new one.


Lol, I was pretty sure you said you had a C1100 or C6100 in previous posts, so I was wondering why you would go another generation older for another server.


----------



## broadbandaddict

Hey guys, I've been wanting a server rack for a while, and I found one on Craigslist. It's a 36U HP with a UPS, selling for $75. The only catch is it's ~300 miles away; it would cost around $100 in gas to drive to get it. Do you think it would be worth it?

Picture:


----------



## jibesh

Quote:


> Originally Posted by *broadbandaddict*
> 
> Hey guys, I've been wanting a server rack for a while and I found one on Craigslist. Its a 36U HP with a UPS, selling for $75. Only catch is it's ~ 300 miles away. It would cost around $100 in gas to drive to get it. You think it would be worth it?
> 
> Picture:


Personally, yes, I think it would be worth it if you have a need for it or stuff you can fill it with. What's the capacity of the UPS?


----------



## ledzeppie

Quote:


> Originally Posted by *darwing*
> 
> you have got to be joking me!
> 
> 1 - Why not?
> 
> Because its ridiculous, its a single household what could you possibly need it for?
> 
> 2 - Storage
> 
> you can get 4TB for $100, if you want you can get 8 x 4TB drives and toss them into a full ATX case. *12 hard drive bays!!!* how on earth do you need more than 12 HD bays?
> 
> 3 - efficiency
> 
> the space your servers take up is needless, and the noise is insane with the server fans.. you can toss this all into one simple case for storage...


Some men just want to watch the world learn.

Seriously though, I don't have a server rack myself, nor do I plan on getting one, but setting up a home server has been a hugely entertaining and educational experience for me. I don't see why you have to stop learning and having fun at the ATX form factor. If you want to play the wants-vs-needs game, then go live in the woods.


----------



## Quasimojo

Quote:


> Originally Posted by *DaveLT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *darwing*
> 
> okay so it has to be said... why do ANY of you require an entire server stand with tons of HD's in your house??? how much data could you possibly be downloading in your local home that requires a dedicated server for your house??
> 
> you can buy 3 x 4 TB HD's toss it into a regular mid size case and you have 12TB of storage without having a whole server rack! LOL
> 
> 
> 
> The grand question is : Why the Fn not?!

There's a good portion of your answer right there. It's the OCN way, after all.









In my case, I had numerous needs/wants for various types of servers on my home network. I didn't want to run them from my development/gaming PC. I wanted them running on their own box, and I wanted it to be capable of meeting my needs now and in the future.

My choices were buy a two-generation-old used business-class server or cobble together an additional PC for perhaps half the cost and a fraction of the memory (have you priced RAM lately?). I chose the former and got a lot more processing horsepower and a *lot* more RAM for the added expense, which was not that much in the first place.

It was $440 including the server and rack. People here drop more than that for a single processor and more than double that for some video cards. Are you giving them guff too? You're in the wrong place to be preaching the evils of excess, my friend.









Also wanted to mention that storage is only one small use for a server. That's why even a piece of hardware the size of a paperback book can provide it.

I just noticed your sig rig is water cooled. Ironic.


----------



## darwing

Quote:


> Originally Posted by *jibesh*
> 
> Its more than just storage. A lot of us do this for a living so we would like performance and reliability in our servers as well. We also need servers with different configurations to do testing and learning.


Now that's an answer. Yes, I was a sysadmin as well, and I have my own little home server with DHCP, DNS, web config, and routing rules; I love learning new server software and so on. But the money in some of the in-house servers on here is truly insane (insanely amazing!!), just massive and cool. I just don't see it being practical... (yes, it's OCN, where everyone is over the top hahaha)

Quote:


> Originally Posted by *Quasimojo*
> 
> 
> 
> 
> 
> 
> There's a good portion of your answer right there. It's the OCN way, after all.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In my case, I had numerous needs/wants for various types of servers on my home network. I didn't want to run them from my development/gaming PC. I wanted them running on their own box, and I wanted it to be capable of meeting my needs now and in the future.
> 
> My choices were buy a two-generation-old used business-class server or cobble together an additional PC for perhaps half the cost and a fraction of the memory (have you priced RAM lately?). I chose the former and got a lot more processing horsepower and a *lot* more RAM for the added expense, which was not that much in the first place.
> 
> 
> 
> It was $440 including the server and rack. People here drop more than that for a single processor and more than double that for some video cards. Are you giving them guff too? You're in the wrong place to be preaching the evils of excess, my friend.


That's an insane price for what you got, including the rack! I guess my thing is space and efficiency; if I have 5 hard drives, I'll sell all of them and get 1 big one... that's just how I am.

But you are right, some just want to watch the world learn.


----------



## broadbandaddict

Quote:


> Originally Posted by *jibesh*
> 
> Personally, yes I think it would be worth it if you have a need for it or stuff you can fill it with. Whats the capacity of the UPS?


So it's not a UPS, it's a power supply unit. Still worth it? I've got 2 4U servers and a switch right now, so it won't be anywhere near full any time soon. Seems like getting one for under $200 would be a pretty decent deal.


----------



## jibesh

Quote:


> Originally Posted by *broadbandaddict*
> 
> So its not a UPS it's a power supply unit. Still worth it? I've got 2 4U servers and a switch right now so it won't be anywhere near full any time soon. Seems like getting one for under $200 would be a pretty decent deal.


Yeah, it's still worth it... I paid $200 for my 39U one. Just make sure you have enough room to put it where you want.


----------



## parityboy

Quote:


> Originally Posted by *broadbandaddict*
> 
> Hey guys, I've been wanting a server rack for a while and I found one on Craigslist. Its a 36U HP with a UPS, selling for $75. Only catch is it's ~ 300 miles away. It would cost around $100 in gas to drive to get it. You think it would be worth it?


$175 for a 36U server rack, and you're asking if it's worth it?







Of course it is!!!!


----------



## broadbandaddict

Quote:


> Originally Posted by *jibesh*
> 
> Yea its still worth it...paid $200 for my 39U one. Just make sure you have enough room to put it where you want.


Yeah it seems pretty big. Off to get it tomorrow.









Quote:


> Originally Posted by *parityboy*
> 
> $175 for a 36U server rack, and you're asking if it's worth it?
> 
> 
> 
> 
> 
> 
> 
> Of course it is!!!!


Haha. Had to make sure; I'd hate to get one and hear that it wasn't worth it or something. Might cost a little more than that (~$225), since I've gotta take my Expedition instead of my buddy's Escape because it won't fit in the Escape.


----------



## EpicAMDGamer

Quote:


> Originally Posted by *broadbandaddict*
> 
> Quote:
> 
> 
> 
> Originally Posted by *jibesh*
> 
> Yea its still worth it...paid $200 for my 39U one. Just make sure you have enough room to put it where you want.
> 
> 
> 
> Yeah it seems pretty big. Off to get it tomorrow.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *parityboy*
> 
> $175 for a 36U server rack, and you're asking if it's worth it?
> 
> 
> 
> 
> 
> 
> 
> Of course it is!!!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Haha. Had to make sure, I'd hate to get one and hear that it wasn't worth it or something. Might cost a little more than that (~$225), I've gotta take my Expedition instead of my buddies Escape cause it won't fit.

Either way, it is definitely worth it to pay even $225 for a 36U rack, and let's not forget it also comes with a UPS, which makes the deal even sweeter.


----------



## DaveLT

Quote:


> Originally Posted by *jibesh*
> 
> Lol I was pretty sure you did say you had a c1100 or c6100 in previous posts so I was wondering why you would go another generation older for another server.


Because it was cheap. I like LGA1366
Quote:


> Originally Posted by *Quasimojo*
> 
> There's a good portion of your answer right there. It's the OCN way, after all.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In my case, I had numerous needs/wants for various types of servers on my home network. I didn't want to run them from my development/gaming PC. I wanted them running on their own box, and I wanted it to be capable of meeting my needs now and in the future.
> 
> My choices were buy a two-generation-old used business-class server or cobble together an additional PC for perhaps half the cost and a fraction of the memory (have you priced RAM lately?). I chose the former and got a lot more processing horsepower and a *lot* more RAM for the added expense, which was not that much in the first place.


Indeed, RAM prices have been going through the roof
Quote:


> Originally Posted by *broadbandaddict*
> 
> Yeah it seems pretty big. Off to get it tomorrow.
> 
> 
> 
> 
> 
> 
> 
> 
> Haha. Had to make sure, I'd hate to get one and hear that it wasn't worth it or something. Might cost a little more than that (~$225), I've gotta take my Expedition instead of my buddies Escape cause it won't fit.


I'm pretty sure you have to disassemble it first? Because that's how racks are transported, used or new.


----------



## Jeci

Here's my £6 rack - no server yet. I'm in the process of acquiring some parts for a silent home build that I'm going to be using for ripping, encoding, and remuxing, as well as running a Plex server:


----------



## parityboy

*@Jeci*

Those blanking plates need filling. Get to it!


----------



## Jeci

Quote:


> Originally Posted by *parityboy*
> 
> *@Jeci*
> 
> Those blanking plates need filling. Get to it!


Hehe, thanks - I've only picked up the kit recently, so I may look at getting some expansion cards, or maybe more kit...


----------



## SamKook

Quote:


> Originally Posted by *DaveLT*
> 
> I'm pretty sure ya have to disassemble it first? Because that's how racks are transported used or new


If you have a big enough vehicle, then why disassemble it? I brought my 44U rack home in one piece.


----------



## DaveLT

Quote:


> Originally Posted by *SamKook*
> 
> If you have a big enough vehicle, then why dissasemble it? I brought my 44U rack home in one piece.


If you mean laying it in horizontally, then... 44U racks are a bit hard to turn sideways.








But anyway, at least that's what my dad told me: racks are disassembled, then transported.


----------



## SamKook

Quote:


> Originally Posted by *DaveLT*
> 
> If you mean putting it in horizontally then ... 44U racks are a bit hard to turn sideways
> 
> 
> 
> 
> 
> 
> 
> 
> But anyway at least that's what my dad told me, racks are disassembled then transported


I indeed had to flip it horizontally. When they don't have any equipment in them and it's not a closed rack (doors and a roof make it a lot heavier), they're not that bad, and it's even doable alone (but having someone help makes things so much easier).

As far as I know, they are indeed transported in a disassembled state, but I've only worked with brand new ones; I don't know about used ones. But since they are a pain to assemble (the companies that install them often do a crap job, and we have to half disassemble and reassemble them to be able to mount stuff in them), I'd try to move it in one piece if at all possible, just to save myself the trouble. Something for personal use doesn't have to meet a company standard for transportation (if those even exist for server racks).

All this talk reminds me that I'll have to post pictures as soon as I'm done setting it up properly in my new apartment, now that it's in its own, mostly dedicated room.


----------



## broadbandaddict

Got the rack home safe and sound. Ended up leaving at 2AM, got there at 7AM and was home by lunch.









This is how I transported it:



Also, the seller threw in this PDU thing; I took some pictures. If someone knows what it's worth or is interested in it, let me know.


Spoiler: PDU Thing






I've got 4 of the PDU extensions, enough to fill all the plugs.


----------



## caraboose

Don't mean to toot my own horn...
But my 42U Compaq rack I got off Kijiji for $25...
Mind you, it's from 1999, it's beige, the bottom 2U is dented, and it has no sides, just a front door... it still works great!


----------



## CloudX

I picked up a 42u shark rack with the smoked Plexi door option for $200 about 50 miles from home. Had like 4 shelves and drawer shelf included too. Was about $4k new. Craigslist ftw.


----------



## DaveLT

Quote:


> Originally Posted by *CloudX*
> 
> I picked up a 72u shark rack with the smoked Plexi door option for $200 about 50 miles from home. Had like 4 shelves and drawer shelf included too. Was about $4k new. Craigslist ftw.


Fantastic


----------



## CloudX

Quote:


> Originally Posted by *DaveLT*
> 
> Fantastic


Found a pic! It was immaculate too.


----------



## Obakemono

The guts to my N54L.


----------



## The_Rocker

Quote:


> Originally Posted by *CloudX*
> 
> Found a pic! It was immaculate too.


Nice, but was your '72u' a typo? Did you mean 42u?


----------



## CloudX

Quote:


> Originally Posted by *The_Rocker*
> 
> Nice, but was your '72u' a typo? Did you mean 42u?


Yes thanks! I fixed it.


----------



## AMD SLI guru

Nothing crazy, but I figured I would post it up.


----------



## dushan24

Quote:


> Originally Posted by *AMD SLI guru*
> 
> 
> 
> 
> Nothing crazy, but I figured I would post it up.


Nice man, but I swear I've seen that setup before, is this a repost?


----------



## AMD SLI guru

I might have done a post a while back, to be honest, but I'm not 100% sure. I was just surfing around in here and figured I would pop in. I should actually update the photo, as the rack has changed a bit...


----------



## xNovax

Quote:


> Originally Posted by *AMD SLI guru*
> 
> 
> 
> 
> Nothing crazy, but I figured I would post it up.


Specs?


----------



## AMD SLI guru

It's not set up like this now, but in the picture, from top to bottom:

The switch is a Netgear GS724T 10/100/1000.

The modem is under that, with a 50/5 connection *soon to be Google Fiber*.

The black 1U box is a Supermicro Intel Atom dual-core system with 4 gigs of RAM running Untangle.

The monitor is a 120Hz Acer 3D screen *I had it left over from my previous desktop build*.

A Core 2 Duo 3GHz FreeNAS rig with 6x 1TB drives and 6x 2TB drives.

KVM switch.

The 4U case is my HTPC: a 2600K with 16 gigs of RAM, a GTX 560 Ti, and dual 120GB SSDs in RAID 0.

Dual CyberPower 1350AR UPSes which, with everything running, will keep everything up for 30 minutes.

I have another FreeNAS rig with 16x 2TB drives, but it's not pictured here. I've also added a 24-core AMD server to the mix, and that is also not pictured...

If you want to see how this was built, check the Balrog build log in my sig.
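A 30-minute runtime claim like the one above can be sanity-checked with a back-of-envelope estimate. This is only a sketch: the battery watt-hours, load, and inverter efficiency below are illustrative assumptions, not CyberPower specs, and real lead-acid runtime drops nonlinearly at higher loads:

```python
def ups_runtime_minutes(battery_wh, load_w, inverter_eff=0.85):
    """Rough UPS runtime in minutes: usable battery energy divided by load.
    Ignores the nonlinear (Peukert) discharge behavior of lead-acid cells."""
    return battery_wh * inverter_eff / load_w * 60

# Two consumer UPS units with ~100 Wh of battery each, carrying a ~340 W rack:
print(round(ups_runtime_minutes(2 * 100, 340)))  # -> 30 (minutes)
```

The takeaway is that a half-hour of runtime for a whole rack implies either a light load or a lot of battery.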


----------



## RushiMP

SGI ALTIX 3000 Rack



Wouldn't you like to know what's inside


----------



## Plan9

Quote:


> Originally Posted by *RushiMP*
> 
> SGI ALTIX 3000 Rack
> 
> 
> 
> 
> 
> Wouldn't you like to know whats inside


Not really


----------



## RushiMP

I think I asked my wife the same question, and your post reminds me of her:

No, not really (while she shakes her head).


----------



## RushiMP

Maybe if I put a Prada sticker on it, especially if it has a clearance or sale tag on it. Then maybe; otherwise she only cares when she can't access the videos of her babies at her convenience. Or, God forbid, the network printer won't print.


----------



## Plan9

Quote:


> Originally Posted by *RushiMP*
> 
> I think I asked my wife the same questions and your post reminds me of her:
> 
> No, not really (While she shakes her head).


to be honest I was just messing with you









But since we're on the subject, I take it you don't run any 3D file managers on that?


----------



## Irisservice

Quote:


> Originally Posted by *RushiMP*
> 
> SGI ALTIX 3000 Rack
> 
> 
> 
> 
> 
> Wouldn't you like to know whats inside


I would love that rack...inside pics please...


----------



## AMD SLI guru

that's a beautiful rack!


----------



## Plan9

Quote:


> Originally Posted by *AMD SLI guru*
> 
> that's a beautiful rack!


----------



## akromatic

Quote:


> Originally Posted by *darwing*
> 
> you have got to be joking me!
> 
> 1 - Why not?
> 
> Because its ridiculous, its a single household what could you possibly need it for?
> 
> 2 - Storage
> 
> you can get 4TB for $100, if you want you can get 8 x 4TB drives and toss them into a full ATX case. *12 hard drive bays!!!* how on earth do you need more than 12 HD bays?
> 
> 3 - efficiency
> 
> the space your servers take up is needless, and the noise is insane with the server fans.. you can toss this all into one simple case for storage...


O_O You must not consume much media.....

I envy a lot of people here for their storage.

I currently run 4x 4-bay DAS units attached to my NAS with 2TB drives, and I'm long out of storage; I'm forced to delete valuable stuff, with towers of DVDs' worth of data still waiting to be put onto the drives. I can't afford more storage, let alone any backup.


----------



## jibesh

Quote:


> Originally Posted by *AMD SLI guru*
> 
> that's a beautiful rack!


stop drooling over his rack...


----------



## NKrader

Dedicated crunch brothers rig, stats in sig.


----------



## RushiMP

No 3D file managers...yet.


But it does help keep things moving:


----------



## Muskaos

Quote:


> Originally Posted by *SuperMudkip*
> 
> My first server, so no hate!
> 
> OS: Windows 7 Ultimate
> Case: 4U Generic Rackmount Case (Got it for $10 off of craigslist)
> CPU: AMD Athlon II X3 455 @3.2 (Will be Undervolting)
> Motherboard: ASUS M4A88TD-V EVO/USB3
> Memory: 4GB
> PSU: Enermax NXAN 550W PSU
> OS HDD (If you have one): Western Digital Caviar Black 750GB (Partitioned for also for storage.)
> Storage HDD(s): 750GB Maxtor External Harddrive (USB 2.0)
> Server Manufacturer: ME!
> Purpose: Storage(Media for my Dad and GIMP Shop Storage for my Sister), Folding, VM, and experiments.
> Will be moving this in a more secuded spot, Still working out software packages and stuff like that.


I used to have that exact case; it had a dual Pentium III 1GHz system in it running Mandrake Linux. It hosted a TeamSpeak server, a srcds CS: Source server, and a Tribes 2 server. Once I lost my free co-location privileges, the machine crunched SETI for a while, until 2007, when I put it into storage while I went to Japan for 3 years. I got rid of it in 2010 since it was very lacking in fans and HD space.


----------



## neo565

My server (I have 4 of these, with different add-on cards and hard drives):

It was made by Microway, custom for Woods Hole Oceanographic, and they are going to throw out all 200 of these, plus 50 Dell PowerEdge 2650s, 20 PowerEdge 6950s, 1 PowerEdge 1950, and some SunFires.
The specs are:
2x Xeon Gallatin
1x 40GB IDE HDD
1x 4.5GB SCSI HDD
Supermicro XDP8
EMACS PSU
2GB Transcend DDR ECC RAM
1x Adaptec SCSI Controller
4x Airflow Tech 40mm fans
1x Delta blower fan
1x Floppy Drive
Custom Microway Case
NodeWatch Circuit board


----------



## Muskaos

My server:
Windows Home Server 2011
Case: CoolerMaster 690
Power Supply: ChouRiki 600W 80+ (Japanese brand; this machine was partially sourced while I was in Japan)
Mobo: Asus P8P67 Pro Rev 3.1
RAM: Corsair XMS3 DDR3 (2GB x 4)
CPU: Intel Core i5 2500K
GPU: nVidia 6600 GTX silent
5.25" HD cage: Coolermaster 4 in 3 Device Module

Hard Drives:
Hitachi GST Deskstar (HDS722020) 2TB (two)
Seagate ST10600DM003 1TB (OS)
Seagate Barracuda (STS2000DM001) 2TB (one)
WD Green (WD20EARX-00P) 2TB (one)
WD Green (WD20EARX-32PASB0) 2TB (one)
WD Green (WD20EZRX-00D) 2TB (one)

I use StableBit DrivePool to regain the Drive Extender functionality lost with WHS 2011, so I have myself a nice drive pool. I've had no issues with it so far, at nearly two years of use now.

I use a Synology DS411J as my secondary back up, with four 3TB WD Reds stuffed into it, set up with Synology's hybrid RAID.

I have spare HDs for both the WHS box and the Synology unit in case one fails, and I also intermittently back up to a gang of external hard drives as my tertiary backup, so my files reside in three places for redundancy.

I use the full paid version of SyncBackSE for my file-copying chores.


----------



## bigredishott

My media server, running Windows 7: 14TB in drives, a 120GB SSD for the OS, and 16GB of RAM. Nothing too special, but it streams media throughout my home. Specs in sig (media server / HTPC).
Going through this thread makes me want to buy a rack and get some real servers.


----------



## CSCoder4ever

Quote:


> Originally Posted by *bigredishott*
> 
> My Media Server Running windows 7. 14TB in drives and 120SSD for OS 16GB ram. Nothing too special but streams media throughout my home. Specs in Sig media server / htpc
> *Going through this thread makes me want to buy a rack and get some real servers.*
> 
> 
> 
> 
> 
> 
> 
> 
> 


Wow that reminds me of my previous machine!

and yeah, I'm also thinking about buying rack cases for most of my existing systems, though I'm wondering if I should make a build log of it.


----------



## bigredishott

I know, that's a really old case. I had 2 others like it, but with windows on the sides, back when the P4 had just broken 3GHz and the best video card was the Radeon 9800.
The guy I bought my 560 Tis from sold it to me for $20, still new in the box. They don't build cases like that anymore; it's a tank.


----------



## AMD SLI guru

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Wow that reminds me of my previous machine!
> 
> and yeah, I'm also thinking about buying rack cases for most of my existing systems, though I'm wondering if I should make a build log of it.


Always make a build log of things.


----------



## bigredishott

Quote:


> Originally Posted by *AMD SLI guru*
> 
> Always make a build log of things.


I wish I had although, I have nothing special.


----------



## AMD SLI guru

Quote:


> Originally Posted by *bigredishott*
> 
> I wish I had although, I have nothing special.


It's not about what you have being special. It's the way you've built it for your own use.

I've always loved build logs because they help people see your thinking, help with problems that might come up, record mistakes and corrections for the future, and, most importantly, provide a guide for people looking to do the same thing.

I can't tell you how many times I had to explain my Balrog build while I was doing it. Nobody understood it and thought I was being stupid. As soon as I did a build log, people stopped asking and were in complete awe.


----------



## bigredishott

Next time! When I make some major changes. I want to try a custom loop sooner than later and I think I will need a new case to do so. Then I will do a log.


----------



## DaveLT

It also depends on you, the builder. You want silence? Sacrifice density ...
For me it's go big or go home (in terms of density for servers). I like 2U form factors


----------



## Plan9

Quote:


> Originally Posted by *AMD SLI guru*
> 
> it's not about what you have being special. It's they way you've built it for your own use.
> 
> I've always loved build logs because it helps people see your thinking, help with problems that might come up, figure out the mistakes and corrections for the future, and most importantly, it provides a guide for people looking to do the same thing.
> 
> I can't tell you how many times i've had to explain my Balrog build while I was doing it. Nobody understood it and thought I was being stupid. As soon as I did a build log, people would stop asking and were in complete awe.


To be honest, I think I'd rather have people assume I'm stupid than waste time composing a log just to prove internet randoms wrong.


----------



## AMD SLI guru

Quote:


> Originally Posted by *Plan9*
> 
> To be honest, I think I'd rather have people assume I'm stupid than waste time composing a log just to prove internet randoms wrong.


It's not the only reason why you should do one. It's just one of the reasons I was giving.


----------



## Plan9

Quote:


> Originally Posted by *AMD SLI guru*
> 
> It's not the only reason why you should do one. It's just one of the reasons I was giving.


What are the other reasons?

I've never bothered and never missed having one, but I'm curious about how others have found them useful.


----------



## Quasimojo

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *AMD SLI guru*
> 
> It's not the only reason why you should do one. It's just one of the reasons I was giving.
> 
> 
> 
> What's the other reasons?
> 
> I've never bothered and never missed having one. But I'm curious about how others have found them useful

Personally, with apologies to the community, I've never done one, either. I've always wanted to, but I never figured I'd be willing to devote enough time to one to produce something of sufficient quality that I would feel good about posting it. It always just seemed like it would impede my progress to a point I would find unacceptable.

Some people, I'm sure, are pretty good at the presentation aspect, and it likely doesn't slow them down much. I'm extremely grateful for their efforts. Me, I'm too OCD not to put a ton of time into the log, and I rely on a good head of steam to research and complete the build project in the first place.


----------



## xNovax

If I were to do a build log I would need someone else to run the log and take pictures while I build. I build my PCs too fast, and when I start I want to finish. Usually I don't have enough time, or don't want to make the time, to stop and take pictures.


----------



## DaveLT

Quote:


> Originally Posted by *xNovax*
> 
> If I was to do a build log I would need someone else to run the log and take pictures while I build. I build my pc's to fast and when I start I want to finish. Usually I don't have enough time or I don't want to have enough time to stop and take pictures.


The other day I was building up a Richland rig and it was all over in 10 minutes... That's how fast a person can finish a rig if they try.


----------



## AMD SLI guru

Quote:


> Originally Posted by *Plan9*
> 
> What's the other reasons?
> 
> I've never bothered and never missed having one. But I'm curious about how others have found them useful


Well, the perfect example is the Balrog build. I mean, that isn't something you see every day in the folding area. Yes, there is a sort of l33t factor and showing off, but more of it is about how a great-looking or great-functioning build came about, whether simple or complex. I've built 2 FreeNAS rigs out of crappy hardware. I've stripped down server cases, modded PSU fans, and loaded them up with exactly what I wanted to run. Then the questions come in: How did you do that? What did you do with the old xxxxxx? How much did xxxx cost, and does it work right? These are super common questions, and when you spend your time answering about 30 or so of them, you start to realize most people haven't done this or had the idea to. I prefer to keep logs now because they answer tons of questions that would be asked *and still are asked... I just point them to the build log*. I also rely on build logs when I start selecting components for what I want to run: I can reference them when I have a problem and see if it's common, what the solutions are for getting around certain bits, and what the cost analysis looks like.

People who do "common" builds think this stuff is a dime a dozen. I understand that mentality, but if you ever want to get into a more complex area of a build *custom water loops, Peltiers, sleeving, or building a game server*, you're going to rely on people who have done it before, and those build logs will go a long way toward helping you do whatever it is you wanted to do.


----------



## Plan9

Yeah, now you mention it, I can see the benefits for the more unique builds. I've quite enjoyed reading about some of the more niche Raspberry Pi's and am planning to put together some kind of blog about my own RasPi project.

Personally - and I stress this is just my personal opinion - I still don't see the point of a build log for home servers, as they are generally a pretty standard hardware setup. However, I appreciate it's just personal preference and that some people enjoy sharing their build, which I guess is a good enough reason in itself.

I guess what I'm trying to say is you've not convinced me to log my next build, but I can now appreciate why you guys prefer to keep a build log.


----------



## Samuez

Here's mine. I've had this for a while; I didn't like the heat output, but I could fix that by increasing the fan speed.

spec:
Apex 008 mini-ITX case
Intel D510MO motherboard
3GB DDR2-6400 (or maybe 5300)
2-port PCI SATA card
3x 2TB HDDs, WD Green or Seagate, 5400rpm
1x 250GB Hitachi 2.5" HDD
2x 120mm fans running around 800rpm










Pretty small case; I like how compact the ITX form factor is. I could add another 3.5" HDD where one of the fans is, but that'll run too hot. It's at 43C with all the HDDs running at once.


----------



## RushiMP

Quote:


> Originally Posted by *LoneWolf15*
> 
> Quote:
> 
> 
> 
> Originally Posted by *killabytes*
> 
> 
> _But the newest member to the family is a Cobalt Raq XTR._
> 
> 
> Cool. I always thought the Cobalt Qube was a really cool piece of equipment.
> 
> For comp-history buffs --Wikipedia link


I like it. Taking old chassis and putting in modern gear is a hobby of mine. I have collected SGI gear.

To me it is like taking a 60s muscle car and fitting it with a crate motor, 6 piston brakes, and coil overs. Best of both worlds.


----------



## Citra

My closet. Apartment is wired for gigabit ethernet.







Synology NAS for storage duties.


----------



## Plan9

Quote:


> Originally Posted by *RushiMP*
> 
> I like it. Taking old chassis and putting in modern gear is a hobby of mine. I have collected SGI gear.
> 
> To me it is like taking a 60s muscle car and fitting it with a crate motor, 6 piston brakes, and coil overs. Best of both worlds.


I hope the old parts go to computer museums rather than being thrown away; as they're always looking for replacement parts to keep ageing systems running


----------



## SISTERxFISTER

A little server setup. Dell Optiplex 540 (I think), Dell Optiplex 745 and a Dell Precision 390.

Optiplex 745:
Intel Pentium D dual core
4 GB RAM

Used as:
Active Directory Domain Server
DHCP
DNS

OS: Windows Server 2008 R2

Optiplex 540:
Intel Pentium 4
2 GB RAM

Used as:
WSUS Server
WINS

OS: Windows server 2008 R2

Precision 390:
Intel Core 2 Duo
4 GB RAM

Used as:
DNS
Hyper-V
-Windows 7 virtual for torrent downloads
-2008 R2 for Firewall. Forefront TMG
File Server
Plex Server for Roku Box

OS: Windows Server 2012


----------



## Quasimojo

Quote:


> Originally Posted by *SISTERxFISTER*
> 
> 
> 
> 
> 
> A little server setup. Dell Optiplex 540 (I think), Dell Optiplex 745 and a Dell Precision 390.
> 
> Optiplex 745:
> Intel Pentium D dual core
> 4 GB RAM
> 
> Used as:
> Active Directory Domain Server
> DHCP
> DNS
> 
> OS: Windows Server 2008 R2
> 
> Optiplex 540:
> Intel Pentium 4
> 2 GB RAM
> 
> Used as:
> WSUS Server
> WINS
> 
> OS: Windows server 2008 R2
> 
> Precision 390:
> Intel Core 2 Duo
> 4 GB RAM
> 
> Used as:
> DNS
> Hyper-V
> -Windows 7 virtual for torrent downloads
> -2008 R2 for Firewall. Forefront TMG
> File Server
> Plex Server for Roku Box
> 
> OS: Windows Server 2012


Ok, now what we really want to know is - what's in the fridge?


----------



## AMD SLI guru

Quote:


> Originally Posted by *Quasimojo*
> 
> Ok, now what we really want to know is - what's in the fridge?


Don't you know already? It's another computer, silly! Didn't you know that refrigerators are the best at getting low temps!? Just mount the computer inside, plug it in, and off you go. Never worry about overheating ever again.


----------



## Citra

Quote:


> Originally Posted by *AMD SLI guru*
> 
> don't you know already? it's another computer silly! Didn't you know that refrigerators are the best and getting low temps!? just mount the computer inside, plug it in, and off you go. Never worry about overheating ever again.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Citra*


LOLOLOL, used it as his avy no less, classic!


----------



## hartofwave

I was on here a while ago with an HP ProLiant ML370 G5; you may not remember. But I have decided to give it away, so I am shamelessly promoting the thread here!

links to OCN, don't hurt me mods


----------



## levontraut

This is my little rig.

I will be getting a new Intel dual gigabit NIC,

new hard drives to fill the empty space,

and I'll probably swap out the RAM, as 8 gig total is way too little now.


----------



## EpicAMDGamer

Quote:


> Originally Posted by *levontraut*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> this is my little rig.
> 
> i will be getting a new intel dual gigabyte NIC
> 
> new hard drives to fill the space that is empty.
> 
> and probably swop out the ram as that is way to little now (8 gig total)


You need to find a smaller and more efficient GPU.


----------



## NKrader

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> You need to find a smaller and more efficient GPU.


Right? I have a Rage XL that he could use. I just pulled it from a server that didn't have an onboard GPU lol


----------



## levontraut

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> You need to find a smaller and more efficient GPU.


Maybe.

But I am going to start bitcoin mining again... so it is fine. Also, if I want to stick it on a desk so my brother-in-law can play, there is more than enough power behind the rig.


----------



## Imrac

Current state of my TV table:


You know that point when you feel like you've overdone it?... Well I just had that after realizing that between the servers and my computer, I have about 200GB of RAM....


----------



## xNovax

Quote:


> Originally Posted by *Imrac*
> 
> Current state of my TV table:
> 
> 
> You know that point when you feel like you over done it?.... Well I just had that after realizing between the servers and my computer, I have about 200GB of ram....


Specs?
Server List?


----------



## blooder11181

I am going to repair an HP ProLiant ML150 G1. What version of Windows Server can I install on it?


----------



## DaveLT

Quote:


> Originally Posted by *blooder11181*
> 
> i am going to repair a hp proliant ml150 g1 what windows server can i install there?


Any Windows will do, provided you can find drivers that work with your hardware (they might or might not exist).
But I wouldn't recommend anything beyond 2008.


----------



## Imrac

Quote:


> Originally Posted by *xNovax*
> 
> Specs?
> Server List?


2x Dell C1100 with 2x L5520 QC Procs, 72GB of DDR3 ECC RAM
2x Dell SC 1450 (Never used cause they are way too loud, slow and power hungry.)
VM Host (in my sig) - i7 3770s , 32GB of DDR3 RAM, 4x 2TB WD Greens, 4x1TB Samsung F3


----------



## xNovax

Quote:


> Originally Posted by *Imrac*
> 
> 2x Dell C1100 with 2x L5520 QC Procs, 72GB of DDR3 ECC RAM
> 2x Dell SC 1450 (Never used cause they are way too loud, slow and power hungry.)
> VM Host (in my sig) - i7 3770s , 32GB of DDR3 RAM, 4x 2TB WD Greens, 4x1TB Samsung F3


I got myself the exact same C1100. I like it.


----------



## Plan9

My VM host has 8 GB RAM. I thought that was excessive


----------



## Citra

Quote:


> Originally Posted by *xNovax*
> 
> I got myself the exact same C1100. I like it.


Did you end up buying yours from ebay?


----------



## xNovax

Quote:


> Originally Posted by *Citra*
> 
> Did you end up buying yours from ebay?


Yes


----------



## Sean Webster

Can't wait to post my C1100 with 2x L5520 QC Procs, 72GB of DDR3 ECC RAM too! lol Should be here early this week









To stay on topic, here is my current server:

Canon 60D IMG_4608.jpg by Sean Webster Photo, on Flickr


----------



## DaveLT

How are you loving your FX-888D?







I sold off one of my FX-951s the other day (didn't really have a need for it anymore)


----------



## Sean Webster

Quote:


> Originally Posted by *DaveLT*
> 
> How are you loving your FX-888D?
> 
> 
> 
> 
> 
> 
> 
> I sold off one of my FX-951s the other day (didn't really have a need for it anymore)


I'm honestly in love with it lol.


----------



## DaveLT

Quote:


> Originally Posted by *Sean Webster*
> 
> I'm honestly in love with it lol.


Nice








Where I am it's bad value @ 200$ (LOL), but in the States it's only 80USD, which is a super bargain for one of the best no-nonsense, simplistic stations out there.
The first time I picked up my FX-951 I felt it was simple... (just like any Japanese car; in Japanese culture it's important to be simple) but it really surprised me when I started soldering.
In the future I will definitely try a QUICK 303B; those things have a crap-ton of thermal horsepower.


----------



## Chooofoojoo

My "Server"

Supermicro H8QGi+-F
4x AMD 6386SE ES
128GB ECC Reg DDR3 (sloooooow 1333 CL9)
64GB SSD; still trying to figure out proper storage for when I actually use it for something.

64 threads at 3.2GHz.... 1000W out of the wall.








And.. um... yeah. It's watercooled, even the 8400 GS.


----------



## Norse

Quote:


> Originally Posted by *Chooofoojoo*
> 
> My "Server"
> 
> Supermicro H8QGi+-F
> 4x AMD 6386SE ES
> 128GB ECC Reg DDR3 (sloooooow 1333 cl9)
> 64Gb SSD, Still trying to figure out proper storage when I actually use it for something.
> 
> 64 threads of 3.2Ghz.... 1000W out of the wall.
> 
> 
> 
> 
> 
> 
> 
> 
> And.. um... yea. It's watercooled even the 8800gs


That pulls 1k? Jesus, my Ragnarok build (see sig) pulls 200 idling. Then again, it's only 2.1GHz and I have a quarter of the RAM you do, but I do have a GTX 680.









Not sure what mine pulls at 100% CPU and under heavy GPU usage, but I know during gaming it's 350ish.


----------



## DaveLT

Quote:


> Originally Posted by *Norse*
> 
> That pulls 1k? jesus my Ragnarok build (see sig) pulls 200 idling then again its only 2.1ghz, i have quarter the ram you do but i do have a GTX 680
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Not sure on mine 100% CPU and under heavy GPU usage but i know during gaming its 350ish


Of course, duh. 200W per CPU sounds legit.
And an 8800GT doesn't draw little... it draws quite a lot as well. Figure in PSU efficiency (which is 88% for a fully loaded 1000W 80 PLUS Gold PSU) and there you have it.


----------



## Norse

Quote:


> Originally Posted by *DaveLT*
> 
> Of course, duh. 200W per CPU sounds legit.
> And a 8800GT doesn't draw little ... it draws quite alot as well, figure in PSU efficiency (Which is 88% for a fully loaded 1000W 80PLUS Gold PSU) and there you have it


Er, 200 watts per CPU seems impossible, as the 6386SEs are not 200W-TDP parts.


----------



## DaveLT

Quote:


> Originally Posted by *Norse*
> 
> er 200 watt per cpu seems impossible as they are not 200watt TDP on the 6386SE's


Um, 3.2GHz. That's not stock. Yes, sure, 3.2GHz is the boost freq, BUT boost only kicks in on loads of fewer than 16 cores.


----------



## Chooofoojoo

Quote:


> Originally Posted by *DaveLT*
> 
> Um, 3.2GHz. That's not stock. Yes sure 3.2GHz is the boost freq BUT boost only occurs on less than 16 core loads.










3.2GHz is the all-core boost freq, whereas 3.5GHz is the <8-core boost freq.

She runs forced @ 3.2 under full load all day long. The 8400GS doesn't pull but a sip of power (I corrected my first post... it's not an 8800 GS).

Running off an AX1200i 80+ Platinum PSU too; it gets ~88-90% efficiency around that load.


----------



## DaveLT

Quote:


> Originally Posted by *Chooofoojoo*
> 
> 
> 
> 
> 
> 
> 
> 
> 3.2Ghz is the all-core boost freq, whereas 3.5Ghz is the <8 core boost freq.
> 
> She runs forced @ 3.2 full load all day long. The 8400GS doesn't pull but a sip of power (I corrected my first post... it's not an 8800 gs).
> 
> Running off an AX1200i 80+plat. PSU too. it gets ~88-90% efficiency around that load.


But how in the world does it pull 1000W then?








If power consumption is roughly 140W (or thereabouts) per proc, that would be 560W for the procs, wouldn't it? Or is the voltage elevated?
But never mind: even if the board pulls at most 50W, counting 88% efficiency it isn't anywhere near 1000W...
Next, is your power meter actually reporting correctly?
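For what it's worth, the estimate being argued here is simple arithmetic. A quick sketch using the figures from the posts above (140W per CPU, at most 50W for the board, 88% PSU efficiency); these are the thread's own guesses, not measurements:

```python
# Back-of-the-envelope wall-draw estimate using the figures quoted in the
# thread (140 W per CPU, ~50 W for the board, 88% PSU efficiency).
# These are the posters' guesses, not measured values.

def wall_draw_watts(cpus=4, watts_per_cpu=140, board_watts=50, psu_efficiency=0.88):
    """Estimate AC draw at the wall from an estimated DC component load."""
    dc_load = cpus * watts_per_cpu + board_watts
    return dc_load / psu_efficiency

print(round(wall_draw_watts()))  # about 693 W, well short of 1000 W
```

Which is the point of the question: with those assumptions the 1000W reading only makes sense if the CPUs are drawing closer to 200W each (as suggested later for the over-volted ES chips) or the meter is off.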


----------



## Norse

Quote:


> Originally Posted by *DaveLT*
> 
> But how in the world does it pull 1000W then
> 
> 
> 
> 
> 
> 
> 
> 
> If power consumption is roughly 140W (or thereof) per proc that would be 560W isn't it for the procs? Or is the voltage elevated?
> But nevermind, if say the board pulls at most 50W counting 88% efficiency it isn't anywhere near 1000W ...
> Next, is your power meter actually reporting correctly


Unless the memory is fairly power hungry? If you figure 10 watts per DIMM, and he has, er, quite a few DIMMs...


----------



## neo565

I found 8 of those CPU's in the recycle bin at Woods Hole Oceanographic once. How much did you pay for them?


----------



## neo565

Also, mine are the Interlagos. What are yours?


----------



## Chooofoojoo

I would love to take some of those recycle bin cpus off of you!

Mine are Abu Dhabi. The 140W TDP is for the 2.8GHz standard freq; these have a healthy over-volt, so they run spicy. I bumped one of the CPUs up to 4GHz (yay for unlocked engineering samples!) but haven't gotten around to trying to OC all of them together.


----------



## DaveLT

Quote:


> Originally Posted by *Chooofoojoo*
> 
> I would love to take some of those recycle bin cpus off of you!
> 
> Mine are AbuDhabi. 140w tpd is for the 2.8 standard freq. These have a healthy over-volt to hang out so spicy. I bumped one of the cpus up to 4ghz (yay for unlocked engineering samples!) But haven't got around to try and OC all of them together.


No wonder they're pulling 200W each. And also those are ES chips aren't they? mmm. That's why it's "extra spicy"







Quote:


> Originally Posted by *neo565*
> 
> Also, mine are the Interlagos. What are yours?


No wonder they're in the recycle bin, they belong there!







It's not bad for collecting though. I want some of those as well!


----------



## neo565

Sorry, my Interlagos CPU's are in some motherboards at Woods Hole Oceanographic, running some big calculations for my dad.


----------



## neo565

He's doing some kind of mapping thing for the ocean.


----------



## DaveLT

Quote:


> Originally Posted by *neo565*
> 
> He's doing some kind of mapping thing for the ocean.


Nice. IMO that's something properly useful for once (rather than folding; yes, I know, I lost a relative to cancer, but jeez).


----------



## MikhailV

Since those Opterons can be had really cheap now, I'd snap up a couple and run my network with 'em.


----------



## neo565

12 core Interlagos for 14 bucks. WOW.
http://www.mrsparepartsonline.com/cpu-processor/amd-opteron-6234-16mb-2-4ghz-12-core-g34-cpu-processor-os6234wktcggu/


----------



## EpicAMDGamer

Quote:


> Originally Posted by *neo565*
> 
> 12 core Interlagos for 14 bucks. WOW.
> http://www.mrsparepartsonline.com/cpu-processor/amd-opteron-6234-16mb-2-4ghz-12-core-g34-cpu-processor-os6234wktcggu/


G34 motherboards still have pretty hefty price tags though.


----------



## neo565

Sometimes you can find them on Ebay for about 50 bucks or so.


----------



## Blindsay

Quote:


> Originally Posted by *neo565*
> 
> 12 core Interlagos for 14 bucks. WOW.
> http://www.mrsparepartsonline.com/cpu-processor/amd-opteron-6234-16mb-2-4ghz-12-core-g34-cpu-processor-os6234wktcggu/


Why are they so cheap, what's the catch? And how many of those can I run on 1 board?


----------



## Sean Webster

See the Brand specified?


----------



## neo565

I just think the seller made a mistake with the brand, because lower on the page it says that it was made by AMD. You can run 4 of those on 1 board.


----------



## neo565

Also they are cheap because they are not the newest architecture.


----------



## DaveLT

Quote:


> Originally Posted by *neo565*
> 
> Also they are cheap because they are not the newest architecture.


But also because the architecture is just not great. For example, the FX-8150 ran hotter and slower than a 1090T...
So to sum it up, all it takes is a 12-core MC (Magny-Cours) to beat a 16-core IL (Interlagos), and the MC is even easily overclocked.


----------



## neo565

Still, 12 cores for 14 bucks. You really can't beat that.


----------



## bobfig

You need 12 cores for home use??? Eff that


----------



## Blindsay

Quote:


> Originally Posted by *neo565*
> 
> Still, 12 cores for 14 bucks. You really can't beat that.


So I could have 48 cores (4x12)? This sounds like it might be a fun project for VMs.


----------



## neo565

Yeah. All you need is a motherboard with 4 G34 sockets.


----------



## Norse

Quote:


> Originally Posted by *bobfig*
> 
> you need 12 cores for home use???eff that


I have 32 in my gaming PC.......


----------



## Blindsay

Quote:


> Originally Posted by *neo565*
> 
> Yeah. All you need is a motherboard with 4 G34 sockets.


Are there any that are not like $700 lol


----------



## neo565

Here's a dual g34 mobo for $110:
http://www.ebay.com/itm/ASUSTeK-COMPUTER-KGNH-D16-Socket-G34-KGNH-D16-Motherboard-AMD-DDR3-New-O-S-/271246570214?pt=Motherboards&hash=item3f278e32e6


----------



## Norse

Quote:


> Originally Posted by *neo565*
> 
> Here's a dual g34 mobo for $110:
> http://www.ebay.com/itm/ASUSTeK-COMPUTER-KGNH-D16-Socket-G34-KGNH-D16-Motherboard-AMD-DDR3-New-O-S-/271246570214?pt=Motherboards&hash=item3f278e32e6


Unless you make a custom case you'd be screwed


----------



## neo565

No. It's half-SSI so it would fit in pretty much any case that accepts SSI.


----------



## Blindsay

Quote:


> Originally Posted by *neo565*
> 
> Here's a dual g34 mobo for $110:
> http://www.ebay.com/itm/ASUSTeK-COMPUTER-KGNH-D16-Socket-G34-KGNH-D16-Motherboard-AMD-DDR3-New-O-S-/271246570214?pt=Motherboards&hash=item3f278e32e6


Thanks. I should have specified though: I meant quad socket.


----------



## neo565

Yeah quad socket boards are very expensive, but you could get a dual socket for $110.


----------



## DaveLT

Quote:


> Originally Posted by *neo565*
> 
> Yeah quad socket boards are very expensive, but you could get a dual socket for $110.


Even a 2P with 24 cores is a lot of computing power.

Certainly more than a dual-L5520 server.
And I can get Opty 6128s all over eBay for 50$...

Anyone willing to sell me a dual socket G34 that you will ship overseas?


----------



## neo565

That would be awesome to make a gaming pc with that board, and then liquid cool it and stick a Titan in there.


----------



## neo565

Here's the info page from Asus for that mobo:
http://www.asus.com/Commercial_Servers_Workstations/KGNHD16/


----------



## DaveLT

Quote:


> Originally Posted by *neo565*
> 
> That would be awesome to make a gaming pc with that board, and then liquid cool it and stick a Titan in there.


If I can get a dual-CPU mobo for under 150$ I would immediately buy G34 waterblocks... and 2 360 rads.
One thing I'm not keen on is spending 50$ on a single pump.


----------



## Norse

Quote:


> Originally Posted by *DaveLT*
> 
> Anyone willing to sell me a dual socket G34 that you will ship overseas?


Same!


----------



## NKrader

Quote:


> Originally Posted by *Norse*
> 
> Same!


Buy one on eBay and have someone ship it to you.

I do it for another forum: a guy in South America buys stuff, has it shipped to my place, then I package it better and forward it to his place.


----------



## AMD SLI guru

I have an Asus dual socket motherboard I'm willing to sell *not for 150 bucks* and ship overseas. They just have to pay for the shipping.


----------



## xNovax

Quote:


> Originally Posted by *AMD SLI guru*
> 
> I have a Asus dual socket motherboard I'm willing to sell *not for 150bucks* and ship overseas. They just have to pay for the shipping.


What board?


----------



## AMD SLI guru

Quote:


> Originally Posted by *xNovax*
> 
> What board?


The Asus KGPE-D16


----------



## neo565

That is a really awesome motherboard.


----------



## AMD SLI guru

they are great boards. I actually have two of them and one of them is being used as a VM server. The other one is just sitting here.


----------



## crust_cheese

Out of curiosity, how is it these rackmount switches are so huge and expensive? I mean, they're probably high quality, but how can one be that much more complicated and expensive than a generic 4-port switch just scaled up to more ports?


----------



## ndoggfromhell

Typically they're managed switches (meaning they have a sophisticated OS on them that allows each port to be managed separately). They're also usually equipped with more backplane memory and have higher stability/MTBF.


----------



## DaveLT

Quote:


> Originally Posted by *crust_cheese*
> 
> Out of curiosity, how is it these rackmount switches are so huge and expensive? I mean, it's high quality, probably, but how can it be that more complicated and expensive than a generic 4-port switch just scaled to more ports?


Much more stable (with a generic 4-porter, good luck getting stable full speed on every port), more performance, more features that actually matter, and lastly, ports that don't blow up when you put a huge load on them.

If you just want a cheap rackmount switch, buy an unmanaged one, provided you're sure you don't need VLANs.


----------



## Oedipus

Quote:


> Originally Posted by *crust_cheese*
> 
> how can it be that more complicated and expensive than a generic 4-port switch just scaled to more ports?


Yeah, about that...



http://www.dell.com/us/business/p/powerconnect-6200-series/pd


----------



## bobfig

Quote:


> Originally Posted by *crust_cheese*
> 
> Out of curiosity, how is it these rackmount switches are so huge and expensive? I mean, it's high quality, probably, but how can it be that more complicated and expensive than a generic 4-port switch just scaled to more ports?


It's sorta like Windows and Arch Linux in a way: Windows is good to go out of the box, whereas you have to sit there and configure Arch to work on the computer you're running it on. Not only that, you have to have a good understanding of bits and bytes and networking practices to configure them. Yes, an everyday Joe Schmoe from OCN could sit there and set one up and get it to work, but why do that when you can get a cheaper one that all you do is plug in?

I have taken a few Cisco classes and also have a Cisco Catalyst 3500 XL sitting here in my room that I don't use. If you want it I could sell it to you, if you want to play with it.











----------



## crust_cheese

Selling in this case probably means "a lot of money that I don't have anymore", but thanks anyway









Man, Cisco. It seems like they're the indisputable top dog when it comes to networking, but man, they're proprietary as hell, no? That really sucks.


----------



## bobfig

They are, as any other company would be, so that buyers stay within the same company. But I don't see why you couldn't mix switches and stuff from other vendors; it's just that managing them all together would mean having to learn the multiple OSes that each device uses.


----------



## crust_cheese

But I would think the problem is rather that a fair portion of the Internet backbone is based on proprietary technology and proprietary standards, and the education is based on proprietary courses (aka Cisco networking).


----------



## Oedipus

I know HP and Dell switches use a CLI structure that is largely similar to Cisco's. Differences or not, they will talk to each other one way or another.


----------



## driftingforlife

Networking question.

I am getting 2x dual-port 4Gb fibre PCI-E cards at the end of the month to go between my server and my rig.

How I connect them is what I'm working out atm. I also have 2 internet lines going through one hub to add into the mix. I will be teaming the 2 ports on the cards so I get 960MB/s.

1. Go peer-to-peer and connect them straight to each other. That would mean 2 network connections on both the server and the rig, each using a different gateway.

2. Get a switch with 4x 4Gb SFPs.

I want option 2, but I need a switch that can take 4Gb SFPs. Am I right in saying that if I use a 4Gb SFP module in a mini-GBIC SFP port I will be limited to 1GbE?

This is where option 1 comes in, as a switch with four 10GbE SFP ports would be waaaay too much.

I'm looking at this atm http://www.misco.co.uk/product/195047/ZyXel-GS1910-24-24-Port-Gigabit-Smart-Switch?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1tabs
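As an aside, the 960MB/s figure is presumably just line-rate arithmetic for two aggregated 4Gb/s links, converted to binary megabytes; a rough sketch, ignoring protocol and encoding overhead:

```python
# Rough line-rate arithmetic for two teamed 4 Gb/s links.
# Ignores protocol/encoding overhead, so real-world throughput would be lower.

def teamed_throughput_mib_s(links=2, gbit_per_link=4):
    bits_per_second = links * gbit_per_link * 10**9  # decimal gigabits per second
    bytes_per_second = bits_per_second / 8
    return bytes_per_second / 2**20                  # binary megabytes per second

print(round(teamed_throughput_mib_s()))  # about 954, i.e. the quoted ~960MB/s
```

In practice 4Gb Fibre Channel links use 8b/10b encoding, which eats further into that number even before the protocol question raised in the replies.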


----------



## jibesh

Quote:


> Originally Posted by *driftingforlife*
> 
> Networking question.
> 
> I am getting 2 x dual port 4GbE fibre PCI-E cards at the end of the month to go between my server and my rig.
> 
> How I connect them both is what working out atm. I have 2 internet lines as well going though one hub to add into the mix. I will be teaming the 2 ports on the cards so I get 960MB/s
> 
> 1. Have it peer-to-peer and have them connected strait to each other. would mean I have 2 network connections on server and rig, they both use a different gateway each.
> 
> 2. Get a switch with 4 x 4gbE SFPs .
> 
> I want 2 but I need a switch that I can use 4Gbe SFP's on. Am i right in saying if I use a 4GbE SFP modle in a mini-GBIC SFP port I will be limited to 1GbE?
> 
> This is where 1 comes in as a switch with 4 10GbE SFP port will be waaaay to much.
> 
> Im looking at this atm http://www.misco.co.uk/product/195047/ZyXel-GS1910-24-24-Port-Gigabit-Smart-Switch?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1tabs


I don't believe what you are trying to do is possible. AFAIK, fibre HBAs connect to storage devices such as SANs; they don't use the TCP/IP protocol, nor can they be used as Ethernet adapters.


----------



## driftingforlife

Yeap, just been looking into it some more.







Will connect them peer-to-peer.

Thanks


----------



## jibesh

Quote:


> Originally Posted by *driftingforlife*
> 
> Yeap, just being looking into it some more
> 
> 
> 
> 
> 
> 
> 
> 
> Will connect them peer-to-peer.
> 
> Thanks


Question: how are you planning to present the storage so that they can be passed through the HBAs? What storage protocol and what software?


----------



## driftingforlife

No idea as of yet.


----------



## xNovax

Does anyone have any spare Dell C1100 Rack mounts? It seems like my server never got shipped with them.


----------



## The_Rocker

Quote:


> Originally Posted by *xNovax*
> 
> Does anyone have any spare Dell C1100 Rack mounts? It seems like my server never got shipped with them.


I believe this is the kit you need.

http://www.ebay.co.uk/itm/Intel-AXXBasicRail-881096-Basic-Slide-Rail-Kit-/140976846479?pt=US_Rackmount_Cabinets_Frames&hash=item20d2dff68f

They are what I needed for my Dell CS23's.


----------



## The_Rocker

Quote:


> Originally Posted by *driftingforlife*
> 
> Networking question.
> 
> I am getting 2 x dual port 4GbE fibre PCI-E cards at the end of the month to go between my server and my rig.
> 
> How I connect them both is what working out atm. I have 2 internet lines as well going though one hub to add into the mix. I will be teaming the 2 ports on the cards so I get 960MB/s
> 
> 1. Have it peer-to-peer and have them connected strait to each other. would mean I have 2 network connections on server and rig, they both use a different gateway each.
> 
> 2. Get a switch with 4 x 4gbE SFPs .
> 
> I want 2 but I need a switch that I can use 4Gbe SFP's on. Am i right in saying if I use a 4GbE SFP modle in a mini-GBIC SFP port I will be limited to 1GbE?
> 
> This is where 1 comes in as a switch with 4 10GbE SFP port will be waaaay to much.
> 
> Im looking at this atm http://www.misco.co.uk/product/195047/ZyXel-GS1910-24-24-Port-Gigabit-Smart-Switch?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1tabs


4Gb, 8Gb, and 16Gb FC HBAs are used for storage fabric: Fibre Channel.

It sounds like you want to run a TCP/IP network, in which case you should get yourself a 10Gig fibre NIC and a couple of 10Gig SFPs. You could also do this with 1Gig fibre NICs.

But yeah... the 2/4/8/16 specification is designed for use in Fibre Channel storage fabric.


----------



## driftingforlife

Yeap, I looked it up; my mistake. Will just use dual-port 1Gb NICs for now.

Thanks


----------



## The_Rocker

Quote:


> Originally Posted by *driftingforlife*
> 
> Yeap, I looked it up, my mistake. Will just use dual port 1GB NICs for now.
> 
> Thanks


Just going to mention that you may as well use 1Gb copper NICs and CAT 5 or 6 if this is at home, since you probably won't be exceeding the 100m limit.


----------



## driftingforlife

Yea, just need to get a proper switch.

This is what I will get http://www.misco.co.uk/product/195153/HP-1810-24G-v2-24-Port-Gigabit-Switch?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1tabs

Just seen this. Might just save up for when I can use 10GbE.

http://www.misco.co.uk/product/195159/ZyXel-GS1910-24-24-Port-Gigabit-Smart-Switch-with-10G-uplinks


----------



## DaveLT

Quote:


> Originally Posted by *The_Rocker*
> 
> Just going to mention that you may as well use 1Gb copper NICs and CAT5 or 6 if this is at home, since you probably won't be exceeding the 100m limit.


... I often exceeded the 100m limit; that's why I have many quad 1G NICs that handle the output from my file server (FC), which then goes to my switch, and my rig is hooked up with a 9402PT teamed with my 2x Realtek NICs


----------



## Sarec

Bah ignore this post.


----------



## jibesh

Quote:


> Originally Posted by *DaveLT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *The_Rocker*
> 
> Just going to mention that you may as well use 1Gb copper NICs and CAT5 or 6 if this is at home, since you probably won't be exceeding the 100m limit.
> 
> 
> 
> ... I often exceeded the 100m limit; that's why I have many quad 1G NICs that handle the output from my file server (FC), which then goes to my switch, and my rig is hooked up with a 9402PT teamed with my 2x Realtek NICs
Click to expand...

Lol same here...I push between 300MB/s - 500MB/s between my servers/workstation so ConnectX 10Gb Ethernet adapters for me. I'm impatient


----------



## The_Rocker

Quote:


> Originally Posted by *jibesh*
> 
> Lol same here...I push between 300MB/s - 500MB/s between my servers/workstation so ConnectX 10Gb Ethernet adapters for me. I'm impatient


Well, we all know you can exceed the copper 'limit', but then you don't meet the specification for a CAT5e or 6 install etc.... Only really relevant in a professional install.


----------



## Ecstacy

Quote:


> Originally Posted by *jibesh*
> 
> Lol same here...I push between 300MB/s - 500MB/s between my servers/workstation so ConnectX 10Gb Ethernet adapters for me. I'm impatient


You can team a couple of cheap NICs like this one if you wanted to save some money.


----------



## jibesh

Quote:


> Originally Posted by *Ecstacy*
> 
> You can team a couple of cheap NICs like this one if you wanted to save some money.


Teaming typically gives you more bandwidth, not more speed. Also, when I can get ConnectX EN or InfiniBand adapters for $50 to $60 each, it doesn't make sense to buy more 1GbE adapters.


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> Teaming typically gives more bandwidth not more speed.


Bandwidth is speed for most purposes. You're not accelerating the speed of electrons, but by increasing the bandwidth you're increasing the amount of data you can transmit in a given period of time. Thus you speed up the time it takes to transmit larger packets of data.

What bandwidth isn't is latency. But that's another thing again.


----------



## Ecstacy

Quote:


> Originally Posted by *Plan9*
> 
> Bandwidth is speed for most purposes. You're not accelerating the speed of electrons, but by increasing the bandwidth you're increasing the amount of data you can transmit in a given period of time. Thus you speed up the time it takes to transmit larger packets of data.
> 
> What bandwidth isn't is latency. But that's another thing again.


That's what I was thinking. It's pretty much the same thing for most purposes, in my opinion (correct me if I'm wrong).


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> Bandwidth is speed for most purposes. You're not accelerating the speed of electrons, but by increasing the bandwidth you're increasing the amount of data you can transmit in a given period of time. Thus you speed up the time it takes to transmit larger packets of data.
> 
> What bandwidth isn't is latency. But that's another thing again.


Quote:


> Originally Posted by *Ecstacy*
> 
> That's what I was thinking. It's pretty much the same thing for most purposes in my opinion (correct me if I'm wrong.)


As I understand it, bandwidth is capacity and speed is throughput.

Teaming 2 or more 1GbE links will give you the capacity to serve multiple requests but each link will still be limited to the speed of ~125 MB/s.

So a server with 2 or more teamed 1GbE links will only be able to serve out data at the maximum rate of ~125 MB/s to one or more servers (transfer rate can be increased with MPIO but that's not really NIC teaming).

Having a 10GbE link will allow you to transmit data up to ~1.2 GB/s to another device.

E.g. I recently had to move 7TB of data from a failed array to another. With a 10GbE link it only took about 5 hours (at an average transfer rate of 400 MB/s). Over 1GbE it would have taken about 18 hours (assuming an average transfer rate of 115 MB/s).
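A quick back-of-the-envelope script for those numbers (the 7TB figure and the two transfer rates are taken from the example above; decimal TB/MB units assumed):

```python
# Estimate wall-clock time to move a dataset over a network link.
def transfer_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to move data_tb terabytes at an average of rate_mb_s MB/s."""
    data_mb = data_tb * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
    return data_mb / rate_mb_s / 3600

# 7 TB over 10GbE averaging 400 MB/s vs. 1GbE averaging 115 MB/s:
print(round(transfer_hours(7, 400), 1))  # → 4.9 (the "about 5 hours")
print(round(transfer_hours(7, 115), 1))  # → 16.9 (roughly the "about 18 hours" once binary TiB are assumed)
```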


----------



## Nexo

So many pictures of servers.


----------



## DaveLT

Quote:


> Originally Posted by *jibesh*
> 
> As I understand it, bandwidth is capacity and speed is throughput.
> 
> Teaming 2 or more 1GbE links will give you the capacity to serve multiple requests but each link will still be limited to the speed of ~125 MB/s.
> 
> So a server with 2 or more teamed 1GbE links will only be able to serve out data at the maximum rate of ~125 MB/s to one or more servers (transfer rate can be increased with MPIO but that's not really NIC teaming).
> 
> Having a 10GbE link will allow you to transmit data up to ~1.2 GB/s to another device.
> 
> i.e. I recently had to move 7TB of data from a failed array to another. With a 10GbE link, it only took about 5 hours (at an average transfer rate of 400 MB/s). Over 1GbE, it would have taken about 18 hours (assuming an average transfer rate of 115 MB/s).


No, not really at all. Do you really understand the point of teaming? It's not a "parallel" link at all.


----------



## The_Rocker

There are several different types of 'NIC teaming': smart load balancing, failover and link aggregation.

Full link aggregation is what turns several small pipes into one fat pipe and allows for full bandwidth utilization (4x 1GbE NICs will give you a 4GbE pipe). However, proper LACP aggregation will require configuration on the switch the NICs are connected to. On HP switches this is typically called an LACP trunk, and on Cisco Catalyst gear it's usually called a Port Channel.

It is worth mentioning that the price of 10Gbit Ethernet is coming down fast. A single 10Gbit NIC now costs around £400, whereas a quad port 1Gbit card will cost around £250. Double that and add a dual port card, and ten 1Gbit ports will cost you more than a single 10Gbit NIC, and IMO look really untidy.

It's the price of the switches that will hurt with 10Gbit though.... around £1000 for a Netgear ProSafe 12 port.

As a middle ground though, you can achieve the following setup and get 4Gbit of bandwidth (yes, fully utilizable, which means over 400MB/s throughput).....

A used HP ProCurve 2810-24G switch - $150. Here

A used Intel quad port server NIC - $50. Here

Now, when new, those parts were expensive and used in fairly high-end environments. You will be able to configure a full LACP trunk and enjoy a 4Gbit link.
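As a sketch of the host side of such a setup, this is roughly what an LACP bond looks like on Linux with iproute2 (the interface names eth0/eth1 and the address are placeholders; the switch ports must be configured as a matching LACP trunk / Port Channel, and this assumes a kernel with the bonding driver):

```shell
# Hypothetical 802.3ad (LACP) bond of two 1GbE ports; requires root.
ip link add bond0 type bond mode 802.3ad lacp_rate fast
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # placeholder address
```

Worth remembering that, as others in the thread point out, a single TCP flow still hashes onto one member link, so any one transfer tops out at roughly the speed of a single port; the aggregate bandwidth only shows up across multiple simultaneous flows.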


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> As I understand it, bandwidth is capacity and speed is throughput.


No. Bandwidth is throughput and latency is speed. But since latency matters more for real-time services (such as gaming), throughput has by far the biggest significance on the speed at which you can transmit data above a certain size.

A thought experiment would be this:
Posting (via snail mail) a 3TB HDD to someone would have a higher latency than FTPing it. As no data can be received until the HDD arrives, you'll have a latency of 1 day (assuming next day delivery, which is pretty standard these days). However, most home internet connections cannot upload 3TB of data in 1 day, so although FTPing will have a lower latency, posting the data is actually quicker as you're sending the entire data in one package. Thus the bandwidth of snail mail is far greater, and thus snail mail is quicker.

Obviously I'm not saying we should all be posting hard disks instead of uploading data. The point is that bandwidth can have a more profound effect on the speed at which data is sent than latency will. What's more, when talking about electronic equipment, the difference in latency is pretty negligible (which is why it's generally not an issue unless you're building real-time systems), whereas bandwidth capacity can jump significantly from one specification to another.
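To put rough numbers on that thought experiment (illustrative figures only: a 3TB drive posted on next day delivery versus a hypothetical 20 Mbit/s home uplink):

```python
# Effective bandwidth of posting a hard disk vs. uploading the data.
def effective_mbit_s(data_tb: float, seconds: float) -> float:
    """Average throughput in Mbit/s when data_tb arrives after `seconds`."""
    return data_tb * 8_000_000 / seconds  # decimal: 1 TB = 8,000,000 Mbit

DAY = 24 * 3600
print(round(effective_mbit_s(3, DAY)))  # → 278 Mbit/s for a posted 3TB drive

# Days to upload the same 3TB over a 20 Mbit/s uplink:
print(round(3 * 8_000_000 / 20 / DAY))  # → 14
```

High bandwidth, terrible latency: exactly the trade-off the post describes.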


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> No. Bandwidth is throughput and latency is speed. But since latency matters more for real-time services (such as gaming), throughput has by far the biggest significance on the speed at which you can transmit data above a certain size.
> 
> A thought experiment would be this:
> Posting (via snail mail) a 3TB HDD to someone would have a higher latency than FTPing it. As no data can be received until the HDD arrives, you'll have a latency of 1 day (assuming next day delivery, which is pretty standard these days). However, most home internet connections cannot upload 3TB of data in 1 day, so although FTPing will have a lower latency, posting the data is actually quicker as you're sending the entire data in one package. Thus the bandwidth of snail mail is far greater, and thus snail mail is quicker.
> 
> Obviously I'm not saying we should all be posting hard disks instead of uploading data. The point is that bandwidth can have a more profound effect on the speed at which data is sent than latency will. What's more, when talking about electronic equipment, the difference in latency is pretty negligible (which is why it's generally not an issue unless you're building real-time systems), whereas bandwidth capacity can jump significantly from one specification to another.


You got it wrong as well; there's no such thing as "speed" in networking.
There's only bandwidth and throughput. Bandwidth = theoretical throughput; that's all it is.
Oh, latency is latency, by the way.


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> You got it wrong as well, there's no such thing as "speed" in networking.
> There's only bandwidth and throughput. Bandwidth = theoretical throughput. That's all it is
> Oh, Latency is latency by the way.


I love how you say "I'm wrong" and then go on to make the same bloody points that I made









Seriously mate, there wasn't much difference between our two posts, aside from the fact that I gave a dumbed-down explanation and you just echoed networking terms without any explanation. However, the point we're both making is still the same


----------



## Theloudtrout

Here is an update on my FD Core 1000 server. Took a picture while I was working on it, hence why the 4 pin power and RAM are missing.









The Core 1000 makes a great little cheap server box once you shove an improvised HDD rack in.


----------



## Oedipus

Not mine, but one that I'm inheriting at work:





It's an Intel MFSYS25V2 blade enclosure with three compute nodes, each with two E5645s and 24-32GB of RAM. So far, I hate it. If it weren't so new, we'd be heaving it out the window and replacing it with some R720s or R820s. Side note: it never ceases to amaze me what kind of half-assed IT infrastructure exists in even the most well-heeled organizations. Did any of you know that Intel used to make 1U switches?

http://www.xtremetek.com/reviews/?id=30&page=1

Yeah, neither did I.


----------



## tycoonbob

Quote:


> Originally Posted by *Oedipus*
> 
> Not mine, but one that I'm inheriting at work:
> 
> It's an Intel MFSYS25V2 blade enclosure with three compute nodes, each with two E5645s and 24-32GB of RAM. So far, I hate it. If it weren't so new, we'd be heaving it out the window and replacing it with some R720s or R820s. Side note: it never ceases to amaze me what kind of half-assed IT infrastructure exists in even the most well-heeled organizations. Did any of you know that Intel used to make 1U switches?
> 
> http://www.xtremetek.com/reviews/?id=30&page=1
> 
> Yeah, neither did I.


I know I love Cisco UCS, but why do you hate that system? Storage, compute, and network all in one box.


----------



## TheNegotiator

Just ordered a PowerEdge R710 to add to my home lab.

*OS:* Undecided
*Case:* R710 stock
*CPU:* 2x E5520
*Motherboard:* R710 stock
*Memory:* 48GB
*PSU:* 870w
*OS HDD:* 64GB SSD
*Storage HDD(s):* 4x 2TB
*Server Manufacturer:* Dell


----------



## killabytes

Quote:


> Originally Posted by *tycoonbob*
> 
> I know I love Cisco UCS, but why do you hate that system? Storage, compute, and network all in one box.


We just went to full Cisco gear at work: UCS and Nexus switches. I love the ear-ripping sound of 64 UCS chassis running ESX.


----------



## Plan9

I've just ordered 3x 3TB HDDs to add to my ZFS storage pool. So I'll be running:

Code:

3x 1TB raidz1
3x 1TB raidz1
3x 3TB raidz1
-----------------
~9TB usable disk space
-----------------

I'll put more up when I've got the hardware
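The back-of-the-envelope maths for that layout: each raidz1 vdev loses one disk to parity, so the pool works out to 10 TB raw, which matches the ~9TB usable once converted to binary units (a rough sketch; real usable space will be slightly less after ZFS overhead):

```python
# Usable space of a ZFS pool built from raidz1 vdevs:
# each raidz1 vdev loses one disk's worth of space to parity.
def raidz1_usable_tb(vdevs):
    """vdevs: list of (disk_count, disk_size_tb) tuples."""
    return sum((disks - 1) * size_tb for disks, size_tb in vdevs)

pool = [(3, 1), (3, 1), (3, 3)]        # 3x 1TB, 3x 1TB, 3x 3TB raidz1
usable_tb = raidz1_usable_tb(pool)     # 10 (decimal TB)
usable_tib = usable_tb * 1e12 / 2**40  # ~9.1 TiB, i.e. the "~9TB" above
print(usable_tb, round(usable_tib, 1)) # → 10 9.1
```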


----------



## Oedipus

Quote:


> Originally Posted by *tycoonbob*
> 
> I know I love Cisco UCS, but why do you hate that system? Storage, compute, and network all in one box.


I don't want it all to be in one box, especially the storage side of things. There's also no 10Gb uplink available to go to our switches, and on an S55 or PC 7048, an 8 port LAG just feels like a waste of precious space.

Fun fact: Drive 1 (or 0) is dead, and I found out tonight that it's a 600GB SSD. Ouch.


----------



## DaveLT

Quote:


> Originally Posted by *Oedipus*
> 
> I don't want it to all be in one box, especially the storage side of things. There's also no 10gb uplink available to go to our switches, and on a S55 or PC 7048, an 8 port LAG just feels like a waste of precious space.
> 
> Fun fact: Drive 1 (or 0) is dead, and I found out tonight that it's a 600GB SSD. Ouch.


OUCHHHH... Is there warranty for that?


----------



## Oedipus

I would imagine. It's been dead since March and it's a part of the OS RAID 1 that encompasses all three nodes. The fact that it hasn't been replaced yet (and that it was brushed off when we recommended it be replaced) is symbolic of why we're inheriting this client.


----------



## ndoggfromhell

Just picked up an HP MicroServer this past weekend. It's the N40L model. OS is Home Server 2011 on a 64GB SSD. The 4 drive bays are populated with 3TB Seagates... doing a JBOD for 12TB. No need for RAID, just copying my movies and music and TV shows to it.


----------



## TheNegotiator

Quote:


> Originally Posted by *ndoggfromhell*
> 
> Just picked up an HP MicroServer this past weekend. It's the N40L model. OS is Home Server 2011 on a 64GB SSD. The 4 drive bays are populated with 3TB Seagates... doing a JBOD for 12TB. No need for RAID, just copying my movies and music and TV shows to it.


I haven't seen one of those before. How do you like it? It looks like it would be perfect for some of my smaller business clients that need a server.


----------



## PCSarge

Quote:


> Originally Posted by *Jeci*
> 
> Here's my £6 rack - No server yet, I'm in the process of acquiring some parts for a silent home build that I'm going to be using for ripping, encoding, remuxing as well as running plex server:


Why in GOD'S NAME do you need that many switching ports? I'm running off a 24 port D-Link on mine and it rolls just fine..... there's no way you're going to have ~50-odd computers in your house at once.


----------



## DaveLT

Quote:


> Originally Posted by *cmgunn*
> 
> I haven't seen one of those before. How do you like it? It looks like it would be perfect for some of my smaller business clients that need a server.


Basic *file* server. Got me thinking about the horsepower of that processor. Surely rather little, wouldn't it be?


----------



## herkalurk

Quote:


> Originally Posted by *PCSarge*
> 
> Why in GOD'S NAME do you need that many switching ports? I'm running off a 24 port D-Link on mine and it rolls just fine..... there's no way you're going to have ~50-odd computers in your house at once.


Looks like a CCNA/CCNP practice setup more than need for home.


----------



## PCSarge

Quote:


> Originally Posted by *herkalurk*
> 
> Looks like a CCNA/CCNP practice setup more than need for home.


True, but my god, you'd laugh at our server setup at work... can you say a stack of old HP ProLiants with Win Server 2003 R2 and a 6 disk SCSI hot-swap bay in each? Failed to mention the OS drives in them are RAID 1 and 32GB... the storage is RAID 5 in each on 4x 74GB drives..... oh the horror... still fast as snot on those Xeons they've got though.....

For god's sake, my home server is more modern.... i5 750 @ 4GHz, 16GB of RAM, 2x 6850s that mine 24/7, 6x 2TB drives, Windows Server 2008 R2, and an 800W Enermax PSU.... on a D-Link 1024D switch.

Our switches at work..... are so old the names are worn off them...... but I believe they're Cisco.... D-Link don't use blue housings. lol


----------



## cones

Quote:


> Originally Posted by *Theloudtrout*
> 
> Here is an update on my FD Core 1000 server, took a picture while i was working on it hence why the 4 pin power and ram is missing.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The core 1000 makes a great little cheap server box once you shove an improvised HDD rack in.
> 
> 
> 


Got any more information on how you mounted the hdds?


----------



## PCSarge

Quote:


> Originally Posted by *cones*
> 
> 
> Got any more information on how you mounted the hdds?


Looks like he made custom cables and just... screwed them in XD


----------



## Menty

Quote:


> Originally Posted by *cmgunn*
> 
> I haven't seen one of those before. How do you like it? It looks like it would be perfect for some of my smaller business clients that need a server.


Got an N36L in my cupboard - the only difference from the N40L is a slightly slower processor (I think it's 1.3GHz vs 1.6GHz). They're very nice boxes for simple file serving or a small Exchange install - that's really what they were designed for: <25 users kinda thing.

The new one coming out has a bit more oomph to it - a dual core Pentium instead of the AMD Athlon II Neo, so realistically around 3x the single threaded performance, probably








http://www8.hp.com/uk/en/products/proliant-servers/product-detail.html?oid=5379860#!tab=features


----------



## TheNegotiator

Quote:


> Originally Posted by *DaveLT*
> 
> Basic *File*server. Got me thinking about the horsepower on that processor. Surely rather little wouldn't it be


I'm talking about small businesses (25 or fewer computers) that need a server for their financial database or something like that. The Pentium G2020T would be an improvement over what I've come across in a lot of places like that around where I live.


----------



## DaveLT

Quote:


> Originally Posted by *cmgunn*
> 
> I'm talking about small businesses (25 or fewer computers) that need a server for their financial database or something like that. The Pentium G2020T would be an improvement over what I've come across in a lot of places like that around where I live.


Well... it had better be. HP used the Turion simply because it is very cheap. The Pentium is not.


----------



## TheNegotiator

Quote:


> Originally Posted by *DaveLT*
> 
> Well ... it better be. HP used the Turion simply because it is very cheap. The pentium is not.


Missed the model he posted. I'm talking about this.


----------



## driftingforlife

(not mine, but keeps the thread going)

Finally changed the switch in the server rack today at work.

Before, it was 2x 24 port 1GbE switches; the top one had a 4GbE trunk to the core switch, and the lower switch was linked into the top switch by a single 1GbE connection









It's now running a 48 port GbE switch with an 8GbE trunk via new cabling.

Old core switch link (shoddy work by techs before us)



New one (8x CAT5e)









The servers

3 HP ESXi machines, each with a single socket-2011 4c/8t Xeon; 2 have 32GB of RAM, 1 has 24GB.

1 HP SAN.

1 Dell web server for our VLE (we are a school)

2 UPS's





The network manager and I both started in Nov last year, and the stuff we are finding is just unbelievable. The people before us were morons. The NM has found over 150 GPOs that do nothing or duplicate each other









I need to do a LOT of re-cabling; that will have to wait till a half term now.

I got to keep one of the switches though


----------



## xNovax

Looks like you have lots of work ahead of you.


----------



## xNovax

Double post.


----------



## Zeus

Quote:


> Originally Posted by *PCSarge*
> 
> true but my god. youd laugh at our server setup at work.. can you say a stack of old HP proliants with win server 2003 R2 and a 6 disk scsi hotswap bay in each? failed to mention the OS drives in them are RAID 1 and 32GB... the storage is RAID 5 in each on 4 x 74GB drives.....oh the horror of the 90s......still fast as snot on those xeons theyve got though.....


At work we have 4 Fujitsu TeamServer L830i servers. They each have 4 Pentium II Xeon 450MHz CPUs and 4GB of RAM (8x 512MB); the OS is on a 4GB RAID 1 and the data is on a RAID 5 with 6x 8GB drives (no hot spare). These are used to host our call logging solution. And the OS is NT4









I know them well as I built them in Q1 2000, and the worrying thing is that they are more reliable than our new Fujitsu Primergy RX200 S7 servers.


----------



## ndoggfromhell

The CPU speed isn't an issue; it runs everything I need it to just fine. I wanted something small and quiet, and this meets both of those requirements.
Quote:


> Originally Posted by *cmgunn*
> 
> I haven't seen one of those before. How do you like it? It looks like it would be perfect for some of my smaller business clients that need a server.


----------



## Norse

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *driftingforlife*
> 
> (not mine but keeps the thread going
> 
> 
> 
> 
> 
> 
> 
> )
> 
> Finally changed the switch in the server rack today at work.
> 
> Before it was 2x 24 port 1GbE switches, top one had a 4GbE trunk to the core switch and they linked the lower switch by one 1GbE connection into the top switch
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Its now running a 48 port GbE with a 8GbE trunk via a new cable.
> 
> Old core switch link (shoddy work by techs before us)
> 
> 
> 
> New one (8x CAT5e)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The servers
> 
> 3 HP ESXI machines with 1 2011 4c/8t xeon, 2 have 32GB of ram, 1 has 24GB.
> 
> 1 HP SAN.
> 
> 1 Dell web server for our VLE (we are a school)
> 
> 2 UPS's
> 
> 
> 
> 
> 
> Me and the network manager both started in nov last year, the stuff we are finding is just unbelievable. The people before us were morons. NM has found over 150 GPOs that do nothing or the same thing
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I need to do a LOT of re-cabling, will now have to wait till a half term now.
> 
> I got to keep one of the switches though






Looks good, though I am wondering why a single 48? Surely 2x 24 would offer more redundancy, so at least things could partially run in the event of a failure


----------



## TheNegotiator

Quote:


> Originally Posted by *cmgunn*
> 
> Just ordered a PowerEdge R710 to add to my home lab.
> 
> *OS:* Undecided
> *Case:* R710 stock
> *CPU:* 2x E5520
> *Motherboard:* R710 stock
> *Memory:* 48GB
> *PSU:* 870w
> *OS HDD:* 64GB SSD
> *Storage HDD(s):* 4x 2TB
> *Server Manufacturer:* Dell


The R710 came in today, here it is installed:


----------



## Oedipus

I spy an orange light. What's up with that?


----------



## TheNegotiator

I accidentally unplugged one of the power supplies when I was putting the new server in.


----------



## TopicClocker

Gonna edit this post, gonna add my home server here









Windows Server 2012
Galaxy III
Intel Core 2 Duo E6750 (underclocked to 2.33GHz, undervolted to 1.040V)
Gigabyte GA-P35-DS4
Corsair 6GB DDR2
WinPower 450w (Gotta change this to something modern and 80+)
320GB OS HDD, used for storage as well atm until I get new drives.
Planning on purchasing a 1TB HDD

I put this together from spare parts, I love this thing.

Update:
I've recently just built and set this up. I use it as a file server, and I run Subsonic and Plex to stream music to my phone. I would do videos and movies but my internet can't handle it; on LAN it's fine though.
I also run a Minecraft server with my friends. I undervolted and underclocked it for lower power consumption since it's a 24/7 machine, and I only play with about 4 friends on Minecraft anyway, so it's adequate. I was looking at a Core 2 Quad since they can be had for cheap and would grant me two more processing cores, but I may just build something new when I can and get a Haswell Pentium to replace this build.


----------



## Plan9

Quote:


> Originally Posted by *TopicClocker*
> 
> Gonna edit this post, gonna add my home server here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Windows Server 2012
> Galaxy III
> Intel Core 2 Duo E6750 (Underclocked to 2.33Ghz, undervolted to 1.040v)
> Gigabyte GA-P35-DS4
> Corsair 6GB DDR2
> WinPower 450w (Gotta change this to something modern and 80+)
> 320GB OS HDD, used for storage as well atm until I get new drives.
> Planning on purchasing a 1TB HDD
> 
> I put this together from spare parts, I love this thing.


That's quite a modest spec - particularly compared to most of the systems in this thread. What do you use it for?


----------



## TopicClocker

Quote:


> Originally Posted by *Plan9*
> 
> That's quite a modest spec - particularly compared to most of the systems in this thread. What do you use it for?


Was reading 30+ pages and was blown away by the setups :O
I've recently just built and set this up. I use it as a file server, and I run Subsonic and Plex to stream music to my phone. I would do videos and movies but my internet can't handle it; on LAN it's fine though.
I also run a Minecraft server with my friends. I undervolted and underclocked it for lower power consumption since it's a 24/7 machine, and I only play with about 4 friends on Minecraft anyway, so it's adequate. I was looking at a Core 2 Quad since they can be had for cheap and would grant me two more processing cores, but I may just build something new when I can and get a Haswell Pentium to replace this build.


----------



## Plan9

Quote:


> Originally Posted by *TopicClocker*
> 
> Was reading 30+ pages and was blown away by the setups :O
> I've recently just built and set this up. I use it as a file server, and I run Subsonic and Plex to stream music to my phone. I would do videos and movies but my internet can't handle it; on LAN it's fine though.
> I also run a Minecraft server with my friends. I undervolted and underclocked it for lower power consumption since it's a 24/7 machine, and I only play with about 4 friends on Minecraft anyway, so it's adequate. I was looking at a Core 2 Quad since they can be had for cheap and would grant me two more processing cores, but I may just build something new when I can and get a Haswell Pentium to replace this build.


To be fair, I think some people massively over-spec their systems. I like the look of yours, as it's proof that you don't need a beast of a system for home use (my home server is a little more powerful, but still modest compared to most on here, and I run virtual machines off it)


----------



## Wildcard36qs

*EDIT* This is not my personal setup sorry wrong thread!

Just got my hands on some nice and fast goodies!!!
1x R320 (Management server for Hyper-V cluster)
3x R720 with 384GB RAM each + 2x E52760 hosting the VMs
2x Dell PowerConnect 8132F 10GB switch
1x Dell EqualLogic PS6100 series

It looks a bit messy because it is just a temp setup on-site, as we are moving the client to a new location and are in the middle of P2V right now. We are virtualizing about 20 old servers, top of the line in 2007, onto this.


----------



## TopicClocker

Quote:


> Originally Posted by *Wildcard36qs*
> 
> *EDIT* This is not my personal setup sorry wrong thread!
> 
> Just got my hands on some nice and fast goodies!!!
> 1x R320 (Management server for Hyper-V cluster)
> 3x R720 with 384GB RAM each + 2x E52760 hosting the VM's
> 2x Dell PowerConnect 8132F 10GB switch
> 1x Dell EqualLogic PS6100 series
> 
> It looks a bit messy because it is just a temp setup on-site as we are moving the client to a new location and are in the middle of P2V right now. We are virtualizing about 20 old servers that were top of the line in 2007 to this.


Would pass out if I had to move all of that and reconnect it


----------



## herkalurk

Quote:


> Originally Posted by *TopicClocker*
> 
> Would pass out if I had to move all of that and reconnect it


That's nothing in today's enterprise. One of my VMware hosts at work has 15 different network connections (regular network, iscsi, management, backup, dmz, misc). It's just a matter of pre-planning, and pre-pulling labelled cables to the right places before even racking the servers.


----------



## Gunfire

Quote:


> Originally Posted by *TopicClocker*
> 
> Would pass out if I had to move all of that and reconnect it


Labeling, labeling, labeling.


----------



## Wildcard36qs

The crap you see above the servers is actually the phone switches. Those are not coming, as we are having a new system installed. So don't worry, lol.


----------



## Plan9

Quote:


> Originally Posted by *herkalurk*
> 
> That's nothing in today's enterprise. One of my VMware hosts at work has 15 different network connections (regular network, iscsi, management, backup, dmz, misc). It's just a matter of pre-planning, and pre-pulling labelled cables to the right places before even racking the servers.


Yeah, but it's still a pretty horrible job as sysadmin / datacentre jobs go. Or at least it's one of my most hated jobs anyway


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> Yeah, but it's still a pretty horrible job as sysadmin / datacentre jobs go by. Or at least it's one of my most hated jobs anyway


I think cabling is the most hated job of anyone who has ever worked in a data center lol.

At work, there is an entire row of network cabling layered on top of itself (I'm willing to bet there are several miles of cabling). Trying to trace anything there is a nightmare.


----------



## wtomlinson

Quote:


> Originally Posted by *jibesh*
> 
> I think cabling is the most hated job of anyone who has ever worked in a data center lol.
> 
> At work, there is an entire row of network cabling layered on top of each other (willing to bet there is several miles of cabling). Trying to trace anything there is a nightmare.


The only thing I hate more than cabling is crimping.


----------



## herkalurk

Quote:


> Originally Posted by *Plan9*
> 
> Yeah, but it's still a pretty horrible job as sysadmin / datacentre jobs go by. Or at least it's one of my most hated jobs anyway


I work with a guy who loves it, which makes it easy to decide who gets the joy of doing that job. It also helps that he has a little OCD, so he hates having extra-long wires hanging around making the back of a server rack hard to deal with.


----------



## Muskaos

Quote:


> Originally Posted by *herkalurk*
> 
> I work with a guy who loves it. Which makes it easy to decide who gets the joy of doing that job. It also helps he has a little OCD, so he hates having extra long wires hanging around making the back of a server rack hard to deal with.


Give that man a raise, and never let him leave.









I just bought a house, so I have a lot of cabling to look forward to. Gonna put a 24U rack in the basement and run Cat 6 where I need it.


----------



## rrims

Haven't updated mine in awhile.



It may not be much, but it's mine. It's currently a file server, web server, and a download server, so I can download large files overnight without leaving the main rig on. Nothing too fancy or powerful, but it does its job perfectly with low power draw.


----------



## TheNegotiator

Quote:


> Originally Posted by *herkalurk*
> 
> I work with a guy who loves it. Which makes it easy to decide who gets the joy of doing that job. It also helps he has a little OCD, so he hates having extra long wires hanging around making the back of a server rack hard to deal with.


I work with a guy like that. He just finished running a mile or two of cable last week...


----------



## jibesh

Quote:


> Originally Posted by *Muskaos*
> 
> Give that man a raise, and never let him leave.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I just bought a house, so I have a lot of cabling to look forward to. Gonna put a 24u rack in the basement, and run cat 6 where I need it.


Well, as long as you're running the cable, you might as well put in CAT 7 and/or some fiber instead of CAT 6 to make things more future-proof.


----------



## Plan9

Cat6a is futureproof, and it's also more practical than fibre.

Re the OCD guys who love cabling: you're damn lucky. The only guy in our office who likes cabling is bloody useless at it.


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> Cat6a is futureproof. And also more practical than fibre.
> 
> Re, the OCD guys who love cabling, you're Damn lucky. The only guy in our office that likes cabling is bloody useless at it.


Yeah, most likely, but considering that CAT 7 is only a couple dollars more than CAT 6A, why not?


----------



## TopicClocker

Quote:


> Originally Posted by *Plan9*
> 
> Cat6a is futureproof. And also more practical than fibre.
> 
> Re, the OCD guys who love cabling, you're Damn lucky. The only guy in our office that likes cabling is bloody useless at it.


lmao


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> Well as long as you're running the cable, might as well put in CAT 7 and/or some fiber instead of CAT 6 to make things more future proof.


I'm by no means a cabling / network specialist, but from what I've gathered from chatting to those who are, Cat7 doesn't actually offer any real-world benefit for upping your network throughput. Whereas Cat6a will comfortably push 10GbE, which should hold you in good stead for at least 10 years (looking at the pace of development over the last 10 years).
Quote:


> Originally Posted by *jibesh*
> 
> Yea most likely but considering that CAT 7 is only a couple more dollars than CAT 6A, why not?


----------



## jibesh

Quote:


> Originally Posted by *Plan9*
> 
> I'm by no means a cabling / network specialist, but from what I've gathered from chatting to those who are, cat7 doesnt actually offer any real world benefit for upping your network throughput. Where as cat6a will comfortably push out 10GbE, which should hold you in good stead for at least 10years (looking at the pace of development over the last 10years)


CAT 6A will be more than enough, but I'm just saying: if a 100ft spool of CAT 7 is like $5 more than a spool of CAT 6A, might as well get CAT 7.


----------



## DaveLT

Quote:


> Originally Posted by *jibesh*
> 
> CAT 6A will be more than enough but just saying if a spool of 100ft of CAT 7 is like $5 more than a spool for CAT 6A, might as well get CAT 7.


Just save the money. Unnecessary money wastage is unnecessary.


----------



## Plan9

Quote:


> Originally Posted by *jibesh*
> 
> CAT 6A will be more than enough but just saying if a spool of 100ft of CAT 7 is like $5 more than a spool for CAT 6A, might as well get CAT 7.


But if you're not getting any benefit from Cat7, then why not just donate that $5 to charity instead? Or perhaps you fancy buying these magic beans off me for the super special deal of just $5.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> But if you're not getting any benefit from Cat7 then why not just donate that $5 to charity instead. Or perhaps you fancy buying these magic beans off me for a super special deal of just $5.


This is OCN. We are all about overkill. If I could get a 100ft spool of Cat7 for only $5 more than Cat6a, I would also do it. Why future-proof for 10 years when you can future-proof for 12?

Just for fun: Cat7 is rated for 600MHz, where Cat6a is rated for 500MHz. Cat7 is ScTP where Cat6a is UTP. Each pair in Cat7 is shielded, where this is not always the case with Cat6a. Cat6a can come in UTP, STP, or ScTP, where Cat7 is only ScTP or STP. Cat7 also has much stricter specifications, with more stringent crosstalk and system-noise requirements. Does that really mean anything on a typical gigabit network? Nope. Cat6a and Cat7 are both rated at 10Gbps, so I say skip Starbucks one day next week and buy Cat7 if it's only a $5 difference. If it's a $50 difference, I say pass. Who knows if copper cabling will even be the standard in 10 years... just sayin'.
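Just to make the buying decision concrete, the comparison boils down to a tiny lookup plus a rule of thumb. A quick Python sketch; the `worth_cat7` helper and its threshold parameter are just my framing, and the bandwidth numbers are the TIA/ISO category ratings (Cat6a 500MHz, Cat7 600MHz):

```python
# Category ratings (bandwidth per TIA/ISO spec); both are rated for 10Gbps.
CABLE_SPECS = {
    "Cat6a": {"bandwidth_mhz": 500, "shielding": ("UTP", "STP", "ScTP")},
    "Cat7":  {"bandwidth_mhz": 600, "shielding": ("ScTP", "STP")},
}

def worth_cat7(price_delta_usd, threshold_usd=5.0):
    """Rule of thumb: spring for Cat7 only when the premium is pocket change."""
    return price_delta_usd <= threshold_usd

print(worth_cat7(5))   # $5 premium: skip Starbucks once and buy Cat7
print(worth_cat7(50))  # $50 premium: pass
```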


----------



## Wildcard36qs

All I know is QSFP+ cables are crazy expensive. They were over $100 per meter.


----------



## u3b3rg33k

Quote:


> Originally Posted by *jibesh*
> 
> CAT 6A will be more than enough but just saying if a spool of 100ft of CAT 7 is like $5 more than a spool for CAT 6A, might as well get CAT 7.


However, those of you who aren't good at terminating screened, foiled copper cabling will see WORSE performance than you would with UTP Cat6a. Also, don't forget that there are multiple grades of any "spec'd" wire. Remember "GigaSpeed" Category 5? Properly terminated, it can outperform a cheap 5e solution.

Also remember: if ANY component of a shielded system is unshielded (like using a UTP patch cord with STP), you've just thrown away virtually all the benefits of STP.

If you truly care about future-proofing, you'd be better off putting in MPO single-mode fiber. Remember the giant push for fiber-to-the-desktop? lol.

As for tracing cabling: if you ain't labeling, you're doing it wrong. Every wire should be labeled on both ends, and the runs documented. Then there is no tracing, because you already know exactly where everything is, in a searchable format. I've never been in a real data center that wasn't done this way; only in the hodgepodge small office, and in those situations, toning it out is easy peasy.


----------



## Plan9

Quote:


> Originally Posted by *u3b3rg33k*
> 
> However, those of you that aren't good at terminating screened, foiled copper cabling will see WORSE performance than you would with UTP 6a. also, don't forget that there are multiple grades of any "spec'd" wire. Remember "Gigaspeed" category 5? properly terminated, it can outperform a cheap 5e solution.
> 
> Also remember, if ANY component of a shielded system is unshielded (like using a UTP patchcord with STP), you just threw away virtually all the benefits of STP.
> 
> If you truely care about futureproofing, you'd be better off putting in MPO singlemode fiber. remember the giant push for Fiber-to-the-desktop? lol.
> 
> As for tracing cabling, if you ain't labeling, you're doing it wrong. every wire is to be labelled on both ends, and the runs documented. then there is no tracing, because you already know exactly where everything is in a searchable format. I've never been in a real data center that wasn't done this way - only the hodgepodge small office, and in those situations, toning it out is easy peasy.


those tone thingies are awesome


----------



## shadow5555

Servers:

Server below the TV:

Rackmount 4U case
Quad-core 2.5GHz
8GB DDR2
Promise SATA controller card
Gigabit NIC
500GB OS drive
15TB storage, 3TB of that for RAID parity

Uses: media server / Subsonic / TeamSpeak 3 server / Plex server

Server below that:
Dell PowerEdge 1500SC
Dual Xeon 2.8GHz
4GB DDR2 ECC RAM

Uses: DC / DHCP / DNS server



http://s1126.photobucket.com/user/shadow555/media/IMG_20130812_191934_zpsacdab117.jpg.html



Backup server:

P4 2.5GHz, I think
4GB DDR2
500GB OS drive
4TB storage





http://s1126.photobucket.com/user/shadow555/media/IMG_20130827_170823_zpsd91f50d5.jpg.html



Networking area:



http://s1126.photobucket.com/user/shadow555/media/IMG_20130812_191944_zpse15dfbec.jpg.html



Core 2 Duo 2.6GHz, I think
4GB DDR2
80GB HDD
Dual gigabit NICs

Runs Untangle in bridge mode as the perimeter firewall, 24/7.
Below that:
Linksys business-class gigabit switch
Linksys router running as a WAP


----------



## TopicClocker

Nice servers.


----------



## herkalurk

Quote:


> Originally Posted by *Plan9*
> 
> those tone thingies are awesome


That statement clearly says "I'm the desk sysadmin, not the one in the server room" lol...

Sadly, I work in a place small enough that I have to do both sides.


----------



## Plan9

Quote:


> Originally Posted by *herkalurk*
> 
> That statement clearly says "I'm the desk sysadmin, not the one in the server room" lol......


I'm both, actually. But thanks for the condescension.


----------



## Oedipus

I have a toner in my desk. It picks up whatever radio station our office is tuned to at the time... not sure how that works.


----------



## Hydroplane

My poweredge C1100 in its final resting spot. 2TB hard drive mirrored to a 3TB hard drive, for a total of 2TB of storage. I'm only using 300GB of it for now. I'm making myself go through all of the unwatched or unread stuff on there first before downloading more. I'll fill it eventually.









The fans on it were annoyingly loud when it was 78F in here, but now they're not perceptibly louder than my desktop fans at 68F.
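As an aside for anyone puzzled by the math in mixed-size mirrors like the 2TB + 3TB pair above: usable capacity is simply the smallest member of the set. A one-line sketch (the function name is mine):

```python
def mirror_capacity_tb(*drive_sizes_tb):
    """Usable capacity of a mirrored (RAID 1) set: limited by the smallest drive."""
    return min(drive_sizes_tb)

print(mirror_capacity_tb(2, 3))  # the 2TB + 3TB pair above yields 2TB usable
```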


----------



## Plan9

Quote:


> Originally Posted by *Hydroplane*
> 
> 
> 
> My poweredge C1100 in its final resting spot. 2TB hard drive mirrored to a 3TB hard drive, for a total of 2TB of storage. I'm only using 300GB of it for now. I'm making myself go through all of the unwatched or unread stuff on there first before downloading more. I'll fill it eventually.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The fans on it were annoyingly loud when it was 78F in here, but now they're not perceptively louder than my desktop fans at 68F.


What's your mixing desk for?


----------



## Hydroplane

Just my audio system. I use my Galaxy S4 as a source into the mixer. From there, I run a mono output into the amplifier (a Behringer iNuke 6000 DSP), and the DSP in the amplifier crosses over at 90Hz, with one channel fed to the sub and the other to the mains. The sub gets 2200W at 4 ohms and the mains each get 1100W. You might be able to see the 12-gauge, 4-conductor cable coming out of the back of the amplifier; that's the stuff they use to plug in dryers.







One of the projects on my to-do list is to widen that cart so that I can fit one of my laptops up there and use it as a music player instead of my phone.


----------



## Plan9

Quote:


> Originally Posted by *Hydroplane*
> 
> Just my audio system. I use my galaxy s4 as a source into the mixer. From there, I run a mono output into the amplifier (Behringer inuke 6000 dsp) and the dsp in the amplifier crosses over at 90hz with one channel fed to the sub and the other to the mains. The sub gets 2200w at 4 ohms and the mains each get 1100w. You might be able to see the 12 gauge 4 conductor cable coming out of the back of the amplifier, that's the stuff they use to plug in dryers
> 
> 
> 
> 
> 
> 
> 
> One of the projects on my to-do list is to widen that cart so that I can fit one of my laptops up there and use it as a music player instead of my phone.


Why only mono? I can see stereo RCAs going into the mixer.

How do you find your amp? My experience with Behringer has been pretty bad.


----------



## Hydroplane

The amplifier has only two channels. I have the left and right stereo inputs both panned to the left, then run just one xlr out of the mixer and into the amp which then crosses over to the sub out of one channel and the mains on the other. I'd need more amplifiers to run stereo or to bi-amp the mains. Eventually I will buy a few more, but right now the small improvement in sound quality wouldn't be worth the expense. Main limiting factor in my sound quality is the fact that I'm using 15" and 18" PA speakers in a room that's 13'x17' with sloped ceilings. Nothing I can do about that for now.


----------



## Plan9

Quote:


> Originally Posted by *Hydroplane*
> 
> The amplifier has only two channels. I have the left and right stereo inputs both panned to the left, then run just one xlr out of the mixer and into the amp which then crosses over to the sub out of one channel and the mains on the other. I'd need more amplifiers to run stereo or to bi-amp the mains. Eventually I will buy a few more, but right now the small improvement in sound quality wouldn't be worth the expense. Main limiting factor in my sound quality is the fact that I'm using 15" and 18" PA speakers in a room that's 13'x17' with sloped ceilings. Nothing I can do about that for now.


I take it the sub isn't powered then?


----------



## Hydroplane

Nope. It already weighs 148 pounds, I'd hate to move it with a 2000+ W amp in there too


----------



## Plan9

Quote:


> Originally Posted by *Hydroplane*
> 
> Nope. It already weighs 148 pounds, I'd hate to move it with a 2000+ W amp in there too


Wow. That's beefier than anything I have!


----------



## Hydroplane

I managed to get it up a flight of stairs


----------



## SuperMudkip

Got some new hardware! Got me 6 1U servers.











And this is the 6th 1U, which is on top of the 4U server that I previously posted.










Specs:

Short-depth servers:
Biostar P4M80-M7 (LGA775) w/ 3.2GHz Celeron, 1GB DDR RAM
Biostar P4M800 Pro-M7 (LGA775) w/ 3.00GHz Pentium, 1GB DDR RAM
Biostar U8668-D (PGA478) w/ 1.8GHz Pentium, 512MB DDR RAM
SuperMicro P4SGE (PGA478) w/ 2.8GHz Pentium 4, 1GB DDR RAM
SuperMicro P4SCi (PGA478) w/ 3.0GHz Pentium 4, 2GB DDR RAM

Full-sized server:
SuperMicro P4SC8 (PGA478) w/ 3.2GHz Pentium 4, 4GB DDR RAM

All of them came with a variety of hard drives, mostly 80GB to 120GB capacities, from Hitachi, Maxtor, and Western Digital. Also, whoever owned these servers put in new PSUs from SPI and SuperMicro. I got the server rails as well.

Price for all of this? $90.


----------



## xNovax

My server and office as it stands right now.


----------



## Farmer Boe

Quote:


> Originally Posted by *xNovax*
> 
> My server and office as it stands right now.
> 
> 
> Spoiler: Warning: Spoiler!


I like where you've positioned the server cabinet. Right by the window! The best for efficient cooling! Is your system set to exhaust or intake out the window?


----------



## xNovax

Quote:


> Originally Posted by *Farmer Boe*
> 
> I like where you've positioned the server cabinet. Right by the window! The best for efficient cooling! Is your system set to exhaust or intake out the window?


The big window behind the server doesn't open.


----------



## tycoonbob

Quote:


> Originally Posted by *Farmer Boe*
> 
> I like where you've positioned the server cabinet. Right by the window! The best for efficient cooling! Is your system set to exhaust or intake out the window?


The drapes really class up the place too.


----------



## CSCoder4ever

Well, as long as it's able to cool itself effectively, I don't think it should matter, right?


----------



## xNovax

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Well, as long as it's able to cool itself effectively, I don't think it should matter, right?


There are two exhaust fans for the room out the side windows.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> The drapes really class up the place too.










Drapes should be a standard for all server rooms and data centres.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> 
> 
> 
> 
> 
> 
> 
> drapes should be a standard for all server rooms and data centres


Nah, every time you fire up the server racks, the drapes will be torn apart by the sheer airflow.


----------



## Kylepdalton

Quote:


> Originally Posted by *DaveLT*
> 
> Nah, everytime you fire up the server racks the drapes will be torn apart by the sheer airflow


You hope they blow away instead of being sucked in. I had a fun experience with one of the industrial delta 120x38 mm 250 cfm fans on a tech bench next to a curtain. The fan sucked in a sheer curtain and ended up dragging the bench about a foot across the desk towards the open window. Fan survived; curtain not so much.


----------



## DaveLT

Quote:


> Originally Posted by *Kylepdalton*
> 
> You hope they blow away instead of being sucked in. I had a fun experience with one of the industrial delta 120x38 mm 250 cfm fans on a tech bench next to a curtain. The fan sucked in a sheer curtain and ended up dragging the bench about a foot across the desk towards the open window. Fan survived; curtain not so much.











I had even more fun with a PFC1212DE (I'm sure that's what you used, since it's 252CFM; that, or the FFC1212DE or TFC1212DE). I was testing it and forgot about some homework left next to the fan... well, guess what: it ate it. Into shreds.

Therefore I stopped using my paper shredder (even though it can handle 10+ sheets, it's nothing like what the PFC can eat) and used my new Delta shredder instead, until one day someone bought it off me and I netted a nice sum for the Delta. Although I'll probably buy a PFC1212DE again soon.

(I never actually buy my fans new, but rather lightly used, for about 1/5 the original cost.)
My ultimate paper shredder, though, is this...


----------



## TheNegotiator

I had 3 of those TFB0812UHE fans in an old Rackable Systems server running at full power. I could clearly hear them from 4 rooms away...


----------



## xNovax

Quote:


> Originally Posted by *DaveLT*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had even more fun with a PFC1212DE (I'm sure that's what you used since it's 252CFM that or the FFC1212DE or TFC1212DE) ... Testing it and then forgot some homework left next to the fan ... well guess what it ate it
> 
> 
> 
> 
> 
> 
> 
> 
> into shreds
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Therefore i stopped using my paper shredder (Even though it can handle 10+ pieces it is nothing like what the PFC can eat ...) and used my new Delta shredder until one day someone bought it off me and i netted a nice sum of money for the delta
> Although i'll probably buy a PFC1212DE again soon
> 
> 
> 
> 
> 
> 
> 
> (I never actually buy my fans new but rather lightly used for like 1/5 the original cost)
> My ultimate paper shredder though is this ...


My rack came with two Deltas preinstalled in the roof. I had to remove them; they are way too loud for the cooling my server needs.


----------



## Hydroplane

I think the little 38mm fans in my C1100 are Deltas. They have a very evil scream at full speed.


----------



## DaveLT

Quote:


> Originally Posted by *TheNegotiator*
> 
> I had 3 of those TFB0812UHE fans in an old Rackable Systems server running at full power. I could clearly hear them from 4 rooms away...


Its presence is massive as well; you just can't help but hear the screeching it makes, which sounds like a single-bypass jet engine.
Quote:


> Originally Posted by *Hydroplane*
> 
> I think the little 38mm fans in my C1100 are Deltas. They have a very evil scream at full speed.


Any 40mm fan sounds evil, even if it's a San Ace. My DF4056B12U (65dB version) makes them sound a bit... tame. It is the embodiment of evil, the tenth circle of hell, and certainly the stuff of nightmares.








(HP put 6 of these, IIRC, in the DL140 G2/DL145 G2; meanwhile HP put 9 Inventec 4048 fans in the DL360 G5. That's mad!)


----------



## Wildcard36qs

I am going to be getting a C1100 off ebay within the week. Just debating on config if I want 48 or 72GB RAM or if I should spring for the 6-cores.


----------



## Hydroplane

Quote:


> Originally Posted by *DaveLT*
> 
> It's presence as well is massive you just can't help yourself but hear the massive screeching it makes ... Which sounds like a single-bypass jet engine
> any 40mm fan sounds evil ... even if it's a San Ace ... My DF4056B12U (65dB version) makes them sound a bit ... tame. It is the embodiment of evil, the tenth circle of hell and certainly the stuff of nightmares
> 
> 
> 
> 
> 
> 
> 
> 
> (HP put 6 of these IIRC in a DL140G2/DL145G2, now, HP put 9 Inventecs 4048 fans in the DL360G5. That's mad!)


The PowerEdge isn't too bad if it's in the 60s in here, but when the temperature starts touching 80 it gets quite annoying. The fans run full blast when it's restarted (which is rare, considering it's a server), and that scream puts my old 4870X2s at 100% to shame. One of these days I'm going to stick it in the closet, once I get around to building a new shelf. I just use mine as a file server, so it doesn't generate much heat.

Which anime is the girl in your avatar from?


----------



## TheNegotiator

I came across an HP DL380 G6 on eBay for $150 and couldn't pass it up. It replaced the 2950 III. It only draws ~90 watts at idle, about half of what the 2950 used.

*OS:* Windows Server 2008 R2 Standard
*Case:* DL380 G6 stock
*CPU:* Intel Xeon E5520
*Motherboard:* DL380 G6 stock
*Memory:* 12GB DDR3
*PSU:* 2x 460W
*OS HDDs:* 2x 2.5" 72GB 10K SAS
*Storage HDDs:* 3x 2.5" 2TB
*Server Manufacturer:* HP


----------



## DaveLT

Quote:


> Originally Posted by *Hydroplane*
> 
> The poweredge isn't too bad if it's in the 60s in here, but when the temperature starts touching 80 it gets quite annoying. The fans will full blast if it's restarted (which is rare, considering it's a server) and that scream will put my old 4870x2s at 100% to shame. One of these days I'm going to stick it in the closet when I get around to building a new shelf. I just use mine as a file server so it doesn't generate much heat.
> 
> Which anime is the girl in your avatar from?


DAL/Date A Live







Quote:


> Originally Posted by *Wildcard36qs*
> 
> I am going to be getting a C1100 off ebay within the week. Just debating on config if I want 48 or 72GB RAM or if I should spring for the 6-cores.


48GB and definitely go for the hexa-cores! Specifically L5639


----------



## xNovax

Quote:


> Originally Posted by *TheNegotiator*
> 
> I came across a HP dl380 G6 on eBay for $150 and couldn't pass it up. It replaced the 2950 III. It only draws ~90 watts at idle, that's about half of what the 2950 used.
> 
> *OS:* Windows Server 2008 R2 Standard
> *Case:* dl380 G6 stock
> *CPU:* Intel Xeon E5520
> *Motherboard:* dl380 G6 stock
> *Memory:* 12GB DDR3
> *PSU:* 2x 460w
> *OS HDDs:* 2x 2.5" 72GB 10k SAS
> *Storage HDDs:* 3x 2.5" 2TB
> *Server Manufacturer:* HP


Link if it is still available.


----------



## TheNegotiator

Quote:


> Originally Posted by *xNovax*
> 
> Link if it is still available.


They only had one unfortunately. I'll post a link if I come across any other good deals.

Edit: Here's an identical config for $279 shipped, best offer accepted. Not as good a deal, but still much cheaper than any similarly spec'd R710s.


----------



## xNovax

Could you measure the length of the server/rails, please?


----------



## TheNegotiator

Quote:


> Originally Posted by *xNovax*
> 
> Could you measure the length of the server/rails, please?


The server itself is 3.38" H x 17.54" W (19" incl. rack ears) x 27.25" L. I don't really have a way to measure the rails since the rack has stuff on either side of it.


----------



## xNovax

Quote:


> Originally Posted by *TheNegotiator*
> 
> The server itself is 3.38" H x 17.54" W (19" incl. rack ears) x 27.25" L. I don't really have a way to measure the rails since the rack has stuff on either side of it.


Well, that info will do. Thank you. The server is a bit too long.


----------



## DaveLT

Quote:


> Originally Posted by *xNovax*
> 
> Well that info will do. Thank you. Server is a bit too long.


It's a normal length, you know...


----------



## TheNegotiator

Quote:


> Originally Posted by *DaveLT*
> 
> It's normal length you know ...


^What he said.

Are you wanting to mount it in a rack? If so, what are the internal dimensions of the rack? The rails are adjustable - IIRC the server is ~2" longer than the rail ears when collapsed all the way down.


----------



## Wildcard36qs

Is the HP ProLiant DL160 G6 basically the same thing as the Dell C1100? I see very similar specs. Also, if I got one, what should I do drive-wise on the cheap?


----------



## TheNegotiator

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Are the HP Proliant DL160 G6 basically same thing as the Dell C1100? I see very similar specs. Also if I got one, what should I do drive wise on the cheap?


The DL160 G6 is the equivalent of the PowerEdge R610 as far as I can tell. What are you going to use it for? I've used WD Red and Green drives for my home storage servers, but I wouldn't recommend them for a business server.


----------



## Wildcard36qs

Quote:


> Originally Posted by *TheNegotiator*
> 
> The DL160 G6 is the equivalent of the PowerEdge R610 as far as I can tell. What are you going to use it for? I've used WD Red and Green drives for my home storage servers, but I wouldn't recommend them for a business server.


Going to be used for my home office. Will be running ESXi and running several different VMs - Server 2012 and Hyper-V testing mostly. I'll use it as a storage server as well.


----------



## Hydroplane

Quote:


> Originally Posted by *TheNegotiator*
> 
> The DL160 G6 is the equivalent of the PowerEdge R610 as far as I can tell. What are you going to use it for? I've used WD Red and Green drives for my home storage servers, but I wouldn't recommend them for a business server.


I use the 2TB Green in my C1100; it hasn't had any issues in 2+ years. I have it backed up to a Seagate 3TB I tore out of an external drive (it was cheaper than the internal one last Black Friday) that's known for being even more unreliable lol. Though I do agree with you, there are better drives for business use, but they tend to cost 2-3 times as much per GB as "civilian" drives. Paying more for a drive doesn't guarantee against failure, so you should still have a backup.


----------



## xNovax

Quote:


> Originally Posted by *TheNegotiator*
> 
> ^What he said.
> 
> Are you wanting to mount it in a rack? If so, what are the internal dimensions of the rack? The rails are adjustable - IIRC the server is ~2" longer than the rail ears when collapsed all the way down.


This is the rack that I have. http://www.ebay.ca/itm/321181113497?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649

I had to modify the rails on my HP Storageworks 60.


----------



## Hydroplane

That's a pretty intense rack. Where'd you find it? How much does one of those go for?


----------



## TheNegotiator

Quote:


> Originally Posted by *xNovax*
> 
> This is the rack that I have. http://www.ebay.ca/itm/321181113497?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649
> 
> I had to modify the rails on my HP Storageworks 60.
> 
> 
> Spoiler: Pictures


The rails ProLiants use are different in design from the rails StorageWorks use. (PowerEdge/PowerVault is the same way.)

I pulled the server out of the rack and got the measurements with the rails.

From the front rack ear to the rear rack ear is about 22.75"


From the front faceplate to the cable management clip is just under 30"


The front rack ear to the cable management clip is about 29.5"


So as long as you have ~29.50" between the front rail and the rear door and have 22.75" or more between the front and rear rails, it'll fit.

On a side note, the C1100 in your rack is 27.8" long, slightly longer than the 27.25" DL380 G6.
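Those two constraints are easy to check against a prospective rack before buying. A quick sketch using the figures above; the helper name and the idea of encoding it this way are mine:

```python
def dl380_g6_fits(front_rail_to_rear_door_in, front_to_rear_rail_in):
    """Check the two depth constraints measured above (all figures in inches)."""
    NEEDED_DEPTH = 29.5    # front rack ear to cable-management clip
    MIN_RAIL_SPAN = 22.75  # front rack ear to rear rack ear
    return (front_rail_to_rear_door_in >= NEEDED_DEPTH
            and front_to_rear_rail_in >= MIN_RAIL_SPAN)

print(dl380_g6_fits(30.0, 23.0))  # roomy rack: fits
print(dl380_g6_fits(28.0, 23.0))  # rear door too close: doesn't fit
```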


----------



## xNovax

Quote:


> Originally Posted by *TheNegotiator*
> 
> The rails ProLiant's use are different in design from the rails Storageworks use. (PowerEdge/PowerVault is the same way)
> 
> I pulled the server out of the rack and got the measurements with the rails.
> 
> From the front rack ear to the rear rack ear is about 22.75"
> 
> 
> From the front faceplate to the cable management clip is just under 30"
> 
> 
> The front rack ear to the cable management clip is about 29.5"
> 
> 
> So as long as you have ~29.50" between the front rail and the rear door and have 22.75" or more between the front and rear rails, it'll fit.
> 
> On a side note, the C1100 in your rack is 27.8" long, slightly longer than the 27.25" DL380 G6.


Thanks for the help.


----------



## TheNegotiator

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Are the HP Proliant DL160 G6 basically same thing as the Dell C1100? I see very similar specs. Also if I got one, what should I do drive wise on the cheap?


Quote:


> Originally Posted by *TheNegotiator*
> 
> The DL160 G6 is the equivalent of the PowerEdge R610 as far as I can tell. What are you going to use it for? I've used WD Red and Green drives for my home storage servers, but I wouldn't recommend them for a business server.


I confused the DL160 G6 with the DL360 G6. The DL160 G6 is probably closer to the C1100 while the DL360 G6 is closer to the R610.


----------



## Wildcard36qs

Quote:


> Originally Posted by *TheNegotiator*
> 
> I confused the DL160 G6 with the DL360 G6. The DL160 G6 is probably closer to the C1100 while the DL360 G6 is closer to the R610.


Yeah, I agree. Before I pull the trigger on one of these things: for free ESXi, I know there is a 32GB limit, but if I have more RAM, will it still work? Or do I have to physically have only 32GB? I'm about to grab a C1100 with 2x L5639 and 36GB RAM minimum.


----------



## xNovax

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Yeah I agree. Before I pull the trigger on one of these things. For ESXi free, I know there is a 32GB limit, but if I have more RAM, will it still work? Or do I have to physically have only 32GB. I am about to grab a C1100 with 2x L5639 and 36GB RAM minimum.


It is a physical RAM limit. The system will not boot into ESXi with more than 32GB.


----------



## Wildcard36qs

Thanks. Hmm, maybe I'll forgo ESXi and just go straight Hyper-V.


----------



## levontraut

I thought it would only utilize 32 gigs, so if you had 64 gigs you pretty much lost out on the rest of the RAM.

The same with CPUs: the limit is 4 CPUs, so if you've got 6, it will only use 4 of them.


----------



## Plan9

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Thanks. Hmmm maybe I'll forgo ESXi and just go straight hyper v


There are other ESX-like hypervisors on the market. Personally I use Proxmox on one of my physical boxes. It's free to use, but optional support licences are available.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> There's other ESX-like hypervisors on the market. Personally I use Proxmox on one of my physical boxes. It's free to use but does offer optional support licences


XenCloud Platform is also worth looking at.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> XenCloud Platform is also worth looking at.


I've not used XenCloud Platform before, but anything based on Xen should be pretty solid.

Currently I'm building a platform based on FreeBSD Jails


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> I've not used XenCloud Platform before, but anything based on Xen should be pretty solid.
> 
> Currently I'm building a platform based on FreeBSD Jails


I've started to play more with OpenVZ, which is similar to Jails/containers, I think.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> I've started to play more with OpenVZ, which is similar to Jails/containers, I think.


OpenVZ is container-based, so it's the same idea as Jails and Solaris Zones. In fact, I use OpenVZ containers on the Proxmox box I mentioned earlier, rather than full hardware virtualisation.


----------



## Wildcard36qs

I'm familiar with Proxmox, and my brother is running it on his server, so I may check that out.

One last thing before I pull the trigger on one of these C1100s: I know it has an Intel ICH10R chipset, and I've read varying reports of poor RAID performance. Originally I was just going to throw 4 hard drives in and run RAID 10, but has anyone had experience with this chipset and RAID? Or should I just buy a cheap add-in card, or even go SSD?


----------



## tycoonbob

Quote:


> Originally Posted by *Wildcard36qs*
> 
> I have been familiar with proxmox and my brother is running that on his server so I may check that out.
> 
> One last thing before I pull the trigger on one of these C1100s: I know it has an Intel ICH10R chipset and I have read varying reports of poor RAID performance. Originally I was just going to throw 4 harddrives in and run a RAID 10, but has anyone had experience with the chipset and RAID? Or should I just buy a cheap add-in card, or even go SSD?


I say grab a Dell PERC 5i for about $15 from eBay, and get a breakout cable. It will do RAID 10 with pretty good performance, as long as your drives are 2TB or smaller each.
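Since RAID level choice keeps coming up in this thread, the capacity trade-offs are easy to sanity-check with a few lines. This is only an illustrative sketch (the `usable_tb` helper is hypothetical, and real controllers reserve a little space for metadata):

```python
# Rough usable-capacity math for common RAID levels,
# assuming equal-sized drives. Illustrative only.

def usable_tb(level, drives, size_tb):
    if level == "raid0":
        return drives * size_tb        # striping: all capacity, no redundancy
    if level == "raid1":
        return size_tb                 # mirror: one drive's worth
    if level == "raid5":
        return (drives - 1) * size_tb  # one drive's worth lost to parity
    if level == "raid10":
        return (drives // 2) * size_tb # striped mirrors: half the drives
    raise ValueError(f"unknown level: {level}")

# The 4 x 2TB RAID 10 being considered above:
print(usable_tb("raid10", 4, 2))  # prints 4
```

So the 4 x 2TB RAID 10 discussed here would give about 4TB usable, versus 6TB for RAID 5 on the same drives.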


----------



## tiro_uspsss

Quote:


> Originally Posted by *Wildcard36qs*
> 
> I have been familiar with proxmox and my brother is running that on his server so I may check that out.
> 
> One last thing before I pull the trigger on one of these C1100s: I know it has an Intel ICH10R chipset and I have read varying reports of poor RAID performance. Originally I was just going to throw 4 harddrives in and run a RAID 10, but has anyone had experience with the chipset and RAID? Or should I just buy a cheap add-in card, or even go SSD?


ICH10R is only poor at RAID 5; at RAID 0, 1 and, IIRC, 10 it does quite well.


----------



## Oedipus

Parity RAID via fakeraid is never going to perform worth a crap.


----------



## DaveLT

Quote:


> Originally Posted by *Oedipus*
> 
> Parity RAID via fakeraid is never going to perform worth a crap.


Agreed.


----------



## driftingforlife

Want to build/buy a VM server next. I still need to buy the HDDs for my file server.









It's this: http://www.ebay.co.uk/itm/400570718512?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1423.l2649

or I spend more and get an SR-2 with 2 4c/8t Xeons. Going towards the SR-2 setup even though it will cost more, as it will have more power/RAM and less power usage, and I can put it in a proper case and not have server fans making my ears bleed.


----------



## SuperMudkip

Dat server.


----------



## SuperMudkip

Quote:


> Originally Posted by *driftingforlife*
> 
> Want to build/buy a VM server next. I still need to buy the HDDs for my file server
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Its this http://www.ebay.co.uk/itm/400570718512?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1423.l2649
> 
> or I spend more and get an SR-2 with 2 4c/8t Xeons. Going towards the SR-2 setup even though it will cost more, as it will have more power/RAM and less power usage, and I can put it in a proper case and not have server fans making my ears bleed


Could always buy a 4U case where you can fit 120mm fans. It's only the little fans that make noise, and they're usually used in 1U servers.


----------



## driftingforlife

You can't use a normal case for the 1U hardware though.

I'm leaning toward the SR-2 set-up more, for twice the cores and better power draw.


----------



## SuperMudkip

Quote:


> Originally Posted by *driftingforlife*
> 
> You can't use a normal case for the 1U hardware though.
> 
> I leaning toward the SR-2 set-up more for twice the cores and better power draw.


Well, you actually can if you get server hardware from Supermicro or Tyan that has an ATX form factor, and you'll be fine. I have mobos from Supermicro 1U servers and I can put them in a regular case like any other mobo. Just make sure they're ATX.


----------



## driftingforlife

It would be out of the Dell one though. Made my mind up; will save for the SR-2 set-up. Can always upgrade to 6-cores later.


----------



## DaveLT

Errr ... any server motherboard will do what the SR-2 does, only cheaper. Besides, the SR-2 is even larger than server mobos.


----------



## driftingforlife

Yes, but I can use the SR-2 for OCing as well.


----------



## DaveLT

Quote:


> Originally Posted by *driftingforlife*
> 
> Yes, but i can use the SR-2 for OCing as well.


You wouldn't OC a server ... you just won't. It's not safe


----------



## greenscobie86

My server









Works fine running VirtualBox giving me everything I need.


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> You wouldn't OC a server ... you just won't. It's not safe


Some people have to learn by doing. The reason you don't overclock a server is stability: a server is meant to run non-stop, and desktop computers are not designed that way. Overclocking anything decreases stability. Period.


----------



## driftingforlife

LOL, I'm not that stupid.

I'm an LN2 bencher; it will be at stock when used as a server but OCed for benching.


----------



## Jeci

Quote:


> Originally Posted by *driftingforlife*
> 
> Want to build/buy a VM server next. I still need to buy the HDDs for my file server
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Its this http://www.ebay.co.uk/itm/400570718512?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1423.l2649
> 
> or I spend more and get an SR-2 with 2 4c/8t Xeons. Going towards the SR-2 setup even though it will cost more, as it will have more power/RAM and less power usage, and I can put it in a proper case and not have server fans making my ears bleed


I was going to do this; good luck finding a server case that supports an HPTX motherboard. The only ones I could find are the Supermicro ones, which cost the earth.


----------



## driftingforlife

I have an LD PC-V4 bench table that it will fit on, and you can make HPTX fit in a Cosmos S with some adjustment.


----------



## DaveLT

Quote:


> Originally Posted by *Jeci*
> 
> I was going to do this; good luck finding a server case that supports an HPTX motherboard. The only ones I could find are the Supermicro ones, which cost the earth.


Well, you can buy a Rosewill 4U case, remove the fan + HDD portions (with some drilling involved, I bet) and then put fans up front.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Jeci*
> 
> I was going to do this; good luck finding a server case that supports an HPTX motherboard. The only ones I could find are the Supermicro ones, which cost the earth.


Xigmatek Elysium, or whatever it's called.


----------



## suicidegybe

This case does; just strip out the crap that comes in it. Just ordered one myself: $425 for a $1000 case, not bad. http://www.ebay.com/itm/Supermicro-24-Bay-SATA-4U-AMD-DC-2-0GHz-8GB-H8DME-2-SC846TQ-Server-/151128996482?pt=COMP_EN_Servers&hash=item232ffd7a82


----------



## DaveLT

Quote:


> Originally Posted by *suicidegybe*
> 
> This case does just strip out the crap that comes in it. Just ordered one myself $425 for a $1000 case not bad. http://www.ebay.com/itm/Supermicro-24-Bay-SATA-4U-AMD-DC-2-0GHz-8GB-H8DME-2-SC846TQ-Server-/151128996482?pt=COMP_EN_Servers&hash=item232ffd7a82


Keep the mobo, lol. And put in hexa-core Thubans


----------



## lowfat

Quote:


> Originally Posted by *driftingforlife*
> 
> You can't use a normal case for the 1U hardware though.
> 
> I leaning toward the SR-2 set-up more for twice the cores and better power draw.


Definitely wouldn't suggest the SR-2 for a server, ever. It isn't exactly a stable board; when I got rid of both of mine it was a good day. Plus, no IPMI on a server just ain't cool.


----------



## driftingforlife

It won't be a permanent server, just set up when I want to try some stuff. Thanks for letting me know though.


----------



## flyin15sec

I recently converted two older systems into one. These house various movie ISOs, MP3s, etc., so speed was not terribly important; I wanted a single large volume. Additionally, the Q9550 system had two bad drives go out at the same time. The Tempest case is a pain for swapping drives, so moving them to a hot-swap chassis will make swapping much easier in the future.

Old System:
Intel Q9550
NZXT Tempest
Gigabyte EP45-UD3P
8GB DDR2 800MHz
8 x 2TB




Iomega NAS


New Server:
Xeon 5506
EVGA X58
12GB ECC DDR3
12 x 2 TB
FreeNAS 9.1.1 (ZFS1)
Rosewill RSV-L4411
Generic 4 port SATA PCI adapter


----------



## driftingforlife

Just bought 2 of these, got them for £40 each.

http://www.ebay.co.uk/itm/321217485496


----------



## DaveLT

Might get this. You SFF people look a bit tame now


----------



## Beezie

DaveLT: is that a Dell C6100 node in a custom box with 1U active CPU cooling?
Where is it from? BuyChina?


----------



## tycoonbob

Quote:


> Originally Posted by *Beezie*
> 
> DaveLT: is that a dell c6100 node with a custom box with 1u active cpu cooling?
> were is it from? buychina?


That is exactly what it is. Taobao, probably.


----------



## TheReciever

Where are the PCIe 2.0 slots on the C1100? I have been looking at some pictures but can't seem to locate them.

Been considering getting one soon, just covering every angle before I pull the trigger, thanks!

EDIT: Did some more looking around, looks like I might need some riser cards for this


----------



## xNovax

Quote:


> Originally Posted by *TheReciever*
> 
> Where are the PCIe 2.0 slots on the C1100? I have been looking at some pictures but cant seem to locate it
> 
> Been considering getting one soon, just covering every angle before I pull the trigger, thanks!
> 
> EDIT: Did some more looking around, looks like I might need some riser cards for this


There is only one slot in the C1100. Also it comes with a riser card.


----------



## TheReciever

Weird, then it must be the one towards the rear. Dell's website lists it as having 3 PCIe Gen2 slots.


----------



## tycoonbob

Quote:


> Originally Posted by *xNovax*
> 
> There is only one slot in the C1100. Also it comes with a riser card.


There is also a slot right by the PCIe slot, which I believe is for a mezzanine card. I've not been able to confirm though.


----------



## SuperMudkip

Quote:


> Originally Posted by *SuperMudkip*
> 
> Got some new hardware! Got me 6 1U servers.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And this is the 6th 1U which is ontop of my 4U server that I previously posted
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Specs:
> 
> Short depth servers:
> Biostar P4M80-M7 (LGA775) w/ Celeron 3.2 Ghz, 1 GB DDR RAM
> Biostar P4M800 Pro-M7 (LGA775) w/ 3.00 Ghz Pentium, 1 GB DDR RAM
> Biostar U8668-D (PGA478) w/ 1.8 Ghz Pentium, 512MB DDR RAM
> SuperMicro P4SGE (PGA478) w/ 2.8 Ghz Pentium 4, 1GB DDR RAM
> SuperMicro P4SCi (PGA478) w/ 3.0 Ghz Pentium 4, 2GB DDR RAM
> 
> Full Length server
> SuperMicro P4SC8 (PGA478) w/ 3.2 Ghz Pentium 4, 4 GB DDR RAM
> 
> All of them come with a variety of hard drives, mostly 80GB to 120GB capacities; some are Hitachi, Maxtor, and Western Digital brands. Also, whoever owned these servers put in new PSUs, which are from SPI and SuperMicro. I also got the server rails as well.
> 
> Price for this? $90.


All in all, I'm thinking about keeping the full-length server and two of the LGA775 servers. I really hate to sell the other 3, but in actuality it seems like I would just be using the 4U and maybe 3 of the 1U chassis. So yeah: thinking about $100 each? I mean, the cases themselves are like $120 (I think even without the PSU).


----------



## DaveLT

Quote:


> Originally Posted by *Beezie*
> 
> DaveLT: is that a dell c6100 node with a custom box with 1u active cpu cooling?
> were is it from? buychina?


Yes, taobao. It's actually more like 1.5U but w/e, it's tiny as heck


----------



## TheReciever

Is there a link for it?

Guess it couldn't hurt to look around that site, lol; lots of random things there.


----------



## DaveLT

Quote:


> Originally Posted by *TheReciever*
> 
> Is there link for it?
> 
> Guess it couldnt hurt to look around that site lol, lots of random things there


Comes with 16GB 1333 ECC RAM and 2 L5639s!
http://item.taobao.com/item.htm?spm=a1z10.1.w4004-1496436775.2.5xWeCJ&id=35099767610


----------



## TheReciever

It looks like it's expandable to 32GB, if I'm not mistaken?


----------



## DaveLT

Quote:


> Originally Posted by *TheReciever*
> 
> It looks like its expandable to 32GB if Im not mistaken?


96GB is actually possible if you use 8GB sticks. I would recommend throwing another 2 sticks in for triple-channel (stock is 4x4GB).


----------



## Plan9

My home server only has a total of 8GB RAM and I'm running a number of next gen technologies.









Sometimes I quite like running on smaller budgets / specs as it forces you to be more efficient and to have a little more foresight in your build.


----------



## DaveLT

Yeah. Anything above 24GB for a home server is IMO overkill and often very expensive.


----------



## bobfig

Shoot, I'm only on 4GB and using stuff from my old computer. It runs anything I need and doesn't need anything more, other than more hard drive space. Just those disks are so flippin' expensive.


----------



## Sean Webster

more RAM = better.


----------



## Plan9

Quote:


> Originally Posted by *Sean Webster*
> 
> more RAM = better.


more RAM == lazier sysadmin


----------



## tycoonbob

Quote:


> Originally Posted by *DaveLT*
> 
> Yeah. Anything above 24GB for a homeserver is IMO overkill and often very expensive


My pair of C1100s is up to 48GB each currently, haha. My storage server has 8GB of RAM (non-ZFS), just because. My HTPC that I am building this evening (parts arriving this afternoon!) will have 8GB also. Why? Just because.

FWIW, my 4 CentOS VMs have either 512MB or 640MB of RAM, whereas my Server 2012 boxes are 1536MB at a minimum (for a full GUI install), and 512MB for Server 2012 Core installs. I'm very stingy with my RAM for the most part, but I do have a lot of unnecessary VMs running (full System Center lab, Exchange, SharePoint, Lync, a 3-node XenApp environment, a 2-node XenDesktop environment, and some other learning/lab stuff).

Maybe I should make a thread for my network and all the things I have running on it. I love making Visio diagrams.


----------



## Plan9

I'm impressed Server 2012 runs on 512MB.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> I'm impressed Server 2012 runs on 512MB.


Well, let me explain a little more. I actually have two Server 2012 boxes running with 640MB of static RAM, on Hyper-V on Server 2012 R2 (I thought it was 512MB).

One of the boxes is my FTP box, which runs the IIS role configured for FTPS (FTP over SSL), and is also my DFS server (about 12 shares for 6 servers, approximately). Running a PoSH command:

```
(Get-Counter -Counter "\Memory\Available MBytes" -ComputerName FTP01).CounterSamples[0].CookedValue
```

I currently have 96MB of RAM free on that VM. When I log in via console to this VM, all I get is a PowerShell command prompt but it's snappy. No cursor lag, and the login is quick.

My other 2012 VM with 640MB of RAM is my RDS box (Remote Desktop Services). All I have configured on it is the RDS role and the RD Gateway role feature (this box allows me to RDP into any of my servers over WAN without having to configure and forward a port for each server; it makes things simple when I use Devolutions Remote Desktop Manager, so I can get to all my servers from work or wherever I am). It's also configured with a Core install, and that same PoSH command returns 184-212MB RAM free (if I run it over and over), meaning it's running on as little as 428MB of RAM, with 4 managed connections on that RDS role.

I'm honestly surprised that I was able to get these working like this, but it does. The only thing I hate is that these boxes have a 30GB VHDx. If CentOS could host RDS and DFS for me, I would get rid of these VMs just so I could free up 20-25GB of storage space.

On a similar note (similar to my RDS box), I have a CentOS 6.4 x64 box set up with 2 vCPUs and 512MB RAM (running on Hyper-V 2012 R2) that is my RDGATEWAY server. Very similar to RDS, but it runs an application called Guacamole, which gives me RDP/SSH/VNC capabilities from any HTML5-enabled browser (i.e., I can RDP to all my servers over LAN or WAN from my Chromebook). It's only using 220MB of the 512MB of RAM available, and that's when brokering 3 active connections as well. I wrote a blog post about Guacamole a few months back, if you're interested in reading (I don't write many blog posts -- just when I feel like it):
http://deviantengineer.com/guacamole-html5-rdp

EDIT:
Looks like I have a third Server 2012 VM with low RAM; it is running with 512MB static. All it does is host my UniFi AP Controller software. It currently has 71MB of RAM free and is using a 25GB VHDx. That software is only supported on Mac and Windows, not Linux (so strange). However, it's a Java app, so I'm sure I could pull out the .jar and get it to work; I just haven't had the time to try.


----------



## Plan9

Impressive stuff (I'll have a read of that blog post later)

Up until a couple of months ago, I had 1 VM running on 128MB RAM - but that was just used as a WAN-facing SSH server. I also had some 512MB and 256MB servers (private web servers, IRC servers, nothing too heavy). These days I have them all running as FreeBSD containers (with no RAM limit), so I've got no idea how much memory each container is consuming.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> Impressive stuff (I'll have a read of that blog post later)
> 
> Up until a couple of months ago, I had 1 VM running on 128MB RAM - but that was just used as a WAN-facing SSH server. I also had some 512MB and 256MB servers (private web servers, IRC servers, nothing too heavy). These days I have them all running as FreeBSD containers (with no RAM limit), so I've got no idea how much memory each container is consuming.


I don't know jack about BSD, haha. I no longer have any web servers hosted at home (ChicagoVPS is just too cheap to deal with hosting myself), but I do have 3 sites hosted on my VPS, including deviantengineer.com, which I linked above (that's my personal tech blog, which I don't do a whole lot with). I could probably drop my CentOS VMs to 256MB, but I don't need to be that stingy...at least not yet. With 26 VMs running across both my Dell C1100s, I still have something like 38GB (combined) of RAM free.


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> My pair of C1100s are up to 48GB each currently, haha. my storage server has 8GB of RAM (non-ZFS), just because. My HTPC that I am building this evening (parts arriving this afternoon!) will have 8GB also. Why? Just because.
> 
> FWIW, my 4 CentOS VMs have either 512MB or 640MB of RAM, whereas my Server 2012 boxes are 1536MB at a minimum (for a full GUI install), and 512MB for Server 2012 Core installs. I'm very stingy with my RAM for the most part, but I do have a lot of unnecessary VMs running (full System Center lab, Exchange, SharePoint, Lync, a 3 node XenApp environment, 2 node XenDesktop environment, and some other learning/lab stuff).
> 
> Maybe I should make a thread for my network and all the things I have running on it. I love making Visio diagrams.


I say that, but I feel like a hypocrite since my file server has 24GB as well.


----------



## lowfat

Quote:


> Originally Posted by *DaveLT*
> 
> Might get this. You SFF people look a bit tame now


I think I'm in love.








Quote:


> Originally Posted by *DaveLT*
> 
> Yeah. Anything above 24GB for a homeserver is IMO overkill and often very expensive


64GB of registered ECC in mine. But I got it for the ridiculously low price of $150 shipped!


----------



## Jeci

I'm picking up a pair of Dell SC1435's tomorrow afternoon with the following spec:


AMD Opteron 2376 (Dual)
16GB ECC RAM
80GB HDD
Dual Gigabit NICs
They're not going to set any compute records, but they should serve their purpose nicely (one for Plex Media Server, one for test VMs)


----------



## bav182

My little server of joy..

Used for File Server, Media Server & Print Server.

Microsoft Server 2012 Essentials
HP Proliant Case
AMD Turion N40L 1.5GHz
HP 041 Motherboard
8GB Kingston DDR3 1066MHz
250GB Seagate VB0250EA (OS)
3x 2TB Seagate ST2000DM001 (Storage)
3TB Seagate Backup+ USB (Backup)
150W HP Power Supply

It'll hold me until I can afford a nice Dual Xeon Server


----------



## fritz_sean

I just picked up a Dell C6100 with 8x six-core Xeon L5639 and 96GB of RAM.

I will post some pictures when I can.


----------



## driftingforlife

FFFUUUUUUUU jelllllllllllllly


----------



## TheReciever

Right? lol


----------



## fritz_sean

Quote:


> Originally Posted by *driftingforlife*
> 
> FFFUUUUUUUU jelllllllllllllly


Quote:


> Originally Posted by *TheReciever*
> 
> Right? lol


lol my bad!


----------



## GigaByte

http://s1194.photobucket.com/user/gamerx1990/media/IMG_3639.jpg.html

Sig laptop running a Counter-Strike: Global Offensive idle server.


----------



## Wildcard36qs

Quote:


> Originally Posted by *fritz_sean*
> 
> I just picked up a Dell C6100 with 8x Six Core Xeon L5639 and 96 GB of Ram.
> 
> I will post some pictures when I can.


Holy cow I hope you can utilize all that. Lots of power at your disposal. Lol


----------



## fritz_sean

I will put it to good use. Between work and personal use, I will find something to do with it.


----------



## scutzi128

OS: Windows 7
Case: NMEDIA 6000B
CPU: 2600k @ 4.7
Motherboard: Asrock P67 Extreme 6
Memory: 16GB DDR3 @ 1600MHz
PSU: Corsair CX600
Video: GTX460 768MB @ 900 MHz
Misc: Ceton Tuner Card
OS HDD: Corsair M4 256 GB
Storage HDD(s): 16TB total (1TB WDB, 2 x 1.5TB WDG, 5 x 2TB WDG, 2TB USB 3.0 Portable External), using an eSATA Probox enclosure
Server Manufacturer: Me

I use this server for a TS3 server / PC Racing Gaming / Media Server / HTPC (XBMC) / Gaming Server/ DVR (WMC) / Transcoding or Encoding Videos / FTP Server / Automated Backups

http://s144.photobucket.com/user/scutzi128/media/Server/74c53261.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/b30eb860.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/12cc56ac.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/db0f5cda.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/IMG_3487_zpsddd017ab.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/IMG_3488_zps0609e754.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/IMG_3492_zpscc74dd42.jpg.html
http://s144.photobucket.com/user/scutzi128/media/Server/IMG_3493_zpsae624269.jpg.html
http://s144.photobucket.com/user/scutzi128/media/9e118b0e.jpg.html


----------



## CloudX

Nice!


----------



## TopicClocker

Mother of...
Quote:


> Originally Posted by *scutzi128*
> 
> OS: Windows 7
> Case: NMEDIA 6000B
> CPU: 2600k @ 4.7
> Motherboard: Asrock P67 Extreme 6
> Memory: 16gb ddr3 @ 1600 MHZ
> PSU: Corsair CX600
> Video: GTX460 768MB @ 900 MHz
> Misc: Ceton Tuner Card
> OS HDD: Corsair M4 256 GB
> Storage HDD(s): 16TB total: 1TB WDB, 2 x 1.5TB WDG, 5 x 2TB WDG, 2TB USB 3.0 Portable External) using ESata Probox Enclosure
> Server Manufacturer: Me
> 
> I use this server for a TS3 server / PC Racing Gaming / Media Server / HTPC (XBMC) / Gaming Server/ DVR (WMC) / Transcoding or Encoding Videos / FTP Server / Automated Backups
> 
> http://s144.photobucket.com/user/scutzi128/media/Server/IMG_3493_zpsae624269.jpg.html


----------



## NKrader

Cord clutter and mismatched heatsinks to be fixed soon, but here are two dual-socket six-core cruncher rigs powered by a single PSU.


----------



## Mugen87

Quote:


> Originally Posted by *tycoonbob*
> 
> My pair of C1100s are up to 48GB each currently, haha. my storage server has 8GB of RAM (non-ZFS), just because. My HTPC that I am building this evening (parts arriving this afternoon!) will have 8GB also. Why? Just because.
> 
> FWIW, my 4 CentOS VMs have either 512MB or 640MB of RAM, whereas my Server 2012 boxes are 1536MB at a minimum (for a full GUI install), and 512MB for Server 2012 Core installs. I'm very stingy with my RAM for the most part, but I do have a lot of unnecessary VMs running (full System Center lab, Exchange, SharePoint, Lync, a 3 node XenApp environment, 2 node XenDesktop environment, and some other learning/lab stuff).
> 
> Maybe I should make a thread for my network and all the things I have running on it. I love making Visio diagrams.


I would be all about that thread. Can I ask what you do for a living? I have really dived deep into VMs myself and would love more info on your setup.

I'm fresh out of school (AA in network administration) and my plan is to create a lab environment. For me it's hard to run multiple VMs with limited RAM, so I want to dual boot my main sys with a super light Linux CLI build and run more VMs that way.


----------



## cones

Quote:


> Originally Posted by *NKrader*
> 
> Cord clutter and mismatch heatsinks fixed soon, but dual dual socket six core cruncher rigs powered by a single PSU
> 
> 


Curious how that 24-pin splitter works for the motherboard; does one motherboard control the on/off of the PSU?


----------



## NKrader

Quote:


> Originally Posted by *cones*
> 
> Curious how that 24-pin splitter works for the motherboard; does one motherboard control the on/off of the PSU?


Both mobos are set to turn on when given power, although I think either would control it if needed.


----------



## Aussiejuggalo

Here's mine. It's an HTPC and file server, and I run my TeamSpeak 3, two Minecraft, 7 Days to Die and Team Fortress 2 servers off it.









OS: Win 7 Ultimate 64 Bit
Case: Some generic Acer thing I modded
CPU: i5 4430
Motherboard: ASRock B85M PRO4
Memory: G.Skill Ripjaws X F3-12800CL10D-16GBXL 16GB (2x8GB) DDR3
HDD 1: Western Digital WD Red 2TB WD20EFRX
HDD 2: Western Digital WD Red 3TB WD30EFRX
Keyboard: Logitech Wireless Touch Keyboard K400
PSU: Corsair VS350


----------



## DaveLT

That blacked-out interior makes me think it isn't an Acer case ...


----------



## Aussiejuggalo

Quote:


> Originally Posted by *DaveLT*
> 
> That blacked-out interior makes me think it isn't a Acer case ...


Lol, I modded the crap out of it: got rid of the CD & floppy drives, rotated the hard drives like that, took all the stock switches and lights out, filled all the holes, put in a single power button with no lights, and painted the whole thing matte black.


----------



## tycoonbob

Quote:


> Originally Posted by *Mugen87*
> 
> I would be all about that thread. Can I ask what you do for a living? I have really dived deep into VMs myself and would love more info on your setup.
> 
> I'm fresh out of school (AA in network administration) and my plan is to create a lab environment. For me it's hard to run multiple VMs with limited RAM, so I want to dual boot my main sys with a super light Linux CLI build and run more VMs that way.


I was recently in the consulting realm, around Microsoft technologies (System Center, Hyper-V, server infrastructure, along with some storage and networking), but started with a new company a little over a month ago. I currently work for a healthcare system here in Kentucky, which has a user base near 20,000. My title is technically "Client\Server Infrastructure Analyst - Senior" but I'm in a systems engineering role. I work primarily with Citrix (4 prod XenApp farms, 1 prod XenDesktop farm, 4 NetScalers, EdgeSight -- over 400 servers in the Citrix environment) but also XenServer, VMware, tier 3 support, and systems engineering (designing and implementing new systems).

It's fun.


----------



## DaveLT

Quote:


> Originally Posted by *tycoonbob*
> 
> I was recently in the consulting realm, around Microsoft technologies (System Center, Hyper-V, server infrastructure, along with some storage and networking), but started with a new company a little over a month ago. I currently work for a healthcare system here in Kentucky, which has a user base near 20,000. My title is technically "Client\Server Infrastructure Analyst - Senior" but I'm in a systems engineering role. I work primarily with Citrix (4 prod XenApp farms, 1 prod XenDesktop farm, 4 NetScalers, EdgeSight -- over 400 servers in the Citrix environment) but also XenServer, VMware, tier 3 support, and systems engineering (designing and implementing new systems).
> 
> It's fun.


My dad has the exact opposite of what you work with ... a lot of HP UNIX servers, that is.

(And when I say a lot, I actually mean GLOBAL.)


----------



## Mikey976

So, as a lurker and sometimes contributor, I finally got my server area cleaned up and looking nice in my office.

MediaVault - Fileserver-Media/backup/Plex
OS: Server 2008 r2
Case: Supermicro SC846TQ
CPU: Dual Opteron 2216 HE
Motherboard: Arima NM46X
Memory: 5GB ECC (it's what I had lying around; soon to be 16GB)
PSU: Antec earthwatts 550
OS HDD (If you have one): 1x WD 160gb
Storage HDD(s): a multitude of drives in a Drivebender pool totalling approx 12.7TB usable
Server Manufacturer Me

Home ESX system (runs my WMC DVR backend for XBMC / SQL for XBMC / SickBeard / CouchPotato / Headphones box / PDC)
OS:ESXi 5.1
Case: Dell T1900 II
CPU: Dual Xeon E5345
Memory: 16GB samsung FB-DIMM
OS HDD (If you have one): cheapo 128GB SSD
Storage HDD(s): 2x500GB R0 / 3x640GB R5
Server Manufacturer (Ex: Dell, HP, You?): Dell

The box at the bottom is some random chassis I was given, with an old Iwill dual Opteron 270 board and 16GB of RAM, that was my ESXi box. I fire it up for occasional hosting of a Minecraft server and other LAN games.

http://s130.photobucket.com/user/mikey976/media/20131014_181037.jpg.html
http://s130.photobucket.com/user/mikey976/media/20131014_181047.jpg.html


----------



## Ferrari8608

I have two servers, both of which I received for free from a good friend who upgrades often.



The small box on top closest to the camera is gelatin
The Thermaltake V3 holds ontario



Gelatin is my DNS server (dnsmasq) and Git repository, and ontario is currently hosting my SubSonic server. SubSonic is my answer to the overhyped Spotify. It streams my music library to any PC or phone in the house, and I can access it away from home as well through SubSonic.org's redirect subdomain. It's only $1 a month, and I get to listen to my music at up to 320 kbps bitrate anywhere.


----------



## cones

Quote:


> Originally Posted by *Ferrari8608*
> 
> I have two servers, both of which I received for free from a good friend who upgrades often.
> 
> The small box on top closest to the camera is gelatin
> The Thermaltake V3 holds ontario
> 
> Gelatin is my DNS server (dnsmasq) and Git repository, and ontario is currently hosting my SubSonic server. SubSonic is my answer to the overhyped Spotify. It streams my music library to any PC or phone in the house, and I can access it away from home as well through SubSonic.org's redirect subdomain. It's only $1 a month, and I get to listen to my music at up to 320 kbps bitrate anywhere.


I'm wondering: what does the DNS server do for you? Also, I've used SubSonic for a long time now, since before the switch to a paid service, so I'm grandfathered in. Is that a new limit on bitrate, or did you set that yourself?


----------



## Dream Killer

Setting up a DNS server is awesome. I never have to type an IP to access any of my servers; for example, I just type "ssh [email protected]" to access my webserver instead of typing "ssh [email protected]". It's much easier this way.

PS: you guys better start learning how to set up your own DNS servers. IPv6 is coming soon, and you don't want to be caught trying to remember an IPv6 address to SSH into your servers.


----------



## DaveLT

Quote:


> Originally Posted by *Dream Killer*
> 
> setting up a dns is awesome. i never have to type any ip to access any of my servers. for example, i just type "ssh [email protected]" to access my webserver instead of typing "ssh [email protected]". it's much easier this way.
> 
> ps: you guys better start learning how to set up your own dns servers. ipv6 is coming soon and you don't want to be caught trying to remember an ipv6 address to ssh into your servers


----------



## tycoonbob

Skip SubSonic and go with MadSonic (a fork). More features, faster development, and no monthly cost. Register your own domain (or a DynDNS domain) and use that instead of x.subsonic.org. I used SubSonic up until MadSonic came around in the 4.6 days, and I prefer it.


----------



## cones

Quote:


> Originally Posted by *Dream Killer*
> 
> setting up a dns is awesome. i never have to type any ip to access any of my servers. for example, i just type "ssh [email protected]" to access my webserver instead of typing "ssh [email protected]". it's much easier this way.
> 
> ps: you guys better start learning how to set up your own dns servers. ipv6 is coming soon and you don't want to be caught trying to remember an ipv6 address to ssh into your servers


Why was I thinking that was something different? tycoonbob, I've never heard of that, but I like SubSonic because I don't have to pay the monthly fee, so it works well for me. I'll look into it though.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> setting up a dns is awesome. i never have to type any ip to access any of my servers. for example, i just type "ssh [email protected]" to access my webserver instead of typing "ssh [email protected]". it's much easier this way.
> 
> ps: you guys better start learning how to set up your own dns servers. ipv6 is coming soon and you don't want to be caught trying to remember an ipv6 address to ssh into your servers


To be honest I just use my hosts file. It's far less painful than configuring bind9.
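As a sketch of the idea (using a scratch file instead of the real /etc/hosts so nothing needs root; the hostname and IP are made up):

```shell
# Simulate an /etc/hosts entry in a scratch file; on a real box you'd
# append the same line to /etc/hosts as root.
echo '192.168.1.10  webserver' > hosts.demo

# A crude stand-in for the resolver's hosts-file lookup of "webserver":
awk '$2 == "webserver" {print $1}' hosts.demo
```

One line per host, no daemon to configure — which is the whole appeal for a single machine.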
Quote:


> Originally Posted by *tycoonbob*
> 
> Skip SubSonic and go with MadSonic (a fork). More features, faster development, and no monthly cost. Register your own domain (or a dyndns domain) and use that instead of x.subsonic.org. I used SubSonic up until MadSonic came around in the 4.6 days, and I prefer it.


Subsonic doesn't have a monthly cost. It hasn't done for as long as I've used it (which is a couple of years now I think). Unless you're thinking of the optional one off donation that can be as large or as small as you like (and covers you for infinite future upgrades)?

I've not heard of MadSonic before though. Sound interesting. What additional features does it have? (I'm tempted to give it a try, but I'm running a pretty tightly integrated set up at the moment so switching media servers might prove more hassle than it's worth).


----------



## Dream Killer

it does both; i use BIND on mine. it acts as your own DNS server for external lookups (like when you type in a URL), but it also translates hostnames into IP addresses within your private network.

i run dhcp on everything but i have my firewall rules set up according to hostnames. that way i don't have to muck around in .conf files to set up a static ip.


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> To be honest I just use my hosts file. It's far less painful than configuring bind9.
> Subsonic doesn't have a monthly cost. It hasn't done for as long as I've used it (which is a couple of years now I think). Unless you're thinking of the optional one off donation that can be as large or as small as you like (and covers you for infinite future upgrades)?
> 
> I've not heard of MadSonic before though. Sound interesting. What additional features does it have? (I'm tempted to give it a try, but I'm running a pretty tightly integrated set up at the moment so switching media servers might prove more hassle than it's worth).


You are grandfathered in like me. They just recently switched to subscription instead of the donation.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> it does both, i use BIND on mine. it acts as your own DNS server for external IPs like when you type in a url but it also translates hostnames into ip addresses within your private network.


I know, I manage our work's name servers.

There's nothing wrong with having your own name server, I'm not arguing against that. Just saying that if you're only going to SSH from one machine, adding a line to /etc/hosts is far easier than running a name server for one person. But it's each to their own; both solutions work the same (though mine generates less latency).
Quote:


> Originally Posted by *Dream Killer*
> 
> i run dhcp on everything but i have my firewall rules setup according to hostnames. that way i don't have to muck around .conf files to set up a static ip.


I did things the other way around and hardcoded IPs based on MAC addresses on the DHCP server. But again, there's no "right" way to configure a LAN.
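In dnsmasq terms, that reservation approach is a one-liner (the MAC address and IP below are made-up examples):

```text
# dnsmasq.conf - hand the same lease to a known NIC every time
# (hypothetical MAC address and IP)
dhcp-host=00:11:22:33:44:55,fileserver,10.0.0.2
```

The client still runs plain DHCP, but always ends up with the same address.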
Quote:


> Originally Posted by *cones*
> 
> You are grandfathered in like me. They just recently switched to subscription instead of the donation.


Oh right. I'd better keep hold of my activation e-mail then.

[edit]
Actually it looks like I've got a really old version: 4.7 (build 3105) - 11 September 2012
I'll have to check my repos tonight to see if it's been updated now that I've done an OS upgrade


----------



## Dream Killer

i guess. i'm not running a million servers in my house though. i do it for convenience - which is a higher priority than everything else. if it's not easy to use then whats the point?


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> i guess. i'm not running a million servers in my house though. i do it for convenience - which is a higher priority than everything else. if it's not easy to use then whats the point?


The whole point I was making was that setting up bind9 is _NOT_ easier. Setting up bind9 is a complete pain in the arse compared with adding one line to /etc/hosts. The hosts file is far more convenient, which is why I've never bothered to set up bind9 at home.

Don't get me wrong, I'm genuinely not knocking your choice. You're happy to build a name server and it is a perfectly good solution to the IPv6 problem you mentioned. I was just saying that you don't _need_ to run a name server at home to get around non-memorable IP addresses.


----------



## cones

I thought the /etc/hosts thing was for LAN only and the other thing was for WAN? I'll need to look into the hosts file more; I know there's a lot you can do with it. Plan9, I think they are on 4.9, I should check though.

Edit: it's 4.8


----------



## Dream Killer

/etc/hosts works well if you just need to map names for a single machine; for example, if only one computer is in the administrator role. DNS works for every computer on the network, since it translates names for every machine it serves. This model works better because any change is reflected on all machines using that name server. For instance, say that on a network of 200 computers, ftp-server's IP changes from 10.0.0.2 to 10.0.0.3. If each of the 200 machines had the entry "10.0.0.2 ftp-server" in /etc/hosts, they would all have to be changed by hand, whereas on a name server you change one line, one time, and every computer on the LAN resolves "ftp-server" to the new 10.0.0.3 address.
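With dnsmasq as the LAN name server, that central change really is one line (a sketch, using the addresses from the example above):

```text
# dnsmasq.conf - every client resolves ftp-server through this one entry.
# Change 10.0.0.2 to 10.0.0.3 here, restart dnsmasq, and all 200 machines
# pick up the new address on their next lookup.
host-record=ftp-server,10.0.0.2
```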


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> Subsonic doesn't have a monthly cost. It hasn't done for as long as I've used it (which is a couple of years now I think). Unless you're thinking of the optional one off donation that can be as large or as small as you like (and covers you for infinite future upgrades)?
> 
> I've not heard of MadSonic before though. Sound interesting. What additional features does it have? (I'm tempted to give it a try, but I'm running a pretty tightly integrated set up at the moment so switching media servers might prove more hassle than it's worth).


Actually, the new version of SubSonic (dubbed, SubSonic Premium) is subscription based at $1/mo.
http://www.subsonic.org/pages/premium.jsp

"Upgrade to Subsonic Premium to enjoy these features:
-Mobile Apps
-Video Streaming
-No Ads
-etc
-etc"

I believe that started at version 4.8, so 4.7 and below is free. I donated back in the 4.5 days (I think it was).

MadSonic is a direct fork of SubSonic, by a fellow known as MadEvil (great guy):
http://forum.subsonic.org/forum/viewtopic.php?f=15&t=10445
http://madsonic.org/

It's a fork of SubSonic 4.7 Build 3090 with schema modifications and lots of new features. It has evolved through its 5.0 beta builds to no longer track SubSonic directly, though. It brought features like access control with user groups to limit access to certain media folders, bandwidth, settings, etc. Pandora mode is pretty cool, and a really new feature. Better Last.FM integration (similar artists, bio, artist artwork, etc.), the option to switch from Flash video to HTML5 video, DLNA built in, a new theme, and quite a bit more. I'd have to dig through the change logs to see all the new features since I'm so used to them just being there. I recently loaded the latest MadSonic build on my CentOS box if you want to check it out. Just shoot me a PM and I can send you the link.


----------



## Ferrari8608

Quote:


> Originally Posted by *cones*
> 
> In wondering what does the DNS server do for you? Also I've used subsonic for a long time now, before the switch to paid service so I'm grandfathered in. Is that a new limit on bitrate or did you set that yourself?


The limit is 320 kbps for MP3 transcoding, as that's the upper limit of the MP3 format. Someone else answered the first question.
Quote:


> Originally Posted by *tycoonbob*
> 
> Skip SubSonic and go with MadSonic (a fork). More features, faster development, and no monthly cost. Register your own domain (or a dyndns domain) and use that instead of x.subsonic.org. I used SubSonic up until MadSonic came around in the 4.6 days, and I prefer it.


I know of MadSonic, but it doesn't really interest me. SubSonic does all I ever wanted in a media streaming server. My own domain would be at least triple the cost of using SubSonic's. That's not really in my budget right now for what I would end up using it for. I don't mind $1 a month to a project that's doing a great job.
Quote:


> Originally Posted by *Plan9*
> 
> To be honest I just use my hosts file. It's far less painful than configuring bind9.


Why not dnsmasq? I'm not networking savvy, so I had my friend come over and help me set it up. If you know a bit about networking though, the config file looked quite straight-forward. We had it up and running in about 20 minutes.

Any device on my network can connect to my SubSonic server with http://ontario:4040
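For reference, a minimal dnsmasq.conf along those lines might look like this (the interface, lease range, and MAC address are illustrative, not Ferrari8608's actual config):

```text
# answer DNS and DHCP on the LAN interface
interface=eth0
# hand out leases in this range
dhcp-range=192.168.1.50,192.168.1.150,12h
# pin "ontario" to a fixed address so http://ontario:4040 always resolves
dhcp-host=00:11:22:33:44:55,ontario,192.168.1.10
```

dnsmasq also answers DNS queries for the hostnames its DHCP clients announce, which is why every device on the LAN can resolve "ontario" without extra configuration.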


----------



## Dream Killer

dnsmasq is a lot easier since it automatically acts as a dhcp server too. it's a pretty straight forward setup and many open source firewalls (pfsense, dd-wrt) use it so it's already built into whatever webgui they run.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> I thought the /etc/host thing was for lan only and the otherthing was for wan?
> I'll need to look into the hosts file more I know there's a lot you can do with it.


Domain name resolution doesn't care whether the IP address is local or not. An IP address is an IP address; it's up to your networking gear to route it appropriately.
All the hosts file is, is a lookup your OS checks before it goes off to the nearest name server. Think of it a bit like a localised name server (it's not technically that, but it's one way of summarising it).

There isn't really a lot you can do with it either, to be honest; it's not nearly as flexible as a name server (in fact some people run dnsmasq locally in spite of the existence of the hosts file). But you can exploit the hosts file as a basic firewall if you want (i.e. I have a bunch of dodgy sites and ad networks listed in my hosts file pointing to 127.0.0.1, which stops those sites from loading: http://someonewhocares.org/hosts/ )
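The blocking trick is just ordinary hosts entries pointed at localhost (the domains below are made-up examples):

```text
# /etc/hosts - blackhole ad/tracker domains (hypothetical names)
127.0.0.1  ads.example.com
127.0.0.1  tracker.example.net
```

Any request to those names resolves to 127.0.0.1 and dies on your own machine instead of reaching the ad network.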
Quote:


> Originally Posted by *cones*
> 
> Plan9 I think they are on 4.9 I should check though.
> Edit: it's 4.8


4.8 is available in my repos so I'll have a play installing it later (i have a feeling it's not going to be an easy update).
Quote:


> Originally Posted by *Dream Killer*
> 
> /etc/hosts works well if you just need to map names for that single machine. for example, if you just need one computer in the administrator role. dns works every computer in the network since it will translate the name for every machine that it's serving. this model works better because any changes will reflect to all machines under that name server. for instance lets say on a network of 200 computers, ftp-server's ip is 10.0.0.2 but it was changed into 10.0.0.3, if /etc/hosts for each of the 200 machines had the entry "10.0.0.2 ftp-server", they would all each have to be changed by hand, where as you just change one line, one time on a nameserver and all the computers on the lan would resolve "ftp-server" as the new 10.0.0.3 entry.


Oh for crying out loud - don't you think that if I manage name servers at work and prefer using a hosts file at home, I know damn well the advantages and weaknesses of each solution? And given the number of times I've already said your method is equally good, that I'm not knocking you for running bind9 at home? I mean seriously dude, stop being so defensive.

The only reason I commented was because you said, and I quote, "_you guys better start learning how to set up your own dns servers. ipv6 is coming soon and you don't want to be caught trying to remember an ipv6 address to ssh into your servers_", so I was just pointing out that you don't _need_ to run a name server - you can just dump your IPv6 addresses into your hosts file instead. I wasn't saying you shouldn't run one; I was just offering an alternative for those who don't fancy the prospect of learning how to set up bind9.

So can we please drop it now?

Quote:


> Originally Posted by *tycoonbob*
> 
> Actually, the new version of SubSonic (dubbed, SubSonic Premium) is subscription based at $1/mo.
> http://www.subsonic.org/pages/premium.jsp
> 
> "Upgrade to Subsonic Premium to enjoy these features:
> -Mobile Apps
> -Video Streaming
> -No Ads
> -etc
> -etc"
> 
> I believe that started at verison 4.8, so 4.7 and below is free. I donated back in the 4.5 days (I think it was).
> 
> MadSonic is a direct fork of SubSonic, by a fellow known as MadEvil (great guy):
> http://forum.subsonic.org/forum/viewtopic.php?f=15&t=10445
> http://madsonic.org/
> 
> It's a fork of SubSonic 4.7 Build 3090 with schema modifications, and lots of new features. It has evolved to it's 5.0 beta builds to no longer follow directly with SubSonic, though. It brought features like access control with user groups to limit access to certain media folders, bandwidth, settings, etc. Pandora mode is pretty cool, which is a really new feature. Better Last.FM integration (similar artist, bio, artist artwork, etc), option to switch from flash video to HTML5 video, DLNA built-in, new theme, and quite a bit more. Would have to dig through the change logs to see all the new features since I'm so used to them just being there. I recently loaded the latest MadSonic build on my CentOS box if you want to check it out. Just shoot me a PM and I can send you the link.


Madsonic's website is shockingly bad.

The DLNA is a nice addition though - that's been the only feature I've missed from Subsonic (and I've wasted hours trying to find a decent DLNA solution in its place - and have since given up).

Sadly Madsonic isn't in FreeBSD's repos, so I'll probably just stick to Subsonic for now. Thanks for the info though, reps


----------



## Plan9

Quote:


> Originally Posted by *Ferrari8608*
> 
> Why not dnsmasq? I'm not networking savvy, so I had my friend come over and help me set it up. If you know a bit about networking though, the config file looked quite straight-forward. We had it up and running in about 20 minutes.


Quote:


> Originally Posted by *Dream Killer*
> 
> dnsmasq is a lot easier since it automatically acts as a dhcp server too. it's a pretty straight forward setup and many open source firewalls (pfsense, dd-wrt) use it so it's already built into whatever webgui they run.


You know, I've never tried dnsmasq. I might give that a play later tonight.

Cheers guys


----------



## Dream Killer

and i'm saying dns is easier than an /etc/hosts file because i don't have to go into each of my computers one by one to edit a line in the hosts file just because i added another fileserver - i can edit one line, one time, on my dns server. it's a big time saver, ipv6 or not.

and i've never said BIND9 is the only option; there are other, easier methods out there like dnsmasq


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> and i'm saying dns is easier than an /etc/hosts file because i don't have to go in each of my computers one by one to edit a line in the hosts file just because i need to add another fileserver where i can just edit a line, one time on my dns server. it's a big time saver ipv6 or not.












But well done for turning a simple comment into the most pointless argument ever.


----------



## cones

Plan9, thanks for the explanation; I only know very basic networking. I remember when I updated I had issues with my SubSonic database, and I also think I had to re-add my key, but that was it.


----------



## Dream Killer

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> and i'm saying dns is easier than an /etc/hosts file because i don't have to go in each of my computers one by one to edit a line in the hosts file just because i need to add another fileserver where i can just edit a line, one time on my dns server. it's a big time saver ipv6 or not.
> 
> 
> 
> You just said you don't have many computers at home. Make your bloody mind up.

where? i have 20-something vms and there are 6 laptops in the house. _there's no way_ i'm gonna go into each one to edit the hosts file if something changes on the network.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> and i've never said BIND9 is the only option, there are other easier methods out there like dnsmasq


for the last bloody time:

*My ONLY point was that people don't need to learn how to set up a DNS server because you can use a hosts file*

I'm *not* arguing that what you're doing is wrong. I just gave an alternative solution. But I'm sorry I've bruised your precious ego by not worshipping your every statement and networking decision like the deity you clearly strive to be.

So please, for the love of god, Dream Killer, can you stop preaching about how perfect your solution is, because not everyone wants to live in your perfect world.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> where? i have 20 something vms and there's 6 laptops in the house. _there's no way_ im gonna go into each one to edit the hosts file if something changes in the network.


My mistake then. I took the following to be an exaggerated form of sarcasm:
Quote:


> Originally Posted by *Dream Killer*
> 
> i guess. i'm not running a million servers in my house though.


(ie you don't have many machines at home)

Sorry dude


----------



## tycoonbob

You can also automate hosts file changes with scripts or third-party software. Just sayin'.
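A rough sketch of that automation, assuming passwordless SSH to each box (host names are the ones mentioned earlier in the thread; the scp loop is left commented so the sketch runs anywhere):

```shell
# Build one canonical hosts file...
cat > hosts.master <<'EOF'
10.0.0.3  ftp-server
192.168.1.10  ontario
EOF

# ...then push it out to every machine, e.g.:
#   for h in ontario gelatin; do scp hosts.master "root@$h:/etc/hosts"; done

wc -l < hosts.master
```

One edit to the master file plus one run of the loop, and every machine is consistent again.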


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> You can also automate host file changes with scripts or third party software. Just sayin'


I do something like this at work. At work I have a shell script that's automatically copied onto any server I SSH onto, and it's imported into my environment. Aside from some useful aliases and whatnot, I have a bunch of env vars for server names. But these are custom names that mean more to me (e.g. some will be shorthand, so instead of typing *ssh dnsserver2.sitename.co.uk* I can type *ssh $dns2*). And I find that more convenient than using the work name servers (plus I have the added bonus that env vars can be tab-completed, whereas domain names cannot).

I wouldn't recommend my work solution for other people though - it's really not practical for most purposes but it does suit my work flow nicely and doesn't impact the other sys admins at work.
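A minimal version of that trick (the file name and the second server name are made-up examples; $dns2 is from the post above):

```shell
# Keep the aliases in a file that gets sourced on login on each server
cat > server-aliases.sh <<'EOF'
export dns2="dnsserver2.sitename.co.uk"
export web1="webserver1.sitename.co.uk"
EOF

# Source it, and "ssh $dns2" now expands to the full name (tab-completable):
. ./server-aliases.sh
echo "$dns2"
```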


----------



## Dream Killer

Quote:


> Originally Posted by *tycoonbob*
> 
> You can also automate host file changes with scripts or third party software. Just sayin'


yet another thing i have to install on each device. i can't exactly scp or rsync into a tablet or smartphone.

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> where? i have 20 something vms and there's 6 laptops in the house. _there's no way_ im gonna go into each one to edit the hosts file if something changes in the network.
> 
> 
> 
> My mistake than. I took the following to mean an exaggerated form of sarcasm:
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> i guess. i'm not running a million servers in my house though.
> 
> 
> (ie you don't have many machines at home)
> 
> Sorry dude

i was explaining why it's easier even in a home environment (you mentioned you maintain the nameservers at work and use dns there). so even though i'm not running a million servers, i still find dns easier than modifying hosts files.

the only real disadvantages of running a dns server are the initial setup (which is very easy anyway) and that it requires maybe 10 megabytes of space and 32 megabytes of ram (m0n0wall as a nameserver) on your hypervisor of choice.
Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dream Killer*
> 
> and i've never said BIND9 is the only option, there are other easier methods out there like dnsmasq
> 
> 
> 
> for the last bloody time:
> 
> *My ONLY point was that people don't need to learn how to set up a DNS server because you can use a hosts file*
> 
> I'm *not* arguing that what you're doing is wrong. I just gave an alternative solution. But I'm sorry I've bruised your precious ego by not worshipping your every statement and networking decisions like the deity you clearly strive to be.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So please, for the love of god dream killer, can you stop preaching about how perfect your solution is because not everyone wants to live in your perfect world.

i guess i'm the only crazy one accessing my servers using devices that can't have their hosts files changed, like phones or tablets.

PS: some browsers and programs completely ignore hosts files. for example, chrome.


----------



## Jawswing

*Current Build:*

*OS:* Unraid
*Case:* Fractal Design Array R2
*CPU:* Intel G530 CPU
*Motherboard:* ASUS P8H77-I Mini ITX Intel Motherboard
*Memory:* Corsair Vengeance Low Profile 2x4GB
*PSU:* (Generic that came with case, SFX 300W)
*OS HDD (If you have one):* Generic USB drive
*Storage HDD(s):* 6x3TB Seagate Barracudas
*Server Manufacturer (Ex: Dell, HP, You?):* Me

*Proposed new build:*

*OS:* Windows 7 Professional or Windows Home Server 2011
*Case:* Lian Li PC-A76X
*CPU:* 2700K
*Motherboard:* ASUS MAXIMUS IV EXTREME
*Memory:* CORSAIR Vengeance 8GB (2 x 4GB) and Corsair Vengeance Low Profile 2x4GB
*PSU:* SILVERSTONE ST1200 1200W
*OS HDD (If you have one):* OCZ120 GB Vertex 3
*Storage HDD(s):* 8x3TB Seagate Barracudas
*Server Manufacturer (Ex: Dell, HP, You?):* Me

Rebuilding my main computer, so the motherboard, CPU, RAM and SSD will be coming out of that. Just need to buy the case, a few more HDDs and the PSU.
A few questions: the RAM currently in the server is the low-profile Vengeance and my current computer has the normal Vengeance, both 1600MHz and I believe CL9 - so the same RAM with different heatsinks. Running them together as 4x4GB wouldn't be a problem, would it?

For the OS, what are the differences between Windows 7 Professional and Windows Home Server 2011? I can get Pro for free, or Home Server is like £30.

Finally, the proposed system doesn't need 1200W, but I can't find a smaller PSU with 16 SATA ports. Any suggestions?


----------



## herkalurk

Quote:


> Originally Posted by *Plan9*
> 
> The whole point I was making was that setting up bind9 is _NOT_ easier. Setting bind9 up is a complete pain in the arse compared with adding 1 line to /etc/hosts. The hosts file is far more convenient, which is why I've never bothered to set it up bind9 at home.


Or you could have a multi-platform server setup and have a Windows server running your DNS/DHCP.

Windows DNS is really easy to set up and configure. BIND isn't too bad, but most GUI monsters don't like it. We got the joy of completely rebuilding our external BIND servers last summer. Luckily we only have 150-ish records, so it wasn't too bad. Plus only two servers, a master and a slave. Again, not rocket science, but I can see why Windows DNS servers are very popular as well.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> yet another thing i have to install to each device. i can't exactly scp or rsync into a tablet or smartphone.


Sure you can. There are SSH servers in Google's app store. The bigger issue is rooting the device so you can modify system files once you've SSHed in.
Quote:


> Originally Posted by *Dream Killer*
> 
> i was explaining why it's easier even if it's in a home environment


No. You explained why you prefer to run it in a home environment. And from the sounds of it, you have more servers than most small businesses, so your usage is anything but typical.

The problem is you think your personal preference is scientific fact, when it's just what you prefer to do in your specific, non-typical home network. And this is why you're still here arguing that we should all be doing things your way instead of any of the plethora of other, equally practical, solutions out there.

As an aside, I get so fed up with inflexible people who demand everything be done their way. Most people I've worked with who are like that end up causing more problems than they solve, because they fail to take context into account. They don't account for the user's ability, for the specific requirements of that particular situation, and so on. So they blindly roll out their favourite toy, which either creates more work as it fails to do the job properly, or doesn't get used at all because nobody wanted that solution to begin with. I'm not saying you're as bad as those guys, but your lack of flexibility in this thread does remind me of the sort of people who aren't great at evaluating each situation case by case.

For what it's worth though, some good has come from our argument. I do plan on giving dnsmasq a look, as I was thinking of setting up a DHCP server anyway (I want to be able to PXE boot minimal Linux ISOs so I can do away with burning CDs), so if I can roll domain name resolution into that as well, that's a fortunate bonus.


----------



## Plan9

Quote:


> Originally Posted by *herkalurk*
> 
> Or you could have a multi platform server setup and have a windows server running your DNS/DHCP.
> 
> 
> 
> 
> 
> 
> 
> Windows DNS is really easy to setup/configure. Bind isn't too bad but most GUI monsters don't like it. We got the joy of completely rebuilding our external BIND servers last summer. Luckily we only have like 150 ish records, so it wasn't too bad. Plus only 2 servers, a master and slave. Again, not rocket science but I can see why windows DNS servers are very popular as well.


I live in the command line and find GUIs more confusing than config files, but something about bind9 leaves me scratching my head sometimes. Once it's set up it's easy as pie; anyone with a basic understanding of DNS can work it. But fresh installs of bind9 aren't fun.


----------



## ozlay

OS: Server 2012
Case: Antec
CPU: 2x Xeon 5160 3GHz
Motherboard: Tyan Tempest i5000PX (S5380)
Memory: 16GB FB-DIMMs
PSU: Antec
Storage HDD(s): 640GB RAID 0
Server Manufacturer: Me

Private game server, mostly Minecraft.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> i guess i'm the only crazy one accessing my servers using devices that cant have their hosts files changed like phones or tablets.


I can change the hosts file on my tablet. However, the real question is why you're regularly SSHing from a touch screen. Just buy yourself a decent laptop and save yourself the trauma of using (ba|z|tc)sh on a touch screen keyboard.

Quote:


> Originally Posted by *Dream Killer*
> 
> PS: some browsers and programs completely ignore hosts files. for example, chrome.


Erm, no. The hosts file works transparently to your applications, as domain lookups are managed by OS APIs that consult a sequence of sources. In fact, some OSs even let you set the priority order for name resolution, so you can have your name server checked before your hosts file, or even discard the hosts file entirely.

What you may have experienced is changing your hosts file for an entry the browser had already looked up, without restarting the browser; the browser then uses its cached IP address rather than resolving the name again. But you'd have the same issue if a name server entry changed while your browser was open.


----------



## herkalurk

Quote:


> Originally Posted by *Plan9*
> 
> I live in the command line and find GUIs more confusing than config files, but something about bind9 leaves me scratching my head sometimes. Once it's set up it's easy as pie; anyone with a basic understanding of DNS's can work it. But fresh installs of bind9 aren't fun.


During our reinstall we had the old configs, so we just rewrote them and cut out the fat (old DNS records, etc.). I think the thing I like most about Windows AD-based DNS is that you can change a record on any AD server and it will replicate to all the other DNS servers; there is no "master". The logging really sucks, though. Still haven't figured out how I'm going to parse it with Splunk.


----------



## Dream Killer

plan9, if you want a portable pxe server, look into the proxydhcp option in dnsmasq. that way you can carry around a laptop with parted magic, memtest iso and dnsmasq and network boot a machine in the network you connect that laptop to without modifying the existing dhcp server.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> plan9, if you want a portable pxe server, look into the proxydhcp option in dnsmasq. that way you can carry around a laptop with parted magic, memtest iso and dnsmasq and network boot a machine in the network you connect that laptop to without modifying the existing dhcp server.


Interesting. How does that work? I thought you couldn't (or rather shouldn't) run two DHCP servers in the same subnet?


----------



## Dream Killer

A PXE proxyDHCP server behaves much like a regular DHCP server, listening for and answering ordinary DHCPDISCOVER client traffic. However, unlike a regular DHCP server, the proxyDHCP server does not provide or administer network IP addresses, and it only responds to clients that identify themselves as PXE clients.

It's been a standard in the UNDI APIs built into NICs for a while; the only problems I've had with it are machines with very ancient NICs.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> a PXE proxyDHCP server behaves much like a regular DHCP server by listening/answering to ordinary DHCPDISCOVER client traffic. However, unlike a regular DHCP server, the PXE proxyDHCP server does not provide/administer network IP addresses, and it only responds to clients that identify themselves as PXE clients.
> 
> it's been a standard in the UNDI APIs built into NICs for a while, the only problems i've had with it are machines with very ancient NICs.


That's awesome. Exactly what I needed. Thanks mate


----------



## Dream Killer

here's my /etc/dnsmasq.conf on my laptop with comments:

Code:

#disable dns forwarding:
port=0

#only reply on this interface (it will also work over wifi!)
interface=eth0

#use dnsmasq's internal tftp server
enable-tftp

#home folder of the tftp
tftp-root=/tftpboot

#boot pxelinux.0, the possibilities are endless with this
dhcp-boot=pxelinux.0

#the network it will respond on
dhcp-range=192.168.1.0,proxy

#option #1, boot pxelinux
pxe-service=x86PC,"Boot from Network",pxelinux

#cancel pxeboot and boot hard drive 0
pxe-service=x86PC,"Boot from local hard drive",0

PXE boot can do more than that (check out grub4dos too), but this is my conf for booting Parted Magic using pxelinux.0. It won't interfere with whatever DHCP server is already on the network.

This works with a normal, unmodified Netgear router.


You can even run a WDS/RIS server without a Windows server.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> here's my /etc/dnsmasq.conf on my laptop with comments:
> 
> Code:
> 
> #disable dns forwarding:
> port=0
> 
> #only reply on this interface (it will also work over wifi!)
> interface=eth0
> 
> #use dnsmasq's internal tftp server
> enable-tftp
> 
> #home folder of the tftp
> tftp-root=/tftpboot
> 
> #boot pxelinux.0, the possibilities are endless with this
> dhcp-boot=pxelinux.0
> 
> #the network it will respond on
> dhcp-range=192.168.1.0,proxy
> 
> #option #1, boot pxelinux
> pxe-service=x86PC,"Boot from Network",pxelinux
> 
> #cancel pxeboot and boot hard drive 0
> pxe-service=x86PC,"Boot from local hard drive",0
> 
> PXE boot can do more than that (check out grub4dos too), but this is my conf for booting Parted Magic using pxelinux.0. It won't interfere with whatever DHCP server is already on the network.


I'm going to run my PXE menus from pxelinux, as I should be able to do more in there than from the DHCP server. Either way, I've worked with pxelinux quite a bit in the past, so I'm playing with a known entity.

The built-in TFTP server is handy to know about though - saves me firing up inetd.


----------



## Dream Killer

You can also try the iPXE chainloader. It supports iSCSI, HTTP and FTP if TFTP is way too slow for you - for example, if you want to load whole OSes.


----------



## Plan9

Quote:


> Originally Posted by *Dream Killer*
> 
> You can also try the iPXE chainloader. It supports iSCSI, HTTP and FTP if TFTP is way too slow for you - for example, if you want to load whole OSes.


pxelinux supports HTTP too, and probably FTP, but I strongly oppose installing FTP on any of my boxes. pxelinux also has hardware diagnostics built in, and I already have code to hand for booting from NFS (which I'll definitely want). Plus there is a way to get it to boot ISO images as if they were running off a local CD/DVD - I've not experimented with that just yet, but it's definitely something I'm going to play with.

I'm sure iPXE supports most if not all of the above as well, but I've never used it, whereas I already have some pretty decent configs for pxelinux. So I'd rather stick with what I know on this occasion.
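From what I've read, the boot-an-ISO trick is done with syslinux's memdisk module, which copies the whole image into RAM first. Something like this menu entry - the ISO path is just an example:

```
# Illustrative pxelinux menu entry; memdisk ships with syslinux.
LABEL pmagic-iso
  MENU LABEL Parted Magic (boot ISO via memdisk)
  KERNEL memdisk
  APPEND iso initrd=isos/pmagic.iso
```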


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> pxelinux supports HTTP too, and probably FTP, but I strongly oppose installing FTP on any of my boxes. pxelinux also has hardware diagnostics built in, and I already have code to hand for booting from NFS (which I'll definitely want). Plus there is a way to get it to boot ISO images as if they were running off a local CD/DVD - I've not experimented with that just yet, but it's definitely something I'm going to play with.
> 
> I'm sure iPXE supports most if not all of the above as well, but I've never used it, whereas I already have some pretty decent configs for pxelinux. So I'd rather stick with what I know on this occasion.


Linux ISOs work best if you unpack them and use NFS; with others you can get away with just the ISO (there's a specific kernel you boot and point at the ISO - can't remember its name right now). I've played around with PXE boot, which reminds me I need to fix mine again.


----------



## Mikey976

I've used FOG for this in the past to boot most ISOs. I looked into switching to iPXE so I could chainload ISOs and Windows installers, but I found I don't really boot ISOs that way very often.
I picked up a Zalman ZM-VE200 for that instead. Works fantastically.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Linux ISOs work best if you unpack them and use NFS; with others you can get away with just the ISO (there's a specific kernel you boot and point at the ISO - can't remember its name right now). I've played around with PXE boot, which reminds me I need to fix mine again.


I was only going to have install CDs as ISOs, so I'm not worried about performance there. The plan is that I can download install CDs and then just point a menu file at the ISO, so I don't have the hassle of burning new CDs whenever I fancy building a new machine. (In fact I'm planning to find some way to automate it so that any ISOs found in a folder automatically get a menu item, but I need to think about how best to approach that one.) The actual working desktop OS was going to have an NFS root and be either Arch or Debian - possibly Debian because it can go for longer periods without updates, but I feel more at home on Arch.

As for the kernel you boot for the ISO: if it's not pxelinux then it will be isolinux, but they're all basically the same thing (along with syslinux).
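Re the auto-generated menu idea: a dumb shell loop would probably cover it. A rough sketch, untested against a real tftp layout - the directory names are made up, and it assumes syslinux's memdisk module sits in the tftp root:

```shell
# Sketch: emit a pxelinux menu entry for every ISO found in a folder,
# booting each one via syslinux's memdisk module. Paths are illustrative.
gen_iso_menu() {
    iso_dir=$1
    out=$2
    : > "$out"                           # truncate/create the output file
    for iso in "$iso_dir"/*.iso; do
        [ -e "$iso" ] || continue        # skip if the glob matched nothing
        name=$(basename "$iso" .iso)
        {
            printf 'LABEL %s\n' "$name"
            printf '  MENU LABEL Boot %s (memdisk)\n' "$name"
            printf '  KERNEL memdisk\n'
            printf '  APPEND iso initrd=%s/%s.iso\n\n' "$iso_dir" "$name"
        } >> "$out"
    done
}

# e.g. gen_iso_menu /tftpboot/isos /tftpboot/pxelinux.cfg/isos.cfg
```

The generated file could then be pulled into pxelinux.cfg/default with something like a MENU INCLUDE line (or just appended to it), and re-run from cron whenever the folder changes.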


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> I was only going to have install CDs as ISOs, so I'm not worried about performance there. The plan is that I can download install CDs and then just point a menu file at the ISO, so I don't have the hassle of burning new CDs whenever I fancy building a new machine. (In fact I'm planning to find some way to automate it so that any ISOs found in a folder automatically get a menu item, but I need to think about how best to approach that one.) The actual working desktop OS was going to have an NFS root and be either Arch or Debian - possibly Debian because it can go for longer periods without updates, but I feel more at home on Arch.
> 
> As for the kernel you boot for the ISO: if it's not pxelinux then it will be isolinux, but they're all basically the same thing (along with syslinux).


That sounds familiar. It's been a while since I've done it, but if I remember right the ISOs are loaded into the host's RAM, so it needs enough to store and run the ISO.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> That sounds familiar. It's been a while since I've done it, but if I remember right the ISOs are loaded into the host's RAM, so it needs enough to store and run the ISO.


Yeah, that'll be memdisk - it ships as part of syslinux/isolinux.


----------



## Ecstacy

I don't really understand what you guys are saying as I'm not that experienced with Linux, but if you're trying to boot ISOs why not make a bootable flash drive? There is a program called YUMI that allows you to create multiboot flash drives. I have 16 different ISOs on my 16GB ADATA.


----------



## Plan9

Quote:


> Originally Posted by *Ecstacy*
> 
> I don't really understand what you guys are saying as I'm not that experienced with Linux, but if you're trying to boot ISOs why not make a bootable flash drive? There is a program called YUMI that allows you to create multiboot flash drives. I have 16 different ISOs on my 16GB ADATA.


PXE booting means that you can boot any and all of those 16 different ISOs from the network as if they were on your USB flash drive. That way I don't have to hunt around for a flash drive (or buy one, as my wife seems to take mine), I just change the boot order in the BIOS then let the system do the rest.


----------



## Ecstacy

Quote:


> Originally Posted by *Plan9*
> 
> PXE booting means that you can boot any and all of those 16 different ISOs from the network as if they were on your USB flash drive. That way I don't have to hunt around for a flash drive (or buy one, as my wife seems to take mine), I just change the boot order in the BIOS then let the system do the rest.


Ahh, so it's the network boot option you see in the BIOS. I've always wanted to set that up, but when I looked into how, I had to set up a server, which didn't seem worth it to me.

Thanks for the clarification.









Btw, don't buy tiny flash drives like this one, I lost mine in my bedroom and I haven't been able to find it for almost a year now. xD


----------



## Kitler

Quote:


> Originally Posted by *xNovax*
> 
> My server and office as it stands right now.


Is that the Startech 25U enclosure? How do you like it? I am thinking of upgrading already from my tripp-lite SR4Post to something with side panels. Sleeker and probably quieter.


----------



## xNovax

Quote:


> Originally Posted by *Kitler*
> 
> Is that the Startech 25U enclosure? How do you like it? I am thinking of upgrading already from my tripp-lite SR4Post to something with side panels. Sleeker and probably quieter.


It's a Quest steel 28U rack. I really like it; the only problem is it's not quite deep enough - I could only just fit my C1100 in it.
http://store.cablesplususa.com/fe4119-28-xx.html


----------



## cones

Quote:


> Originally Posted by *Ecstacy*
> 
> Ahh, so it's the network boot option you see in the BIOS. I've always wanted to set that up, but when I looked into how, I had to set up a server, which didn't seem worth it to me.
> 
> Thanks for the clarification.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Btw, don't buy tiny flash drives like this one, I lost mine in my bedroom and I haven't been able to find it for almost a year now. xD


Just remember, most of the time you can't do this over WiFi - it has to be hard-wired.


----------



## Plan9

It's taken me most of the evening, but I finally got DNS resolution working in dnsmasq. Turns out you have to be extra specific about which IP to listen on when running the damn thing in a jail - doh.

I'm quite liking dnsmasq though, because it seems the best of both worlds: the network-wide benefits of running a DNS server with the ease of only having to add hosts entries (which also means I can put my http://someonewhocares.org/hosts/ file to use as well).
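For anyone else fighting the same thing, the dnsmasq.conf options in question look roughly like this - the jail IP and hosts-file path are examples, not my real ones:

```
# Bind explicitly to the jail's IP instead of the wildcard address:
listen-address=192.168.1.5
bind-interfaces

# Serve extra hosts entries on top of /etc/hosts
# (e.g. a downloaded someonewhocares.org blocklist):
addn-hosts=/usr/local/etc/hosts.block
```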


----------



## smandrews

Here's mine!





Not a massive powerhouse, but it gets the job done. Streams out movies and backs up all the other PCs in the house.

Specs:

Server OS: Windows Home Server 2011
Case: Corsair 550d
Motherboard: ASRock 870iCafe
CPU: AMD Phenom II X4 955 Deneb 3.2GHz Quad-Core
Video: MSI N210-MD1G/D3 GeForce 210 1GB
RAM: Kingston 1066MHz DDR3 (4x2 GB)
HDD: Seagate Barracuda 1TB 7200RPM x 2
Seagate Barracuda 2TB 7200RPM x 3
PSU: Cooler Master GX 450w

It also has a dual port Intel NIC.

Will be doing a big upgrade to a few computers in the house in December and so this will get a bump to an i5 with 16GB of RAM. Might try some virtualization later down the line but for now this is working.


----------



## Plan9

Quote:


> Originally Posted by *smandrews*
> 
> Here's mine!
> 
> Not a massive powerhouse, but it gets the job done. Streams out movies and backs up all the other PCs in the house.


That's almost identical specs to mine - albeit my Phenom is only 3 cores. Very similar cases as well - I have the Fractal version of your case.
I have more HDDs though


----------



## smandrews

Quote:


> Originally Posted by *Plan9*
> 
> That's almost identical specs to mine - albeit my Phenom is only 3 cores. Very similar cases as well - I have the Fractal version of your case.
> I have more HDDs though


Nice! Do you like your case? I plan on putting in a few more HDDs sometime down the line when I need some more space


----------



## Plan9

Quote:


> Originally Posted by *smandrews*
> 
> Nice! Do you like your case? I plan on putting in a few more HDDs sometime down the line when I need some more space


Yeah, I love it. It's awesome. Very heavy though


----------



## TheShadowStorm

Here's my server

Case: Dell Poweredge 840
PSU: Original Dell
CPU: Xeon X3230
RAM: 2GB DDR2 ECC
Hard Drive 1: 500GB Seagate Barracuda used for OS and back up of laptop
Hard Drive 2: 1TB Samsung for films and photos, backed up to a 1TB external Drive
OS: Windows Server 2008 R2

Used as a file server for all my media as well as backing up all the data on my laptop. Also used with handbrake for video compression.


----------



## broadbandaddict

Hey guys I'm in need of a rackmount UPS for my home stuff. Any recommendations for brands/models/etc? I need it to run off a 120V plug and I'm thinking something like 1000W for future expandability would be good. I have this Dell one in mind, not sure if it is a good deal or not. Any help is appreciated. Thanks.


----------



## beers

What kind of load and how much runtime are you looking for?

Sent from my Kindle Fire using Tapatalk 2


----------



## broadbandaddict

Quote:


> Originally Posted by *beers*
> 
> What kind of load and how much runtime are you looking for?
> 
> Sent from my Kindle Fire using Tapatalk 2


I'd like to be able to run a couple of C1100s off it initially along with any other projects I throw at it in the coming years. How much do the C1100s pull? 200W each? I can always buy another in the future if I need to. Run time could be short, a few (3-5) minutes at least.


----------



## tycoonbob

Quote:


> Originally Posted by *broadbandaddict*
> 
> I'd like to be able to run a couple of C1100s off it initially along with any other projects I throw at it in the coming years. How much do the C1100s pull? 200W each? I can always buy another in the future if I need to. Run time could be short, a few (3-5) minutes at least.


C1100s (for me, at least) pull around 140W during normal use. With CPUs at max, it topped out around 215W.

I actually picked up an APC Smart-UPS 2200VA for around $150 from Craigslist. Hell of a deal, if you ask me. All in all, I have about 500W on it, and it will run for about 40 minutes! I'm going to set it to send a shutdown command after 15 minutes though.

That one you linked would do fine for 3 C1100s and your network equipment (switch, firewall, wireless AP).
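The back-of-envelope maths, for what it's worth - the wattages are just the ones from this thread, and the 20% headroom figure is my own rule of thumb:

```shell
# Rough UPS sizing: how many ~215W-peak C1100s fit under a given watt rating,
# keeping 20% headroom. Numbers are examples from this thread, not gospel.
ups_watts=1000
per_server_peak=215
usable=$(( ups_watts * 80 / 100 ))   # only load the unit to 80% of rating
fits=$(( usable / per_server_peak ))
echo "usable: ${usable}W, C1100s that fit: ${fits}"
```

Which lines up with a 1000W unit comfortably carrying 3 of them.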


----------



## fritz_sean

(sorry the picture isn't the greatest, bad lighting in my apartment)

Dell C6100 - each blade has dual 6-core Xeons with 24GB of RAM

Each blade is running Linux with Xen for virtual environments, most of the testing environments are for work.

I am going to be replacing the Dell box at the bottom of the cabinet soon; it is currently my pfSense router.

The ReadyNAS Pro 6 currently has 6x 2TB drives in it.

I have some more shelves on order for the cabinet as well as a new 24 port switch, I will re-post more pictures once I get everything installed.


----------



## iandroo888

Cooler Master 690 II
AMD Athlon X2 4050e
Asus Crosshair
Corsair Dominator 4GB DDR2 1066
2x Seagate Barracuda 1.5TB Raid 1
3x Seagate Barracuda 3TB Raid 0
Corsair Hydro H50
Corsair CX430
XFX 8800GT G92

Running FreeNAS

simple. quiet. fast :]


----------



## broadbandaddict

Quote:


> Originally Posted by *tycoonbob*
> 
> C1100s (for me, at least) pull around 140W during normal use. With CPUs at max, it topped out around 215W.
> 
> I actually picked up an APC Smart UPS 2200VA for around $150 from Craigslist. Hell of a deal, if you ask me. All in all, I have about 500W on it, and it will be able to run for about 40 minutes! I'm going to set it to send shutdown command after 15 minutes though.
> 
> That one you linked would do fine for 3 C1100s and your network equipment (switch, firewall, wireless AP).


That seems like an awesome deal.

All my network stuff is on my unRAID server's UPS (300W), so the Dell should be able to do 4 C1100s, right?

Which CPUs do you have in your C1100s? I'm looking to get the L5639s. Also, are they crazy loud or anything?


----------



## mypcisugly

Dell 755 SFF - got it free. Put in a 1TB HDD and 4GB of DDR2-800 Black Dragon RAM I had sitting around for a while, and my nephew who is in college gave me a Windows Server 2008 R2 key.








This is my first time trying this server stuff, so it'll be fun. Best part: the 755 is so clean and has a Q6600.
The thing was super clean when I got it. I replaced the thermal paste, put in the RAM and HDD, and reformatted it. Still reading more stuff before I set things up.
It's so quiet - I just wish it was smaller. But hey, it was free.


----------



## tycoonbob

Quote:


> Originally Posted by *broadbandaddict*
> 
> That seems like an awesome deal.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All my network stuff is on my unRAID server's UPS (300W), so the Dell should be able to do 4 C1100s, right?
> 
> Which CPUs do you have in your C1100s? I'm looking to get the L5639s. Also, are they crazy loud or anything?


I have the L5520, which are more than enough for the 12-15 VMs I have on my C1100s. I personally don't think the C1100 is all that loud, considering it's an enterprise grade 1U server. I have mine in my office, and I don't have a problem with them.


----------



## DaveLT

Quote:


> Originally Posted by *broadbandaddict*
> 
> That seems like an awesome deal.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All my network stuff is on my unRAID server's UPS (300W), so the Dell should be able to do 4 C1100s, right?
> 
> Which CPUs do you have in your C1100s? I'm looking to get the L5639s. Also, are they crazy loud or anything?


If you want to know what is considered normal out there, get any series of HP DL365

It makes the C1100 seem silent


----------



## TheNegotiator

I've got a pair of DL360 G4's at work. Those two are louder than the entire 42u rack of Poweredge servers next to them and can be heard from 2 rooms away...


----------



## PCSarge

A pair of HP DL100 storage servers

2.8GHz Celeron D processors

4GB of ram per server

Windows Storage Server 2003 w/ print services

1x 250GB WD Blue and 3x 1TB WD Blacks in both

The Blue is the OS drive; the Blacks are in RAID 5.

one is used for my main rig's backup once weekly.

other one is storage for copies of music / pictures / other important files

both in 1U HP stock chassis

Not very loud at all - I have them in the spare room that also houses my folding/bitcoin rigs.

They link up to a D-Link 1024D switch along with the incoming internet line and my main rig.


----------



## KYKYLLIKA

This thing is the second generation of machine serving my home network with file shares, torrents, a printer, and other stuff. This *IS* an upgrade from the previous version.

OS: Windows XP SP 3
Case: Compaq, very genuine case.
CPU: Pentium III-S 1.13 GHz
Motherboard: Compaq ProLiant ML330 G2
Memory: 2 GB Kingston
PSU: Some PSU I pulled from under the bed.
OS HDD (If you have one): 120GB ×1.
Storage HDD(s): 500GB×2 Raid 1.
Server Manufacturer (Ex: Dell, HP, You?): HP (Compaq, duh).









Has been up since 5:21AM on the 10th of September, which was the last time the electricity went out. Because of power outages it has never accumulated a full year of uptime in a single run.


----------



## Plan9

Quote:


> Originally Posted by *KYKYLLIKA*
> 
> This thing is the second generation of machine serving my home network with file shares, torrents, a printer, and other stuff. This *IS* an upgrade from the previous version.
> 
> OS: Windows XP SP 3
> Case: Compaq, very genuine case.
> CPU: Pentium III-S 1.13 GHz
> Motherboard: Compaq ProLiant ML330 G2
> Memory: 2 GB Kingston
> PSU: Some PSU I pulled from under the bed.
> OS HDD (If you have one): 120GB ×1.
> Storage HDD(s): 500GB×2 Raid 1.
> Server Manufacturer (Ex: Dell, HP, You?): HP (Compaq, duh).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Has been up since 5:21AM on 10th of September, which was the last time electricity went out. Because of power outages it has not accumulated a single run of a year-long uptime.


I appreciate the age of your components, but running an unsupported OS on your file and download server seems a little like asking for trouble.


----------



## KYKYLLIKA

Quote:


> Originally Posted by *Plan9*
> 
> I appreciate the age of your components, but running an unsupported OS on your file and download server seems a little like asking for trouble.


True enough. It's scheduled for shutdown before New Year's.


----------



## driftingforlife

Moving my file server from my rack to a normal case; the rack is going away until I move house in a few years.









Doing a re-install while I'm at it, and it means I can get some ICY BOX hot-swap bays as well. I'm thinking about going from my 8-port HighPoint RAID card to a 16-port HighPoint card, as I can now fit 15 HDDs in my Cosmos S. I STILL haven't bought any Red HDDs yet though, so I need to save £300 for the new 16-port card, which means the HDDs will have to wait even longer. BUT I'm much better covered if I need more space later on - I can just buy the first 3 HDDs to set up the RAID 5 array, then add 1 HDD every month till I have 15.

*EDIT*

Finally tried the card with some drives from work and the array fails while formatting. Damn PITA - need to get it replaced so I can sell it; I haven't even used the damn thing.


----------



## Tankster399

Case: dunno, got it for $30 with an old Socket 939 PC
PSU: Antec Neo Eco 400W
Motherboard: B75M 2.0
CPU: Intel Celeron G540 (Sandy Bridge), 2.5GHz, socket 1155
RAM: G.Skill 4GB (1 stick only - paid $40, got the cheapest)
Hard drives as follows:
(1) 250GB Seagate (OS)
(3) 2TB Samsung and Toshiba (got for $80 each), plus a Seagate drive that converts between USB 3.0 and SATA if you take the bottom off (got for Christmas)
(2) 1TB Western Digital Green
(2) 500GB Seagate

Mouse: cheap HP mouse
OS: Windows Server 2012 (testing it atm, might go with 2008 - gonna get a friend to get me a free key because I've got a mate in college ^_^)

This is my file server and print server atm; going to mess with VMs shortly.
Also note 2 of the 2TB HDDs aren't in the pictures because I'm moving data off the Toshiba to my Seagate for SnapRAID.

Also, I need more SATA power connectors (I've already used my Molex-to-SATA adapters too). Can I buy add-ons for my PSU, like something that converts one Molex into 4 SATA plugs?


----------



## Ferrari8608

For the love of airflow, please do something with those cables.


----------



## rrims

Quote:


> Originally Posted by *Tankster399*
> 
> *Also, I need more SATA power connectors (I've already used my Molex-to-SATA adapters too). Can I buy add-ons for my PSU, like something that converts one Molex into 4 SATA plugs?*


Here you go

Also, clean those cables up for better airflow


----------



## Plan9

Quote:


> Originally Posted by *Tankster399*


Deta?
Quote:


> Originally Posted by *Ferrari8608*
> 
> For the love of airflow, please do something with those cables.


That doesn't look that bad. At least it's all cable tied up. I've seen _much_ worse.


----------



## lowfat

Not really a server but this is my updated FreeNAS box.

Lian Li PC-Q25B
Celeron G555
Asus P8H77-I
8GB Samsung DDR3
8 x 3TB
30GB X25-E ZIL
80GB X25-M L2ARC
9211-8i-IT

http://s18.photobucket.com/user/tulcakelume/media/PCQ25B/export-8.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/PCQ25B/export-7.jpg.html


----------



## spice003

Quote:
Originally Posted by *Tankster399* 

Also, I need more SATA power connectors (I've already used my Molex-to-SATA adapters too). Can I buy add-ons for my PSU, like something that converts one Molex into 4 SATA plugs?

Something like this? http://www.frozencpu.com/products/16375/cab-947/MaxFinder_Triple_Braided_4-Pin_Molex_to_Quad_SATA_Power_Adapter_Cable_-_35cm_-_Black_MF-OBK-SIQ-35.html?tl=g11c413s1223&id=ksqTS2AN

buncha different ones here

http://www.frozencpu.com/cat/l3/g11/c413/s1223/list/p1/Power_Supplies-PSU_Cables-SATA_Power_Adapters-Page1.html


----------



## dushan24

Quote:


> Originally Posted by *lowfat*
> 
> Not really a server but this is my updated FreeNAS box.
> 
> Lian Li PC-Q25B
> Celeron G555
> Asus P8H77-I
> 8GB Samsung DDR3
> 8 x 3TB
> 30GB X25-E ZIL
> 80GB X25-M L2ARC
> 9211-8i-IT
> 
> http://s18.photobucket.com/user/tulcakelume/media/PCQ25B/export-8.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/PCQ25B/export-7.jpg.html


You're using ZFS yeah?

How does it run with 8 x 3TB drives and only 8GB of RAM?

And what RAIDZ level?

PS: Very nice build


----------



## lowfat

Quote:


> Originally Posted by *dushan24*
> 
> You're using ZFS yeah?
> 
> How does it run with 8 x 3TB drives and only 8GB of RAM?
> 
> And what RAIDZ level?
> 
> PS: Very nice build


RAIDZ1. I get rather poor reads honestly, about 80MB/s over the network. Writes saturate the gigabit connection. I'll upgrade to 16GB of RAM sometime in the near future, although I'm not sure that will help with read speeds.


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> RAIDZ1. I get rather poor reads honestly, about 80MB/s over the network. Writes saturate the gigabit connection. I'll upgrade to 16GB of ram here sometime in the near future, although I am not sure that will help w/ read speeds.


I use a 30GB SSD as a L2ARC cache drive to speed up ZFS reads on my FreeBSD file server
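Adding one is a one-liner if you fancy trying it - the pool and device names below are examples only:

```
# Attach an SSD as an L2ARC (read cache) device to an existing pool:
zpool add tank cache /dev/ada3

# It then shows up under a "cache" heading in:
zpool status tank
```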


----------



## lowfat

Quote:


> Originally Posted by *Plan9*
> 
> I use a 30GB SSD as a L2ARC cache drive to speed up ZFS reads on my FreeBSD file server


I have an 80GB X25-M G2 in there for my L2ARC.


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> I have an 80GB X25-M G2 in there for my L2ARC.


Ahh yes, so you have. Maybe your ZIL drive is affecting performance somehow. Are the SSDs sharing the same SATA controller? If so, could there be some IO saturation happening on that bus?

Also what spec is the Celeron? Is it 64bit?

And lastly, what options have you set in ZFS? Are you deduping?

I'm only running 8GB RAM and I'm fairly certain I get better write performance (around 300MB/s IIRC, but I'd need to verify that)


----------



## dushan24

Also note, a crappy NIC could be the culprit. (I only ever go Intel)

Though in your case I'd suspect a lack of RAM, weak (or non-64 bit) CPU or as Plan9 said, bus saturation.


----------



## Plan9

Quote:


> Originally Posted by *dushan24*
> 
> Also note, a crappy NIC could be the culprit. (I only ever go Intel)
> 
> Though in your case I'd suspect a lack of RAM, weak (or non-64 bit) CPU or as Plan9 said, bus saturation.


I think RAM would affect reads more than writes since RAM is primarily used for caching reads. Plus as I said earlier, I have 8GB RAM (plus likely a whole lot more running on my box than lowfat since I run about half a dozen VMs on there as well) and I have faster writes.

The only ZFS option that comes to mind which could be inflating his RAM requirements is deduplication - but I really wouldn't recommend that on his CPU either.

Re the ZIL drive: it's not recommended to run a lone ZIL drive like that anyway. Since the ZIL is ZFS's intent log, it should be mirrored - if the ZIL drive goes down you can lose data (and given how robust ZFS is, running an unmirrored ZIL drive feels a little like intentionally breaking the file system). Obviously this is just personal use, so lowfat is free to build the system as he chooses, but I wouldn't be happy with that in my own setup (in fact the problem of pairing SSDs is why I didn't bother with a ZIL on my own ZFS storage array).
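If anyone does want a dedicated ZIL without that single point of failure, the mirrored form is just this (pool and device names are illustrative):

```
# Add a mirrored log (SLOG) device pair so one SSD dying can't eat
# in-flight synchronous writes:
zpool add tank log mirror /dev/ada4 /dev/ada5
```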


----------



## DaveLT

Is the G555 so weak that people tend to forget it's a modern Sandy Bridge chip?


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Is a G555 so weak that people tend to forget it's a modern Sandy Bridge Chip


I hadn't realised it was an SB, but even so, deduping is _heavy_ work. Think of it like WinZipping your entire filesystem, on the fly, 24/7.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> I hadn't realised is was a SB, but even so, deduping is _heavy_ work. Think of it like winziping your entire filesystem, on the fly, 24/7.


Gosh ... the minimum I would put in is an L5520. Even so, if it's a single L5520 it's going to be 3.6GHz, or if it's dual L5520s it's going to be 2.4GHz (133x18)


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Gosh ... Minimum i would put in is a L5520. Even so if it's a single L5520 it's going to be a 3.6GHz or if dual L5520 it's going to be 2.4GHz (133x18)


Deduping is intended more for SANs though. e.g. when you have a dozen or so VMs stored over iSCSI.

For normal storage needs, deduping isn't really necessary. And for normal storage needs, a 64bit celery is sufficient.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> Deduping is intended more for SANs though. e.g. when you have a dozen or so VMs stored over iSCSI.
> 
> For normal storage needs, deduping isn't really necessary. And for normal storage needs, a 64bit celery is sufficient.


Calling celeron a Celery ...

It is kinda like the celery of the computing world though - nobody actually wants one, nor uses one


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Calling celeron a Celery ...
> 
> 
> 
> 
> 
> 
> 
> 
> It is kinda like a celery of the computing world though, nobody actually wants one nor use one


IIRC the name harks back to the original Celeron chip, which was just awful. The Celeron II was pretty good though - I used to run a dual-processor motherboard with two Celeron IIs (500MHz) and it ran circles around Pentium IIIs of close to double the spec.

Plus my old Celeron IIs were a dream to overclock. Even with stock coolers, I could push the frequency up by as much as 75% and the thing would still be stable.

I do miss that PC - it was awesome <3


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> IIRC the name harps back to the original Celeron chip which was just awful. The Celeron II was pretty good though - I used to run a dual processor motherboard with two Celeron IIs (500MHz) and it ran circles around Pentium 3's close to double the spec.
> 
> Plus my old Celeron IIs were a dream to overclock. Even with stock coolers, I could push the frequency up by as much as 75% and the thing would still be stable.
> 
> I do miss that PC - it was awesome <3


Those chips were awesome. But I still remember a Socket 370 800MHz (yes, the expensive one!) Pentium III I kept using up until ... 2004







I mean, that thing is still far less power-hungry than any Willamette proc anyway

But meh, those were the days. I can remember 1 AGP and 6 PCI slots, with 2-3 IDE channels and about 768MB of DDR RAM <- yeah, 768!

Next up, I'm grabbing an E5-2667 C0, or a C1 if the C0 has problems with X79. My server is staying as 2x L5639 as I am adding another rig to my room :lol:
I might buy an R4G if it's cheap enough - my friend got one for 200SGD and another 400 for his E5-2660 C1 (which is pretty darn low considering his Z77 + 3770K cost him $700+)


----------



## lowfat

I'm not using deduplication; however, I am using LZ4 compression, which is generally suggested. My CPU usage really doesn't go above 50%. I could try turning it off to see if it helps.

As for the ZIL, it's an X25-E, which uses SLC. I've had the drive for 4 years and it has been solid the entire time. I expect it to last a whole lot longer.


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> I am not using deduplication, however I am using LZ4 compression, which is generally suggested. My CPU usage really doesn't go above 50% usage. I could try turning it off to see if it helps.


I wouldn't disable lz4 as it's supposed to be really efficient (in fact I keep meaning to switch my pools over to lz4 now that I've upgraded)

50% CPU seems a lot for disk IO, though I've never checked my CPU load. How are you measuring your CPU, by the way? Just the load averages in (for example) _uptime_?
Quote:


> Originally Posted by *lowfat*
> 
> As for the zil, it is an X25-E which uses SLC. I've had the drive for 4 years and it has been solid the entire time. I expect it to last a whole lot longer.


I appreciate it's a decent SSD. My point is that you have redundancy on your storage pool so you don't lose data - yet you're trusting all your writes to a single SSD which, if it fails, will also lose data (though admittedly significantly less).

I'm sure you'll be fine though - I just prefer to be safe rather than sorry


----------



## lowfat

Quote:


> Originally Posted by *Plan9*
> 
> I wouldn't disable lz4 as it's supposed to be really efficient (in fact I keep meaning to switch my pools over to lz4 now that I've upgraded)
> 
> 50% CPU seems a lot for disk IO, though I've never checked my CPU load. How are you measuring your CPU, by the way? Just the load averages in (for example) _uptime_?
> I appreciate it's a decent SSD. My point is that you have redundancy drives on your storage pool so you don't lose data - yet you're trusting all your writes to a single SSD which, if it fails, will also lose data (though admittedly significantly less).
> 
> I'm sure you'll be fine though - I just prefer to be safe rather than sorry


There is a CPU usage graph under the reports menu. 50% is the highest I've seen it. Usually it sits @ 5-10%.


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> There is a CPU usage graph under the reports menu. 50% is the highest I've seen it. Usually it sits @ 5-10%.


Ahh I see.

I don't know much about the FreeNAS web interface since I run vanilla FreeBSD. 5-10% is definitely quite high for an idle CPU, but FreeNAS might have some weird stuff running in the background. However, without knowing more about where those figures come from, I would only be guessing (eg CPU load averages are very process orientated, so you can have high CPU loads even when the CPU isn't actually busy, due to networking threads stacking up waiting for a window to send packets*). So a high CPU load might not mean that the compression on the filesystem is causing problems - it might point to network saturation.

With regards to improving your write speeds, did you say how your various SATA devices are connected to your motherboard? Which devices are using onboard controllers, and which are plugged into SATA/RAID PCIe cards (or any other boards you have)?

* I've explained that very badly, but it's getting late here and my brain is shutting down for the night
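For what it's worth, the difference between a load average and actual CPU busyness can be seen by comparing a few stock FreeBSD tools — a sketch, not a prescription:

```shell
# Load averages count runnable (and some waiting) threads,
# so they can look high while the CPU is mostly idle
uptime

# Batch-mode top with per-CPU stats shows real user/system/idle percentages
top -b -P

# iostat shows whether the disks, rather than the CPU, are the busy part
# (2-second interval, 3 samples)
iostat -x 2 3
```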


----------



## nismoskyline

my server for now.


running as a file server for my other two machines (they only have 60GB and 120GB SSDs and it isn't enough)
specs:
Core i7 920;
Radeon 6850;
9GB RAM;
500GB HDD.
I know it's overkill; gonna use it for virtual machine things in the near future though


----------



## aldorgan

Here are my servers: 2x Dell PowerEdge 1850, Compaq Alpha XP900/DS10, Sun E420R with 4x UltraSPARC II 450MHz CPUs.


----------



## lowfat

Quote:


> Originally Posted by *Plan9*
> 
> Ahh I see.
> 
> I don't know much about the FreeNAS web interface since I run vanilla FreeBSD. 5-10% is definitely quite high for an idle CPU, but FreeNAS might have some weird stuff running in the background. However without knowing more about where those figures comes from, I would only be guessing (eg CPU load averages are very process orientated so you can have high CPU loads but where the CPU isn't actually busy due to networking threads stacking up waiting for a window to send packets*). So a high CPU load might not mean that the compression on the filesystem is causing problems - it might point to network saturation.
> 
> With regards to improving your write speeds, did you say how your various SATA devices are connected to your motherboard? What devices are using on board controllers and what are plugged into SATA/RAID PCIe cards (or any other boards you have)
> 
> * I've explained that very badly, but it's getting late here and my brain is shutting down for the night


Actually it is just read speeds that are low. Write speeds are perfectly fine.

I have 7 of the drives hooked up to the 9211-8i. Then the two SSDs and one HDD are hooked up to the H77 PCH.

The NIC is onboard Realtek. Unfortunately since it is an ITX system I have no choice but to use the onboard.


----------



## driftingforlife

RAID question: can I use a SAS expander and a RAID card together? E.g. I have an 8-port card but can fit 15 HDDs in my case; can I use an expander connected to one of the two SFF-8087 ports on the RAID card for a RAID 6 array (basically, can I RAID all the HDDs on the SAS expander with my RAID card)?

Sorry if this is stupid, I have a massive headache atm, I'm derping


----------



## Sean Webster

Quote:


> Originally Posted by *driftingforlife*
> 
> RAID question: can I use a SAS expander and a RAID card together? E.g. I have an 8-port card but can fit 15 HDDs in my case; can I use an expander connected to one of the two SFF-8087 ports on the RAID card for a RAID 6 array (basically, can I RAID all the HDDs on the SAS expander with my RAID card)?
> 
> Sorry if this is stupid, I have a massive headache atm, I'm derping


Yes.


----------



## driftingforlife

Awesome, that saves a lot of hassle, thank you









+REP if I could


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> Actually it is just read speeds that are low. Write speeds are perfectly fine.


Oh sorry mate. Though that's even weirder.
Quote:


> Originally Posted by *lowfat*
> 
> I have 7 of the drives hooked up to the 9211-8i. Then the two SSDs and one HDD are hooked up to the H77 PCH.
> 
> The NIC is onboard Realtek. Unfortunately since it is an ITX system I have no choice but to use the onboard.


That's fine. I've done no actual testing to back up this claim, but in my mind it makes more sense to have the ZIL and L2ARC using onboard ports and the storage array on the PCIe card. But I'm sure that logic could change from motherboard to motherboard, so I'd take that advice with a pinch of salt (though it sounds like you're already doing that)

It's really weird that your reads are slower than your writes though. How are you testing that - random reads and writes, or sequential? Also, have you run any SMART tools against your L2ARC SSD to check it's not failing?
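Checking the SSD with smartmontools is quick; a sketch assuming the drive shows up as `/dev/ada0` (the device name is a placeholder and will differ per system):

```shell
# Overall health verdict from the drive's own self-assessment
smartctl -H /dev/ada0

# Full attribute dump - watch reallocated sectors and wear indicators
smartctl -a /dev/ada0

# Run a short self-test, then read the results a few minutes later
smartctl -t short /dev/ada0
smartctl -l selftest /dev/ada0
```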


----------



## KYKYLLIKA

Quote:


> Originally Posted by *driftingforlife*
> 
> RAID question, can i use a SAS expander and a RAID card together e.g. I have a 8-port card but can fit 15 HDDs in my case, can I use a expander connected to one of the 2 8707 ports on the RAID card for a RAID 6 array (basically can I raid all the HDDs on the SAS expander with my RAID card)?
> 
> Sorry if this is stupid, I have a massive head ache atm, im derping


I know of a certain technology called the "SATA port multiplier". It is used in the famous "Backblaze" build for providing cheap supermassive storage capabilities. You can look up the CFI-B53PM, which is one of the few multipliers available out there. Be wary, however, that such tech will split the bandwidth of your one SATA port between all the drives attached to it, which may create a bottleneck for your drives if you care about these things.


----------



## 21276

I have an old Apple Xserve G5, does that count? Lol


----------



## shadow5555

old setup


Spoiler: Warning: Spoiler!









Spoiler: Warning: Spoiler!







new setup


Spoiler: Warning: Spoiler!









Spoiler: Warning: Spoiler!







I recased my server from my packed 4U short-depth rackmount case with rails included (for sale btw) into a HAF 932, and it's got more room than I know what to do with lol


----------



## SuperMudkip

Went home last week and reorganized my servers; finally got time to pull out the cardboard backplane behind this entertainment center that is in my room. Oddly enough it fits all my servers in one spot. Cleaned up all the wires in the back and added an upward-pushing fan so that the air can be pushed up and out of the corner of the room. Also got me a Linksys KVM for about 4 bucks at the Goodwill Computer Works.


----------



## LDV617

K no pics yet, but I've been working on a frankenserver, mostly from extra work parts.

Case: "Enterprise Server Case" (don't see a brand name; $19 at Micro Center a couple weeks ago, has lots of HDD space though)
CPU: Core 2 Duo e4500 @ 2.2 - Work surplus
Motherboard: Asrock Esata 2+ Conroe - $20 craigslist
RAM: 2x 1gb Samsung DDR2 - Work surplus
HDD: 2x WD Red 1tb - Bought both of these for $39 each from another IT pro who was parting out some client servers
HDD: 1x WD Blue 200gb - Work surplus, currently serves as my external HDD to bring ISOs and Software from work back home to upload to the server.
Graphics: EVGA GT520 refurb - $15 craigslist
PSU: Raidmax 450watt - Leftover from after replacing the raidmax included PSU
OS: Right now I'm using Windows 7 because it's quick and easy. I plan on buying a 64GB USB drive to run Ubuntu Server / Windows Server 2008 off of.

Teamviewer for use at home and at work (no monitor / keyboard / mouse setup)

So far it's nothing special, but once my adapters come I'm going to return my Conroe to replace it with a 771 Xeon.







I have spent about $100 on it over the past 2 months and finally got an OS on it yesterday. I have set it up using Teamviewer (after drivers / deployment). I moved 500GB of data from my local setup onto it so far. I don't plan on deleting anything from my local machines quite yet, but after a few weeks of testing / tweaking and maybe changing the OS, it will be the media center for my house. (Definitely got to grab a gigabit PCI card from work before that happens)


----------



## SuperMudkip

Quote:


> Originally Posted by *LDV617*
> 
> K no pics yet, but I've been working on a frankenserver, mostly from extra work parts.
> 
> Case: "Enterprise Server Case" (don't see a brand name; $19 at Micro Center a couple weeks ago, has lots of HDD space though)
> CPU: Core 2 Duo e4500 @ 2.2 - Work surplus
> Motherboard: Asrock Esata 2+ Conroe - $20 craigslist
> RAM: 2x 1gb Samsung DDR2 - Work surplus
> HDD: 2x WD Red 1tb - Bought both of these for $39 each from another IT pro who was parting out some client servers
> HDD: 1x WD Blue 200gb - Work surplus, currently serves as my external HDD to bring ISOs and Software from work back home to upload to the server.
> Graphics: EVGA GT520 refurb - $15 craigslist
> PSU: Raidmax 450watt - Leftover from after replacing the raidmax included PSU
> OS: Right now I'm using Windows 7 because it's quick and easy. I plan on buying a 64GB USB drive to run Ubuntu Server / Windows Server 2008 off of.
> 
> Teamviewer for use at home and at work (no monitor / keyboard / mouse setup)
> 
> So far it's nothing special, but once my adapters come I'm going to return my Conroe to replace it with a 771 Xeon.
> 
> 
> 
> 
> 
> 
> 
> I have spent about $100 on it over the past 2 months and finally got an OS on it yesterday. I have set it up using Teamviewer (after drivers / deployment). I moved 500gb of data from my local setup onto it so far. I don't plan on deleting anything from my local machines quite yet, but after a few weeks of testing / tweaking and maybe changing the OS, it will be the media center for my house. (Definitely got to grab a gigabit PCI card from work before that happens)


Is the case a rackmount or is it a tower?


----------



## LDV617

Mid-sized tower. Wish I had a rackmount server; I've been scouting craigslist for rackmounts but haven't found anything in my price range.


----------



## pe4nut666

Quote:


> Originally Posted by *KYKYLLIKA*
> 
> I know of a certain technology called the "SATA port multiplier". It is used in the famous "Backblaze" build for providing cheap supermassive storage capabilities. You can look up the CFI-B53PM, which is one of the few multipliers available out there. Be wary, however, that such tech will split the bandwidth of your one SATA port between all the drives attached to it, which may create a bottleneck for your drives if you care about these things.


Hello, I am new to this, but how many drives can a single SATA 6Gbps port manage before the bandwidth is too bottlenecked for HD media to play right? Sorry, probably a stupid question, but I can't find the answer on Google


----------



## TheReciever

Depends on your drives in question I would think? SSD or HDD?


----------



## pe4nut666

Quote:


> Originally Posted by *pe4nut666*
> 
> Hello, I am new to this, but how many drives can a single SATA 6Gbps port manage before the bandwidth is too bottlenecked for HD media to play right? Sorry, probably a stupid question, but I can't find the answer on Google


HDD 3gbps


----------



## Master__Shake

deleted


----------



## TheReciever

Quote:


> Originally Posted by *pe4nut666*
> 
> HDD 3gbps


Then you have to factor the performance of those drives, as well as how many you plan to use


----------



## pe4nut666

I am looking to use 4x 3TB green drives, and I got one of these from my friend: http://www.newegg.ca/Product/Product.aspx?Item=N82E16816124043 It will be hooked to a SATA 6Gbps port. Wondering if this will bottleneck the port; all it needs to do is read and write my HD media files


----------



## DaveLT

Quote:


> Originally Posted by *pe4nut666*
> 
> hello i am new this but how many drives can a single sata 6gbps port mange before the bandwidth is too bottlenecked for hd media to play right? sorry probly a stupid question but can't find the answer on google


Most 2-3TB 7200rpm HDDs peak out at 200MB/s, so 3 of them. Low-access-time 1TB 7200rpm HDDs peak out at 133MB/s.

Really depends on what drives you are using, but they are mostly only 200MB/s to begin with anyway, and they will fall to 160MB/s soon enough (megabytes, not megabits)
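As a rough sanity check of that math (the 600MB/s figure assumes a SATA 6Gb/s link loses about 20% to 8b/10b line encoding; the drive speed is the quoted 200MB/s peak):

```shell
#!/bin/sh
# Usable bandwidth of a SATA 6 Gb/s link after 8b/10b line encoding
port_mb_s=600
# Outer-track peak of a typical 2-3TB 7200rpm drive
drive_mb_s=200

# How many drives can run flat out before the shared link saturates
echo $(( port_mb_s / drive_mb_s ))
```

which prints 3, matching the "so 3 of them" above.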


----------



## LDV617

I have 3 users simultaneously reading/writing data to the server at home, and many more at work and client setups. File-sharing servers take a LOT of use to become bottlenecked. Make sure you have a gigabit card and 6Gb/s SATA and you have nothing to worry about.
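To put a number on why the gigabit card is the thing to check first (line-rate arithmetic only; real-world Ethernet throughput is a bit lower after protocol overhead):

```shell
#!/bin/sh
# Gigabit Ethernet line rate in megabits, divided by 8 bits per byte
nic_mb_s=$(( 1000 / 8 ))

# ~125 MB/s ceiling - less than a single modern 7200rpm drive can
# stream sequentially, so the LAN saturates before the SATA side does
echo "$nic_mb_s"
```

which prints 125.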


----------



## KYKYLLIKA

Quote:


> Originally Posted by *pe4nut666*
> 
> hello i am new this but how many drives can a single sata 6gbps port mange before the bandwidth is too bottlenecked for hd media to play right? sorry probly a stupid question but can't find the answer on google


You will have to identify the manner of HD media you want to play back. For example, if you are doing realtime 250Kbps video, then you will need slightly more than 250Kbps; there's going to be overhead. I do not know exactly the limits, but somehow I do not think there will be more than four-way splitters around, and if there were, I am not sure it would be healthy to use more than four drives on one connector. Shared between them, a SATA 3Gb/s port will allow 1.5Gbps, which is more than enough for storage needs, but you will find it sluggish playing any major game titles off of that or doing 4K video. This kind of tech was designed for cheap storage, not active work. I know people who use a 10Gbps connection to do photo processing, and at least one person is thinking about getting more bandwidth for it.


----------



## pe4nut666

Planning on using it to serve up HD TV content and movies. Max file sizes will be under 8 gigs; just looking for a temporary solution till I rebuild


----------



## SuperMudkip

Quote:


> Originally Posted by *LDV617*
> 
> Mid sized tower. Wish I had a rackmount server, I've been scouting craigslist for rackmounts but haven't found anything in my price range.


Yea, I know. It's hard to find server cases on craigslist. I was lucky enough to find a 4U chassis a couple of months ago. Just recently I got 6 1U server cases (with hardware and whatnot) for about $90. Try looking on eBay.


----------



## Muskaos

WHS 2011 server, with a Synology DS411j on top. 18.2TB of usable space in the former, 9.1TB in the latter. Will add some second-hand iron later for playing around with, but in the meantime these two serve up media with ease.


----------



## spice003

Case: Rocketfish (Lian Li)
CPU: Xeon L5520, soon to be replaced by an L5639
Motherboard: Gigabyte GA-X58A-UD3R
RAM: G.Skill 8GB
HDD: OS WD120 Blue, Samsung 1.5TB, Samsung 750GB, 2x Seagate 500GB
Rosewill RSV-Cage 4x 3.5" HDD cage
PSU: Antec TruePower 750
OS: Windows 7, running Plex server, file server, VM server via RDP, torrent box.


----------



## stumped

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *spice003*
> 
> Case: Rocketfish (Lian Li)
> CPU: Xeon L5520 soon to be replaced by L5639
> Motherboard: Gigabyte GA-X58A-UD3R
> RAM: G.Skill 8GB
> HDD: OS WD120 Blue, Samsung 1.5TB, Samsung 750GB, 2x Seagate 500GB
> Rosewill RSV-Cage 4 x 3.5" HDD Cage
> PSU: Antec true power 750
> OS: Windows 7, running Plex server, file server, VM server via RDP, torrent box.
> 
> http://www.overclock.net/content/type/61/id/1748429/width/500/height/1000
> http://www.overclock.net/content/type/61/id/1748431/width/500/height/1000





What GPU is that in your server?


----------



## spice003

nvidia g210


----------



## stumped

Quote:


> Originally Posted by *spice003*
> 
> nvidia g210


ah, thanks.


----------



## LDV617

Just found another IT guy in Boston who is parting out old client servers and selling cheap refurbished enterprise drives; looks like they are all WD. He's got a few postings on Craigslist if anyone else is interested. I don't have the need yet, but I'll be buying a few 1TB drives from him when I see a good deal.


----------



## NKrader

I'm looking for a good, cheaply priced single-bay SAS LTO tape drive; anyone have any leads other than fleabay?


----------



## Oedipus

No, and I'm not sure I'd bother even if I found a cheap one. Single-drive LTOs are without a doubt the most consistently unreliable and fragile pieces of server hardware I have ever worked with. Typically we have to replace them at least once a year.


----------



## DaveLT

Quote:


> Originally Posted by *Oedipus*
> 
> No, and I'm not sure I'd bother even If I found a cheap one. Single drive LTOs are without a doubt the most consistently unreliable and fragile piece of server hardware I have ever worked with. Typically we have to replace them at least once a year.


You only have to change them once a year?! When my dad worked with single drive LTOs they had to change it every 2 MONTHS.


----------



## TheNegotiator

Speaking of tape drives, I have a Dell ML6000 sitting in my closet with over 10TB of storage capacity. Does anyone know of some sub-$300 (or even better, free) backup utilities that work with a tape drive and Windows Server 2012?


----------



## herkalurk

Quote:


> Originally Posted by *DaveLT*
> 
> You only have to change them once a year?! When my dad worked with single drive LTOs they had to change it every 2 MONTHS.


What are you doing to your drives to cause this? LTO is very consistent storage. I have a LTO2 drive at home and use LTO5 at work with no issues at all. Even have hardware encryption at work.


----------



## DaveLT

Quote:


> Originally Posted by *herkalurk*
> 
> What are you doing to your drives to cause this? LTO is very consistent storage. I have a LTO2 drive at home and use LTO5 at work with no issues at all. Even have hardware encryption at work.


IIRC they used it for checking the tapes. They didn't actually use a tiny tape drive for storage either - a NEO 9000 in the storage room, fully loaded


----------



## ElectroGeek007

The second incarnation of my home server is now up and running. This replaces a motherboard with only two SATA ports, so it was definitely time for an upgrade. I thought about buying a C1100 as the price is quite tempting, but I have no need for such power and so decided to use hardware I already had lying around instead; I only had to buy the CPU to finish up the build.







Coming soon: more storage capacity, possibly more RAM (although it isn't needed with the current usage)

Usage: file sharing/backup, streaming media, Minecraft server, seedbox, BTSync

OS: Ubuntu Server Edition 13.10 x64
Case: Zalman Z9
CPU: Intel Core i3 3220T (35w low TDP version)
Motherboard: Asus P8Z68-V LX
Memory: 1x4GB G.Skill 1333 MHz DDR3
PSU: Antec EarthWatts Green EA-430D
OS HDD (If you have one): 160GB Seagate 2.5" HDD
Storage HDD(s): Toshiba DT01ACA300 3TB HDD
Server Manufacturer: Me


----------



## cdoublejj

Got this from the recycle bin

http://www.overclock.net/t/1445065/got-this-from-the-recycle-bin



After pics


----------



## Sarec

Nice find! Non-techs seem to be used to tossing old hardware, not realizing that all current-day hardware has a use in some other application. They still think, "It's old, probably not worth anything".


----------



## DaveLT

Quote:


> Originally Posted by *Sarec*
> 
> Nice find! Non-techs seem to be used to tossing old hardware not realizing that all current day hardware has a use in some other application. They still think, "It's old, probably not worth anything".


It's hardly old lol; it just needs 2 L5639 six-cores and it'll be zipping away. And probably faster than a six-core Sandy in the same price bracket


----------



## LDV617

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *cdoublejj*
> 
> Got this from the recycle bin
> 
> http://www.overclock.net/t/1445065/got-this-from-the-recycle-bin
> 
> 
> 
> After pics






Amazing find. Good job!


----------



## Mikey976

you know who's a hater

<<<<---- this guy right here


----------



## Sarec

Quote:


> Originally Posted by *DaveLT*
> 
> It's hardly old lol, it just needs 2 L5639 six cores and it'll be zipping away. And probably faster than the same price bracket six-core Sandy


We know it is not old, but the person who tossed it probably thought it was dated the moment it was bought. Non-techs have some odd views.


----------



## DaveLT

Quote:


> Originally Posted by *Sarec*
> 
> We know it is not old but the person who tossed it probably thought it was dated the moment it was bought. Non techs have some odd views.


Exactly.


----------



## danilon62

CPU: A4 5300 w/AMD FX 8120 stock cooler (much better than the stock one...)
Mobo: Gigabyte F2A75M-D3H
RAM: 4GB Silicon Power DDR3 @ 1600
Data HDD: Seagate Barracuda HDD (500GB data)
Boot HDD: Toshiba HDD (120GB boot)
PSU: Tooq 400W PSU
Case: Fractal Design Core 1000




The case has zero cable management so I had to do it my own way; I think I did a good job with it


----------



## BWG

Fold those things in Coremageddon!


----------



## NKrader

Quote:


> Originally Posted by *BWG*
> 
> Fold those things in Coremageddon!


lol

worldcommunitygrid > folding@home


----------



## BWG

Did it win a Nobel Prize too?









OCN doesn't have one of those teams, but we do have a folding team. The guys folding on their servers enjoy the little side event quite a bit.


----------



## NKrader

Quote:


> Originally Posted by *BWG*
> 
> Did it win a Nobel Prize too?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *OCN doesn't have one of those teams*, but we do have a folding team. The guys folding on their servers enjoy the little side event quite a bit.


http://www.overclock.net/f/365/overclock-net-boinc-team


----------



## BWG

You're so predictable


----------



## NKrader

Quote:


> Originally Posted by *BWG*
> 
> You're so predictable


Finished dedicated cruncher!
Looks so nice sitting next to my file server


----------



## Jakeey802




----------



## Wildcard36qs

The angle of those pics makes it look like tall racks lol


----------



## NKrader

Quote:


> Originally Posted by *Wildcard36qs*
> 
> The angle of those pics makes it look like tall racks lol


lol, can I post this link here?


----------



## BWG

I really like it. Good job.


----------



## spice003

I love that Lian Li case, too bad it's so damn expensive.


----------



## NKrader

Quote:


> Originally Posted by *spice003*
> 
> I love that Lian Li case, too bad it's so damn expensive.


About a year and a half ago, before I started doing this server, I was trying to sell it here; no one even messaged when I put the price down to $150 with free shipping, so that's when I decided to repurpose it into a file server instead of a gaming rig

Now I have well over $700 into *just the case* when including case + powdercoat + MDPC hardware + Supermicro hot-swap


----------



## Muskaos

I have to ask, though: why two optical drives? Ripping DVD/Blu-ray? You could put a drive cage there...


----------



## lowfat

Quote:


> Originally Posted by *NKrader*
> 
> About a year and a half ago, before I started doing this server, I was trying to sell it here; no one even messaged when I put the price down to $150 with free shipping, so that's when I decided to repurpose it into a file server instead of a gaming rig
> 
> Now I have well over $700 into *just the case* when including case + powdercoat + MDPC hardware + Supermicro hot-swap


Lian Li doesn't get much love these days. Everyone wants huge gaudy steel monsters now.


----------



## DaveLT

Quote:


> Originally Posted by *lowfat*
> 
> Lian Li doesn't get much love these days. Everyone wants huge gaudy steel monsters now.


I still want a full aluminum case. Lian Li doesn't get much love these days because they haven't been terribly up to date







What people want now in an aluminum case is a CL M8


----------



## u3b3rg33k

IDK man - my PC-Z60B is awesome. I helped it out a bit by adding a shrouded 2x 120mm watercooling setup, but it was pretty before I did that, too.


The picture does the finish a horrible injustice. It's really fun to look at.

FYI it comes with a lockable, hot-swap SATA backplane. That's cool.
I hear they have a bigger version now.


----------



## Muskaos

Quote:


> Originally Posted by *lowfat*
> 
> Lian Li doesn't get much love these days. Everyone wants huge gaudy steel monsters now.


Doesn't help that they want two arms and a leg for one of their cases...


----------



## lowfat

Quote:


> Originally Posted by *Muskaos*
> 
> Doesn't help that they want two arms and a leg for one of their cases...


I don't think they are badly priced compared to other full aluminum cases.


----------



## DaveLT

Quote:


> Originally Posted by *lowfat*
> 
> I don't think they are badly priced compared to other full aluminum cases.


I agree. Just that they are a bit out of date. Stuff like ... At least a powder-coated motherboard tray ... cutouts ... removable hard disk trays ... Nope, they don't have any of that.


----------



## NKrader

Quote:


> Originally Posted by *Muskaos*
> 
> Doesn't help that they want two arms and a leg for one of their cases...


It's because they aren't made out of cheap materials like plastic and steel, like, say, Corsair's are..
Quote:


> Originally Posted by *lowfat*
> 
> I don't think they are badly priced compared to other full aluminum cases.


True story. People compare flashy cases with these and expect these to be cheaper because they have "less". A lot of the stupid flashy stuff that comes with a lot of the bigger-name cases (Corsair/Antec/etc.) would make me NOT want to buy Lian Li.


----------



## bobfig

imo fractal > lian li


----------



## SamKook

I've been meaning to post a picture of my 44U server rack for a while, but since I wanted to clean it up first, I kept putting it off.

Since it doesn't look like I'll do that in the near future, but I cleaned the room which allowed me to at least move it out of the corner and take a full pic of it, I thought I might as well post one anyway.



From top to bottom:
Half-hidden old Dlink router for the wireless.

Netgear 24 port managed gigabit switch.

Supermicro SYS-5015A-EHF-D525 running pfsense for which I need to find a quieter PSU fan.

Really, really, really old PC monitor.

Old server, in both senses of the term, which I need to get some data from.

My current server, which is pretty much this but with an SSD for the OS and a total of 12 WD Red drives.

And below it, an imaginary virtualization server which I don't have enough money for yet.

And if I can repair one of the 4 UPSes I have, there would be one in there.


----------



## mrkambo

*Hardware:*

CASE: Bitfenix Shinobi
PSU: Corsair CX 430
MB: Asus Maximus V Formula
CPU: Intel i3-3225
HS: Stock Heatsink
RAM: Corsair Vengeance Pro 8GB 1600MHz @ XMP
RAID CARD: LSI MegaRAID 9260-16i
SSD: Kingston value 30GB
HDD 1: Seagate 1.5TB
HDD 2: WD Red 2TB
HDD 3: WD Red 2TB
HDD 4: WD Red 2TB
HDD 5: WD Red 2TB
HDD 6: WD Red 2TB

*Software & Config:*
Windows Server 2008 R2
The SSD is strictly for OS and software installation only; the Seagate acts as
a temporary/dump drive for files that need to be worked on and added to the
RAID array. The WD Reds are set up in RAID 6 (2-drive parity) and give 5.5TB
of usable storage capacity.
The server runs headless, just sits connected to the router; all maintenance is
carried out via RDP
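That usable figure is easy to verify. A quick sketch of the RAID 6 arithmetic (sizes in decimal TB, with the TiB conversion that Windows labels "TB" noted in the comments):

```shell
#!/bin/sh
drives=5     # WD Red 2TB members in the array
size_tb=2    # capacity of each, decimal terabytes
parity=2     # RAID 6 spends two drives' worth of space on parity

# Usable capacity = (members - parity) * member size
usable_tb=$(( (drives - parity) * size_tb ))
echo "$usable_tb"

# 6 decimal TB = 6e12 bytes / 2^40 bytes-per-TiB ~= 5.46 TiB,
# which Windows reports as roughly 5.5 "TB"
```

which prints 6.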

*Usage:*
Primary use for this is just a file server for my HTPC which is a RaspberryPI
running XBMC. I may in the near future setup the printer and playing around
with virtualbox on it, but for now just a file server.

*Other:*
Most of the parts in the server came from my old gaming build. When I was
first building it I really wanted to get a Xeon for it, but that meant I'd need a
video card and stuff, and it essentially would have ended up costing more, so I
went with the i3

*Pictures:*


----------



## NKrader

Quote:


> Originally Posted by *mrkambo*
> 
> *Hardware:*
> 
> CASE: Bitfenix Shinobi
> PSU: Corsair CX 430
> MB: Asus Maximus V Formula
> CPU: Intel i3-3225
> HS: Stock Heatsink
> RAM: Corsair Vengence Pro 8GB 1600mhz @ XMP
> RAID CARD: LSI MegaRAID 9260-16i
> SSD: Kingston value 30GB
> HDD 1: Seagate 1.5TB
> HDD 2: WD Red 2TB
> HDD 3: WD Red 2TB
> HDD 4: WD Red 2TB
> HDD 5: WD Red 2TB
> HDD 6: WD Red 2TB
> 
> *Software & Config:*
> Windows Server 2008 R2
> The SSD is stricly for OS and software installation only, the Seagate acts as
> a temporary/dump drive for files that need to be worked on and added to the
> RAID array. The WD Reds are setup in RAID 6 (2 drive parity) and give a 5.5TB
> usable storage capacity.
> Server runs headless, just sits connected to the router, all maintenance is
> carried out via RDP
> 
> *Usage:*
> Primary use for this is just a file server for my HTPC which is a RaspberryPI
> running XBMC. I may in the near future setup the printer and playing around
> with virtualbox on it, but for now just a file server.
> 
> *Other:*
> Most of the parts in the server came from my old gaming build, and when I
> was first building it i really wanted to get a Xeon for it, but meant id need a
> video card and stuff and essentially would of ended up costing more so went
> with the i3
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> *Pictures:*


Lucky... I'm jelly of the storage array. I was looking at that card, and want to pick up 5x 2TB RE4s to start my array.

SOO
MUCH
MONEYS,...


----------



## mrkambo

Quote:


> Originally Posted by *NKrader*
> 
> lucky.. im jelly of storage array.. i was looking at that card.. and want to pickup 5 2tb re4 to start my array..
> 
> SOO
> MUCH
> MONEYS,...


I got quite lucky with the drives; Amazon had them up for £72 per drive with free next-day delivery, so I jumped at the chance


----------



## DaveLT

What you have is the Shinobi XL, not the Shinobi







Sorry for nitpicking.
Otherwise ... LGA1155 does have Xeons with iGPUs. It does!
Anyway, look at changing out that PSU; I wouldn't see it lasting long, as it's a very mediocre PSU and might possibly take your rig with it


----------



## LDV617

Quote:


> Originally Posted by *DaveLT*
> 
> What you have is the shinobi XL not the shinobi
> 
> 
> 
> 
> 
> 
> 
> Sorry for nitpicking.
> Otherwise ... LGA1155 do have Xeons with iGPUs. They do!
> Anyway. Look to changing out that PSU, i wouldn't see it lasting long as it's a very mediocre PSU and might possibly take your rig with it


I second that


----------



## NKrader

Looks like I will be buying the parts for my second dedicated cruncher tomorrow.

Dual Supermicros running side by side! 24 cores!

Also mrkambo, those blank PCI slots without shields hurt my brain..


----------



## Muskaos

So, is there any kind of disk space capacity war in this thread? I'm kinda curious, as I've seen some pretty significant storage arrays in here.


----------



## Plan9

Quote:


> Originally Posted by *Muskaos*
> 
> So, is there any kind of disk space capacity war in this thread? I'm kinda curious, as I've seen some pretty significant storage arrays in here.


To be honest, the biggest war you see here is between those with epic-spec servers (not just in terms of storage) arguing why they need a machine that beefy vs. those with lower-powered servers arguing why you don't need a beefy machine as a home server.


----------



## racer86

My old server before I moved everything over to removable media usb/esata enclosures. Worked mostly as a file / web server

Lian Li PC-K65
Phenom II X4 B93
Biostar 890GX AM3 mATX
6GB DDR3 1866
4x WD Red 2TB
1x Caviar Green 1TB
1x Scorpio Blue 1TB
PERC 5/i controller


----------



## mrkambo

Quote:


> Originally Posted by *DaveLT*
> 
> What you have is the Shinobi XL, not the Shinobi
> 
> 
> 
> 
> 
> 
> 
> Sorry for nitpicking.
> Otherwise... LGA1155 does have Xeons with iGPUs. They do!
> Anyway, look into changing out that PSU. I wouldn't expect it to last long, as it's a very mediocre PSU, and it might take your rig with it


Sorry, that's alright; I know it's an XL, it was just an effort to type









Quote:


> Originally Posted by *LDV617*
> 
> I second that


What PSU do you recommend?

Quote:


> Originally Posted by *NKrader*
> 
> also mrkambo those blank pci slots without shields in hurt my brain..


Sorry dude, but it isn't on show, so I'm not too bothered about how it looks


----------



## DaveLT

Quote:


> Originally Posted by *mrkambo*
> 
> Sorry, that's alright; I know it's an XL, it was just an effort to type
> 
> 
> 
> 
> 
> 
> 
> 
> What PSU do you recommend?
> Sorry dude, but it isn't on show, so I'm not too bothered about how it looks


Something like a Rosewill Capstone 450:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817182066


----------



## mrkambo

Quote:


> Originally Posted by *DaveLT*
> 
> Something like a Rosewill capstone 450
> http://www.newegg.com/Product/Product.aspx?Item=N82E16817182066


I'll have a look into that, cheers


----------



## NKrader




----------



## BWG

Does it fold well?


----------



## NKrader

Quote:


> Originally Posted by *BWG*
> 
> Does it fold well?


I do World Community Grid, but each one does a little more than 2x a Q6600 at less than 200 watts each.

For less than $350 total invested in each of them it's pretty decent. Not the best in power usage, but I love 2P crunchers and can't afford the upfront capital required for a more efficient rig. Undervolted correctly, they are right around 2600k points per kWh


----------



## NKrader

Been crunching for days (100% CPU 24/7) and running nice and cold.


----------



## NKrader

Purchased four 2419 EEs to replace the 2419s in the crunchers. All stats stay the same apart from TDP, which goes from 115W to 60W for each CPU: 220W of TDP savings across my two rigs.

Finished one of them! Got the final part installed today!


----------



## Rian

First server build; very overkill, but the parts were dirt cheap, so why not really. The server is mainly used for file storage and as a Plex media server for quite a few clients.

Specs are as follows:

Mobo: P9X79 PRO
CPU: Intel Xeon 1620 @ Stock
RAM: 16GB DDR3 Kingston Red @ 1600MHz
PSU: Be Quiet! 430w
GPU: Nvidia GT210
RAID Card: RocketRAID 2720SGL
HDD: 8x Toshiba 3TB 7200rpm
SSD: 2x 120GB SSD (Kingston SSDNOW & Crucial M500)
Case: Fractal Design Define

Storage setup:
I run the 8 HDDs through the RocketRAID simply as an interface; with 6Gb/s compatibility and 3TB+ support the card was worth the buy, as only two ports on my board are SATA3 and they are both used for my SSDs.
Hardware RAID was not appealing to me, so I went with FlexRAID using tRAID, which gives me one-disk redundancy. One major plus is that I am able to flexibly add drives to the pool without disruption, and read/write speeds are only limited to that of a single drive, so writing to and reading from my pool is fast enough for data and even some games I don't want to keep on my rig.

One thing worth mentioning also is that originally the plan was to keep this in our storage cupboard because of heat/noise and simply the size of the thing, but this case is *so damn quiet* that it found a place in the living room. The only time I hear the machine is when an HDD spins up, and even that gets drowned out by ambient noise. Fractal Design +1
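The pooling behaviour described above (files live whole on individual member drives, so speed is bounded by one drive, and new drives can join at any time) can be sketched as a toy allocator. This is only an illustration of the general idea, not tRAID's actual logic; the drive names and sizes are made up:

```python
# Toy illustration of drive pooling: each file is written whole to a
# single member drive, and new drives can join the pool on the fly
# without touching existing data.
class Pool:
    def __init__(self):
        self.drives = {}   # drive name -> free bytes
        self.files = {}    # file name -> (drive, size)

    def add_drive(self, name, free_bytes):
        # Joining the pool never requires reshaping existing data.
        self.drives[name] = free_bytes

    def write(self, filename, size):
        # Place the file on the drive with the most free space.
        drive = max(self.drives, key=self.drives.get)
        if self.drives[drive] < size:
            raise IOError("pool full")
        self.drives[drive] -= size
        self.files[filename] = (drive, size)
        return drive

pool = Pool()
pool.add_drive("disk1", 3_000_000)
pool.add_drive("disk2", 1_000_000)
print(pool.write("movie.mkv", 2_000_000))   # disk1
pool.add_drive("disk3", 4_000_000)          # grow the pool on the fly
print(pool.write("backup.zip", 3_500_000))  # disk3
```

Reads and writes each hit only one physical drive, which is why throughput tops out at single-drive speed.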


















My very modest networking setup


----------



## xNovax

Quote:


> Originally Posted by *Rian*
> 
> First server build; very overkill, but the parts were dirt cheap, so why not really. The server is mainly used for file storage and as a Plex media server for quite a few clients.
> 
> Specs are as follows:
> 
> Mobo: P9X79 PRO
> 
> 
> Spoiler: Snip!
> 
> 
> 
> CPU: Intel Xeon 1620 @ Stock
> RAM: 16GB DDR3 Kingston Red @ 1600MHz
> PSU: Be Quiet! 430w
> GPU: Nvidia GT210
> HDD: 8x Toshiba 3TB 7200rpm
> SSD: 2x 120GB SSD (Kingston SSDNOW & Crucial M500)
> Case: Fractal Design Define
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My very modest networking setup


That's a very nice motherboard for a file server.


----------



## k1mz3

*Rian*: What about the controller, and do you run any RAID? Otherwise a very sweet setup!


----------



## Rian

Quote:


> Originally Posted by *k1mz3*
> 
> *Rian*: What about the controller, and do you run any RAID? Otherwise a very sweet setup!


Completely forgot about the storage setup; updated the original post. Thanks bud


----------



## k1mz3

*Rian*: I've never heard about tRAID from FlexRAID, but it looks very nice! Is it a standalone system like FreeNAS / NAS4Free?


----------



## Rian

Quote:


> Originally Posted by *k1mz3*
> 
> *Rian*: I've never heard about tRAID from FlexRAID, but it looks very nice! Is it a standalone system like FreeNAS / NAS4Free?


No, it runs on top of Windows, which is one of the reasons I chose it, since the system is far too powerful for a dedicated FreeNAS box; this way I can run CouchPotato, SickBeard, IIS, etc.
I was a bit dubious at first as it isn't talked about as much, but it seems to be running great and the price is pretty reasonable.
As stated before, I can also simply add a disk to the system and add it into any of my FlexRAID pools, and it will just work without any reconstruction/formatting, even if that drive *already* has data on it. Check it out


----------



## Plan9

You can run all of the above on freenas as well.


----------



## Rian

Quote:


> Originally Posted by *Plan9*
> 
> You can run all of the above on freenas as well.


Even IIS? :S

I was just saying that even if you can, you're not nearly as restricted in what you run on Windows as opposed to FreeNAS.


----------



## Plan9

Quote:


> Originally Posted by *Rian*
> 
> Even IIS? :S
> 
> I was just saying that even if you can, you're not nearly as restricted in what you run on Windows as opposed to FreeNAS.


Not IIS, but then most of the connected world doesn't run IIS web servers either. FreeNAS will happily run Apache and/or nginx though (in fact it will come with one of them pre-installed anyway, since the GUI management is over HTTP).

With regards to more stuff running on Windows, that's complete rubbish. This is a server you're building, not a gaming machine. There's just as much server-side software available for Linux and UNIX as there is for Windows (and some of what is available for Windows is ported from Linux/UNIX anyway).

I can understand you wanting to choose Windows because that's what you're more familiar with; I have no qualm with that. But that's a whole other point


----------



## Rian

Quote:


> Originally Posted by *Plan9*
> 
> Not IIS, but then most of the connected world doesn't run IIS web servers either. FreeNAS will happily run Apache and/or nginx though (in fact it will come with one of them pre-installed anyway, since the GUI management is over HTTP).
> 
> With regards to more stuff running on Windows, that's complete rubbish. This is a server you're building, not a gaming machine. There's just as much server-side software available for Linux and UNIX as there is for Windows (and some of what is available for Windows is ported from Linux/UNIX anyway).
> 
> I can understand you wanting to choose Windows because that's what you're more familiar with; I have no qualm with that. But that's a whole other point


Fair enough, I didn't know that it had a wide variety of apps like that; I kinda stopped paying attention after I found out adding additional storage would not be as easy as just adding a drive.
You are correct though; I just wanted a fully functional OS. I almost went with Ubuntu, but I am not familiar enough with Linux to troubleshoot anything I might run into.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> Not IIS, but then most of the connected world doesn't run IIS web servers either. FreeNAS will happily run Apache and/or nginx though (in fact it will come with one of them pre-installed anyway, since the GUI management is over HTTP).
> 
> With regards to more stuff running on Windows, that's complete rubbish. This is a server you're building, not a gaming machine. There's just as much server-side software available for Linux and UNIX as there is for Windows (and some of what is available for Windows is ported from Linux/UNIX anyway).
> 
> I can understand you wanting to choose Windows because that's what you're more familiar with; I have no qualm with that. But that's a whole other point


I will second this. While there are plenty of external websites running IIS, there are more running Apache and a ton of other web servers. I prefer a combination of nginx, PHP-FPM, and Varnish for a lightning-fast web server.
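A minimal sketch of the nginx side of such a stack (the ports, paths, and socket location here are illustrative placeholders, not tycoonbob's actual config): Varnish would listen on port 80 and forward cache misses to this backend, which hands `.php` requests to a PHP-FPM pool:

```nginx
# nginx as the backend on :8080; Varnish sits in front on :80 and
# forwards cache misses here.
server {
    listen 127.0.0.1:8080;
    root /var/www/example;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Hand PHP off to a local PHP-FPM pool over its unix socket.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

The appeal of the layering is that Varnish serves hot pages straight from memory, so PHP-FPM only gets hit for uncached or uncacheable requests.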


----------



## Plan9

Quote:


> Originally Posted by *Rian*
> 
> Fair enough, I didn't know that it had a wide variety of apps like that; I kinda stopped paying attention after I found out adding additional storage would not be as easy as just adding a drive.
> You are correct though; I just wanted a fully functional OS. I almost went with Ubuntu, but I am not familiar enough with Linux to troubleshoot anything I might run into.


FreeNAS is based on FreeBSD, so it has roughly the same software support Linux would. In fact, my home server is FreeBSD









_re: I found out adding additional storage would not be as easy as just adding a drive._:

Depends on how you want your setup to work (type of RAID, redundancy disks, etc). It is possible, but not always practical. For what it's worth, I do think the setup you've chosen makes the most sense for you. I might be quite a big FOSS / FreeBSD advocate, but I wouldn't ever push a solution if I didn't think the user would want it


----------



## DaveLT

IIS is easy to configure and optimize, but it's not easy to manage AND it's very hacker-prone. Nginx and Apache over IIS anytime







Nginx is actually easier to optimize though


----------



## opty165

Link to what hardware my server has is in my sig below.

Drives pooled using StableBit Drive Pool for Windows Home Server - Non duplicated at the moment.

7.73TB total so far...
2.11TB free space

*Functions:*

Plex Transcoding
PXE Server
Sabnzbd
Sickbeard
Couchpotato
Client Computer Backups
FTP Server
Web Server


----------



## Plan9

I have sigs turned off. What's the hardware?

Also, I've not heard of people using PXE on WHS before. What OSs are you booting (if not Windows) and how does that work?


----------



## opty165

Quote:


> Originally Posted by *Plan9*
> 
> I have sigs turned off. What's the hardware?
> 
> Also, I've not heard of people using PXE on WHS before. What OSs are you booting (if not Windows) and how does that work?


Hardware:

Phenom II X4 955BE
8GB Gskill DDR3 1600
MSI 890GXM-G65
Antec 430watt psu

2x Hitachi 2TB drive
1x WD 3TB Green
1x WD 1.5TB Green

For the PXE booting, I'm using TinyPXE Server, which is an application from here. Normally I would just use WDS (Windows Deployment Services), but that requires being on a domain, which my home network is not set up for. Its sole purpose at the moment is to PXE boot OpenELEC (XBMC) on my HTPC, which is just an ION/Atom board with no hard drive. I set up NFS space on the server for booting OpenELEC. This enables me to turn any PXE-bootable machine into an instant media center that connects back to my library. Eventually I will have my server PXE boot all my network-bootable utilities such as PartedMagic, and also network installs of Windows using wimboot.

I'm looking into doing a video tutorial on the whole Tiny PXE application and how to setup some basic things to network boot.


----------



## cones

Quote:


> Originally Posted by *opty165*
> 
> Hardware:
> 
> Phenom II X4 955BE
> 8GB Gskill DDR3 1600
> MSI 890GXM-G65
> Antec 430watt psu
> 
> 2x Hitachi 2TB drive
> 1x WD 3TB Green
> 1x WD 1.5TB Green
> 
> For the PXE booting, I'm using TinyPXE Server, which is an application from here. Normally I would just use WDS (Windows Deployment Services), but that requires being on a domain, which my home network is not set up for. Its sole purpose at the moment is to PXE boot OpenELEC (XBMC) on my HTPC, which is just an ION/Atom board with no hard drive. I set up NFS space on the server for booting OpenELEC. This enables me to turn any PXE-bootable machine into an instant media center that connects back to my library. Eventually I will have my server PXE boot all my network-bootable utilities such as PartedMagic, and also network installs of Windows using wimboot.
> 
> I'm looking into doing a video tutorial on the whole Tiny PXE application and how to setup some basic things to network boot.


Never heard of wimboot; a quick search showed it just boots the Windows installer. Is that all, or does it do more than that?


----------



## opty165

Quote:


> Originally Posted by *cones*
> 
> Never heard of wimboot; a quick search showed it just boots the Windows installer. Is that all, or does it do more than that?


Here's a quick tutorial on wimboot:

http://ipxe.org/wimboot

On their page they just show that you can boot the installer to install Windows, but you can also boot other .wim files like a WinPE environment, a LiteTouch deployment, or MS DaRT. This is provided you are using iPXE, as stated in the instructions.


----------



## Plan9

Quote:


> Originally Posted by *opty165*
> 
> Hardware:
> 
> Phenom II X4 955BE
> 8GB Gskill DDR3 1600
> MSI 890GXM-G65
> Antec 430watt psu
> 
> 2x Hitachi 2TB drive
> 1x WD 3TB Green
> 1x WD 1.5TB Green
> 
> For the PXE booting, I'm using TinyPXE Server, which is an application from here. Normally I would just use WDS (Windows Deployment Services), but that requires being on a domain, which my home network is not set up for. Its sole purpose at the moment is to PXE boot OpenELEC (XBMC) on my HTPC, which is just an ION/Atom board with no hard drive. I set up NFS space on the server for booting OpenELEC. This enables me to turn any PXE-bootable machine into an instant media center that connects back to my library. Eventually I will have my server PXE boot all my network-bootable utilities such as PartedMagic, and also network installs of Windows using wimboot.
> 
> I'm looking into doing a video tutorial on the whole Tiny PXE application and how to setup some basic things to network boot.


Impressive setup there, mate. I don't know what your exposure to Linux is outside of OpenELEC, but getting pxelinux working isn't the easiest thing in the world when you're a Windows guy









My home server is pretty similarly specced too; a Phenom II X3 with 8GB RAM (DDR2 though, I think).

edit: Just one last question, how are you hosting NFS? Just through _UNIX Services for Windows_?


----------



## opty165

Quote:


> Originally Posted by *Plan9*
> 
> Impressive setup there, mate. I don't know what your exposure to Linux is outside of OpenELEC, but getting pxelinux working isn't the easiest thing in the world when you're a Windows guy
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My home server is pretty similarly specced too; a Phenom II X3 with 8GB RAM (DDR2 though, I think).
> 
> edit: Just one last question, how are you hosting NFS? Just through _UNIX Services for Windows_?


Actually, at work we have full-blown pxelinux working right off our Windows Server 2012 R2 deployment server. No Linux involved







It was pretty easy actually. We just needed some files from Syslinux placed in the proper locations under the Remote Install directory, then just make the changes in DHCP for the option 67 file name. What you have is.....



There was a guide I followed in the beginning to setup pxelinux on windows server. If I can find it, I'll share it in the thread with anyone interested. That's an old screenshot above, but we currently boot Litetouch, DaRT, PartedMagic, Multiple Linux Distros, and a couple other tools. We haven't gotten Wimboot to work properly yet, so we just have an entry in the pxelinux menu to fall back to the default WDS boot file.

And before I forget... We enabled NFS under server roles, I believe. There are multiple guides out there that tell you where to enable it. Once you have it enabled, it's as easy as right-clicking a folder and toggling the NFS option. The only reason we really needed NFS was for the deployment of Linux distros, as well as a tiny OpenELEC machine at my desk that connects to my Plex server with PleXBMC









I'm definitely oversimplifying this, but it really is that easy lol. I should really do up some threads on this....


----------



## Plan9

Quote:


> Originally Posted by *opty165*
> 
> Actually, at work we have full-blown pxelinux working right off our Windows Server 2012 R2 deployment server. No Linux involved


Yeah, I know you don't need Linux installed to run syslinux (I have syslinux running off FreeBSD at home), but OpenELEC is Linux, and I'm guessing you'd have had to change some settings in the /boot directory of OpenELEC? Or are you not running OpenELEC with an NFS root?

Even without the OpenELEC stuff, pxelinux config files are very Linux-centric. It's enough to put off a lot of Windows sysadmins who prefer GUI tools like Regedit (though I've never really understood why so many Windows sysadmins are scared of config files because I don't think they're any harder to use than Regedit - but it does seem a popular complaint I've read against Linux. Anyway, I digress).
Quote:


> Originally Posted by *opty165*
> 
> It was pretty easy actually. We just needed some files from Syslinux placed in the proper locations under the Remote Install directory, then just make the changes in DHCP for the option 67 file name. What you have is.....
> 
> 
> 
> There was a guide I followed in the beginning to setup pxelinux on windows server. If I can find it, I'll share it in the thread with anyone interested. That's an old screenshot above, but we currently boot Litetouch, DaRT, PartedMagic, Multiple Linux Distros, and a couple other tools. We haven't gotten Wimboot to work properly yet, so we just have an entry in the pxelinux menu to fall back to the default WDS boot file.
> 
> And before I forget... We enabled NFS under server roles, I believe. There are multiple guides out there that tell you where to enable it. Once you have it enabled, it's as easy as right-clicking a folder and toggling the NFS option. The only reason we really needed NFS was for the deployment of Linux distros, as well as a tiny OpenELEC machine at my desk that connects to my Plex server with PleXBMC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm definitely oversimplifying this, but it really is that easy lol. I should really do up some threads on this....


I like your Mario background, would you mind sharing that?

I'm very much a Linux and UNIX guy (Windows scares and confuses me lol) so personally I wouldn't have much use for the guide. But I did find it fascinating to read about your set up so very grateful you've given more detail about your set up.


----------



## opty165

Quote:


> Originally Posted by *Plan9*
> 
> Yeah, I know you don't need Linux installed to run syslinux (I have syslinux running off FreeBSD at home), but OpenELEC is Linux, and I'm guessing you'd have had to change some settings in the /boot directory of OpenELEC? Or are you not running OpenELEC with an NFS root?
> 
> Even without the OpenELEC stuff, pxelinux config files are very Linux-centric. It's enough to put off a lot of Windows sysadmins who prefer GUI tools like Regedit (though I've never really understood why so many Windows sysadmins are scared of config files because I don't think they're any harder to use than Regedit - but it does seem a popular complaint I've read against Linux. Anyway, I digress).
> I like your Mario background, would you mind sharing that?
> 
> I'm very much a Linux and UNIX guy (Windows scares and confuses me lol) so personally I wouldn't have much use for the guide. But I did find it fascinating to read about your set up so very grateful you've given more detail about your set up.


This is all I used for OpenELEC:

http://wiki.openelec.tv/index.php?title=Network_Boot_-_NFS

Since I already had the PXE server in place, I didn't need that part of the guide. pxelinux just calls the OpenELEC kernel and appends the NFS share I set up earlier. It creates a separate file system root by MAC address for each system I boot if the "overlay" string is used at the end. If I don't use "overlay" then there will just be one file system root for every machine I PXE boot. I like the former, since different settings can be set and saved for different machines and persist through a reboot. Since you're a Linux guy I'm sure you understand all of that though
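The per-MAC behaviour described above amounts to keying each client's writable root on its MAC address. A toy sketch of the idea (paths and function names are made up for illustration, not OpenELEC's actual code):

```python
# Toy sketch of "overlay"-style roots: with overlay on, each booting
# client gets its own writable directory keyed by MAC, so per-machine
# settings persist across reboots; with overlay off, everyone shares
# one root.
import os
import tempfile

def overlay_root(base, mac, overlay=True):
    name = mac.replace(":", "-") if overlay else "shared"
    path = os.path.join(base, name)
    os.makedirs(path, exist_ok=True)
    return path

base = tempfile.mkdtemp()
a = overlay_root(base, "aa:bb:cc:dd:ee:01")
b = overlay_root(base, "aa:bb:cc:dd:ee:02")
print(a != b)  # True: each client keeps its own settings
print(overlay_root(base, "aa:bb:cc:dd:ee:01", overlay=False) ==
      overlay_root(base, "aa:bb:cc:dd:ee:02", overlay=False))  # True: shared root
```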









I hope I was of some help in answering your questions! I'm honestly still learning more about Linux in each encounter I have with it lol. For me though, I'm mostly a Windows sysadmin. I've set up Linux PXE/NFS servers before, but I just wanted to try something different and see if I could do the same on a Windows platform. Luckily it worked out great!

Also here is the image link

https://www.dropbox.com/s/22rvtoa4c1omm4m/background.jpg

Any other questions, let me know!


----------



## opty165

Also, I don't use TFTP for PXE booting. HTTP is much faster, but requires gPXE or iPXE.


----------



## Plan9

I didn't know about overlay. And that config looks a lot cleaner than the hoops I had to jump through to get the Linux dumb terminals working which I built at work. Thanks for that.









[edit]
Quote:


> Originally Posted by *opty165*
> 
> Also I don't use TFTP for PXE booting. HTTP is much faster, but requires gpxe or ipxe.


I've only recently started playing around with HTTP instead of TFTP. Not had much time to test properly yet, but I think I'll invest more time into that after reading this


----------



## opty165

Quote:


> Originally Posted by *Plan9*
> 
> I didn't know about overlay. And that config looks a lot cleaner than the hoops I had to jump through to get the Linux dumb terminals working which I built at work. Thanks for that.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [edit]
> I've only recently started playing around with HTTP instead of TFTP. Not had much time to test properly yet, but I think I'll invest more time into that after reading this


Nothing to it really. Just have a web server going that points to where all the files you need to load are. Below is my pxelinux.cfg/default file:
Quote:


> DEFAULT OpenElec.tv
> PROMPT 0
> 
> LABEL OpenElec.tv
> KERNEL http://192.168.1.16:12845/pxesrv2/files/tftp/KERNEL
> APPEND ip=dhcp boot=NFS=192.168.1.16:/NFS/Openelec disk=NFS=192.168.1.16:/NFS/Openelec/storage overlay


As you can see, KERNEL just points back to the web server and loads over HTTP. No TFTP involved. As I said though, you will want to use the gpxelinux.0 boot file, as pxelinux.0 does not have HTTP support in it.

This should get you pointed in the right direction
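For anyone who wants to test the HTTP side of this without standing up a full web server, Python's standard library can serve a directory the same way (the directory and file here are stand-ins for your actual boot files):

```python
# Serve a directory over HTTP, e.g. the folder holding your kernel and
# initrd, so gPXE/iPXE can fetch them with http:// URLs.
import os
import tempfile
import threading
import urllib.request
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Stand-in for your boot file directory.
root = tempfile.mkdtemp()
with open(os.path.join(root, "KERNEL"), "wb") as f:
    f.write(b"fake-kernel-image")

handler = partial(SimpleHTTPRequestHandler, directory=root)
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (here urllib, in real life iPXE) fetches the kernel over HTTP.
port = server.server_address[1]
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/KERNEL").read()
print(data == b"fake-kernel-image")  # True
server.shutdown()
```

In production you'd point the `KERNEL http://...` line at whatever web server you already run; this is just a quick way to verify the fetch path works.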


----------



## NKrader

Upgraded all four CPUs in my two crunchers! Yeah yeah!


----------



## Callist0

So my work was decommissioning some hardware and I came across a Dell PowerEdge 2600 tower... twin-core Xeon, but an ooold machine (1GB DDR, 512KB L2 cache, probably from around 2004).

I have a file server running already and was looking for suggestions for this monster. I think it'd be a shame to just toss it and was looking for some ideas on what to do with it.


----------



## u3b3rg33k

DDR RAM means NetBurst at best. Be prepared for your electric bill to increase by $30 a month at least if you run it 24/7.


----------



## Callist0

True about NetBurst. It's a Xeon III, so pretty old. DDR for it is cheap, but SCSI drives are not. Just curious as to what I could use it for. Power consumption isn't really an issue for me.


----------



## cones

Is it too old to use for virtualization? Don't know how useful the VMs would be, but it'd be a good learning experience.


----------



## Callist0

Thanks for the recommendation, but unfortunately it's too old for virtualization. My current server has a couple of virtual machines that I love using, and I was hoping to maybe convert this thing to do the same. However, it has no virtualization tech and isn't even 64-bit, so it seems to be good for little more than a fat power bill. It does have PXE booting, so maybe I can look into using it for learning about that


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Is it too old to use for virtualization? Don't know how useful the VMs would be, but it'd be a good learning experience.


I don't think NetBurst has VT-x, so I'm not really sure it would be all that practical for virtualisation.


----------



## cones

I read it as twin CPU and not as twin core. PXE is nice; I use it for some things and it works very well. Once you get the basics, it's easy to take things like Linux-based utilities and get them to boot through PXE. What I use the most is Clonezilla, GParted, and occasionally OpenELEC.


----------



## u3b3rg33k

Quote:


> Originally Posted by *Callist0*
> 
> True about NetBurst. It's a Xeon III, so pretty old. DDR for it is cheap, but SCSI drives are not. Just curious as to what I could use it for. Power consumption isn't really an issue for me.


I have a large quantity of SCA U320 drives. LMK if you need some.


----------



## BWG

Free?


----------



## u3b3rg33k

No, but for cheap. Best reasonable offer.


----------



## Manyak

Quote:


> Originally Posted by *Callist0*
> 
> Thanks for the recommendation, but unfortunately it's too old for virtualization. My current server has a couple of virtual machines that I love using, and I was hoping to maybe convert this thing to do the same. However, it has no virtualization tech and isn't even 64-bit, so it seems to be good for little more than a fat power bill. It does have PXE booting, so maybe I can look into using it for learning about that


I know I'm really late to post this, but VT-x isn't actually _required_ for virtualization. When the hardware doesn't support it VMWare falls back to binary translation.


----------



## Plan9

Quote:


> Originally Posted by *Manyak*
> 
> I know I'm really late to post this, but VT-x isn't actually _required_ for virtualization. When the hardware doesn't support it VMWare falls back to binary translation.


Nobody suggested VT-x was required. What we said was the specs are too low to make virtualization practical (the lack of VT-x won't help things there either as you're only bumping up the overhead in a system that would already have been stressed)

That hardware could run OS containers though, e.g. FreeBSD jails, Linux OpenVZ, or Solaris Zones.


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> That hardware could run OS containers though, e.g. FreeBSD jails, Linux OpenVZ, or Solaris Zones.


Virtualization without using the hardware is called paravirtualization (like VMware Workstation). Without hardware with VT-x, you can use jails/zones, or paravirtualization.


----------



## Plan9

Quote:


> Originally Posted by *tycoonbob*
> 
> Virtualization without using hardware is called paravirtualization (like VMware workstation). Without hardware with VT-x, you can use jails/zones, or paravirtualization.


I think you have things a little mixed up there.

Paravirtualisation most definitely requires using the hardware, as the whole point of it is to offload computationally expensive steps of the virtualization onto the hardware itself. Which is actually a bit like what VT-x provides, except it will be other components you're virtualising, e.g. network adapters.

What you might be thinking of is full hardware emulation, and that's slooow.

OS containers are a whole other thing entirely, and there's no equivalent in Windows or VMware. But they're awesome.


----------



## Norse

My file server "Njord": 3x 3TB, 2x 2TB, and a 750GB partition of the OS drive in software RAID/storage pooling (1x 3TB is parity), giving me 9.82TB usable.

Haven't done cable management yet









Dual Opteron 6136 (8-core, 2.4GHz), ASUS KGPE-D16 mobo, 32GB RAM, HX1050 PSU
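The quoted usable figure can be sanity-checked with quick arithmetic: raw capacity minus the parity drive, with marketing (decimal) TB converted to the binary units most OSes report. A back-of-envelope sketch, not Norse's exact tool output:

```python
# Back-of-envelope for the pool above: raw capacity minus one 3TB
# parity drive, with the result shown in binary (TiB-style) units.
drives_tb = [3, 3, 3, 2, 2, 0.75]  # decimal TB as sold
parity_tb = 3                       # one 3TB drive reserved for parity

usable_bytes = (sum(drives_tb) - parity_tb) * 10**12
usable_tib = usable_bytes / 2**40
print(round(usable_tib, 2))  # 9.78, close to the 9.82TB reported
```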


----------



## Aximous

Nice, what are you using for RAID/pooling?


----------



## Norse

Quote:


> Originally Posted by *Aximous*
> 
> Nice, what are you using for raid/pooling?


FlexRAID "RAID F". It sits on top of the file system, so even if the software fails, all the files are just on the drives like normal (although spread amongst the drives). It can have multiple parity drives and drives of different sizes; hell, I even have a 750GB partition off the OS drive in use on it! Because all it requires is for the drives to be seen by the OS, it makes for a cheap and effective solution. Yes, the speeds "suck" compared to a proper RAID controller, but for a file server that is limited to 100MB/s (1Gbps) network it doesn't matter
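Single-parity schemes like this boil down to XOR across the data drives: the parity drive holds the XOR of every data drive, so any one lost drive can be rebuilt from the survivors. A toy illustration (not FlexRAID's actual on-disk format):

```python
# Toy single-parity recovery: parity = XOR of all data drives, so
# XORing the parity with the surviving drives rebuilds the lost one.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data_drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_drives)

# Simulate losing drive 1 and rebuilding it from parity + survivors.
survivors = [data_drives[0], data_drives[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt == data_drives[1])  # True
```

This is also why one parity drive tolerates exactly one failure: lose two drives and the XOR no longer has enough information to solve for both.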


----------



## Plan9

Quote:


> Originally Posted by *Norse*
> 
> Yes, the speeds "suck" compared to a proper RAID controller, but for a file server that is limited to 100MB/s (1Gbps) network it doesn't matter


100MB/s isn't gigabit.

That's a very beefily specced machine for a file server, by the way; are you planning on doing anything else with it? It seems a shame to have all that horsepower and only use it as a file server, since you could get away with a box a quarter of that spec and still have the same performance.


----------



## Norse

Quote:


> Originally Posted by *Plan9*
> 
> 100MB/s isn't gigabit.
> 
> That's a very beefily specced machine for a file server, by the way; are you planning on doing anything else with it? It seems a shame to have all that horsepower and only use it as a file server, since you could get away with a box a quarter of that spec and still have the same performance.


You know what I meant regarding the speed







It maxes out gigabit; it's currently also streaming to my PS3.

It's only a very beefy spec because I managed to get each CPU for only £35 and didn't want to lose out and struggle to get a second one in the future


----------



## Plan9

Quote:


> Originally Posted by *Norse*
> 
> You know what I meant regarding the speed


I had no idea what you meant. That's why I commented.


----------



## Norse

Quote:


> Originally Posted by *Plan9*
> 
> I had no idea what you meant. That's why I commented.


Well, it maxes out gigabit Ethernet, which is very roughly 100 megabytes a second


----------



## Aximous

Quote:


> Originally Posted by *Norse*
> 
> FlexRAID "RAID F". It sits on top of the file system, so even if the software fails, all the files are just on the drives like normal (although spread amongst the drives). It can have multiple parity drives and drives of different sizes; hell, I even have a 750GB partition off the OS drive in use on it! Because all it requires is for the drives to be seen by the OS, it makes for a cheap and effective solution. Yes, the speeds "suck" compared to a proper RAID controller, but for a file server that is limited to 100MB/s (1Gbps) network it doesn't matter


That sounds pretty similar to unRAID then. I'm kinda thinking about moving to ZFS when I fill up the 4TB array I currently have; not sure though, as I like the ability to mix and match drives, but sometimes I wish I had the increased speed of ZFS when I hit the slower parts of the drives.
Quote:


> Originally Posted by *Plan9*
> 
> I had no idea what you meant. That's why I commented.


1 gigabit equals 125 megabytes.


----------



## Plan9

Quote:


> Originally Posted by *Norse*
> 
> Well it maxes out gbps ethernet which is very roughly 100 megabytes a second


It shouldn't, because then you'd be losing 1/5 of your bandwidth; there are 8 bits in a byte, not 10.









What you're probably looking at is the file transfer speed, but that's a little different, as there's going to be a lot of overhead in the network and file transfer protocols (TCP headers, protocol handshakes, keep-alives, acknowledgement packets, etc.), which means your files will always transfer slower than your actual real-world network throughput.
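To put rough numbers on that overhead, here is a back-of-the-envelope sketch. The framing sizes are assumed typical values (1500-byte MTU, plain TCP/IPv4, standard Ethernet framing), not a measurement of anyone's setup:

```python
# Back-of-the-envelope GbE goodput estimate, assuming a standard 1500-byte
# MTU and plain TCP/IPv4 framing (illustrative figures only; real transfers
# also pay for SMB/NFS overhead, disk speed, retransmits, etc.).

LINK_RATE_BPS = 1_000_000_000      # gigabit Ethernet line rate
MTU = 1500                         # bytes of IP packet per frame
HEADERS = 20 + 20                  # IPv4 header + TCP header (no options)
FRAME_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + gap

# Raw line rate in MB/s: 8 bits per byte, so exactly 125 MB/s.
raw_mb_s = LINK_RATE_BPS / 8 / 1e6

# Fraction of each on-the-wire frame that is actual file data.
payload = MTU - HEADERS
efficiency = payload / (MTU + FRAME_OVERHEAD)

goodput_mb_s = raw_mb_s * efficiency
print(f"raw line rate: {raw_mb_s:.1f} MB/s")
print(f"TCP goodput  : {goodput_mb_s:.1f} MB/s ({efficiency:.1%} efficient)")
```

With those assumptions the ceiling works out to roughly 118-119 MB/s before any application-level overhead, which is why well-tuned gigabit transfers land somewhere between the "100MB/s" shorthand and the theoretical 125.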


----------



## Plan9

Quote:


> Originally Posted by *Aximous*
> 
> 1 gigabit equals 125 megabytes.


124 actually









Neither of which is approximately 100MB (unless you consider being out by a quarter of the estimated value a close enough approximation). Hence I wasn't sure what Norse meant. Honestly, I thought he'd got his case mixed up (Mb vs MB) and accidentally missed a zero off (i.e. 1000Mb/s, 1Gbps), which still wouldn't have been strictly accurate, since the bit-to-byte conversion is a factor of 8 rather than 10, but I'd have accepted that as an approximation since it follows the stricter rules of metric prefixes.


----------



## Manyak

Quote:


> Originally Posted by *Plan9*
> 
> Nobody suggested VT-x was required. What we said was the specs are too low to make virtualization practical (the lack of VT-x won't help things there either as you're only bumping up the overhead in a system that would already have been stressed)
> 
> That hardware could run OS containers though, Eg Freebsd jails, Linux OpenVZ or Solaris Zones.


The way that post was worded makes it sound like he thought it was a requirement.

I do agree that it's impractical though, and IMO anything older than a 45nm Core2 is pretty much trash these days. It's either too slow, too costly in electricity, or both.
Quote:


> Originally Posted by *Plan9*
> 
> Paravirtualisation most definitely requires using the hardware as the whole point if it is to offload computationally expensive steps of the hardware virtualization on to the hardware itself. Which is actually a bit like what VT-x provides, except it will be other components you're virtualising, Eg network adapters.


You're heading in the right direction here, but that's not quite it...

Paravirtualization replaces non-virtualizable code in the OS with API calls to the hypervisor. This code isn't necessarily hardware related, and is often something as simple as certain MOV instructions. It definitely can be though, and these days most often is, such as is the case with VMware Tools (and similar). VT-x or binary translation is used instead of true paravirtualization for the bulk of the OS and kernel, while the Tools install drivers that act as API wrappers for the hypervisor. So it still goes through software, just through a much more efficient path.

What you're describing is closer to VT-d, which allows for a hardware interrupt and/or DMA to be mapped into a virtual machine, allowing communication directly with the hardware as if it were native.


----------



## Plan9

Quote:


> Originally Posted by *Manyak*
> 
> The way that post was worded makes it sound like he thought it was a requirement.


Upon reading his post back, I can now see why you thought that, and there's a definite possibility that was what he meant and I read it wrong.
Quote:


> Originally Posted by *Manyak*
> 
> You're heading in the right direction here, but that's not quite it...
> 
> Paravirtualization replaces non-virtualizable code in the OS with API calls to the hypervisor. This code isn't necessarily hardware related, and is often something as simple as certain MOV instructions. It definitely can be though, and these days most often is, such as is the case with VMWare Tools (and similar). VT-x or binary translation is used instead of true paravirtualization for the bulk of the OS and kernel, while the Tools install drivers that act as API wrappers for the hypervisor. So it still goes through software, just through a much more efficient path.
> 
> What you're describing is closer to VT-d, which allows for a hardware interrupt and/or DMA to be mapped into a virtual machine, allowing communication directly with the hardware as if it were native.


Ahh ok. Thanks for the correction


----------



## micul

My first server


----------



## Mugen87

What case is that? Super clean looking, and big too. Why have it down there? Put it up on a table or something.


----------



## micul

That's a Silverstone Milo04.
It's a temporary case, not that great for keeping a server in.


----------



## void

Quote:


> Originally Posted by *micul*
> 
> My first server
> 
> 
> 
> Spoiler: Warning: Spoiler!


Specs and what are you using it for?


----------



## micul

Quote:


> Originally Posted by *void*
> 
> Specs and what are you using it for?


backup, storage and media streaming

CPU - AMD A4 3400
Mobo - Biostar A55MH
Memory - Kingston HyperX 2x2GB 1600MHz
Boot Drive - OCZ Vertex 2 50GB
HDD1 - Seagate 2TB
HDD2 - WD 3TB Green
Power Supply - Corsair CX400Watt
Case - Silverstone Milo04


----------



## Ferrari8608

Quote:


> Originally Posted by *micul*
> 
> My first server


That's going to be my Steam Machine case, been planning it for months now. How's the build quality? Do you think it would make an OK game console case?


----------



## Mugen87

Quote:


> Originally Posted by *Ferrari8608*
> 
> That's going to be my Steam Machine case, been planning it for months now. How's the build quality? Do you think it would make an OK game console case?


I feel like the Steam Machine will breed a whole new wave of case and system designs. I would want more USB ports up front on a gaming case: two open, one for kb/m, one for a game controller. Massive quiet cooling and a Blu-ray drive.


----------



## DaveLT

Quote:


> Originally Posted by *Mugen87*
> 
> I feel like the steam is will breed a whole new wave case and system designs. I would want more USB ports in front for a gaming case. 2 open, one kb/m, one game controller. Massive quiet cooling and a blue ray drive


Case design? Expandability is zero.
Moving almost half of the USB ports to the front is all I want. Oh, and optical media is ancient history for most of us now.


----------



## Wildcard36qs

Can anyone who is running a C1100 tell me what their temps and fan speeds are like? I recently updated the BIOS and BMC firmware, and I feel like my fans are running faster than they used to. CPUs are around 60C and the fans are at 7500-7900 RPM. I know that some PowerEdge servers have had custom firmware to modify fan speeds; is there anything like that out there?


----------



## divinextract

*OS*:
ESXi 5.0
*Case*:
Norco 470
*CPU*:
Xeon E3 1240V2
*Motherboard*:
Asus P8B-E 4/L
*Memory*:
(2) 4g DDR3 1600 ECC UB
*PSU*:
CX500M
*ESXi HDD's: *
60g SSD & 320G WD Blue
*Storage HDD(s)*:
(3) 3tb WD Red (1) 1tb WD Green
*Virtual Machine(s): *
(2) 2008R2 Windows VMs Running Plex Media Server
(1)Untangle 10 VM with dual intel nic passed through
(1)unRAID 5.0 VM with usb & M1015 passed through

This box isn't finished. There are one or two more VMs coming and a SAS expander chassis in the works. When I get more time I'll throw it in my rack with my other equipment and resubmit.


----------



## TopicClocker

Quote:


> Originally Posted by *divinextract*
> 
> 
> 
> 
> 
> *OS*:
> ESXi 5.0
> *Case*:
> Norco 470
> *CPU*:
> Xeon E3 1240V2
> *Motherboard*:
> Asus P8B-E 4/L
> *Memory*:
> (2) 4g DDR3 1600 ECC UB
> *PSU*:
> CX500M
> *ESXi HDD's: *
> 60g SSD & 320G WD Blue
> *Storage HDD(s)*:
> (3) 3tb WD Red (1) 1tb WD Green
> *Virtual Machine(s): *
> (2) 2008R2 Windows VMs Running Plex Media Server
> (1)Untangle 10 VM with dual intel nic passed through
> (1)unRAID 5.0 VM with usb & M1015 passed through
> 
> This box isn't finished. There is 1-2 more VM's coming and a SAS expander chassis in the works. When I get more time I'll throw it in my rack with my other equip and resubmit.


Wow that looks great, nice machine.

Quote:


> Originally Posted by *Ferrari8608*
> 
> That's going to be my Steam Machine case, been planning it for months now. How's the build quality? Do you think it would make an OK game console case?


What I think would be cool is if the main PC/server that does the in-home streaming could also work as a server and still run other things in the background. For instance, with an 8-core Intel chip or an AMD 8320, allocate four cores to Steam and the other four to your normal server usage. A VM may solve this; I'm not exactly sure how efficiently, but if it were built with this functionality in mind it would be good. It would let us squeeze more uses out of our servers: you wouldn't need beefy hardware in your HTPC under the TV, just have your server do all the processing and stream to it.


----------



## Plan9

Quote:


> Originally Posted by *TopicClocker*
> 
> What I think would be cool is if the main PC/Server that streams for in home streaming could work as a server and can still run other things in the background, for instance, a 8 core Intel or a AMD 8320, allocate steam to use 4 cores, and another 4 for your normal server usage, a VM may solve this not exactly sure how efficiently but If it was made with this functionality it would be good. Would allow us to squeeze more uses out of our servers,


A good number of us in this thread do this sort of thing already.








Quote:


> Originally Posted by *TopicClocker*
> 
> you wouldn't need to have beefy hardware in your HTPC under the TV, just have your server do all the processing and stream to it.


That wouldn't cut down on the processing your HTPC has to do, since it would still have to decode a video file regardless of whether that's an MPEG file sat on disk or an MPEG streamed over DLNA.


----------



## divinextract

Quote:


> Wow that looks great, nice machine.


Thanks!
Quote:


> What I think would be cool is if the main PC/Server that streams for in home streaming could work as a server and can still run other things in the background, ~ ~ ~ ~ Would allow us to squeeze more uses out of our servers, you wouldn't need to have beefy hardware in your HTPC under the TV, just have your server do all the processing and stream to it.


ESXi is really good at allocating resources where they're needed, and as long as your setup has VT-d or the AMD equivalent you could simply pass through a graphics card and USB card, then run USB/HDMI through cat6 to your TV. Voila: instant HTPC/gaming rig, with room for other processes via VMs running in the background. And nothing under the TV except a USB hub.


----------



## TopicClocker

Quote:


> Originally Posted by *divinextract*
> 
> Thanks!
> 
> ESXi is really good at allocating resources where its needed, and as long as your setup has VT-d or the amd equivalent you could simply passthrough a Graphics card and USB card, then use usb/hdmi through cat6 to your TV. Viola instant HTPC/Gaming rig with room for other processes via VMs running in the backround. And nothing under the TV except a USB hub


Quote:


> Originally Posted by *Plan9*
> 
> A good number of us in this thread do this sort of thing already.
> 
> 
> 
> 
> 
> 
> 
> 
> That wouldn't cut down on the processing your HTPC would have to do since it would still have to decode a video file regardless of whether that's an MPEG file sat on disk or an MPEG streamed over DNLA.


Hmm interesting, thanks for the info


----------



## Callist0

Quote:


> Originally Posted by *divinextract*
> 
> Thanks!
> 
> ESXi is really good at allocating resources where its needed, and as long as your setup has VT-d or the amd equivalent you could simply passthrough a Graphics card and USB card, then use usb/hdmi through cat6 to your TV. Viola instant HTPC/Gaming rig with room for other processes via VMs running in the backround. And nothing under the TV except a USB hub


To ask a dumb question... how do you pass HDMI through cat6 to the TV? Do you plug the TV into an Ethernet cable on a switch? I'd love to get this going, as I have just purchased a machine for VMs that has AMD-Vi support with IOMMU.


----------



## cones

Quote:


> Originally Posted by *Callist0*
> 
> To ask a dumb question...how do you pass hdmi through cat6 to the TV? Do you plug the TV up to the ethernet cable on a switch? I'd love to get this going as I have just purchased a machine for VM's that has AMD-Vi support with IOMMU.


HDMI over Ethernet; search for the adapters. I think you need two Ethernet cables for it to work.


----------



## Plan9

I did a similar thing with VGA (though you only needed one cat5 cable)


----------



## Wildcard36qs

VGA over cat5 works well. I haven't tried HDMI over cat. But I have tried several wireless hdmi and they work very well.


----------



## airbozo

Quote:


> Originally Posted by *Wildcard36qs*
> 
> VGA over cat5 works well. I haven't tried HDMI over cat. But I have tried several wireless hdmi and they work very well.


I've used HDMI over CATx and you only need one cable plus the adapters. Don't buy the cheapo no-name ones or you will get signal degradation or noise; I use the Black Box models. There are two ways to do it: one is using a KVM-type extender, and the other is to use your server to stream the media over Ethernet (multicast).

There is a product called Stream Valve IP that will use your server to send the HDMI signal over Ethernet, so you only need one receiver for each device you want to decode the signal on (it uses IP multicast). I _think_ VLC will do the same thing, but the last time I used it it was not very reliable (which is why I bought Stream Valve IP).

Here is a link to the HDMI KVM extenders I use:

http://www.blackbox.com/Store/Detail.aspx/ServSwitch-HDMI-with-USB-2-0-KVM-Extender-CATx/ACU2500A


----------



## Plan9

Quote:


> Originally Posted by *airbozo*
> 
> I've used HDMI over CAT5X and you only need one cable and the adapters. Don't buy the cheapo no-name ones or you will get signal degradation or noise. I use the blackbox models. There are 2 ways to do it; one is using a KVM type extender and another is to use your server to stream the media over Ethernet (multicast).
> 
> There is a product called Stream Valve IP that will use your server to send the HDMI signal over Ethernet so you only need one receiver for each device you want to decode the signal to (uses IPX Multicast). I _think_ VLC will do the same thing, but last time I used it it was not very reliable (why I bought Stream Valve IP).
> 
> Here is a link to the HDMI KVM extenders I use:
> 
> http://www.blackbox.com/Store/Detail.aspx/ServSwitch-HDMI-with-USB-2-0-KVM-Extender-CATx/ACU2500A


Those aren't adapters though (which is why they're hugely expensive). What the other guys in this thread and I are talking about is an adapter that takes each HDMI pin and assigns it to a wire in the Ethernet twisted-pair cable. There's no signal conversion going on; it's literally just rewiring an RJ45 into an HDMI plug (just like how the old component-to-SCART adapters worked).

e.g. http://www.ebay.co.uk/itm/HDMI-Extender-By-Cat5e-Cat6e-RJ45-Cable-Up-To-30M-For-Full-HD-1080p-PS3-New-/270946422917?pt=UK_Computing_Sound_Vision_Video_Cables_Adapters&hash=item3f15aa5085

I've used cat5e cable for all sorts of ad hoc stuff like this, from analogue uses such as audio cables and VGA leads through to some really niche uses like circuits I've soldered in recent years (obviously I used solid core in those situations).


----------



## airbozo

Quote:


> Originally Posted by *Plan9*
> 
> Those aren't adapters though (which is why they're hugely expensive). What myself and the other guys in this thread are talking about is an adapter that takes each HDMI pin and assigns that to a wire in the ethernet twisted pair cables. There's no signal conversion going on, it's literally just rewiring an RJ45 into a HDMI plug (just like how the old adapters for component (CMY) to SCART worked)
> 
> eg http://www.ebay.co.uk/itm/HDMI-Extender-By-Cat5e-Cat6e-RJ45-Cable-Up-To-30M-For-Full-HD-1080p-PS3-New-/270946422917?pt=UK_Computing_Sound_Vision_Video_Cables_Adapters&hash=item3f15aa5085
> 
> I've used cat5e cable for all sorts of adhoc stuff like this from analogue stuff such as audio cables and VGA leads, through to some really niche uses like some of the circuits I've soldered in recent years (obviously I used solid core in those situations).


Ahhh, I see.

Wouldn't there be a lot of interference doing it that way?


----------



## Plan9

Quote:


> Originally Posted by *airbozo*
> 
> Ahhh, I see,
> 
> Wouldn't there be a lot of interference doing it that way?


Why would there be? Both cat5e and cat6 will do GbE, so I doubt their electrical bandwidth (or whatever it's called) is rated much below HDMI.


----------



## airbozo

Quote:


> Originally Posted by *Plan9*
> 
> Why would there be? both cat5e and cat6 will do GbE, so I doubt their electrical bandwidth (or whatever it's called) is rated much below HDMI.


I've had issues in the past using VGA over cat5 with a similar adapter, which is why I mentioned it. Granted, VGA is an analog signal, so that may have been why.


----------



## Plan9

Quote:


> Originally Posted by *airbozo*
> 
> I have had issues in the past using VGA over cat5 with a similar adapter is why I mentioned it. Granted VGA is an analog signal so that may have been why.


I can't speak to why you had issues, but I've run VGA over cat5e for long(ish) distances with some relatively heavy-duty equipment on the go (speaker stacks, amps, smoke machines, projectors, lasers and old CRTs) and didn't suffer any ghosting or other side effects. So it definitely can be done, but I couldn't explain why it worked for me and not for you.


----------



## cones

It's all just an electrical signal, correct? So would the only issues be a bad connection, or voltage drop from too long a run making the signal weak on the other end?


----------



## DaveLT

Quote:


> Originally Posted by *cones*
> 
> Its all just an electrical signal correct? So would the only issues be a bad connection or voltage drop from it being to long of a run so the signal is weak on the other end causing issues?


Not just any electrical signal: a very high-speed, high-bandwidth digital signal. The problems with not using HDMI cable are 1) crosstalk and 2) natural delay from cable capacitance.
Using the right cables (Cat6e), which are partially shielded, should give you pretty good results. Just not Cat5.
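To put the cable-capacitance point in rough numbers, here's a naive lumped-RC sketch. The per-metre capacitance and drive impedance are assumed typical values, and a lumped model is pessimistic (real cable behaves as a transmission line), so treat it as an order-of-magnitude illustration only:

```python
# Naive lumped-RC look at HDMI-rate signals over twisted pair.
# Assumed figures (typical, not from any datasheet here): ~50 pF/m pair
# capacitance for cat5e-class cable, ~100 ohm drive impedance, and one
# HDMI TMDS channel at 1080p60 running at 1.485 Gbps.

DRIVE_IMPEDANCE_OHMS = 100
CAPACITANCE_PF_PER_M = 50
TMDS_BIT_RATE_BPS = 1.485e9

bit_time_ns = 1e9 / TMDS_BIT_RATE_BPS   # ~0.67 ns per bit

def rc_time_ns(length_m: float) -> float:
    """Lumped RC time constant of the whole run, in nanoseconds."""
    capacitance_f = CAPACITANCE_PF_PER_M * length_m * 1e-12
    return DRIVE_IMPEDANCE_OHMS * capacitance_f * 1e9

for metres in (1, 10, 30):
    print(f"{metres:>2} m: RC ~ {rc_time_ns(metres):6.1f} ns "
          f"vs {bit_time_ns:.2f} ns bit time")
```

Even a one-metre run's naive RC constant dwarfs the sub-nanosecond bit time; what actually keeps the link alive is differential signalling over a controlled-impedance pair, and that margin is exactly what crosstalk and impedance variation in generic cat5 eat into.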
Quote:


> Originally Posted by *Plan9*
> 
> Why would there be? both cat5e and cat6 will do GbE, so I doubt their electrical bandwidth (or whatever it's called) is rated much below HDMI.


"Electrical bandwidth"? What are you on about? You mean bandwidth. Just bandwidth. There's no such thing as electrical bandwidth








/sarcasm


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> Not just an electrical signal. An DIGITAL VERY High-Speed High-Bandwidth Digital signal, problems with not using HDMI cable is 1) crosstalk 2) natural delay from cable capacitance
> Using the right cables (Cat6e) which are partially shielded should give you pretty good results. Just not Cat5


Cat6e doesn't exist as a technical standard. It's just a made-up term some manufacturers have invented to differentiate their cat6 cables from other people's. So I'm guessing you're talking about cat6 STP (which some refer to as cat6e)?
Quote:


> Originally Posted by *DaveLT*
> 
> "electrical bandwidth" what are you on about? You mean bandwidth. Just bandwidth. There's no such thing as electrical bandwidth
> 
> 
> 
> 
> 
> 
> 
> 
> /sarcasm


Actually I meant frequencies


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> Cat6e doesn't exist as a technical standard. It's just some made up term some manufacturers have invented to differentiate their cat6 cables from other peoples. So I'm guessing you're talking about cat6 STP (which some refer to as cat6e)?
> Actually I meant frequencies


Yeah, that. I know Cat6e doesn't exist as a standard, but we'll just play along.
Frequencies are a different matter entirely. Any cable will handle a certain frequency no problem if it isn't too thick. Bandwidth is another matter.


----------



## airbozo

Quote:


> Originally Posted by *Plan9*
> 
> I can't speak for why you had issues, but I've ran VGA over cat5e for long(ish) distances with some relatively heavy duty equipment on the go (speaker stacks, amps, smoke machines, projectors, lasers and old CRTs) and didn't suffer any ghosting or other side effects. So it definitely can be done. But I couldn't explain why it worked for me and not yourself.


In our case it was due to the length of the cable and the frequency of the signal (120Hz). When it was dropped to 60Hz, the interference was less noticeable but still there. Of course, this was before the CAT5e spec, so we were using CAT5. Plus (just remembering this) we were also using a 13W3-to-VGA adapter. That really shouldn't have been an issue though.

Thanks for all the info guys, this is educational!


----------



## Plan9

Ahh, I was using cat5e. Reasonably good-quality stuff too, I think.


----------



## NKrader

Quote:


> Originally Posted by *Plan9*
> 
> ahh I was using cat5e. Reasonably good quality stuff too I think


Cat5 is so 2011.

I use cat7 cables (because they're cheaper than cat5 cables on Newegg lol)


----------



## Plan9

Quote:


> Originally Posted by *Plan9*
> 
> ahh I was using cat5e. Reasonably good quality stuff too I think


Quote:


> Originally Posted by *NKrader*
> 
> cat 5 is so 2011
> 
> i use cat 7 cables (because they are cheaper than cat5 cables on newegg lol)


"was" being the operative word. I built that cable some time around 2009.


----------



## KYKYLLIKA

Quote:


> Originally Posted by *NKrader*
> 
> cat 5 is so 2011
> 
> i use cat 7 cables (because they are cheaper than cat5 cables on newegg lol)


If you have any notion of how to pull some cat7 from the third floor to my basement, I'm all ears. For now, I'll have to stick with the cat5 in my walls. =(


----------



## cones

Quote:


> Originally Posted by *KYKYLLIKA*
> 
> If you have any notion of how to pull some cat7 from the third floor to my basement, I'm all ears. For now, I'll have to stick with the cat5 in my walls. =(


Connect the new cable to the old one and pull. It won't work if the old cable is attached inside the walls, though.


----------



## KYKYLLIKA

Quote:


> Originally Posted by *cones*
> 
> Connect the new cable to the old one and pull, wouldn't work if the old cable is attached inside the walls.










Why did I not do that before? Thanks, I'll try that. Maybe it will let me make a few more outlets as well.


----------



## Menty

My newly commissioned home server











Pair of Xeon E5504s with modified Coolermaster heatsinks, 24GB of HP 10600R, 1x 64GB SSD for the OS, 2x 300GB VelociRaptors for VMs, 2x 3TB Toshiba HDDs for backups and storage, all in an Antec Sonata Proto. Most of the components came from a 1U server I got cheap, though the storage HDDs are new.

Running Server 2012 R2 as a Hyper-V host - went for the full OS rather than just the hypervisor so I have a bit more flexibility in how I use the machine







. Machine is used as a VM lab, downloads box and backup target.

EDIT - how bizarre, it kept flipping the image 180 degrees. Fixed.


----------



## cones

Quote:


> Originally Posted by *KYKYLLIKA*
> 
> 
> 
> 
> 
> 
> 
> 
> Why did I not do that before? Thanks, I'll try that. Maybe it will let me make a few more outlets as well.


Sometimes simple solutions get overthought. But it won't work if the wire is attached to something like a stud.


----------



## M3nta1

My current server is massively overkill, given that it only needs to run a plex server and torrent the occasional thing.

An AMD 4300 quad core on a Gigabyte board, 8GB of DDR3 RAM and a GTX 480. A single 1TB Seagate for my data is good enough for now, and the whole thing is powered by a 500W Corsair power supply. Sitting on top are an Asus external DVD drive and a cheap external hot-swap hard drive dock. Being an old gaming computer, it's got 5 bright LED fans. At some point I will build a proper server, but for now it does the job very well.


Sorry for the phone quality; it's all I've got on hand.


----------



## Wildcard36qs

I was going to post this in the PERC 5i/6i thread, but a quick question: I am using Dell's ESXi 5.5 image on my C1100 and everything is working fine, but I have no health monitoring on the PERC 6i. I was googling around and some said it should be picked up since I'm using the Dell image. Could it be because I originally installed ESXi when I had a PERC 5?


----------



## stolid

I finally upgraded/replaced my old 2P home server.







It's used as a NAS but also running some other apps (Mumble, occasional game servers) and of course folding with all its spare cycles:

OS: Debian Stable
Case: Cardboard box
CPU: 4x Opteron 8431 2.4GHz hexacores
Motherboard: Supermicro H8QME-2+
Memory: 8x 2GB DDR2-667 ECC Registered
PSU: Corsair CX750M
OS HDD: Old 160GB laptop hard drive
Storage: 2x 1TB Samsung F3 in ZFS Mirror
Cooling: 4x Cooler Master Hyper TX-3

It's rather ghetto, as I just assembled it yesterday and don't have a proper case for the massive motherboard just yet. I picked up the mobo, RAM, and CPUs from eBay for about $140 total.


----------



## TopicClocker

Quote:


> Originally Posted by *M3nta1*
> 
> My current server is massively overkill, given that it only needs to run a plex server and torrent the occasional thing.
> 
> AMD 4300 quad core on a Gigabyte board, 8 gigs of DDR3 RAM and a GTX 480. A single 1tb Seagate for my data is good enough for now, and the whole thing is powered by a 500W corsair power supply. Sitting on top is an Asus external DVD drive and a cheap external hot-swap hard drive holder. Its way overkill for a server for just Plex and torrents, and it being an old gaming computer its got 5 bright LED fans. At some point i will build a proper server, but for now it does the job very well.
> 
> 
> Sorry for phone quality, its all i got on hand.


Wow, that looks really nice. Have you ever thought about pulling the plug on the GPU and running it as a headless server (I think that's what they call it), or using the onboard VGA if it has one, unless you need the GPU for server operations?
Quote:


> Originally Posted by *stolid*
> 
> I finally upgraded/replaced my old 2P home server.
> 
> 
> 
> 
> 
> 
> 
> It's used as a NAS but also running some other apps (Mumble, occasional game servers) and of course folding with all its spare cycles:
> 
> OS: Debian Stable
> Case: Cardboard box
> CPU: 4x Opteron 8431 2.4GHz hexacores
> Motherboard: Supermicro H8QME-2+
> Memory: 8x 2GB DDR2-667 ECC Registered
> PSU: Corsair CX750M
> OS HDD: Old 160GB laptop hard drive
> Storage: 2x 1TB Samsung F3 in ZFS Mirror
> Cooling: 4x Cooler Master Hyper TX-3
> 
> It's rather ghetto as I just assembled it yesterday and don't have a proper case for the massive motherboard just yet. I picked up the mobo, RAM, and CPUs from ebay for about $140 total.


Wow 24 processing cores








That cardboard box is perhaps the most ghetto case I've ever seen


----------



## cones

Quote:


> Originally Posted by *TopicClocker*
> 
> Wow that looks really nice, have you ever thought about pulling the plug on the GPU and running it as a headless server(I think they call them) or using the onboard VGA if it has one, unless you require the GPU for server operations?
> Wow 24 processing cores
> 
> 
> 
> 
> 
> 
> 
> 
> That cardboard box is perhaps the most ghetto case I've ever seen


Isn't he folding/mining or something with it? Also, I wish I had that 4P server; it looks really nice and would be fun. I've hung a motherboard on the wall along with the PSU; wish I still had a picture of it.


----------



## TopicClocker

Quote:


> Originally Posted by *divinextract*
> 
> Thanks!
> 
> ESXi is really good at allocating resources where its needed, and as long as your setup has VT-d or the amd equivalent you could simply passthrough a Graphics card and USB card, then use usb/hdmi through cat6 to your TV. Viola instant HTPC/Gaming rig with room for other processes via VMs running in the backround. And nothing under the TV except a USB hub


Quote:


> Originally Posted by *Plan9*
> 
> A good number of us in this thread do this sort of thing already.
> 
> 
> 
> 
> 
> 
> 
> 
> That wouldn't cut down on the processing your HTPC would have to do since it would still have to decode a video file regardless of whether that's an MPEG file sat on disk or an MPEG streamed over DNLA.


Quote:


> Originally Posted by *cones*
> 
> Isn't he folding/mining or something with it? Also wish i had that 4p server looks really nice and would be fun. I've hung a motherboard from the wall along with the PSU, wish i still had a picture of it


Oh, if he is, then I suppose that's why he has it in there








I thought you needed to mount a mobo to a metal surface, or something like a case, for it to work; something to do with grounding? I guess I'm wrong. I've never built a computer outside of a case, or on a bench or anything; is it safe running it outside of a case/bench?

Also, has anyone successfully gotten a VM to run games through passthrough? I feel this could be useful for Steam's upcoming In-Home Streaming feature if you wanted to use your server to stream games from within a VM. If it's in a VM you can limit the amount of hardware it uses; at the moment streaming renders your own PC/machine unusable, but running it in a VM could possibly get around this. I haven't done it myself as I haven't got sufficient hardware in my server; just wondering if it's possible and works well.

@Plan9

Yes, that's true, but I suppose it opens up possibilities. You can have one main machine do the processing of the game and then stream it to other systems all around the house (an HTPC in the living room, an HTPC in the bedroom, a laptop) and just continue where you left off. I feel the aim isn't to cut down on the processing, but to allow weaker systems to play games: a low-power i3 could be in all of these systems just to decode the stream, since an i3 by itself can't play these games at the fidelity an i5 + 7950 could.


----------



## cones

Quote:


> Originally Posted by *TopicClocker*
> 
> Oh if he is then I suppose thats why he has it in there
> 
> 
> 
> 
> 
> 
> 
> 
> I thought you needed to mount a mobo to a metal surface or something like a case for it to work, something to do with grounding? I guess I'm wrong, never built a computer outside of a case or on a bench or anything, is it safe running it outside of a case/bench?


Nope, it doesn't need to be in a case. Just don't connect any traces that shouldn't be connected, i.e. don't have it on a conductive surface. It's safe as long as nobody touches it or sticks something in a moving fan.


----------



## TopicClocker

Quote:


> Originally Posted by *cones*
> 
> Nope doesn't need to be in a case. Just don't connect any traces that shouldn't be I.e. have it on a conductive surffice. Its safe as long as nobody wants to touch it or you stick something in a moving fan.


Haha ok cool thanks


----------



## Wildcard36qs

Finally! Got everything wired and mounted in my house. Before, I was running WiFi only and was actually able to use my PC as a bridge to wired for my server. Anyway, things are sooo much better now. Running wires wasn't too bad, except that this old house was built in the 50s and there is literally like 1/2 inch tops between the drywall and the cinderblock exterior. It was enough room to run cables, but not enough to let me use all my faceplates and keystone jacks. Oh well.

Anyway, here are the specs and a pic:
Modem: Motorola SB6141
WiFi AP/Switch: D-Link DIR-615 with DD-WRT firmware
Server: Dell C1100
CPU: 2x Quad Core Xeon L5520
RAM: 72GB DDR3
Storage: PERC 6/i w/ 4x Seagate Barracuda 7200.12 1TB in RAID 10 w/ 256KB stripe
OS: ESXi 5.5 Dell image
VMs: pfSense for firewall/routing/DHCP + more once I get things installed on it
Ubuntu 13.10 LAMP server
Server 2012 R2 Standard for training purposes (just got my MCSA 2012 in Jan.; working on MCSE, then CCNA)
A battery backup is the next must-have, then a better WiFi router; this thing does OK for my needs, but it's only 100Mbit and basic Wireless N 300. I am trying to be cheap, though, lol


----------



## Callist0

FX-6300
Gigabyte GA-970A-D3P
HX850
Nvidia GT610
Nvidia Quadro FX
Some really old AMD workstation card
12GB G.Skill DDR3
ESXi 5.5

Used mainly as a VM lab.
Runs Debian minimal for folding
Two Windows 8 machines
Kali, Fedora, Linux Mint
Going to set up a Minecraft server on it as well.
(Any other suggestions for VMs?)

Tried to set up another VM as OpenELEC but could never get it to boot. Also was totally unable to get SteamOS installed or to pass through the GPUs, but I guess it's easier to do with AMD cards...










Tiny file server
ASRock E350M1
CX430
4GB RAM stick from the other machine
Debian minimal
Mostly just hosting movies and music, as well as acting as a download box


----------



## M3nta1

Quote:


> Originally Posted by *TopicClocker*
> 
> -SNIP-
> 
> Wow that looks really nice, have you ever thought about pulling the plug on the GPU and running it as a headless server(I think they call them) or using the onboard VGA if it has one, unless you require the GPU for server operations?


Well, I'm glad at least one of us thinks it looks good xD It's just old parts from a friend who is basically constantly upgrading, hence the bright blue LED fans and the 480, which doesn't do anything server-wise, but it does work really well heating my room. I think I'm gonna frame it, just because it looks so cool. And then I'm selling the other parts to a friend, and with that money I'm gonna build a proper server.

On that note, anyone have any recommendations? Silence and power efficiency are my main goals (just like everyone else), so I think an older low-voltage Xeon chip would be worth it. No fancy needs, just a Plex server for the foreseeable future.


----------



## boyk0

Quote:


> Originally Posted by *Callist0*
> 
> Tried to set it up another machine as openelec but couldn't ever get it to boot. Also was totally unable to get steamOS installed or pass through the GPUs but I guess it's easier to do it with AMD cards...


Install SteamOS under which OS?
Is VT-d/IOMMU enabled in your BIOS?


----------



## Callist0

Quote:


> Originally Posted by *boyk0*
> 
> Install SteamOS under which OS?
> Is VT-d/IOMMU enabled in your BIOS?


ESXi 5.5 doesn't have Debian 7 as an option (it only goes up to 6). I tried using Debian 6 as well as Other Linux 64-bit; neither booted.

I do have IOMMU enabled in the BIOS and can add PCI devices to the ESXi host as well as assign them to virtual machines. However, the guests never seem to accept them.

Added the GT610 to the Windows 8 machine and it recognizes it, but installing drivers always fails.
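For what it's worth, the workaround people commonly suggest for Nvidia consumer cards failing driver installs under ESXi passthrough (the Code 43 issue) is to hide the hypervisor from the guest. No guarantee it applies to this exact setup, but it's a one-line edit to the VM's .vmx file while the VM is powered off:

```
# Hide the hypervisor from the guest OS so the Nvidia driver
# doesn't refuse to initialize the passed-through card.
hypervisor.cpuid.v0 = "FALSE"
```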


----------



## boyk0

Quote:


> Originally Posted by *Callist0*
> 
> ESXi 5.5 doesn't have Debian 7 as an option (it only goes up to 6). I tried using Debian 6 as well as Other Linux 64-bit; neither booted.
> 
> I do have IOMMU enabled in the BIOS and can add PCI devices to the ESXi host as well as assign them to virtual machines. However, the guests never seem to accept them.
> 
> Added the GT610 to the Windows 8 machine and it recognizes it, but installing drivers always fails.


Interesting. Is AMD-V enabled as well? What happens when you try to boot 64-bit Windows machines?


----------



## Sarec

Quote:


> Originally Posted by *Callist0*
> 
> ESXi 5.5 doesn't have Debian 7 as an option (it only goes up to 6). I tried using Debian 6 as well as Other Linux 64-bit; neither booted.
> 
> I do have IOMMU enabled in the BIOS and can add PCI devices to the ESXi host as well as assign them to virtual machines. However, the guests never seem to accept them.
> 
> Added the GT610 to the Windows 8 machine and it recognizes it, but installing drivers always fails.


When I pick Other 64-bit Linux, my Debian 7 installs fine; I've done this on various platforms recently. You can also try telling it to install under Debian 6 and just use a Debian 7 disc. I did this once before as well, and it worked then.


----------



## Callist0

Quote:


> Originally Posted by *boyk0*
> 
> Interesting. Is AMD-V enabled as well? What happens when you try to boot 64-bit Windows machines?


Yeah, AMD-Vi is also enabled. I've also set ESXi to force hardware virtualization in the options.

64-bit Windows machines boot fine; my two Win8 machines are both 64-bit and run perfectly. I know there's supposedly trouble passing through Nvidia cards, so I'm not sure if that could be the issue.

Quote:


> Originally Posted by *Sarec*
> 
> When I pick Other 64-bit Linux, my Debian 7 installs fine; I've done this on various platforms recently. You can also try telling it to install under Debian 6 and just use a Debian 7 disc. I did this once before as well, and it worked then.


I was referring to the base system option for SteamOS; I have successfully booted Debian. I guess I'll need to tinker with this one a bit more.


----------



## CJston15

Recently upgraded my home server from WHS 2011 to WSE 2012, and with that added some additional hardware. No pics handy at the moment because I'm at work, but...

Motherboard: Asus P6X58D-E
Processor: Intel Core i7 980x
RAM: 16GB
OS: Windows Server 2012 R2 Essentials
HDD: 1x 300GB WD VelociRaptor (OS), 1x 1TB WD Black (ClientComputerBackups), 2x 3TB WD Red (DataStorage), 1x 1TB External (ServerBackup)
PSU: Corsair TX850 850w
Case: Re-used DiabloTek Mid Tower Case

Uses: DC, AD, DNS, DHCP, VPN, RWA, TeamSpeak Server, Plex Server, Google Music, NetworkPrinting, Firewall, LAN-ClientCompMonitoring, and nightly backups.

I was trying to think of any other useful things to do with it. If anyone else has suggestions or cool stuff I can utilize it for, I'm all ears. If I find a need I'll mess around with Hyper-V, but at the moment I have so many devices that I don't need to virtualize anything.


----------



## TopicClocker

Quote:


> Originally Posted by *CJston15*
> 
> Recently upgraded my home server from WHS2011 to WSE2012 and with that added some additional hardware. No pics handy at the moment because I am at work but...
> 
> Motherboard: Asus P6X58D-E
> Processor: Intel Core i7 980x
> RAM: 16GB
> OS: Windows Server Essentials R2 2012
> HDD: 1x 300gb WD Velociraptor (OS), 1x 1TB WD Black (ClientComputerBackups), 2x 3TB WD Red (DataStorage), 1x 1TB External (ServerBackup)
> PSU: Corsair TX850 850w
> Case: Re-used DiabloTek Mid Tower Case
> 
> Uses: DC, AD, DNS, DHCP, VPN, RWA, TeamSpeak Server, Plex Server, Google Music, NetworkPrinting, Firewall, LAN-ClientCompMonitoring, and nightly backups.
> 
> I was trying to think of any other useful things to do with it. If anyone else has some suggestions or cool stuff I can utilize it for i'm all ears. If I find a need i'll mess around with Hyper-V but at the moment I have so many devices I don't need to virtualize anything at this time.


Subsonic or Madsonic are pretty good imo. I use Madsonic for streaming my music to my phone; it's unreal to have tons of gigabytes of music available to your phone when it has like 8-16GB of storage, if that.

Also, I think this is the first I've heard of Google Music for a server; isn't that a cloud service or something?

I'm looking for some useful things too. I'm mainly after media things; I'd like an alternative to Plex, perhaps LAN-only but with transcoding included.


----------



## CJston15

Yes, Google Music is the cloud service, but I have the manager running on the server, pointed at my music share. It's not streaming directly from my server to my phone (and that's fine: less of my bandwidth getting used up when Google will do it for free!).


----------



## TopicClocker

Quote:


> Originally Posted by *CJston15*
> 
> Yes, Google Music is the cloud service, but I have the manager running on the server, pointed at my music share. It's not streaming directly from my server to my phone (and that's fine: less of my bandwidth getting used up when Google will do it for free!).


Oh I see


----------



## cones

Quote:


> Originally Posted by *TopicClocker*
> 
> Subsonic or Madsonic are pretty good imo. I use Madsonic for streaming my music to my phone; it's unreal to have tons of gigabytes of music available to your phone when it has like 8-16GB of storage, if that.
> 
> Also, I think this is the first I've heard of Google Music for a server; isn't that a cloud service or something?
> 
> I'm looking for some useful things too. I'm mainly after media things; I'd like an alternative to Plex, perhaps LAN-only but with transcoding included.


Media browser 3.


----------



## TopicClocker

Quote:


> Originally Posted by *cones*
> 
> Media browser 3.


Thanks!
I'll give it a look.


----------



## Dimestore55

Here's my unRAID Server aka "HEISENBERG"

O.S: unRAID 5.0-beta14
CASE: Lian Li PC-A17B
CPU: AMD Sempron 145 Sargas
CPU Cooler: Scythe SCSK-1100 Shuriken
MB: AsRock 880GM-LE
RAM: G.Skill Ripjaw 4GB SDRAM DDR3 1333
Raid Card: SUPERMICRO AOC-SASLP-MV8
PSU: Silverstone SFX 450 watt
Hot Swap Cages: ICY DOCK ( 1X 3 in 2, 2X 4 in 3)
Fan Controller: Lamptron FC Touch
Cache Drive: WD Black 7200rpm 500GB
HDDs: Mostly WD 5400rpm Green drives, 2TB


----------



## tiro_uspsss

Quote:


> Originally Posted by *Dimestore55*
> 
> Here's my unRAID Server aka "HEISENBERG"


nice work on the cables!


----------



## Callist0

Quote:


> Originally Posted by *Dimestore55*
> 
> Here's my unRAID Server aka "HEISENBERG"
> 
> O.S: unRAID 5.0-beta14
> CASE: Lian Li PC-A17B
> CPU: AMD Sempron 145 Sargas
> CPU Cooler: Scythe SCSK-1100 Shuriken
> MB: AsRock 880GM-LE
> RAM: G.Skill Ripjaw 4GB SDRAM DDR3 1333
> Raid Card: SUPERMICRO AOC-SASLP-MV8
> PSU: Silverstone SFX 450 watt
> Hot Swap Cages: ICY DOCK ( 1X 3 in 2, 2X 4 in 3)
> Fan Controller: Lamptron FC Touch
> Cache Drive: WD Black 7200rpm 500GB
> HDD's: Mostly WD 5200rpm Green Drives 2TB


How do you like that RAID card? Seems like a good price considering so many are $500+.


----------



## Master__Shake

Quote:


> Originally Posted by *Callist0*
> 
> How do you like that RAID card? Seems like a good price considering so many are $500+.


It's not a RAID card, it's just an HBA.


----------



## Dimestore55

Quote:


> Originally Posted by *Callist0*
> 
> How do you like that RAID card? Seems like a good price considering so many are $500+.


It works great. Evidently I need a firmware upgrade to get it to recognize 3 and 4TB drives, so I'll be doing that this week, because I'm already running out of room on my 2TBs.


----------



## DeviousMachine

When building my server I set out with a couple of goals in mind: I wanted it to be powerful but quiet (it sits in my living room) and to use very little power, and I think I achieved what I set out to accomplish.

Uses: Plex media server, HTPC (light gaming with Steam), web server, game server (mostly Minecraft), uTorrent.

Name: Koios
OS: Ubuntu 12.04 LTS
Case: Samsung Series 3 Chromebox case
CPU: Core i5-2450M @ 2.5GHz (3.1GHz turbo)
Motherboard: Chromebox motherboard
Memory: 8GB DDR3 1600MHz dual channel
PSU: Built-in Chromebox PSU
OS HDD (If you have one): 16GB SLC SSD
Storage HDD(s): 2TB WD Green external + 16GB flash drive for a couple of web sites (lets the external go to sleep)
Server Manufacturer: Samsung

I picked up a 2012 Google I/O Chromebox from eBay for about $330, threw in the 8GB of RAM, and proceeded to spend countless hours reformatting and modifying the original ChrUbuntu install script to automate my server setup for me. If anyone has a need for it, you're welcome to use the script I made.

https://github.com/austinksmith/fixmyserver





It may not be the most practical server, especially given how much time it took to get it working properly, but it's extremely powerful for its size and it does its job beautifully without making a peep. Eventually I'll upgrade it to 16GB of RAM, as I've noticed 8 gigs gets eaten pretty quickly while doing multiple torrents at once, but for now it's just fine.


----------



## Wildcard36qs

That's slick!


----------



## DeviousMachine

Quote:


> Originally Posted by *Wildcard36qs*
> 
> That's slick!


Thanks man, I'll add a couple pictures later.


----------



## Wildcard36qs

You got a good price on that thing with the i5. Normally the Pentium model costs that much.


----------



## DeviousMachine

Yeah, surprisingly the i5 Chromebox never got valued for what it is; most people who got one at Google I/O just sold them or gave them away because they didn't like ChromeOS.


----------



## lowfat

Finished my rebuild of my ESXi server.









My first finished system in a VERY long time. Forever Alone Fortress 02 Server Edition.

http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT021.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT023.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT022.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT025.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT026.jpg.html


----------



## KYKYLLIKA

I wish my ESXi server at work was that cool. To be honest, it's a sweet piece of purely commercial tech, but… a liquid-cooled server…


----------



## cones

Why the water cooling, just because?


----------



## lowfat

Quote:


> Originally Posted by *KYKYLLIKA*
> 
> I wish my ESXi server at work was that cool. To be honest, it's a sweet piece of purely commercial tech, but… a liquid-cooled server…


Thanks. Not exactly practical but it looks good.







Quote:


> Originally Posted by *cones*
> 
> Why the water cooling, just because?


Pretty much. I already had the case and most of the watercooling gear. Plus I enjoy the building of the rig more than anything.


----------



## boyk0

Quote:


> Originally Posted by *lowfat*
> 
> Finished my rebuild of my ESXi server.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My first finished system in a VERY long time. Forever Alone Fortress 02 Server Edition.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT021.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT023.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT022.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT025.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT026.jpg.html


moar pictahs!

also a spec list pleaaaaase


----------



## lowfat

Quote:


> Originally Posted by *boyk0*
> 
> [/SPOILER]
> 
> moar pictahs!
> 
> also a spec list pleaaaaase












AMD Opteron 6176
Supermicro H8SGL-F
32GB registered DDR3
120GB Sandisk Extreme SSD
240GB OCZ Revodrive 3
bunch of other HDDs
Various Intel NICs.

You can see a few more pics in my build log here.
http://www.overclock.net/t/1206604/forever-alone-ft02bw-server-edition-finished/200_20#post_22019127


----------



## boyk0

Quote:


> Originally Posted by *lowfat*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD Opteron 6176
> Supermicro H8SGL-F
> 32GB registered DDR3
> 120GB Sandisk Extreme SSD
> 240GB OCZ Revodrive 3
> bunch of other HDDs
> Various Intel NICs.
> 
> You can see a few more pics in my build log here.
> http://www.overclock.net/t/1206604/forever-alone-ft02bw-server-edition-finished/200_20#post_22019127


I can appreciate a cooling loop being set up for the sake of cleanliness, but I guess that CPU doesn't really need it, right?


----------



## lowfat

Yeah, the cooling is way overkill. But I had it all from previous builds.


----------



## Plan9

How does that type of cooling compare to the fans in terms of noise?


----------



## Wildcard36qs

That server looks amazing but man I'd be annoyed having to run all my cables out the top and then the back, but it does keep it clean I suppose.


----------



## NKrader

Quote:


> Originally Posted by *Wildcard36qs*
> 
> That server looks amazing but man I'd be annoyed having to run all my cables out the top and then the back, but it does keep it clean I suppose.


Trust me, it's much more convenient to have cables come out of the top


----------



## lowfat

Quote:


> Originally Posted by *Wildcard36qs*
> 
> That server looks amazing but man I'd be annoyed having to run all my cables out the top and then the back, but it does keep it clean I suppose.


Power cable + 5-6 network cables is all that will come out of the computer. Pretty sure I can manage that.









Quote:


> Originally Posted by *Plan9*
> 
> How does that type of cooling compare to the fans in terms of noise?


The fans are very quiet. The pump, however, is not. But that was my own choice, since I used spare parts where I could. It could easily be done with a quieter pump.


----------



## cones

Quote:


> Originally Posted by *lowfat*
> 
> Power cable + 5-6 network cables is all that will come out of the computer. Pretty sure I can manage that.
> 
> 
> 
> 
> 
> 
> 
> 
> The fans are very quiet. The pump, however, is not. But that was my own choice, since I used spare parts where I could. It could easily be done with a quieter pump.


Why so many Ethernet cables?


----------



## lowfat

Quote:


> Originally Posted by *cones*
> 
> Why so many Ethernet cables?


1 for IPMI, 1 for the host, 1 for pfSense WAN, 1 for the rest of the VMs, and likely 2 dedicated to Win Server 2012 to take advantage of SMB 3.0.


----------



## cones

Quote:


> Originally Posted by *lowfat*
> 
> 1 for IPMI, 1 for the host, 1 for pfSense WAN, 1 for the rest of the VMs, and likely 2 dedicated to Win Server 2012 to take advantage of SMB 3.0.


OK, no LAN for pfSense? I still want a server that can pass through hardware; that would be so nice.


----------



## lowfat

Quote:


> Originally Posted by *cones*
> 
> Ok, no LAN for pfsense? I still want a server that can pass through hardware, would be so nice.


For the pfSense LAN I should be able to use a virtual adapter.


----------



## Wildcard36qs

Yeah, I ran pfSense with just 2 NICs: WAN on one and LAN on the other. Now I've got a new server and it has 4 NICs. I'm using IPFire now, and it'll be one for WAN, one for LAN, one for WLAN, and one for DMZ.
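For anyone else trying IPFire, that one-NIC-per-network layout presumably maps onto IPFire's colour zones like so (the zone names are IPFire's convention; which physical NIC goes to which zone is picked during its setup):

```
RED    -> WAN  (untrusted / internet)
GREEN  -> LAN  (trusted wired)
BLUE   -> WLAN (wireless)
ORANGE -> DMZ  (servers reachable from outside)
```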


----------



## lowfat

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Yea I ran pfsense with just 2 nics. Wan for one and lan for the other. Now I got a new server and it has 4 NICs. I am using ipfire now and it'll be one for wan one for lan one for wlan and one for DMZ.


I'm not familiar with IPFire, but I may give it a try. I got rather attached to having a Squid cache, and it seems IPFire also has this.


----------



## Wildcard36qs

Quote:


> Originally Posted by *lowfat*
> 
> I'm not familiar with IPFire, but I may give it a try. I got rather attached to having a Squid cache, and it seems IPFire also has this.


I did the same thing with pfSense. I find that Squid, filtering, and ClamAV are actually easier to implement on IPFire.


----------



## alpenwasser

Some very nice machines in this thread; currently working my way through it page by page.









In the spirit of contribooting something myself, here's the build I'm currently in the process
of finishing up:

*APOLLO*

*Hardware*

*Case:* InWin PP689, modded to fit 24 3.5" disks
*CPU:* 2 x Intel Xeon L5630
*M/B:* Supermicro X8DT3-LN4
*RAM:* 12 GB Hynix unbuffered ECC (might upgrade this at some point, but currently it's just about sufficient)
*HBA:* 3 x LSI 9211-8i, flashed to IT mode (that was a "fun" process, figuring it out the first time)
*System SSD:* Intel 520 128 GB
*HDD:* 3 x Samsung HD103UJ 1 TB (3-way mirror, business data)
*HDD:* 6 x WD Red 3 TB (RAIDZ-2, media files)
*HDD:* 4 x WD RE4 2 TB (RAIDZ-2, my personal data)
*PSU:* Enermax Platimax 550W

*Software*

*Host O/S:* Arch Linux
*Guest O/S:* I'm running three guests (also Arch) via KVM/QEMU, plus I have one more guest
with BOINC which I occasionally start up to do some distributed computing. Each of the storage VMs
has its own dedicated LAN port (the board has four natively, very nice).
*Storage*: Running ZFS, each VM gets its dedicated ZFS pool, bought three LSI 9211-8i host
bus adapters on eBay for ~100 USD each and am using them in IT mode. Can't use the ports on
the M/B for drives larger than 2 TB because the board is from the era before 3 TB drives were common.
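Just for fun, a quick back-of-the-envelope check of the usable space on those pools. This ignores ZFS metadata overhead, TB vs. TiB, and reserved slop, so the real numbers come in a bit lower, but it shows how the redundancy eats into the raw capacity:

```python
# Rough usable capacity: RAIDZ-2 gives up two drives' worth of space
# to parity; an n-way mirror keeps only a single drive's worth.
def raidz2_usable(n_drives, size_tb):
    return (n_drives - 2) * size_tb

def mirror_usable(n_drives, size_tb):
    return size_tb  # every drive holds the same copy

pools = {
    "business (3-way mirror, 1 TB)": mirror_usable(3, 1),
    "media (6x 3 TB RAIDZ-2)": raidz2_usable(6, 3),       # ~12 TB
    "personal (4x 2 TB RAIDZ-2)": raidz2_usable(4, 2),    # ~4 TB
}

for name, tb in pools.items():
    print(f"{name}: ~{tb} TB usable")
```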

This is a diagram representing the setup as it is now; you might need to click through for the
full-res version if you want to be able to read the text.









(click image for full res)


*Pics*

With the help of my neighbour (who has a mill and some spare time, being a pensioner),
we made two disk racks with capacity for 12 3.5" drives each, which are mounted into the front
of the case.

(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)


There are still a few smaller things to do, but for the most part it's done and operational for now.

For those interested, build log can be found here.

Cheers,
-aw


----------



## M3nta1

Quote:


> Originally Posted by *alpenwasser*
> 
> With the help of my neighbour (who has a mill and some spare time, being a pensioner),
> we made two disk racks with capacity for 12 3.5" drives each, which are mounted into the front
> of the case.
> 
> -snip-
> 
> Cheers,
> -aw


WOW those are pretty. Looks fantastic, especially with the nicely managed cables.


----------



## alpenwasser

Quote:


> Originally Posted by *M3nta1*
> 
> WOW those are pretty. Looks fantastic, especially with the nicely managed cables.


Yeah, and they work pretty well too, very happy with the system so far. Thanks!


----------



## cchalogamer

Here's the current iteration of my toy. It's been through MANY hardware changes as parts get passed down from older main rigs. It was only a few months ago that I retired the i5 and swapped out the E8400/EP45-UD3P for it.

It's primarily used as a file server (docs, pics, and movies mostly) and to host my website and TeamSpeak server. Occasionally I'll fire up something like a GTA San Andreas Multiplayer server or something similar. Sometimes it gets used to test other hardware (it has a 7850 in it atm that I just got back from XFX RMA). I'm really limited by my ISP's upload @ <1Mbps until I move later this year.

OS: Windows 7 Ultimate x64 (Previous version had 32 bit Server 08 and Server 03 before that but I don't NEED the server OS)
Case: Bookshelf








CPU: i5-3570K @ 4.5Ghz with a delid
Motherboard: Gigabyte GA-Z77X-UD5H
Memory: 2 x 4GB Kingston DDR3 @ 667
PSU: Antec BP550
OS HDD: 500GB Seagate 7200 RPM
Storage HDD(s): 2 x 2TB Seagate 5400RPM + 2 320GB Seagate in RAID 1 (though one of these has recently failed so I moved the data to a 2TB and I've got another 2TB I've just been too lazy to put in)
Server Manufacturer (Ex: Dell, HP, You?): ME!

Current Server pics:


(That's the dead card on the PSU that I RMAed for the 7850)

Before I bought the Rosewill HDD cage and with the older Core 2 Duo parts:



And here's my awesome HDD rack from WAY back in the day. I know that X700 is AGP (and it's still around here somewhere), so I'm guessing that's using my old Socket 939 Athlon 64 3000+ and GA-K8NS-939:



I've been reading through this thread for a while now and haven't posted much on OCN in YEARS (well, with 40 posts total...), and when I got to the end I figured it was as good a time as any to post my server. I was looking at replacing this thing with a cheap 2-CPU server before doing the i5 upgrade, but I didn't have a reason to do it when I have such a grand case to work with and the i5 parts on hand and paid for.


----------



## alpenwasser

Quote:


> Originally Posted by *cchalogamer*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Here's the Current iteration of my toy. It's been through MANY changes of hardware upgrades as parts get passed down from older main rigs. It's only been a few months ago that I retired the i5 and swapped out the E8400/EP45-UD3P for this.
> 
> It's primarily used as a file server (Docs pics and movies mostly) and to host my website and Teamspeak server. Occasionally I'll fire up something like a GTA San Andres Multiplier server or something similar. Sometimes it gets used to test out other hardware (has a 7850 in it atm I just got back from XFX RMA) I'm really limited by my ISP's upload @ <1Mbps until I move later this year.
> 
> OS: Windows 7 Ultimate x64 (Previous version had 32 bit Server 08 and Server 03 before that but I don't NEED the server OS)
> Case: Bookshelf
> 
> 
> 
> 
> 
> 
> 
> 
> CPU: i5-3570K @ 4.5Ghz with a delid
> Motherboard: Gigabyte GA-Z77X-UD5H
> Memory: 2 x 4GB Kingston DDR3 @ 667
> PSU: Antec BP550
> OS HDD: 500GB Seagate 7200 RPM
> Storage HDD(s): 2 x 2TB Seagate 5400RPM + 2 320GB Seagate in RAID 1 (though one of these has recently failed so I moved the data to a 2TB and I've got another 2TB I've just been too lazy to put in)
> Server Manufacturer (Ex: Dell, HP, You?): ME!
> 
> Current Server pics:
> 
> 
> (That's the dead card on the PSU that I RMAed for the 7850)
> 
> Before I bought the Rosewill HDD cage and with the older Core 2 Duo parts:
> 
> 
> 
> And here's my awesome HDD rack from WAY back in the day, I know that x700 is AGP (and still around here somewhere) so I'm guessing that's using my old 939 Athlon 64 3000+ and GA-K8NS-939:
> 
> 
> 
> I've been reading through this thread for a while now and haven't posted much on OCN in YEARS (well with 40 post at all...) and when I got to the end I figured it was as good of a time as any to post my server. I was looking at replacing this thing with a cheap 2 CPU server before doing the i5 upgrade but didn't have a reason to do it when I have such a grand case to work with and the i5 parts on hand and paid for.


Haha, that HDD rack in the last pic is awesome!


----------



## Wildcard36qs

So I have been debating back and forth with myself on what I want to do with my C1100. I love it and it works great, but I am tempted to move it into a full tower. I know it has been done, but not sure on what CPU coolers to get or even what case I really want.


----------



## k1mz3

#alpenwasser

Wow, nice cable management, and the case modding is sweet!


----------



## rrims

Finally got around to updating my server. I went from:

CPU / Mobo: ASUS C60M1-I (1.0GHz dual core)
Case: Some random InWIN
OS: Windows 7 + SnapRAID

To this:

CPU: Intel i5-4570S
Mobo: ASRock Z87 Extreme6
RAM: Kingston HyperX 2x4GB
Case: Fractal Design Define R4
OS: Windows 7 + FlexRAID

I currently have 8.64TB of usable storage. This will be used mainly for serving my music, movies, and TV shows, but I'll also be using it as a web server, an SFTP server, and eventually a small game server.


----------



## alpenwasser

Quote:


> Originally Posted by *k1mz3*
> 
> #alpenwasser
> 
> Wow.. Nice cable management and the casemodding, it's sweet!


Thanks, appreciate it!









I must admit I am indeed rather happy with how it's turned out.

Quote:


> Originally Posted by *rrims*
> 
> Finally got around to updating my server. I went from:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> CPU / Mobo: ASUS C60M1-I (1.0ghz dual core)
> Case: Some random InWIN
> OS: Windows 7 + SnapRAID
> 
> To this:
> 
> CPU: Intel i5 4570s
> Mobo: Asrock Z87 Extreme6
> RAM: Kingston HyperX 2x4gb
> Case: Fractal Design R4
> OS: Windows 7 + FlexRAID
> 
> Currently have 8.64TB of usable storage. This will be used mainly for serving out my music, movies, and TV shows. But also be using it as a web server, SFTP server, and eventually a small game server.


Always sweet to see an R4 server! I like the additional HDD cage in there, thought about doing that with my own
R4 server as well, but didn't really have the space.

Since we're on the subject, might as well contriboot something else. This was my previous
server to the one posted above. It's also serving as an HTPC and a BOINC rig (hence the
W/C and the rather powerful CPU).

Unfortunately, I did not really plan this rig with ZFS in mind (ZFS on Linux was not yet
production-ready at the time of planning and buying), so it does not have all that much
RAM, and more importantly, it does not have ECC RAM. Also, no room for expandability,
just replacing the HDDs with larger ones. That's why I decided to upgrade to our larger
server from up above. But this machine is still serving nicely in its other roles.

Enough talk:

*ZEUS*

*M/B*: MSI Z77-GD65A
*CPU*: Intel 2600k
*RAM*: 4 GB of Kingston
*HDDs*: 4 × WDC RE4 2 TB (now in my newer server)
*HDDs*: 3 × WDC Red 3 TB (also now in newer server)
*System SSD*: 60 GB Intel 330
*PSU*: BeQuiet! 550W
*Case*: Fractal Design R4
*O/S*: Arch Linux w/ ZFS on Linux

*Special Mods*: I completely replaced the stock back panel of the case with a
custom panel to fit a 360 radiator. Was a rather fun project, and while a bit unorthodox,
very well suited to this build's purpose. Also, I replaced the 5.25" cage assembly with
the PSU at the front of the case.

For anyone interested, a summarized build log can be found here.

(click image for full res version)


(click image for full res version)


(click image for full res version)


(click image for full res version)


(click image for full res version)


(click image for full res version)


(click image for full res version)


(click image for full res version)


----------



## rrims

Quote:


> Originally Posted by *alpenwasser*
> 
> Always sweet to see an R4 server! I like the additional HDD cage in there, thought about doing that with my own
> R4 server as well, but didn't really have the space.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Since we're on the subject, might as well contriboot something else. This was my previous
> server to the one posted above. It's also serving as an HTPC and a BOINC rig (hence the
> W/C and the rather powerful CPU).
> 
> Unfortunately, I did not really plan this rig with ZFS in mind (ZFS on Linux was not yet
> production-ready at the time of planning and buying), so it does not have all that much
> RAM, and more importantly, it does not have ECC RAM. Also, no room for expandability,
> just replacing the HDDs with larger ones. That's why I decided to upgrade to our larger
> server from up above. But this machine is still serving nicely in its other roles.
> 
> Enough talk:
> 
> *ZEUS*
> 
> *M/B*: MSI Z77-GD65A
> *CPU*: Intel 2600k
> *RAM*: 4 GB of Kingston
> *HDDs*: 4 × WDC RE4 2 TB (now in my newer server)
> *HDDs*: 3 × WDC Red 3 TB (also now in newer server)
> *System SSD*: 60 GB Intel 330
> *PSU*: BeQuiet! 550W
> *Case*: Fractal Design R4
> *O/S*: Arch Linux w/ ZFS on Linux
> 
> *Special Mods*: I completely replaced the stock back panel of the case with a
> custom panel to fit a 360 radiator. Was a rather fun project, and while a bit unorthodox,
> very well suited to this build's purpose. Also, I replaced the 5.25" cage assembly with
> the PSU at the front of the case.
> 
> For anyone interested, a summarized build log can be found here.
> 


Thanks!

Your R4 is epic though! I love the 360mm rad in there; how did you go about doing that, if you don't mind? I've wanted to move my sig rig into an R4 for a WHILE now, but unfortunately two 240mm rads and 2 mechanical hard drives don't really fit in that case at the same time.


----------



## alpenwasser

Quote:


> Originally Posted by *rrims*
> 
> Thanks!
> 
> Your R4 is epic though! I love the 360mm rad in there, how did you go about doing that if you don't mind? I've wanted to be able to move my sig rig to a R4 for a WHILE now. But unfortunately (2) 240mm rads and 2 mechanical hard drives don't really fit in that case at the same time.


Thanks! I will admit that I still love looking at those pics even after almost a year.









The panel was quite a bit of work, but it was more tedious than difficult. Basically I
had a spare sheet of powdercoated alu from my Caselabs SMH10 (replaced it with
a vented one), and it had almost the perfect dimensions (width-wise, height needed
to be cut down of course).

So I drilled out all the rivets to remove the stock back panel on the R4, then basically
just drilled out the holes for the radiator on the new panel by hand (1173 holes or
something like that, a guy on OC3D actually counted them at some point), painted
the bare metal within the holes (with model paint and a fine brush) and then mounted
the whole thing to the R4 frame with some screws.

The holes are not perfectly aligned, but it was a trial-and-error thing; I hadn't really
perfected the process yet, and it's at the back of the case, so I can live with the
imperfections (besides, those give our mods some character, I'd say).

There's a post in my SMH10 build log where I go into some detail on the process,
and where you can find more pics documenting it: linky.


----------



## KyadCK

Function over looks in this case, but I'll bet it leaves a dent.












Servers, from left to right:

*OS:* ESXi 5.5
*Case:* Corsair Carbide 300R
*CPU:* i5-4670
*Motherboard:* ASUS B85M-G
*Memory:* 2x8GB 1600 Corsair
*PSU:* 430W Corsair
*OS HDD:* 8GB Samsung SD card
*Storage HDD(s):* 2TB WD, 2TB WD

*OS:* ESXi 5.5
*Case:* CM HAF 912
*CPU:* Phenom II x4 (3.5GHz)
*Motherboard:* GA-970A-UD3 Rev 1.1
*Memory:* 2x8GB 1600 Corsair
*PSU:* 430W Thermaltake
*OS HDD:* 8GB Samsung SD card
*Storage HDD(s):* 320GB WD, 500GB WD

*OS:* ESXi 5.5
*Case:* CM HAF 912
*CPU:* Phenom II x6 (3.6GHz)
*Motherboard:* GA-970A-UD3 Rev 1.1
*Memory:* 4x8GB 1600 G.Skill
*PSU:* 430W Thermaltake
*OS HDD:* 8GB Samsung SD card
*Storage HDD(s):* 500GB WD, 750GB WD

They also each have a weak GPU to allocate to the VMs as needed. They run a variety of OSes in VMs, mostly Windows Server 2008 and 2012, including 3 Domain/DHCP/DNS servers, two media servers, several Linux distros, OS X, and several experimental XP/7/8 installs. At least a dozen VMs are running at any given time.


----------



## ColSanderz

Finally built my file server that I've been needing for a while. Was really debating a rack vs tower, but decided on tower as I don't have any space for a rack in my apartment right now.

*OS*: Windows Server 2012
*Case*: iStarUSA S-917
*CPU*: E3-1240 V3 (w/ Noctua NH-U9B)
*Motherboard*: ASRock E3C226D2I
*Memory*: 2x8GB 1333 Kingston Unbuffered ECC
*PSU*: Seasonic SS-400FL 400W
*OS HDD*: 2x Western Digital Se 1TB in RAID 1
*RAID Card*: Adaptec 8885
*Storage HDD(s)*: 8x Western Digital Red 3TB in RAID 10

http://smg.photobucket.com/user/erik_njorl/media/IMG_7747_zps4fbe7297.jpg.html

Replaced every fan with a Noctua. It's so nice and quiet (and the temps are still pretty good!). The Adaptec will easily let me expand by another 8+ HDDs without needing another server, though it might be overkill otherwise.


----------



## Muskaos

Quote:


> Originally Posted by *ColSanderz*
> 
> Finally built my file server that I've been needing for a while. Was really debating a rack vs tower, but decided on tower as I don't have any space for a rack in my apartment right now.
> 
> *OS*: Windows Server 2012
> *Case*: iStarUSA S-917
> *CPU*: E3-1240 V3 (w/ Noctua NH-U9B)
> *Motherboard*: ASROCK E3C226D2I
> *Memory*: 2x8GB 1333 Kingston Unbuffered ECC
> *PSU*: Seasonic SS-400FL 400w
> *OS HDD*: 2x Western Digital Se 1 TB in Raid 1
> *Raid Card*: Adaptec 8885
> *Storage HDD(s)*: 8x Western Digital Red 3 TB in Raid 10
> 
> Replaced every fan with a noctua. It's so nice and quiet (and the temps are still pretty good!). The adaptec will easily allow me to expand another 8+ hdd's without needing another server, though it might be overkill otherwise


What drive bays are you using? Also, why RAID 10? I thought Server 2012 could do drive pooling...?


----------



## cdoublejj

Quote:


> Originally Posted by *lowfat*
> 
> Finished my rebuild of my ESXi server.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My first finished system in a VERY long time. Forever Alone Fortress 02 Server Edition.
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT021.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT023.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT022.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT025.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/foreveraloneserverFT026.jpg.html


WOW! At first I was like "That's just a gam..... WAIT... that's actually server/workstation hardware..... WOW"


----------



## lowfat

Quote:


> Originally Posted by *cdoublejj*
> 
> WOW! At first I was like "That's just a gam..... WAIT... that's actually server/workstation hardware..... WOW"












I took the system for a drive tonight.

http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-7.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-13.jpg.html


----------



## cdoublejj

Quote:


> Originally Posted by *lowfat*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I took the system for a drive tonight.
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-7.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-13.jpg.html


----------



## ColSanderz

Quote:


> Originally Posted by *Muskaos*
> 
> What drive bays are you using? Also, why RAID 10? I thought Server 2012 could do drive pooling...?


I'm using the SuperMicro CSE-M35T-1B. I've used their chassis before at work and really enjoy their build quality, even if it's a bit more expensive.

I saw the drive pooling, but my understanding is that it's essentially a software RAID 5 with added security/features: poor write speeds, more CPU-intensive, and dependent on the OS for parity calculations. With that in mind, I'd be completely uncomfortable using RAID 5 with my current drives (Reds) when their URE rate is only 1 in 10^14 bits. I'm not a RAID expert, but from my reading and from talking to people who do know, the most recommended path, if you can afford the cost, is hardware RAID 10.
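The URE worry above can be put in rough numbers. A quick back-of-envelope sketch, assuming the quoted 1-per-10^14-bits consumer-drive spec applies uniformly (real drives usually beat it, so treat this as a worst-case illustration):

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE)
# while rebuilding an array, assuming the often-quoted 1-per-10^14-bits
# spec. Illustrative only, not a prediction for any specific drive.

def rebuild_ure_probability(data_read_tb, ure_rate_bits=1e14):
    bits = data_read_tb * 1e12 * 8          # decimal TB -> bits
    p_per_bit = 1.0 / ure_rate_bits
    # Probability of at least one URE over the whole rebuild read
    return 1.0 - (1.0 - p_per_bit) ** bits

# RAID 5 of 8x 3TB: a rebuild must read all 7 surviving drives (21 TB)
p_raid5 = rebuild_ure_probability(21)
# RAID 10: a rebuild only re-reads the failed drive's mirror (3 TB)
p_raid10 = rebuild_ure_probability(3)

print(f"RAID 5 rebuild:  {p_raid5:.0%} chance of a URE")
print(f"RAID 10 rebuild: {p_raid10:.0%} chance of a URE")
```

At these pessimistic numbers a RAID 5 rebuild is more likely than not to trip a URE, while a RAID 10 rebuild that only re-reads one 3TB mirror is far less exposed, which matches the reasoning above.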


----------



## Muskaos

Oh, I get it, trust me. I'm not made of money, so I'm using Drivepool on my WHS 2011 box, so I can have max storage. I back my stuff up to three places, so losing a drive is just a time inconvenience.


----------



## Norse

Quote:


> Originally Posted by *cdoublejj*
> 
> WOW! At first I was like "That's just a gam..... WAIT... that's actually server/workstation hardware..... WOW"


Some people use actual server hardware









My file server/streaming server: dual 8-core Opterons @ 2.4GHz, 32GB RAM


----------



## rrims

Quote:


> Originally Posted by *Norse*
> 
> Some people use actual server hardware


I would if I could!


----------



## ColSanderz

Quote:


> Originally Posted by *Muskaos*
> 
> Oh, I get it, trust me. I'm not made of money, so I'm using Drivepool on my WHS 2011 box, so I can have max storage. I back my stuff up to three places, so losing a drive is just a time inconvenience.


Yea, I feel ya. My wallet is definitely hurting now. Actually, my work just asked if my server could be used as offsite storage (why we aren't doing this already, don't ask me)... so maybe I can recoup the cost a little now.


----------



## Muskaos

Nice. Hope you've got the room...


----------



## Master__Shake

Quote:


> Originally Posted by *Norse*
> 
> Some people use actual server hardware




I do; check out my InfiniBand network.

Sweet, sweet 10Gbps. Gotta buy some more cables, though.


----------



## NKrader

Quote:


> Originally Posted by *lowfat*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I took the system for a drive tonight.
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-7.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/FT02/export-1-13.jpg.html


And again, I stand by my statement of marriage proposal.

Yes, I can cook and clean. LOL


----------



## cdoublejj

Quote:


> Originally Posted by *NKrader*
> 
> and again i stand by my statement of marriage proposal.
> 
> yes i can cook and clean. LOL


0_O


----------



## Plan9

Quote:


> Originally Posted by *Master__Shake*
> 
> 
> 
> i do, check out my infiniband network
> 
> 
> 
> 
> 
> 
> 
> 
> 
> sweet sweet 10gbps, gotta buy some more cables though.


Impressive, but what do you need 10GbE for? Half the time I'm not making much use of my 1GbE links.


----------



## driftingforlife

Quote:


> Originally Posted by *Plan9*
> 
> Impressive, but what do you need 10GbE for? Half the time I'm not making much use of my 1GbE links.


*Because 10GbE is why*








I planned on doing this myself when I finally get my file server done; it'll work a treat for VM testing and file transfers.


----------



## Plan9

Quote:


> Originally Posted by *driftingforlife*
> 
> *Because 10GbE is why*
> 
> 
> 
> 
> 
> 
> 
> 
> I planned on doing this myself when i finally get my fileserver done, will work a treat for a VM testing and file transfers.


But with OS-level containers and bind mounts you don't need to shunt data over the network to begin with. And as for sending files to and from your other computers in the house, I doubt any of them would have 10GbE.

I'm all for a little self indulgence when it comes to hardware, but I just can't see how any home server would benefit from 10GbE


----------



## Master__Shake

Quote:


> Originally Posted by *Plan9*
> 
> Impressive, but what do you need 10GbE for? Half the time I'm not making much use of my 1GbE links.


To move data quickly between my 2 file servers.

Also because 10GbE is a fun project.

~$50 per card, $180 for the switch, and cables are cheap out of China.


----------



## Plan9

Quote:


> Originally Posted by *Master__Shake*
> 
> to move data quickly between my 2 file servers.
> 
> also because 10gbe is a fun project.
> 
> 50ish dollars per card and 180 for the switch and cables are cheap out of china.


Fair enough, but are you actually getting 10Gb, though? I've found cheap switches never run at the advertised throughput. Though I guess that would still be quicker than gigabit.

Why 2 file servers by the way? I'm guessing it's not just a capacity issue?


----------



## Master__Shake

Quote:


> Originally Posted by *Plan9*
> 
> Fair enough, but are you actually getting 10GbE though. I've found cheap switches never run at the advertised throughput. Though I guess that would still be quicker than gigabit.
> 
> Why 2 file servers by the way? I'm guessing it's not just a capacity issue?


I get around 450-550 MB/s transfers between the two servers.

As for why two: one is a backup of the other.

One uses 16 Toshiba 2TB drives on an 8888ELP, and the other 12 Seagate 2TB drives on a 9260-4i.
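For context on why the faster link matters for a backup pair like this, here is a rough full-resync time comparison. The 32TB array size, the ~110 MB/s gigabit ceiling, and the 500 MB/s figure (the midpoint of the 450-550 MB/s quoted above, likely disk-limited rather than link-limited) are all illustrative assumptions:

```python
# Rough sync-time comparison for mirroring a full array between two
# file servers. 110 MB/s is a realistic gigabit Ethernet ceiling;
# 500 MB/s approximates the reported IB-link transfer rate.

def sync_hours(array_tb, throughput_mb_s):
    # decimal TB -> MB, then seconds -> hours
    return array_tb * 1e6 / throughput_mb_s / 3600

full_array_tb = 16 * 2  # 16x 2TB drives, raw capacity
print(f"Gigabit: {sync_hours(full_array_tb, 110):.0f} h")
print(f"IB link: {sync_hours(full_array_tb, 500):.0f} h")
```

Roughly three and a half days versus under a day for a full re-mirror, which is the difference between a backup window and a lost weekend.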


----------



## tycoonbob

Another reason for 10Gb at home would be virtualization storage. Have 6 SSDs in RAID 10 for VM storage? A 1Gb network would be the bottleneck for your VMs, but not 10Gb.

That's IB, right? Not GbE..


----------



## Plan9

Quote:


> Originally Posted by *Master__Shake*
> 
> i get around 450-550mb transfers between both servers


Is that in-memory copies or including disk IO?
Quote:


> Originally Posted by *Master__Shake*
> 
> as far as why 2. one is a backup of the other.
> 
> one uses 16 toshiba 2tb drives on an 8888elp
> 
> and the other 12 seagate 2tb drives on a 9260-4i


Wouldn't that be redundancy rather than backup, since both servers are sitting next to each other?
Quote:


> Originally Posted by *tycoonbob*
> 
> Another reason for 10Gb at home would be for virtualization storage. have 6 SSD's in RAID 10 for VM storage? 1Gb network would be the bottleneck for your VMs, but not 10Gb.


I can't think of many home users who would want (let alone need) a separate server for their storage pool apart from their hypervisor. If you think about it, a lot of websites and private commercial clouds aren't even this well equipped; and yet here the total number of users would be one household (else it's not really a home server, it's just a server).

I mean, fair play if you guys have the cash to splash and enjoy this stuff as a hobby, but let's be realistic: your argument isn't a reason for 10Gb IB; home servers don't need that level of sophistication. This is just an excuse for new toys (which is why I asked _Master__Shake_ if his specs were a requirement or just something for fun).
Quote:


> Originally Posted by *tycoonbob*
> 
> That's IB, right? Not GbE..


Sorry yeah, last night I'd completely forgotten what the 'E' stood for in GbE


----------



## Muskaos

I wouldn't mind a 10 gig network, myself. But I'm the one with 6+ TB of media residing on his servers.


----------



## beers

Speaking of 10GbE..
Quote:


> [user@host ~]$ sudo ethtool eth2 | grep Speed
> Speed: 10000Mb/s


Finally had my OM3 shipment delivered
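That same check can be scripted rather than eyeballed. A minimal sketch that pulls the Speed field out of ethtool's output; the sample text here is canned for illustration, and in practice you would capture the real output of `sudo ethtool eth2` via `subprocess`:

```python
import re

# Parse the 'Speed:' field out of `ethtool <iface>` output.
# SAMPLE is a canned stand-in for the real command output.
SAMPLE = """\
Settings for eth2:
        Supported ports: [ FIBRE ]
        Speed: 10000Mb/s
        Duplex: Full
"""

def link_speed_mbps(ethtool_output):
    """Return the negotiated link speed in Mb/s, or None if absent."""
    m = re.search(r"^\s*Speed:\s*(\d+)Mb/s", ethtool_output, re.MULTILINE)
    return int(m.group(1)) if m else None

print(link_speed_mbps(SAMPLE))  # 10000
```

Handy for a cron job that alerts when a 10Gb port silently renegotiates down to gigabit.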


----------



## Plan9

Quote:


> Originally Posted by *Muskaos*
> 
> I wouldn't mind a 10 gig network, myself. But I'm the one with 6+ TB of media residing on his servers.


8TB here. But how much of it do you need to access concurrently?

Honestly, most of the time I don't even see any difference from when I upgraded from 100Mb to GbE (and I do push around HD content).

I think the problem is a lot of the time there's wasted overhead and rather than streamlining things, it's often easier just to buy better hardware.


----------



## Wildcard36qs

I am trying to find a good case, PSU, and CPU coolers to transfer my C1100 into. With summer heat coming along, my C1100 in the closet is starting to get annoying, and I want to put it in a nice case. I am finding it rather difficult to find a PSU under $100 that has both 2x 4-pin and 8-pin CPU power connectors (don't want to use adapters). I am trying to keep this all affordable as well.


----------



## PhilWrir

Hey everyone, just cleaned the thread out.

Please try to keep it respectful and on topic.

I also wanted to publicly apologize to all of you for forgetting to unlock this thread when I was done.








I got sidetracked by something else on here and never came back.
Thanks for understanding!


----------



## Peanuthead

Quote:


> Originally Posted by *Wildcard36qs*
> 
> I am trying to find a good case, PSU, and CPU coolers to transfer my C1100 into. As summer heat is coming along, my C1100 in the closet is starting to get annoying. I want to put it in a nice case. I am finding it rather difficult to find a PSU that has 2 x 4pin CPU and 8 pin CPU power (don't want to use adapters) under $100. I am trying to keep this all affordable as well.


They make adapters for the 2x4 pin power you need.


----------



## cones

Quote:


> Originally Posted by *Wildcard36qs*
> 
> I am trying to find a good case, PSU, and CPU coolers to transfer my C1100 into. As summer heat is coming along, my C1100 in the closet is starting to get annoying. I want to put it in a nice case. I am finding it rather difficult to find a PSU that has 2 x 4pin CPU and 8 pin CPU power (don't want to use adapters) under $100. I am trying to keep this all affordable as well.


Why not adapters? Otherwise you could go modular with a custom cable if you're having trouble finding something.


----------



## kyle5281

Quote:


> Originally Posted by *PhilWrir*
> 
> Hey everyone, just cleaned the thread out.
> 
> Please try to keep it respectful and on topic.
> 
> I also wanted to publicly apologize to all of you for forgetting to unlock this thread when I was done.
> 
> 
> 
> 
> 
> 
> 
> 
> I got sidetracked by something else on here and never came back.
> Thanks for understanding!










You gave me a mini heart attack!!!!

How could you forget to unlock one of the best threads on OCN!!!!??????!!!!! Since you are an awesome mod, I guess we can forgive you this time.


----------



## Jeci

I shuffle around a lot of high-bitrate HD content (25GB+ remuxes) with Plex to multiple servers and don't saturate 1Gb. Sure, 10Gb would be nice, but your use cases for it are limited...
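The arithmetic backs this up: even a heavyweight remux streams well below gigabit. A sketch assuming a 25GB file over a 2-hour runtime (both figures are round-number assumptions):

```python
# Average bitrate of a large remux vs. a gigabit link.
size_gb = 25            # file size, decimal GB
runtime_s = 2 * 3600    # 2-hour runtime

avg_mbps = size_gb * 8000 / runtime_s   # GB -> megabits, per second
print(f"average bitrate: {avg_mbps:.0f} Mb/s")

# How many such streams a gigabit link could carry at once
streams_per_gbe = round(1000 / avg_mbps)
print(f"~{streams_per_gbe} such streams fit in one gigabit link")
```

A single ~28 Mb/s stream uses under 3% of the wire, so streaming alone never justifies 10Gb; only bulk copies do.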


----------



## KyadCK

Quote:


> Originally Posted by *Jeci*
> 
> I shuffled around a lot of high bit rate HD content (25GB+ Remuxes), with plex to multiple servers and don't saturate 1Gb, sure 10Gb would be nice, but your use cases for it are limited...


Anything that goes from SSD to SSD. Done.

Or my personal favorite: letting another computer play a game off my RAM disk like it's no big deal.


----------



## alpenwasser

Quote:


> Originally Posted by *ColSanderz*
> 
> Finally built my file server that I've been needing for a while. Was really debating a rack vs tower, but decided on tower as I don't have any space for a rack in my apartment right now.
> 
> *OS*: Windows Server 2012
> *Case*: iStarUSA S-917
> *CPU*: E3-1240 V3 (w/ Noctua NH-U9B)
> *Motherboard*: ASROCK E3C226D2I
> *Memory*: 2x8GB 1333 Kingston Unbuffered ECC
> *PSU*: Seasonic SS-400FL 400w
> *OS HDD*: 2x Western Digital Se 1 TB in Raid 1
> *Raid Card*: Adaptec 8885
> *Storage HDD(s)*: 8x Western Digital Red 3 TB in Raid 10
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> http://smg.photobucket.com/user/erik_njorl/media/IMG_7747_zps4fbe7297.jpg.html
> 
> 
> 
> Replaced every fan with a noctua. It's so nice and quiet (and the temps are still pretty good!). The adaptec will easily allow me to expand another 8+ hdd's without needing another server, though it might be overkill otherwise


Nice! I really like those iStarUSA cases, but they're _very_ hard to get where I live.
Quote:


> Originally Posted by *Norse*
> 
> Some people use actual server hardware
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> My file server/streaming server: dual 8-core Opterons @ 2.4GHz, 32GB RAM


Yup, some people do, and personally I love it.









Also: Nice system.
Quote:


> Originally Posted by *rrims*
> 
> I would if I could!


I got myself some used server components on eBay, there are some pretty good deals to be
had if you know what to look for and get a bit lucky.


----------



## NKrader

Quote:


> Originally Posted by *Norse*
> 
> Some people use actual server hardware
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My file server/streaming server: dual 8-core Opterons @ 2.4GHz, 32GB RAM
> 
> 
> Spoiler: Warning: Spoiler!


Some people use server hardware with server chassis.

Neener neener.

Needs more HDDs, I know. I have more in there now, but they're small; I want to pick up 3x 4TB drives.


----------



## beers

Quote:


> Originally Posted by *Jeci*
> 
> I shuffled around a lot of high bit rate HD content (25GB+ Remuxes), with plex to multiple servers and don't saturate 1Gb, sure 10Gb would be nice, but your use cases for it are limited...


Y u no?

I rage any time the transfer rate drops below 100 MB/sec through Samba


----------



## Aximous

Quote:


> Originally Posted by *beers*
> 
> Y u no?
> 
> I rage any time the transfer rate drops below 100 MB/sec through Samba


I break out in tears of joy anytime it goes over 100... Still gotta use it, no way I'll get NFS running on all the clients on the network


----------



## alpenwasser

Quote:


> Originally Posted by *NKrader*
> 
> some people use server hardware with server chassis
> 
> 
> 
> 
> 
> 
> 
> neener neener
> 
> needs more hdds, i know.. i have more in there now but they are small.. i want to pick up 3 4tb drives.
> 
> 
> Spoiler: Warning: Spoiler!


I see I'm not the only one who has needed to add additional chipset cooling.


----------



## cdoublejj

I AS5ed all my heatsinks, including the mobo heatsinks, and added some VRM cooling too.

I also have 2x 120mm intake fans blowing fresh air onto the chipset heatsinks.


----------



## Jeci

My file server:



A pair of 8-core/16GB servers hanging off the network as well:



Stuff that I'm running on them:


Plex Media Server
ru & rtorrent
Apache
Headphones (SABnzbd)
GNS3
Win 2k8 domain
I've got some work applications running on the second one, but they're not set up yet, as I still need to set up a separate network with DNS.


----------



## cdoublejj

I've always wondered about shoehorning bigger heatsinks into 1U servers to make them quieter (not being stackable then).


----------



## u3b3rg33k

The issue isn't so much the 1U heatsinks, it's the 40mm fans that are the noise culprit.


----------



## Plan9

Haha, those fans are loud.


----------



## DaveLT

Usually the heatsinks themselves have no fans because there's no space for a proper fan (height issue), so those 40mm fans have to spin ultra fast to push air through the HDD cages and past the heatsink with enough velocity. Normally I see most OCN rigs use ultra-low-static-pressure fans up front, which results in essentially no airflow through the HDD cage /facepalm
Couple that with Corsair's usually very dense HDD arrays; NZXT leaves much more gap between HDDs.


----------



## cdoublejj

Quote:


> Originally Posted by *u3b3rg33k*
> 
> The issue isn't so much the 1U heatsinks, it's the 40mm fans that are the noise culprit.


I guess you could wire/zip-tie a bigger fan on top, but I'd bet they'd run really cool with an Arctic Freezer 7 Pro.


----------



## DaveLT

Quote:


> Originally Posted by *cdoublejj*
> 
> i guess if you wire/zip tied a bigger fan on top but, i'd bet they'd run really cool with an Arctic freezer 7 pro.


Not if you only have 20mm of space between the cpu and the top of the case.


----------



## cdoublejj

Quote:


> Originally Posted by *DaveLT*
> 
> Not if you only have 20mm of space between the cpu and the top of the case.


That's the cost of quiet: it requires more space. Thankfully my server is massive.


----------



## ryanallan

A couple shots from my Lego Server build.


----------



## driftingforlife

That's flipping epic man, nice work


----------



## tiro_uspsss

Quote:


> Originally Posted by *ryanallan*
> 
> A couple shots from my Lego Server build.


LEGO!


----------



## Plan9

What's that like for getting hot? It looks awesome


----------



## blooder11181

Quote:


> Originally Posted by *ryanallan*
> 
> A couple shots from my Lego Server build.
> 
> 
> Spoiler: Warning: Spoiler!


That's over 9000 of awesomeness


----------



## ryanallan

Thanks guys!

@Plan9 it's not too bad actually. Everything stays cool; the CPU hovers in the mid-30s and the HDDs mostly stay under 40°C.


----------



## Plan9

Quote:


> Originally Posted by *ryanallan*
> 
> Thanks guys!
> 
> @Plan9 it's not too bad actually. Everything stays cool. CPU hovers in the mid 30's and the HDD's mostly stay under 40C.


Nice.


----------



## Tadaen Sylvermane

Got my little server running. Just a file server and toying around a bit with KVM.



Mouse in picture for size comparison. The case is an Antec ISK; the parts are the Micromachine build in my sig.


----------



## LuckyJack456TX

Second rig in my sig is my server.


----------



## Cyberion

I'm thinking about buying one of these puppies. What do you guys think?

Or should I just run VMs on my desktop? (specs in sig, planning on upgrading to 16GB soon)


----------



## void

Quote:


> Originally Posted by *Cyberion*
> 
> I'm thinking about buying one of these puppies. What do you guys think?
> 
> Or should I just run VMs on my desktop? (specs in sig, planning on upgrading to 16GB soon)


I'm not seeing anything in your sig? I guess it depends how many VMs you want to run, you haven't given a lot of info.


----------



## DaveLT

Quote:


> Originally Posted by *Cyberion*
> 
> I'm thinking about buying one of these puppies. What do you guys think?
> 
> Or should I just run VMs on my desktop? (specs in sig, planning on upgrading to 16GB soon)


CS24-SCs are not bad, but they're at the very low end of the spectrum, so they tend to be noisier than their more expensive brothers, I think.


----------



## Offler

This is my concept of a mobile server:



The base is a Lenovo S10-3t netbook:
Atom N550
2GB RAM
It's a cheap tablet/netbook hybrid with a standard 2.5-inch disk bay (SATA2, AHCI support) and two mini-PCIe slots.

Further improvements:

a) TV tuner (DVB-T)
b) Crystal HD decoder
c) 1TB Samsung 840 EVO
d) "Antenna mod" (for the TV tuner; can be used even for Wi-Fi)
e) Internal Wi-Fi was removed and replaced by an external one with the option to connect a better antenna.

Purpose:
Media and TV sharing (mostly movies) over the local network. Can serve up to 6 clients over Wi-Fi, 12 over 100Mbit LAN, and about 50 over a USB/1Gbit LAN adapter. This is enough for home entertainment.

*Wi-Fi range*: About 30 meters around with an 8dB antenna
*Battery*: 8-cell Li-Ion. Up to 4 hours when not plugged in.
*Target clients*: Tablets, smartphones, smart TVs, PCs or laptops.
*Used format*: SD MPEG-2 (DVDs; the key was to find the most compatible format with low HW requirements).
*Capacity*: More than 100 DVD images from my DVD collection.
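As a sanity check on the client counts above: DVD-spec MPEG-2 peaks around 9.8 Mb/s but typically averages 5-6 Mb/s. A rough sketch, where the ~6 Mb/s per stream and the 50% usable-link-rate figures are assumptions rather than measurements:

```python
# Rough concurrent-stream estimate per link. STREAM_MBPS and the
# 0.5 efficiency factor are assumed values, not measured ones.

def concurrent_streams(nominal_mbps, stream_mbps=6, efficiency=0.5):
    usable = nominal_mbps * efficiency      # real-world usable bandwidth
    return int(usable // stream_mbps)       # whole streams that fit

for link, nominal in [("54Mb Wi-Fi", 54), ("100Mbit LAN", 100), ("1Gbit LAN", 1000)]:
    print(f"{link}: ~{concurrent_streams(nominal)} concurrent SD MPEG-2 streams")
```

The estimates land in the same order of magnitude as the 6/12/50 figures quoted, with Wi-Fi clearly the binding constraint.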

SW:
Windows 7 (Linux would be a better choice, but I am not skilled enough)
XBMC and UPnP sharing
DHCP server - http://www.dhcpserver.de/
Can act as a webserver as well

More info here: http://forum.xbmc.org/showthread.php?tid=186435

This device can act as both a client and a server at the same time. For now there is no "silent" version of the XBMC UPnP server...

Further expandability?
I believe this netbook can take a much bigger disk. I am not sure about the SD card reader - Lenovo and Realtek contradict each other on whether it's an SDXC slot or not... (max card capacity is either 32GB or 2TB, which can make a difference in the future). The mini-PCIe connector can also be used for storage if you don't plan to use the TV tuner or Crystal HD decoder; the current Samsung mSATA 840 goes up to 1 terabyte as well.

Disadvantages:
- The display has terrible viewing angles and very low resolution
- To keep it mobile I was forced to purchase an SSD

Advantages:
- An Atom-based server, UPS, and KVM in one
- Acts as a server when travelling







or even when I am walking in a park ...


----------



## KJ4MRC

From top to bottom

Switch x 2: Dell PowerConnect 3548 Switches with 4x 1G SFP

Server #1 CentOS: Plex Media Server, Tekkit, Transmission torrent

CPU: 2x Intel Xeon 5160
Motherboard: Intel 5000PSL
RAM: 10GB DDR2 ECC FBDIMM
Case: Norco 270
OS SSD: SanDisk 32GB SATA II
Storage: 500GB Western Digital Blue

Server #2 CentOS: Backups and test server

CPU: 2x Intel Xeon 3.0GHz
Model: Dell 2850
RAM: 12GB DDR2 ECC
Storage: 6x 36GB 10K SCSI RAID 5

Server #3 pfSense

CPU: 4x Intel Xeon 1.9GHz
Model: HP ProLiant DL580 G2
RAM: 12GB DDR2 ECC
Storage: 4x 36GB 10K SCSI RAID 5

Server #4 Currently offline

CPU: N/A
Model: HP ProLiant DL580 G2
RAM: N/A
Storage: 4x 36GB 10K SCSI


----------



## Wildcard36qs

PFSense box is a bit overkill lol. Also how loud is that thing sitting right there?


----------



## ozlay

My server died.

93 POST code error on my Tyan Tempest.


----------



## Wildcard36qs

Sucks man. Sorry to hear about that.


----------



## KJ4MRC

Quote:


> Originally Posted by *Wildcard36qs*
> 
> PFSense box is a bit overkill lol. Also how loud is that thing sitting right there?


That was my slowest server I had. LOL

It is pretty loud but the rack enclosure makes it tolerable .


----------



## Wildcard36qs

You really should get a C1100 or at least a newer gen 2950 and those will easily be better than your current ones combined lol. I actually have a few 2950s and 1950s sitting around now that I am not using. I bought a 22U Server cabinet and everything but then my wife saw all the equipment. So on craigslist it must go. Hahahah
It's cool though, way more than I needed.


----------



## Apple Pi

Sadly I cannot take pictures of it right now as I cannot take pictures within my works DC









OS: CentOS, FreeNAS + Proxmox
Case: C6100
CPU: Dual L5520 Per node
Motherboard: 4x C6100 Sleds
Memory: 48GB, 16GB, 16GB 16GB
PSU: Dual 1200W PSUs
OS HDD (If you have one): 480GB SSD, 16GB USB, 120GB SSD, 1TB HDD
Storage HDD(s): 3x2TB HDD in Raid-Z + 2TB HDD Spare
Server Manufacturer (Ex: Dell, HP, You?): Dell


----------



## cones

Quote:


> Originally Posted by *Apple Pi*
> 
> Sadly I cannot take pictures of it right now as I cannot take pictures within my works DC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> OS: Centos, FreeNAS + Proxmox
> Case: C6100
> CPU: Dual L5520 Per node
> Motherboard: 4x C6100 Sleds
> Memory: 48GB, 16GB, 16GB 16GB
> PSU: Dual 1200W PSUs
> OS HDD (If you have one): 480GB SSD, 16GB USB, 120GB SSD, 1TB HDD
> Storage HDD(s): 3x2TB HDD in Raid-Z + 2TB HDD Spare
> Server Manufacturer (Ex: Dell, HP, You?): Dell


I never quite understood that no picture thing.


----------



## Apple Pi

Yeah it's kind of a bummer but I can understand from a business standpoint as the rack is colocated so I wouldn't be able to take pictures of just my server and I don't have the other peoples permission to take pictures of their servers.


----------



## kujon

It's been a while since I've been in this subforum, but what happened to that guy with the two huge Norco 4220 servers in his closet? I vaguely remember his name being muergyls (sp?).


----------



## EvilMonk

Here's my own little home setup that's almost complete (I'm missing my replacement motherboard for my HP ProLiant DL320 G5p and my HP StorageWorks MSA20 storage array, which are both still in the mail).
I'm still studying at university while working, so I got all the Windows Server licenses through Microsoft DreamSpark for students for free, and all the Apple hardware at educational discount prices...
All the HP servers were purchased on eBay in parts through auctions and assembled by hand.






Server 1 - HP Proliant DL360 G5 (Sharepoint + MySQL + HyperV)
2x Intel Xeon e5450
32Gb DDR2 FBDIMM ECC 667
4x 146Gb SAS 10k on a HP Smart Array P400i 512 MB
NVidia Geforce GT 640 GDDR5 1Gb
iLO 2 advanced
Windows Server 2012 R2 Standard

Server 2 - HP Proliant DL380 G5 (Storage server)
2x Intel Xeon x5450
32Gb DDR2 FBDIMM ECC 667
8x 146Gb SAS 10k on a HP Smart Array P400 512 MB
HP Smart Array 642 uSCSI320 192Mb BBWC
NVidia Geforce 9800GT 1Gb
iLO 2 advanced
Windows Server 2012 R2 Enterprise

Server 3 - HP Proliant DL320 G5p (WMware ESXi 5.5u1)
1x Intel Xeon x3460
16Gb DDR2 ECC 800
2x 146Gb SAS 15k on a HP Smart Array E212 256 MB
NVidia Geforce 9800GT 1Gb
iLO 2 advanced

Server 4 - HP Proliant DL160 G6 (Transcoding + seedbox + Hyper V 2012)
2x Intel Xeon L5640 2.26Ghz Hexa Core
48Gb DDR3 Registered ECC 1333
4x 300Gb SAS 15k on a HP Smart Array P410 512 MB
NVidia Geforce GTS 250 1Gb
iLO 110i advanced
Windows Server 2012 R2 Standard

Server 5 - HP Proliant SE316M1R2 (SQL 2014 Server standard + Symantec Backup Exec 2010 R3 + Apache2)
2x Intel Xeon L5639 2.13Ghz Hexa Core
48Gb DDR3 Registered ECC 1333
8x 146Gb SAS 10k on a HP Smart Array P410 256 MB
NVidia Geforce 9800GT 1Gb
iLO 2 advanced
Windows Server 2012 R2 Standard

Server 6 - HP DL320 G6 (Active Directory + IIS + Sharepoint)
1x intel x5650 2.66Ghz Hexa Core
24Gb DDR3 Registered 1333
4x300Gb SAS 15k
NVidia Geforce 9800GT 1Gb
iLO 2 Advanced
Windows Server 2012 Standard

Server 7 - Apple Xserve 2008 (VPN + Time Machine + Open Directory)
2x intel E5462 2.8Ghz
32Gb DDR2 FBDIMM 800 ECC
3x 1Tb Toshiba SATA 6Gbps 7200rpm
ATI Radeon x1300 64mb
OSX Server 10.7.5

Workstation 1 - Apple Mac Pro 2010 Dual Xeon Hexa x5670
2x intel X5670 2.93Ghz
48 Gb DDR3 1333 ECC
2x Apricorn Velocity X2 + Mushkin advanced Chronos 480 Gb SSD Raid 0
2x Western Digital Mybook USB3 3Tb
1x Seagate 2Tb 5.9k SATA2
2x OCZ Agility4 128Gb (Windows 8.1 Pro)
Geforce GTX 670 2Gb eVGA (EFI Flashed by MacVidCards on eBay)
Mac OS X 10.9.3 + Aperture3 + Final Cut Pro X

Workstation 2 - Xeon Hexa L5640 @ 3.6Ghz
24 Gb DDR3 1600 G.Skill
2x Crucial M500 480 Gb SSD Raid 0
Vantec 4 ports 6Gbps + 2 esata
1x Seagate 2Tb 7.2k SATA3
2xGeforce GTX 670 4Gb eVGA SLI
Windows 8.1 Pro 64

Router
HP Pavilion Slimline:
AMD Athlon 64 X2 2.8 Ghz
8Gb DDR2 800
Seagate 500 Gb 7.2k Sata2
NForce 430
HP Dual Port NC362i PCIe 4x
UPS APC
PFSense

And finally my little Netgear ReadyNAS Duo v1, upgraded with 1Gb of DDR 400 memory.
2x 2Tb seagate 7.2k SATA2 in raid1

Still waiting for
HP Storageworks MSA20 SATA raid array with 12x 1.5Tb Western Digital Caviar Green SATA2
2x400w HP Power Supplies


----------



## wtomlinson

Quote:


> Originally Posted by *kujon*
> 
> it's been a while since ive been in this subforum but what happened to that guy with the two huge norco 4220 servers in his closet? i vaguely remember his name being muergyls (sp)?


Are you referring to the guy who had them hooked up to a Popcorn Hour box? The servers that were running unRAID?


----------



## Plan9

Quote:


> Originally Posted by *EvilMonk*
> 
> Heres my own little home setup thats almost complete (I'm missing my replacement motherboard for my HP Proliant DL320 G5p and my HP Storageworks MSA20 storage array that are both still in the mail)
> Im still studying at university while working so I go all the Windows Server Licenses through Microsoft DreamSpark for students for free and all the Apple Hardware with educational discount prices...
> All the HP servers where purchased on ebay in parts and assembled by hand through auctions.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Server 1 - HP Proliant DL360 G5 (Sharepoint + MySQL + HyperV)
> 2x Intel Xeon e5450
> 32Gb DDR2 FBDIMM ECC 667
> 4x 146Gb SAS 10k on a HP Smart Array P400i 512 MB
> NVidia Geforce GT 640 GDDR5 1Gb
> iLO 2 advanced
> Windows Server 2012 R2 Standard
> 
> Server 2 - HP Proliant DL380 G5 (Storage server)
> 2x Intel Xeon x5450
> 32Gb DDR2 FBDIMM ECC 667
> 8x 146Gb SAS 10k on a HP Smart Array P400 512 MB
> HP Smart Array 642 uSCSI320 192Mb BBWC
> NVidia Geforce 9800GT 1Gb
> iLO 2 advanced
> Windows Server 2012 R2 Enterprise
> 
> Server 3 - HP Proliant DL320 G5p (WMware ESXi 5.5u1)
> 1x Intel Xeon x3460
> 16Gb DDR2 ECC 800
> 2x 146Gb SAS 15k on a HP Smart Array E212 256 MB
> NVidia Geforce 9800GT 1Gb
> iLO 2 advanced
> 
> Server 4 - HP Proliant DL160 G6 (Transcoding + seedbox + Hyper V 2012)
> 2x Intel Xeon L5640 2.26Ghz Hexa Core
> 48Gb DDR3 Registered ECC 1333
> 4x 300Gb SAS 15k on a HP Smart Array P410 512 MB
> NVidia Geforce GTS 250 1Gb
> iLO 110i advanced
> Windows Server 2012 R2 Standard
> 
> Server 5 - HP Proliant SE316M1R2 (SQL 2014 Server standard + Symantec Backup Exec 2010 R3 + Apache2)
> 2x Intel Xeon L5639 2.13Ghz Hexa Core
> 48Gb DDR3 Registered ECC 1333
> 8x 146Gb SAS 10k on a HP Smart Array P410 256 MB
> NVidia Geforce 9800GT 1Gb
> iLO 2 advanced
> Windows Server 2012 R2 Standard
> 
> Server 6 - HP DL320 G6 (Active Directory + IIS + Sharepoint)
> 1x intel x5650 2.66Ghz Hexa Core
> 24Gb DDR3 Registered 1333
> 4x300Gb SAS 15k
> NVidia Geforce 9800GT 1Gb
> iLO 2 Advanced
> Windows Server 2012 Standard
> 
> Server 7 - Apple Xserve 2008 (VPN + Time Machine + Open Directory)
> 2x intel E5462 2.8Ghz
> 32Gb DDR2 FBDIMM 800 ECC
> 3x1Tb to****a SATA6 7200k
> ATI Radeon x1300 64mb
> OSX Server 10.7.5
> 
> Workstation 1 - Apple Mac Pro 2010 Dual Xeon Hexa x5670
> 2x intel X5670 2.93Ghz
> 48 Gb DDR3 1333 ECC
> 2x Apricorn Velocity X2 + Mushkin advanced Chronos 480 Gb SSD Raid 0
> 2x Western Digital Mybook USB3 3Tb
> 1x Seagate 2Tb 5.9k SATA2
> 2x OCZ Agility4 128Gb (Windows 8.1 Pro)
> Geforce GTX 670 2Gb eVGA (EFI Flashed by MacVidCards on eBay)
> Mac OS X 10.9.3 + Aperture3 + Final Cut Pro X
> 
> Workstation 2 - Xeon Hexa L5640 @ 3.6Ghz
> 24 Gb DDR3 1600 G.Skill
> 2x Crucial M500 480 Gb SSD Raid 0
> Vantec 4 ports 6Gbps + 2 esata
> 1x Seagate 2Tb 7.2k SATA3
> 2xGeforce GTX 670 4Gb eVGA SLI
> Windows 8.1 Pro 64
> 
> Router
> HP Pavilon Slimline :
> AMD Athlon 64 X2 2.8 Ghz
> 8Gb DDR2 800
> Seagate 500 Gb 7.2k Sata2
> NForce 430
> HP Dual Port NC362i PCIe 4x
> UPS APC
> PFSense
> 
> And finally my little Netgear ReadyNAS NAS Duo v1 upgraded with 1Gb of DDR 400 memory.
> 2x 2Tb seagate 7.2k SATA2 in raid1
> 
> Still waiting for
> HP Storageworks MSA20 SATA raid array with 12x 1.5Tb Western Digital Caviar Green SATA2
> 2x400w HP Power Supplies


Any reason there's no Linux boxes? I'm recruiting at the moment and it surprises / disappoints me just how many graduates have little to no experience of Linux. It's a huge industry that seems to be massively underserved by universities.


----------



## Jakeey802

Quote:


> Originally Posted by *Plan9*
> 
> Any reason there's no Linux boxes? I'm recruiting at the moment and it surprises / disappoints me just how many graduates have little to no experience of Linux. It's a huge industry that seems to be massively underserved by universities.


Agreed, most courses are all Windows-based and it frustrates me so much haha


----------



## EvilMonk

Quote:


> Originally Posted by *Plan9*
> 
> Any reason there's no Linux boxes? I'm recruiting at the moment and it surprises / disappoints me just how many graduates have little to no experience of Linux. It's a huge industry that seems to be massively underserved by universities.


Quote:


> Originally Posted by *Jakeey802*
> 
> Agreed, most courses are all windows based and it frustrates me so much haha


Hi guys, sorry, I forgot to mention that the VMware 5.5 U1 server runs Red Hat Enterprise Linux 6.4... I worked as a LAMP admin for a year. I'm still at university, but I already have two degrees (one in IT and the other in administration), and now I'm doing a certificate in computer programming. I work as a Windows / VMware ESXi sysadmin for now and just don't have the time to continue with Linux, as I'm studying to pass the MCITP certification on Windows Server 2012 R2.


----------



## tycoonbob

I've got a Microsoft certification resume that's about 5 pages long (3x MCSE, 3x MCSA, 2x MCITP, xX MCPs), along with some other vendor certifications (Citrix CCAs and CCAA, Splunk, etc), and have been putting off my RHCSA and/or RHCE certifications for quite some time. My problem is that I don't work with Linux enough day-to-day to warrant getting those certs, and I can't transition into a Linux role without those certs, or without a pay cut... so it's just not been worth it to me. It's like once you start with Microsoft, it's hard to transition out of it without losing money. This is the exact same reason I never sat the CCNA and CCNA: Security exams, which I studied extensively for a few years ago.

Oh well.


----------



## EvilMonk

Quote:


> Originally Posted by *tycoonbob*
> 
> I've got a Microsoft certification resume that's about 5 pages long (x3 MCSE's, x3 MCSA's, x2 MCITP's, xX MCP's), along with some other vendor certifications (Citrix CCA's and CCAA, Splunk, etc), and have been putting off my RHCSA and/or RCHSE certifications for quite some time. My problem is that I don't work with linux enough daily to warrant getting those certs, and I can't transition into a Linux role without those certs, or without a pay cut...so it's just not been worth it to me. It's like once you start with Microsoft, it's hard to transition out of it without loosing money. This is the exact same reason I never sat the CCNA and CCNA: Security exams, which I have studied extensively for a few years ago.
> 
> Oh well.


I hear you, it's the same for me.
I passed the CCNA in 2005 and the CCNP in 2006, and just never had the time to renew them since. I've been working mostly with Windows Server and VMware ESXi since 2008, and it takes all my time to study for the certifications I need for work... well, back then ESXi was ESX.


----------



## kujon

Quote:


> Originally Posted by *wtomlinson*
> 
> Are you referring to the guy who had them hooked up to a Popcorn Hour box? The servers that were running unRAID?


Yeah, the guy with two Norco 4220 cases and close to 100TB in storage, if I remember. Can't seem to find his thread.


----------



## EvilMonk

Quote:


> Originally Posted by *kujon*
> 
> yeah. the guy with two norco 4220 cases with almost close to 100tb in storage if i remember. can't seem to find his thread


Damn, that's 2 badass storage servers. I'd like to see what those 2 look like in a wardrobe...








I bet if the 20 trays of both servers are loaded with drives, and each runs a dual-CPU, dual-power-supply setup, it must cost a ****load of money to run


----------



## cones

I know which one you guys are talking about, just can't find it. I think it was 3-4TB drives filling most of two Norco 4220 cases. I know they were running unRAID and I think using i3's.


----------



## tycoonbob

Murlocke is the guy with the build, I believe. Looks like the topic is gone though..

http://www.overclock.net/t/987494/52tb-unraid-server/0_50

EDIT:
http://web.archive.org/web/20120514022040/http://www.overclock.net/t/987494/52tb-unraid-server


----------



## cones

Quote:


> Originally Posted by *tycoonbob*
> 
> Murlocke is the guy with the build, I believe. Looks like the topic is gone though..
> 
> http://www.overclock.net/t/987494/52tb-unraid-server/0_50
> 
> EDIT:
> http://web.archive.org/web/20120514022040/http://www.overclock.net/t/987494/52tb-unraid-server


Knew his name started with an M. Wonder why it's gone.


----------



## DaveLT

Quote:


> Originally Posted by *cones*
> 
> Knew his name started with an m. Wonder why its gone.


The admins are jealous he's got more storage than OCN servers combined


----------



## Plan9

Quote:


> Originally Posted by *DaveLT*
> 
> The admins are jealous he's got storage than OCN servers combined


Not hard. Sometimes I wonder if OCN is just running on a couple of Raspberry Pis.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> Not hard. Sometimes I wonder if OCN is just running on a couple of Raspberry Pis.


Made a typo. I meant more storage


----------



## sdcalihusker

Hey All,

I recently got started with my home server farm. I built my own 18U rack to house the servers, and have them all attached to my home network via a TP-Link TL-SG2424 Gigabit Ethernet switch:

Server 1
OS: ESX 5.5u1
Case: generic 1U
CPU: Core I-7 4771
Motherboard: ASUS Sabertooth Z87
Memory: 32 GB DDR3
PSU: 450 watt, generic

Server 2
OS: ESX 5.5u1
Case: Dell
CPU: Dual Opteron 6 core 2419 EE
Motherboard: Dell
Memory: 32 GB DDR2
PSU: 500 Watt
Server Manufacturer: Dell CS24-NV7

Server 3
OS: ESX 5.5u1
Case: Dell
CPU: Dual Opteron 6 core 2419 EE
Motherboard: Dell
Memory: 32 GB DDR2
PSU: 500 Watt
Server Manufacturer: Dell CS24-NV7

Server 4
OS: FreeNAS 9.2.1.5
Case: Logisys 4U
CPU: Intel Celeron G3220
Motherboard: MSI Z87I
Memory: 12 GB DDR3
PSU: 500 Watt, Generic
Storage HDD: 3x 3TB WD Red

I currently run 3 Windows 2012 R2 servers as VMs, and will be expanding to a test lab using VMs, VLANs, and link aggregation. Most of it is for work projects, and I may add more servers later.

The pics:


----------



## EvilMonk

Quote:


> Originally Posted by *sdcalihusker*
> 
> Hey All,
> 
> I recently got started with my home server farm. I built my own rack (18U) for the servers to be housed in, and have all attached to my home network via a TP-Link TL-SG2424 Gigabit Ethernet switch:
> 
> Server 1
> OS: ESX 5.5u1
> Case: generic 1U
> CPU: Core I-7 4771
> Motherboard: ASUS Sabertooth Z87
> Memory: 32 GB DDR3
> PSU: 450 watt, generic
> 
> Server 2
> OS: ESX 5.5u1
> Case: Dell
> CPU: Dual Opteron 6 core 2419 EE
> Motherboard: Dell
> Memory: 32 GB DDR2
> PSU: 500 Watt
> Server Manufacturer: Dell CS24-NV7
> 
> Server 3
> OS: ESX 5.5u1
> Case: Dell
> CPU: Dual Opteron 6 core 2419 EE
> Motherboard: Dell
> Memory: 32 GB DDR2
> PSU: 500 Watt
> Server Manufacturer: Dell CS24-NV7
> 
> Server 4
> OS: FreeNAS 9.2.1.5
> Case: Logisys 4U
> CPU: Intel Celeron G3220
> Motherboard: MSI Z87I
> Memory: 12 GB DDR3
> PSU: 500 Watt, Generic
> Storage HDD: 3x3GB WD Red
> 
> I currently run 3 Windows 2012 r2 servers as VMs, and will be expanding to a test lab using VMs, VLANS and Link Aggregation. Most of it is for use for work projects, and I may be adding more servers later.
> 
> The pics:


Nice setup, I like!!


----------



## sdcalihusker

Thank you. It does look a little bare at the bottom though. I may have to add a couple of Dell C1100s in there, and a decent rack-mountable UPS. I have this bad tendency to like overkill LOL.


----------



## EvilMonk

Quote:


> Originally Posted by *sdcalihusker*
> 
> Thank You. It does look a little bare in the bottom though. I may have to add a couple of Dell C1100's in there, and a decent rack mountable UPS. I have this bad tendency to like overkill LOL.


I hear you, me too







My setup is on the bottom 2 pages before this one








It still looks better organised than my setup, which I've put on a garage shelf I bought at Home Depot to install in my office


----------



## TheOx

Sdcalihusker, if you don't mind me asking and if you can answer, what kind of work projects are you running on these bad boys?


----------



## sdcalihusker

Quote:


> Originally Posted by *EvilMonk*
> 
> I hear you, me too
> 
> 
> 
> 
> 
> 
> 
> My setup is on the bottom 2 pages before this one
> 
> 
> 
> 
> 
> 
> 
> 
> Still look better organised than my setup I've put in a garage shelf I bought at Home Depot to install in my office


I started off with storing my servers on a TV stand in my man-cave LOL!

Quote:


> Originally Posted by *TheOx*
> 
> Sdcalihusker, if you don't mind me asking and if you can answer, what kind of work projects are you running on these bad boys?


I'm a Network Engineer and Systems Integration Specialist. I build labs of client builds as a precursor to performing installations. I use a pre-set configuration for most builds, but sometimes the builds will go sideways. I use my home lab as a means to do "perfect" installs that I can reference if I need to troubleshoot a client build. It saves time, and allows me to see how something should be set up. It helps me to isolate problems in the systems.

Edit to Add: I got the plans to build my rack from here: http://tombuildsstuff.blogspot.com/2014/02/diy-rack-server-plans.html


----------



## Aussiejuggalo

The everything server in my sig

*OS:* Windows 7 64bit Ultimate
*Case:* Fractal Design Define R4 Black Pearl
*CPU:* Intel i5 4430
*Motherboard:* ASRock B85M PRO4
*Memory:* G.Skill Ripjaws X F3-12800CL10D-16GBXL 16GB (2x8GB) DDR3 (only 8GB now. In a state of tiredness I put one stick in the wrong way *facepalm*)
*PSU:* Corsair VS350
*OS HDD:* Samsung 840 EVO Series 120GB SSD (60GB partition for Windows the rest for servers, Minecraft, 7 Days To Die etc)
*Storage HDDs:* Western Digital WD Red 2TB (games), Western Digital WD Red 3TB (t.v. shows), Western Digital WD Red 2TB (movies), SAMSUNG Spinpoint F3 1TB (recorded t.v)
*Cooling:* Xigmatek Dark Knight Night Hawk CPU Cooler Frostbourne Edition
*Fans:* Fractal Design Silent Series R2 140mm x3, Fractal Design Silent Series R2 120mm x2 (on CPU cooler)


----------



## Tadaen Sylvermane

Quote:


> In a state of tiredness I put one stick in the wrong way *facepalm*


Did you use a hammer to put it in? They only go in 1 way.


----------



## cones

Quote:


> Originally Posted by *Tadaen Sylvermane*
> 
> Did you use a hammer to put it in? They only go in 1 way.










They can kinda fit in and not fall out but they are not seated at all or facing the right direction.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Tadaen Sylvermane*
> 
> Did you use a hammer to put it in? They only go in 1 way.


Quote:


> Originally Posted by *cones*
> 
> 
> 
> 
> 
> 
> 
> 
> They can kinda fit in and not fall out but they are not seated at all or facing the right direction.


Like I said, I was tired, and to me it looked like it was in and sitting right; then it started to boot-cycle


----------



## cchalogamer

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Like I said I was tired and to me it looked like it was in and sitting right, then it started to boot cycle


That's nowhere NEAR as bad as a friend of mine's dad back in the late PC133 days. He managed to install a 512MB stick of PC133 backwards in a Compaq and get it to seat. It let all the magic smoke out, and a couple of the RAM chips actually fell off the PCB. He returned it to Best Buy and said it didn't work for him. That PC was still running with a different, more successful upgrade when we graduated high school in 2005. Old Athlon, 800ish MHz, and the whole thing was built like a tank. Needless to say, nothing like modern HP/Compaq home systems, which are built like piles of poop.

Another friend of mine went for nearly a year after he rebuilt the first PC "he built" (I told him what to do every step of the way and verified) with only 1GB of his 2GB working with his E8400, because the second stick wasn't fully seated. He spent a full week trying to figure it out once he saw only 1GB in Windows before asking me to take a look at it. 5 mins later his computer was much faster at multitasking









I've personally had a few I didn't get fully seated over the years, but never managed to get one to stay in backwards. (I have tried to put some in that way by accident when blindly installing by feel in a few OEM systems over the years, though.)


----------



## cdoublejj

Quote:


> Originally Posted by *cchalogamer*
> 
> That's no where NEAR as bad as a friend of mine's dad back in the late PC 133 days. He managed to install a 512MB stick of 133 backwards in a Compaq and get it to seat. It let all the magic smoke out and a couple of the ram chips actually fell off the PCB. He returned it to Best Buy and said it didn't work for him. That PC was still running with a different more successful upgrade when we graduated High School in 2005. Old Athlon 800ish mzh and the whole thing was built like a tank. Needless to say nothing like modern HP/Compaq home systems built like piles of poop.
> 
> Another friend of mine went for nearly a year after he rebuilt the first PC "he built" (I told him what to do every step of the way and verified) with only 1GB of his 2GB working with his E8400 because the second stick wasn't fully seated. He spent a full week trying to figure it out once he saw only 1GB in windows before asking me to take a look at it. 5 mins later his computer was much faster at multitasking
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I've personally had a few I didn't get fully seated over the years but never managed to get one to stay in backwards. (Tried to put some in that way by accident when blindly installing in a few OEM systems over the years by feel alone though)


Got one in a laptop to burn its stickers off and smoke. I can't remember well, but I think we didn't dare test another module in that slot in case of further damage. I've seen it a few times before, and have even seen the RAM slot and board survive.


----------



## xDuBz201

OS: Windows 7 Ultimate
Case: Gateway GT5464
CPU: AMD A6-6400K @ 3.9Ghz (Black Edition)
Motherboard: MSI A55M-E33 FM2
Memory: 2Gb Crucial DDR3 1333 (PC3-10600)
PSU: 350w
OS HDD (If you have one): Western Digital Blue 500GB 7200 Rpm
Storage HDD(s): Seagate 1Tb 7200 Rpm
Server Manufacturer (Ex: Dell, HP, You?):


----------



## vpex

@xDuBz201 have you plugged the second SATA cable in yet?


----------



## k1mz3

@xDuBz201

OS: Windows 7 Ultimate
Case: Gateway GT5464
CPU: AMD A6-6400K @ 3.9Ghz (Black Edition)
Motherboard: MSI A55M-E33 FM2
Memory: 2Gb Crucial DDR3 1333 (PC3-10600)
PSU: 350w
OS HDD (If you have one): Western Digital Blue 500GB 7200 Rpm
Storage HDD(s): Seagate 1Tb 7200 Rpm
Server Manufacturer (Ex: Dell, HP, You?):

2GB ram - Is it "just" a fileserver?

Otherwise, it's a nice setup


----------



## xDuBz201

Quote:


> Originally Posted by *vpex*
> 
> @xDuBz201 have you plugged the second SATA cable in yet?


Didn't Even Notice The Sata Cable Was Unplugged









Quote:


> Originally Posted by *k1mz3*
> 
> @xDuBz201
> 
> OS: Windows 7 Ultimate
> Case: Gateway GT5464
> CPU: AMD A6-6400K @ 3.9Ghz (Black Edition)
> Motherboard: MSI A55M-E33 FM2
> Memory: 2Gb Crucial DDR3 1333 (PC3-10600)
> PSU: 350w
> OS HDD (If you have one): Western Digital Blue 500GB 7200 Rpm
> Storage HDD(s): Seagate 1Tb 7200 Rpm
> Server Manufacturer (Ex: Dell, HP, You?):
> 
> 2GB ram - Is it "just" a fileserver?
> 
> Otherwise, it´s a nice setup


2Gb For Now. It's Only My Plex Media Server


----------



## CynicalUnicorn

Hi, thread. You sound helpful. One, what are your opinions on using laptop drives in a NAS that can accept desktop drives? Two, what is the best way to get drives for cheap? Three, can FreeNAS be used with any sort of SSD cache?


----------



## vpex

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Hi, thread. You sound helpful. One, what are your opinions on using laptop drives in a NAS that can accept desktop drives? Two, what is the best way to get drives for cheap? Three, can FreeNAS be used with any sort of SSD cache?


Hi, @CynicalUnicorn. You sound like you need help.

One: Does the NAS support more 2.5" drives than 3.5" drives? If it supports the same number of each, use desktop drives, as they have larger capacities. 3TB is the sweet spot at the moment for capacity/value.

Two: Buying in bulk and negotiating a price, or eBay.

Three: FreeNAS can use an SSD cache as a ZIL or an L2ARC.

The ZIL is the ZFS Intent Log; a dedicated ZIL device (SLOG) accelerates synchronous writes. The L2ARC is the Level 2 ARC, the read cache. The ZIL device should ideally be mirrored, and the L2ARC can be striped (raid-0) if you want. IMO a ZIL is primarily beneficial for databases.
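For anyone who wants to try this on a FreeNAS box, both cache types are attached to an existing pool with `zpool add` from a root shell. This is just a sketch: the pool name `tank` and the `ada4`-`ada6` device names are placeholders for your own pool and SSDs.

```shell
# Attach a mirrored SLOG (dedicated ZIL device) to the pool "tank".
# Mirroring matters here: a lost SLOG with unflushed sync writes can cost data.
zpool add tank log mirror /dev/ada4 /dev/ada5

# Attach an L2ARC (read cache). Cache devices are striped and hold no
# unique data, so a single unmirrored SSD is fine.
zpool add tank cache /dev/ada6

# Confirm the log and cache vdevs show up where expected.
zpool status tank
```

Removing them again is just as easy (`zpool remove tank /dev/ada6` for the cache device), so it's cheap to experiment.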


----------



## Plan9

Quote:


> Originally Posted by *vpex*
> 
> Hi, @CynicalUnicorn. You sound like you need help.
> 
> One: Does the nas support more drives with 2.5" drives compared to 3.5" drives? If the nas supports the same number of 2.5" drives as 3.5"; use desktop drives as they have larger capacities. 3TB is the sweetspot at the moment for capacity/value.
> 
> Two: Buying in bulk and negotiating a price or eBay.
> 
> Three: FreeNAS can use a SSD cache as a ZIL or a L2ARC.
> 
> ZIL is the ZFS Intent Log, this is the write cache. L2ARC is the Level 2 Arc, this is the read cache. The ZIL should be ideally mirroed and the L2ARC can be in raid-0 if you want. IMO ZIL is primarily beneficial for databases.


SSD ZIL is pretty pointless for home servers (and even more so if you're not serving most of your shares over NFS). L2ARC can be handy though


----------



## driftingforlife

Finally built a VM server out of spares I have.

i3 (will look out for a cheap 2nd-hand i7)
Asus MVF
8GB G.Skill, 8GB Corsair
128GB Samsung SSD, 2x 160GB Samsung HDDs
ESXi 5.5


----------



## CynicalUnicorn

Quote:


> Originally Posted by *vpex*
> 
> One: Does the nas support more drives with 2.5" drives compared to 3.5" drives? If the nas supports the same number of 2.5" drives as 3.5"; use desktop drives as they have larger capacities. 3TB is the sweetspot at the moment for capacity/value.
> 
> Two: Buying in bulk and negotiating a price or eBay.
> 
> Three: FreeNAS can use a SSD cache as a ZIL or a L2ARC.
> 
> ZIL is the ZFS Intent Log, this is the write cache. L2ARC is the Level 2 Arc, this is the read cache. The ZIL should be ideally mirroed and the L2ARC can be in raid-0 if you want. IMO ZIL is primarily beneficial for databases.


I salvaged a bunch of old P4 systems that were going to be trashed, and each contained a cage that fits 3x3.5" drives. Due to how I bolted them in, the bottom bays in two of the cages must have 2.5" adapters (or a modified 3.5" drive, but I'm not _that_ stupid). In addition, there is an adapter cage that fits in 2x5.25" bays and allows for three more desktop drives, meaning up to 10 potentially. Assuming price is not a restriction, do the smaller drives offer any benefits, e.g. durability? I know I'll end up with one or two anyway, and I want to know if more is worth it.

From whom? A manufacturer like WD or HGST (WD fanboy and proud!) or a seller like Amazon or Newegg?

Alright, cool. I just want to know if this is possible. I'm not 100% sure if I want unRAID or FreeNAS at this point. unRAID is expensive, but it has a lot of cool features, while FreeNAS works better with NAS drives (e.g. WD Reds) than with normal desktop drives. Given how much more expensive NAS drives are, the two OSes should probably even out after 10TB or so. Then again, I also wouldn't be buying a 4TB WD Black, the anecdotally most reliable series ever, for unRAID parity which would save around $110 plus $70 for the OS to begin with.

Quote:


> Originally Posted by *Plan9*
> 
> SSD ZIL is pretty pointless for home servers (and even more so if you're not serving most of your shares over NFS). L2ARC can be handy though


You're implying I disagree!







It's like getting a 780 for an HTPC that will be running PS or N64 emulators at worst: totally pointless and arguably a waste, but who doesn't love overkill?


----------



## Plan9

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I salvaged a bunch of old P4 systems that were going to be trashed, and each contained a cage that fits 3x3.5" drives. Due to how I bolted them in, the bottom bays in two of the cages must have 2.5" adapters (or a modified 3.5" drive, but I'm not _that_ stupid). In addition, there is an adapter cage that fits in 2x5.25" bays and allows for three more desktop drives, meaning up to 10 potentially. Assuming price is not a restriction, do the smaller drives offer any benefits, e.g. durability? I know I'll end up with one or two anyway, and I want to know if more is worth it.
> 
> From whom? A manufacturer like WD or HGST (WD fanboy and proud!
> 
> 
> 
> 
> 
> 
> 
> ) or a seller like Amazon or Newegg?
> 
> Alright, cool. I just want to know if this is possible. I'm not 100% sure if I want unRAID or FreeNAS at this point. unRAID is expensive, but it has a lot of cool features, while FreeNAS works better with NAS drives (e.g. WD Reds) than with normal desktop drives. Given how much more expensive NAS drives are, the two OSes should probably even out after 10TB or so. Then again, I also wouldn't be buying a 4TB WD Black, the anecdotally most reliable series ever, for unRAID parity which would save around $110 plus $70 for the OS to begin with.


WD Reds







Quote:


> Originally Posted by *CynicalUnicorn*
> 
> You're implying I disagree!
> 
> 
> 
> 
> 
> 
> 
> It's like getting a 780 for an HTPC that will be running PS or N64 emulators at worst: totally pointless and arguably a waste, but who doesn't love overkill?


You're misunderstanding the conversation. vpex said there's two different uses for SSD cache drives on ZFS and I said ZIL -specifically- was worthless for this particular set up but the other method, L2ARC, might be useful (in all practicality it wouldn't, but it's less of a worthless addition than ZIL)


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Plan9*
> 
> You're misunderstanding the conversation. vpex said there's two different uses for SSD cache drives on ZFS and I said ZIL -specifically- was worthless for this particular set up but the other method, L2ARC, might be useful (in all practicality it wouldn't, but it's less of a worthless addition than ZIL)


Ah, I thought you meant SSD caching in general. Nevermind then. If it can lower latency, then I'll take it. If it gives me bragging rights, I will also take it. At that point, the time is reliant on a device sending a request to the server and the server's ability to fulfill it. Disk latency essentially drops to zero. Of course, nothing changes if the requested blocks aren't cached, but there's still a write buffer that can come in handy depending on the speed of a given array.


----------



## Plan9

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Ah, I thought you meant SSD caching in general. Nevermind then. If it can lower latency, then I'll take it. If it gives me bragging rights, I will also take it. At that point, the time is reliant on a device sending a request to the server and the server's ability to fulfill it. Disk latency essentially drops to zero. Of course, nothing changes if the requested blocks aren't cached, but there's still a write buffer that can come in handy depending on the speed of a given array.


You don't need a ZIL drive unless you have dozens of clients being served over NFS. Adding a ZIL to a home server would just be costly and add additional points of failure. Just don't bother with it.

An L2ARC, however, carries no risk and will improve read performance for anything accessing your ZFS array.

Bragging rights are all well and good, but there needs to be some degree of sanity to it. There's no point risking data loss for zero performance gain just to show off (and anyone you brag that setup to who understands ZFS would probably just mock you for wasting your money anyway).
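If you do hang an L2ARC off a home pool anyway, it's worth checking whether it ever actually gets hit. On FreeNAS/FreeBSD the ARC counters are exposed through sysctl; `tank` below is a placeholder pool name.

```shell
# L2ARC hit/miss counters -- if l2_hits stays near zero, the cache
# device is doing nothing for your workload.
sysctl kstat.zfs.misc.arcstats.l2_hits
sysctl kstat.zfs.misc.arcstats.l2_misses

# Per-vdev I/O statistics, cache device included, refreshed every 5s.
zpool iostat -v tank 5
```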


----------



## cones

Quote:


> Originally Posted by *driftingforlife*
> 
> Finally built a VM server out of spares i have.
> 
> i3 (will look out for a cheap 2nd hand i7)
> Asus MVF
> 8GB G.Skill, 8GB corasir
> 128GB samsung SSD, 2 x 160GB samsung HDDs
> ESXI 5.5
> 
> 
> Spoiler: Warning: Spoiler!


Pretty empty case there









Since you three are talking about it: how does using an SSD as a cache change its longevity compared with using it as an OS drive?


----------



## driftingforlife

Well, no one is buying it, so I might as well make use of it. Just need a nice dual-CPU board to fill it up.


----------



## DaveLT

Quote:


> Originally Posted by *Plan9*
> 
> SSD ZIL is pretty pointless for home servers (and even more so if you're not serving most of your shares over NFS). L2ARC can be handy though


That is, if your network is even fast enough to keep up with the most basic 1TB HDDs








Quote:


> Originally Posted by *driftingforlife*
> 
> Finally built a VM server out of spares i have.
> 
> i3 (will look out for a cheap 2nd hand i7)
> Asus MVF
> 8GB G.Skill, 8GB corasir
> 128GB samsung SSD, 2 x 160GB samsung HDDs
> ESXI 5.5


and thought *looks at a corner* I'll use that enormous XSPC H2 nobody wants to buy anyway







Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I salvaged a bunch of old P4 systems that were going to be trashed, and each contained a cage that fits 3x3.5" drives. Due to how I bolted them in, the bottom bays in two of the cages must have 2.5" adapters (or a modified 3.5" drive, but I'm not _that_ stupid). In addition, there is an adapter cage that fits in 2x5.25" bays and allows for three more desktop drives, meaning up to 10 potentially. Assuming price is not a restriction, do the smaller drives offer any benefits, e.g. durability? I know I'll end up with one or two anyway, and I want to know if more is worth it.
> 
> From whom? A manufacturer like WD or HGST (WD fanboy and proud!
> 
> 
> 
> 
> 
> 
> 
> ) or a seller like Amazon or Newegg?
> 
> Alright, cool. I just want to know if this is possible. I'm not 100% sure if I want unRAID or FreeNAS at this point. unRAID is expensive, but it has a lot of cool features, while FreeNAS works better with NAS drives (e.g. WD Reds) than with normal desktop drives. Given how much more expensive NAS drives are, the two OSes should probably even out after 10TB or so. Then again, I also wouldn't be buying a 4TB WD Black, the anecdotally most reliable series ever, for unRAID parity which would save around $110 plus $70 for the OS to begin with.
> You're implying I disagree!
> 
> 
> 
> 
> 
> 
> 
> It's like getting a 780 for an HTPC that will be running PS or N64 emulators at worst: totally pointless and arguably a waste, but who doesn't love overkill?


They aren't as quick as 3.5" HDDs because of the smaller platters, but they are 1) quieter, 2) faster in terms of read/write response, which is what you want for a NAS anyway, not just raw speed, and 3) much cooler; with a very weak fan you can cool a bunch of them no problem. The only 2.5" drive I will consider is the Hitachi Travelstar 7K1000, and I would buy it over a 7K1000.C if I had the choice because it's MUCH quieter with more or less the same performance as a WD Black. It is more expensive than a 1000.C though, and you can buy a 2TB Hitachi/Toshiba for the price of a 1TB 2.5" 7K1000; the Hitachi/Toshiba 2TB and 3TB drives are what I call very quiet. The 7K1000.C has driven me insane with its noise many, many times already.
Also, with 2.5" drives you can obviously stuff 12 HDDs into the space of 5 3.5" HDDs, or 8 2.5" HDDs into the space of 3 3.5" HDDs. How's that for density! Of course all this density isn't cheap, but it is good density. If only the 2.5" drive selection were wider.
Anyway, the loud ones are the WD1002FAEXs and the Hitachi 7K1000.C.
WD and HGST and Toshiba (part of HGST) for me only!

Er unicorn, overkill and being plain dumb are two similar things ... We love overkill ... if it came cheap
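For what it's worth, the density claim roughly checks out on paper. A quick sketch using nominal form-factor dimensions (9.5 mm height assumed for the 2.5" drive):

```python
# Back-of-envelope volume check for the "12 small drives in the space of 5 big ones" claim.
# Nominal drive form-factor dimensions in mm; exact figures vary by model.
W35, L35, H35 = 101.6, 146.0, 26.1   # 3.5" drive
W25, L25, H25 = 69.85, 100.0, 9.5    # 2.5" drive (9.5 mm height assumed)

vol35 = W35 * L35 * H35
vol25 = W25 * L25 * H25
print(f"Raw volume ratio: {vol35 / vol25:.1f}x")  # ~5.8x before mounting hardware eats into it
```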








Quote:


> Originally Posted by *driftingforlife*
> 
> Well no-one is buying it so I might as well make use of it. just need a nice Dual CPU board to fill it up.










Dual socket LGA1366 mobos go cheap now and X5650s too.


----------



## driftingforlife

It's a Little Devil PC-V8 man, none of that XSPC rubbish








Yeah, I'm spending all my money on my car atm, will pick up a dual at some point.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Since you three are talking about it how does using the SSD as a cache change the longevity compared with using it as an OS drive?


No idea to be honest, but using an SSD as an OS drive is rather pointless on servers as your RAM will cache frequently accessed files anyway (it's the _cached_ figure in _free_) and boot up times are moot on servers since you don't reboot them often.

As for the SSD cache drive, if that gets trashed then you don't lose any data (since it's only cache).

So life span aside, it's a pretty easy choice between the two imo.
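On Linux, the page cache Plan9 mentions is the `Cached` figure. A minimal sketch of pulling it out of `/proc/meminfo`-style text (the sample values here are made up for illustration):

```python
# Sample /proc/meminfo-style text -- illustrative numbers, not from a real box.
SAMPLE = """\
MemTotal:       32767292 kB
MemFree:         1048576 kB
Cached:         24117248 kB
SwapTotal:       8388604 kB
"""

def cached_kb(meminfo_text: str) -> int:
    """Return the page-cache size in kB from /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("Cached:"):
            return int(line.split()[1])
    raise ValueError("no Cached line found")

print(f"Page cache: {cached_kb(SAMPLE) / 1024 / 1024:.1f} GiB")
```

Point the same parser at the real `/proc/meminfo` to see how much of your server's RAM is already acting as a read cache.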


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> No idea to be honest, but using an SSD as an OS drive is rather pointless on servers as your RAM will cache frequently accessed files anyway (it's the _cached_ figure in _free_) and boot up times are moot on servers since you don't reboot them often.
> 
> As for the SSD cache drive, if that gets trashed then you don't lose any data (since it's only cache).
> 
> So life span aside, it's pretty easy choice between the two imo.


Yes I understand the pros/cons of one as an OS drive in a server, I meant compared to one in a desktop. Wasn't sure if anyone had much experience or knew something else. What you said is what I was thinking in my head.


----------



## DaveLT

Quote:


> Originally Posted by *driftingforlife*
> 
> Its a Little Devil PC-V8 man, none of that XSPC rubbish
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Yea, im spending all my money on my car atm, will pick a dual at some point.


IS IT?! They look the same to me. Apart from the cable management holes, LD probably took XSPC's design and refined it.
LD first popped into my head, then I thought maybe it was an XSPC.


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Yes I understand the pros/cons of one as an OS drive in a server, I meant compared to one in a desktop. Wasn't sure if anyone had much experience or knew something else. What you said is what I was thinking in my head.


IIRC ZFS has been designed not to thrash the SSDs in a method that increases wear (and this is certainly true for how the ZIL entries are written). But I cannot comment specifically about L2ARC.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Plan9*
> 
> IIRC ZFS has been designed not to thrash the SSDs in a method that increases wear (and this is certainly true for how the ZIL entries are written). But I cannot comment specifically about L2ARC.


A 64GiB MLC drive can take 15GB of writes daily for a decade and be fine. Wear and tear isn't an issue.

Quote:


> Originally Posted by *DaveLT*
> 
> Er unicorn, overkill and being plain dumb are two similar things ... We love overkill ... If they came cheap


Is free (hopefully) cheap enough for you?


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> 64GiB MLC drive can take 15GB of writes daily for a decade and be fine. Wear and tear isn't an issue.
> Is free (hopefully) cheap enough for you?


If that's true, OCZ Vertexes wouldn't die so easily









If the SSD didn't cost anything in the first place


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> IIRC ZFS has been designed not to thrash the SSDs in a method that increases wear (and this is certainly true for how the ZIL entries are written). But I cannot comment specifically about L2ARC.


Don't know much about ZFS, but it's good that was added.
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> 64GiB MLC drive can take 15GB of writes daily for a decade and be fine. Wear and tear isn't an issue.
> Is free (hopefully) cheap enough for you?


I thought newer ones were better but never knew that.


----------



## Plan9

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Is free (hopefully) cheap enough for you?


You've got 3 high-speed SSDs for free and there's no other PC or laptop in the house without an SSD?


----------



## CynicalUnicorn

Quote:


> Originally Posted by *DaveLT*
> 
> If that's true OCZ Vertexes wouldn't die so easily


Quote:


> Originally Posted by *cones*
> 
> I thought newer ones were better but never knew that.


I'm speaking from the perspective of the memory and only the memory:

64GiB = 68.72GB
(68.72GB / P/E cycle) * 3000P/E cycle rating = 206158.43GB = 206.16TB total of writes
206.16TB / 3652.5 days = *56.44GB / day*

Oh, oops! I was thinking of a TLC drive with a third the rated write cycles and didn't bother to work out the numbers. Well, this is even better!
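The same arithmetic as a quick script, for anyone who wants to plug in their own drive size or P/E rating (3000 cycles assumed, as above):

```python
# Endurance math for a 64 GiB MLC drive rated at 3000 P/E cycles.
capacity_gb = 64 * 2**30 / 1e9                     # 64 GiB = 68.72 GB
pe_cycles = 3000
total_writes_tb = capacity_gb * pe_cycles / 1000   # total rated writes in TB
per_day_gb = capacity_gb * pe_cycles / 3652.5      # spread over a decade

print(f"{total_writes_tb:.2f} TB total, or {per_day_gb:.2f} GB/day for ten years")
```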


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'm speaking from the perspective of the memory and only the memory:
> 
> 64GiB = 68.72GB
> (68.72GB / P/E cycle) * 3000P/E cycle rating = 206158.43GB = 206.16TB total of writes
> 206.16TB / 3652.5 days = *56.44GB / day*
> 
> Oh, oops! I was thinking of a TLC drive with a third the rated write cycles and didn't bother to work out the numbers. Well, this is even better!


At the rate I use my SSD (12GB/day) it will last donkey's years ... LOL.
384TB of writes on my 128GB M5S would literally mean it lasts 88 years. Yeah, as if it would before it craps out due to other factors ...


----------



## Plan9

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'm speaking from the perspective of the memory and only the memory:
> 
> 64GiB = 68.72GB
> (68.72GB / P/E cycle) * 3000P/E cycle rating = 206158.43GB = 206.16TB total of writes
> 206.16TB / 3652.5 days = *56.44GB / day*
> 
> Oh, oops! I was thinking of a TLC drive with a third the rated write cycles and didn't bother to work out the numbers. Well, this is even better!


That's a rather optimistic value though. While wear levelling does help distribute the writes, you cannot guarantee a perfect distribution. In fact quite the opposite; many OS files are unlikely to see frequent changes, so that's a sizeable chunk of your SSD locked up to begin with. Plus the more you fill your SSD, the less free space there is to wear level. So it's clear that SSDs are not going to wear evenly, and you only need a few blocks to go bad for the device to be compromised (like how conventional HDDs should be binned once you start seeing the first few bad sectors, as you never quite know how long they've got left nor whether your data isn't already silently corrupting).

Let's also not forget that computers will do a hell of a lot more writing to your storage in the background than most users realise (creating / updating log files, updates to file system metadata which holds details like file access times, internet cache + cookies and other application temporary files, etc).

Don't get me wrong; I'm not trying to undermine the point about modern SSDs being more tolerant than consumers generally give them credit for. But I do also think your figures are a very optimistic / simplistic overview.
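To put rough numbers on that, here's a crude sketch of how static data and write amplification eat into the headline endurance figure. The static fraction and WA values are illustrative assumptions, not measurements:

```python
# Crude adjustment to the ideal TBW figure for a 64 GiB MLC drive.
capacity_gb = 68.72          # 64 GiB expressed in GB
pe_cycles = 3000
static_fraction = 0.5        # half the flash pinned by rarely-changing OS files (assumed)
write_amplification = 2.0    # typical-ish WA for a fairly full drive (assumed)

ideal_tbw = capacity_gb * pe_cycles / 1000
effective_tbw = ideal_tbw * (1 - static_fraction) / write_amplification
print(f"ideal: {ideal_tbw:.2f} TB written, adjusted: {effective_tbw:.2f} TB written")
```

Still a lot of writes, but a quarter of the optimistic number — which is the gist of the objection.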


----------



## shadow5555

I just moved, so here is the new server/networking/firewall area


Spoiler: Warning: Spoiler!



http://s1126.photobucket.com/user/s...38_1031222824734743511_n_zps0e1fa5b1.jpg.html



3rd shelf down

Untangle dedi box, in router mode:
Core 2 Duo CPU
4GB DDR2
dual gigabit NICs

16-port business-class Linksys gigabit switch

Monster server on bottom shelf:
AMD 9550 I think it is
8GB DDR2
500GB OS drive
20TB total, 16TB usable with RAID parity, 7TB free
HAF X 932 case


----------



## Shiftstealth

I just set up my first home server. I'm finally getting into the server side of things, although with my limited exposure I'm not sure what I can host at home that would benefit me just yet. So far I'm running a file server, a domain controller, and a Plex media server. I'm using this as a lab for my MCSA. Any other servers I could set up as a learning experience?
Thanks!


----------



## Blindsay

Depending on what you are looking to get into, I'd suggest setting up some virtual servers (VMware would be my suggestion) and learning about virtualization, as that seems to be a hot ticket these days.


----------



## lowfat

Finished off my upgraded FreeNAS box this morning.
http://s18.photobucket.com/user/tulcakelume/media/Define/export-16-1.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/Define/export-15-1.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/Define/export-17-1.jpg.html

http://s18.photobucket.com/user/tulcakelume/media/Define/export-18-1.jpg.html


----------



## Blindsay

nice, very clean, not sure if you have enough drives though


----------



## stumped

I'm kind of stuck in a transition right now. I need to move my NAS into my 550D case and my HTPC/desktop into the Node 304 (currently the NAS is in the Node 304 and the HTPC/desktop in the 550D), and move my 2x8GB + 2x2GB RAM to the 550D.

Ultimately my nas will be running:

Ubuntu 14.04
Openstack (vms deployed on an lvm created over a zfs mount)
ZFS (raidz2 w/ 4 2TB WD reds)
maybe plex
I just need to get a break from my class homework and job and career searching before I can finish this.


----------



## spice003

Quote:


> Originally Posted by *lowfat*
> 
> Finished off my upgraded FreeNAS box this morning.
> 
> http://s18.photobucket.com/user/tulcakelume/media/Define/export-18-1.jpg.html


Dang, that looks clean! What case is that? And how many drives can you run off one molex?


----------



## Peanuthead

What are the system specs? ZFS or UFS?


----------



## akshep

Quote:


> Originally Posted by *spice003*
> 
> dang that looks clean! what case is that? and also how many drives can you run of one molex?


Looks like the fractal design R4


----------



## lowfat

Quote:


> Originally Posted by *spice003*
> 
> dang that looks clean! what case is that? and also how many drives can you run of one molex?


Fractal Design Define R4. No idea; hopefully 8 isn't a problem.
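A rough sanity check on the 12V load for 8 drives on one chain. Per-drive currents here are assumptions (check your drives' labels); staggered spin-up, if your controller supports it, avoids the worst-case peak:

```python
# Ballpark 12V rail load for 8 x 3.5" drives daisy-chained off one molex run.
drives = 8
idle_12v_a = 0.45      # per-drive 12V draw while spinning idle (assumed)
spinup_12v_a = 1.8     # per-drive 12V spin-up peak (assumed)

idle_total = drives * idle_12v_a
spinup_total = drives * spinup_12v_a
print(f"idle: {idle_total:.1f} A, simultaneous spin-up: {spinup_total:.1f} A on 12V")
```

Steady-state is tame; it's the simultaneous spin-up peak that stresses a single connector run.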








Quote:


> Originally Posted by *Peanuthead*
> 
> What is the system specs? ZFS or UFS?


Opteron 6128, Supermicro H8SGL-F, 32GB Kingston 1333 ECC, 450W Silverstone PSU, IBM M1015-IT, 8 x 3TB, ZFS RAIDZ1. Upgraded from a Celeron G555, 8GB ITX machine.


----------



## Simmons572

Quote:


> Originally Posted by *lowfat*
> 
> Fractal Design Define R4. No idea, hopefully 8 isn't a problem.


Do you make your own SATA power connectors?


----------



## lowfat

Quote:


> Originally Posted by *Simmons572*
> 
> Do you make your own sata power connectors?


I crimped all the cables in the case. Pretty easy to do if you have the tools.


----------



## xanzion

Does it count if all of my hardware is a single Openstack array?











Annd the back(ish)



I know.. My first post on a forum that I have been lurking on for the last 7 years. Hi guys!


----------



## KYKYLLIKA

@xanzion That's quite good for a first post.

So… what's in your Openstack array? Does it host nice stuff?


----------



## xanzion

Current Server Configuration

The entire stack is one Openstack cluster, with a little bit of this, and a little bit of that thrown in (Mainly, 96TB of total storage. I recommend ONLY HGST if you're going platters instead of SSD)

-=Networking=-

UBNT Edgemax Lite
The World's first sub-$100, one million packet-per-second router. All CLI managed (Not a GUI guy..). Vyatta based.

Performance
(Layer-3 base forwarding) 1,000,000 pps for 64-byte packets. Line rate (3 Gbps) across all three ports for 512-byte packets and higher
CPU
Dual-core MIPS64 processor with hardware acceleration for packet processing and encryption/decryption
Ethernet
3 RJ-45 Gigabit Ethernet ports
1 RJ-45 Console Management Port
Memory
512 MB DDR2 RAM
Storage
4 GB
Console
1 RJ-45 serial console port

Dell PowerConnect 5224
24 copper Gb ports and 4 SFP ports

Managed via serial console port

vLan and Trunking support

Picked this up on the cheap because the fans were loud. Replaced them with new fans I had sitting around; quiet and cool now.

-Network Path-

Fiber optic internet = GPON

GPON > Edgemax Lite eth1

Edgemax Lite:

-eth0 10.0.0.0/24

+Dell PowerConnect 5224

-Ports 2,10-14 vLAN.110

>BareMetal Server Management, general WAN access. Firewall with iptables.

-Ports 15-19 vLAN.111

>Openstack Management LAN. No WAN.

-Ports 20-23 vLAN.112

>Openstack Instance Tunnels

-eth1 (Static ISP IP)/32

+eth1 > GPON #Bridged pppoe0 and configured to MASQUERADE.

-eth2 192.168.1.0/24

+eth2 > UBNT RocketMax M2

-UBNT RocketMax M2 > Distributed WiFi

PowerConnect 5224 Physical (Ports are NOT set to the physical layout as seen in pics, they have been reassigned for reasons...)

Port 24 > Edgemax eth0

Port 2 > station1.iviper

Ports 3-9 >

Ports 10-14 > All eth0 ports of servers.

Openstack Node Management:

Port 15 > Network node eth1

Port 16 > Controller Node eth1

Port 17 > Compute node eth1

Port 18 > Block node eth1

Port 19 > Swift node eth1

Openstack Instance Tunnels:

Port 20 > Controller Node eth2

Port 21 > Compute node eth2

-=Hardware=- (From top, down)

-Dell Monitor
-UBNT Edgemax

-Dell Powerconnect 5224

-Supermicro 1U 6 core xeon (12 threads) w/140 GB DDR3 RAM (Compute 1)

-Sun Microsystems x4600m2 16 core AMD Opteron (Sun's weird custom card layout. Each card appears to have a dual core Opteron. Play around with stuff-server and Neutron Networking+Compute-2) 140 GB DDR2 RAM, Dual 5GB Fiber, 8-1GB Eth ports. This thing has the BEST fans that I have ever seen. Also, the Mobo is like 7 mobo's stacked together, built tough.

-Custom Supermicro build. Dual 4 Core Intel xeons (16 threads) 32 GB DDR3 RAM (Compute-3)

-Supermicro customized 24 core AMD Opteron, 192 GB DDR3 RAM, 24x2TB drives (custom storage config, RAID6 LIO-Target+Compute-4 build)

-Dell PowerEdge 1950 Gen-3 Quad Core Intel Xeon W/20GB DDR2 RAM (Openstack Controller, aka "My main b****") Learned everything enterprise with this freebie box that I obtained 2 years ago.

-Custom Supermicro build. Dual Core AMD Opteron w/32GB DDR2 ECC RAM, 24x2TB Drives (Another custom storage config, RAID6 LIO-Target)

Both storage boxen are chained using one LSI RAID card in the 24-core box. Dual RAID6 storage pipes into RAID1, so all of the drives look like one giant drive with that extra redundancy.
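If I've read that layout right, the usable space works out like this (sketch; drive counts and sizes taken from the description above):

```python
# Usable capacity for two 24 x 2TB RAID6 boxes mirrored together as RAID1.
drives_per_box, drive_tb, boxes = 24, 2, 2

raid6_usable = (drives_per_box - 2) * drive_tb   # RAID6 costs 2 drives of parity per box
raid1_usable = raid6_usable                      # mirroring the two boxes halves the pool
raw_total = drives_per_box * drive_tb * boxes

print(f"raw: {raw_total} TB, usable after RAID6 + RAID1: {raid1_usable} TB")
```

A lot of redundancy: less than half the raw 96TB is addressable, but any box can lose two drives — or die outright — without data loss.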

If I were to call threads, cores: 74 cores.

Right now, it is not being used to do much of anything publicly... yet. I have 12 instances spun up. The most important: one does firewalling and storage for all of our stuff, including cell phones, with Owncloud; two are my gf's learning playground for Arch Linux; and the 4th VIP to the party is my personal testing environment for GaaS with nVidia GRID prototype stuffs that I obtained pulling hamstrings over the phone. The 5th is a LAMP stack running 6 of my sites, plus IRCd and Openfire servers. The 6th is a Minecraft master node with inception KVMs for nodes (this pays my bills). The rest are boring virtual Windows 7 instances for those who want a simple and safe computing environment to do their youtube-ing and facebooking, banking, whatever.

Recently, I have been working on some new Openstack modules and plan on going public with them this November. That will be the day this thing actually does something more useful than what I could do with one server. What originally started as a learning project on 2 boxes quickly became an addiction and exploded into this.


----------



## stolid

Quote:


> Originally Posted by *lowfat*
> 
> Opteron 6128, Supermicro H8SGL-F, 32GB Kingston 1333 ECC,450W Silverstone PSU, IBM M1015-IT, 8 x 3TB. ZFS RAIDZ1, Upgraded from a Celeron G555, 8GB ITX machine.


An 8 drive array and only one drive is being used for parity?!







I'd have done RAIDZ3 (or at least 2) on that kind of array.


----------



## lowfat

Quote:


> Originally Posted by *stolid*
> 
> An 8 drive array and only one drive is being used for parity?!
> 
> 
> 
> 
> 
> 
> 
> I'd have done RAIDZ3 (or at least 2) on that kind of array.


I've done more drives than that before. I don't expect more than one drive to die at the same time. Nor do I use drives more than 3 years old.


----------



## Plan9

Quote:


> Originally Posted by *lowfat*
> 
> I've done more drives than that before. *I don't expect more than one drive to die at the same time.* Nor do I use drives more than 3 years old.


Has been known to happen, and on drives younger than 3 years too. One dodgy batch and your whole array is trashed.
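For a rough feel of the numbers, here's a sketch of the second-failure risk during a resilver, assuming independent failures — a dodgy batch, as noted, makes the real risk much worse. AFR and rebuild time are assumed values:

```python
# Odds of losing a second drive while a RAIDZ1 resilver is running.
afr = 0.03            # 3% annualised failure rate per drive (assumed)
rebuild_days = 1.0    # resilver window for an 8 x 3TB array (assumed)
survivors = 7         # drives remaining after the first failure

p_one = afr * rebuild_days / 365.25          # per-drive failure probability in the window
p_any = 1 - (1 - p_one) ** survivors         # at least one of the survivors fails

print(f"~{p_any * 100:.3f}% chance of a second failure during the rebuild")
```

Small per-rebuild, but it compounds over the array's life, and correlated failures (same batch, same heat event) can push it far higher — which is the argument for RAIDZ2/3 on wide arrays.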


----------



## Sean Webster

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *lowfat*
> 
> I've done more drives than that before. *I don't expect more than one drive to die at the same time.* Nor do I use drives more than 3 years old.
> 
> 
> 
> Has been known to happen. And on drives younger than 3 years too. 1 dodgy batch and your how array is trashed.

And then that is what backup is for.







lol


----------



## Plan9

Quote:


> Originally Posted by *Sean Webster*
> 
> And then that is what backup is for.
> 
> 
> 
> 
> 
> 
> 
> lol


Not really, as you could then argue what's the point of having any parity disks at all if you can just restore from backup.









The point here is resilience, to minimise the risk of downtime; and given the former poster would potentially have 21TB of data to back up, that would be an awfully long restore job
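A quick sketch of why a 21TB restore is painful over ordinary links (assumes the network is the bottleneck and ~90% of line rate is achievable):

```python
# Best-case restore time for a given data size and link speed.
def restore_hours(data_tb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Hours to move data_tb terabytes over a link_gbps link at the given efficiency."""
    bytes_total = data_tb * 1e12
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_sec / 3600

for gbps in (1, 10, 40):
    print(f"{gbps:>2} GbE: {restore_hours(21, gbps):.1f} h")
```

Over two days on plain gigabit, even before the backup target's disks become the limit — hence parity for uptime and backups for disasters.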


----------



## Sean Webster

Quote:


> Originally Posted by *Plan9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Sean Webster*
> 
> And then that is what backup is for.
> 
> 
> 
> 
> 
> 
> 
> lol
> 
> 
> 
> Not really as you could then argue about what point is there having any parity disks if you can just restore from back up.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The point here is resilience to minimize the risk of down time, and given the former poster would potentially have 21TB of data to back up, that would be an awfully long restore job

Yeah, good point. With 10GbE-40GbE networking to another RAID array or something it shouldn't take too long... if they have something like that, that is lol 

I think a better point against me would be that one could lose some unique hot data that hasn't been backed up yet... :/


----------



## Plan9

Quote:


> Originally Posted by *Sean Webster*
> 
> Yeah, good point. With 10GbE-40GbE networking to another RAID array or something shouldn't take too long...if they have something like that, that is lol
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I think a better point against me would be that one could lose some unique hot data that hasn't been backed up yet... :/


Sorry about the tone of my post, by the way. It was early morning and I'd not had my first coffee of the day (well, tea - I'm an Earl Grey drinker) so I didn't really put much consideration into my reply


----------



## cdoublejj

Quote:


> Originally Posted by *lowfat*
> 
> Finished off my upgraded FreeNAS box this morning.
> http://s18.photobucket.com/user/tulcakelume/media/Define/export-16-1.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/Define/export-15-1.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/Define/export-17-1.jpg.html
> 
> http://s18.photobucket.com/user/tulcakelume/media/Define/export-18-1.jpg.html


WOW! Did you sleeve the SAS/SATA breakout cables yourself?


----------



## lowfat

Quote:


> Originally Posted by *cdoublejj*
> 
> WOW! Did you sleeve the SAS/SATA break out cables your self?


Well, I sleeved from the RAID card up until the point where the cables split and go to the drives. The bottom set of cables had each cable sleeved, but poorly done.


----------



## EvilMonk

Damn, that's impressive!!! Good job there!!!


----------



## driftingforlife

Got me a netgear 48 port Gigabit switch from work today. The fans had all died so they said I could have it.

A bit of modding and problem sorted.


----------



## cdoublejj

Quote:


> Originally Posted by *driftingforlife*
> 
> Got me a netgear 48 port Gigabit switch from work today. The fans had all died so they said I could have it.
> 
> A bit of modding and problem sorted.


If you made them suck air out of the case you would still get pretty decent cooling and less dust inside.


----------



## link1393

Hi all, I want to build a server to replace my router and run an FTP/file server along with a media server like Plex or XBMC. At the moment I am trying ClearOS, which can do the DHCP server/FTP. Am I better off doing an ESXi server and putting pfSense in a VM, plus other VMs for the FTP/media server and gaming server?

I am still learning on the server side, so I am open to any recommendation 

Here is the build log if you want more info on what hardware I want to use.


----------



## christoph

nice job on that switch...


----------



## Wildcard36qs

So excited!


----------



## nismoskyline

So I got a good deal on a server off fleabay, and I'd like to use it for a few things: a file server, a game server, and a firewall. What would be the best OS/software to use to get these three tasks done? It has 8 cores total and 8GB of RAM. I don't mind setting up VMs, just cheaper is better.







thanks for any suggestions/help.


----------



## Wildcard36qs

ESXi as the host. Any flavor of Linux or Windows for file and game serving. For the firewall I use IPFire; I like it better than pfSense.


----------



## ikem

Woot! Free server. Just need some more caddies to fill out the SAS drives, and I will be set.


----------



## tiro_uspsss

Quote:


> Originally Posted by *ikem*
> 
> woot! free server. Just need some more caddies to fill out the sas drives, and I will be set.


is that dual s1366??


----------



## mbudden

Quote:


> Originally Posted by *Wildcard36qs*
> 
> So excited!


Specs?


----------



## cdoublejj

How do you guys deal with the noise? I usually install heat pipe/tower coolers; not sure that's possible with slim servers, at least with the lid on.


----------



## DaveLT

Quote:


> Originally Posted by *cdoublejj*
> 
> how do you guys deal with the noise? I usually install heat pipe/tower coolers, not sure that is possible with slim servers, at least with the lid on.


With 2U it's possible to use heat pipe coolers (side heat pipes, but still attached to a fanless base), but either way heat pipe coolers are no match for vapor chamber heatsinks, and for best results you still need a fan with decent airflow in front of the heatsinks, behind the HDD bays.


----------



## tompsonn

About to replace my server with an E5-2620, loading in 64GB of RAM to start with. Too many virtual machines...


----------



## cdoublejj

Quote:


> Originally Posted by *DaveLT*
> 
> With 2U it's possible to use heat pipe coolers (side heat pipe but still attached to base fanless though) but either way heat pipe coolers are no match for vapor chamber heatsinks and for better optimization still need a fan with decent airflow in front of the heatsinks behind the HDD bays.


Well, as far as sound goes, I'm sure anything is better than a 12-million-RPM fan. The first upgrade to my server was a pair of Arctic Freezer Pros. Way, way quieter and cooler.

http://www.overclock.net/t/1470611/nice-cooling-upgrade-tight-fit-but-waaaaay-quieter

Now I just need to find something for Socket J. That, or make my own brackets and grab whatever heat pipe or tower coolers I have in the heatsink box.

http://www.overclock.net/t/1506439/any-decent-socket-j-603-604-coolers


----------



## tompsonn

Quote:


> Originally Posted by *cdoublejj*
> 
> how do you guys deal with the noise? I usually install heat pipe/tower coolers, not sure that is possible with slim servers, at least with the lid on.


For home, I always custom-build tower servers when they need to live in the same room as me (or close to it), so I have complete control over the noise.

A network + server rack in the garage though (eventually), I couldn't give a hoot!


----------



## cdoublejj

Quote:


> Originally Posted by *tompsonn*
> 
> For home, I always custom build tower servers where it needs to live in a room the same as or close to me, so I have complete control over the noise.
> 
> A network + server rack in the garage though (eventually), I couldn't give a hoot!


I don't think I have anywhere I could put a server tower, and the nearest data center is an hour away. I'd try and figure something out as far as that goes, as I may get some more free server gear in the future. Although I suppose I could cut a few holes in the lid, have the tower coolers poke out, and maybe have a few 120mms blowing and/or sucking air to keep it flowing over the other components.


----------



## ikem

Quote:


> Originally Posted by *tiro_uspsss*
> 
> is that dual s1366??


yep dual x5550's


----------



## CynicalUnicorn

Quote:


> Originally Posted by *tompsonn*
> 
> A network + server rack in the garage though (eventually), I couldn't give a hoot!


In Australia? Is your garage climate controlled or buried underground? Last I checked, processors and drives don't like too much heat.


----------



## tompsonn

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> In Australia? Is your garage climate controlled or buried underground? Last I checked, processors and drives don't like too much heat.


I buy Australian-grade CPUs and drives.


----------



## Plan9

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> In Australia? Is your garage climate controlled or buried underground? Last I checked, processors and drives don't like too much heat.


If, like most garages, it doesn't have many windows, then I can't see why it would get that hot. Plus not all of Australia is like Bondi Beach.









That all said, I'm pretty sure I've burnt out 2 HDDs and an SSD on my file server just from the heatwave we've enjoyed in England this summer.








Quote:


> Originally Posted by *tompsonn*
> 
> I buy Australian-grade CPUs and drives.


----------



## t00sl0w

What SATA expansion cards are worth their salt?
I need a lot of SATA space and a card that won't dump on me.


----------



## jibesh

Quote:


> Originally Posted by *t00sl0w*
> 
> what SATA expansion cards are worth their salt?
> i need a lot of SATA space and a card that wont dump on me.


Define "a lot of SATA space". How many drives are we talking about?


----------



## t00sl0w

Quote:


> Originally Posted by *jibesh*
> 
> Define "a lot of SATA space". How many drives are we talking about?


I'd like 10 slots minimum on the expansion.


----------



## pelplouffe

Quote:


> Originally Posted by *Wildcard36qs*
> 
> 
> 
> So excited!


I just ordered 2 of them for my work. By the way, even though Lenovo says they don't support Server 2003, they do; the LAN driver is a bit tricky, but they work great.

PS: The RAID card (LSI 9240-8i) in one of them just died on me today.


----------



## Wildcard36qs

Quote:


> Originally Posted by *pelplouffe*
> 
> I just ordered 2 of them for my work, By the way even if Lenovo say they don't Server 2003, they do, Lan driver is a bit tricky but they work great,
> 
> Ps: Raid Card (LSI 9240-8i) in one of them just died on me today.


You think the RAID card gets too hot? I know my PERCs need air on them.


----------



## Master__Shake

Quote:


> Originally Posted by *Wildcard36qs*
> 
> You think the raid card gets too hot? I know my perc need air on them.


My 9260-4i, Intel SAS expander, and InfiniBand card all need extra cooling...

I had to zip-tie a fan to the rear PCI vents to get them to a reasonable temp


----------



## lowfat

Quote:


> Originally Posted by *Master__Shake*
> 
> this my 9260-4i intel sas expander and infiniband card all need extra cooling...
> 
> i had to zip tie a fan to the rear pci vents to get them to a reasonable temp


What infiniband cards are you using and what OS(s)?


----------



## Master__Shake

Quote:


> Originally Posted by *lowfat*
> 
> What infiniband cards are you using and what OS(s)?


Voltaire HCA 410Ex with Windows 7 and Home Server 2011


----------



## lowfat

Quote:


> Originally Posted by *Master__Shake*
> 
> voltaire hca 410ex with windows 7 and home server 2011


What drivers are you using? I tried using Infiniband EX3 w/ Windows 8 and it was terribly unreliable.


----------



## Master__Shake

these

https://www.openfabrics.org/downloads/Windows/previous_releases/v3.1/Win7-or-Svr_2008_R2-or-HPC_Edition/

Just remember one computer has to run the open subnet manager


----------



## pelplouffe

Quote:


> Originally Posted by *Wildcard36qs*
> 
> You think the raid card gets too hot? I know my perc need air on them.


I don't think it was a cooling issue. The first one I ordered about 2 months ago is still running fine, and the 2 servers are in different rooms with AC (it is a glue factory, so the air is filtered and they all have AC).

I tried opening the side of the case and it did not change anything.

The 9240-8i is also the Lenovo RAID 500, which came with the server from the factory; I would think Lenovo made sure the cooling was proper for the card.


----------



## Wildcard36qs

Quote:


> Originally Posted by *pelplouffe*
> 
> I dont think its was a colling issue, The first one i ordere about 2 month ago is still running fine, the 2 server are in diferent room with AC ( it is a glue factory, so the air is filtrated and they all have AC)
> 
> I tried opening the side of the case and it did not change anything.
> 
> the 9240-8i is also the Lenovo Raid 500, which came with the server from the factory, i would think that Lenovo made sur the cooling was proper for the card.


Oh I agree with you. Mine has the Lenovo Raid 500 as well. And I know Lenovo has the quietest server on the market. Just wondering if there might be a price to pay for the quiet.

On a side note, did you flash the Raid 500? I am going to be using mine as an ESXi box, and am wondering if I can use the onboard SATA for ESXi and then pass through the Raid 500 as an HBA for FreeNAS?


----------



## pelplouffe

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Oh I agree with you. Mine has the Lenovo Raid 500 as well. And I know Lenovo has the quietest server on the market. Just wondering if there might be a price to pay for the quiet.
> 
> On a side note, did you flash the Raid 500? I am going to be using mine as an ESXi box, and am wondering if I can use the onboard SATA for ESXi and then pass through the Raid 500 as an HBA for FreeNAS?


They are really quiet; the power supplies are the noisiest part in the server, and only at startup. Wish my server room were this quiet.

The cards come with the LSI firmware. I did update my defective one to see if it would help, but no change.


----------



## tiro_uspsss

Quote:


> Originally Posted by *t00sl0w*
> 
> i'd like 10 slots minimum on the expansion.


If you've been buying actual SATA-only cards, it's time to quit playing around with toys & buy some enterprise-grade stuff. Get an M1015 + expander, done.








Quote:


> Originally Posted by *tompsonn*
> 
> I buy Australian-grade CPUs and drives.


complete with Australian-grade prices!


----------



## offroadz

So many nice servers in here, makes a guy just want to build one.


----------



## koekwau5

Quote:


> Originally Posted by *pelplouffe*
> 
> I just ordered 2 of them for my work. By the way, even if Lenovo say they don't support Server 2003, they do; the LAN driver is a bit tricky but they work great.
> 
> PS: The RAID card (LSI 9240-8i) in one of them just died on me today.


Why still choose Windows Server 2003, when it will face the same retirement next year as Windows XP did?
Server 2008 R2 has been developed over and over and is IMHO better than 2003!
I liked Server 2003 a lot for its speed and stability. But now that Server 2008 R2 with SP1 is here, it's much better than 2003.
Quote:


> Originally Posted by *offroadz*
> 
> So many nice servers in here, makes a guy just want to build one.


@ work I get to play with the big HP ProLiant and blade stuff. It's awesome and I'd like them @ home for preparing installations. Until you see that mothahuge power bill.

So as home servers I've got running:

Server 1: HP Compaq DC7900 workstation with an Intel Core 2 Duo E8500 and 4GB of RAM. 1x 1TB for OS and 2x 2TB RAID1 for storage. OS: Windows Server 2008 R2 SP1. Backup: Windows Server Backup to an external HD. Tasks: home storage, SABnzbd+ server
Future tasks: VPN, FTP server, print server (fresh installation, not done yet, so that's why. Configuring just takes a couple of minutes







)

Server 2: RealPC Bionic mATX with a passively cooled Intel Atom @ 1.6GHz and 2GB of RAM. HDD: 500GB 2.5" 5400RPM. OS: Debian. Tasks: TeamSpeak3 servers for my gaming clan.

Works great, and no more speed is needed.
In future I'd like to add some more memory to the DC7900 so I can run Hyper-V.
This way I can transfer the Debian box to this rig.
Since DDR2 has risen in price a lot, I'll just wait for some dead workstations at the companies I work for and strip the RAM


----------



## cdoublejj

What do you all think of the LSI 8888ELP? I was also looking at the Dell PERC 6i; about 45 bucks for either. However, with the Dell PERC 6i I was able to source a battery, which can be had for free with purchase or at little cost. Can't say that about the 8888ELP, but if I had the opportunity to buy an 8888ELP for $45 shipped, would that be a good deal?


----------



## tiro_uspsss

Quote:


> Originally Posted by *cdoublejj*
> 
> What do you all think of the LSI 8888ELP? I was also looking at the Dell PERC 6i; about 45 bucks for either. However, with the Dell PERC 6i I was able to source a battery, which can be had for free with purchase or at little cost. Can't say that about the 8888ELP, but if I had the opportunity to buy an 8888ELP for $45 shipped, would that be a good deal?


I believe all the cards you mentioned have a 2TB HDD limit - none will recognise an HDD that is 3TB+.
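For what it's worth, that 2TB ceiling falls straight out of the arithmetic: RAID controllers of that era address disks with a 32-bit logical block address over 512-byte sectors (a general point about that generation of cards, not something specific to these two models). A quick sanity check:

```python
# Older RAID controllers address disks with a 32-bit logical block
# address (LBA) over 512-byte sectors, which caps the addressable size:
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32      # largest sector count a 32-bit LBA can express

limit = MAX_SECTORS * SECTOR_BYTES
print(limit)               # 2199023255552 bytes
print(limit / 2 ** 40)     # 2.0 (exactly 2 TiB)
```

So a 3TB drive simply has sectors the firmware cannot address, and the card either truncates or rejects it.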


----------



## pelplouffe

Quote:


> Originally Posted by *koekwau5*
> 
> Why still choose Windows Server 2003, when it will face the same retirement next year as Windows XP did?
> Server 2008 R2 has been developed over and over and is IMHO better than 2003!
> I liked Server 2003 a lot for its speed and stability. But now that Server 2008 R2 with SP1 is here, it's much better than 2003.


Simply because we use some old versions of Citect and Profibus which do not support anything newer. Trust me, I wish I could use 2008 R2.


----------



## levontraut

I have my old Games rig converted to my server.
My server specs are as follows:

Mobo: AMD 990FXA UD5
CPU: 1055T
Ram: 4 X 2 Gig DDR3 1333 (8 GiG total)
OS: Server 2012 ( Evaluation)
NIC: Intel DualPort PRO 1000T
PSU: 400WATT Corsair CX
Controller/RAID Card: LSI 4-port controller card
UPS: EATON 5110
KVM: StarView SV231UADVI KVM
Chassis: Fractal Design R4 XL
GPU: Evga 550Ti
HDD's
2 X 3 Terabyte Seagate Barracuda
10 X 1 Terabyte WD Black
OS Drive: 120 Gig SSD OCZ

The OS drive is old, but the performance is better than a mechanical drive.

I know it is overkill for my use, but I love it


----------



## cdoublejj

Quote:


> Originally Posted by *tiro_uspsss*
> 
> I believe all the cards you mentioned have a 2TB HDD limit - none will recognise a HDD that is 3TB +


That's fine. I was wondering if one was faster than the other or something along those lines. I doubt I could ever afford more than 2TB drives. Can either support more than 8 HDDs internally?


----------



## tiro_uspsss

Quote:


> Originally Posted by *cdoublejj*
> 
> That's fine. I was wondering if one was faster than the other or something along those lines. I doubt I could ever afford more than 2TB drives. Can either support more than 8 HDDs internally?


You mean through an expander? No idea, sorry

I don't know which is faster; IIRC the PERC 6i is good for ~600MB/s


----------



## koekwau5

Quote:


> Originally Posted by *pelplouffe*
> 
> Simply because we use some old versions of Citect and Profibus which do not support anything newer. Trust me, I wish I could use 2008 R2.


Aii.. those kinds of situations suck. If there aren't new versions available, or the company doesn't want to pay for it, then there is no other solution =(


----------



## Plan9

Quote:


> Originally Posted by *koekwau5*
> 
> Aii.. those kinds of situations suck. If there aren't new versions available, or the company doesn't want to pay for it, then there is no other solution =(


There are plenty of other solutions that aren't Windows







(WINE is pretty good for older software). He could also run a hypervisor of some kind and still have Server 2008 (or Linux, with a Server 2008 VM, if he wants a lower-footprint hypervisor) and a 2003 VM just for the older software.


----------



## pelplouffe

Well, we have to replace the card on the automation side to switch from Profibus to Profinet; currently they need to be installed directly since the software needs to talk to the PCI card.

Do you know how hard it is to find a server with dual PSUs and a PCI slot that supports Windows Server 2003?

Also, it is in next year's budget to switch all of it to Profinet and Server 2012


----------



## koekwau5

Quote:


> Originally Posted by *Plan9*
> 
> There are plenty of other solutions that aren't Windows
> 
> 
> 
> 
> 
> 
> 
> (WINE is pretty good for older software). He could also run a hypervisor of some kind and still have Server 2008 (or Linux, with a Server 2008 VM, if he wants a lower-footprint hypervisor) and a 2003 VM just for the older software.


If possible then I would remove the 2003 servers from the internet. But most software needs a gateway for communication to the outside or remote sites.
When Server 2003 maintenance stops next year I don't want to have any of those connected to the internet. New security leaks won't get fixed and you have a potential risk of getting hacked.

Or, like you suggest, even better: Linux


----------



## pelplouffe

Quote:


> Originally Posted by *koekwau5*
> 
> If possible then I would remove the 2003 servers from the internet. But most software needs a gateway for communication to the outside or remote sites.
> When Server 2003 maintenance stops next year I don't want to have any of those connected to the internet. New security leaks won't get fixed and you have a potential risk of getting hacked.
> 
> Or like you suggest and even better; Linux


That's why I went with Server 2003 instead of XP; it gives me about a year to do something about it. They were Windows 2000 until I changed them, so it's already a nice upgrade.

Also, we still run about 15 PCs with XP and 2 in DOS for specific hardware, but they don't have access to the net, only their own VLAN


----------



## cdoublejj

Quote:


> Originally Posted by *koekwau5*
> 
> Aii .. that kind of situations suck. If there aint new versions available or company doens't want to pay for it then there is no other solution =(


This is the most common situation i have seen.


----------



## Plan9

Quote:


> Originally Posted by *pelplouffe*
> 
> That's why I went with Server 2003 instead of XP; it gives me about a year to do something about it. They were Windows 2000 until I changed them, so it's already a nice upgrade.
> 
> Also, we still run about 15 PCs with XP and 2 in DOS for specific hardware, but they don't have access to the net, only their own VLAN


Wow, what are you using DOS for, and how have you got that networked? I'm guessing it's not as simple as twisted-pair Ethernet RJ45, since that stuff is pretty new compared to DOS.

Weirdly, DOS is probably more secure than XP.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Plan9*
> 
> Weirdly, DOS is probably more secure than XP.


In other news, water is wet and the sky is blue.







Windows probably isn't the best long-term choice for anything connected to the Internet. It's ubiquitous and can be virus'd easily.


----------



## tompsonn

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> In other news, water is wet and the sky is blue.
> 
> 
> 
> 
> 
> 
> 
> Windows probably isn't the best long-term choice for anything connected to the Internet. It's ubiquitous and can be virus'd easily.


Under proper administration, it is fine. And when administered as a server, I would argue it is neither more nor less secure than anything else.


----------



## t00sl0w

Quote:


> Originally Posted by *tiro_uspsss*
> 
> if you've been buying actual SATA only cards, its time to quit playing around with toys & buy some enterprise grade stuff.. get a M1015 + expander, done


I haven't actually bought anything yet, as this is the first time I am gobbling up terabytes of space and running out of SATA ports for more HDDs.
Even if I build a separate server for my movie collection, I still run into SATA limitations outside of going with server boards.

So if I went the M1015 route, which is an IBM product if I understand correctly, plus an expander... is this easy to set up and maintain, or do I need to do something crazy?
What is the expander exactly: a separate card, an attachment, a simple wiring harness?


----------



## pelplouffe

Quote:


> Originally Posted by *Plan9*
> 
> Wow, what are you using DOS for, and how have you got that networked? I'm guessing it's not as simple as twisted-pair Ethernet RJ45, since that stuff is pretty new compared to DOS.
> 
> Weirdly, DOS is probably more secure than XP.


It's just simple software that talks to some sensors through the LPT port, then does some complicated calculations and gives us a humidity ratio of some sort.

It also prints us a report on an old dot matrix printer every morning.

It is not connected to the network; even if it were, I couldn't do much with it. I also have an exact copy of it in case it crashes, same hardware and software: an old Pentium 1 HP desktop....


----------



## cones

Quote:


> Originally Posted by *t00sl0w*
> 
> I haven't actually bought anything yet, as this is the first time I am gobbling up terabytes of space and running out of SATA ports for more HDDs.
> Even if I build a separate server for my movie collection, I still run into SATA limitations outside of going with server boards.
> 
> So if I went the M1015 route, which is an IBM product if I understand correctly, plus an expander... is this easy to set up and maintain, or do I need to do something crazy?
> What is the expander exactly: a separate card, an attachment, a simple wiring harness?


I don't have any personal experience with them. You can use them as a RAID card, or you can flash them to IT mode, which lets you use the HDDs in non-RAID. They use SAS connectors, so you would buy a SAS-to-SATA breakout cable; I think it is 4 or 8 SATA plugs. They are an add-on card, just like a GPU.


----------



## tiro_uspsss

Quote:


> Originally Posted by *t00sl0w*
> 
> I haven't actually bought anything yet, as this is the first time I am gobbling up terabytes of space and running out of SATA ports for more HDDs.
> Even if I build a separate server for my movie collection, I still run into SATA limitations outside of going with server boards.
> 
> So if I went the M1015 route, which is an IBM product if I understand correctly, plus an expander... is this easy to set up and maintain, or do I need to do something crazy?
> What is the expander exactly: a separate card, an attachment, a simple wiring harness?


I'm strictly talking from a windows perspective:

You don't need to do anything crazy, no flashing etc. Just install drivers, done.
As for an expander, some need a PCIe slot, some don't. It's just a card that gets plugged into the M1015 (in this case) & allows for more HDDs to be plugged in. The M1015 does 8 HDDs by itself, but if you buy an expander you can then plug in 30+









The M1015 isn't expensive. Depending on what expander you buy, it can be slightly pricey (~$300; I don't know what prices the USA has, sorry)
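For anyone who does want the M1015 as a plain HBA rather than a RAID card, the usual community procedure is to boot to DOS (or EFI) and flash it with LSI's 9211-8i IT-mode firmware. A rough sketch based on the widely circulated homelab guides - the exact file names depend on the firmware package you download, the SAS address placeholder comes off the sticker on your card, and a failed flash can brick the card, so check a current guide before trying it:

```
REM Wipe the IBM SBR and the old flash (from a DOS boot disk):
megarec.exe -writesbr 0 sbrempty.bin
megarec.exe -cleanflash 0

REM Reboot, then write the LSI IT-mode firmware (and optionally the boot ROM):
sas2flsh.exe -o -f 2118it.bin -b mptsas2.rom

REM Restore the SAS address printed on the card's sticker:
sas2flsh.exe -o -sasadd 500605bxxxxxxxxx
```

After that the card presents the disks to the OS individually, which is what ZFS/FreeNAS setups want.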


----------



## cdoublejj

Quote:


> Originally Posted by *Plan9*
> 
> Wow, what are you using DOS for and how have you got that networked? I'm guessing it's not as simple as a twisted pair Ethernet RJ45.since that stuff is pretty new compared to DOS.
> 
> Weirdly, DOS is probably more secure than XP.


The same could _probably_ be said for 98. As the OS demographics change, so does the malware.

I use windows server as well but, mainly because I do game servers and stuff.


----------



## Plan9

Quote:


> Originally Posted by *cdoublejj*
> 
> The same could _probably_ be said for 98. As the OS demographics change, so does the malware.


Nar - Windows 98 is as vulnerable as they come. It's basically a lesson in bad OS design.


----------



## tompsonn

Quote:


> Originally Posted by *Plan9*
> 
> Nar - Windows 98 is as vulnerable as they come. It's basically a lesson in bad OS design.


Hey you have to make things work with 4MB of RAM


----------



## cdoublejj

Patched 98 SE... unofficially patched, that is, can support over 1GB of RAM, and last I looked they were working on dual-core support, which they have working at a basic level.

Any A/V suggestion for server 2008 R2?


----------



## Plan9

Quote:


> Originally Posted by *tompsonn*
> 
> Hey you have to make things work with 4MB of RAM


Except it didn't. The minimum requirement for 98 was 16MB of RAM







(and it ran like crap on even that much)


----------



## tompsonn

Quote:


> Originally Posted by *Plan9*
> 
> Except it didn't. The minimum requirement for 98 was 16MB of RAM
> 
> 
> 
> 
> 
> 
> 
> (and it ran like crap on even that much)


Ah yes that's right. 24MB recommended though!


----------



## Plan9

Quote:


> Originally Posted by *tompsonn*
> 
> Ah yes that's right. 24MB recommended though!


My recommendation was to never install the blasted thing to begin with


----------



## tompsonn

Quote:


> Originally Posted by *Plan9*
> 
> My recommendation was to never install the blasted thing to begin with












Yes indeedy.


----------



## Zen00

I'm thinking of building a server/NAS for my home network. This server would do home data storage, host my website (www.zenproductions.org), and host games (such as Minecraft and some FPSs) for my family and friends to play on. This will be running with Ubuntu Server and Samba. For this server I'm thinking of using this hardware:


CORSAIR Vengeance 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMZ8GX3M2A1600C9R
Western Digital Red NAS Hard Drive WD20EFRX 2TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive x3
BitFenix Prodigy M Midnight Black Steel/ Plastic Micro ATX Mini Tower Computer Case
AMD A10-5800K Trinity Quad-Core 3.8GHz (4.2GHz Turbo) Socket FM2 100W Desktop APU (CPU + GPU) with DirectX 11 Graphic AMD Radeon HD 7660D AD580KWOHJBOX
GIGABYTE GA-F2A88XM-HD3 FM2+ / FM2 AMD A88X (Bolton D4) HDMI SATA 6Gb/s USB 3.0 Micro ATX AMD Motherboard
Pioneer Black 8X BD-ROM 16X DVD-ROM 40X CD-ROM SATA Internal Internal Blu-ray Combo DVD & CD Drive Model BDC-207DBK - OEM
AeroCool DS 120mm Black 120mm Patented Dual layered blades with noise and shock reduction frame x4
CORSAIR CXM series CX500M 500W ATX12V v2.3 SLI CrossFire 80 PLUS BRONZE Certified Modular Active PFC Power Supply
Cooler Master Seidon 120V - Compact All-In-One CPU Liquid Water Cooling System with 120mm Radiator and Fan

What do you think: does it need a better CPU or more RAM, or maybe can I go cheaper and maintain the same performance? Current cost: $900
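For the Ubuntu Server + Samba piece of that plan, the share itself is only a handful of lines in /etc/samba/smb.conf. A minimal sketch - the share name, mount path, and user here are made up for illustration:

```ini
# /etc/samba/smb.conf -- minimal home file share (hypothetical names/paths)
[global]
   workgroup = WORKGROUP
   server string = Home NAS
   security = user

[storage]
   path = /srv/storage       ; wherever the WD Red array is mounted
   valid users = zen         ; Samba account added with `smbpasswd -a zen`
   read only = no
   browseable = yes
```

Restart smbd after editing and the share shows up as \\servername\storage on the Windows side.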


----------



## CynicalUnicorn

That looks acceptable, but don't forget that an i3 is an option. Much better single-threaded performance at the expense of a bit of multithreaded performance. I believe Minecraft only uses one thread, so that should help.


----------



## CSCoder4ever

Looks good! Though as CynicalUnixcorn said, an i3 is an option.

Any particular reason for the overkill cooling though? I think a Hyper 212 would do the job just fine, but even that's overkill.


----------



## CynicalUnicorn

I missed the cooler. Yeah, quad-core Piledriver doesn't need that much. My setup hits 65C under load or so, and that's at 4.6GHz and with two extra cores. Dual-core Haswell draws a mere 40-50W under a maximum load and runs fine with a stock cooler.


----------



## tompsonn

So guess who bought the wrong socket 2011 cooler. Me!

Completely forgot about the square and narrow layouts of socket 2011.


----------



## Zen00

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Looks good! Though as CynicalUnixcorn said, an i3 is an option.
> 
> Any particular reason for the overkill cooling though? I think a Hyper 212 would do the job just fine, but even that's overkill.


The water cooler is the same price as that air cooler you quoted. Mainly I've never liked the look of a big metal air cooling head on my CPUs anyways.

As for the CPU, it's because I have a 5800K on hand right now and was building around it to find something to do with it.

How many years do you think this would be relevant by the way?


----------



## CynicalUnicorn

Hyper 212+ is usually several dollars cheaper and runs one whole kelvin warmer. I understand the concern regarding big air coolers, though it shouldn't be an issue in practice. I myself would be more concerned about leaks from a liquid AIO, but again, it's not an issue in practice.

It should last quite a while. You've got the option to upgrade to Kaveri or Carrizo as well should Trinity run into issues later on. In addition, that 5800k is unlocked, and Piledriver is incredibly easy to overclock. This assumes you get A88X (upgradeable) or A85X (both unlocked).


----------



## CSCoder4ever

Should be relevant for quite a while. I see my i3 being relevant for at least another 5 years, but I'll more than likely replace it because why not lol.
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Hyper 212+ is usually several dollars cheaper and runs one whole kelvin warmer. I understand the concern regarding big air coolers, though it shouldn't be an issue in practice. *I myself would be more concerned about leaks from a liquid AIO*, but again, it's not an issue in practice.
> 
> It should last quite a while. You've got the option to upgrade to Kaveri or Carrizo as well should Trinity run into issues later on. In addition, that 5800k is unlocked, and Piledriver is incredibly easy to overclock. This assumes you get A88X (upgradeable) or A85X (both unlocked).


Agreed!


----------



## Zen00

I've been running an H50 in my current rig for 5 years now with no issues; I would hope that Cooler Master would be able to stand up just as well. The board is an A88X; would it be better to use an A85X instead?

The Cooler Master also better fits the look I'm going for in the case.

Oddly enough, whenever you try to add the DS fans to your shopping cart, it clears it. :/


----------



## CSCoder4ever

Okay then.

Also no, I'd get the A88X: FM2+ vs FM2,
so it would be a little more future-proof.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Zen00*
> 
> I've been running a H50 in my current rig for 5 years now with no issues, I would hope that Cooler Master would be able to stand up just as well. The board is a A88X, would it be better to use a A85X instead?


I'd certainly hope so. I'm not sure, are AIOs like PSUs? As in, do OEMs make them and sell them to NZXT or Cooler Master or whoever to put their stickers and warranties on them?

Nah, A88X is better. Either A85X or A88X will allow overclocking, but A85X is FM2. That means it supports only Trinity and Richland (5000 and 6000 series). A88X is FM2+, supporting Kaveri, Carrizo and backwards-compatibility with the FM2 chips (5000, 6000, 7000, and presumably 8000 series). With Kaveri or Carrizo, it also supports PCIe 3.0 in its x16 slot, though that probably will be of no concern.


----------



## tompsonn

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'd certainly hope so. I'm not sure, are AIOs like PSUs? As in, do OEMs make them and sell them to NZXT or Cooler Master or whoever to put their stickers and warranties on them?


Yep. Asetek is one such OEM.


----------



## Zen00

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'd certainly hope so. I'm not sure, are AIOs like PSUs? As in, do OEMs make them and sell them to NZXT or Cooler Master or whoever to put their stickers and warranties on them?
> 
> Nah, A88X is better. Either A85X or A88X will allow overclocking, but A85X is FM2. That means it supports only Trinity and Richland (5000 and 6000 series). A88X is FM2+, supporting Kaveri, Carrizo and backwards-compatibility with the FM2 chips (5000, 6000, 7000, and presumably 8000 series). With Kaveri or Carrizo, it also supports PCIe 3.0 in its x16 slot, though that probably will be of no concern.


Yeah, any suggestions for the PCIe slots, or do I have everything I need for a good server?


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Zen00*
> 
> Yeah, any suggestions for the PCIe slots, or do I have everything I need for a good server?


You ought to be fine with integrated graphics, and the PCIe revision won't make a difference with a discrete GPU. There's a SATA or SAS card, but A88X supports eight 6Gb/s ports natively. WiFi, perhaps, but a server should probably be wired via Ethernet. USB is an option if you need it, but you've got 4x 3.0 and 8x 2.0 available.

Good question. I've got nothing.


----------



## Zen00

Yeah, well then it looks like it's all set. Now to just sit around and wait until I move to see if I can get a server line in my next apartment.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Hyper 212+ is usually several dollars cheaper and runs one whole kelvin warmer. I understand the concern regarding big air coolers, though it shouldn't be an issue in practice. I myself would be more concerned about leaks from a liquid AIO, but again, it's not an issue in practice.
> 
> It should last quite a while. You've got the option to upgrade to Kaveri or Carrizo as well should Trinity run into issues later on. In addition, that 5800k is unlocked, and Piledriver is incredibly easy to overclock. This assumes you get A88X (upgradeable) or A85X (both unlocked).


Unicorn, the 212 is really 10C behind entry-level AIOs and a lot noisier.
Try the Seidon 120V for a change









Also, as a low-power chip the A10-7800 possibly cannot be beaten, as it still maintains mighty clocks with its TDP set to 45W.
That is what Kaveri was supposed to be all along







not to mention GF's crap yields


----------



## CynicalUnicorn

I meant the 212+ runs negligibly warmer than the 212 EVO, not the AIO. Yeah, liquid coolers are generally better than all but the biggest air coolers.

Since he already has the 5800k, he should wait for Carrizo before considering an upgrade. Based on what I have seen, it looks like efficient Kaveri, but it might pull an IPC boost out of nowhere or fix some of the issues carried over from Bulldozer or something.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> 
> 
> 
> 
> 
> 
> 
> I meant the 212+ runs negligibly warmer than the 212 EVO, not the AIO. Yeah, liquid coolers are generally better than all but the biggest air coolers.
> 
> Since he already has the 5800k, he should wait for Carrizo before considering an upgrade. Based on what I have seen, it looks like efficient Kaveri, but it might pull an IPC boost out of nowhere or fix some of the issues carried over from Bulldozer or something.










True. And the 212 still holds the record for being the noisiest 120mm heatsink around I reckon.

Possibly yeah but a 7800 really cannot be overlooked.


----------



## CynicalUnicorn

I blame the stock fan, not the radiator itself.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I blame the stock fan, not the radiator itself.


It's firstly too thin to be much better with a better fan and secondly the fin array design itself is rather dated.

But I always say, despite ... (you know what), that every other fan that exists apart from the JetFlo (for now) is complete garbage. I'm waiting on the new Seidon 240M to evaluate the new range of fan performance.


----------



## ikem

Finally got a shelf built for the server. HP switch should be here today. Getting the patch over tonight too.

I can hang off the shelf


----------



## Wildcard36qs

Looks good man. I was going to do that for my C1100 in my laundry room as well, but it is not temperature controlled so it would probably get too hot in the summer and too cold in the winter. Oh wells. Guess sound is not a big deal where it is at?


----------



## ikem

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Looks good man. I was going to do that for my C1100 in my laundry room as well, but it is not temperature controlled so it would probably get too hot in the summer and too cold in the winter. Oh wells. Guess sound is not a big deal where it is at?


It runs really quietly, actually. But it is a utility room/laundry/bathroom, so it is fine.


----------



## Wildcard36qs

Quote:


> Originally Posted by *ikem*
> 
> it runs really quiet really. but it is a utility room/laundry/bathroom so it is fine.


Which model is that Dell? I know it is an 11th gen judging by the specs... but it being a 2U helps compared to my 1U beast. Which is why I am selling my C1100.

I just bought 2 servers: Lenovo TS440 and a TS140. The TS440 is a Xeon 1225v3 w/ 32GB RAM and LSI 9240-8i and 4x Hot-swap bays. That will be my main ESXi host. The TS140 I just ordered last night and it is just the i3 model, but I will put 8GB RAM in it and use it as my FreeNAS or some other storage serving function. The TS440 has been excellent and is really quiet and easy to manage.


----------



## ikem

Quote:


> Originally Posted by *Wildcard36qs*
> 
> Which model is that Dell? I know it is an 11th gen judging by the specs...but it being a 2u helps compared to my 1u beast. Which is why I am selling my C1100.
> 
> I just bought 2 servers: Lenovo TS440 and a TS140. The TS440 is a Xeon 1225v3 w/ 32GB RAM and LSI 9240-8i and 4x Hot-swap bays. That will be my main ESXi host. The TS140 I just ordered last night and it is just the i3 model, but I will put 8GB RAM in it and use it as my FreeNAS or some other storage serving function. The TS440 has been excellent and is really quiet and easy to manage.


R710. Isn't much, but it was free. It has 8 bays: 3 are filled with 146GB SAS drives and 1 is a 500GB WD Blue. Need more caddies.


----------



## mbudden

Quote:


> Originally Posted by *ikem*
> 
> I can hang off the shelf


I would prefer it if you don't.









In all seriousness. Nice.


----------



## ikem

Quote:


> Originally Posted by *mbudden*
> 
> I would prefer it if you don't.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In all seriousness. Nice.


haha yea. Got the plywood in and the patch ran over to it during lunch.


----------



## ozlay

8x Opteron 875 dual-core CPUs at 2.2GHz, 128GB of DDR-400 ECC RAM (32x 4GB sticks)


----------



## ikem

Quote:


> Originally Posted by *ozlay*
> 
> 8x Opteron 875 dual-core CPUs at 2.2GHz, 128GB of DDR-400 ECC RAM (32x 4GB sticks)


dat ram


----------



## Wildcard36qs

ROFL that thing is ancient. Must suck some mean power.


----------



## DaveLT

Ancient doesn't mean it's slow or that it pulls a lot of power


----------



## Wildcard36qs

Hey, I didn't say slow. But you've got 8 95W processors and 32 sticks of RAM that run at what, 2.5V? It is amazing that we've got processors today that are faster than those 8 combined and use less power than 2 of them. Technology advances fast.


----------



## CynicalUnicorn

And the same or better can be accomplished with a similarly clocked 8-core Ivy-EP or Sandy-EP Xeon with 8x16GB DDR3, all in a standard single-socket EATX board. I do like seeing old tech still being used, but WOW does it advance quickly. Ikem, how old is that system? 10 years?


----------



## ikem

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> And the same or better can be accomplished with a similarly clocked 8-core Ivy-EP or Sandy-EP Xeon with 8x16GB DDR3, all in a standard single-socket EATX board. I do like seeing old tech still being used, but WOW does it advance quickly. Ikem, how old is that system? 10 years?


My Dell is 3 years old. I'm guessing that octo-Opteron is around that age.


----------



## ikem

Up and running! Got the switch from work, again. A 2524; only 2 gigabit ports, but that is all I really need.


----------



## cones

Quote:


> Originally Posted by *ikem*
> 
> up and running! got the switch from work, again. 2524, only 2 1g ports, but that is all I really need.
> 
> 
> 
> Spoiler: Warning: Spoiler!


Wondering how high up that is?


----------



## ikem

Quote:


> Originally Posted by *cones*
> 
> Wondering how high up that is?


bottom is like 6ft from the floor.


----------



## christoph

Quote:


> Originally Posted by *ikem*
> 
> up and running! got the switch from work, again. 2524, only 2 1g ports, but that is all I really need.


Nice thinking. Hmm, the trunk port - where's it going to?


----------



## cones

Quote:


> Originally Posted by *ikem*
> 
> bottom is like 6ft from the floor.


Not as high as I thought it would be.


----------



## tompsonn

Finally got some new hardware in my Hyper-V box:

SuperMicro X9SRL-F-B
Xeon E5-2620
64GB 1333MHz

I am a huge fan of SuperMicro's IPMI.


----------



## stumped

Quote:


> Originally Posted by *tompsonn*
> 
> I am a huge fan of SuperMicro's IPMI.


Heh, you are at odds with those at my work then...

Also, SuperMicro also dun goofed


----------



## tompsonn

Quote:


> Originally Posted by *stumped*
> 
> Heh, you are at odds with those at my work then...
> 
> Also, SuperMicro also dun goofed


Yeah, I saw all that. It's at home so I don't care









What's UPnP even doing on this sort of thing?

Good thing I don't have a router capable of letting things poke holes in its firewall.


----------



## link1393

Hi, I have a little question about SATA/RAID controllers. Which brands are good for those controllers? I know LSI is a very good one, but they are not cheap, and for now my budget is a little bit limited.

Thanks

- Link1393


----------



## Wildcard36qs

Quote:


> Originally Posted by *stumped*
> 
> Heh, you are at odds with those at my work then...
> 
> Also, SuperMicro also dun goofed


With these Lenovo servers, they use Intel AMT 9.0 and I actually quite like it. It is no iDRAC, but it does the job really well.

Quote:


> Originally Posted by *link1393*
> 
> Hi, I have a little question about SATA/RAID controller. Which brand are good for those controller? I know LSI is a very good one but they are not cheap and for now my budget is a little bit limited.
> 
> Thanks
> 
> - Link1393


What do you plan on doing with it? A lot of people like the LSI cards because they can be flashed to IT mode, which turns them into plain HBAs so you can use ZFS or some other software RAID. Lots of them can be had for under $100. What's your budget?
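For anyone curious about the cross-flash: the usual procedure for turning an IBM M1015 into an LSI 9211-8i in IT mode looks roughly like this. This is a sketch from memory, run from a FreeDOS boot stick; the firmware filenames come from the LSI 9211-8i firmware package and your card's SAS address is printed on its sticker. Check a current flashing guide before trying it, since getting it wrong can brick the card.

```shell
# Sketch of the common M1015 -> 9211-8i IT-mode cross-flash (FreeDOS).
# Filenames (sbrempty.bin, 2118it.bin, mptsas2.rom) are from the LSI
# firmware package and may differ between releases.

megarec -writesbr 0 sbrempty.bin    # blank the SBR on controller 0
megarec -cleanflash 0               # wipe the existing IBM firmware
# reboot back into FreeDOS, then:
sas2flsh -o -f 2118it.bin -b mptsas2.rom    # flash IT firmware + boot ROM
sas2flsh -o -sasadd 500605bxxxxxxxxx        # restore the card's SAS address
```

After that the card shows up as a plain SAS2008 HBA and passes disks straight through, which is what ZFS and Unraid want.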


----------



## link1393

Oh really! The cheapest I saw on Newegg and NCIX was $170.
For the moment I will only have one HDD, but I plan to do a RAID 1.
It's for an ESXi server with Untangle + Windows as a gaming server and a future file server; my SATA controller is not detected by ESXi 5.5. My budget is $100-120.

-Link1393


----------



## Wildcard36qs

Quote:


> Originally Posted by *link1393*
> 
> Ho really! The cheaper I see on newegg and NCIX was at 170$
> For the moment Iwill only have one HDD but, I planned to do a RAID 1.
> It's for a ESXi server w/Untangle+Windows as gaming server and a future files server, my sata controller is not detect by ESXi 5.5. My budget is 100-120$
> 
> -Link1393


IBM M1015s jump on eBay all day for under $100. I have the same card.


----------



## link1393

Wow THE deal







this card is $320 on NCIX with no cable!

Do you have a brand recommendation for the cable, or will any mini-SAS cable do the job?


----------



## Wildcard36qs

Quote:


> Originally Posted by *link1393*
> 
> Wow THE deal
> 
> 
> 
> 
> 
> 
> 
> this card is 320$ on NCIX w/no cable !
> 
> Do you have a brand for the cable or I just buy a mini-SAS cable and this will do the job ?


Any mini-SAS will do.


----------



## link1393

Nice ! thanks 

-Link1393


----------



## TheNegotiator

Made a few updates to my rack..

Front (From top to bottom):
*HP ProLiant DL380 G6*


Spoiler: Specs



*OS:* Windows Server 2012 R2 Enterprise
*CPU:* Intel Xeon E5520 2.26GHz QC
*Memory:* 12GB DDR3
*OS HDD(s):* 2x HP 72GB 15k SAS
*Storage HDD(s):* 3x Western Digital 2.5" 2TB
*Use:* Media Server/VM Replication


*Dell PowerEdge 15FP Rack Console

Dell PowerEdge 1950 III*


Spoiler: Specs



*OS:* N/A
*CPU:* 2x Intel Xeon E5450 3.00GHz QC
*Memory:* 8GB DDR2
*OS HDD(s):* 2x Dell 73GB 15k SAS
*Storage HDD(s):* N/A
*Use:* Not used, listed on eBay


*Dell PowerEdge R710*


Spoiler: Specs



*OS:* Windows Server 2012 R2 Enterprise
*CPU:* 2x Intel Xeon L5520 2.26GHz QC
*Memory:* 48GB DDR3
*OS HDD(s):* OCZ RevoDrive
*Storage HDD(s):* 2x Western Digital 750GB, 4x Western Digital 2TB
*Use:* VM host, File server


*Dell PowerEdge 2900*


Spoiler: Specs



*OS:* Windows Server 2012 R2 Standard
*CPU:* 1x Intel Xeon 5160 2.60GHz DC
*Memory:* 4GB DDR2
*OS HDD(s):* 2x Dell 146GB 15k SAS
*Storage HDD(s):* 5x Western Digital 1.5TB
*Use:* Backup server


*Dell PowerVault MD1000

2x APC Smart-UPS 1500VA (SMT1500RM2U) Battery Backups*



Rear (Top to bottom):
*HP ProCurve 3400cl-24 Gigabit Switch
Dell PowerEdge 2160AS KVM*
The rest is the same as the front.


----------



## nismoskyline

Quote:


> Originally Posted by *TheNegotiator*
> 
> Made a few updates to my rack..
> 
> Front (From top to bottom):
> *HP ProLiant DL380 G6*
> 
> 
> Spoiler: Specs
> 
> 
> 
> *OS:* Windows Server 2012 R2 Enterprise
> *CPU:* Intel Xeon E5520 2.26GHz QC
> *Memory:* 12GB DDR3
> *OS HDD(s):* 2x HP 72GB 15k SAS
> *Storage HDD(s):* 3x Western Digital 2.5" 2TB
> *Use:* Media Server/VM Replication
> 
> 
> *Dell PowerEdge 15FP Rack Console
> 
> Dell PowerEdge 1950 III*
> 
> 
> Spoiler: Specs
> 
> 
> 
> *OS:* N/A
> *CPU:* 2x Intel Xeon E5450 3.00GHz QC
> *Memory:* 8GB DDR2
> *OS HDD(s):* 2x Dell 73GB 15k SAS
> *Storage HDD(s):* N/A
> *Use:* Not used, listed on eBay
> 
> 
> *Dell PowerEdge R710*
> 
> 
> Spoiler: Specs
> 
> 
> 
> *OS:* Windows Server 2012 R2 Enterprise
> *CPU:* 2x Intel Xeon L5520 2.26GHz QC
> *Memory:* 48GB DDR3
> *OS HDD(s):* OCZ RevoDrive
> *Storage HDD(s):* 2x Western Digital 750GB, 4x Western Digital 2TB
> *Use:* VM host, File server
> 
> 
> *Dell PowerEdge 2900*
> 
> 
> Spoiler: Specs
> 
> 
> 
> *OS:* Windows Server 2012 R2 Standard
> *CPU:* 1x Intel Xeon 5160 2.60GHz DC
> *Memory:* 4GB DDR2
> *OS HDD(s):* 2x Dell 146GB 15k SAS
> *Storage HDD(s):* 5x Western Digital 1.5TB
> *Use:* Backup server
> 
> 
> *Dell PowerVault MD1000
> 
> 2x APC Smart-UPS 1500VA (SMT1500RM2U) Battery Backups*
> 
> Rear (Top to bottom):
> *HP ProCurve 3400cl-24 Gigabit Switch
> Dell PowerEdge 2160AS KVM*
> The rest is the same as the front.










that's sexy


----------



## Master__Shake




----------



## mrkambo

Quote:


> Originally Posted by *Master__Shake*


Holy mother of god! whats in all of that?!?


----------



## Master__Shake

Quote:


> Originally Posted by *mrkambo*
> 
> Holy mother of god! whats in all of that?!?


66TB of space

ESXi, PXE, pfSense, 4 unlocked Intel i7s, an i5, an i3, some old AMD and Intel stuff...

and my personal fave, a 16-bay JBOD case with 16 Toshiba 2TB drives

a security camera system with 12 Seagate 2TB drives, 5 more Toshiba drives in the 4th 2U case, and a 24-port InfiniBand switch.

oh, and a 4U with a 4770K and 9 DVD burners.

it's not pretty but she's all mine.


----------



## cones

Quote:


> Originally Posted by *Master__Shake*
> 
> 66tb's of space
> 
> esxi pxe pfsense 4 unlocked intel i7s an i5 an i3 some old amd and intel stuff....
> 
> and my personal fave a 16 bay jbod case with 16 toshiba 2tb drives
> 
> security camera system 12 seagate 2tb drives 5 more toshiba drives in the 4th 2u case and a 24 port infiniband switch.
> 
> oh and a 4u with a 4770k and 9 dvd burners.
> 
> it's not pretty but shes all mine.


I did not notice the 6 sideways ones until you said 9. Any reason for all of those?


----------



## christoph

yeah, why 9 DVD drives???


----------



## Master__Shake

Quote:


> Originally Posted by *christoph*
> 
> yeah why 9 dvd's???


Something to do.

I ripped all my CDs in 3 hours though.


----------



## cones

Quote:


> Originally Posted by *Master__Shake*
> 
> Something to do.
> 
> I ripped all my CDs in 3 hours though.


Bet it got annoying if the sideways drives auto-ejected after ripping


----------



## Master__Shake

Quote:


> Originally Posted by *cones*
> 
> Bet it got annoying if the sideway drives auto ejected after done ripping


Slightly lol


----------



## christoph

nice hardware you got there, I forgot to say


----------



## TheDarkLord100

Quote:


> Originally Posted by *Master__Shake*


And I thought I had a problem hoarding pc equipment


----------



## burksdb

Quote:


> Originally Posted by *ikem*
> 
> up and running! got the switch from work, again. 2524, only 2 1g ports, but that is all I really need.


i couldn't do it. gigabit or nothin









I have 2 of the 48 port 2530's that i have 0 use for....


----------



## levontraut

I have just upgraded my gaming rig and turned it into a main server for myself.

The specs are in my sig.

Here is a brief look at it though:

Mobo:
Gigabyte 990FXA-UD7

CPU:
FX-8350

RAM:
32GB 1866

HDD:
lots (can not fit any more in the case)

OS:
Server 2012

It is taking a lot of time to set it up correctly. The file sharing is done and the TeamSpeak server is done; now on to the backups, etc...


----------



## LuckyJack456TX

My server interior (cables will be managed soon):


Server outside:


The Core:
The black SilverStone is my pfSense box:
Intel D510MO
60GB 2.5" drive
2GB DDR2 667

Below the SilverStone:
Dell Optiplex 745 (DC and print server)
Xeon E5120
4GB
74GB Raptor
W2K8 R2


everything in my Closet:


Laptop on dock is my security DVR
Dell D630
Core2 T7250
3gb
500gb HDD
W2K8 R2

Switch is a Dell Power connect 2724 (purchased 3 years ago for $20) full gigabit
Not pictured is my Wireless AP: TP-Link TL-WDR3600

Server is in my sig and is mainly used for Hyper-V for my 6 VMs, file serving and media streaming. The closet is a mess, I know; I just haven't had the time to tear it all apart and put a proper shelf or rack in. Also limited by space.


----------



## ikem

Quote:


> Originally Posted by *burksdb*
> 
> i couldnt do it. gigbit or nothin
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have 2 of the 48 port 2530's that i have 0 use for....


why use a gb switch when everything I have in home automation is 100mb. I have the server and my main desktop on 1gb which is plenty fine.


----------



## TheNegotiator

Quote:


> Originally Posted by *ikem*
> 
> why use a gb switch when everything I have in home automation is 100mb. I have the server and my main desktop on 1gb which is plenty fine.


I can't speak for burksdb, but I transfer very large files (30+GB) between multiple computers and the servers on a regular basis. Plus all of our TVs and the ded. theater also have HTPCs for streaming Blu Ray quality content across the house and we frequently have multiple streaming at the same time. My home switches probably see more traffic than the switches at work.


----------



## burksdb

Quote:


> Originally Posted by *TheNegotiator*
> 
> I can't speak for burksdb, but I transfer very large files (30+GB) between multiple computers and the servers on a regular basis. Plus all of our TVs and the ded. theater also have HTPCs for streaming Blu Ray quality content across the house and we frequently have multiple streaming at the same time. My home switches probably see more traffic than the switches at work.


Heh, on the same page as you there: everything from doing installs over the house network because I'm playing with something, to streaming Plex to 3-4 devices at the same time. If only 10GbE wasn't so expensive.


----------



## ikem

Quote:


> Originally Posted by *TheNegotiator*
> 
> I can't speak for burksdb, but I transfer very large files (30+GB) between multiple computers and the servers on a regular basis. Plus all of our TVs and the ded. theater also have HTPCs for streaming Blu Ray quality content across the house and we frequently have multiple streaming at the same time. My home switches probably see more traffic than the switches at work.


again, I don't need all gigabit. I have 1 PC and 1 server; that's it. A free switch is the best, really... better than a cheap hub anyway.


----------



## Sad

Heres my server


----------



## CJston15

DL380 G5?

It's very similar to a server I just got basically free from work. A DL380 G5, but it only had dual-core procs, so I already purchased two X5450s (3.0GHz quad-cores). Also purchased 6x 500GB 10k SAS drives to go with my 2x 146GB 15k SAS drives. It's got 32GB of RAM in it right now, so once I get my new procs and drives I should be off and running.


----------



## Foxwater

My little Ubuntu Minecraft server...runs on less than 20W

Finished product:


Internals:


----------



## TheDarkLord100

Quote:


> Originally Posted by *Foxwater*
> 
> My little Ubuntu Minecraft server...runs on less than 20W
> 
> Finished product:
> 
> 
> Internals:


lol that is sick


----------



## void

That minecraft server is awesome.







Great job


----------



## DaveLT

Quote:


> Originally Posted by *Foxwater*
> 
> My little Ubuntu Minecraft server...runs on less than 20W
> 
> Finished product:
> 
> 
> Internals:


Specs?







I know it's AM1 but ... yea.


----------



## Plan9

wow that's epic


----------



## Foxwater

Specs:
Processor: AMD Athlon 5150 APU (4 cores, 1.6ghz)
RAM: 4GB(2 x 2GB) Crucial 1600 Mhz
SSD: 64GB ADATA
MOBO: ASUS Mini-ITX
PSU: Mini-Box pico-PSU-160-XT (160 watt)


----------



## burksdb

That is awesome


----------



## pvt.joker

That's a sweet lookin lil box..








I need to get off my lazy bum and get my rack finished up and in place.. just too much work and everything else in the way of my side projects..


----------



## lordhinton

Everyone has proper servers, then there's me with a standard OEM box for a server


----------



## Sad

Quote:


> Originally Posted by *CJston15*
> 
> DL380 G5?
> 
> It's very similar to server I just got for basically free from work. DL380 G5 but it only had dual core procs so I already purchased two X5450's so Quad Core 3Ghz procs. Also purchased 6 x 500gb 10k SAS drives to go with my 2 x 146gb 15k SAS drives. It's got 32gb of RAM in it right now so once I get my new procs and drives I should be off and running.


thats awesome bud







i should make a club for 380's


----------



## cones

Quote:


> Originally Posted by *lordhinton*
> 
> Everyone has proper servers then theres me with a srandard oem for a server


You could be stuck with a motherboard that doesn't like 3TB HDDs and a case with room for only three HDDs. Also only have about 300GB left.


----------



## lordhinton

Quote:


> Originally Posted by *cones*
> 
> You could be stuck with a motherboard that doesn't like 3tb HDDs and a case with room for only three HHDs. Also only have about 300GB left.


Suppose you're right, although my case only fits two HDDs. I'm certain the motherboard can take bigger hard drives though; currently a single Western Digital 2TB Green.


----------



## pmataruso

These are pictures from two different sites owned by myself. Here is a live webcam of the main datacenter in Plainfield, NH -- http://cam2.dothackinc.com -- that's an Axis cam btw, it may ask for permission to load or some **** like that. The first location, with the black and blue server racks, is the main site: Cisco 3500 Series edge, Cisco 2960-49 for the core, mostly VMware with Linux guests, optical internet, plus private fiber between this location and several other buildings in the local area. The other location is in my basement and is my testing setup; I use it to test different network setups and whatnot, and it hosts a couple of backup servers for the main site.









All the pictures above are from my house; this is for testing and failover from the main location below


----------



## LuckyJack456TX

@pmataruso

I spy an orange light on your rack, on the 3rd Dell server from the bottom.







Something unplugged. Is that all hosted out of a garage or something?


----------



## pmataruso

Yeah, one of the PSUs is unplugged. It used to be a decent-sized storage shed, but it's no longer used as that now. The other half of the building is all the ingress power for the entire facility; I believe we have a 1500 Amp service. The building actually sits on a huge plot of land. It is home to a trout farm, a big big trout farm; that's why the service is so big: tons of VFDs for well pumps, and SCADA for the automation. Fiber links most of the buildings for internet/phone/control and whatnot. There is also a small IP DSLAM in there to supply DSL to the houses on and around the property. It's like a small village of sorts. The servers though have not much to do with the trout farm; they are for my own uses. Only one VM is dedicated to the SCADA and automation systems. To answer your question though: yes, it was a garage/shed at one point. Now it's just for ingress power and my servers.


----------



## EvilMonk

Quote:


> Originally Posted by *pmataruso*
> 
> Yah one of the PSU's is unplugged. And It used to be a decent sized storage shed, but it no longer used at that now. The other half of the building is all the ingress power for the entire facility, I believe we have a 1500 Amp server. The building actaully sits on a huge plot of land. It is home to a trout farm, a big big trout farm, thats why the service is so big, tons of VFDs for well pumps, and SCADA for the automation, fiber links most of the buildings, for interent/phone/control and what not. There is also a small IP DSLAM in there to supply DSL to the houses on and around the property. Its like a small village of sorts. The servers tho have not much to do with the trout farm, they are for my own uses. Only on VM is dedicated to the SCADA and automation systems. To answer your quesiton tho, yes it was a garage/shed at one point. Now its just for ingress power and my servers.


Damn nice setup man








You seem so young; it's impressive to see you've done all this.
Sorry for asking, but is your picture representative of your age? How old are you?
Thumbs up for your server setup!!


----------



## pmataruso

I'm 22 years old. I started putting it together when I was in 9th grade; I started out with an old Dell PowerEdge 2900 and Red Hat Linux, hosted a couple of websites for friends, and it turned into this. I own a small company here in Newport, NH. I mostly do managed IT contracts and network design, and a lot of computer repair. But I like the networking aspect the most. I'm a Cisco guy all the way.


----------



## burksdb

My Unraid Server running in Esxi

Mobo: Asus P7P55WS Supercomputer
Cpu: Xeon X3440
Ram: Crucial 8GB
Raid card: Silicon Image Sil 3114 - Esxi's Main Datastore access
Ssd: Crucial MX100 256GB - Unraid Cache Drive
Hdd's: 80GB Maxtor drive (Esxi Datastore)
Hdd's: 3x 3TB WD Red's & 2 1TB WD Green's (In the process of replacing the greens)
Nic: Brocade Cna 1020 Dual 10GB Sfp+ ports w/ Brocade Direct Sfp+ Cable
Case: Norco 4220 - Old Version
Gpu: Amd 3450
Psu: Corsair TX750

ESXi passthrough is set up for the onboard Intel 6-port SATA AHCI controller for the Unraid drives.

ESXi boots off one flash drive, and then I have 1 VM running Unraid.
The VM boots 20 seconds after ESXi; its datastore is set to boot off a custom plpbt image that chainloads the Unraid USB after 1 second.




(Yes i know i forgot the sata cable going to the ssd







)

Burst from my main desktop, with a second Brocade 10Gb NIC installed, to my Unraid cache. I wish I could keep up the sustained speeds; running tests I get 280-300MB/s transfers to the cache drive.


My main ESXi server (in the process of getting a better case for it)
Case: Thermaltake random case
Mobo: Asus Z8NA-D6C
Cpu: Dual Xeon L5520's
Ram: 16GB Hynix DDR3 ECC PC3-10600 (4 x 4GB sticks with 2 DIMMs free)
HDD's: 1 40GB Intel SSD, 1 500GB WD Blue, 1 250GB Samsung, 2x 160GB VelociRaptor drives - some drives are mounted in the 3-in-5 adapter.
Cpu Coolers: 2x Corsair H55's
Psu: Thermaltake 700W Toughpower

Runs a few VM's

Sophos - handles my DNS, DHCP, firewall, etc. I have the 2 internal NICs passed through to this VM to handle my network
Plex / Newsgroups - This VM handles all my media -

I know the cabling is terrible - main reason I am switching cases.



Older pic before changing ram - cables look a little better but not by much.


Running Plex - I forced 6 HD transcodes and it didn't stutter one bit. The highest usage I saw peaked at 90% at startup, then settled around 75%. I'm pretty sure I could have had 2 more transcoding streams running without any issues.


Runs a Linux Mint VM for ownCloud.
Also runs a few other OSes I'm testing / playing with.

It's been a *Learning Experience* the whole way, that's for sure


----------



## EvilMonk

Quote:


> Originally Posted by *burksdb*
> 
> My Unraid Server running in Esxi
> 
> Mobo: Asus P7P55WS Supercomputer
> Cpu: Xeon X3440
> Ram: Crucial 8GB
> Raid card: Silicon Image Sil 3114 - Esxi's Main Datastore access
> Ssd: Crucial MX100 256GB - Unraid Cache Drive
> Hdd's: 80GB Maxtor drive (Esxi Datastore)
> Hdd's: 3x 3TB WD Red's & 2 1TB WD Green's (In the process of replacing the greens)
> Nic: Brocade Cna 1020 Dual 10GB Sfp+ ports w/ Brocade Direct Sfp+ Cable
> Case: Norco 4220 - Old Version
> Gpu: Amd 3450
> Psu: Corsair TX750
> 
> Esxi Passthru is setup for the Onboard Intel 6 port Sata AHCI Controller for the unraid drives.
> 
> Esxi Boots off one flash drive and then i have 1 VM running Unraid
> The VM boots 20 seconds after Esxi. Datastore set to boot off a custom plpbt image to boot the Unraid USB after 1 second.
> 
> 
> 
> 
> (Yes i know i forgot the sata cable going to the ssd
> 
> 
> 
> 
> 
> 
> 
> )
> 
> Burst from my main desktop with a second Brocade 10gb Nic installed to my Unraid Cache - I wish i could keep up the sustained speeds. Running test i get 280 - 300MB/S transfer to the cache drive.
> 
> 
> My Main Esxi Server (in the process of getting a better case for it)
> Case: Thermtake Ramdom case
> Mobo: Asus Z8NA-D6C
> Cpu: Dual Xeon L5520's
> Ram: 16GB Hynix DDR3 ECC PC3-10600 (4 x 4gb sticks with 2 Dimms free)
> HDD's: 1 40gb Intel SSD, 1 500GB WD Blue, 1 250GB Samsung, 2x 160GB VelociRaptor Drives - some drives are mounted in the 3-in-5 adapter.
> Cpu Coolers: 2x Corsair H55's
> Psu: Thermaltake 700W Toughpower
> 
> Runs a few VM's
> 
> Sophos - handles my dns, dhcp, firewall etc.. I have the 2 internal nics passed thru to this VM to handle my network
> Plex / Newsgroups - This VM handles all my media -
> 
> I know the cabling is terrible - Main reason i am switching cases.
> 
> 
> 
> Older pic before changing ram - cables look a little better but not by much.
> 
> 
> Running Plex - I forced 6 HD transcodes and it didnt stutter one bit. Highest useage i saw peaked at 90% at startup then settled around 75%. Im pretty sure i could of had 2 more transcoding streams running without any issues.
> 
> 
> Runs a Linux Mint VM for Owncloud
> Also runs a few other OS's im testing / playing with.
> 
> It's Been a *Learning Experience* the whole way thats for sure


The ESXi server I freaking love


----------



## cones

Never seen a picture with the CPUs right next to each other like that before, is that why you have the water cooling?


----------



## DaveLT

Quote:


> Originally Posted by *cones*
> 
> Never seen a picture with the CPUs right next to each other like that before, is that why you have the water cooling?


On dual-socket ATX boards it's normal for them to be butted right up next to each other


----------



## Plan9

Indeed. It's not a water cooled system either


----------



## burksdb

Quote:


> Originally Posted by *Plan9*
> 
> Indeed. It's not a water cooled system either


I had one H55 laying around decided to see if it would fit and luckily it did, so i ordered another and went with it. It's not a "true" watercooled system like the one in my desktop but it gets the job done and stays quiet.


----------



## Plan9

Quote:


> Originally Posted by *burksdb*
> 
> I had one H55 laying around decided to see if it would fit and luckily it did, so i ordered another and went with it. It's not a "true" watercooled system like the one in my desktop but it gets the job done and stays quiet.


Oh it is water cooled? sorry guys


----------



## cones

Quote:


> Originally Posted by *DaveLT*
> 
> On dual socket ATX boards it's only normal for them to be butt right next to each other


The smaller board would explain it. Liquid-cooled; I doubt they actually have water in there.


----------



## EvilMonk

Quote:


> Originally Posted by *DaveLT*
> 
> On dual socket ATX boards it's only normal for them to be butt right next to each other


Damn Dave man you said something I didn't hear for a freaking while there... dual socket ATX board lol







Its been a decade since I heard those words lol


----------



## DaveLT

Quote:


> Originally Posted by *EvilMonk*
> 
> Damn Dave man you said something I didn't hear for a freaking while there... dual socket ATX board lol
> 
> 
> 
> 
> 
> 
> 
> Its been a decade since I heard those words lol


Nearly bought a Z8NA-D6C for my server as well, but after looking at the BIOS instability of my EX58-UD5 (it suddenly began), I decided I'll be using that for my server. (The BIOS is only unstable when you change settings.)









I'll be getting a Perc 6i for it though







Quote:


> Originally Posted by *cones*
> 
> The smaller board would explain it. Liquid cooled, doubt they actually have water in there.


Wow. AIOs actually have water in there, or else how would they work? It counts as water cooling, bruh


----------



## cones

Quote:


> Originally Posted by *DaveLT*
> 
> Wow. AIOs actually have water in there or else how does it work? It's considering water cooling bruh


What about methanol, ethylene glycol, propylene glycol, and glycerol? There are other things besides water; why do you think the coolant in cars is not just pure water anymore? You been drinking too much antifreeze, bruh?


----------



## DaveLT

Quote:


> Originally Posted by *cones*
> 
> What about Methanol, Ethylene glycol, Propylene glycol, and Glycerol? There are other things besides water, why do you think the coolant in cars are not just pure water anymore? There are other liquids besides water, you been drinking to much antifreeze bruh?


It's NOT full of ethylene glycol. That stuff is FLAMMABLE, dude.
It's mostly water with a bit of glycol, added to keep the aluminium rads from corroding in the solution.
Water has the highest specific heat of any common coolant, so why would you not use it?
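For what it's worth, the numbers do favor water. A quick back-of-the-envelope comparison of volumetric heat capacities, using rough textbook property values (the 50/50 premix figures are approximate):

```python
# Volumetric heat capacity of common coolants, ~25C ballpark figures.
# density in g/cm^3, specific heat in J/(g*K).
coolants = {
    "water":              (1.00, 4.18),
    "50/50 glycol/water": (1.07, 3.30),  # typical premix, approximate
    "ethylene glycol":    (1.11, 2.42),
}

def volumetric_heat_capacity(density, specific_heat):
    """Heat absorbed per cm^3 of coolant per degree of temperature rise."""
    return density * specific_heat

for name, (rho, cp) in coolants.items():
    print(f"{name:20s} {volumetric_heat_capacity(rho, cp):.2f} J/(cm^3*K)")
```

Per unit volume, pure water soaks up roughly 50% more heat per degree than pure ethylene glycol, which is why loops use as much water as corrosion and freeze protection allow.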


----------



## cones

Quote:


> Originally Posted by *DaveLT*
> 
> It's NOT full of ethylene glycol. Those are FLAMMABLE dude.
> It's a lot of water with a bit of glycol but neither of those
> 
> to keep the aluminium rads from reacting in the solution.
> Water is the best conductor of heat so why would you not use that?


OK, so there is some stuff besides water; I misspoke earlier, I meant to say I doubt it was pure water. One reason I can think of not to use pure water is all of them freezing while being shipped.


----------



## SamKook

Another reason is that algae will form in pure water, and AIOs are not made to be opened so you can't change the coolant (not impossible, though).


----------



## burksdb

Quote:


> Originally Posted by *DaveLT*
> 
> Nearly bought a Z8NA-D6C for me server as well but I decided ... after looking at the bios instability of my ex58-ud5 (it suddenly began) I'll be using that for my server. (BIOS is unstable when you change settings only)


I'm happy with my Z8NA, it's been working flawlessly. Wish it had more DIMMs, but that's the compromise I had to make to stay with ATX.


----------



## DaveLT

Quote:


> Originally Posted by *burksdb*
> 
> Im Happy with my Z8na been working flawlessly. Wish it had more Dimms, but thats what to i had to compromise so i could stay with ATX.


6 DIMMs is good enough

8GB x 6 if you're that worried ... since it's 1DPC there's no extra strain on the IMC


----------



## BlackCat33

This is my first post on the forum









My home rack:


----------



## lowfat

Quote:


> Originally Posted by *EvilMonk*
> 
> Damn Dave man you said something I didn't hear for a freaking while there... dual socket ATX board lol
> 
> 
> 
> 
> 
> 
> 
> Its been a decade since I heard those words lol


Supermicro made a LGA2011 2P ATX board. I had bought one but found out the hard way that engineering samples didn't work.


----------



## jibesh

Quote:


> Originally Posted by *BlackCat33*
> 
> This is my first post on forum
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My home rack:


That's a lot of cables. What are the specs for stuff in the rack?


----------



## BlackCat33

The switches in the rack are permanently connected to patch panels on the rear side (three patch panels). The servers connect to the switches over 2-4 gigabit ports each (the HP 2530 supports layer-4 load balancing where an LACP trunk is used), plus management network connections. My workstation also connects to the storage network over 4 gigabit ports. In total, everything connected uses about 40 ports; the remaining ports are free for my guests, etc.
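For reference, a server-facing LACP trunk like that might be configured on the 2530 along these lines. This is a from-memory ProCurve CLI sketch (port numbers and the VLAN ID are made up, and the exact trunk-load-balance keyword varies by firmware, so verify against the 2530 management guide):

```
! Hypothetical ProCurve 2530 config fragment
trunk 1-2 trk1 lacp            ! bundle ports 1-2 into LACP trunk trk1
trunk-load-balance L4-based    ! distribute flows by L4 (TCP/UDP) info
vlan 10
   tagged trk1                 ! carry VLAN 10 over the trunk
   exit
```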


----------



## cones

Quote:


> Originally Posted by *BlackCat33*
> 
> Switches in rack constantly connected with patch panels on the rear side (three patch panels). Servers in rack connected by 2-4 gigabit ports with switches (HP 2530 support level 4 load balancing where LACP trunk is used) plus management network connections. Also my workstation connected by 4 ports with storage network by 4 gigabit ports. Generally, all connected stuff use about 40 ports. Additional ports free for my guests & etc.


Looks like you have a lot of storage, is it just that and VMs?


----------



## BlackCat33

Quote:


> Originally Posted by *cones*
> 
> Looks like you have a lot of storage, is it just that and VMs?


Primary router on FreeBSD; torrent server + access point on Debian; storage server; backup server; one application server and one SourceSafe; guest access points; and a print server. The other connections are the family's hosts. The network is divided into VLANs, which terminate on the primary router. All hosts in the rack except the torrent/access-point server are powered off most of the time. Maximum 24/7 power consumption is about 60W.


----------



## beers

Quote:


> Originally Posted by *BlackCat33*
> 
> Maximal power consumption 24/7 is about 60W.


Is that just the switch?

Also, could you not put some L3 SVI on the "core" switch to manage your inter-VLAN routing? I'm not that familiar with those switches though so wasn't sure if L2-only.


----------



## BlackCat33

Quote:


> Originally Posted by *beers*
> 
> Is that just the switch?
> Also, could you not put some L3 SVI on the "core" switch to manage your inter-VLAN routing? I'm not that familiar with those switches though so wasn't sure if L2-only.


First of all, you are right: all my switches (HP 1810-24g v2 and HP 2530-48) are Layer 2 only. All inter-VLAN routing is handled by the Debian and FreeBSD routers.
The 60W is the power consumption of the ISP modem and the Atom D2500 platform (secondary router, access point, squid, DNS, DHCP, Samba, transmission, etc.).
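Terminating the VLANs on a FreeBSD box is pleasantly terse. A minimal /etc/rc.conf sketch for routing between two VLANs on one NIC (the interface name em0 and the VLAN IDs/subnets are placeholders for illustration, not the actual layout):

```
# Hypothetical /etc/rc.conf fragment: FreeBSD as an inter-VLAN router
gateway_enable="YES"                      # turn on IPv4 forwarding
vlans_em0="10 20"                         # create em0.10 and em0.20
ifconfig_em0="up"
ifconfig_em0_10="inet 192.168.10.1/24"    # router address on VLAN 10
ifconfig_em0_20="inet 192.168.20.1/24"    # router address on VLAN 20
```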


----------



## beers

Quote:


> Originally Posted by *BlackCat33*
> 
> First of all - you are right: all my switches (HP 1810-24g v2 and HP 2530-48) Layer 2. All inter-VLAN routing processed by Debian and FreeBSD routers.
> 60W - this is power consumption of ISP modem and Atom D2500 platform (secondary router, access point, squid, DNS, DHCP, SAMBA, transmission & etc).


Ah, sounds pretty cool. Thanks for clarifying; it's nice to know the specs when looking at something (apparently I'm a sucker for model numbers). Seems to be a pretty good looking rack









I was going to 'ermagherd' at 60w since my piddly little rack of everything pulls about ~160w








(Ubiquiti ERPro-8, Cisco 2960G-8TC, Cisco WLC 2504, Cisco 3502i AP, Fileserver in sig, Opengear 9108 PDU)


----------



## BlackCat33

Quote:


> Originally Posted by *beers*
> 
> Ah, sounds pretty cool. Thanks for clarifying, it's nice to know the specs of looking at something (apparently I'm a sucker for model numbers). Seems to be a pretty good looking rack
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was going to 'ermagherd' at 60w since my piddly little rack of everything pulls about ~160w
> 
> 
> 
> 
> 
> 
> 
> 
> (Ubiquiti ERPro-8, Cisco 2960G-8TC, Cisco WLC 2504, Cisco 3502i AP, Fileserver in sig, Opengear 9108 PDU)


All the servers, switches, tape drives, access points, etc. in the rack, plus the workstation, draw a maximum of about 1500W at full load. Standard power consumption (half the rack, including the workstation) is about 600W. I prefer to turn off any equipment I don't need because of the price of electricity in my country
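To put a number on why the off-by-default habit pays: a quick sketch of the yearly cost gap between the ~600W standard draw and the ~60W always-on subset. The per-kWh price is a made-up placeholder; plug in your own tariff.

```python
PRICE_PER_KWH = 0.20  # placeholder tariff, local currency per kWh

def yearly_cost(watts, price_per_kwh=PRICE_PER_KWH):
    """Cost of running a constant load of `watts` for a full year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

print(f"600 W around the clock: {yearly_cost(600):.2f}")
print(f" 60 W around the clock: {yearly_cost(60):.2f}")
```

At that placeholder rate the 540W difference is worth roughly a thousand a year, so shutting idle boxes down adds up fast.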


----------



## Plan9

Quote:


> Originally Posted by *beers*
> 
> Ah, sounds pretty cool. Thanks for clarifying, it's nice to know the specs of looking at something (apparently I'm a sucker for model numbers). Seems to be a pretty good looking rack
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was going to 'ermagherd' at 60w since my piddly little rack of everything pulls about ~160w
> 
> 
> 
> 
> 
> 
> 
> 
> (Ubiquiti ERPro-8, Cisco 2960G-8TC, Cisco WLC 2504, Cisco 3502i AP, Fileserver in sig, Opengear 9108 PDU)


My home server is between 300 and 600w depending on how heavily it's being ragged and that's just one box (albeit with about a dozen disks in it)


----------



## EvilMonk

Quote:


> Originally Posted by *BlackCat33*
> 
> This is my first post on forum
> 
> 
> 
> 
> 
> 
> 
> 
> 
> My home rack:


Damn, that's an HP 2000-series rack $$$$














I mean the rack the stuff is mounted in; I know which model it is since I had to order 6 when I designed the server architecture at work a couple of years ago. These things are far from cheap.







Its a very neat home setup! Kudos!!!







Do you test run the UPS often? Have you had to replace the batteries much since you've had the setup?


----------



## DaveLT

Quote:


> Originally Posted by *lowfat*
> 
> Supermicro made a LGA2011 2P ATX board. I had bought one but found out the hard way that engineering samples didn't work.


ASUS as well


----------



## BlackCat33

Quote:


> Originally Posted by *EvilMonk*
> 
> Damn, that's an HP 2000 series rack $$$$
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I mean the rack the stuff is mounted in; I know which model it is since I had to order six when I designed the server architecture at work a couple of years ago. These things are far from cheap.
> 
> 
> 
> 
> 
> 
> 
> It's a very neat home setup! Kudos!!!
> 
> 
> 
> 
> 
> 
> 
> Do you test run the UPS often? Have you had to replace the batteries much since you've had the setup?


Thank you















I bought this cabinet (manufactured in 2005) at a symbolic price (around 150 USD) from a local chip-maker company after they upgraded their server room.
The UPSes (4x APC Smart-UPS 1000VA, also made in 2005) were bought from another company with dead batteries, which I replaced with new 12V 7Ah units during rack assembly.
The test run and battery calibration passed successfully.

I should specify the full hardware list in the rack:
Switches:
Primary:
HP Procurve 2530-48g (J9775A) - 48 gigabit ports, layer 2, management, layer 4 traffic load balancing support.
(Got it as a replacement for an old 2510-48g - thanks to HP's lifetime warranty!)
Secondary:
HP Procurve 1810-24g v2 - 24 gigabit ports, layer 2, web management. Very simple and quiet home switch.

Routers:
Primary:
Case: 2U iStar rackmount chassis (can't say something positive about this case ...)
MB: Supermicro X10SLH-F
CPU: Intel Xeon E3-1220 v3 1150 quad core
RAM: Crucial DDR3 unbuffered ECC 1600 1.35v 2x8GB
HDD: OS - SSD Intel X25-e 64GB (from ebay - bought used)
PSU: Seasonic 360W
NIC:
1. onboard - 2xIntel i210AT + IPMI dedicated
2. Dell Pro 1000 VT gigabit quad port server adapter - switch downlink to Procurve 2530-48g
3. Silicom PEG2BPI gigabit dual port bypass server adapter - Primary WAN - 100/2 Mbit connection + bypass to secondary router WAN interface
4. Silicom PEG2BPI gigabit dual port bypass server adapter - Secondary WAN - reserved
OS: FreeBSD 10 x64 (in upgrade process to 10.1)
Secondary:
Case: iStar ITX case
MB: Intel D2500CC
CPU: onboard Intel Atom D2500
RAM: Kingston DDR3 unbuffered SODIMM 2x2GB
HDD:
Fujitsu 160GB 2.5" 5400 rpm (from old laptop) - OS + squid cache
Seagate 750GB 2.5" 7200 rpm - transmission download + samba storage
PSU: 200W Flex-ATX

Servers:
Application server:
Case: 1U noname (bought used)
MB: Asrock IMB-181L (ITX)
CPU: Intel i5-4570 quad core
RAM: Corsair DDR3 unbuffered SODIMM 2x8GB
HDD:
WD VelociRaptor 1500HLFS SATA 10000 rpm - OS
3xOCZ SSD SATA 30GB (for database - log/temp/user data)
NIC:
onboard - i210AT - DMZ
onboard - i217V - management
Intel Pro 1000 PT gigabit dual port server adapter - team 2x1000
PSU: Seasonic 250W Flex-ATX

Source safe server:
Case: 1U noname (bought used)
MB: Gigabyte G31 based (old mobo from friend of mine)
CPU: Intel E2180 dual core
RAM: Noname DDR2 unbuffered - 2x2GB
HDD: WD 5000AAKS SATA 3.5" 7200 rpm (OS + DATA on same disk)
NIC:
onboard - Realtek - management
Intel Pro 1000 PT gigabit dual port server adapter - team 2x1000
PSU: Seasonic 250W Flex-ATX

Storage:
Case: Coolermaster HAF-932
MB: Gigabyte G33-DS2R
CPU: Intel Xeon L5420 (mod LGA 771) quad core
RAM: Noname DDR2 unbuffered - 4x2GB
RAID: Dell PERC 6i + BBU SAS RAID controller (8 ports)
HDD:
OCZ SSD SATA 30GB - OS
8xWD 10EADS (Green, 1TB) - DATA - RAID 6
4xWD 5000AAKS - DATA - JBOD
NIC:
onboard - Realtek - management
Silicom PEG4BPI gigabit quad port bypass server adapter (bypass mode is disabled) - team 4x1000
PSU: Enermax 520W

Backup storage:
Case: Coolermaster HAF-932
MB: Supermicro X7SBE
CPU: Intel Q9550 quad core LGA775
RAM: Kingston DDR2 unbuffered - 4x2GB
RAID: 2xDell PERC 6i + BBU SAS RAID controller (8 ports)
SCSI: 2xAdaptec 320 single port (PCI-X) - for tape drives
HDD:
SAS 10K 146GB - OS
4xSAS 10K 146GB RAID 10 - for LTO2 tape drive writing
4xSAS 15K 300GB RAID 10 - for LTO3 tape drive writing
4xSATA WD3200KS 7200 rpm RAID 10 - backup share
NIC:
onboard - Intel - management
onboard - Intel - reserved
2xIntel Pro 1000 MT gigabit dual port server adapter - team 4x1000
PSU: Seasonic Platinum 1000W (yes, I know - overkill for this setup)

UPS:
4xAPC Smart UPS 1000VA (USB+COM)

Tape drives:
HP Ultrium 448 LTO2 external SCSI
HP Ultrium 960 LTO3 external SCSI


----------



## Irisservice

Very nice Blackcat33


----------



## Kaboooom2000uk

I have acquired a new board, a Supermicro X8QB6-F









Not bad for $700 when it should cost $2500!

Got to find a cheap chassis, or make a custom case!



CPUs:


----------



## DaveLT

Wow! You'd have to make your own custom case.
$30 per chip?!


----------



## CloudX

I was able to get me a nice HP DL 380 G7 from work. It's all set up, just wish it wasn't the SAS 2.5in drives. 300GB x 8 is just not enough


----------



## EvilMonk

Quote:


> Originally Posted by *CloudX*
> 
> I was able to get me a nice HP DL 380 G7 from work. It's all set up, just wish it wasn't the SAS 2.5in drives. 300GB x 8 is just not enough


Don't worry. If it's a DL380 G7 with the Smart Array, you're stuck with the HP factory drives: unless you want to spend some serious $$$ on HP SSDs or larger HP drives, or replace the RAID controller so you can buy non-HP drives and swap all the drives for SSDs (and then lose the HP support; I guess since it's a G7 it still has some), you're stuck with those, since the 600GB drives are a lot more expensive to get. It's an awesome server, I assure you


----------



## jibesh

Quote:


> Originally Posted by *EvilMonk*
> 
> Don't worry. If it's a DL380 G7 with the Smart Array, you're stuck with the HP factory drives: unless you want to spend some serious $$$ on HP SSDs or larger HP drives, or replace the RAID controller so you can buy non-HP drives and swap all the drives for SSDs (and then lose the HP support; I guess since it's a G7 it still has some), you're stuck with those, since the 600GB drives are a lot more expensive to get. It's an awesome server, I assure you


The HP RAID controllers will work just fine with non-HP drives as well. I have run 500GB-4TB SATA drives off HP P410 adapters without any issues. Even commercial SSDs work fine on those.


----------



## Vispor

Quote:


> Originally Posted by *jibesh*
> 
> The HP RAID controllers will work just fine with non-HP drives as well. I have run 500GB-4TB SATA drives off HP P410 adapters without any issues. Even commercial SSDs work fine on those.


I can confirm this as well. I have G7's and G5's working with Western Digital drives.


----------



## DaveLT

Quote:


> Originally Posted by *Vispor*
> 
> I can confirm this as well. I have G7's and G5's working with Western Digital drives.


They are frequently purchased to be used for servers that definitely AREN'T HPs


----------



## EvilMonk

Quote:


> Originally Posted by *jibesh*
> 
> The HP RAID controllers will work just fine with non-HP drives as well. I have run 500GB-4TB SATA drives off HP P410 adapters without any issues. Even commercial SSDs work fine on those.


Well, I have six 48U racks full of those ProLiant servers at work, along with HP StorageWorks SANs that most of the time refuse to boot with anything other than HP drives in them. I have 8 HP servers at home and I can't get them to work with anything other than HP drives either. I have Smart Array P410i 512MB and 1024MB, Smart Array P400 256MB and 512MB, and Smart Array E212 128MB controllers; they just refuse to create arrays with anything other than HP drives... I tried VelociRaptors, Caviar Black and Blue drives, normal 7.2K WD 3.5" drives, OCZ and Crucial SSDs... never worked...

And these are my home servers...


----------



## zanginator

That's certainly odd that it refuses to boot. I've had no issues with HP servers and drives from any manufacturer.

I wonder if it's down to the firmware version it's running.


----------



## pvt.joker

Quote:


> Originally Posted by *jibesh*
> 
> The HP RAID controllers will work just fine with non-HP drives as well. I have run 500GB-4TB SATA drives off HP P410 adapters without any issues. Even commercial SSDs work fine on those.


I also recently came across a 380 G7, so this makes me very happy. It's got 2x 146GB (system partition) and 3x 600GB that I was thinking about replacing with 480GB SSDs. We'll see; I have a couple of spare drives lying around that I can test to see what works and what doesn't.


----------



## EvilMonk

Quote:


> Originally Posted by *pvt.joker*
> 
> I also recently came across a 380 G7, so this makes me very happy. It's got 2x 146GB (system partition) and 3x 600GB that I was thinking about replacing with 480GB SSDs. We'll see; I have a couple of spare drives lying around that I can test to see what works and what doesn't.


I don't have G7s, though; mine are G6s and G5s, so maybe that's the reason. The ones I have at work are G7s, G6s, and X1000 series SANs, and I always deploy the latest HP maintenance pack on them, which updates everything to the latest firmware (BIOS, Ethernet, iLO, etc., not just the Smart Array).
***Edit***
Maybe if you keep them on the original Smart Array firmware you will be able to use non-HP drives


----------



## jibesh

Quote:


> Originally Posted by *EvilMonk*
> 
> Well, I have six 48U racks full of those ProLiant servers at work, along with HP StorageWorks SANs that most of the time refuse to boot with anything other than HP drives in them. I have 8 HP servers at home and I can't get them to work with anything other than HP drives either. I have Smart Array P410i 512MB and 1024MB, Smart Array P400 256MB and 512MB, and Smart Array E212 128MB controllers; they just refuse to create arrays with anything other than HP drives... I tried VelociRaptors, Caviar Black and Blue drives, normal 7.2K WD 3.5" drives, OCZ and Crucial SSDs... never worked...
> 
> And these are my home servers...


Well, that's strange that you're having those issues. I'm sure I've heard of people on here and other forums using G5/G6/G7s with non-HP-branded drives. I run P410 controllers on Supermicro motherboards and they have worked fine with all the traditional SATA/SAS HDDs and SSDs I've tried.


----------



## akshep

Updated setup. (I know the wiring is a mess, I really need to straighten it out.)
I have a couple of HP 10/100 switches (unused)
Dell Poweredge SC 1435 (ESXi)
Netgear ProSafe gigabit switch
Dell Poweredge 1650 (Webserver, DNS, Mail Server)
Custom build used for plex and file sharing
Cisco 2821 (I love this router.)
Rack mount satellite receiver ( only there because I have no other place for it.)


----------



## driftingforlife

Wish I still had my rack and could have one in my room. Damn crap floor that can't take much weight


----------



## TheNegotiator

Quote:


> Originally Posted by *EvilMonk*
> 
> I don't have G7s, though; mine are G6s and G5s, so maybe that's the reason. The ones I have at work are G7s, G6s, and X1000 series SANs, and I always deploy the latest HP maintenance pack on them, which updates everything to the latest firmware (BIOS, Ethernet, iLO, etc., not just the Smart Array).
> ***Edit***
> Maybe if you keep them on the original Smart Array firmware you will be able to use non-HP drives


I have a DL380 G5 and G6 running at home. I've got a pair of Samsung 840 SSDs and six WD 2.5" 2TB Green drives in the G6, and several WD Blue drives in my G5; never had a problem with either. My firmware probably isn't the latest, but I did update the firmware in both servers earlier this year.


----------



## EvilMonk

Quote:


> Originally Posted by *TheNegotiator*
> 
> I have a DL380 G5 and G6 running at home. I've got a pair of Samsung 840 SSD's and 6 WD 2.5" 2TB Green drives in the G6 and several WD Blue drives in my G5, never had a problem with either. My firmware probably isn't the latest but I did update the firmware in both servers earlier this year.


Well, that's weird... I sure wish my servers would boot with those non-HP-branded drives, since I can't afford HP SSDs and I have many 128GB and 256GB SSDs and 500GB and 1TB 2.5" HDDs that I wish I could mount in these servers instead of having to buy drives on eBay. All these servers are filled with 300GB and 146GB SAS drives or 1TB midline 7.2K SATA drives. I tried many things to get them to work, and at one point even swapped out a Smart Array for an LSI controller, but the performance loss was too bad so I had to switch back...


----------



## CloudX

Spoiler: Snip



Quote:


> Originally Posted by *EvilMonk*
> 
> Well, that's weird... I sure wish my servers would boot with those non-HP-branded drives, since I can't afford HP SSDs and I have many 128GB and 256GB SSDs and 500GB and 1TB 2.5" HDDs that I wish I could mount in these servers instead of having to buy drives on eBay. All these servers are filled with 300GB and 146GB SAS drives or 1TB midline 7.2K SATA drives. I tried many things to get them to work, and at one point even swapped out a Smart Array for an LSI controller, but the performance loss was too bad so I had to switch back...






That's a bummer for sure. I have about a dozen 300GB cold spares for my HP array. I don't think they are all HP, there was another brand in there if I'm not mistaken. I'll have to check. I'm sure you tried changing the firmware etc?


----------



## EvilMonk

Quote:


> Originally Posted by *CloudX*
> 
> 
> That's a bummer for sure. I have about a dozen 300GB cold spares for my HP array. I don't think they are all HP, there was another brand in there if I'm not mistaken. I'll have to check. I'm sure you tried changing the firmware etc?


Well, I update the servers with the latest HP SPP release DVD on a regular basis, which updates all the firmware on the server, not only the Smart Array controller firmware. But I never changed a drive's firmware...


----------



## BlackCat33

Some improvements:

1. Reorganized my rack: increased the free space between UPS units and reordered the servers.
2. The little Debian-based router got a new case, a dual-port network card (Intel Pro/1000 MT dual-port server adapter), and a new Seasonic PSU in place of the 9-year-old Flex-ATX PSU with near-dead capacitors. 24/7 power consumption dropped from 60W to about 40W (router + ISP modem).

Router after upgrade:

24/7 power consumption:

New order in the rack:


----------



## Gunfire

Quote:


> Originally Posted by *BlackCat33*
> 
> New order in the rack:
> 
> 
> Spoiler: Warning: Spoiler!












Very nice


----------



## BlackCat33

Quote:


> Originally Posted by *Gunfire*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Very nice










Thank you!


----------



## jonjryjo

Here is my lowly (compared to some here) server:
Fractal Define R4
Windows 2012 R2 Datacenter x64
AMD FX 8320
Corsair Vengeance 16GB DDR3
Intel Gigabit NIC
Highpoint Rocket 640L RAID
GA-78LMT-USB3
2x WD Black 2TB
2x WD Blue 1TB
3x WD Green 2TB
1x WD Green 4TB

Total storage: 16TB

Usage: VMs, software development, torrents, and holding my media collection.


----------



## EvilMonk

Quote:


> Originally Posted by *jonjryjo*
> 
> Here is my lowly(compared to some of those here) server:
> Fractal Define R4
> Windows 2012 R2 Datacenter x64
> AMD FX 8320
> Corsair Vengeance 16GB DDR3
> GA-78LMT-USB3
> 2x WD Black 2TB
> 2x WD Blue 1TB
> 3x WD Green 2TB
> 1x WD Green 4TB
> 
> Total storage: 15TB
> 
> Usage: VMs, software development, torrents, and holding my media collection.


I like it!!
Clean, well organised little server!!!

I'm building myself a new little server in this genre, with a Haswell Xeon E3 I have and eight 2TB SATA 6Gbps 7.2K HDDs that I will put on an LSI hardware RAID controller in RAID 5 with a hot spare.


----------



## jonjryjo

Quote:


> Originally Posted by *EvilMonk*
> 
> I like it!!
> Clean, well organised little server!!!
> 
> I'm building myself a new little server in this genre, with a Haswell Xeon E3 I have and eight 2TB SATA 6Gbps 7.2K HDDs that I will put on an LSI hardware RAID controller in RAID 5 with a hot spare.


Thanks! It was fun to build, and I'm enjoying managing it and having the ability to run a bunch of VMs (mainly Linux). BTW my math failed in that post; 16TB total, not 15.

Sounds like fun







. I wish I had an extra Intel chip for this, I'm not too fond of the performance I'm getting out of this Vishera, but maybe I'm expecting too much from it.
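For what it's worth, the drive list does add up to the corrected total:

```python
# Summing the drive list from the post above (sizes in TB).
drives_tb = [2, 2,      # 2x WD Black 2TB
             1, 1,      # 2x WD Blue 1TB
             2, 2, 2,   # 3x WD Green 2TB
             4]         # 1x WD Green 4TB
print(sum(drives_tb))   # 16, matching the corrected total
```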


----------



## EvilMonk

Quote:


> Originally Posted by *jonjryjo*
> 
> Thanks! It was fun to build, and I'm enjoying managing it and having the ability to run a bunch of VMs (mainly Linux). BTW my math failed in that post; 16TB total, not 15.
> 
> Sounds like fun
> 
> 
> 
> 
> 
> 
> 
> . I wish I had an extra Intel chip for this, I'm not too fond of the performance I'm getting out of this Vishera, but maybe I'm expecting too much from it.


They're still good chips... it's not like you have one of the first AMD A8 or A6 chips; it's a Bulldozer... it's a very capable chip in comparison


----------



## burksdb

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *jonjryjo*
> 
> Here is my lowly(compared to some of those here) server:
> Fractal Define R4
> Windows 2012 R2 Datacenter x64
> AMD FX 8320
> Corsair Vengeance 16GB DDR3
> Intel Gigabit NIC
> Highpoint Rocket 640L RAID
> GA-78LMT-USB3
> 2x WD Black 2TB
> 2x WD Blue 1TB
> 3x WD Green 2TB
> 1x WD Green 4TB
> 
> Total storage: 16TB
> 
> Usage: VMs, software development, torrents, and holding my media collection.






Are you just running a drive share, or using Storage Spaces? Software/hardware RAID, something?


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> They're still good chips... it's not like you have one of the first AMD A8 or A6 chips; it's a Bulldozer... it's a very capable chip in comparison


Thought that was Vishera? I should know, since I have an 8320.


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> Thought that was vishera? I should know since I have a 8320.


It's not a Fusion APU, it's a Bulldozer chip... an FX-8320, so it's not an A8 or an A6... that's why I said they're still good chips in comparison...


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> It's not a Fusion APU, it's a Bulldozer chip... an FX-8320, so it's not an A8 or an A6... that's why I said they're still good chips in comparison...


You're the only one talking about the A6/A8 chips. The 81xx are Bulldozer, while the 83xx are Vishera.


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> You're the only one talking about the A6/A8 chips. The 81xx are Bulldozer, while the 83xx are Vishera.


My bad... you're right, sorry about that. I haven't owned AMD chips since the six-core Phenom II and kind of lost track of their lineup since then.
I do have an HP ProLiant DL385 G6, though, but it's based on Opterons related to the Phenom II architecture...


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> My bad... you're right, sorry about that. I haven't owned AMD chips since the six-core Phenom II and kind of lost track of their lineup since then.
> I do have an HP ProLiant DL385 G6, though, but it's based on Opterons related to the Phenom II architecture...


Figuring out which chips are which confused me too. They don't seem to have the most consistent naming scheme.


----------



## DaveLT

Quote:


> Originally Posted by *EvilMonk*
> 
> My bad... You are right, sorry about that I didn't own AMD chips since the 6 cores phenom II and kinda lost track of their chips since then.
> I do have an HP Proliant DL 385 G6 though but its based on opterons that are related to chips of the phenom II architecture...


Yes, they are indeed. The 24xx chips are based on the K10 arch.


----------



## Shiveron

Quote:


> Originally Posted by *driftingforlife*
> 
> Wish I still had my rack and could have one in my room. Damn crap floor that can't take much weight


Can you not bolt your rack to a good solid piece of plywood to help distribute the weight? Any floor deemed safe to live on should be able to take a few hundred pounds in a 1m² or so space.


----------



## mbudden

Quote:


> Originally Posted by *Shiveron*
> 
> Can you not bolt your rack to a good solid piece of plywood to help distribute the weight? Any floor deemed safe to live on should be able to take a few hundred pounds in a 1m² or so space.


This.

All about spreading that weight.

But also, why would you want a rack in your room? lol. Unless you like the sounds of fans and the heat generated by a decently equipped rack.


----------



## driftingforlife

Good idea, but it's not going to happen. I might be able to move out next year, so we'll see what happens.

My plan was to watercool the systems I'll have in it (main rig / file server / VM server) and link them up to a rad outside. Also, I like some noise at night; I get to sleep better


----------



## savagemic

Quote:


> Originally Posted by *jonjryjo*
> 
> Here is my lowly(compared to some of those here) server:
> Fractal Define R4
> Windows 2012 R2 Datacenter x64
> AMD FX 8320
> Corsair Vengeance 16GB DDR3
> Intel Gigabit NIC
> Highpoint Rocket 640L RAID
> GA-78LMT-USB3
> 2x WD Black 2TB
> 2x WD Blue 1TB
> 3x WD Green 2TB
> 1x WD Green 4TB
> 
> Total storage: 16TB
> 
> Usage: VMs, software development, torrents, and holding my media collection.


I love this server! I wanted to make one using the R5!


----------



## CloudX

Anyone play around with RemoteFX on Server 2012 yet? Seems pretty cool, I got a little test VM up with an old DX11 AMD GPU in the server. Works really nice!


----------



## jonjryjo

Quote:


> Originally Posted by *burksdb*
> 
> 
> are you just running drive share or using storage spaces. software / hardware raid something?


No RAID or anything; I just keep the important stuff backed up via CrashPlan. I should probably add an extra line of defense though... it would be a pain to rip all that content again.

Specifically, I run Windows Server 2012 R2 Datacenter as the host, and two instances of Windows Server 2012 R2 Standard (one for Active Directory, and one to share files and run the Plex server). Then I also have three Linux VMs (one for OpenVPN and two for development purposes).


----------



## burksdb

Quote:


> Originally Posted by *jonjryjo*
> 
> No RAID or anything; I just keep the important stuff backed up via CrashPlan. I should probably add an extra line of defense though... it would be a pain to rip all that content again.
> 
> Specifically, I run Windows Server 2012 R2 Datacenter as the host, and two instances of Windows Server 2012 R2 Standard (one for Active Directory, and one to share files and run the Plex server). Then I also have three Linux VMs (one for OpenVPN and two for development purposes).


I was just thinking 16TB is a lot of data I would hate to lose, even if it was just movies / music / TV shows (which is most of mine).

I'm running unRAID with five 3TB Reds and a 128GB Vertex 4 RAID 0 cache with a 10Gb NIC, running on ESXi. Total usable space is around 11TB.

My other server is running Esxi with:

Sophos UTM

Server 2012 R2: Plex, NZBDrone, NZBGet, CouchPotato, CCProxy. I've been able to run six HD transcodes at once on this VM using 16 threads and 8GB RAM

Win 7: Mumble


----------



## mbudden

Thought I'd share this with you guys.
http://www.amazon.com/Lenovo-ThinkServer-TS440-70AQ0009UX-Computer/dp/B00ILH15DA/ref=sr_1_2

Processor: Intel Xeon E3-1225 v3 Quad Core Processor (8M Cache, 3.2GHz - 3.60GHz) 84W
Hard Drive: None. Supports up to 4 x 3.5" Hard Drives | Please Note that hard drive caddies are not included. They are only placeholders.
RAM: 4GB DDR3 1600MHz | Optical Drive: SuperMulti 8X DVD+/-R/RW Dual Layer

$299

All you have to do is buy the HDD caddies (IIRC they're $15 apiece).


----------



## M3nta1

Man, that's a pretty good deal. A proper Xeon CPU... support for 4 hard drives... Why must I be a broke college student? That would be awesome. It even matches my new laptop...


----------



## cones

The RAM is low, though, but it's a good price with everything else.


----------



## mbudden

Quote:


> Originally Posted by *cones*
> 
> The RAM is low though, but a good price with everything else.


For the price, you're complaining about it only having 4GB of ECC RAM?
Not to mention the ability to have redundant PSUs.


----------



## DaveLT

Quote:


> Originally Posted by *M3nta1*
> 
> Man, that's a pretty good deal. A proper Xeon CPU... support for 4 hard drives... Why must I be a broke college student? That would be awesome. It even matches my new laptop...


You call that a proper Xeon?


----------



## cones

Quote:


> Originally Posted by *mbudden*
> 
> For the price you're complaining about it only having 4GB of ECC RAM?
> Not to mention the ability to have redundant PSU's.


I'm not; the price would go up if it had more. It's just low for most of the tasks I would personally run.


----------



## M3nta1

Quote:


> Originally Posted by *DaveLT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *M3nta1*
> 
> Man, thats a pretty good deal. Proper Xeon CPU... 4 hard drive support... Why must I be a broke college student, that would be awesome. Even matches my new laptop...
> 
> 
> 
> You call that a proper xeon?

I have a Celeron in my server, sooo yeah, I do call it a proper Xeon. Especially paired with ECC RAM


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> The RAM is low though, but a good price with everything else.


Still, if you buy the RAM directly from Lenovo it's going to cost a lot more than buying it from a reseller, so it's always better to buy it afterwards from a retailer.


----------



## EvilMonk

Quote:


> Originally Posted by *DaveLT*
> 
> You call that a proper xeon?


They are indeed proper Xeon CPUs... I have an E3-1246 v3 quad core clocked @ 4.1GHz, and under most tasks it's more powerful than my six-core X5650 clocked @ 4.6GHz


----------



## awil95

*OS:* Windows Server 2012 R2 Essentials
*Case:* ARK IPC-3U380 3U Rackmount Chassis
*CPU:* Intel Xeon E3-1246 v3
*Motherboard:* SUPERMICRO MBD-X10SL7-F-O
*Memory:* 2x Crucial 8GB ECC Unbuffered DDR3 1600
*PSU:* Rosewill Photon 550W
*OS HDD:* Samsung 850 Pro 256GB SSD
*Storage HDD(s):* 2x WD Red 2TB RAID1; 5x WD Red 3TB RAID1E
*Server Manufacturer:* Myself

This server is not complete, as it is missing the 5x WD Red 3TB drives in RAID1E. I will be installing them soon inside an ICY Dock 5-bay cage that will sit in the three 5.25in bays.
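For anyone unfamiliar with RAID 1E: it stripes mirrored copies of each block across an odd number of drives, so (assuming the usual two-copy layout) usable space works out to half the raw capacity, same as plain RAID 1:

```python
def mirrored_usable(n_drives, drive_tb):
    """RAID 1 and RAID 1E keep two copies of every block,
    so usable space is half the raw capacity."""
    return n_drives * drive_tb / 2

print(mirrored_usable(5, 3))  # 5x WD Red 3TB in RAID 1E -> 7.5 TB usable
print(mirrored_usable(2, 2))  # 2x WD Red 2TB in RAID 1  -> 2 TB usable
```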


----------



## christoph

Quote:


> Originally Posted by *awil95*
> 
> *OS:* Windows Server 2012 R2 Essentials
> *Case:* ARK IPC-3U380 3U Rackmount Chassis
> *CPU:* Intel Xeon E3-1246 v3
> *Motherboard:* SUPERMICRO MBD-X10SL7-F-O
> *Memory:* 2x Crucial 8GB ECC Unbuffered DDR3 1600
> *PSU:* Rosewill Photon 550W
> *OS HDD:* Samsung 850 Pro 256GB SSD
> *Storage HDD(s):* 2x WD Red 2TB RAID1; 5x WD Red 3TB RAID1E
> *Server Manufacturer:* Myself
> 
> This server is not complete, as it is missing the 5x WD Red 3TB drives in RAID1E. I will be installing them soon inside an ICY Dock 5-bay cage that will sit in the three 5.25in bays.
> 
> 
> Spoiler: Warning: Spoiler!


nice


----------



## christoph

Is it me, or does the spoiler not hide the content?


----------



## EvilMonk

Quote:


> Originally Posted by *christoph*
> 
> Is it me, or does the spoiler not hide the content?


Nah, it doesn't; you're right!


----------



## cones

Quote:


> Originally Posted by *christoph*
> 
> Is it me, or does the spoiler not hide the content?


Probably just got the format wrong; the stuff to hide has to go between the tags. You don't need two spoilers.


----------



## christoph

got it


----------



## BlackCat33

Quote:


> Originally Posted by *awil95*
> 
> *OS:* Windows Server 2012 R2 Essentials
> *Case:* ARK IPC-3U380 3U Rackmount Chassis
> *CPU:* Intel Xeon E3-1246 v3
> *Motherboard:* SUPERMICRO MBD-X10SL7-F-O
> *Memory:* 2x Crucail 8GB ECC Unbuffered DDR3 1600
> *PSU:* Rosewill Photon 550W
> *OS HDD:* Samsung 850 Pro 256GB SSD
> *Storage HDD(s):* 2x WD Red 2TB RAID1; 5x WD Red 3TB RAID1E
> *Server Manufacturer:* Myself
> 
> This server is not complete as it is missing the 5x WD Red 3TB Drives in a RAID1E. I will be installing them soon inside a ICY Dock 5Bay Cage that will sit in the 3 5.25in Bays.


Great homemade server!








Little question: why did you choose a rear-to-front CPU fan airflow orientation instead of the common front-to-rear?


----------



## awil95

Quote:


> Originally Posted by *BlackCat33*
> 
> Great homemade servers!
> 
> 
> 
> 
> 
> 
> 
> 
> Little question: why did you choose a rear-to-front CPU fan airflow orientation instead of the common front-to-rear?


Great question! I had to turn the heatsink/fan assembly 180 degrees due to the position of the hard drive. Normally this would have blown air toward the front of the case, but I also flipped the fan, so all fans and airflow run front to rear.


----------



## BlackCat33

Quote:


> Originally Posted by *awil95*
> 
> Great question! I had to turn the heatsink/fan assembly 180 degrees due to the position of the hard drive. Normally this would have blown air toward the front of the case, but I also flipped the fan, so all fans and airflow run front to rear.


Very interesting solution








Anyway, I think those rackmount cases are not perfect (I own two 2U iStar cases, same as yours) because they're very short, with no space for a CPU fan and disk drives, especially in combination with server-grade Supermicro motherboards. Also, the PSU location on the right side requires very long PSU cables plus ATX 24-pin and CPU 8-pin extension adapters for the motherboard. Looks like iStar engineers hate computer enthusiasts


----------



## awil95

Quote:


> Originally Posted by *BlackCat33*
> 
> Very interesting solution
> 
> 
> 
> 
> 
> 
> 
> 
> Anyway, I think those rackmount cases are not perfect (I own two 2U iStar cases, same as yours) because they're very short, with no space for a CPU fan and disk drives, especially in combination with server-grade Supermicro motherboards. Also, the PSU location on the right side requires very long PSU cables plus ATX 24-pin and CPU 8-pin extension adapters for the motherboard. Looks like iStar engineers hate computer enthusiasts


The only real purpose for these shallow rackmount cases is two-post rack users, like me: at our office we only have a 20U wall-mounted swing-out rack that is 18in deep. Any deeper server chassis needs rails and a four-post rack. I install servers daily for work; I just set up a 4U Geovision NVR server for a warehouse with about 64 IP cams, 16x 3TB WD Purple drives, and quad Intel NICs.


----------



## Whisenhunter

I think it's time for me to finally share my full home data center setup as it stands now. I have been piecing this cluster together for a while, mostly with things that fall into my lap after being removed from production to cycle in new servers. I also have five Iomega ix4-200d NASes that I got for free because they would no longer boot. The Synology DS212j was a gift from Synology, and I purchased the DS1512+ later on.

Currently from top to bottom this is how everything is set up:

Synology DS1512+ - currently set up with 4 2TB drives (2 WD Reds and 2 Hitachi enterprise-grade) in RAID5. This NAS hosts all household backups from all desktops and laptops (including all the Macs). It also provides my vSphere cluster with iSCSI and NFS VM datastores for vSphere HA

Synology DS212j - currently set up with 2 1.5TB drives in RAID 1. This holds all backups from the two Veeam instances that are running, both on separate Server 2012 R2 VMs.

iomega ix4-200d - Currently set up with 4 250GB drives in RAID 5. This holds all critical backups from the DS1512+

iomega ix4-200d - Currently set up with 4 250GB drives in RAID 5. This holds all critical backups from the DS212j

Server 1 - HP ProLiant DL360G5 - Dual Xeon E5345 2.33GHz Quad core processors with 32GB of memory - 6x 250GB SSDs in RAID6

Server 2 - HP ProLiant DL360G5 - Dual Xeon E5345 2.33GHz Quad core processors with 32GB of memory - 6x 120GB SSDs in RAID6

Server 3 - HP ProLiant DL360G5 - Dual Xeon E5345 2.33GHz Quad core processors with 32GB of memory - 4X 1TB HDDs in RAID10

Server 4 - Dell PowerEdge 1900 - Single Xeon E5450 3.0GHz Quad Core Processor with 24GB of memory - 6x 250GB drives in RAID5 with an online hot spare

Server 5 - Dell PowerEdge C6100 - 4 server nodes total - each node has dual 6-core hyper-threaded Xeons with 48GB of memory - no drives, and the system is not plugged in due to a faulty PCM and some weird wire routing; I haven't worked on it since it fell into my lap. I do have 2 spare nodes for this system and am looking around for the C6100-to-desktop conversion chassis

On the back of the rack, top to bottom, is all my networking equipment. The current home router is a Peplink Balance 310, which normally has two Comcast connections bonded together and sent back to my Balance 380 in my real datacenter; my secondary Comcast connection is being used elsewhere at the moment, but the VPN tunnel is still up so I can stream Netflix in full HD (screw you, Comcast). There is also an AT&T LTE card, which I send all VoIP traffic through and fail over to when the Comcast connection goes down.

My main network switch for the house is a Dell PowerConnect 5448 - there are a total of 30 CAT6 jacks on the 1st and 2nd floors of the house. For the 3rd floor I ran fiber in conduit up to the 3rd floor IDF where it is connected to a 16 port switch to connect all 12 CAT6 jacks on the 3rd floor (Master bedrooms). I also ran fiber to my office because why not...

From there I have a Dell PowerConnect 5424 which handles all iSCSI traffic keeping it very busy and extremely chatty. I set up a separate VLAN for switch management and SNMP data so my Cacti server didn't go crazy trying to grab data.

I have a Dell RPS-600 that both of these switches are connected to on a different UPS.

Now for PICS!!!











This is of course always a work in progress, but I figured it was time to make a somewhat introductory post in this thread.

P.S. ALWAYS keep a crash cart in your datacenter - you'll thank yourself many times.


----------



## cones

That is a lot of SSD storage.


----------



## Whisenhunter

Quote:


> Originally Posted by *cones*
> 
> That is a lot of SSD storage.


Surprisingly, it does not add up to that much... VMSVR01.Skynet.local, which has the 6x 250GB SSDs in RAID 6, only has 926.5GB of capacity. VMSVR02.Skynet.local, with the 6x 120GB SSDs, adds up to 442GB.

I run most of my VMs over iSCSI except for my Windows servers which include my Domain Controller, Exchange Server, Terminal Server, and my Backup Servers (Veeam FTMFW). The Windows servers are split up between the hosts for best resource headroom, at least according to vSphere HA...
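For anyone wondering how RAID 6 eats into those raw numbers: two drives' worth of capacity goes to parity, and the leftover shrinks again when decimal GB are reported as binary GiB. A quick sketch (the `raid_usable` helper below is purely illustrative, not from any of the setups in this thread):

```python
def raid_usable(drives, size_gb, level):
    """Rough usable capacity for common RAID levels.

    Accounts only for parity/mirror overhead; real arrays report a
    bit less after controller metadata and filesystem formatting.
    """
    data_drives = {
        0: drives,          # striping, no redundancy
        1: drives // 2,     # mirrored pairs
        5: drives - 1,      # one drive of parity
        6: drives - 2,      # two drives of parity
        10: drives // 2,    # striped mirrors
    }[level]
    usable_gb = data_drives * size_gb
    usable_gib = usable_gb * 1e9 / 2**30   # decimal GB reported as binary GiB
    return usable_gb, round(usable_gib, 1)

print(raid_usable(6, 250, 6))  # (1000, 931.3), close to the 926.5 reported
print(raid_usable(6, 120, 6))  # (480, 447.0), close to the 442 reported
```

The small gap between these numbers and what the controllers actually report is the metadata and formatting overhead.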


----------



## cones

Quote:


> Originally Posted by *Whisenhunter*
> 
> It actually does not add up to that much surprisingly... VMSVR01.Skynet.local which has the 6x 250GB SSDs in RAID 6 only has 926.5 GB of storage capacity. VMSVR02.Skynet.local which has 6x 120GB SSDs adds up to 442GB
> 
> I run most of my VMs over iSCSI except for my Windows servers which include my Domain Controller, Exchange Server, Terminal Server, and my Backup Servers (Veeam FTMFW). The Windows servers are split up between the hosts for best resource headroom, at least according to vSphere HA...


~2.2TB total with ~1.3TB usable is still a lot of SSD storage for "home" use. Much bigger and way faster than any I have, and also much more expensive.


----------



## Gunfire

Quote:


> Originally Posted by *Whisenhunter*
> 
> 
> 
> *[full home datacenter tour snipped]*












But +1 for a local


----------



## Whisenhunter

Quote:


> Originally Posted by *Gunfire*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> But +1 for a local


Thanks! The SSDs also fell into my lap along with the servers







I currently have 6x 120GB SSDs sitting on my desk doing nothing, as I haven't found a home for them yet; maybe the C6100 is in order for some SSDs... Always fun to increase the load on the power dams here in Washington by firing up another server.

Funny thing is, my datacenter started out as useless garage space (a double-depth single-car garage), so I had my contractor frame it in along with a general-purpose storage room.

Currently the door is waiting to be installed; I custom-ordered an 8-foot door so I can roll that massive APC 48U rack in and out of the room. In case you're curious, it's an AR3307, which I picked up off Craigslist in mint condition for $50. I had to replace all the slam locks and door locks on it, though, as it did not come with keys. I have another AR3307 chilling in the garage as well, but its fate is unknown at this time.

Once the door is installed I'll order a set of rubber curtains to separate the cold/hot aisles. I built this room with a 6-inch passive intake duct pulling cold air from outside into the front of the room, and a 300 CFM fan mounted in the ceiling at the rear pushing all the hot exhaust air outside. Now that the house has passed final inspection, I am designing ways to use the hot air to heat the house rather than waste it.
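As a rough sanity check on a 300 CFM exhaust fan like that, the standard HVAC sensible-heat approximation (BTU/hr = 1.08 x CFM x dT in Fahrenheit, at sea-level air density) gives the heat load the airflow can carry for a given intake-to-exhaust temperature rise. The numbers here are illustrative, not measurements from this room:

```python
def removable_watts(cfm, delta_t_c):
    """Heat (in watts) an airflow can carry away for a given temp rise.

    Uses the standard sensible-heat rule of thumb
    BTU/hr = 1.08 * CFM * delta_T(F), assuming sea-level air density.
    """
    delta_t_f = delta_t_c * 9 / 5          # convert the rise to Fahrenheit
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3.412              # BTU/hr -> watts

# 300 CFM of exhaust with a 20 C rise over intake:
print(round(removable_watts(300, 20)))  # -> 3419
```

So a single 300 CFM ceiling fan tops out around 3-4 kW of IT load before the room starts running hot, which is worth knowing before firing up more servers.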

Does anybody have any rack mount KVMs they may be looking to sell?


----------



## Bottomburp

Hi all

I have looked at this thread on and off for what seems like years now. Anyway, it suddenly occurred to me that I should post a few snaps of my home server. This only sprang to mind because I am about to build a new one but more of that later.

These pictures are of the mock-up stage of the build a couple of years ago but the current set-up uses 1 heavily modified (more on that later as well) Dell R200 server.


----------



## cones

That stand is custom, correct? It looks nice.


----------



## CloudX

Love it! I need to post a pic of my setup too lol


----------



## mbudden

Quote:


> Originally Posted by *cones*
> 
> That stand is custom correct? It looks nice.


Just by looking at the janky welds I assume it is.

Not sure why he/she didn't paint it before installing hardware.


----------



## Ferrari8608

Quote:


> Originally Posted by *mbudden*
> 
> Not sure why he/she didn't paint it before installing hardware.


I assume that would be because:
Quote:


> Originally Posted by *Bottomburp*
> 
> These pictures are of the mock-up stage of the build a couple of years ago but the current set-up uses 1 heavily modified (more on that later as well) Dell R200 server.


----------



## Bottomburp

Quote:


> Just by looking at the janky welds I assume it is.


Clearly not a welder, I see, mbudden. Tack it, check it, weld it, check it, paint it.









I have some finished photos around here somewhere....


----------



## CloudX

I'm a professional welder as well; it's my secondary career skill. Those welds will do!


----------



## Gunfire

Quote:


> Originally Posted by *Bottomburp*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> Clearly not a welder I see mbudden. Tack-it, Check-it, Weld-it, Check-it, Paint-it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have some finished photos around here somewhere....


Quote:


> Originally Posted by *CloudX*
> 
> I'm a professional welder as well, it's my secondary career skill. Those weld's will do!


This.


----------



## Apple Pi

Here is my ghetto datacruncher temp build, until I can get a better enclosure for it; though it seems this will work well enough.

The build consists of:

Two Noctua NH-U12DX i4 coolers
Dual Intel Xeon X5675 3.06GHz 6-core CPUs
24GB DDR3-1333 ECC registered RAM
4x 120GB Intel 330 SSDs
Dell C6100 motherboard
Custom-modded Rosewill 650W PSU
The whole build cost around $400 with scavenged parts and parts from eBay



The main purpose of this server will be for encoding tasks, but when I upgrade to a new GPU I will look into vGPU using modded GTX 680s.


----------



## Wildcard36qs

Had to mod the PSU so you could power the single node without the full chassis? Pretty cool. Those SATA cables look like they are nearly touching the fan hahaha


----------



## EvilMonk

Quote:


> Originally Posted by *Apple Pi*
> 
> Here is my ghetto datacruncher temp build until I can get a better enclosure for it, though this seems it will work well.
> 
> The build consists of,
> 
> Two Noctua NH-U12DXi4
> Dual X5675 3.06Ghz 6-Core Intel Xeon,
> 24GB 1333Mhz DDR3 ECC Registered RAM
> 4x120GB Intel 330 SSD
> Dell C6100 Motherboard
> Custom modded Rosewell 650W PSU.
> The whole build cost around $400 with scavenged parts and parts from e-bay
> 
> 
> 
> The main purpose of this server will be for encoding tasks, but when I upgrade to a new GPU I will look into vGPU using modded GTX 680s.


Damn brother!!!








Love the ghetto mod kudos on making it work together!








Those SATA cables scare the hell out of me though


----------



## Apple Pi

Sadly the SATA cables are quite rigid, and I would need to fix the drives in place to get the cables to move out of the way. I'm not too bothered by them, though; the fan has a low-noise adapter, so I'm not worried about the cables getting sucked in or anything.


----------



## FireBean

I really wish I could share photos of one of our datacenters here at Koch. We're actually working to reduce the amount of copper in the datacenter; it feels pretty gratifying to replace a 2-foot-wide trunk of copper with an inch of fiber.


----------



## AlphaZero

This is my first contribution to the forums!









*Server #1 as described in my signature:*

This is my main and most active server. My main workstation sits on the left side of my desk, and this server sits on the right (that's why you see the sub). This server is built entirely from off-the-shelf components.

It started in 2009 with a Core 2 Duo "Wolfdale", a 1TB WD Green, and 2x 1.5TB Seagate "time-bombs". It was built to replace a Compaq desktop with a 1TB WD Green, my first file server, where I stored the media I accessed with my Xbox modded to run XBMC. Today, only the CM Stacker case remains of the original build after all the upgrades through the years.

I know it's bad practice to overclock servers, as it affects stability and longevity, and while I certainly agree with that and don't advocate it, this is my home server; it's been thoroughly tested and has only ever gone down for upgrades and updates.

These pictures are a little old, as they don't show the Hard Drive Cages fully loaded as they are now.


----------



## ndoggfromhell

Why would you place a sub that close to a server? The vibrations can't be good for your data.
Quote:


> Originally Posted by *AlphaZero*
> 
> This is my first contribution to the forums!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Server #1 as described in my signature:*
> 
> This is my main and most active server. My main workstation sits on the left side of my desk, and this server sits on the right (that's why you see the sub). This server is built entirely from off-the-shelf components.


----------



## EvilMonk

My new server setup / network lab is just installed in my new Norco rack.


Server 1:
HP ProLiant DL380 G5 storage server:
Dual Intel Xeon quad-core X5450 3.0GHz, 32GB DDR2-667 FB-DIMM - HP Smart Array P400 512MB, 8x 146GB 10K RAID 5 - Windows Server 2008 R2 - Plex Server 0.9.11
StorageWorks MSA60, 12x 2TB RAID 6 - HP Smart Array P800 1GB
Plex library - 4.91TB Blu-ray/movie rips - 2.13TB TV rips - 131GB raw picture library

Server 2:
HP ProLiant DL385 G5:
Dual AMD Opteron quad-core 2.3GHz - 32GB DDR2-667 ECC-R - HP Smart Array P400 512MB, 8x 146GB 10K RAID 5 - ESXi 5.5 U1

Server 3:
HP ProLiant DL360 G5:
Dual Intel Xeon quad-core E5450 3.0GHz, 32GB DDR2-667 FB-DIMM - HP Smart Array P400 512MB, 4x 146GB 10K RAID 5 - Windows Server 2012 R2 - GeForce GT 640

Server 4:
HP ProLiant DL320 G5p:
Intel Xeon quad-core X3360 2.83GHz, 16GB DDR2-800 ECC - HP Smart Array E212 128MB, 2x 300GB 15K RAID 1 - Windows Server 2012 R2 + SQL Server 2012 - GeForce GT 640

Server 5:
HP ProLiant DL160 G6:
Dual Intel Xeon hexa-core L5640 2.26GHz, 72GB DDR3-1333R ECC - HP Smart Array P410 512MB, 4x 300GB 15K RAID 5 - Windows Server 2012 R2 + Exchange 2013 - Matrox G200e

Server 6:
HP ProLiant DL320 G6:
Intel Xeon hexa-core X5660 2.8GHz, 24GB DDR3-1333R ECC - HP Smart Array P410 512MB, 4x 300GB 15K RAID 5 - Windows Server 2012 R2 + SharePoint + Symantec Endpoint Management - GeForce GT 640

Server 7:
HP ProLiant SE316M1R2:
Dual Intel Xeon hexa-core X5650 2.67GHz, 48GB DDR3-1333R ECC - HP Smart Array P410 512MB, 8x 146GB 10K RAID 5 - Windows Server 2012 R2 + Hyper-V - GeForce GT 640

Server 8:
Apple Xserve 2008:
Dual Intel Xeon quad-core E5462 2.8GHz, 32GB DDR2-800 FB-DIMM - 3x 1TB SAS 7.2K - OS X Mavericks 10.9.3 Server - GeForce GT 120

And finally the network equipment (some of it is in the back of the rack with the Cisco equipment facing the wall, since I'm out of room now that the front is full):
1x Cisco 2651XM router with IOS 12.4(5)T Telephony/CME 4.1, upgraded to 256MB RAM / 48MB flash + VPN encryption & NM-PRI-1CT1-CSU
1x Cisco 2950T, IOS 12.3(4)
1x Cisco ASA 5505 Security Plus VPN/firewall on ASA 9.3(2), ASDM 7.2(3), upgraded to 1GB RAM / 4GB flash
2x 3Com 2824 unmanaged 24-port gigabit switches
1x 3Com 2848 managed 48-port gigabit switch
1x Juniper NetScreen 500 VPN/firewall, upgraded to 2GB flash
1x Juniper NSMXpress security appliance, upgraded to redundant PSUs
1x SonicWall EX2500 VPN/firewall
1x 8-port USB KVM + Logitech MK700 wireless keyboard + mouse


----------



## andymiller

I'm seeing a lot of HP G5s and G6s floating about.

I have a DL380 G5 with 2x 1.8GHz quad-core LV chips. Can anyone confirm whether it will quiet down a little if I add the second PSU? And will a second PSU consume any more power?


----------



## Wildcard36qs

Honestly, those older-gen servers are hot, loud, and thirsty for power. I'd really look at something newer and more efficient; my ThinkServer towers are cheap, really quiet, and sip power.

I don't think adding a 2nd PSU will quiet it down any. It may draw a little more power, but not much.


----------



## andymiller

It was just a thought, TBH. It's been acting as a shelf for my pedestal server and TiVo box for about six months; I was about to list it on eBay and use the cash to re-case my pedestal into something with hot-swap bays.


----------



## EvilMonk

Quote:


> Originally Posted by *andymiller*
> 
> Im seeing a lot of HP g5-6's floating about.
> 
> I have a DL380 G5, 2x 1.8ghz quad core LV, can anyone confirm if it will quiet down alittle if I add the second psu??? and will a second psu consume any more power???


1. Unfortunately, no, sorry








2. It depends on which setting you pick in the BIOS for how the server uses the two PSUs, but it will most likely use a little more juice to run both in parallel at all times in case one goes down... unless you turn on all the C-state settings in your BIOS. I mean, you can't magically power more components without using more electricity...


----------



## EvilMonk

Quote:


> Originally Posted by *andymiller*
> 
> was just a thought TBH, its been acting as a shelf for my pedestal server and TiVo box for about 6 months, was about to list it on ebay and use the cash to re case my pedestal into something with hot swap bays.


Well, you're using low-voltage chips in that server, which he might not have noticed... I actually find these servers quite good, and they still give me solid performance even today. My DL380 G5 storage server, hooked to the 24TB HP StorageWorks MSA60, does all my transcoding and is the Plex server for the whole house (with the wife watching all her TV shows on it), and it has never had any issues. I have a DL360 G5 with the same chips as well and it's still going strong after a couple of years; both machines are maintained regularly...


----------



## andymiller

The server was a real workhorse when I was using it; I just found the 2TB disk limit quickly became an issue.

My main server has 6x 3TB, plus 2x 1TB mirrored for the OS.

I have considered using the HP as a virtualization host and just connecting my current main server to it as an iSCSI box.


----------



## EvilMonk

Quote:


> Originally Posted by *andymiller*
> 
> Server was a real work horse when I was using it, just found the 2tb disc limit was quickly an issue.
> 
> my main server has 6x 3tb and 2x 1tb mirrored for OS.
> 
> have considered using the HP as a virtual host and just connecting my now main server as an iscsi box.


Well, you seriously should... I use the iSCSI target feature on Windows Server with the DL380 G5 storage server, and my VMware hosts connect to it via iSCSI to use its storage space. It works really well and fixed all my storage issues. Plus, with the number of cores you have on it, you'll have plenty to work with for virtualization


----------



## Reaper28

My 15TB (12TB usable) home storage server.









OS: WHS2011
Case: Fractal R4 - Black/No window
CPU: Intel G860 3GHz
CPU Cooler: CM Hyper 212 Evo
Motherboard: Gigabyte H77M-D3H
Memory: Kingston Black Series 2x4GB DDR3 1600MHz CL9
PSU: Seasonic G 650W
OS HDD (If you have one): WD Blue 500GB
Storage HDD(s): 5x3TB WD Red's - Raid 5
Raid Card: LSI 9260-4i w/BBU & Intel RES2SV240 20 port expander
Case fans: Fractal R2's & NF-F12's



A few more build pics here, for whoever's interested:

http://imgur.com/io5NL


----------



## cdoublejj

Quote:


> Originally Posted by *EvilMonk*
> 
> Damn brother!!!
> 
> 
> 
> 
> 
> 
> 
> 
> Love the ghetto mod kudos on making it work together!
> 
> 
> 
> 
> 
> 
> 
> 
> Those SATA cables scare me the hell out though


I bet it runs cool though AND probably much quieter too.


----------



## yawa77

I've been lurking in this thread for a while. There are some awesome builds on here! Would anyone be willing to point me in the right direction for putting together a server for home use? I'd like it to be able to run VMs for different OSes smoothly, plus things like Plex and so on. I have never built something like this, so any info would be helpful. Thank you!


----------



## M3nta1

Quote:


> Originally Posted by *yawa77*
> 
> I've been lurking in the thread for a while. There are some awesome builds on here! Would anyone be willing to point me in the right direction for putting together a server that will be used for home use. I'd like for it to be able to run VMs smoothly for different OSs and things like Plex and so on. I have never built something like this so any info would be helpful. Thank You!


A couple of things:
How many VMs? What OSes, and what jobs will those VMs be doing?

As for Plex, anything with a PassMark score over 2000 is enough for a single 1080p stream. So if you plan to do 2 or 3 1080p streams at a time, find a processor with a PassMark score of at least 2000 x the number of streams you anticipate, then add a little extra to run the OS and such. RAM-wise, my Plex server runs fine with 4GB, but I do plan to upgrade to 8GB.

You also have to consider that this will be on 24/7, so if you can find modern power-sipping chips that fit what you want to do, that's a great place to start. Everything else just needs to be quality parts: not necessarily the highest end, but solid and reliable.
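That rule of thumb turns into a one-liner; the 2000-points-per-stream figure comes from the post above, while the OS-headroom default is an illustrative guess, not a hard limit:

```python
def required_passmark(streams_1080p, per_stream=2000, os_headroom=1000):
    """Minimum CPU PassMark score for N simultaneous 1080p Plex
    transcodes, per the rule of thumb quoted in this thread
    (os_headroom is a rough allowance for the OS and other services)."""
    return streams_1080p * per_stream + os_headroom

print(required_passmark(3))  # -> 7000, i.e. shop for a ~7000-PassMark CPU
```

For three concurrent 1080p streams you'd look for something around a 7000 PassMark score, then compare candidate CPUs on the PassMark site.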


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I've been lurking in the thread for a while. There are some awesome builds on here! Would anyone be willing to point me in the right direction for putting together a server that will be used for home use. I'd like for it to be able to run VMs smoothly for different OSs and things like Plex and so on. I have never built something like this so any info would be helpful. Thank You!


Sure thing. How many VMs would you like to run, and what would you run on them? Databases? Web servers? File servers?
What's the size of your Plex library? Will you transcode HD in real time with Plex?
That's all information we more or less need to get an idea of the hardware required to run all of these tasks at the same time on your server.
Let us know and it will be easy to give you an idea


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> Sure thing. How many VMs would you like to run? What would you run on these VMs? Databases? Web servers? File Servers?
> Whats the size of your plex library? Will you transcode HD in real time with Plex?
> Its all information that is more or less needed to give us an idea of the hardware you'll need to run all of these tasks at the same time on your server.
> Let us know and it will be easy to give you an idea


VMs = Less than 5. I'm not a hardcore VM guy currently. I experiment with different OS and would like to run a server/cloud I could access from outside of my home network. Maybe one for SABnzbd/torrents.

Plex: Transcoding in real time yes. Plex lib size = about 7 - 10 TBs. I of course will add more as the wife or I need/want new content.

OS: I can do Windows or Linux.

Again, I know this may seem like the blind leading the blind while asking for directions.. I am admittedly very new to this kind of venture. Any advice and guidance would be helpful.


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> VMs = Less than 5. I'm not a hardcore VM guy currently. I experiment with different OS and would like to run a server/cloud I could access from outside of my home network. Maybe one for SABnzbd/torrents.
> 
> Plex: Transcoding in real time yes. Plex lib size = about 7 - 10 TBs. I of course will add more as the wife or I need/want new content.
> 
> OS: I can do Windows or Linux.
> 
> Again I know this may seem like the blind making you blind and asking for directions..I am admittedly very new to this kind of venture. Any advice and guidance would be helpful.


Sorry for not asking before, but do you have a rough idea of your budget?
Do you want new hardware, or are you open to second-hand business-class server hardware from eBay?
Do you already have the hard drives on hand?
Thanks!!


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> Sorry for not asking before but do you have an idea of what your budget is around?
> Do you want to look for new hardware or are you open to second hand business class server hardware from eBay?
> Do you already have the hard disk drives on hand?
> Thanks!!


I would prefer used, like on eBay. I've looked on there, but I have no idea what I'd need. I'd need disks. Budget is another thing I'd have to get from you guys, but the lower the better.. I don't need a 4-CPU, one-zillion-thread system.


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I would prefer used like in EBay. I've looked on there but I have no idea what I'd need. I'd need disk. Budget is another thing I'd have to get from you guys, but the lower the better..I don't need 4 CPU, 1 zillion thread system.


You can start by looking at these DL180 G6s, which support up to two 6-core Westmere-EP Xeons, 18 DDR3 memory slots, and 12x 3.5" SAS2/SATA2 bays. The cheapest ones come with one quad-core CPU and 8GB of DDR3R ECC RAM. You would need to buy hard drives, and you can upgrade it in the future if you need to. I think it would be a great starter server for most of your needs; value-wise it's one of the best platforms to start with. It can also be upgraded to dual power supplies later if you buy a server with only one PSU.

http://www.ebay.com/sch/Servers-/11211/i.html?_from=R40&_sop=15&_nkw=dl180+g6


----------



## cones

Quote:


> Originally Posted by *yawa77*
> 
> ...
> Plex lib size = about 7 - 10 TBs. I of course will add more as the wife or I need/want new content.
> ...


That is going to be around $400 in just hard drives plus another $500 or so in hardware. The storage is what is going to cost you the most in this.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> You can start by looking at these DL180 G6 which support up to 2x 6 cores Xeons Westmere EP CPUs, 18x DDR3 memory slots and 12x 3.5" SAS2/SATA2 slots. The cheapest ones are coming with 1 quad core CPU and 8Gb of DDR3R ECC ram. You would need to buy hard drives and you could upgrade it in the future if you need to. I think it would be a great starter server for most of your needs, value wise its one of the great platform to start with. It can also be upgraded to dual power supplies in the future if you decide to buy a server with only 1 PSU.
> 
> http://www.ebay.com/sch/Servers-/11211/i.html?_from=R40&_sop=15&_nkw=dl180+g6


I have an untested Xeon L5410, Socket 771 (no cooler), that I just got for free, if that'll help with cost any. Also, how does storage work in the servers you posted? This one:
http://www.ebay.com/itm/HP-SE326M1-DL180-G6-2x-2-26GHz-L5520-24GB-RAM-5x-2-5-73GB-W-P400-/171620728057?pt=LH_DefaultDomain_0&hash=item27f5647cf9

only supports 73GB drives.


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I have an untested but free XEON L5410 Socket 771(w/o cooler) I just got for free if that'll help cost any. Also how does storage work in the servers you posted? http://rover.ebay.com/rover/1/711-5...057?pt=LH_DefaultDomain_0&hash=item27f5647cf9 only supports 73 gigs.


Unfortunately it's a different socket








The ProLiant G6 series uses LGA 1366 (Nehalem and Westmere CPUs)
The storage isn't limited to 73GB disks; that's just what came with that listing. Some sellers include small HP 73GB 15K SAS drives with the servers, and you can ditch those and use normal hard drives instead







The SE326M1 server isn't the best choice; sellers on eBay list those as DL180 G6s, but they are different servers... you should look for a real DL180 G6, and make sure you find one without SE326M1 in the name to be sure







Make sure it uses 3.5" LFF hard drives instead of 2.5" SFF hard drives (DL180 G6s use 3.5" LFF drives in 95% of cases; the remaining 5% are mostly BTO builds). BTW, LFF means Large Form Factor and SFF means Small Form Factor...


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> Unfortunately its a different socket
> 
> 
> 
> 
> 
> 
> 
> 
> The Proliant G6 Series are using LGA 1366 (Nehalem and Westmere CPU CPUs)
> The storage isnt related to 73Gb disks, its just an example in that case since some sellers are including small HP SAS 73Gb 15K SAS Hard Drives with the servers, you can ditch those and use normal hard drives instead
> 
> 
> 
> 
> 
> 
> 
> The SE326M1 server isnt the best choice to make, sellers on ebay are selling those are DL180 G6 but they are different servers... you should look for a real DL180 G6 server make sure you find one without SE326M1 in the name to be sure
> 
> 
> 
> 
> 
> 
> 
> Make sure it uses 3.5" LFF hard drive instead of 2.5" SFF hard drives (DL180 G6 are using in 95% of cases 3.5" LFF drives, the remaining 5% are mostly BTO builds)


I realize they're a different socket; I was just telling you what I have in case it would help with your selection.








http://www.ebay.com/itm/HP-ProLiant-DL180-G6-2U-2X-XEON-QC-L5520-2-26GHZ-12xTRAYS-0G-MEM-P410-512MB-/261448063080?pt=LH_DefaultDomain_0&hash=item3cdf84d868 - is this what you're talking about?


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I realize there different socket, was just trying to give you what I have in case it would help with your selection.
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.ebay.com/itm/HP-ProLiant-DL180-G6-2U-2X-XEON-QC-L5520-2-26GHZ-12xTRAYS-0G-MEM-P410-512MB-/261448063080?pt=LH_DefaultDomain_0&hash=item3cdf84d868 is what your talking about?


Yup, that's exactly what I'm talking about







Plus you get 2 quad-core CPUs with hyper-threading, a RAID controller with a good amount of memory on it (probably with the BBWC module), and the trays for the HDDs are included!


----------



## pvt.joker

Quote:


> Originally Posted by *EvilMonk*
> 
> You can start by looking at these DL180 G6 which support up to 2x 6 cores Xeons Westmere EP CPUs, 18x DDR3 memory slots and 12x 3.5" SAS2/SATA2 slots. The cheapest ones are coming with 1 quad core CPU and 8Gb of DDR3R ECC ram. You would need to buy hard drives and you could upgrade it in the future if you need to. I think it would be a great starter server for most of your needs, value wise its one of the great platform to start with. It can also be upgraded to dual power supplies in the future if you decide to buy a server with only 1 PSU.
> 
> http://www.ebay.com/sch/Servers-/11211/i.html?_from=R40&_sop=15&_nkw=dl180+g6


I thought that auction looked familiar.. that's a place about 20 min from me.. Makes me want to go in and see what they're asking in store for it!


----------



## yawa77

Quote:


> Originally Posted by *pvt.joker*
> 
> I thought that auction looked familiar.. that's a place about 20 min from me.. Makes me want to go in and see what they're asking in store for it!


You suck j/k. I wish I could find local places that were getting rid of their hardware like that.


----------



## pvt.joker

Let's just say that these guys, the last time I was in their shop, had little to no clue what they were selling. It's been years, but all the local places are terrible.


----------



## yawa77

Maybe, but at least you'd know what you're getting. eBay, while OK most of the time, can be a gamble.


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> Maybe, but at least you'd know what you're getting. eBay, while OK most of the time, can be a gamble.


Well, I haven't had any problems with the nine servers I've bought on eBay over the last two years








Plus you have eBay buyer protection in case something goes wrong, which didn't exist nine years ago when I started ordering on eBay


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> Well, I haven't had any problems with the nine servers I've bought on eBay over the last two years
> 
> 
> 
> 
> 
> 
> 
> 
> Plus you have eBay buyer protection in case something goes wrong, which didn't exist nine years ago when I started ordering on eBay


I know..just wanted to give him crap


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I know..just wanted to give him crap


And I wish those DL180 G6s had been this cheap when I bought all my G6-series servers; it's a no-brainer, and I would have bought some of those instead... Even though they're a lot bigger than my 1U G6s, in the long run they're far better upgrade-wise and would last me a lot longer...


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> And I wish those DL180 G6s had been this cheap when I bought all my G6-series servers; it's a no-brainer, and I would have bought some of those instead... Even though they're a lot bigger than my 1U G6s, in the long run they're far better upgrade-wise and would last me a lot longer...


I saw your pics. How many did you have in all and why so many? Just curious as it might give me ideas.


----------



## pe4nut666

I wish shipping were cheaper for servers on eBay. I live in Canada, so a $400 server is really $900 after shipping; it's heartbreaking.


----------



## EvilMonk

Quote:


> Originally Posted by *pe4nut666*
> 
> I wish shipping were cheaper for servers on eBay. I live in Canada, so a $400 server is really $900 after shipping; it's heartbreaking.


I have a broker in Vermont; I get the stuff shipped there, and he brings it to Montreal with a bunch of other customers' orders, so it's a lot cheaper to get things shipped and through customs.


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> I saw your pics. How many did you have in all and why so many? Just curious as it might give me ideas.


I have 5 G6 servers, 3 G5s, and 1 Xserve.
I'm a sysadmin and do a lot of virtualization plus network/IP-phone management work.
I also have a backup server hooked to the office that's always online through VPN.
I run a lot of network environments for my Juniper/Cisco certifications, since I have to keep up to date and always renew. It's quite a lot of work to stay current with the Microsoft and VMware certifications too, but basically I have to for my job. Since I'm not doing any IT consulting work now, it's less stressful and time-consuming to keep up all the other fields I used to work in.








At least I can ease up on the LAMP, Exchange, and SharePoint side now, since I don't do any web/Exchange admin work.


----------



## cdoublejj

Quote:


> Originally Posted by *pvt.joker*
> 
> Let's just say that these guys, the last time I was in their shop, had little to no clue what they were selling. It's been years, but all the local places are terrible.


who?


----------



## pe4nut666

Quote:


> Originally Posted by *EvilMonk*
> 
> I have a broker in Vermont; I get the stuff shipped there, and he brings it to Montreal with a bunch of other customers' orders, so it's a lot cheaper to get things shipped and through customs.


i will have to keep you in mind next time i go server shopping


----------



## EvilMonk

Quote:


> Originally Posted by *pe4nut666*
> 
> i will have to keep you in mind next time i go server shopping


I can ask if he handles stuff outside of Quebec and Ontario. I know he delivers to Quebec City, Montreal, Ottawa, and Toronto, but I don't know about PEI; next time I order I'll ask. You have to open an account as well.


----------



## pvt.joker

Quote:


> Originally Posted by *cdoublejj*
> 
> who?


a shop here in CO called Action Computers.


----------



## cdoublejj

looks like a medium sized bulk sales place.


----------



## yawa77

Are there any other server resale sites that deliver in the US that have good deals?


----------



## driftingforlife

Spent all day on a ladder doing this. Need to do the uplinks tomorrow (the blue ones)


----------



## Darkcyde

Updated my server with a little more horsepower so I can stream video to my PS3 over my network and downsampled music to my iPhone over the net. The old Atom-based rig wasn't cutting it.

Specs are in my sig.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> I have 5 G6 servers, 3 G5s, and 1 Xserve.
> I'm a sysadmin and do a lot of virtualization plus network/IP-phone management work.
> I also have a backup server hooked to the office that's always online through VPN.
> I run a lot of network environments for my Juniper/Cisco certifications, since I have to keep up to date and always renew. It's quite a lot of work to stay current with the Microsoft and VMware certifications too, but basically I have to for my job. Since I'm not doing any IT consulting work now, it's less stressful and time-consuming to keep up all the other fields I used to work in.
> 
> 
> 
> 
> 
> 
> 
> 
> At least I can ease up on the LAMP, Exchange, and SharePoint side now, since I don't do any web/Exchange admin work.


That's crazy! That's another thing I'd like to be able to do: pull files off my laptop with something like BTSync and have them pullable from my home network. I wonder how much space I'll need to begin with. I can throw in an SSD I have for the OS and then a 4TB drive or so to start with. Do the servers you recommended have a storage limit? I.e., with the 75-gig drives they ship with, am I limited to only 75 gigs per drive?


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> That's crazy! That's another thing I'd like to be able to do: pull files off my laptop with something like BTSync and have them pullable from my home network. I wonder how much space I'll need to begin with. I can throw in an SSD I have for the OS and then a 4TB drive or so to start with. Do the servers you recommended have a storage limit? I.e., with the 75-gig drives they ship with, am I limited to only 75 gigs per drive?


No worries, you don't have any storage limit; it works just like a normal controller. You can use the same hard drives you'd use in a regular PC if you want; you can even use consumer SSDs. The 73GB drives, as I said, are just there because they put the smallest HDs available in the server to sell it; you can put in 4TB drives if you want.


----------



## yawa77

Dell Poweredge R900

This is a VERY Clean Machine
30 Day Warranty

No OS Installed
No Media Included

Specifications
4x Intel Xeon Quad Core 2.4ghz 16mb 1066fsb CPUs (Intel E7440 - 16 Cores Total)
32x 1gb DDR2 ECC Memory (32gb Total)
2x 72gb 10k 2.5" SAS Drives
DVD Drive
Perc 6i SAS Raid Controller w/ Battery
DRAC 5 Remote Access Card
4x Gigabit Ethernet
2x Power Supplies

No Bezel Included
No Rails Included
No Power Cords Included

Is this a good deal for $199.99 plus $85 shipping?


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> Dell Poweredge R900
> 
> This is a VERY Clean Machine
> 30 Day Warranty
> 
> No OS Installed
> No Media Included
> 
> Specifications
> 4x Intel Xeon Quad Core 2.4ghz 16mb 1066fsb CPUs (Intel E7440 - 16 Cores Total)
> 32x 1gb DDR2 ECC Memory (32gb Total)
> 2x 72gb 10k 2.5" SAS Drives
> DVD Drive
> Perc 6i SAS Raid Controller w/ Battery
> DRAC 5 Remote Access Card
> 4x Gigabit Ethernet
> 2x Power Supplies
> 
> No Bezel Included
> No Rails Included
> No Power Cords Included
> 
> Is this a good deal for $199.99 plus $85 shipping?


Quite old and not upgradable (DDR2 FB-DIMMs and old Penryn-based Xeons on Socket 604), but for the price I guess it's what you should expect to pay; you can get the equivalent DL580 G5 for around the same price... You might want to invest a little more to get a ProLiant G6, which could be upgraded to dual 6-core Xeons and DDR3 memory and be almost twice as powerful as that server (DDR2 FB-DIMMs are a known bottleneck, and those CPUs are not that powerful compared to Westmere-EP). As an initial investment the Dell is cheaper, so if you want something you won't use for long, you can go with that one; if you want a server you'll upgrade and keep longer, you might be better off with a more recent Dell, IBM, or HP server with a newer quad-core LGA 1366 CPU and DDR3 memory.


----------



## Shaefurr

Just finished rebuilding my server, still waiting on some paracord to come in and I need to remake the PSU cover.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> Quite old and not upgradable (DDR2 FB-DIMMs and old Penryn-based Xeons on Socket 604), but for the price I guess it's what you should expect to pay; you can get the equivalent DL580 G5 for around the same price... You might want to invest a little more to get a ProLiant G6, which could be upgraded to dual 6-core Xeons and DDR3 memory and be almost twice as powerful as that server (DDR2 FB-DIMMs are a known bottleneck, and those CPUs are not that powerful compared to Westmere-EP). As an initial investment the Dell is cheaper, so if you want something you won't use for long, you can go with that one; if you want a server you'll upgrade and keep longer, you might be better off with a more recent Dell, IBM, or HP server with a newer quad-core LGA 1366 CPU and DDR3 memory.


http://www.ebay.com/itm/HP-ProLiant-DL380-G6-491315-001-Server-/261783757872?pt=LH_DefaultDomain_0&hash=item3cf3872430 any good?


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> http://www.ebay.com/itm/HP-ProLiant-DL380-G6-491315-001-Server-/261783757872?pt=LH_DefaultDomain_0&hash=item3cf3872430 any good?


That's a good one, but the auction ends Thursday at 11 AM, so you can be sure it won't go cheap; lots of people will be able to bid on it in the last 5-10 minutes, unfortunately...








But it's definitely a good choice; I've been looking to buy a DL380 G6 for the last 2-3 days as well.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> That's a good one, but the auction ends Thursday at 11 AM, so you can be sure it won't go cheap; lots of people will be able to bid on it in the last 5-10 minutes, unfortunately...
> 
> 
> 
> 
> 
> 
> 
> 
> But it's definitely a good choice; I've been looking to buy a DL380 G6 for the last 2-3 days as well.


That being said, there are a lot of guys with eBay sniper programs, so I looked at the "Buy It Now" options. What is your opinion of this? http://www.ebay.com/itm/HP-Proliant-DL380-G6-Server-E5506-2x-2-13GHz-Quad-Core-CPU-16GB-RAM-P410-1PSU-2U-/171667452575?pt=LH_DefaultDomain_0&hash=item27f82d729f&autorefresh=true It is LGA 1366 but uses DDR3-800, according to the Intel ARK site.


----------



## jibesh

Quote:


> Originally Posted by *yawa77*
> 
> That being said, there are a lot of guys with eBay sniper programs, so I looked at the "Buy It Now" options. What is your opinion of this? http://www.ebay.com/itm/HP-Proliant-DL380-G6-Server-E5506-2x-2-13GHz-Quad-Core-CPU-16GB-RAM-P410-1PSU-2U-/171667452575?pt=LH_DefaultDomain_0&hash=item27f82d729f&autorefresh=true It is LGA 1366 but uses DDR3-800, according to the Intel ARK site.


I would say go for this one. It has better and lower power processors, 12x 3.5" drive bays, 24GB RAM and a Dell H700 RAID controller.

DELL FS12-TY C2100 2x QUAD CORE L5630 2.13GHz 24GB RAM 12x TRAYS H700 - http://www.ebay.com/itm/281590303238


----------



## EvilMonk

Quote:


> Originally Posted by *jibesh*
> 
> I would say go for this one. It has better and lower power processors, 12x 3.5" drive bays, 24GB RAM and a Dell H700 RAID controller.
> 
> DELL FS12-TY C2100 2x QUAD CORE L5630 2.13GHz 24GB RAM 12x TRAYS H700 - http://www.ebay.com/itm/281590303238


He's right, I think, but if you are not ready to pay the difference ($356 vs $499), you'll have what you pay for. The HP server is fine too, but those CPUs don't have hyper-threading and the seller only included DDR3-800 (the CPUs only support up to DDR3-1066 anyway). I wouldn't say the Dell PERC H700 is any better than the Smart Array P410, though.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> He's right, I think, but if you are not ready to pay the difference ($356 vs $499), you'll have what you pay for. The HP server is fine too, but those CPUs don't have hyper-threading and the seller only included DDR3-800 (the CPUs only support up to DDR3-1066 anyway). I wouldn't say the Dell PERC H700 is any better than the Smart Array P410, though.


There is no optical bay, but I could install Linux from USB. I'd have to get some HDs to get it started too.


----------



## tiro_uspsss

Quote:


> Originally Posted by *EvilMonk*
> 
> He's right, I think, but if you are not ready to pay the difference ($356 vs $499), you'll have what you pay for. The HP server is fine too, but those CPUs don't have hyper-threading and the seller only included DDR3-800 (the CPUs only support up to DDR3-1066 anyway). *I wouldn't say the Dell PERC H700 is any better than the Smart Array P410, though*


I'd say it's significantly better actually


----------



## Wildcard36qs

I'm a Dell guy and the H700 is a great controller.


----------



## EvilMonk

Quote:


> Originally Posted by *Wildcard36qs*
> 
> I'm a Dell guy and the H700 is a great controller.


Well, I'm an HP guy; I have 6 servers with P410/P212 512MB/1GB controllers and a couple of servers running older P400/P800 512MB controllers, and they are quite good as well...

I still prefer to run an LSI MegaRAID 9240-8i controller in my PC, though, and my backup PC is running a RocketRAID 2720SGL SAS controller, which I find quite good for SSDs. I work on IBM servers at the office; they use IBM ServeRAID controllers, which run quite well, but most of them are rebranded LSI MegaRAID controllers. We got new HP servers 3 weeks ago, and the new Smart Array P440 4GB controllers are really fast.


----------



## yawa77

http://www.ebay.com/itm/Dell-PowerEdge-C2100-FS12-TY-2x-2-26GHz-QC-E5520-8GB-Ram-PERC-6-i-Trays-Rails-/181657810381?pt=LH_DefaultDomain_0&hash=item2a4ba635cd What about this one? It looks like the last one recommended but doesn't cost as much.


----------



## yawa77

or http://www.ebay.com/itm/HP-Proliant-DL380-G6-Virtualization-8-Core-Server-2x-2-4GHz-12GB-RAM-P410-1PS-1U/171356217786?_trksid=p2047675.c100005.m1851&_trkparms=aid%3D222007%26algo%3DSIC.MBE%26ao%3D1%26asc%3D27705%26meid%3Dfb375e08499b48fc83716f8507c9dd7d%26pid%3D100005%26rk%3D2%26rkt%3D6%26sd%3D181657944317&rt=nc&autorefresh=true


----------



## EvilMonk

Quote:


> Originally Posted by *yawa77*
> 
> http://www.ebay.com/itm/Dell-PowerEdge-C2100-FS12-TY-2x-2-26GHz-QC-E5520-8GB-Ram-PERC-6-i-Trays-Rails-/181657810381?pt=LH_DefaultDomain_0&hash=item2a4ba635cd What about this one? It looks like the last one recommended but doesn't cost as much.


This one doesn't have the same RAID controller; it's a PERC 6i, which is quite a bit slower and older (I remember seeing the 6i a while ago). The CPUs are cheaper chips, and the difference in RAM finally explains the price difference.

I quite like the HP server you posted after this one. But I guess of all of those, the first Dell server that was mentioned is the best; if the price difference is that important for you, though, given the HP only has half the RAM and doesn't use low-power CPUs, you might want to go with this one.


----------



## yawa77

Quote:


> Originally Posted by *EvilMonk*
> 
> This one doesn't have the same RAID controller; it's a PERC 6i, which is quite a bit slower and older (I remember seeing the 6i a while ago). The CPUs are cheaper chips, and the difference in RAM finally explains the price difference.
> 
> I quite like the HP server you posted after this one. But I guess of all of those, the first Dell server that was mentioned is the best; if the price difference is that important for you, though, given the HP only has half the RAM and doesn't use low-power CPUs, you might want to go with this one.


For the things I'll be doing, will I take that much of a hit? For Plex and on-the-fly transcoding I've been using my desktop i5-3570K (currently not overclocked) with 16 gigs of RAM. If I do run VMs, it might be Linux or something like pfSense. Besides that, it'll be storage.


----------



## Ziglez

Bit of an update: changed the power supply to a 650W Seasonic, threw in a little fan hub, got some Molex-to-5x-SATA power cables, and zip-tied a fan to the grill for the RAID card. I doubt you will find a more cable-managed server than mine.

OS: Server 2012 R2
Case: Lian Li D8000
CPU: i3-4130
Motherboard: ASRock Extreme4
Memory: 4GB of something
PSU: Seasonic 650W
OS HDD (If you have one): some old 500GB WD Blue
Storage HDD(s): 2x Toshiba 5TB drives, 8x Toshiba 3TB drives, 2x Samsung 2TB drives, 1x WD Green 2TB
Server Manufacturer (Ex: Dell, HP, You?): me?


----------



## Wildcard36qs

Quote:


> Originally Posted by *EvilMonk*
> 
> Well, I'm an HP guy; I have 6 servers with P410/P212 512MB/1GB controllers and a couple of servers running older P400/P800 512MB controllers, and they are quite good as well...
> 
> I still prefer to run an LSI MegaRAID 9240-8i controller in my PC, though, and my backup PC is running a RocketRAID 2720SGL SAS controller, which I find quite good for SSDs. I work on IBM servers at the office; they use IBM ServeRAID controllers, which run quite well, but most of them are rebranded LSI MegaRAID controllers. We got new HP servers 3 weeks ago, and the new Smart Array P440 4GB controllers are really fast.


Yea, I have two ThinkServers at home, and the RAID 500 is just an LSI 9240-8i. I have it in HBA mode, using it for my ESXi all-in-one.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Ziglez*
> 
> Bit of an update: changed the power supply to a 650W Seasonic, threw in a little fan hub, got some Molex-to-5x-SATA power cables, and zip-tied a fan to the grill for the RAID card. *I doubt you will find a more cable-managed server than mine.*


----------



## jibesh

Quote:


> Originally Posted by *Ziglez*
> 
> Bit of an update: changed the power supply to a 650W Seasonic, threw in a little fan hub, got some Molex-to-5x-SATA power cables, and zip-tied a fan to the grill for the RAID card. I doubt you will find a more cable-managed server than mine.


lol ***? *facepalm*


----------



## jibesh

Quote:


> Originally Posted by *EvilMonk*
> 
> I wouldn't say the Dell Perc H700 is any better than the Smart Array P410 though


I personally can't say the H700 is better than the P410, but I've heard from others that it's good. I like the P410s myself; I've got P410s running both of my NAS arrays (6x 4TB RAID 6).


----------



## LuckyJack456TX

Little addition to my crowd. Say hello to my VM host.

2x E5620 @ 2.4GHz
36GB RAM
2x 1TB
2x 80GB
ESXi 5.5


----------



## coachmark2

*@LuckyJack456TX*

Nice! I like me some Dell TXXXX series. That's a nicely specced system there too


----------



## CloudX

Quote:


> Originally Posted by *coachmark2*
> 
> *@LuckyJack456TX*
> 
> Nice! I like me some Dell TXXXX series. That's a nicely specced system there too


+1! That's a nice system there. Good to see it put to work!


----------



## LuckyJack456TX

Thanks all! And it just got 12GB more RAM, so now it has 48GB in a very quiet package running 10 VMs.








:thumb:


----------



## coachmark2

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Thanks all! And it just got 12GB more RAM, so now it has 48GB in a very quiet package running 10 VMs.
> 
> 
> 
> 
> 
> 
> 
> :thumb:


----------



## EvilMonk

That's a good use for that Dell Precision T5500


----------



## Rbby258

Call me stupid, but how do you make use of 10 VMs? Can other people use them from their machines, or what? All I have ever done with VMs is, for example, test Windows 10 inside my Windows 7 OS. So when I see people say they've got 10, I think: why would you need to do that 10 times? That's as far as my VM knowledge goes.


----------



## driftingforlife

Each one will be a different server or test environment.


----------



## EvilMonk

Quote:


> Originally Posted by *Rbby258*
> 
> Call me stupid, but how do you make use of 10 VMs? Can other people use them from their machines, or what? All I have ever done with VMs is, for example, test Windows 10 inside my Windows 7 OS. So when I see people say they've got 10, I think: why would you need to do that 10 times? That's as far as my VM knowledge goes.


10 VMs is easy to make use of even if you have only 1 server... I would find one server not powerful enough to drive what I need in my test environments; that is why I have 9 servers in my lab, and 7 of those 9 are dual-socket 6-core systems with 32 to 72GB of RAM. Most of those servers run VMs that are just mirrored backups of the office servers, copied over VPN for my certifications.

Exchange, SharePoint, SQL Server, DNS and DHCP backup, AD, Symantec Endpoint Manager, and a Backup Exec site that backs up my IT data, takes images of my workstation at the office, and serves as a console for the office backup server. Then there is my home stuff as well: Plesk, my file servers, web servers, game servers, telecom stuff (Cisco, Juniper, SonicWall, Fortinet, and Check Point), and my own lab stuff I keep for my certifications to test on and modify. 10 VMs is easy as hell to fill up.


----------



## cones

Quote:


> Originally Posted by *Rbby258*
> 
> Call me stupid, but how do you make use of 10 VMs? Can other people use them from their machines, or what? All I have ever done with VMs is, for example, test Windows 10 inside my Windows 7 OS. So when I see people say they've got 10, I think: why would you need to do that 10 times? That's as far as my VM knowledge goes.


I personally only run two but have been thinking about running more.


----------



## tycoonbob

I currently run about 23 VMs at home, all on a single host with dual Xeon L5520s and 24GB RAM.


----------



## TheNegotiator

Made a few additions to my home lab:

*Dell PowerEdge R610*


Spoiler: Specs



*OS:* Linux Debian
*CPU:* 2x Intel Xeon E5620 2.40GHz QC
*Memory:* 12GB DDR3
*HDD(s):* 3x 15k 73GB SAS in RAID5
*Use:* Minecraft Server



*Dell PowerEdge R710*


Spoiler: Specs



*OS:*
*CPU:* 2x Intel Xeon E5620 2.40GHz QC
*Memory:* 24GB DDR3
*HDD(s):* 4x 15k 146GB SAS in RAID10
*Use:* Undecided





Also picked up a second ProCurve 3400cl-24G and a ProCurve 2900-48G, both with 10GE fiber modules.

The specs for the rest of the equipment are shown in my last post.


----------



## LuckyJack456TX

Quote:


> Originally Posted by *cones*
> 
> I personally only run two but have been thinking about running more.


Quote:


> Originally Posted by *tycoonbob*
> 
> I currently run about 23 VMs at home, all on a single host with dual Xeon L5520s and 24GB RAM.


I easily use 10 VMs just because I can







. Mainly for keeping my certs up to date and trying new things. My host still has plenty of room for MOAR VMs.


----------



## Plan9

I run 6 and I thought that was a little OTT. What on earth do you guys use 10/20 VMs for?


----------



## zanginator

I have 13 active VMs (8 inactive) on a single box (2x X5650, 48GB RAM).

There are:

2 web servers,
each with its own corresponding SQL server,
a reverse proxy,
a Windows 7 box,
a Debian box (that does some web trawling),
a Gmod server,
a Minecraft server,
an internal web server that I have been playing with,
2 additional test servers (one for web software, another for games),
and lastly a torrent box.


----------



## b4d17

Ca. 1300 CPU cores across different Intel and AMD processors, InfiniBand, 80TB of user storage, 3TB total RAM


----------



## tycoonbob

Quote:


> Originally Posted by *Plan9*
> 
> I run 6 and I thought that was a little OTT. What on earth do you guys use 10/20 VMs for?


----------



## Plan9

@zanginator & @tycoonbob: Interesting stuff. Thanks.








Quote:


> Originally Posted by *b4d17*
> 
> 
> 
> Ca. 1300 CPU cores across different Intel and AMD processors, InfiniBand, 80TB of user storage, 3TB total RAM


Clustered, or are you just totalling up different servers in the same rack?


----------



## LuckyJack456TX

Quote:


> Originally Posted by *tycoonbob*


tycoonbob, what are you using to display your VM info?


----------



## LuckyJack456TX

Here's what VM's im running on my Host:


----------



## Plan9

I guess I should post my VMs since I asked others to









Code:


[[email protected] ~]# jls
   JID  IP Address      Hostname                      Path
     1  192.168.1.200   alphatrion                    /jails/alphatrion
     2  192.168.1.201   cybertron                     /jails/cybertron
     3  192.168.1.204   unicron                       /jails/unicron
     4  192.168.1.206   starscream                    /jails/starscream
     5  192.168.1.202   galvatron                     /jails/galvatron
     6  192.168.1.203   megatron                      /jails/megatron

#1: DNS server (internal)
#2: remote SSH sandbox / SFTP server
#3: Subsonic
#4: random dev workstation
#5: Linux ISO seeding
#6: webserver + reverse proxy

Firewalling happens both on my router and on the VM host (fail2ban + ipfw). Plex is running from a dedicated box (Intel NUC). And my IRC bots, mail server and various other webservers run from a leased dedicated box hosted in a proper datacentre.
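If anyone wants to script against that sort of `jls` output (say, to map JIDs to hostnames for a status page), a quick awk one-liner does it. Toy sketch only; the output is inlined here so it runs anywhere, but on the real host you'd pipe `jls` straight into awk:

```shell
# Turn `jls`-style output into "JID hostname" pairs (skip the header row).
jls_output='   JID  IP Address      Hostname                      Path
     1  192.168.1.200   alphatrion                    /jails/alphatrion
     2  192.168.1.201   cybertron                     /jails/cybertron
     6  192.168.1.203   megatron                      /jails/megatron'

# Column 1 is the JID, column 3 the hostname; NR > 1 drops the header.
printf '%s\n' "$jls_output" | awk 'NR > 1 { print $1, $3 }'
```

On the box itself it's just `jls | awk 'NR > 1 { print $1, $3 }'`.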


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> I guess I should post my VMs since I asked others to
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> [[email protected] ~]# jls
> JID  IP Address      Hostname                      Path
> 1  192.168.1.200   alphatrion                    /jails/alphatrion
> 2  192.168.1.201   cybertron                     /jails/cybertron
> 3  192.168.1.204   unicron                       /jails/unicron
> 4  192.168.1.206   starscream                    /jails/starscream
> 5  192.168.1.202   galvatron                     /jails/galvatron
> 6  192.168.1.203   megatron                      /jails/megatron
> 
> #1: DNS server (internal)
> #2: remote SSH sandbox / SFTP server
> #3: Subsonic
> #4: random dev workstation
> #5: Linux ISO seeding
> #6: webserver + reverse proxy
> 
> Firewalling happens both on my router and on the VM host (fail2ban + ipfw). Plex is running from a dedicated box (Intel NUC). And my IRC bots, mail server and various other webservers run from a leased dedicated box hosted in a proper datacentre.


Huh you have a unicorn







What do the IRC bots do?


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Huh you have a unicorn
> 
> 
> 
> 
> 
> 
> 
> What do the IRC bots do?


Uni*cr*on - they're all Transformers.

The bots do various things, from posting URL previews to performing Google searches and such. Plus some less worthwhile stuff programmed for fun, like insult generators
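For a taste, an insult generator really is only a few lines. Here's a throwaway sketch in plain sh (word lists made up, nothing like the real bot's code):

```shell
# Toy insult generator in the spirit of the IRC bots (made-up word lists).
adjectives="rusty overclocked thermal-throttled beige"
nouns="toaster modem heatsink floppy"

# pick SEED "LIST": echo the (SEED mod N)-th word of a space-separated list
pick() {
    n=$1
    set -- $2            # split the list into positional parameters
    shift $(( n % $# ))  # rotate to the chosen word
    echo "$1"
}

seed=$(date +%s)
echo "You $(pick "$seed" "$adjectives") $(pick $(( seed / 7 )) "$nouns")!"
```

Hook that up to an IRC trigger and you're most of the way to annoying your channel.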


----------



## EvilMonk

Quote:


> Originally Posted by *Plan9*
> 
> Uni*cr*on - they're all transformers.
> 
> The bots do various things, from posting URL previews to performing Google searches and such. Plus some less worthwhile stuff programmed for fun, like insult generators


Niceeee


----------



## tycoonbob

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> tycoonbob what are you using to display you VM info"?


Good ol' Microsoft Excel. As part of my upcoming home network restructuring, I will be deploying some sort of CMDB (iTop, OneCMDB, CMDBuild, etc.) to track all my assets, along with a more robust monitoring setup built on Splunk.


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> Uni*cr*on - they're all transformers.
> 
> The bots do various things, from posting URL previews to performing Google searches and such. Plus some less worthwhile stuff programmed for fun, like insult generators


I noticed that after I posted but wanted to go with it anyways. Thanks for the info.


----------



## b4d17

Clustered (only 5 servers are separate and custom-installed), running Rocks Linux; the cluster is used for different applications in the fields of theoretical chemistry, physics, and pharmacy.


----------



## Plan9

Quote:


> Originally Posted by *b4d17*
> 
> Clustered (only 5 servers are separate and custom-installed), running Rocks Linux; the cluster is used for different applications in the fields of theoretical chemistry, physics, and pharmacy.


Nice. How do you distribute jobs across your cluster? Does software have to be specially written for Rocks?


----------



## b4d17

We use the SGE scheduler, which keeps track of jobs and dispatches them around.

Most of our software relies on OpenMP and MPI for parallelisation, so the majority of code that supports them (and some similar) can be used without any problem. But you can still run normal serial jobs.
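For anyone who hasn't used SGE: you wrap the run in a small submit script and hand it to `qsub`, and the scheduler finds the slots. Something along these lines (the parallel environment name, binary, and input file are made up; every site names them differently):

```shell
#!/bin/sh
# Hypothetical SGE submit script; PE name and binary are site-specific.
#$ -N chem_job          # job name
#$ -cwd                 # run from the submit directory
#$ -pe mpi 64           # request 64 slots in the "mpi" parallel environment
#$ -l h_rt=12:00:00     # wall-clock limit

# SGE sets $NSLOTS to the number of slots it actually granted
mpirun -np "$NSLOTS" ./my_solver input.dat
```

Submit with `qsub job.sh` and watch it with `qstat`; a plain serial job is the same script minus the `-pe` line.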


----------



## coachmark2

I'll throw in my server names too.









Gainestown - AD Domain Controller, DNS, (PDC, RID Master)
Harpertown - AD Domain Controller, DNS, (Schema, DN Master, Infra Master)
Whistler - File server
Beckton - Backup/DFS file server
Conroe - DHCP
Penryn - Failover DHCP
Ubiquiti - Ubiquiti controller
Stormville - Syslog server
Archangel - Certificate Authority
Condor - Test/sandbox server

All hosted on a pair of C1100's


----------



## LuckyJack456TX

Opinion, guys? X3430 on a Supermicro board, or E5430 on an MSI G41 board (771 mod)? Thinking about swapping components in my NAS/media server.


----------



## EvilMonk

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Opinion, guys? X3430 on a Supermicro board, or E5430 on an MSI G41 board (771 mod)? Thinking about swapping components in my NAS/media server.


X3430; it's way more recent and will last you longer, and a Supermicro board will be a far better, more stable server board than a ghetto LGA 771 mod...


----------



## LuckyJack456TX

Thanks, Monk. Looks like the Xeon mod is going up for sale. Any takers?


----------



## EvilMonk

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Thanks, Monk. Looks like the Xeon mod is going up for sale. Any takers?


NP bud


----------



## LuckyJack456TX

Made the swap... didn't even have to reinstall the OS. Specs are in the BlackPearlNAS sig.


----------



## EvilMonk

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Made the swap... didn't even have to reinstall the OS. Specs are in the BlackPearlNAS sig.


Nice!!!


----------



## LuckyJack456TX

Hyper-V or VMware?


----------



## driftingforlife

I prefer VMware.


----------



## cones

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Hyper-V or VMware?


I use KVM but that is not one you said.


----------



## EvilMonk

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Hyper-V or VMware?


VMware


----------



## tycoonbob

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Hyper-V or VMware?


KVM.


----------



## jibesh

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Hyper-V or VMware?


Quote:


> Originally Posted by *tycoonbob*
> 
> KVM.


Lol they all do the same thing...pick whichever one you're comfortable with.


----------



## cones

Quote:


> Originally Posted by *jibesh*
> 
> Lol they all do the same thing...pick whichever one you're comfortable with.


Not exactly, they all use a different OS for the "host". They accomplish the same thing but in different ways.


----------



## EvilMonk

Xen, then, since it seems people are throwing out ideas that were not in the list of choices...


----------



## Plan9

VirtualBox


----------



## tycoonbob

Between Hyper-V and VMware, I'd choose Hyper-V. If the question is open to other hypervisors, I'd pick KVM 9 out of 10 times.


----------



## Plan9

I was put off KVM after reading about a series of exploits that allowed you to break out of the hypervisor and onto the host OS. Granted, they've since been corrected, and other virtualisation solutions have been known to have vulnerabilities as well, but it reminded me how young KVM is.

Plus, if I'm going to run a full-fat Linux as the host OS, then I'd rather do away with hardware virtualisation entirely and run OS containers (LXC or OpenVZ) instead.


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> I was put off KVM after reading a series of exploits that allowed you to break out of hypervisor and onto the host OS. Granted they've since been corrected and other virtualisation solutions have been known to have vulnerabilities as well, but it reminded me how young KVM is.
> 
> Plus if I'm going to run a full fat Linux as the host OS then I'd rather do away with hardware virtualisation entirely and instead run OS containers (LXC or OpenVZ) instead.


Do you happen to remember which version that was?


----------



## Plan9

Quote:


> Originally Posted by *cones*
> 
> Do you happen to remember which version that was?


No idea about the version, but I think it was 2012. So a while ago now.


----------



## cones

Quote:


> Originally Posted by *Plan9*
> 
> No idea about the version, but i think it was 2012. So a while ago now


I was worried it might have been the version I am running but security fixes are usually fast.


----------



## cdoublejj

I wonder if it's possible to run 1 VM across several servers for combined computing power?


----------



## EvilMonk

Quote:


> Originally Posted by *cdoublejj*
> 
> I wonder if it's possible to run 1 VM across several servers for combined computing power?


I know it's possible to run applications across a cluster for distributed computing, and you can use vMotion and DRS so that, if a virtual host fails, a VM launches automatically and transparently on another host without service interruption. But I've never heard of a single VM spanning multiple hosts... maybe you are thinking of distributed computing or parallel computing?


----------



## cdoublejj

Quote:


> Originally Posted by *EvilMonk*
> 
> I know its possible to run applications through a cluster for distributed computing and that you can use vmotion and DRS in case of a virtual host failure to make a virtual machine launch automatically and transparently on another host without service interruption but I never heard of a VM accross multiple host... maybe you are thinking more of distributing computing or parallel computing?


So if I wanted to combine my servers, could I run one app across this distribution or cluster? Is it only for certain applications, or any and all applications? Or am I talking crazy?


----------



## EvilMonk

Quote:


> Originally Posted by *cdoublejj*
> 
> so if wanted to combine my servers i could run 1 app across this distribution or cluster? is it only for certain applications or any or all applications? or am i talking crazy?


Only OSes and apps coded for parallel/distributed computing, AFAIK. But I might be wrong; I work as a sysadmin with VMware, Hyper-V/Windows and Cisco/Juniper environments, and I've never worked with distributed computing before, I've only been reading about it.


----------



## christoph

Quote:


> Originally Posted by *EvilMonk*
> 
> OS and apps coded to be used on parallel computing / distributed computing afaik. but I might be wrong I work as a sys admin with (VMware,Hyper-V/Windows) and (Cisco/Juniper) environments, I never worked with distributed computing before, I've only been reading on it.


Yeah, that's right, and currently it's mostly big companies/facilities that make use of parallel computing.

One example would be Folding, more or less.


----------



## KYKYLLIKA

Example of software framework: http://en.wikipedia.org/wiki/HTCondor


----------



## infinity9

Here's my overpowered media server. It's a small form factor Lenovo ThinkCentre M58p that I used for the LGA 771/775 Xeon mod. I get to my files via Samba; it also has VNC so I can access it from work. I'm looking to use it as a DVR/streaming server as well, with some virtualization thrown in for good measure. There are only two SATA ports on the motherboard, so I have an eSATA-to-SATA cable running from the I/O panel to the optical drive. Pretty cramped, so the SSD just sits next to the heatsink.

CPU: Intel Xeon X3363 (4 cores, 2.83 GHz)
Memory: 16GB (4x4GB) DDR3-1333 (limited to 1066 by motherboard)
System drive: Samsung 830 Series 128GB
Data drive: Western Digital Green WD40EZRX 4TB
Graphics: AMD Radeon HD 8570
Operating system: Gentoo Linux
Other stuff: Lenovo custom mobo w/Intel Q45 chipset, stock 280W power supply, Lite-On DVD+/-RW drive. Also has an internal speaker that unexpectedly plays system audio, not just BIOS beeps.


----------



## cones

infinity9, have you tried X11 forwarding yet? I like it way more than VNC.


----------



## infinity9

Quote:


> Originally Posted by *cones*
> 
> infinity9 have you tried X11 forwarding yet? I like it way more then VNC.


Haven't tried that before. Looks interesting, though I rather like having all of my programs in a single window. I may give it a shot eventually.


----------



## cones

Quote:


> Originally Posted by *infinity9*
> 
> Haven't tried that before. Looks interesting, though I rather like having all of my programs in a single window. I may give it a shot eventually.


It's always way more responsive for me. I actually like having separate windows though.


----------



## koulaid

OS: WS 2012 R2
CPU: 4790K
RAM: 16GB
HD: 120GB Intel
PSU: CX430M
Extras: Intel dual 1Gb NIC

Rig is mainly for VMs. I have 3 running right now, with 2 more soon. 32GB of RAM will come at a later date.


----------



## parityboy

Quote:


> Originally Posted by *cdoublejj*
> 
> I wonder if it's possible to run 1 VM across several servers for combined computing power?


See this thread. I don't think you could run the actual VM instance in that way (that would be cool though!) but you could possibly auto-migrate some of the in-VM processes to other (v)CPUs in a (virtualised) cluster.


----------



## EvilMonk

Quote:


> Originally Posted by *parityboy*
> 
> See this thread. I don't think you could run the actual VM instance in that way (that would be cool though!) but you could possibly auto-migrate some of the in-VM processes to other (v)CPUs in a (virtualised) cluster.


Actually, you can migrate some VMs on the fly from physical host to physical host through vSphere vMotion / DRS, and through Hyper-V as well.


----------



## parityboy

*@EvilMonk*

We know.







What we're talking about is "spreading" the VM across multiple local and non-local CPU cores. Single System Image (SSI) would be what's needed for that, but it would have to be implemented within a cluster of VM clones using the in-VM kernel image (although you could probably mix in some bare-metal compute hosts as well); the VM host(s) wouldn't be involved, apart from the usual launch and migrate operations.

The architecture for this is different from shared-storage VM infrastructure, and is also different from "Beowulf"-style computing clusters (which use job queues to distribute jobs to compute nodes, for example batch-processing thousands of large photographic images).

I played with MOSIX a few years back; it was pretty nice watching it auto-migrate a compiler process from my laptop to my workstation when the CPU load hit a certain threshold.


----------



## EvilMonk

Quote:


> Originally Posted by *parityboy*
> 
> *@EvilMonk*
> 
> We know.
> 
> 
> 
> 
> 
> 
> 
> What we're talking about is "spreading" the VM across multiple local and non-local CPU cores. Single System Image (SSI) would be what's needed for that, but it would have to be implemented within a cluster of VM clones using the in-VM kernel image (although you could probably mix in some bare-metal compute hosts as well); the VM host(s) wouldn't be involved, apart from the usual launch and migrate operations.
> 
> This architecture for this is different from shared-storage VM infrastructure, and is also different from "Beowulf"-style computing clusters (which use job queues to distribute jobs to compute nodes, for example batch processing thousands of large photographic images).
> 
> I played with MOSIX a few years back; it was pretty nice watching it auto-migrate a compiler process from my laptop to my workstation when the CPU load hit a certain threshold.


Yeah, MOSIX is pretty sweet, bud; I played with it as well and it's fun to see all the possibilities.


----------



## cdoublejj

Quote:


> Originally Posted by *parityboy*
> 
> See this thread. I don't think you could run the actual VM instance in that way (that would be cool though!) but you could possibly auto-migrate some of the in-VM processes to other (v)CPUs in a (virtualised) cluster.


What defines a cluster? Networked computers? Or ones all on the same backplane/rack?


----------



## parityboy

Quote:


> Originally Posted by *cdoublejj*
> 
> what defines a cluster? networked computers? or ones all on the same back plane/rack?


Good question, actually.

Basically it's a group of computers which are fairly tightly coupled together through networking and software, such that they can either

*a)* work together to achieve a single task, typically by breaking up that task into small pieces, or
*b)* be linked together to behave as a single computing device, again with the purpose of increasing available compute resources, or
*c)* be linked together to provide redundancy in the event one of the nodes fails, and is unavailable.

*Storage Cluster*
For example, 5 storage servers have 20TB of storage each. However, rather than being presented as 5 separate storage pools, what you see is a single pool of 100TB of storage.

*Compute Cluster*
A compute cluster might serve as a render farm for Pixar: each CPU core in the cluster is passed a movie frame to render in 3D. Some clusters can also unify the memory of each node into a single memory pool, and systems such as MOSIX can allow a group of 20 machines to act as a single machine (with some limits).

*HA Cluster*
A High Availability cluster isn't focused on increasing computing power, memory or storage but instead provides redundancy in the event of failure. Each node watches the other, and in the event that one node fails the other takes over, adopting its IP address, domain name and other identifiers. Clients using the system are none the wiser. Such an arrangement also helps mitigate availability issues in the event of planned maintenance.

*Distributed Computing*
Something like Folding@home _is_ distributed computing but is _not_ a cluster, because the nodes are not coupled at all; they are people's home machines. However, a supercomputer like Blue Gene _is_ a cluster, because the nodes are tightly coupled together, not just physically but logically as well.

Lines can be blurred though. For example, a Beowulf-style compute cluster is coordinated by a master node, which farms out jobs to each node based on that node's current work load. However, is that really that much different to a farm of web servers fronted by a load balancer, which is also monitoring their load?

*EDIT*
To answer your question directly: yes, in order for machines to be clustered they need to be networked. And yes typically, due to performance requirements and manageability, they tend to be physically in the same rack or at least in the same facility - a supercomputer's nodes occupy multiple racks.
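The master/worker job-queue pattern above can be sketched in miniature with Python's standard library; here a thread pool stands in for the master node and its compute nodes, and `render_frame` is a made-up placeholder job, not a real cluster API:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    # Stand-in for one unit of work a node would receive
    # (e.g. rendering a single movie frame).
    return frame_number, frame_number ** 2  # pretend "result" of the job

jobs = range(8)  # the job queue the master works through
with ThreadPoolExecutor(max_workers=4) as master:  # 4 stand-in "nodes"
    results = dict(master.map(render_frame, jobs))

print(results[7])  # → 49
```

A real Beowulf-style setup would hand the jobs to separate machines over the network rather than threads, but the dispatch logic has the same shape.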


----------



## vaeron

Bringing some old storage back to life.

OS: FreeNAS 9.3
Case: Dell Rack
CPU: 2 x Xeon @ 3 GHz (single core)
Memory: 8 GB DDR2
PSU: 2x700w in the 2850, 2x600w in each of the Powervaults
OS HDD (If you have one): 16 GB USB drive
Storage HDD(s): 34 x 300 GB 10k Fujitsu SCSI
Server Manufacturer: Dell

2 x Dell PowerVault 220S with 14 ea 300 GB 10k SCSI HDD
1 x Dell PowerEdge 2850 with dual Xeon @ 3 GHz, 8 GB ECC (upgrading to 16 GB soon) and 6 x 300 GB 10k SCSI HDD running FreeNAS 9.3


----------



## EvilMonk

Quote:


> Originally Posted by *vaeron*
> 
> Bringing some old storage back to life.
> 
> OS: FreeNAS 9.3
> Case: Dell Rack
> CPU: 2 x Xeon @ 3 GHz (will look up later)
> Motherboard:
> Memory: 8 GB DDR2
> PSU: 2 in each chassis
> OS HDD (If you have one): 16 GB USB drive
> Storage HDD(s): 34 x 300 GB 10k Fujitsu SCSI
> Server Manufacturer: Dell
> 
> 2 x Dell PowerVault 220S with 14 ea 300 GB 10k SCSI HDD
> 1 x Dell PowerEdge 2850 with dual Xeon @ 3 GHz, 8 GB ECC (upgrading to 16 GB soon) and 6 x 300 GB 10k SCSI HDD running FreeNAS 9.3


Damn, that must draw some real juice for a storage server... Are you just running storage on this? Don't you think it might be more suitable to run something else as the OS and use virtualization so the servers give you some other functions as well?


----------



## vaeron

Quote:


> Originally Posted by *EvilMonk*
> 
> Damn that must draw some real juice for a storage server... are you just running storage out of this? don't you think it might be more suitable to run something else as an OS and use virtualization to get the servers to give you some other functions as well?


Oh, that's just my storage server; I have other servers for my VMs, domain controllers, and web servers. It's just sitting on my test bench at the moment, I only brought it online last night. I've also updated the power supply information: 2x700w and 4x600w.


----------



## beers

Quote:


> Originally Posted by *vaeron*
> 
> Oh that's just my storage server, I have other servers for my VMs, Domain Controllers, and web servers. It's just sitting on my test bench at the moment, I just brought it online last night. I also updated the power supply information. 2x700w and 4x600w


I think he means you'd get some easy ROI from electricity alone by obtaining some higher-capacity drives.


----------



## cam51037

Sorry for the potato-quality picture, but here's my server. I picked it up used a few weeks ago for a great price ($225 CAD).

Here's the rundown on components:

i7 960

Sabertooth X58 Board

12GB RAM (however, only 8GB ever shows up in the BIOS; I've done some testing and all sticks appear to be good, so it must be a problem with the board)

EVGA GTX 570

Corsair TX750 PSU

640GB HDD

500 GB HDD

NZXT Case (I forget the exact model, but it's a fairly large case with quite a few fans + fan controller built into the case)

Card Reader & HDD Hot-Swap Bay

Right now I run my small website off of this computer, a Bitcoin node, as well as a Minecraft server. I'm looking to set up a few more things on it (home VPN, etc) but bandwidth is the real limitation. I can't do much with a 2Mbps upload speed unfortunately.


----------



## vaeron

Quote:


> Originally Posted by *beers*
> 
> I think he means you'd get some easy ROI from electricity alone with obtaining some higher capacity drives.


The max capacity of an Ultra 320 SCSI drive is 300GB, and I have the entire thing maxed out; I bought the whole setup for $100. Electricity where I live is some of the cheapest in the nation. But yes, I've thought about using it for other things. Right now it's running FreeNAS, but I've been looking into ESXi; I have to track down ESXi 4, as anything newer won't work on it.


----------



## EvilMonk

Quote:


> Originally Posted by *vaeron*
> 
> The max capacity per drive of an Ultra 320 SCSI is 300GB which I have the entire thing maxed out and when I bought it I bought the entire setup for $100. Electricity where I live is some of the cheapest in the nation. But yes I thought about using it for other things, just right now it is running FreeNAS but I've been looking into ESXi but I have to track down ESXi 4 as anything newer won't work on it.


Yeah, that's what I meant, but since power doesn't cost much where you live it doesn't seem to be much of an issue, so it's all good








What kind of array are you using with those drives if you don't mind me asking?


----------



## vaeron

In order for FreeNAS to recognize the drives I had to JBOD the PowerVaults and then use software striping to get any usable storage. I'm looking into my options for changing the OS so that maybe I can use RAID 50 instead.

EDIT: That being said, I am open to suggestions as to what to do with this system. As EvilMonk pointed out I can do a lot more with this, I just don't know what I really want to do with it. FreeNAS is kinda clunky.


----------



## Plan9

Quote:


> Originally Posted by *vaeron*
> 
> In order for FreeNAS to recognize the drives I had to JBOD the PowerVaults then use software striping to get any measurable storage. I'm looking into my options as far as changing OS so that maybe I can use Raid 50 instead.
> 
> EDIT: That being said, I am open to suggestions as to what to do with this system. As EvilMonk pointed out I can do a lot more with this, I just don't know what I really want to do with it. FreeNAS is kinda clunky.


ZFS > hardware raid

If you want to experiment more with the system, do away with FreeNAS and run vanilla FreeBSD instead. You could run jails (OS containers, which is like virtualisation but without the overhead of a hypervisor) which are pretty awesome imo.


----------



## EvilMonk

Quote:


> Originally Posted by *Plan9*
> 
> ZFS > hardware raid
> 
> If you want to experiment more with the system, do away with FreeNAS and run vanilla FreeBSD instead. You could run jails (OS containers, which is like virtualisation but without the overhead of a hypervisor) which are pretty awesome imo.


Well, yes, but not if the hardware isn't at a minimum level of performance, in which case hardware RAID will be a far better choice...


----------



## Plan9

Quote:


> Originally Posted by *EvilMonk*
> 
> Well yes but not if the hardware isn't at a minimum level of performance in which case the hardware raid will be a far better choice...


I don't agree with that, to be honest. ZFS brings a lot more to the table than software RAIDing alone. Plus, you can still use your RAID controller in passthrough with ZFS if you want the performance of a hardware controller (though even there, a pure-software ZFS setup closes the gap with L2ARC and ZIL SSD cache disks).

These days I'm not convinced hardware RAID is the closed case it once was; even ZFS aside, other software-RAID-based file systems offer alternative solutions to problems that hardware RAID simply cannot touch. And that's without taking distributed file systems into account.

edit: I should add the caveat that, of course, the best solution always depends on the specific problem. I wouldn't roll out software RAID on all the storage servers we have at work, as there will be some instances where the additional features of ZFS wouldn't be utilised, so I'd opt for the simpler hardware setup over the equivalent software RAID.


----------



## ColSanderz

Small update (I guess) from my previous tower server. I realized very quickly that I needed/wanted a beefier system to play around with. And so:

Picked up an APC 24u rack (AR3814) off craigslist cheap.
From top to bottom:
Zyxel 1920-24
Serpent (the one from my previous post) - hosts three VMs: the first runs the gf's website for her business, the second an Exchange server for her business (not completely set up yet), and the third a Minecraft server.

Two Cyberpower PDU's - Top is critical load, bottom is non-critical

Supermicro 936 that I picked up for 80 bucks on eBay, since transformed into a JBOD chassis. It has sixteen WD Red 3TB drives in RAID 10. The backplane has some LEDs that don't work, and one of the ears is completely broken off, but otherwise it was in good condition.

PengServ - 2x X5670 and 48GB memory (room for up to 96GB). Adaptec 8885 connected to the JBOD chassis. 4Gb Fibre Channel card to Serpent.

2U at the bottom for a battery pack for the UPS if I need it, and 3U between the PDU and JBOD chassis for another JBOD chassis eventually. I'm not nearly done with cable management or setting up the VLAN on the switch. I can take internal pictures of the chassis soon, after I get back from work. I also swapped all the fans for quieter ones and put bigger Noctua heatsinks on the Xeons. Altogether, it's quieter than my gaming system.

Right now I have the Netgear R7500 router, which was working fine, but it doesn't support VLANs (apparently), so I'll probably be switching to pfSense pretty soon and using the router as a wireless access point. A shame, considering its price, but oh well.


----------



## cdoublejj

IDK if it warranted its own thread or not, but here it is as it stands after upgrades: http://www.overclock.net/t/1553635/another-upgrade-to-recycling-bin-computer


----------



## CloudX

Very nice!


----------



## broadbandaddict

I put together a cheap portable file server for a friend. I had most of the parts laying around except the case.



Specs:

Code:


CPU     AMD Athlon II X2 620
MOBO    Gigabyte Something-or-Other AM3
RAM     2 x 2GB DDR3 1066
BOOT    160GB Western Digital Raptor
DATA    3 x 500GB Western Digital Green
PSU     Dell 460W
CASE    APEX TX-381-C
OS      Windows Server 2012 R2

Also got a question: what's the best economical RAID card for home use? I've been eyeing the LSI 9260-8i, but I haven't ever really messed with hardware RAID. I want to run RAID 5 arrays on some 3/4TB drives, ideally two separate arrays of 5x3TB and 6x4TB. I assume I'll need two cards, but if there is a better setup to go with, I am all ears.


----------



## djriful

I know this looks silly: a MacBook Pro 2010 and an old red-and-white armoire I found. I drilled 2 holes for cables and shoved everything in there. There's a ventilation cooler under the MBP, which runs Plex Media Server 24/7. I remote into OS X over the local network to manage files.


----------



## tiro_uspsss

Quote:


> Originally Posted by *broadbandaddict*
> 
> Also got a question, what's the best economical RAID card for home use? I've been eyeing the LSI 9260-8i but I haven't ever really messed with hardware RAID. I want to run a RAID 5 array on some 3/4TB drives, ideally two separate arrays of 5x3TB and 6x4TB. I assume I'll need two cards but if there is a better setup to go with I am all ears.


You can run multiple arrays from one card; you don't need a second card for a second array.
Which mobo do you want to plug the LSI into?


----------



## 47 Knucklehead

[Build Log] The Bit Bucket

Case: Silverstone Lascala SST-LC10B-E-USB3.0
Motherboard: Asus A88XM-A FM2+ mATX
CPU: AMD A10-7700K 3.8GHz Black Edition
Memory: 2 sticks (8GB) of Corsair Vengeance
Power supply: Thermaltake TR2 RX 850 watt
Video card: Built-in APU on the AMD A10-7700K
SSD drive: 120GB Intel SSD 320
Hard drives: 4 Western Digital 2TB IntelliPower 6Gb/s SATA drives in a RAID5 array.
Keyboard: IOGear 2.4GHz wireless HTPC multimedia keyboard with laser trackball and scroll wheel
Operating system: Windows 8.1 Professional 64-bit with Plex media server and other programs.
CPU cooler: Cooler Master GeminII S524
Case fan: Gentle Typhoon AP-14, 1450 RPM


----------



## EvilMonk

Quote:


> Originally Posted by *tiro_uspsss*
> 
> you can run multiple arrays from one card, you don't need a second card for a second array.
> which mobo do you want to plug the LSI into?


To run 5x3 (15 drives) + 6x4 (24 drives), i.e. 39 drives, he will need a lot more than one card. Even with a port multiplier he'd need at least one card + a port multiplier + another card and another port multiplier, or one card + a port multiplier + two more cards: one card with a port multiplier will run 24 drives and each additional card runs 8 more, for a total of 40 (or 48 if he gets 2 cards and 2 port multiplier cards)...

Edit: that's if he goes with an LSI 9260-8i or any 8-drive controller card. There are more expensive cards that support more drives, but supporting that many drives definitely won't be cheap on a hardware RAID controller...


----------



## driftingforlife

Got me this last week. Using 4 x 4TB Reds in RAID 5 for 11TB of storage.
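For anyone checking the maths: RAID 5 gives up one drive's worth of capacity to parity, and the drive makers' decimal terabytes shrink once the OS reports binary TiB, which is how 4 x 4TB lands at roughly 11TB usable. A back-of-the-envelope sketch in Python (the helper names are mine):

```python
def raid5_usable_tb(n_drives, drive_tb):
    """Usable capacity of a RAID 5 array in decimal TB (one drive's worth goes to parity)."""
    return (n_drives - 1) * drive_tb

def tb_to_tib(tb):
    """Convert decimal terabytes (10**12 bytes) to binary tebibytes (2**40 bytes)."""
    return tb * 10**12 / 2**40

raw = raid5_usable_tb(4, 4)      # 4 x 4TB Reds in RAID 5
print(raw)                       # → 12
print(round(tb_to_tib(raw), 1))  # → 10.9 (i.e. the "~11TB" the OS reports)
```

The same arithmetic covers the arrays discussed earlier in the thread: 5x3TB gives 12TB usable and 6x4TB gives 20TB, before the TiB shrinkage.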


----------



## cones

Quote:


> Originally Posted by *driftingforlife*
> 
> Got me this last week. Using 4 x 4TB Reds in Raid 5 for *111TB* of storage.
> 
> 


I hope that is a typo?


----------



## broadbandaddict

Quote:


> Originally Posted by *tiro_uspsss*
> 
> you can run multiple arrays from one card, you don't need a second card for a second array.
> which mobo do you want to plug the LSI into?


It's a Gigabyte G1.Sniper M5
Quote:


> Originally Posted by *EvilMonk*
> 
> To run 5x3 (15 drives) + 6x4 (24) so 39 drives he will need a lot more than 1 card, even with a port multiplier he will need at least one card + a port multiplier and another card and another port multiplier or 1 card and a port multiplier and 2 other cards knowing 1 card and a port multiplier will run 24 drives and each additional cards will run 8 drives so a total of 40 or 48 if he gets 2 cards and 2 ports multiplier cards...
> 
> Edit (Thats if he goes with an LSI 9260-8i card or any 8 drives controller type card) there are some more expensive cards that support more drives but to support that amount of drive it sure won't be cheap on an hardware raid controller...


Sorry, it will be 5 3TB drives and 6 4TB drives, not 39 total drives.


----------



## driftingforlife

Quote:


> Originally Posted by *cones*
> 
> I hope that is a typo?


Fixed


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> It's a Gigabyte G1.Sniper M5
> Sorry, it will be 5 3TB drives and 6 4TB drives, not 39 total drives.


Sorry, my bad, I wasn't thinking properly there.
You'll need either 2 cards, or one card and a port multiplier.
Either way you'll still be able to upgrade, but the port multiplier gives you the option of going up to 24 drives, while the 2-card solution only goes up to 16.









Also, some LSI controllers have problems on some Gigabyte motherboards. I have the 9240-8i and searched for a huge amount of time before giving up, putting it in my backup PC, and switching to my HP Smart Array P420 2GB. So do some research and make sure it's compatible before buying that controller.


----------



## cones

Quote:


> Originally Posted by *driftingforlife*
> 
> Fixed


I was wondering, since I thought there was no way you could get that much storage in a space that small.


----------



## hawkeye071292

I have an HP ML350 G6. 1x Intel Xeon E5645, 32GB RAM, 4x 300GB SAS drives and 4x 146GB SAS Drives. Running 2x RAID 1+0 for a total of a little less than a TB of storage.

I need to get another CPU so I can bump up the RAM in this thing. I'd love to get 8x 600GB SAS drives xD


----------



## tiro_uspsss

Quote:


> Originally Posted by *EvilMonk*
> 
> Sorry my bad didn't think properly there,
> You'll need either 2 cards or one card and a port multiplier.
> Either case if you want to upgrade you'll still be able to but the port multiplier will give you the possibility to upgrade to 24 drives and the 2 cards solution only up to 16 drives.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And some LSI controllers have problems on some Gigabyte motherboards, I have the 9240-8i and had to search for a huge amount of time before giving up to put it in my backup PC and switching to my HP smart array P420 2Gb so just do some research and make sure its compatible before buying that controller


Which GB mobo was it? Do you still want the two working together? I *may* be able to help you.


----------



## EvilMonk

Quote:


> Originally Posted by *tiro_uspsss*
> 
> which GB mobo was it? You still want the two together happy? I *may* be able to help you


Z97X-SLI and yes of course


----------



## broadbandaddict

Quote:


> Originally Posted by *EvilMonk*
> 
> Sorry my bad didn't think properly there,
> You'll need either 2 cards or one card and a port multiplier.
> Either case if you want to upgrade you'll still be able to but the port multiplier will give you the possibility to upgrade to 24 drives and the 2 cards solution only up to 16 drives.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And some LSI controllers have problems on some Gigabyte motherboards, I have the 9240-8i and had to search for a huge amount of time before giving up to put it in my backup PC and switching to my HP smart array P420 2Gb so just do some research and make sure its compatible before buying that controller


Great, thanks for the info. Do I need a specific port multiplier card? Is the 9260-8i a good card or is there a better one to get? Do I need/want the battery backup if it's going to be hooked up to a UPS?

I was planning to run a parity storage space in 2012, but my current card (AOC-SASLP-MV8) has some kind of addressing problem, so the server only sees it as one drive. Are there any SAS-to-SATA cards that are better/cheaper and work with Server 2012 R2? I'm just going to be storing files on the arrays, so as long as they're as fast as gigabit Ethernet I'm happy.


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> Great, thanks for the info. *Do I need a specific port multiplier card?* *Is the 9260-8i a good card or is there a better one to get?* *Do I need/want the battery backup if it's going to be hooked up to a UPS?*
> 
> I was planning to run a parity storage space in 2012 but my current card (AOC-SASLP-MV8) has some kind of addressing problems so server only sees it as one drive. Are there any SAS>SATA cards that are better/cheaper that do work with Server 2012 R2? I'm just going to be storing files on the arrays so as long as they are as fast as gigabit ethernet I'm happy.


Depending on your budget: I know some people are using the HP one http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/sas-expander/index.html and some are using the Intel RES2SV240 http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html but neither is cheap.

For the SAS/SATA controller there are always better choices, but in the end it all depends on your budget... telling us your budget would give us a better idea. The SAS expander card's price alone busts most people's budget for a good controller.

You definitely want to run a battery even if you have a UPS, simply because the battery will keep data in the controller's cache if the PC crashes, and for a lot longer than a UPS; when the juice runs out of the UPS, the data is lost for good. I live in Montreal, and about 2 months ago we lost power for 5 hours... I hadn't seen that in 6-7 years. Still, I doubt a UPS would have lasted that long; I know mine wouldn't have with my 7 servers hooked to it, and my generator wouldn't have run long enough to power everything I needed, since I wasn't home and only noticed when I got back.

For the parity storage space question, the SAS/SATA RAID controller you choose should cover RAID 10/5/6, so you'll get parity there... no need to worry about that.


----------



## broadbandaddict

Quote:


> Originally Posted by *EvilMonk*
> 
> Depending on your budget, I know some people are using the HP one http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/sas-expander/index.html and some are using the Intel RES2SV240 http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html, but neither is cheap.
> 
> For the SAS/SATA controller there are always better choices, but it all depends on your budget in the end; telling us your budget would give us a better idea. The SAS expander card alone already busts most people's budget for a good controller.
> 
> You definitely want to run a battery even if you have a UPS, simply because the battery will keep data in the controller cache if the PC crashes, and for a lot longer than a UPS can; when the juice runs out in the UPS, cached data is lost for good...
> 
> For the parity storage space question: the SAS/SATA RAID controller you choose should cover RAID 10/5/6, so you'll get parity there; no need to worry about that.


I'd like to stick to ~$150 per array, less if possible. It's just going in a home file server that will serve up movies and store my pictures/documents. Nothing fancy. I was thinking a storage space would be a nice alternative to an expensive RAID card as I don't need crazy read/write speeds but my current Super Micro card doesn't work right with 2012 R2.


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> I'd like to stick to ~$150 per array, less if possible. It's just going in a home file server that will serve up movies and store my pictures/documents. Nothing fancy. I was thinking a storage space would be a nice alternative to an expensive RAID card as I don't need crazy read/write speeds but my current Super Micro card doesn't work right with 2012 R2.


For that $150 per array for 11 drives (well, maybe not $150 per array; call it $300 for 16 drives at least, since you need at least 2 cards or 1 card + 1 expander), if you want to expand in the future you might be better off with the expander. Do you really need 6Gbps and brand-new gear? There are some sweet 3Gbps HP expanders under $100 on eBay; you can even find some around $50 since you're in the US. That plus the RAID controller could land you around the $200 mark with support for up to 24 drives. That would be pretty sweet unless you plan to use only SSDs and need 6Gbps. Newer 6Gbps and 12Gbps expanders are more expensive, but the older second-hand 3Gbps ones are a lot cheaper if you don't need the throughput.

Edit: I just took a look, and the 6Gbps ones are around the $400 mark while the 12Gbps ones are around the $700-800 mark.
I might have gotten one of these 3Gbps expanders myself if I had known about them before I bought two HP StorageWorks MSA 60 12-bay external SAS enclosures, which in the long run cost me a lot more to set up, since I had to buy a $550 HP SmartArray P812 1GB BBWC 6Gbps SAS card to run them, and each enclosure was a boatload of money to buy plus shipping and customs from the US...


----------



## broadbandaddict

Quote:


> Originally Posted by *EvilMonk*
> 
> For that $150 per array for 11 drives (well, maybe not $150 per array; call it $300 for 16 drives at least, since you need at least 2 cards or 1 card + 1 expander), if you want to expand in the future you might be better off with the expander. Do you really need 6Gbps and brand-new gear? There are some sweet 3Gbps HP expanders under $100 on eBay; you can even find some around $50 since you're in the US. That plus the RAID controller could land you around the $200 mark with support for up to 24 drives. That would be pretty sweet unless you plan to use only SSDs and need 6Gbps.


I don't need SATA 6Gbps; it's just going to be mechanical drives for the time being. I've got 6 SSDs installed, but they're all hooked up to the motherboard SATA ports, which is why the HDDs were on the SAS card. I've been reading a bit today and saw that the LSI SAS9211-8i (~$100) can be flashed to IT mode, where the RAID functionality is removed and it allows up to 256 drives to be connected using an expander. I think I might just go with that for cheapness and use parity storage spaces to create the arrays. I've been pleasantly surprised with our storage space setup at work, and I really like the idea of not losing everything if the RAID card dies.


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> I don't need SATA 6Gbps; it's just going to be mechanical drives for the time being. I've got 6 SSDs installed, but they're all hooked up to the motherboard SATA ports, which is why the HDDs were on the SAS card. I've been reading a bit today and saw that the LSI SAS9211-8i (~$100) can be flashed to IT mode, where the RAID functionality is removed and it allows up to 256 drives to be connected using an expander. I think I might just go with that for cheapness and use parity storage spaces to create the arrays. I've been pleasantly surprised with our storage space setup at work, and I really like the idea of not losing everything if the RAID card dies.


I've rarely seen an LSI controller die, to be honest... I've had a bunch, including the rebranded IBM and HP versions, and as hard as I try I can't recall a single production LSI controller failing, either at home (and I've run home servers back to back for more than 10 years) or at work (I've been a sysadmin in a 50+ server environment for more than 10 years)...


----------



## Cheatdeath

This is my first dedicated server/NAS. Previously I had been serving media and file sharing for my home network from my main desktop machine; its 17TB FlexRAID setup is still going, but now only for redundancy. The new machine is running the latest FreeNAS with various plugins.

Specs below.

24TB advertised; I'm using an 8-disk RAIDZ2 array for 16.4TB usable, and can lose 2 drives before game over.

FreeNAS is running off a 16GB USB 2.0 drive

CASE: Fractal Design R5 Black No window
PSU: Corsair CS450M GOLD Cert
MB: Supermicro X10SLL-F
CPU: Intel Xeon E3-1231V3 Haswell 3.4GHz
HS: Noctua NH-U9S
RAM: 32GB Crucial DDR3 ECC 4x8GB
RAID CARD 1: LSI 9211-8i flashed with P16IT Firmware
HDD: 8x TOSHIBA PH3300U-1I72 3TB 7200 RPM
Fans: 2xFractal 140mm / 2x Noctua 120mm / 3x Noctua 140mm
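Quick sanity check on those capacity numbers: RAIDZ2 reserves two disks' worth of space for parity, and the "TB" on the box are decimal while the OS reports binary TiB. A rough Python sketch (ignoring ZFS metadata and slop-space overhead, so real usable space lands a bit lower):

```python
# Back-of-the-envelope RAIDZ2 capacity check for 8x 3TB drives.
# Ignores ZFS metadata/slop overhead, so this is an upper bound.

def raidz2_usable_tib(num_disks: int, disk_tb: float) -> float:
    data_disks = num_disks - 2                 # RAIDZ2: two disks' worth of parity
    usable_bytes = data_disks * disk_tb * 1e12 # advertised TB are decimal
    return usable_bytes / 2**40                # OS-reported "TB" are binary TiB

advertised = 8 * 3.0                           # 24 "TB" on the boxes
usable = raidz2_usable_tib(8, 3.0)
print(f"{advertised:.0f} TB advertised -> {usable:.1f} TiB usable ceiling")
# -> 24 TB advertised -> 16.4 TiB usable ceiling
```

Which lines up with the ~16.4TB FreeNAS reports for this build.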


----------



## Jeci

Quote:


> Originally Posted by *Cheatdeath*
> 
> This is my first dedicated server/nas. Previously I had been serving the media and file sharing for my home network with my main desktop machine, it has a 17TB Flexraid setup that is still going but now only for redundancy. The new machine is running the latest FreeNAS with various plugins.
> 
> Specs below.
> 
> 24TB Advertised, I am using a Raidz2 8 disk array, for 16.4TB Usable and can lose 2 drives before game over.
> 
> FreeNas is running off a 16GB USB 2.0 drive
> 
> CASE: Fractal Design R5 Black No window
> PSU: Corsair CS450M GOLD Cert
> MB: Supermicro X10SLL-F
> CPU: Intel Xeon E3-1231V3 Haswell 3.4GHz
> HS: Noctua NH-U9S
> RAM: 32GB Crucial DDR3 ECC 4x8GB
> RAID CARD 1: LSI 9211-8i flashed with P16IT Firmware
> HDD: 8x TOSHIBA PH3300U-1I72 3TB 7200 RPM
> Fans: 2xFractal 140mm / 2x Noctua 120mm / 3x Noctua 140mm


That's a really nice build


----------



## vaeron

I just picked up a new system on which I'll be running ESXi 6. I'm going to run my AD in one VM, my webserver in another, and a testing environment in a third.

IBM System X3690 X5

Processor: 2x Intel Xeon E7-2803 6-core @ 1.73GHz
RAM: 24x 8 GB DDR3 (Total 192 GB)
Raid Card: IBM ServeRaid M1015
Drives: 2x 146 GB 15k RPM 2.5" Hard Drives (May upgrade to a few more since I have 6 free slots already set up and can add 8 more to that).
NIC: 2x 4 port Gigabit Ethernet


(The PowerVault below it is loaded with 34 300GB 10k SAS drives)


----------



## Spelio

Wanted to post my server. I just built this from some parts I had laying around. I'm using it as a test build for ESXi 5.5.0.

Case: iStar
Mobo: Intel DP55WG (Only downside is it only supports 16GB Max RAM but the on-board NIC works in ESXi 5.5.0 and it also allows for DirectPath I/O!)
CPU: Intel Xeon X3440 @ 2.53GHz (it's quad core and hyper-threaded!)
RAM: Total of 10GB for now (2x4GB and 1x2GB)
Raid Card: HP P400
Drives: 4x 500GB in RAID 6, 1x 500GB for ISO storage, 40GB boot drive

Really, the only money I have in this is the CPU, which was $45 on eBay. The case, mobo, RAID card and drives I got from work over the years as they scrapped parts.







Left to right on the PCI expansions:
NVS 295 Video Card
Intel Gigabit NIC
Intel Gigabit NIC
HP P400 Raid card w/ battery pack and 512 MB
Empty
Dual HP Gigabit NIC
Another Intel Gigabit NIC





Mounted in my Server rack below my homeserver.

As of right now, the only things I plan on running on it are IPCop for my firewall and a simple webserver. I would like to incorporate my Home Server one day, but I have a bunch of smaller drives and don't have room for all of the HDDs between the two cases.

Any suggestions on anything else I should/could run on this?


----------



## Jeci

Nice tidy setup!

I'll post a picture of mine after I move & have purchased my new NAS (it'll be a beast)


----------



## 350 Malibu

Here's my junk.

First one is a Dell C1100 I bought from a member on here: 2x Xeon L5520s, 24GB RAM, a 60GB OS drive and 6TB of storage, currently running ESXi as a box to learn on and test with. I'm still setting it up as I have time.
Second is another C1100 I bought off eBay (same specs as above) that has fan issues; I'm currently not using it. I'm trying to figure out how to fix the fan issue, or it will become spare parts...
Netgear GS752TS 48-port managed switch.
Also have the Netgear GSM724B, which has a bad fan that needs replacing but still works; it's just too loud to run in my computer room until the fan is fixed.
Third, I have my C2100 with 2x Xeon X5650s and 72GB of RAM, 2x 512GB SSDs in RAID 1 for the OS and VM storage, and 12x 5TB Toshiba drives in RAID 50 for ~45TB of total storage. This one hosts 90% of my VMs, including my domain controller etc. and many game servers, on Hyper-V (Server 2012 R2). Will be adding pfSense soon once I test it out.



The white box is an Areca RAID enclosure with 8x 2TB HGST drives; it was my old storage box before I got the C2100, and I still use it for movie/music storage and whatnot via iSCSI.
The other is a Define Mini with an i7 3770K, 16GB RAM and ~16TB of storage, which started life as my HTPC but graduated to being my headless torrent server, plus a few oddball VM game servers like MineOS on Hyper-V (Server 2012 R2).



Yeah, I think that's about it... until I get the hair up my butt to buy something else and spend more money (but I think the 15A electrical wiring in the room might need an upgrade soon).


----------



## EddieJames

OS: WHS 2011
Case: Fractal Design Arc Midi R2 (window)
CPU: Athlon X4 615e 2.5GHz @ 45W, OC'd to 3.49GHz
Motherboard: ASRock 990FX Extreme9
Memory: 8GB DDR3 1600MHz
PSU: EVGA 500W 80+
OS HDD: 128GB Kingston HyperX SSD
Storage HDD(s): Three 4TB Seagate drives and one 4TB HGST NAS drive

I'm using 2 Yate Loon high-speed 140mm fans as front intakes for HDD cooling. The Yate Loons kind of suck as case fans IMO; the airflow isn't very directional, and a static-pressure fan would be a better fit, but my HDDs stay around 29C to 33C. I might reorient my HDD cage so I can do push/pull HDD cooling, like that R5 HDD rack above; I like that.
I also removed the 5.25" bays from the case; there are 2x 120mm fans pulling air in from the bottom and top, then 2x 120mm exhausts near the rear.
I also added lights because they've been sitting around and I figured they would light up my closet, since that's where the server lives... but they don't light up my closet. lol

It's mostly a Plex media server, local and remote, and I stream games from it occasionally too.
I'm running FlexRAID with one 4TB drive set as parity.
I'm running low on space, so I'm planning to add a 6TB parity drive ASAP; then I can keep adding 6TB drives instead of 4TB ones.
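For anyone wondering why the bigger parity drive has to come before the bigger data drives: in snapshot-parity setups like FlexRAID or SnapRAID, a parity drive must be at least as large as the largest data drive it protects. A tiny sketch of that rule (the helper is made up for illustration; it's not part of any FlexRAID tooling):

```python
# Snapshot-parity rule of thumb (FlexRAID/SnapRAID-style): the parity
# drive must be at least as large as the largest data drive it protects.
# Hypothetical planning helper, not part of any FlexRAID tool.

def parity_ok(data_drives_tb: list, parity_tb: float) -> bool:
    return parity_tb >= max(data_drives_tb)

current = [4.0, 4.0, 4.0]                 # three 4TB data drives today
print(parity_ok(current, 4.0))            # True: 4TB parity covers 4TB data
print(parity_ok(current + [6.0], 4.0))    # False: a 6TB data drive needs >= 6TB parity
print(parity_ok(current + [6.0], 6.0))    # True once the parity drive is upgraded
```

So the 6TB parity drive has to go in first, and only then can 6TB data drives be added to the pool.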


----------



## alpenwasser

Quote:


> Originally Posted by *Cheatdeath*
> 
> This is my first dedicated server/nas. Previously I had been serving the media and file sharing for my home network with my main desktop machine, it has a 17TB Flexraid setup that is still going but now only for redundancy. The new machine is running the latest FreeNAS with various plugins.
> 
> Specs below.
> 
> 24TB Advertised, I am using a Raidz2 8 disk array, for 16.4TB Usable and can lose 2 drives before game over.
> 
> FreeNas is running off a 16GB USB 2.0 drive
> 
> CASE: Fractal Design R5 Black No window
> PSU: Corsair CS450M GOLD Cert
> MB: Supermicro X10SLL-F
> CPU: Intel Xeon E3-1231V3 Haswell 3.4GHz
> HS: Noctua NH-U9S
> RAM: 32GB Crucial DDR3 ECC 4x8GB
> RAID CARD 1: LSI 9211-8i flashed with P16IT Firmware
> HDD: 8x TOSHIBA PH3300U-1I72 3TB 7200 RPM
> Fans: 2xFractal 140mm / 2x Noctua 120mm / 3x Noctua 140mm


Heh, fancy seeing you here; that machine looks slightly familiar!


----------



## andyroo89

screenfetch


temporary setup.


----------



## link1393

I will need to post mine soon


----------



## Plan9

Quote:


> Originally Posted by *andyroo89*
> 
> screenfetch


Any particular reason you went for 32bit instead of 64bit on the Opteron?
And Ubuntu Desktop instead of Ubuntu Server? (you don't really want Xorg running on a server with only 2GB RAM)


----------



## andyroo89

Quote:


> Originally Posted by *Plan9*
> 
> Any particular reason you went for 32bit instead of 64bit on the Opteron?
> And Ubuntu Desktop instead of Ubuntu Server? (you don't really want Xorg running on a server with only 2GB RAM)


It's temporary until I find a suitable HDD to move my files to so I can "clean house"; then Ubuntu Server will be installed.

As for the HP Media Center, it gets fussy when I try to install Ubuntu Server on it. I was forced to use Xubuntu (or another distro variant), and I have attempted to stop Xorg from running, but it's no use.


----------



## Plan9

Quote:


> Originally Posted by *andyroo89*
> 
> it is temporary until I find suitable hdd to move my files so I can "clean house" then ubuntu server will be installed.
> 
> as for the hp media center, it gets fussy when I try to install ubuntu server on it. I was forced to use xubuntu (or any distro variant) and I have attempted to stop xorg from running but its no use.


Maybe try Debian. It's basically Ubuntu Server but without the bloat, and with better support for niche hardware/software configurations.


----------



## tiro_uspsss

Thought I might post a little side project of mine.. not in use/'production' yet.. pfSense box:







What it looks like in Device Manager (quick Windows 8.1 install & run, just to see):



total of 16 GbE ports + a single 100Mb port

specs:
2x Intel Xeon 'Sossaman' SL8WT 2GHz dual-core (heatsink is a Dynatron i65G) http://ark.intel.com/products/27222/Intel-Xeon-Processor-LV-2_00-GHz-2M-Cache-667-MHz-FSB
4x 2GB DDR2-400 ECC+REG Hynix
Tyan Tiger i7520SD S5365 http://www.tyan.com/archive/products/html/tigeri7520sd.html
2x Intel 1000GT PCI
2x Intel 1000MT 6-port PCIX
Intel 330 120GB
Hyper 560W PSU
6x 120mm fans
Lian Li V1100 Plus


----------



## cones

Why so many ports?


----------



## tiro_uspsss

Quote:


> Originally Posted by *cones*
> 
> Why so many ports?


The pfSense box will be acting as a router/switch, and I plan on messing around with VLANs, all while connected to only some rigs for now, with more down the track + VMs.








I was going to fill the two PCIe x8 slots with two more 6-port cards (the total would have been 28x GbE) but I think I have enough ports for now!


----------



## cones

Quote:


> Originally Posted by *tiro_uspsss*
> 
> the pfsense box will be acting as a router/switch & I plan on messing around with vlans, all while connected to only some rigs for now but some more down the track + VMs
> 
> 
> 
> 
> 
> 
> 
> 
> I was going to fill in the two PCIEx8 slots with two 6-port cards (total would have been 28x GbE) but I think I have enough ports for now!


My thought was that a switch must be cheaper than that.


----------



## NKrader

Quote:


> Originally Posted by *tiro_uspsss*
> 
> thought I might post a little side project of mine.. not is use/'production' yet.. pfsense box:
> 
> 
> 
> 
> 
> 
> what it looks like in device manager (quick windows 8.1 install & run just to see):
> 
> 
> 
> total of 16 GbE ports + a single 100Mb port
> 
> 
> 
> specs:
> 2x Intel Xeon 'Sossaman' SL8WT 2Ghz dual-core (heatsink is Dynatron i65G) http://ark.intel.com/products/27222/Intel-Xeon-Processor-LV-2_00-GHz-2M-Cache-667-MHz-FSB
> 4x 2GB DDR2-400 ECC+REG Hynix
> Tyan Tiger i7520SD S5365 http://www.tyan.com/archive/products/html/tigeri7520sd.html
> 2x Intel 1000GT PCI
> 2x Intel 1000MT 6-port PCIX
> Intel 330 120GB
> Hyper 560W PSU
> 6x 120mm fans
> Lian Li V1100 Plus


tiro,
I'm so glad you still rock your sammy; I wish I hadn't sold mine...


----------



## tiro_uspsss

Quote:


> Originally Posted by *cones*
> 
> My thought was a switch must be cheaper then that.


Yes & no.. seeing as it's pfSense, an equivalent switch would be _very_ high end. Keep in mind I already had the hardware minus the network cards lying around, so hey, why not?








Quote:


> Originally Posted by *NKrader*
> 
> tiro
> im so glad you still rock your sammy, i wish i wouldnt have sold mine...


I'm glad I still have mine. However, if I had the Intel mobo I would have given up on it long ago, seeing as it only has two slots. Heck, if I hadn't come across the idea of using my Sossaman as a pfSense box, I would have given up on it too. For me, only the Tyan mobo + pfSense make the Sossaman useful. For any other use I can think of, the Sossaman isn't enough.


----------



## Plan9

I quite like that pfsense box.

re cost: power draw would be more than an off-the-shelf product, but cost isn't the be-all and end-all in my opinion. Sometimes it's nice just having something that's been hacked together from scratch - even if just for the "_i made this_" gloat factor.


----------



## cones

Quote:


> Originally Posted by *tiro_uspsss*
> 
> yes & no.. seeing as its pfsense, an equivalent switch would be _very_ high end. Keep in mind I already had the hardware minus network cards lying around, so hey, why not?
> 
> 
> 
> 
> 
> 
> 
> 
> ...


I was thinking the cost might be about the same between those, but yes, why not. Usually people don't do that with pfSense, so it made me curious.


----------



## parityboy

Quote:


> Originally Posted by *Plan9*
> 
> I quite like that pfsense box.
> 
> re cost: power draw would be more than an off-the-shelf product, but cost isn't the be-all and end-all in my opinion. Sometimes it's nice just having something that's been hacked together from scratch - even if just for the "_i made this_" gloat *OCN* factor.


Fixed.


----------



## Plan9

nicely done


----------



## tiro_uspsss

Do workstations count?











specs:

2x Intel Xeon 5650 (s1366)
SuperMicro X8DTH
12x 4GB DDR3-1333 ECC+REG Samsung
2x HD7950
Intel GbE PCIEx1 card
HighPoint USB3
Creative X-Fi Titanium HD
Samsung XP941 256GB OS & apps
Samsung 850 Pro 128GB apps
Kingston HyperX 3K 120GB VMs
Pioneer DVDRW
Enermax Revolution 1050W
Lian Li PC-P80NB
Lamptron FC8
13 fans


----------



## EvilMonk

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Do workstations count?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> 2x Intel Xeon 5650 (s1366)
> SuperMicro X8DTH
> 12x 4GB DDR3-1333 ECC+REG Samsung
> 2x HD7950
> Intel GbE PCIEx1 card
> HighPoint USB3
> Creative X-Fi Titanium HD
> Samsung XP941 256Gb OS & apps
> Samsung 850 Pro 128GB apps
> Kingston HyperX 3K 120GB VMs
> Pioneer DVDRW
> Enermax Revolution 1050W
> Lian Li PC-P80NB
> Lamptron FC8
> 13 fans


Well it could be a server too so I don't see why not...


----------



## maddangerous

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Do workstations count?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> 2x Intel Xeon 5650 (s1366)
> SuperMicro X8DTH
> 12x 4GB DDR3-1333 ECC+REG Samsung
> 2x HD7950
> Intel GbE PCIEx1 card
> HighPoint USB3
> Creative X-Fi Titanium HD
> Samsung XP941 256Gb OS & apps
> Samsung 850 Pro 128GB apps
> Kingston HyperX 3K 120GB VMs
> Pioneer DVDRW
> Enermax Revolution 1050W
> Lian Li PC-P80NB
> Lamptron FC8
> 13 fans


Holy crap what do you use that for?


----------



## hawkeye071292

I like that pfsense box. I should spin up a virtual one to play around with.


----------



## maddangerous

Quote:


> Originally Posted by *hawkeye071292*
> 
> I like that pfsense box. I should spin up a virtual one to play around with.


Personally, I've been pretty interested in using one as a main router/firewall at my apartment. I just don't have the rig to do it yet really.


----------



## hawkeye071292

Quote:


> Originally Posted by *maddangerous*
> 
> Personally, I've been pretty interested in using one as a main router/firewall at my apartment. I just don't have the rig to do it yet really.


I have an ASA 5505 so I don't really "need" one. But if it dies, a pfsense box would be free since I have the hardware for it.


----------



## maddangerous

Quote:


> Originally Posted by *hawkeye071292*
> 
> I have an ASA 5505 so I don't really "need" one. But if it dies, a pfsense box would be free since I have the hardware for it.


ooooo I wouldn't mind having one of those either. Just a touch out of price range though lol (a lot)


----------



## EvilMonk

Quote:


> Originally Posted by *hawkeye071292*
> 
> I like that pfsense box. I should spin up a virtual one to play around with.


I have one running alongside other VMs on a 12-core/24-thread, 72GB ECC/Reg, 8-NIC DL180 G6 and I really love it... It's a great way to learn how to use it


----------



## EvilMonk

Quote:


> Originally Posted by *maddangerous*
> 
> ooooo I wouldn't mind having one of those either. Just a touch out of price range though lol (a lot)


Well, it's still good to learn more than one platform.
I have an ASA 5505 Security Plus, a Cisco 2651 and a Cisco 2851 ISR, and I still like discovering platforms I never had a chance to work with.
I bought a couple of Juniper boxes afterward just to learn them, and I don't even use them at work; I only touch Cisco and Checkpoint platforms at the office, but you never know when one of these other boxes will land in your hands.

I got my hands on a broken Checkpoint UTM-270 and fixed it, so I saved a huge amount compared to what it usually sells for.


----------



## hawkeye071292

Quote:


> Originally Posted by *maddangerous*
> 
> ooooo I wouldn't mind having one of those either. Just a touch out of price range though lol (a lot)


Once people start upgrading to the 5506, the 5505 will cost a LOT less. You can pick em up for around $150-160 right now. I got mine for $100.
Quote:


> Originally Posted by *EvilMonk*
> 
> I have one running with other VMs on a 12 cores / 24 threads 72 gb ecc/reg 8 nic DL180 G6 and I really love it... It's a great way to learn how to use it


I have 6 cores / 12 threads and 32GB of RAM in an ML350 G6. It wouldn't take much to pick up another CPU though; a CPU and cooler would be about $150 or so.

Quote:


> Originally Posted by *EvilMonk*
> 
> Well its still good to learn more than one platform.
> I have an ASA 5505 security plus, a Cisco 2651 and a Cisco 2851 ISR and I still like to discover other different ones I never had a chance to work with.
> I bought a couple of juniper boxes afterward just to learn them and I don't even use them at work. I just do Cisco and Checkpoint platforms at the office but you never know when you'll get one of these other boxes between your hands
> 
> 
> 
> 
> 
> 
> 
> I got my hands on a Checkpoint UTM-270 that was broken and I fixed it so I could save a huge amount of money compared to the price it usually sells for


A lot of places use Juniper boxes. We're a Cisco shop, but I rarely have to do much with them; we have other guys for that. I would love to get a 5506 though, so I could increase my internet package; I'm capped out on the 5505 now. I need to get a managed switch next though.


----------



## EvilMonk

Quote:


> Originally Posted by *hawkeye071292*
> 
> Once people start upgrading to the 5506, the 5505 will cost a LOT less. You can pick em up for around $150-160 right now. I got mine for $100.
> I have a 6 core 12 threads and 32GB ram in a ML350 G6. It wouldn't take much to pickup another CPU though. A CPU and cooler would be about $150 or so.
> A lot of places use Juniper boxes. We are a Cisco shop but I rarely have to do much with them. We have other guys for that. I would love to get a 5506 though. That way I could increase my internet package. I'm capped out on the 5505 now. I need to get a managed switch next though.


I got a good deal with a best offer and combined shipping on 3 Cisco switches: a Catalyst 2970 24-port PoE Gigabit, a Catalyst 3750 48-port Gigabit, and an ESW 540 48-port Gigabit, all on eBay a couple of months back. The seller was really nice; I sent him an offer for all 3 and asked if he could combine shipping, and he accepted my first offer without arguing. I got them less than 10 days later. You can find great deals, and if you make offers, you never know; they might say yes right away


----------



## wiretap

Intel i5 2500k Processor
Gigabyte Z77X-D3H Motherboard
8GB Corsair Dominator DDR3-1600
90GB Corsair ForceGT SSD [OS Drive]
20TB usable storage (FlexRAID with 2-drive redundancy, all WD Green 1TB/2TB)
Ceton InfiniTV 4
Rocketfish Case (modded to hold 20 hard drives)
Highpoint DC-7280 Datacenter HBA
Sans Digital TowerRAID 8-bay eSATA Enclosure
Zalman 850w Heatpipe Cooled PSU
WHS 2011

I just ordered a new server though, so I'll be replacing all that within the next few weeks: Supermicro X9SCM, Intel Xeon E3-1220v2, 16GB DDR3-1333 ECC RAM, Norco RPC-4220, all for $200 on eBay from a server wholesaler who didn't know what they had. I'm probably going to use Windows Server 2012 or Ubuntu Server, probably running SnapRAID. I might give FreeNAS a try though. Or maybe I'll just use ESXi, install both, and call it a day. I have specific OS requirements for my media server software and security camera software. I still have to order ~24TB of new hard drives for the data transition. I'd like to use a file system or RAID-style implementation that protects against silent data corruption, because over the last 5 years I've had one or two instances of it happening. Luckily I have backups of the critical things.
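On the silent-corruption point: filesystems like ZFS and Btrfs (and SnapRAID's scrub) catch bit rot by storing a checksum for every block and re-verifying it on read or scrub. The principle boils down to something like this file-level sketch (the manifest approach is illustrative, not any particular tool's format; ZFS does this per block, with the checksum kept in the parent metadata):

```python
# Minimal sketch of how checksumming catches silent (bit-rot) corruption:
# record a hash while the data is known-good, re-hash later, compare.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    # Hash in 1 MiB chunks so large media files don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(manifest: dict) -> list:
    """Return paths whose current hash no longer matches the recorded one."""
    return [p for p, digest in manifest.items() if sha256(Path(p)) != digest]

# Usage sketch: build the manifest while the data is known-good...
#   manifest = {str(p): sha256(p) for p in Path("/mnt/media").rglob("*") if p.is_file()}
# ...then periodically:
#   for bad in scrub(manifest):
#       print("silent corruption detected:", bad)
```

A scheme like this only detects rot; the self-healing part of ZFS comes from having redundancy (mirror/RAIDZ) to rebuild the bad block from.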


----------



## cones

Quote:


> Originally Posted by *wiretap*
> 
> Intel i5 2500k Processor
> Gigabyte Z77X-D3H Motherboard
> 8GB Corsair Dominator DDR3-1600
> 90GB Corsair ForceGT SSD [OS Drive]
> 20TB usable storage (FlexRAID with 2-drive redundancy, all WD Green 1TB/2TB)
> Ceton InfiniTV 4
> Rocketfish Case (modded to hold 20 hard drives)
> Highpoint DC-7280 Datacenter HBA
> Sans Digital TowerRAID 8-bay eSATA Enclosure
> Zalman 850w Heatpipe Cooled PSU
> WHS 2011
> 
> I just ordered a new server though, so I'll be replacing all that within the next few weeks. Supermicro X9SCM, Intel Xeon E3-1220v2, 16GB DDR3-1333 ECC RAM, Norco RPC-4220 -- all for $200 on Ebay from some server wholesale place who didn't know what they had. I'm probably going to use Windows Server 2012, or Ubuntu Server.. probably running SnapRAID. I might give FreeNAS a try though. Or maybe I'll just use ESXi and call it a day and install both. I have specific OS requirements for my media server software and security camera software. I still have to order ~24TB of new hard drives for my data transition. I'd like to use a file system or RAID style implementation that protects against silent data corruption, because over the last 5 years I've had one or two instances of it happening. Luckily I have backups of critical things.
> 
> 
> Spoiler: Warning: Spoiler!


I would go the VM route; it's really nice to have, and the new box would support it fine.


----------



## wiretap

Thanks. I'll give it a go.


----------



## cones

Quote:


> Originally Posted by *wiretap*
> 
> Thanks. I'll give it a go.


Just remember disk I/O is the most limited resource, followed by RAM and CPU.


----------



## wiretap

Yea, we use ESXi at work in a lot of our plant process data servers. For a home server, I should be ok with the hardware I chose. It isn't mission critical. I mostly just use it to serve HTPC's and record security cam footage.


----------



## hawkeye071292

We use FreeNAS or Openfiler just for data. I prefer just using VMware personally.


----------



## driftingforlife

Finally bought a server so I can start learning VMing at home









http://www.ebay.co.uk/itm/400899222693?_trksid=p2060353.m2763.l2649&ssPageName=STRK%3AMEBIDX%3AIT&rmvSB=true


----------



## alpenwasser

Quote:


> Originally Posted by *driftingforlife*
> 
> Finally bought a server so i can start learning VMing at home
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.ebay.co.uk/itm/400899222693?_trksid=p2060353.m2763.l2649&ssPageName=STRK%3AMEBIDX%3AIT&rmvSB=true


Careful there, I hear this can turn into an addiction...


----------



## EvilMonk

Quote:


> Originally Posted by *alpenwasser*
> 
> Careful there, I hear this can turn into an addiction...


Congrats on the server. For VMware you might want to get a good RAID controller that handles both SAS and SATA 6Gbps, and to increase the RAM. I'm a VMware admin and manage a 50+ server infrastructure. Depending on how many VMs you'll run (at least 6 or 8, maybe more depending on your enthusiasm, since it will be your only server for now), you might want to go to 32GB just to be safe, especially since FBDIMMs for those servers are cheap.
Besides that, it's a great server


----------



## alpenwasser

Quote:


> Originally Posted by *EvilMonk*
> 
> Congrats on the server, For vmware you might want to get a good raid controller that will manage both SAS & Sata 6Gbps and increase the ram capacity. I'm a vmware admin and manage a 50+ servers infrastructure. depending on the amount of VM you'll run (at least 6 I guess) you might want to get 32Gb of ram just to make sure especially that the FBDIMM for those servers is cheap.
> Beside that its a great server


I think you quoted the wrong post here, my friend. Your advice is still appreciated of course, but I just thought I'd mention that.


----------



## EvilMonk

Quote:


> Originally Posted by *alpenwasser*
> 
> I think you quoted the wrong post here, my friend. Your advice is still appreciated of course, but I just thought I'd mention that.


Yeah I did, sorry about that


----------



## wiretap

New server build is commencing. My case just came today.









Specs so far:
Norco RPC-4220
Intel Xeon E3-1220v2
Supermicro X9SCM
16GB Supertalent DDR3-1333 ECC
Zalman 500W Heatpipe Cooled PSU (just one I had laying around for testing - I'll upgrade before the final build)

CPU + Mobo + RAM = $200 on Ebay (wholesaler server pull -- with warranty, and I did a 24hr stress test and verified everything works great)
Case = $275 on Ebay (brand new in sealed box, with 120mm fan wall and dual OS drive mounting option)

I just ordered all new cooling fans:
ARCTIC Alpine 11 Plus CPU Cooler
3x ARCTIC F12 PWM PST
2x ARCTIC F8 PWM PST
6x Silverstone CPF03 PWM Fan Power Extension Cable
Swiftech 8-way PWM Fan Splitter

Cooling equipment = $120 on Amazon


----------



## christoph

that's a nice case


----------



## hawkeye071292

Quote:


> Originally Posted by *EvilMonk*
> 
> Congrats on the server, For vmware you might want to get a good raid controller that will manage both SAS & Sata 6Gbps and increase the ram capacity. I'm a vmware admin and manage a 50+ servers infrastructure. depending on the amount of VM you'll run (at least 6 or 8 maybe more depending on your enthusiasm I guess since it will be your only server for now) you might want to get 32Gb of ram just to make sure especially that the FBDIMM for those servers is cheap.
> Beside that its a great server


32GB is more than enough for a test environment. That's what mine has. I gave my vCenter server (the Windows one, not SUSE) 8GB, then my domain controller 8GB. That leaves me with another 16GB to play around with. I could knock both of those down to 4GB and be fine if needed.

Most Linux distros run fine on 2GB unless you are doing some intense work on them. Windows boxes need 2-4GB depending on the task. Just make sure you allocate enough CPU cores.

Unfortunately I can't run much more RAM without getting a second CPU or upgrading to all 8GB sticks, though =/
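For anyone budgeting RAM the same way, the arithmetic above is easy to sanity-check with a few lines (a throwaway sketch; the VM names and numbers just mirror this post):

```python
# Hypothetical helper for checking VM RAM allocations against the host total.
# With 32GB and 8GB each for vCenter and the DC, 16GB is left to play with.

def free_ram_gb(host_gb, allocations):
    """Return RAM left for new VMs after the listed allocations."""
    used = sum(allocations.values())
    if used > host_gb:
        raise ValueError(f"overcommitted by {used - host_gb}GB")
    return host_gb - used

vms = {"vcenter": 8, "domain_controller": 8}
print(free_ram_gb(32, vms))  # 16
```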


----------



## beers

Quote:


> Originally Posted by *vaeron*
> 
> Most linux distros run fine on 2gb unless you are doing some intense work on them.


Dang, and here I was using 256/512 MB slices for certain server apps









2 GB for some test bed *nix servers is pretty bloaty unless you are just installing the full desktop environment.


----------



## Photographer

Does a Dual 2011 socket workstation count?



2x Xeon E5-2665 (8 cores, 16 threads each)
Asus Z9PE D8-WS
8X8GB ADATA XPG 1600Mhz CL9
GTX 780 (replaced the 7970s those were temporary and needed CUDA)
1x Intel 330 Series 180GB (Boot)
1x Samsung 840 Evo 250 GB (programs)
1x Crucial M550 1TB (Data)
1x WD Black 4TB ( Archive and Backups)
Powered by Corsair AX 1200i

Oh, and for those wondering: yes, it's in a case now. This was a test bench picture from last year.


----------



## vaeron

Quote:


> Originally Posted by *beers*
> 
> Dang, and here I was using 256/512 MB slices for certain server apps
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2 GB for some test bed *nix servers is pretty bloaty unless you are just installing the full desktop environment.


True. When I run systems I like to push them hard. I also don't usually run a system with less than 4GB of RAM. *nix systems are great in that they are very lightweight, but you can get into some intense sys reqs if you have a heavy load on them.


----------



## jibesh

Quote:


> Originally Posted by *hawkeye071292*
> 
> 32GB is more than enough for a test environment. That's what mine has. I gave my vCenter server (Windows one not SUSE) 8GB *then my Domain Controller 8GB.*


Unless the domain controller is running many other services, 8GB is overkill for a domain controller in a test environment. You can easily run the domain controller with 1 to 2GB of RAM.


----------



## Blindsay

Anyone know, as a general rule of thumb, whether you have to install matching CPUs in a 2P Intel system? I have two L5520 CPUs (quad-core) and a single 6-core chip (forget which model offhand), but assuming the 6-core is compatible with the board, would I be able to run one quad and one 6-core?


----------



## wiretap

Quote:


> Originally Posted by *Blindsay*
> 
> Anyone know, as a general rule of thumb with a 2p intel system do you have to install matching cpus? I have 2 L5520's cpus (quadcore) and a single 6 core chip (forget which model off hand) but assuming the 6 core is compatible with the board would I be able to run 1 quad and 1 6 core?


In my experience, you need matching CPUs with matching stepping.


----------



## Rbby258

Quote:


> Originally Posted by *Blindsay*
> 
> Anyone know, as a general rule of thumb with a 2p intel system do you have to install matching cpus? I have 2 L5520's cpus (quadcore) and a single 6 core chip (forget which model off hand) but assuming the 6 core is compatible with the board would I be able to run 1 quad and 1 6 core?


They have to be the same microarchitecture.


----------



## Master__Shake

Quote:


> Originally Posted by *wiretap*
> 
> New server build is commencing. My case just came today.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Specs so far:
> Norco RPC-4220
> Intel Xeon E3-1220v2
> Supermicro X9SCM
> 16GB Supertalent DDR3-1333 ECC
> Zalman 500W Heatpipe Cooled PSU (just one I had laying around for testing - I'll upgrade before the final build)
> 
> CPU + Mobo + RAM = $200 on Ebay (wholesaler server pull -- with warranty, and I did a 24hr stress test and verified everything works great)
> Case = $275 on Ebay (brand new in sealed box, with 120mm fan wall and dual OS drive mounting option)
> 
> I just ordered all new cooling fans:
> ARCTIC Alpine 11 Plus CPU Cooler
> 3x ARCTIC F12 PWM PST
> 2x ARCTIC F8 PWM PST
> 6x Silverstone CPF03 PWM Fan Power Extension Cable
> Swiftech 8-way PWM Fan Splitter
> 
> Cooling equipment = $120 on Amazon


You can use this to mount your OS drive if you'd like.

The three holes on the fan wall and the one on the side are used to mount it:

http://www.amazon.com/Mounting-Norco-RPC-4224-RPC-4220-RPC-4216/dp/B00IK587QQ



Also, may I suggest this for a RAID card:

http://www.ebay.com/itm/LSI-MegaRAID-LSI9260-Low-Profile-BKT-4i-6Gb-s-SATA-SAS-PCI-e-x8-Lane-/121692824903?pt=LH_DefaultDomain_0&hash=item1c55752547

Cheapest you're going to find.

Also get one of these:

http://www.ebay.com/itm/Intel-SE91267-RES2SV240NC-RES2SV240-24-port-6-Gb-s-SATA-SAS-RAID-Expander-Card-/271912405082?pt=LH_DefaultDomain_0&hash=item3f4f3e085a

Then you can fill your case.


----------



## tycoonbob

Quote:


> Originally Posted by *Master__Shake*
> 
> you can use this to mount your os drive if you'd like.
> 
> the three holes on the fan wall and the one on the side are used to mount this.
> 
> http://www.amazon.com/Mounting-Norco-RPC-4224-RPC-4220-RPC-4216/dp/B00IK587QQ


This is awesome. I wish they existed (or at least, that I knew about them) when I did my RPC-4224 build like 3 years ago. I used something like these instead:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817974008

In fact, mine is the same thing, but SYBA brand. I do love them, though. A little more expensive, but they give easier access in some scenarios.


----------



## wiretap

Quote:


> Originally Posted by *Master__Shake*
> 
> you can use this to mount your os drive if you'd like.
> 
> the three holes on the fan wall and the one on the side are used to mount this.
> 
> http://www.amazon.com/Mounting-Norco-RPC-4224-RPC-4220-RPC-4216/dp/B00IK587QQ
> 
> 
> also may i suggest this for a raid card.
> 
> http://www.ebay.com/itm/LSI-MegaRAID-LSI9260-Low-Profile-BKT-4i-6Gb-s-SATA-SAS-PCI-e-x8-Lane-/121692824903?pt=LH_DefaultDomain_0&hash=item1c55752547
> 
> cheapest you're going to find.
> 
> also get one of these
> 
> http://www.ebay.com/itm/Intel-SE91267-RES2SV240NC-RES2SV240-24-port-6-Gb-s-SATA-SAS-RAID-Expander-Card-/271912405082?pt=LH_DefaultDomain_0&hash=item3f4f3e085a
> 
> then you can fill your case.


Thanks. My RPC-4220 came with two OS drive mounts that go on top of the 20 hot swap bays. For the RAID card, I have a Highpoint DC-7280 HBA that's in my existing server, but I'm not sure if I'm going to reuse it or not. I have been looking at using 3 of those IBM M1015's in IT mode.


----------



## vaeron

Quote:


> Originally Posted by *wiretap*
> 
> Thanks. My RPC-4220 came with two OS drive mounts that go on top of the 20 hot swap bays. For the RAID card, I have a Highpoint DC-7280 HBA that's in my existing server, but I'm not sure if I'm going to reuse it or not. I have been looking at using 3 of those IBM M1015's in IT mode.


The M1015 is an awesome card and only runs a few bucks more on eBay than the LSI they posted. I use it in all of my large storage servers.


----------



## EvilMonk

Quote:


> Originally Posted by *Blindsay*
> 
> Anyone know, as a general rule of thumb with a 2p intel system do you have to install matching cpus? I have 2 L5520's cpus (quadcore) and a single 6 core chip (forget which model off hand) but assuming the 6 core is compatible with the board would I be able to run 1 quad and 1 6 core?


You need matching CPUs, yes.







You can't mix different CPUs like what you had in mind (6 cores + 4 cores). You need to use identical ones (L5520 + L5520 or, for example, X5650 + X5650).


----------



## Rbby258

Quote:


> Originally Posted by *EvilMonk*
> 
> You need matching cpus yes
> 
> 
> 
> 
> 
> 
> 
> You can't mix different CPUs like what you had in mind (6 cores + 4 cores). You need to use identical ones (L5520 + L5520 or for example X5650 + X5650)


http://www.intel.com/content/www/us/en/processors/xeon/xeon-5500-vol-1-datasheet.html

Pages 25 and 26 have info on mixed CPUs.
Quote:


> Intel supports dual processor (DP) configurations consisting of processors:
> 1. from the same power optimization segment
> 2. that support the same maximum Intel QuickPath Interconnect and DDR3 memory speeds
> 3. that share symmetry across physical packages with respect to the number of logical processor per package, number of cores per package, number of Intel QuickPath interfaces, and cache topology
> 4. that have identical Extended Family, Extended Model, Processor Type, Family Code
> and Model Number as indicated by the function 1 of the CPUID instruction


That's some of the info.
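If you want to check a running box against that last criterion, here's a rough sketch (mine, not Intel's) that compares the family/model/stepping each processor reports in /proc/cpuinfo on Linux; the field names are the real /proc/cpuinfo keys, but the sample strings are made up:

```python
# Rough sketch of checking criterion 4 above by comparing what each logical
# CPU reports in /proc/cpuinfo. Stanzas are separated by blank lines.

FIELDS = ("cpu family", "model", "stepping")

def signatures(cpuinfo_text):
    """Collect the (family, model, stepping) tuple each logical CPU reports."""
    sigs, current = set(), {}
    for line in cpuinfo_text.splitlines():
        if line.strip():
            key, _, val = line.partition(":")
            current[key.strip()] = val.strip()
        elif current:  # blank line ends one processor's stanza
            sigs.add(tuple(current.get(f, "?") for f in FIELDS))
            current = {}
    if current:
        sigs.add(tuple(current.get(f, "?") for f in FIELDS))
    return sigs

def cpus_match(cpuinfo_text):
    """True when every processor reports an identical signature."""
    return len(signatures(cpuinfo_text)) == 1

# Made-up sample stanzas for two sockets:
matched = "cpu family: 6\nmodel: 26\nstepping: 5\n\ncpu family: 6\nmodel: 26\nstepping: 5\n"
mixed = "cpu family: 6\nmodel: 26\nstepping: 5\n\ncpu family: 6\nmodel: 44\nstepping: 2\n"
print(cpus_match(matched), cpus_match(mixed))  # True False
```

On a real box you'd feed it `open("/proc/cpuinfo").read()` instead of the sample strings.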


----------



## EvilMonk

Quote:


> Originally Posted by *Rbby258*
> 
> http://www.intel.com/content/www/us/en/processors/xeon/xeon-5500-vol-1-datasheet.html
> 
> Page 25 and 26 have info on mixed cpu's
> Thats some of the info.


This:

*4. that have identical Extended Family, Extended Model, Processor Type, Family Code
and Model Number as indicated by the function 1 of the CPUID instruction*

equals, in English, what I said here:

*You need matching CPUs, yes. You can't mix different CPUs like what you had in mind (6 cores + 4 cores). You need to use identical ones (L5520 + L5520 or for example X5650 + X5650)*


----------



## Rbby258

Quote:


> Originally Posted by *EvilMonk*
> 
> This :
> 
> *4. that have identical Extended Family, Extended Model, Processor Type, Family Code
> and Model Number as indicated by the function 1 of the CPUID instruction*
> 
> Equals in english
> 
> What I said by this :
> 
> *You need matching cpus yes smile.gif You can't mix different CPUs like what you had in mind (6 cores + 4 cores). You need to use identical ones (L5520 + L5520 or for example X5650 + X5650) thumb.gif*


Didn't say it didn't; more that it's a good read. You are able to mix CPUs, though it's not recommended.


----------



## camry racing

It's a Gigabyte H97 board paired with an i5-4460T, 4GB of Crucial Ballistix 1600MHz 1.35V low-profile memory, three WD Red 4TB drives, and an Intel SSD as the boot drive. It's running W8.1 since I had a hard time with drivers on this board under Windows Server 2012 Essentials. The goal was a low-power server, and I kind of made it: it only consumes 33W at idle and around 55W while doing something. It's running Plex Media Server, and my two computers at home back up to it using Windows File History.


----------



## deKmKrftR

I must say the x3650 is a lot nicer since it became "Lenovo System X"




This is a simple little 1x Xeon, 24GB RAM, RAID5 (1200GB) server for a client I got to "assemble"

I also put in an order for a 2x Xeon 6C, 32GB RAM, 400GB SSD RAID6 machine today; unfortunately it's through a cloud provider, so I will never get to see it


----------



## wiretap

Got some more parts in









Specs so far:
Norco RPC-4220 4U Case
Corsair RM1000 Power Supply
Supermicro X9SCM Motherboard
Intel E3-1220v2 Processor
Supertalent 16GB (4x4GB) DDR3-1333 ECC RAM
2x 250GB Samsung Evo 850 SSD's in RAID1
3x 120mm Arctic Cooling F12 PWM Fans
2x 80mm Arctic Cooling F8 Rev.2 PWM Fans
1x 92mm Arctic Cooling Alpine 11 Plus PWM CPU Fan
5x SFF-8087 to SFF-8087 cables

Soon I'm probably going to order 3x IBM ServeRAID M1015's so I can start adding HDDs. Not sure yet which drives I'm going with... probably HGST as a primary choice, with WD as a secondary choice (4TB or greater each). I need at least 20TB with two parity drives. This week I'm going to install Windows Server 2012 R2 on it and start setting it up.
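A quick back-of-envelope for that layout (a throwaway sketch assuming dedicated parity drives, SnapRAID-style, where each parity drive must be at least as large as the largest data drive):

```python
# Throwaway sketch: usable space with dedicated parity drives. We assign the
# biggest drives to parity (so they can cover any data drive) and sum the rest.

def usable_tb(drive_sizes_tb, parity_drives=2):
    """Usable capacity once `parity_drives` of the drives are given to parity."""
    drives = sorted(drive_sizes_tb, reverse=True)
    if parity_drives >= len(drives):
        raise ValueError("need at least one data drive")
    return sum(drives[parity_drives:])

print(usable_tb([4] * 7))  # 20 -> seven 4TB drives with two parity
```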


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> Got some more parts in
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Specs so far:
> Norco RPC-4220 4U Case
> Corsair RM1000 Power Supply
> Intel E3-1220v2 Processor
> Supertalent 16GB (4x4GB) DDR3-1333 ECC RAM
> 2x 250GB Samsung Evo 850 SSD's in RAID1
> 3x 120mm Arctic Cooling F12 PWM Fans
> 2x 80mm Arctic Cooling F8 Rev.2 PWM Fans
> 1x 92mm Arctic Cooling Alpine 11 Plus PWM CPU Fan
> 5x SFF-8087 to SFF-8087 cables
> 
> Soon I'm probably going to order 3x IBM ServeRAID M1015's so I can start adding HDD's. Not sure yet on which drives I'm going with.. probably either HGST as a primary choice and WD as a secondary choice. (4TB or greater each) I need at least 20TB with two parity drives. This week I'm going to install Windows Server 2012 R2 on it and start setting it up.


You've got a very nice start to your server there. I have to say I really love what I'm seeing, and it makes me rethink the two file servers I already have. I've been hesitating about getting two Norco server cases to replace the ones currently housing my file servers; I'd get all the room I need (my main one has an LSI 9260 + an HP SAS Expander with 4x 256GB Crucial MX100 in RAID 10 and 20x 2TB SATA 7.2K Seagate 6Gbps in RAID 6), but I really think you've helped change my mind now. Really, this build is what I call a well-made design...


----------



## wiretap

It's a really simple case, and perfect for a home server. I wouldn't really say it is up to corporate standards, but it will get the job done in a lab/home environment. The layout is great for consumer-style hardware and there's plenty of space inside for wire management. Consolidating everything into my 22U rack enclosure will keep the room quieter, and less dust will get into the computer since the rack has a glass door on it. But the Arctic Cooling fans I put in the Norco are dead silent... I can't even hear them spinning from 1 foot away. Best bang-for-the-buck fan IMO, and they have good airflow. I've had Noctua fans in the past, and I'd say these are pretty darn close for less than half the cost.


----------



## andyroo89

I just installed Ubuntu Server, so here is the new screenfetch on my Dell PowerEdge SC1435


----------



## Jeci

Parts are starting to arrive for my new FreeNas build









----------



## EvilMonk

Quote:


> Originally Posted by *Jeci*
> 
> Parts are starting to arrive for my new FreeNas build
> 
> 
> 
> 
> 
> 
> 
> :


That's for a FreeNAS build?









Damnnnnnnnnn


----------



## camry racing

talk about overkill for a freenas build...


----------



## 350 Malibu

Quote:


> Originally Posted by *camry racing*
> 
> talk about overkill for a freenas build...


Well of course it is, this is Overclock.net!


----------



## beers

Quote:


> Originally Posted by *Jeci*
> 
> Parts are starting to arrive for my new FreeNas build
> 
> 
> 
> 
> 
> 
> 
> :


I'd demote your aging desktop to the freeNAS build, instead


----------



## DzillaXx

New case next to the old one.

Not the final pic; some things have been added.

Like an eSATA card for two external eSATA hard drives, as well as another 2TB hard drive in the top 3.5" bay and a hot-swap bay with a 5TB HDD in the 5.25" bay.

For a total of 22TB of hard drive space, of which I only use 10TB for storage. The rest is for backing up the 10TB of storage, plus 2TB for backing up my PC's RAID. Also using a 64GB SSD for the OS.

A Harpertown X5450 @ 3.6GHz makes for a pretty decent server CPU-wise; it might use more power than newer chips these days, but CPU performance is still more than enough that I don't need anything else. Used an old P45 board with the 771-to-775 mod, which worked out great. Other than hard drive cost, the server cost was very low.









Using Windows Home Server 2011 (based on Windows Server 2008 R2).

Also have ClearOS on my Atom-powered Supermicro, in use as a router and future data server. Just don't have drives for it yet.


----------



## DzillaXx

Quote:


> Originally Posted by *wiretap*
> 
> Got some more parts in
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Specs so far:
> Norco RPC-4220 4U Case
> Corsair RM1000 Power Supply
> Supermicro X9SCM Motherboard
> Intel E3-1220v2 Processor
> Supertalent 16GB (4x4GB) DDR3-1333 ECC RAM
> 2x 250GB Samsung Evo 850 SSD's in RAID1
> 3x 120mm Arctic Cooling F12 PWM Fans
> 2x 80mm Arctic Cooling F8 Rev.2 PWM Fans
> 1x 92mm Arctic Cooling Alpine 11 Plus PWM CPU Fan
> 5x SFF-8087 to SFF-8087 cables
> 
> Soon I'm probably going to order 3x IBM ServeRAID M1015's so I can start adding HDD's. Not sure yet on which drives I'm going with.. probably either HGST as a primary choice and WD as a secondary choice. (4TB or greater each) I need at least 20TB with two parity drives. This week I'm going to install Windows Server 2012 R2 on it and start setting it up.


Don't overlook these

http://www.amazon.com/Toshiba-7200rpm-3-5-Inch-Internal-PH3500U-1I72/dp/B00OP2PKH2/ref=sr_1_1?ie=UTF8&qid=1437840407&sr=8-1&keywords=toshiba+5tb

I've got 3 running in my server, along with 3x 2TB drives (WD, Hitachi, and a Seagate that died and is waiting on RMA; don't buy Seagate) as well as a 1TB Hitachi.

For a total of 22TB, but I only use 10TB for data. The rest is for backing up.


----------



## 350 Malibu

Quote:


> Originally Posted by *DzillaXx*
> 
> Don't overlook these
> 
> http://www.amazon.com/Toshiba-7200rpm-3-5-Inch-Internal-PH3500U-1I72/dp/B00OP2PKH2/ref=sr_1_1?ie=UTF8&qid=1437840407&sr=8-1&keywords=toshiba+5tb
> 
> I've got 3 running in my server, along with 3x 2TB drives (WD, Hitachi, Seagate (which died, waiting on RMA. Don't buy Seagate)) as well as a 1TB Hitachi.
> 
> For a Total of 22TB, but only use 10TBs for Data. Rest is for backing up.


I've had 12 of those Toshibas spun up in my server (RAID50) for over 3 months now, and they do seem to be very good drives for the money.


----------



## CSCoder4ever

Well, my server has gone through a lot; the only parts that have remained through the entire build are the i3 and the memory.

OS: Debian Linux 8.1
Case: Antec 300
CPU: Intel Core i3 2100
Motherboard: MSI b75a-g43
Cooling: Xigmatek Dark Knight v1 and 2 120mm cougar vortex fans for intake
Memory: Patriot Memory gamer 2 8GB (2 x 4GB) PC3 10666 @ 1333
PSU: Antec neo 520W PSU
Storage HDD(s): WD Caviar Red 1TB ( storage, will get more drives at some point ), Crucial C300 64GB SSD ( boot )
Server Manufacturer: I assembled it.
Uses: NFS + Samba ( file serving ), git server, Continuous Integration, and of course backup





pardon the quality, still trying to get used to this phone's blasted camera


----------



## wiretap

Quote:


> Originally Posted by *DzillaXx*
> 
> Don't overlook these
> 
> http://www.amazon.com/Toshiba-7200rpm-3-5-Inch-Internal-PH3500U-1I72/dp/B00OP2PKH2/ref=sr_1_1?ie=UTF8&qid=1437840407&sr=8-1&keywords=toshiba+5tb
> 
> I've got 3 running in my server, along with 3x 2TB drives (WD, Hitachi, Seagate (which died, waiting on RMA. Don't buy Seagate)) as well as a 1TB Hitachi.
> 
> For a Total of 22TB, but only use 10TBs for Data. Rest is for backing up.


I ended up going with some 4TB WD SSHD's and 3x IBM M1015's and flashed them to IT mode. The server is built and I'm about 1/4 the way through my data transfers. (read and write is ~170MB/sec disk to disk) Once the data transfer is complete, I'm going to add 22TB more from the old server.







It should bring me up to ~38TB.


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> I ended up going with some 4TB WD SSHD's and 3x IBM M1015's and flashed them to IT mode. The server is built and I'm about 1/4 the way through my data transfers. (read and write is ~170MB/sec disk to disk) Once the data transfer is complete, I'm going to add 22TB more from the old server.
> 
> 
> 
> 
> 
> 
> 
> 
> It should bring me up to ~38TB.


I love it!!!!! Great work man


----------



## DzillaXx

Quote:


> Originally Posted by *wiretap*
> 
> I ended up going with some 4TB WD SSHD's and 3x IBM M1015's and flashed them to IT mode. The server is built and I'm about 1/4 the way through my data transfers. (read and write is ~170MB/sec disk to disk) Once the data transfer is complete, I'm going to add 22TB more from the old server.
> 
> 
> 
> 
> 
> 
> 
> It should bring me up to ~38TB.


Yeah ^ like he said, nice job.

WD is always a great choice. Wonder how the SSD part of those drives will affect overall performance.


----------



## cones

I thought those cards could handle more HDDs?


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> I thought those cards could handle more HDDs?


8 each, like most SAS RAID adapters (4 HDDs per port on a 2-port SAS card)...
If you want to handle more per card you need a SAS expander or a card with more than 2 ports, like the 16i versions...
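The math behind that rule of thumb is just lanes per connector (a toy sketch; the function name is made up):

```python
# Toy sketch of the port math above: one SFF-8087 wide port carries 4 SAS/SATA
# lanes, so drives-per-card is ports x 4 when direct-attaching with breakout
# cables (an expander multiplies this further).

LANES_PER_WIDE_PORT = 4

def max_direct_drives(wide_ports):
    """Drives reachable with plain forward-breakout cables, no expander."""
    return wide_ports * LANES_PER_WIDE_PORT

print(max_direct_drives(1))  # 4  (a -4i card like the 9260-4i)
print(max_direct_drives(2))  # 8  (an -8i card like the M1015)
print(max_direct_drives(4))  # 16 (a -16i card)
```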


----------



## Master__Shake




----------



## EvilMonk

Quote:


> Originally Posted by *Master__Shake*


SAS Expander on a 1 port LSI SAS 9260-4i card...















I run 24 drives on the 8i version of the same card with an HP SAS Expander.


----------



## Master__Shake

Quote:


> Originally Posted by *EvilMonk*
> 
> SAS Expander on a 1 port LSI SAS 9260-4i card...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I run 24 drives on the 8i version of the same card with an HP SAS Expander.


There's actually 2.

I'm daisy-chaining them


----------



## wiretap

Quote:


> Originally Posted by *cones*
> 
> I thought those cards could handle more HDDs?


8 each for the M1015's. I only have 7x 4TB (with 2 parity) connected right now, plus 2x 250GB Samsung Pro SSDs for the ESXi datastore. I'll be adding my other 2TB hard drives to the Norco case after I complete all my transfers and finish the SnapRAID sync.
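For anyone curious what that looks like on the SnapRAID side, a minimal snapraid.conf for a dual-parity layout might look roughly like this (mount points and labels are made up; check the SnapRAID manual for the full option list):

```
# Two dedicated parity drives (should be the largest drives in the array)
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# Content files (keep copies on more than one drive)
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data drives
disk d1 /mnt/disk1/
disk d2 /mnt/disk2/
disk d3 /mnt/disk3/
disk d4 /mnt/disk4/
disk d5 /mnt/disk5/
```

After editing the config, `snapraid sync` builds the parity and `snapraid scrub` periodically verifies it.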


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> 8 each, like most SAS raid adapters (4 HDDs per port for a 2 ports SAS card)...
> If you want to handle more per card you need an SAS expander or a card with more than 2 ports, like the 16i version cards...


Quote:


> Originally Posted by *wiretap*
> 
> 8 each for the M1015's. I only have 7 4TB in (with 2 parity) connected right now, and 2 250GB Samsung Pro SSD's for ESXi datastore. I'll be adding my other 2TB hard drives to the Norco case after I complete all my transfers and complete the SnapRAID sync.


Knew about the expanders but didn't know any of the numbers. I'm still wanting to get a better way for storage and you aren't helping that.


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> Knew about the expanders but didn't know any of the numbers. I'm still wanting to get a better way for storage and you aren't helping that.


What can we explain and help you understand better?


----------



## wiretap

Quote:


> Originally Posted by *cones*
> 
> Knew about the expanders but didn't know any of the numbers. I'm still wanting to get a better way for storage and you aren't helping that.


What type of storage method would you like?

1) You can use a single RAID controller hooked up to an expander (IBM M1015 or similar + Intel RES2SV240)
2) You can use multiple RAID controllers (i.e. IBM M1015 or similar)
3) You can use one large HBA with 24+ ports (i.e. RocketRAID DC7280 or Rocket 750)

These are just a small handful of options you have.


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> What cam we explain and help you understand better?


Quote:


> Originally Posted by *wiretap*
> 
> What type of storage method would you like?
> 
> 1) You can use a single RAID controller hooked up to an expander (IBM M1015 or similar + Intel RES2SV240)
> 2) You can use multiple RAID controllers (i.e. IBM M1015 or similar)
> 3) You can use one large HBA with 24+ ports (i.e. RocketRAID DC7280 or Rocket 750)
> 
> These are just a small handful of options you have.


I'm good; I meant I'm getting jealous when you keep posting the pictures. I don't have the money right now to upgrade my server; more important things need it. When I do, I'll probably go with Unraid and an IBM M1015 so I can also use VMs. Also, I haven't looked at all the details of the current cards; I just have a basic idea of what's possible.


----------



## Jeci

Quote:


> Originally Posted by *EvilMonk*
> 
> That's for a FreeNAS build?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Damnnnnnnnnn


Quote:


> Originally Posted by *camry racing*
> 
> talk about overkill for a freenas build...


Quote:


> Originally Posted by *350 Malibu*
> 
> Well of course it is, this Overclock.net!


Quote:


> Originally Posted by *beers*
> 
> I'd demote your aging desktop to the freeNAS build, instead


Yes, it's massively overkill. I was going to get a single 2603v2 & a Supermicro X9SRL, but ended up finding a pair of 2603s & an X9DIA for the same price.

I'll be running Plex in jails as well to try to utilise the extra horsepower


----------



## Aximous

Quote:


> Originally Posted by *wiretap*
> 
> 8 each for the M1015's. I only have 7 4TB in (with 2 parity) connected right now, and 2 250GB Samsung Pro SSD's for ESXi datastore. I'll be adding my other 2TB hard drives to the Norco case after I complete all my transfers and complete the SnapRAID sync.


On the SSDs are you running two separate datastores or do you have them combined in some way?


----------



## wiretap

Quote:


> Originally Posted by *Aximous*
> 
> On the SSDs are you running two separate datastores or do you have them combined in some way?


Two separate SSDs for two separate datastores. I have it configured right now for the virtual machines to do their OS backups to the other datastore with the Veeam backup utility.


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> Two separate SSD's for two separate datastores. I have it configured right now for the virtual machines to do the OS backups to the other datastore with Veeam Backup utility.


That is a good little backup setup!


----------



## stolid

An update on my old home server since I just upgraded its storage, RAM, and it no longer lives in a cardboard box.

*OS*: Debian Stable
*Case*: Tray
*CPU*: 4x Opteron 8431 2.4GHz hexacores
*Motherboard*: Supermicro H8QME-2+
*Memory*: 32GB (16x 2GB) DDR2-667 ECC Registered
*PSU*: Corsair CX750M
*OS HDD*: Old WD 160GB laptop hard drive
*Storage*: 2x Toshiba 5TB in a ZFS mirror
and 2x Samsung F3 1TB in a ZFS mirror (probably retiring these soon)
*Cooling*: 4x Cooler Master Hyper TX-3

I use it as a NAS and VM host. It runs ZFSonLinux and has a few virtual machines (KVM as hypervisor). Super overkill and not terribly power efficient, but most of the parts were cheap off ebay a couple years ago.


----------



## pm40elys40

My new NVR/PVR build for my brother, used for surveillance data and as a TV server. Crappy phone pic, forgot to take others!
The one you see was a test build with slightly different components. Definitive config:
*CPU* Intel Core i3-4130T 35W
*System cooling* Intel HTS1155LP with metal air duct to cool hard disk drives, all with one fan
*Mobo* Asus Q87T
*Case* Inter-tech 1U compact mini-ITX case, adapted to thin-ITX and 3x2.5" cooled drive positions (under duct)
*Memory* 2x Kingston HX316LS9IB4 DDR3L 8192MB
*Boot drive* Samsung 850PRO MZ-7TE128BW
*Storage drives* 3x WD10JFCX Red 2.5"
*Power supply* Cooler Master USNA95 slim notebook brick (95W)
*OS* Windows 7 Home Premium X64

*System power consumption* 25W on normal operation


Hard drives are configured as 2x1.0TB RAID1 for surveillance and 1x1.0TB for TV. Network connectivity via integrated Intel I217-LM chip


----------



## camry racing

Quote:


> Originally Posted by *pm40elys40*
> 
> My new NVR/PVR build for my brother. Usage for surveillance data and TV server. Crappy phone pic, forgot to take others!
> That one you see was a test build with slightly different components. Definitive config:
> *CPU* Intel Core i3-4130T 35W
> *System cooling* Intel HTS1155LP with metal air duct to cool hard disk drives, all with one fan
> *Mobo* Asus Q87T
> *Case* Inter-tech 1U compact mini-ITX case, adapted to thin-ITX and 3x2.5" cooled drive positions (under duct)
> *Memory* 2x Kingston HX316LS9IB4 DDR3L 8192MB
> *Boot drive* Samsung 850PRO MZ-7TE128BW
> *Storage drives* 3x WD10JFCX Red 2.5"
> *Power supply* Cooler Master USNA95 slim notebook brick (95W)
> *OS* Windows 7 Home Premium X64
> 
> *System power consumption* 25W on normal operation
> 
> 
> Hard drives are configured as 2x1.0TB RAID1 for surveillance and 1x1.0TB for TV. Network connectivity via integrated Intel I217-LM chip


Win8.1 would be a better OS for this, btw. I see you managed to consume 5W less than mine... seems like my PSU is not that efficient


----------



## pm40elys40

Win7 works better with business and not-so-new programs like the D-Link surveillance monitor, so for now it's no Win8.1 and no Win10 here.








Power consumption hovers between 25 and 30W, but that is achieved by limiting the CPU clock to 1.1GHz (like the 13W Xeon 1220Lv3) and disabling USB 3.0, since there are no USB 3.0 peripherals and we don't want hassles with the Auto or Smart Auto xHCI controller modes. Since the A/V apps need an audio card to work, I had to leave both the Realtek and Intel audio adapters enabled even with no devices attached. Disabling the secondary Realtek 8111G NIC could shave off 1-1.5 watts more, and maybe a very little more from disabling Intel display audio, since the system is remotely controlled via LAN.

Before production I tested a FreeNAS installation and was able to cut power to 23W by limiting the CPU to 1.1GHz, the GPU to 400MHz, all audio off, secondary LAN off, PCH set to AHCI, and all LPM/ASPM on.


----------



## wiretap

Server is 99% done. I transitioned everything from my old server to the new ESXi build.

Just in the process of adding the 24TB of storage from my old server to the new one. I should have ~40TB (with triple parity) when I format the old drives and append them to the SnapRAID array. As it sits, I have 20TB in there right now. I should be finished adding drives tomorrow so it fills up all the bays in the Norco.

Final server specs:
Norco RPC-4220
3x Arctic Cooling F12 PWM Fans
2x Arctic Cooling F8 v2 PWM Fans
1x Arctic Cooling Alpine 11 Plus PWM CPU Cooler
Corsair RM1000 PSU
Xeon E3-1240 Processor
16GB Supertalent DDR3-1333 ECC Unbuffered RAM
Supermicro X9SCM Motherboard
3x IBM M1015's flashed to IT Mode
2x 250GB Samsung 850 Pro SSD's
7x 4TB Western Digital SSHD's
13x 2TB Western Digital Green HDD's
---
ESXi 6.0
VM1: Windows 8.1 x64 w/ MCE (for ServerWMC CableCARD support to serve the HTPC's via Emby Server)
VM2: TBD (thinking of adding a network management option here to monitor everything, send notifications, manage backups, etc)

I'm probably going to order 32GB of RAM sometime in the near future, since right now I have 4x4GB sticks. I only have 8GB dedicated to VM1 at the moment, since I want to give each OS at least 8GB. But, I want to give the file server 16GB, and maybe give two more VM's 8GB each. Not sure yet, but I love ESXi now.

I pretty much made a seamless transition from my old file server to the virtualized one. I installed the VM OS, then transferred all my data to the new server (~18TB worth), then installed all my programs and services to mirror the old one, then turned off the old server and changed the IP address of the VM server to match the old one. My security cameras, HTPC's, and other networked clients didn't even notice a difference.

The parity build speed of SnapRAID with the SSHD's was pretty good. I did ~18TB of data in 8 hours with dual parity.
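That build rate works out to a healthy aggregate throughput; a quick check (decimal TB, ignoring overhead):

```python
# Sanity-checking the parity build rate quoted above: ~18TB in 8 hours comes
# out to roughly 625 MB/s aggregate across the array.

def mb_per_sec(tb, hours):
    """Average throughput for `tb` terabytes moved in `hours` hours."""
    return tb * 1e6 / (hours * 3600)  # 1 TB = 1,000,000 MB (decimal)

print(round(mb_per_sec(18, 8)))  # 625
```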


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> Server is 99% done. I transitioned everything from my old server to the new ESXi build.
> 
> Just in the process of adding the 24TB of storage from my old server to the new one. I should have ~40TB (with triple parity) when I format the old drives and append them to the SnapRAID array. As it sits, I have 20TB in there right now. I should be finished adding drives tomorrow so it fills up all the bays in the Norco.
> 
> Final server specs:
> Norco RPC-4220
> 3x Arctic Cooling F12 PWM Fans
> 2x Arctic Cooling F8 v2 PWM Fans
> 1x Arctic Cooling F11 Plus PWM CPU Cooler
> Corsair RM1000 PSU
> Xeon E3-1240 Processor
> 16GB Supertalent DDR3-1333 ECC Unbuffered RAM
> Supermicro X9SCM Motherboard
> 3x IBM M1015's flashed to IT Mode
> 2x 250GB Samsung 850 Pro SSD's
> 7x 4TB Western Digital SSHD's
> 13x 2TB Western Digital Green HDD's
> ---
> ESXi 6.0
> VM1: Windows 8.1 x64 w/ MCE (for ServerWMC CableCARD support to serve the HTPC's via Emby Server)
> VM2: TBD (thinking of adding a network management option here to monitor everything, send notifications, manage backups, etc)
> 
> I'm probably going to order 32GB of RAM sometime in the near future, since right now I have 4x4GB sticks. I only have 8GB dedicated to VM1 at the moment, since I want to give each OS at least 8GB. But, I want to give the file server 16GB, and maybe give two more VM's 8GB each. Not sure yet, but I love ESXi now.
> 
> I pretty much made a seamless transition from my old file server to the virtualized one. I installed the VM OS, then transferred all my data to the new server (~18TB worth), then installed all my programs and services to mirror the old one, then turned off the old server and changed the IP address of the VM server to match the old one. My security cameras, HTPC's, and other networked clients didn't even notice a difference.
> 
> The parity build speed of SnapRAID with the SSHD's was pretty good. I did ~18TB of data in 8 hours with dual parity.


Wow, great job! I love it. You made an awesome project, and the final result is simply amazing.


----------



## levontraut

Quote:


> Originally Posted by *levontraut*
> 
> I have just upgraded my games rig and turned it into a Main server for myself.
> 
> the Specs are in my Sig.
> 
> here is a brief look at it though
> 
> Mobo:
> gigabyte 990fxa ud7
> 
> CPU
> 8350
> 
> RAM
> 32 gig 1866
> 
> HDD:
> lots ( can not fit anymore in the case)
> 
> OS
> Server2012
> 
> It is taking a lot of time to set it up correctly, the file sharing is done, Teamspeak server is done now to do the backup etc...


This has been upgraded since last time I wrote this down.

New controller card.
Replaced the 1 TB drives with 3TB drives.
I have just under 40TB worth of storage.

Will post a pic later.


----------



## wiretap

Done adding drives. I decided to keep 3 bays of my Norco open to make swapping to new drives easier in case of future expansion. This should keep me with enough free space for a little while.


----------



## jibesh

Quote:


> Originally Posted by *wiretap*
> 
> Done adding drives. I decided to keep 3 bays of my Norco open to make swapping to new drives easier in case of future expansion. This should keep me with enough free space for a little while.


That's a lot of pr0n.


----------



## Archer13

Case: Fractal Design Define R2
CPU: Xeon E3-1230v2
MB: Asus P8C-WS
RAM: 32Gb Crucial 1600 ECC

2x Sapphire R9 280X (Arctic Accelero Hybrid & Kraken G10+X41)

SATA controller: IBM M1115, re-flashed to LSI 9211-8i (IT mode)

HDD:
RAIDZ2
4 x Samsung F2 1.5TB (yes, still alive!)
2 x Seagate Barracuda 1.5TB

Mirror:
2 x WD Red 3TB

SSD:
Samsung 850 Pro 128Gb + Crucial MX100 250Gb for VMs

ESXi 6.0

VMs:
NAS - Debian 7 with LSI pass-through, Openmediavault, ZFS, UPS software, Torrent, NFS, CIFS, Netatalk
Desktop 1 - Win10 with GPU pass-through, USB pass-through
Desktop 2 - Mac OS X with GPU pass-through, USB pass-through


----------



## EpicAMDGamer

Quote:


> Originally Posted by *Archer13*
> 
> Case: Fractal Design Define R2
> CPU: Xeon E3-1230v2
> MB: Asus P8C-WS
> RAM: 32Gb Crucial 1600 ECC
> 
> 2x Sapphire R9 280X (Arctic Accelero Hybrid & Kraken G10+X41)
> 
> SATA controller: IBM M1115, re-flashed to LSI 9211-8i (IT mode)
> 
> HDD:
> RAIDZ2
> 4 x Samsung F2 1.5TB (yes, still alive!)
> 2 x Seagate Barracuda 1.5TB
> 
> Mirror:
> 2 x WD Red 3TB
> 
> SSD:
> Samsung 850 Pro 128Gb + Crucial MX100 250Gb for VMs
> 
> ESXi 6.0
> 
> VMs:
> NAS - Debian 7 with LSI pass-through, Openmediavault, ZFS, UPS software, Torrent, NFS, CIFS, Netatalk
> Desktop 1 - Win10 with GPU pass-through, USB pass-through
> Desktop 2 - Mac OS X with GPU pass-through, USB pass-through


Very nice setup! What UPS do you have?
Quote:


> Originally Posted by *pm40elys40*
> 
> My new NVR/PVR build for my brother. Usage for surveillance data and TV server. Crappy phone pic, forgot to take others!
> That one you see was a test build with slightly different components. Definitive config:
> *CPU* Intel Core i3-4130T 35W
> *System cooling* Intel HTS1155LP with metal air duct to cool hard disk drives, all with one fan
> *Mobo* Asus Q87T
> *Case* Inter-tech 1U compact mini-ITX case, adapted to thin-ITX and 3x2.5" cooled drive positions (under duct)
> *Memory* 2x Kingston HX316LS9IB4 DDR3L 8192MB
> *Boot drive* Samsung 850PRO MZ-7TE128BW
> *Storage drives* 3x WD10JFCX Red 2.5"
> *Power supply* Cooler Master USNA95 slim notebook brick (95W)
> *OS* Windows 7 Home Premium X64
> 
> *System power consumption* 25W on normal operation
> 
> 
> Hard drives are configured as 2x1.0TB RAID1 for surveillance and 1x1.0TB for TV. Network connectivity via integrated Intel I217-LM chip


It's sort of sad to see that your server, which has a much more powerful processor, and multiple hard drives, pulls the same wattage as my Supermicro 1U at basically idle. You've got yourself one power-efficient server!

There's no way I'm going out and spending the money I'd have to spend on a mobo that was as power efficient as yours though, so for now, the little atom will do.


----------



## Archer13

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> Very nice setup! What UPS do you have?


Thanks.
UPS: Eaton Ellipse PRO 1200


----------



## camry racing

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> Very nice setup! What UPS do you have?
> It's sort of sad to see that your server, which has a much more powerful processor, and multiple hard drives, pulls the same wattage as my Supermicro 1U at basically idle. You've got yourself one power-efficient server!
> 
> There's no way I'm going out and spending the money I'd have to spend on a mobo that was as power efficient as yours though, so for now, the little atom will do.


I think it's his power supply that's efficient. My server consumes 31W in normal operation with 3 WD Reds and 1 SSD, and the PSU is a Corsair CS 430 with an i5-4460T.


----------



## Rbby258

Do people find gigabit fast enough for their needs?


----------



## wiretap

For a home environment, I find GigE fast enough. I'm not in too much of a hurry with my data transfers, whereas if I was in a corporate environment on a time schedule I would be. I just transferred ~18TB of data across my network when I migrated servers and it didn't take all that long. (about 3 days)
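A rough sanity check on that 3-day figure (a sketch; the ~110 MB/s sustained rate is an assumed average, not a measurement from the post):

```python
# Back-of-envelope check on moving ~18TB over gigabit Ethernet.
# Assumes ~110 MB/s sustained average (single-stream GigE tops out
# just under 118 MB/s; disks and pauses drag the average lower).
data_bytes = 18e12                 # ~18 TB of data
rate_bytes_per_sec = 110e6         # assumed sustained transfer rate
days = data_bytes / rate_bytes_per_sec / 86400
print(round(days, 1))              # ~1.9 days of pure transfer time
```

So about two days of raw transfer time; roughly three days including interruptions and overhead is entirely plausible.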


----------



## jibesh

Quote:


> Originally Posted by *Rbby258*
> 
> Do people find gigabit fast enough for your needs?


Quote:


> Originally Posted by *wiretap*
> 
> For a home environment, I find GigE fast enough. I'm not in too much of a hurry with my data transfers, whereas if I was in a corporate environment on a time schedule I would be. I just transferred ~18TB of data across my network when I migrated servers and it didn't take all that long. (about 3 days)


I'm impatient, so my storage network is 10GbE but the rest of the network is 1GbE.


----------



## Rbby258

I'm going to be moving / expanding all my storage to a server, and I currently use StableBit's storage pool with an SSD cache, which I'm going to miss. The only thing I can really do is get 10GbE, but it's too expensive really.

If I did have 10GbE, how do file transfers work when you directly connect 2 computers together but use the internet through the motherboard's gigabit ports?


----------



## jibesh

Quote:


> Originally Posted by *Rbby258*
> 
> I'm going to be moving / expanding all my storage to a server and currently use stablebit storage pool with a ssd cache which I'm going to miss. The only thing i can really do is get 10GbE but its too expensive really.


Direct connecting 2 computers together with 10GbE is not expensive at all. Easily doable within $100 most of the time.

HP / Mellanox MNPA19-XTR 10GbE Adapter - $25 x 2 = $50
Finisar FTLX8571D3BCL 10GbE SFP+ Transceiver - $20 x 2 = $40
Multi Mode fiber cable (OM2/OM3/OM4) - price varies on length (i.e. $0.50 per ft)

Quote:


> Originally Posted by *Rbby258*
> 
> If i did have 10GbE how does files transfers work when you directly connect 2 computers together but use internet with the motherboards gigabit ports?


You would assign each adapter an IP address on a different subnet than your main network without a gateway defined.

Example:
Server - 192.168.5.2 / 255.255.255.0
Computer - 192.168.5.3 / 255.255.255.0
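A quick way to see why this works (a Python sketch using the example addresses above; the 192.168.1.1 gateway is a made-up stand-in for your main LAN's router):

```python
import ipaddress

# Point-to-point 10GbE addressing from the example above. No default
# gateway is set on either 10GbE adapter, so only traffic destined
# for this /24 ever uses the direct link; everything else (internet
# included) leaves via the motherboard's gigabit NIC.
link_net = ipaddress.ip_network("192.168.5.0/24")
desktop = ipaddress.ip_address("192.168.5.3")
lan_gateway = ipaddress.ip_address("192.168.1.1")  # hypothetical main-LAN gateway

def uses_direct_link(dst):
    """True when the destination is on the directly connected subnet."""
    return dst in link_net

print(uses_direct_link(desktop))      # True  -> goes over the 10GbE link
print(uses_direct_link(lan_gateway))  # False -> goes out the 1GbE NIC
```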


----------



## beers

Quote:


> Originally Posted by *Rbby258*
> 
> If i did have 10GbE how does files transfers work when you directly connect 2 computers together but use internet with the motherboards gigabit ports?


You'd just reference the 10GbE IP address instead of the other NIC one.

Typically for back-to-back you would set up a separate subnet just for those two NICs to talk together. From the configuration panel when assigning an IP you'd just leave the default gateway blank. Then any other traffic goes out your integrated NIC but any traffic toward the other server NIC just goes out of that interface instead.

I had a similar setup at one point and even routed that segregated subnet out of my fileserver upstream/back, so I didn't need anything but the point-to-point 10GbE link on my desktop to access the rest of the network. So, you have some flexibility.


----------



## Rbby258

@jibesh @beers Thanks for the info


----------



## camry racing

Quote:


> Originally Posted by *jibesh*
> 
> Direct connecting 2 computers together with 10GbE is not expensive at all. Easily doable within $100 most of the time.
> 
> HP / Mellanox MNPA19-XTR 10GbE Adapter - $25 x 2 = $50
> Finisar FTLX8571D3BCL 10GbE SFP+ Transceiver - $20 x 2 = $40
> Multi Mode fiber cable (OM2/OM3/OM4) - price varies on length (i.e. $0.50 per ft)
> You would assign each adapter an IP address on a different subnet than your main network without a gateway defined.
> 
> Example:
> Server - 192.168.5.2 / 255.255.255.0
> Computer - 192.168.5.3 / 255.255.255.0


So, correct me if I'm wrong, but no internet would pass through that? Just the connection to the server or other computer?


----------



## jibesh

Quote:


> Originally Posted by *camry racing*
> 
> so correct me if I'm wrong but no internet would pass throught that just only conection to the server or other computer??


Correct. No gateway is defined so it won't use that direct connection to route traffic to other subnets.


----------



## driftingforlife

Or you could get some 4-port GbE PCI-E cards; that's what I'm doing at some point.


----------



## Rbby258

Quote:


> Originally Posted by *driftingforlife*
> 
> Or you could get some 4-port GbE PCI-E cards, thats what im doing at some point.


I don't think that speeds up file transfers; it just keeps multiple connections to the server from slowing each other down. Unless you have a managed switch? I could be wrong though.


----------



## beers

Quote:


> Originally Posted by *driftingforlife*
> 
> Or you could get some 4-port GbE PCI-E cards, thats what im doing at some point.


Quote:


> Originally Posted by *Rbby258*
> 
> I don't think that speeds up file transfers, just don't slow down multiple connections to the server. Unless you have a managed switch? I could be wrong though.


^ That.

You need a managed/smart switch to manage LACP on the switch end. Even if you have bonded interfaces at each side typically the switch just 'picks' a destination MAC and shoves the traffic down one interface for a single stream (so you'll get round robining from the server that aggregates into a single interface on the remote end, even if that side is set up in LACP/PAgP too).

It's much more effective to just use 10/40GbE interfaces.


----------



## driftingforlife

That's what I mean: a proper switch so you can team all 4 connections. ~500MB/s (roughly).

I have an HP ProCurve 1810G-24 switch that I use at home.

You can get a new managed switch from £100.


----------



## Rbby258

Quote:


> Originally Posted by *driftingforlife*
> 
> Thats what I mean, proper switch so you can team all 4 connections. 500MB/s (roughly)
> 
> I have a HP ProCurve 1810G-24 switch that i use at home.
> 
> You can get a new managed switchs from £100.


Damn I've just bought the wrong switch. Didn't think these were so cheap :/


----------



## driftingforlife

http://www.scan.co.uk/products/24-port-netgear-gs724t-prosafe-gigabit-10-1000-smart-network-switch


----------



## beatfried

Four 1GbE connections can speed up your direct connection to the server IF:
- your client also has four 1GbE connections teamed up
- through an 802.3ad (LACP) enabled switch
- to a server that also has four 1GbE connections teamed up

This will only work for multiple files. If you copy one big ISO or something like that, you'll be limited by the single TCP stream, which can't be split up, and you'll only get ~117MB/s.
That's again a different story for things like Oracle/SQL backups. But I don't think you're doing things like that?
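The ~117MB/s figure falls straight out of per-packet overhead; a sketch of the arithmetic (assuming a standard 1500-byte MTU and TCP timestamps enabled):

```python
# Why a single TCP stream tops out around 117 MB/s on gigabit:
# framing and header overhead eat into the 1 Gb/s line rate.
LINE_RATE = 1_000_000_000   # bits per second
WIRE_BYTES = 1538           # preamble 8 + header 14 + MTU 1500 + FCS 4 + interframe gap 12
TCP_PAYLOAD = 1448          # 1500 - 20 (IP) - 20 (TCP) - 12 (timestamps option)

goodput = LINE_RATE / 8 * TCP_PAYLOAD / WIRE_BYTES  # payload bytes per second
print(round(goodput / 1e6, 1))                      # -> 117.7
```

Multiple parallel streams can fill multiple teamed links, but no single stream can exceed that per-link ceiling.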


----------



## Rbby258

Quote:


> Originally Posted by *driftingforlife*
> 
> http://www.scan.co.uk/products/24-port-netgear-gs724t-prosafe-gigabit-10-1000-smart-network-switch


I just bought one of those from eBay, a GS724T v2...

I've logged in, and I think this is what I need, just not sure what to do.


----------



## 350 Malibu

Should just need to click the enable box, then select which physical ports you'll be plugging the cables into, and save the settings. That's all I did, but I only use 2 physical network adapters in my machines.


----------



## ComGuards

Yowza! Amazing the changes since I posted up in this thread oh-so-long-ago. Many awesome servers in this thread. Gets expensive though, after a few years.

Next up, 10GigE in the home... just need to get that raise first...


----------



## 350 Malibu

Quote:


> Originally Posted by *ComGuards*
> 
> Yowza! Amazing the changes since I posted up in this thread oh-so-long-ago. Many awesome servers in this thread. Gets expensive though, after a few years.
> 
> Next up, 10GigE in the home... just need to get that raise first...


You and me both, but I'm on this 3d printer trip right now so it's taking all my time and $.


----------



## Master__Shake

i need me some 1tbps...

soon i'll be able to move this file.



10gbps infiniband just can't do it


----------



## 350 Malibu

Jezus, what do you have that is 115 Petabytes?


----------



## cones

Quote:


> Originally Posted by *350 Malibu*
> 
> Jezus, what do you have that is 115 Petabytes?


I think we all know what it is, must be one big collection or just a really long one.


----------



## Master__Shake

i'd be lying if i said that was an actual file.

it's actually a bug

to this day i can't figure it out.

an nrg in a zip file.


----------



## cones

Quote:


> Originally Posted by *Master__Shake*
> 
> i'd be lying if i said that was an actual file.
> 
> it's actually a bug
> 
> to this day i can't figure it out.
> 
> an nrg in a zip file.


So how long does Windows think it takes to transfer, years?


----------



## ComGuards

Quote:


> Originally Posted by *Master__Shake*
> 
> i'd be lying if i said that was an actual file.
> 
> it's actually a bug
> 
> to this day i can't figure it out.
> 
> an nrg in a zip file.


I've had something similar happen to me twice a while ago. Once was at the disk-enclosure level, the other was a corrupt NTFS table and bad sectors.


----------



## Master__Shake

Quote:


> Originally Posted by *cones*
> 
> So how long does Windows think it takes to transfer, years?


it won't let me copy it anywhere.

it thinks i need more room










oh windows...


----------



## LDV617

Moved my 771 -> 775 mod file share server into a networking closet today. Threw some fans and all my routers / switches in there. Looks so much better than before.


----------



## liangstein

I've been running an Ubuntu 14.10 server and a Windows 8.1 server for nearly a month.
The Ubuntu server is used for web services and the Windows server is for storage.


----------



## andyroo89

Quote:


> Originally Posted by *Master__Shake*
> 
> i need me some 1tbps...
> 
> soon i'll be able to move this file.
> 
> 
> 
> 10gbps infiniband just can't do it


that is almost as big as my redhead folder.


----------



## broadbandaddict

Here's a picture of my current work in progress server. After a busy summer with a very inconvenient drive failure I'm hoping to get the finishing touches done in the next couple months.




Specs:

Intel Core i7 4790K
16GB DDR3 (2x8)
Fractal R5 w/ Seasonic 400W
Windows Server 2012 R2 Datacenter
2 x 128GB Crucial M4: Boot, RAID 1
4 x 256GB Crucial M4: VMs, Mirrored Storage Pool
5 x 3TB WD Red: Data, Mirrored Storage Pool, ReFS
3 x 5TB Toshiba: Movies and TV, Mirrored Storage Pool, ReFS
I'm planning on adding another 16GB RAM and 4 more 5TB Toshibas to the movie/tv storage pool.

Also I'm looking for a decent quad port gigabit NIC, any recommendations?


----------



## tiro_uspsss

Quote:


> Originally Posted by *broadbandaddict*
> 
> Also I'm looking for a decent quad port gigabit NIC, any recommendations?


anything with an Intel chip is good


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> *Also I'm looking for a decent quad port gigabit NIC, any recommendations?*


Do you want to buy the latest quad-port GigE NIC brand new, or do you mind second-hand stuff from eBay? I got my first quad-port GigE NIC new, but it was $$$; I got the last 3 on eBay and they're doing the same great job for a lot less $$.







You can find some HP NC364T cards quite cheap on eBay; there are still some new ones there that sell cheap, and they have Intel chips on a PCIe x4 card. I got one brand new in box for $60 + shipping.
I also bought 2 Intel PRO/1000 VTs on eBay for $25, and that's also a quad-port GigE card in PCIe x4.


----------



## broadbandaddict

Quote:


> Originally Posted by *EvilMonk*
> 
> You want to buy the latest Quad port GigE NIC available brand new or you don't mind second hand stuff like eBay? I got my first quad port GigE NIC new but it was $$$ and I got the last 3 on eBay and they are doing the same great job for a lot less $$
> 
> 
> 
> 
> 
> 
> 
> You can find HP NC364T quite cheap on eBay, there are still some new ones there that sell cheap and they have intel chips in PCIe x4


Second hand is preferred with the amount of money the new ones go for. I was looking a bit on eBay and found this i350-T4 but I'm a little overwhelmed with all the choices and the constant rebadges from OEMs. I just need something that will work with Server 2012 R2 and Hyper-V, nothing fancy.

edit: I don't want the cheapest of the cheap, I'm fine paying up to $100 but if it isn't worth spending more then I'd rather not.


----------



## jibesh

Quote:


> Originally Posted by *EvilMonk*
> 
> You can find some HP NC364T quite cheap on eBay, there are still some new ones there that sell cheap and they have intel chips in PCIe x4 I got one brand new in box for 60$ + shipping.


I would also recommend the HP NC364T or NC365T.


----------



## EvilMonk

Quote:


> Originally Posted by *broadbandaddict*
> 
> Second hand is preferred with the amount of money the new ones go for. I was looking a bit on eBay and found this i350-T4 but I'm a little overwhelmed with all the choices and the constant rebadges from OEMs. I just need something that will work with Server 2012 R2 and Hyper-V, nothing fancy.
> 
> edit: I don't want the cheapest of the cheap, I'm fine paying up to $100 but if it isn't worth spending more then I'd rather not.


The best one I found for you would be the previous version of that chip: the Intel PRO/1000 VT Quad, which you can find for $30 in the US, new with free shipping. It uses the previous generation of the chip in the card you mention, 2x Intel 82575GB, which is pro-use oriented. The card you mention (I350-T4) uses 2x Intel 82576EB.

It's still supported by Intel and supports all the functions of the 82576EB implemented in the I350-T4.


----------



## Clos

So, I'm hoping to join you server guys real soon. I'm getting my hands on a Dell PowerEdge T710 for free.

I'll be using it for media streaming, data storage (photos, Blu-rays, etc.), and remote access (so family can upload and save photos to my server).

Server has:

PROCESSOR, E5502, 1.86/4.8, 4MB, XDN, D0
6GB DDR3 memory
8 HDD slots in the front
Dual PSUs
Quad gigabit LAN ports

And an empty second slot for cpu/memory.

Would this server do what I need pretty well? And would it be worth upgrading to a second CPU? (It's a dual-core Xeon if I remember correctly.)

Not sure if I should run the Windows Server 2008 it comes with, or something like FreeNAS.

I would like to load it up with 8x 3TB Western Digital Red NAS drives, probably in RAID 10 (1+0), for a total of 12TB plus redundancy. (Unless you guys recommend a different RAID setup?)

I'll post pictures as soon as I get it. Hopefully by the end of this coming week.
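As an aside, the usable-capacity math for that RAID 10 layout checks out; a one-liner sketch:

```python
# RAID 10 stripes across mirrored pairs, so usable capacity is
# half the raw total: 8 drives x 3TB = 24TB raw, 12TB usable.
drives, size_tb = 8, 3
raw_tb = drives * size_tb
usable_tb = raw_tb // 2
print(raw_tb, usable_tb)   # -> 24 12
```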


----------



## jibesh

Quote:


> Originally Posted by *Clos*
> 
> So, I'm hoping to join you server guys reeal soon. I'm getting my hands on a Dell PowerEdge T710 for free.
> 
> I'll be using it for media streaming and Data Storage (photos, blu rays and etc.) And remote access, (so family can ulload and save photos to my server.
> 
> Server has:
> 
> PROCESSOR, E5502, 1.86/4.8, 4MB, XDN, D0
> 6gb ddr3 memory
> 8 hdd slots in the front
> Dual psu's
> Quad gigabit lan ports
> 
> And an empty second slot for cpu/memory.
> 
> Would this server do what i need pretty well? And would it be worth maybe upgrading to a second cpu? (Its a dual core xeon if i remember correctly.)
> 
> Not sure if i should run windows server 2008 that it comes with, or something like free nas.
> 
> I would like to load it up with 8 3tb wester digital red nas drives in in probably raid 10 (1+0) to have a total of 12tb plus redundancy. (Unless you guys recommend a different raid setup?)
> 
> I'll post pictures as soon as i get it. Hopefully by the end of this coming week.


One or two Intel Xeon L5640 processors would be a good upgrade for this, but that's not really needed for just a storage server.

More RAM is always good, 16GB or 32GB, but again, not needed for a simple storage server.

You can pick up the processors and RAM on eBay pretty cheap if you do decide to upgrade.


----------



## ComGuards

Quote:


> Originally Posted by *Clos*
> 
> So, I'm hoping to join you server guys reeal soon. I'm getting my hands on a Dell PowerEdge T710 for free.
> 
> I'll be using it for media streaming and Data Storage (photos, blu rays and etc.) And remote access, (so family can ulload and save photos to my server.
> 
> Server has:
> 
> PROCESSOR, E5502, 1.86/4.8, 4MB, XDN, D0
> 6gb ddr3 memory
> 8 hdd slots in the front
> Dual psu's
> Quad gigabit lan ports
> 
> And an empty second slot for cpu/memory.
> 
> Would this server do what i need pretty well? And would it be worth maybe upgrading to a second cpu? (Its a dual core xeon if i remember correctly.)
> 
> Not sure if i should run windows server 2008 that it comes with, or something like free nas.
> 
> I would like to load it up with 8 3tb wester digital red nas drives in in probably raid 10 (1+0) to have a total of 12tb plus redundancy. (Unless you guys recommend a different raid setup?)
> 
> I'll post pictures as soon as i get it. Hopefully by the end of this coming week.


You should double-check to see which RAID card is included in the system. I have the T710 myself, and as far as I know, the PERC6/i that came with the system originally doesn't support drives larger than 2TB.

You might need to replace the RAID card with an 11th-generation RAID card, but I'm not sure if the backplane connectors all match up as well.


----------



## Clos

Quote:


> Originally Posted by *jibesh*
> 
> 1 or 2 x Intel Xeon L5640's processors would be a good upgrade for this but not really needed for just a storage server.
> 
> More RAM is always good, 16GB or 32GB, but again, not needed for a simple storage server.
> 
> You can pick up the processors and RAM on eBay for pretty cheap if you do decided to upgrade.


I'll definitely look into those procs, and I definitely would like to add to the memory, just to help it out in any way I can. Do those procs have more cores? Or lower power? Both?
Should I fill all memory slots, or just do one set? I.e. 2GBx6 = 12GB or 4GBx3 = 12GB?
Quote:


> Originally Posted by *ComGuards*
> 
> You should double-check to see which RAID card is included in the system. I have the T710 myself, and as far as I know, the PERC6/i that came with the system originally doesn't support drives larger than 2TB.
> 
> You might need to replace the RAID card with a 11th-generation RAID card, but not sure if the backplane connectors all match up as well.


This is what comes up on the Dell website with the service tag:

Part Number: T954J
Quantity: 1
Description: ASSEMBLY, CARD (CIRCUIT), CONTROLLER, PERC6II, SERIAL ATTACHED SCSI, NOSLD


----------



## Rbby258

Quote:


> Originally Posted by *broadbandaddict*
> 
> Here's a picture of my current work in progress server. After a busy summer with a very inconvenient drive failure I'm hoping to get the finishing touches done in the next couple months.
> 
> Specs:
> 
> Intel Core i7 4790K
> 16GB DDR3 (2x8)
> Fractal R5 w/ Seasonic 400W
> Windows Server 2012 R2 Datacenter
> 2 x 128GB Crucial M4: Boot, RAID 1
> 4 x 256GB Crucial M4: VMs, Mirrored Storage Pool
> 5 x 3TB WD Red: Data, Mirrored Storage Pool, ReFS
> 3 x 5TB Toshiba: Movies and TV, Mirrored Storage Pool, ReFS
> I'm planning on adding another 16GB RAM and 4 more 5TB Toshibas to the movie/tv storage pool.
> 
> Also I'm looking for a decent quad port gigabit NIC, any recommendations?


How's storage spaces? I'm planning on running the same sort of setup soon with 4 ssd's also.


----------



## broadbandaddict

Quote:


> Originally Posted by *Rbby258*
> 
> How's storage spaces? I'm planning on running the same sort of setup soon with 4 ssd's also.


I really like storage spaces. I've always been a fan of software RAID and this seems to be the next step. There are a few arbitrary limits I don't like or features that aren't implemented in Server 2012, like rebalancing an array, but Microsoft seems to be adding a lot of stuff in 2016. You can add multiple drives at a time (dependent on virtual disk settings) and the only problems I've had are with controllers that aren't supported. It's pretty neat to be able to hook up to any 8/8.1/10/2012/R2 computer to get data off of them if you need to as well, which makes it very hardware independent.

Performance-wise the mirrored SSDs are great for the VM pool, but that one is NTFS. My two ReFS pools are a little slower, especially on writes, but that is an ReFS thing, not a storage spaces thing. The other cool part with Server 2012 is that if your VMs are taking up too much space you can enable data deduplication, which will remove duplicate data from the array. I saw a ~70% reduction (250GB -> 80GB) when I enabled it.
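For reference, the quoted reduction is just saved space over original size; checking the numbers from the post:

```python
# Deduplication savings on the VM volume from the post:
# 250 GB of VM data shrank to 80 GB on disk.
before_gb, after_gb = 250, 80
savings = (before_gb - after_gb) / before_gb
print(f"{savings:.0%}")    # -> 68%, i.e. roughly the ~70% quoted
```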


----------



## wiretap

I wanted to use Windows Storage Spaces, but after looking at the parity speeds, it was atrociously slow. I really like the concept of Storage Spaces, but they need to improve their parity engine. You can improve it some by using tiered storage, but it gets expensive fast.. especially in my case, where I want to have 26TB with at least dual parity. I would have to spend another grand on SSD's just to get halfway decent speeds. In the end, I just called it a day and ended up using SnapRAID. It isn't ideal since it isn't realtime, but combined with Stablebit Drivepool and Scanner, it works well for my needs.


----------



## Rbby258

Quote:


> Originally Posted by *broadbandaddict*
> 
> I really like storage spaces. I've always been a fan of software RAID and this seems to be the next step. There are a few arbitrary limits I don't like or features that aren't implemented in Server 2012, like rebalancing an array, but Microsoft seems to be adding a lot of stuff in 2016. You can add multiple drives at a time (dependent on virtual disk settings) and the only problems I've had are with controllers that aren't supported. It's pretty neat to be able to hook up to any 8/8.1/10/2012/R2 computer to get data off of them if you need to as well, which makes it very hardware independent.
> 
> Performance wise the mirrored SSDs are great for the VM pool but it is NTFS. My two ReFS pools are a little slower especially on writes but that is an ReFS thing, not storage spaces. The other cool part with Server 2012 is if you're VMs are taking up too much space you can enable data deduplication, so it will remove duplicate data from the array. I saw a ~70% reduction (250GB -> 80GB) when I enabled it.


Thanks for the detailed response









I'm currently using StableBit's storage pool with an SSD cache and think it's great. What are the read and write speeds like? Just the same as a single drive?
Quote:


> Originally Posted by *wiretap*
> 
> I wanted to use Windows Storage Spaces, but after looking at the parity speeds, it was atrociously slow. I really like the concept of Storage Spaces, but they need to improve their parity engine. You can improve it some by using tiered storage, but it gets expensive fast.. especially in my case, where I want to have 26TB with at least dual parity. I would have to spend another grand on SSD's just to get halfway decent speeds. In the end, I just called it a day and ended up using SnapRAID. It isn't ideal since it isn't realtime, but combined with Stablebit Drivepool and Scanner, it works well for my needs.


How many ssd's did you need? I'm planning on using 4 with about 10-15tb of storage.


----------



## broadbandaddict

Quote:


> Originally Posted by *wiretap*
> 
> I wanted to use Windows Storage Spaces, but after looking at the parity speeds, it was atrociously slow. I really like the concept of Storage Spaces, but they need to improve their parity engine. You can improve it some by using tiered storage, but it gets expensive fast.. especially in my case, where I want to have 26TB with at least dual parity. I would have to spend another grand on SSD's just to get halfway decent speeds. In the end, I just called it a day and ended up using SnapRAID. It isn't ideal since it isn't realtime, but combined with Stablebit Drivepool and Scanner, it works well for my needs.


I ran my 5 x 3TB in parity for a while and they seemed to do fine. I even had ReFS on top which really slows them down and I was still getting close to or over 100MB/s reads and 40MB/s or more on writes over the network. I decided to switch to mirroring with how cheap storage is now (5TB for $140!) and considering how likely a second drive failure will be when rebuilding multiple big arrays over days.


----------



## broadbandaddict

Quote:


> Originally Posted by *Rbby258*
> 
> Thanks for the detailed response
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm currently using stablebit storage pool with a ssd cache and think its great. What are the read and write speeds like? Just the same as a single drive?


Speed isn't bad in general use; I don't really notice any slowdowns, and as long as it is fast enough to stream my movies I don't really look into it. CDM says my current VM array reads/writes at 750/263 MB/s. Each of the 12 VMs runs as well as it would installed on its own SSD, although I am tempted to try out a striped storage space with 4 drives; 3 of them netted me 1600/1300 MB/s! The other two ReFS storage spaces are hard to test, since no drive benchmark gets along well with the new file system: CDM says I get 1.9MB/s writes on one and 0.8MB/s on the other.


----------



## Simmons572

I noticed I never entered my server!


Spoiler: Warning: Spoiler!








Dell C1100

x2 Xeon L5520s
x2 Coolermaster 212 Evos
48GB ECC RAM
Fractal Design Define XL R2
Seasonic G-750 (PSU)
NZXT Sentry Mix 2 (Fan Controller)
Storage:

x2 Seagate 2TB HDDs (RAID 1)
60GB PNY SSD
80GB Intel SSD
240 GB 840 Evo
Currently hosting 6 Minecraft servers, a Space Engineers server, and a TeamSpeak server. I am eventually going to reformat and turn it into a VM server, but I have yet to grow the balls to actually do it









The box on top of the server is a Dell Dimension 8400 running pfSense.

Intel Pentium 4 560
4GB RAM
x3 Intel Pro/1000 NICs
120GB 850 Evo
And for wifi, I have a used Asus RT-AC66U, set to AP mode.

In case anyone asks why I am missing a heatsink fan: I noticed that one of the fans seized, so I have filed an RMA with Cooler Master. I upped the speed on the top Delta. Max temps appear to be hovering around 45°C, so I am not too concerned about that while I wait for the new fan to come in.


----------



## ComGuards

Quote:


> Originally Posted by *Clos*
> 
> I'll definitely look into those procs, and I definitely would like to add to the memory, just to help it out in any way I can. Do those procs have more cores? Or lower power? Both?
> Should I fill all memory slots? Or just do 1 set? I.e. 2GBx6 = 12GB or 4GBx3 = 12GB?
> This is what comes up on the Dell website with the service tag:
> 
> Part NumberT954J
> Quantity1
> Description ASSEMBLY, CARD (CIRCUIT), CONTROLLER, PERC6II, SERIAL ATTACHED SCSI, NOSLD


This link contains the answer from a Dell rep: drives larger than 2TB are not supported by the PERC6i cards. You'd have to go one generation higher, to the H7xx/8xx/2xx/3xx cards, to begin using drives larger than 2TB.

You should get the Service Manual for the T710 from the Dell site to confirm the memory configuration. If it's a single-processor system, you only fill the one side (up to 9 slots), but there's an order in which you fill it. If you have both sockets filled, you have to fill both sides (up to 18 slots). The memory can run in either dual-channel or triple-channel, depending on what you want to get.

Make sure you match it up with what's in there now, or buy all-new memory. There are RDIMM and UDIMM options available; RDIMMs allow you to reach higher total capacity.

The Lxxxx processors have lower TDP. Intel's ARK site will let you compare the two. For what you're planning on doing, you can run with the older E5502s for now, unless you know for a fact that you're doing something really CPU-intensive to need the processor upgrade.

I would personally just keep the current processors, and use some of that money to toss in a small 120GB SSD onto the onboard SATA controller as a boot drive. Even though it's SATA2, this will allow you to maximize storage on the PERC.


----------



## Clos

Quote:


> Originally Posted by *ComGuards*
> 
> This link contains the answer from a Dell rep; that 2TB+ drives are not supported by the PERC6i cards. You'd have to go one generation higher with the H7xx/8xx/2xx/3xx cards to begin using drives larger than 2TB.
> 
> You should get the Service Manual for the T710 from the Dell site to confirm the memory configuration. If it's a single-processor system, you only fill the one side (up to 9 slots), but there's an order in which you fill it. If you have both sockets filled, you have to fill both sides (up to 18 slots). The memory can run in either dual-channel or triple-channel, depending on what you want to get.
> 
> Make sure you match it up with what's in there now, or buy all-new memory. There's RDIMM and UDIMM options available; RDIMMs allow you to get higher total capacity.
> 
> The Lxxxx processors have lower TDP. Intel's ARK site will let you compare the two. For what you're planning on doing, you can run with the older E5502s for now, unless you know for a fact that you're doing something really CPU-intensive to need the processor upgrade.
> 
> I would personally just keep the current processors, and use some of that money to toss in a small 120GB SSD onto the onboard SATA controller as a boot drive. Even though it's SATA2, this will allow you to maximize storage on the PERC.


I see. So I guess I am stuck with 2TB HDDs times 8 slots, for 8TB mirrored. It should hold me over for what I need at this point in time, and give me time to learn how to set everything up. For free, I cannot complain.

I also appreciate the heads up on the procs. I'll probably stick with what it has at the moment. Would a reinstall be recommended or required if I decide to later on upgrade and/or add another processor? I've seen the previously recommended processor for pretty cheap on Amazon, ~$79, plus the memory I'll be needing.

I have an extra 120GB Intel SSD that I can use for a boot drive, so I appreciate that heads up!

I will do some research on RDIMM vs. UDIMM to see what all the differences are, other than capacity. Thanks for all of y'all's help!


----------



## wiretap

RDIMMs (registered ECC) delay the clock cycle to account for electrical-loading settling times. The motherboard has to support it in order for you to use it. UDIMM (unbuffered) usually only supports 2 DIMMs per memory channel, but offers slightly better memory bandwidth in a single channel. However, if you fill up an RDIMM-supported board, you'll get higher memory bandwidth anyhow. Always check the motherboard's QVL (qualified vendor list) for compatible memory, because server boards can be picky sometimes. For more modern setups, choose a board that supports RDIMM and use it; it's getting hard to find high-capacity UDIMM configurations these days.


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> RDIMM's (registered ECC) delay the clock cycle to account for electrical loading settling times. The motherboard has to support it in order for you to use it. UDIMM (unbuffered) usually only supports 2 DIMMs per memory channel, but offers slightly better memory bandwidth in a single channel. However if you fill up a RDIMM supported board, you'll get higher memory bandwidth anyhow. Always check the motherboard's QVL (qualified vendor list) for compatible memory, because server boards can be picky sometimes. For more modern setups, choose a board that supports RDIMM and use it. It's getting hard to find high capacity UDIMM configurations these days.


Well, it's usually less common to find single-socket, entry-level workstation/server boards that support RDIMM. I have around 10 1U/2U servers racked at home and a couple of tower-format servers, and most single-socket servers, apart from DL3xx ProLiants, won't support RDIMM. Also, the E3 Xeons don't support RDIMM, so that's a drawback compared to the Westmere-EP line of Xeon CPUs, which did support RDIMM memory in single-socket configurations.







Indeed, it is very hard to get high-capacity UDIMM configs these days, since they are limited to the same amount of memory as non-ECC setups...


----------



## Clos

I see, I'll definitely make sure to follow the QVL list then, and make sure I use their recommended memory sizes and types.


----------



## Zeus

Below are a few images of my new NAS build. It's in a Cooler Master Stacker 915F case. The specs are:

Motherboard: Gigabyte H97N-WIFI ITX
CPU: Intel i5 4440
RAM: TeamGroup Elite Black 16GB
Cooler: Cooler Master Hyper 612 V2
RAID Controller: LSi MegaRAID SAS 9260-8i
Port Expander: Intel RES2SV240 Controller

The host O/S is Windows ServerCore 2008 Hyper-V.


----------



## 350 Malibu

Quote:


> Originally Posted by *Zeus*
> 
> Below are a few images for my new NAS I've build. It in a Cooler Master Stacker 915F case. The specs are: -
> 
> Motherboard: Gigabyte H97N-WIFI ITX
> CPU: Intel i5 4440
> RAM: TeamGroup Elite Black 16GB
> Cooler: Cooler Master Hyper 612 V2
> RAID Controller: LSi MegaRAID SAS 9260-8i
> Port Expander: Intel RES2SV240 Controller
> 
> The host O/S is Windows ServerCore 2008 Hyper-V.


Gotta admit, that is a creative way to do extra internal storage.


----------



## ComGuards

Quote:


> Originally Posted by *Clos*
> 
> I see, So i guess I am stuck with a 2tb hdd times 8 slots for 8TB Mirrored It should hold me up for I need at this point in time, and give me time to learn how to set everything up. For free, i cannot complain.
> 
> I also appreciate the heads up on the Procs. I'll probably stick with what it has at the moment, Would a reinstall be recommended or required if i decide to later on upgrade and/or add another processor? I've see the previously recommended processor for pretty cheap on amazon ~79$ + the memory i'll be needing.
> 
> I have an Extra 120GB Intell SSD that i can use for a boot drive, i appreciate that heads up!
> 
> I will do some research in regards to the RDIMM and UDIMM to see what all the differences are, other than capacity. Thanks for all of ya'lls help!


No point running mirrored if you're just starting out and learning. And it wouldn't be "mirrored" either; mirroring is just RAID-1 with two drives. With 8 drives, you have your choice of RAID-10, RAID-5, RAID-6, RAID-50 and RAID-60. With 2TB drives, those options would give you 8TB, 14TB, 12TB, 12TB, and 8TB, respectively.

There's a write-performance hit when going with any option other than RAID-10, due to parity calculations and such, though the PERC6 card isn't too bad as far as hardware RAID cards go, so for most normal purposes it shouldn't be noticeable. I personally run RAID-50 on my T710 with 8 drives, and it's fast enough for my needs.

Upgrading the processor later won't require a reinstall of anything, but it may require a reactivation of Windows, if that's what you go with, due to the change in hardware. Again, unless you're doing something processor intensive, you're not likely to need it. Or doing something that requires a higher-clock speed than what's currently available.

When you do build the array, make sure you test it out by pulling out a drive and re-inserting it, just to make sure your entire storage subsystem is working properly. They're hot-swap bays; you should be able to do that...
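Those capacity numbers are easy to sanity-check. A minimal sketch (the two-span layout assumed for RAID-50/60 is my assumption, not anything PERC-specific):

```python
# Usable capacity (TB) for common RAID levels.
# n: number of drives, size_tb: capacity per drive,
# spans: sub-array count for RAID-50/60 (assumed equal-sized).
def usable_tb(level, n, size_tb, spans=2):
    if level == "raid10":
        return n // 2 * size_tb           # half the drives are mirrors
    if level == "raid5":
        return (n - 1) * size_tb          # one drive's worth of parity
    if level == "raid6":
        return (n - 2) * size_tb          # two drives' worth of parity
    if level == "raid50":
        return (n - spans) * size_tb      # one parity drive per span
    if level == "raid60":
        return (n - 2 * spans) * size_tb  # two parity drives per span
    raise ValueError(f"unknown level: {level}")

# 8 x 2TB, as in the post above:
for lvl in ("raid10", "raid5", "raid6", "raid50", "raid60"):
    print(lvl, usable_tb(lvl, 8, 2))      # 8, 14, 12, 12, 8
```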


----------



## EvilMonk

Quote:


> Originally Posted by *Zeus*
> 
> Below are a few images for my new NAS I've build. It in a Cooler Master Stacker 915F case. The specs are: -
> 
> Motherboard: Gigabyte H97N-WIFI ITX
> CPU: Intel i5 4440
> RAM: TeamGroup Elite Black 16GB
> Cooler: Cooler Master Hyper 612 V2
> RAID Controller: LSi MegaRAID SAS 9260-8i
> Port Expander: Intel RES2SV240 Controller
> 
> The host O/S is Windows ServerCore 2008 Hyper-V.


Wow, gotta say I love that awesome little NAS. Good work there, man









What are the specs of those little HDDs? They are 2.5" HDDs, right?
Was the fitting work complicated and long? Again, congrats on an amazing project


----------



## PuffinMyLye

I purchased *this* little powerhouse about 2 months back. It's one of the best purchases I've ever made. On full load it draws less than 100W of power, and it idles UNDER 30W! I'll post some pics this weekend, but the specs are as follows:


SuperMicro MiniServer Chassis
SuperMicro X10SDV-TLN4F motherboard with SoC Intel Broadwell Xeon D-1540 8-core/16-thread 45W CPU
64GB (32GB x 2) Samsung DDR4-2133 RDIMMs
IBM ServeRAID M1015 HBA
Samsung SM951 512GB AHCI SSD
Intel 730 480GB SSDs (x2)
Seagate 8TB HDDs (x4)

This server is an ESXi 5.5 host running the following VMs:


pfSense 2.2.4 - (Router/Firewall/VPN Server)
unRAID (Storage/Plex/Subsonic/Usenet/Torrents)
WS 2012 R2 (AD, DNS, etc.)
Backups (Runs SyncBack to replicate unRAID to backup server and Veeam for VM backups)
Private Internet Box (Windows VM with all Internet traffic routed over VPN WAN interface)
Various Windows test VMs (XP, 7, 8, 10, etc.)

With 16 vCPUs to play with and support for up to 128GB of RAM (I've got 64GB in there now but I'm only using about half of that at the moment), I expect this server to last me quite some time on my home network.


----------



## Zeus

Quote:


> Originally Posted by *EvilMonk*
> 
> Wow gotta say I love that awesome little NAS good work there man
> 
> 
> 
> 
> 
> 
> 
> 
> what are the specs of those little HDDs? they are 2.5" HDDs right?
> What the fitting work complicated and long? Again congrats on an amazing project


Thanks EvilMonk









The HDDs are WD Red 2.5" with NASWare 3.0

The actual build took about 20hrs in total. The research into the right hardware (system board & PSU) took longer (a lot longer).

The hard part was choosing the right PSU. I did look a lot at SFF PSUs, but I just wasn't really happy with them, so I went with a Silverstone 850 Gold. But then I needed to figure out how to mount it. Around that time I visited a friend who had just got a DimasTech Test Bench, and when I first saw it, it gave me the solution: DimasTech do a PSU mount that holds it on its side. I ordered one and it worked perfectly.

The mounting for the HDD cages wasn't that hard to work out. The case already has pre-drilled holes for HDD cages, so I fitted a few stand-offs to it and bingo... a raised platform to mount the HDD cages to (which also gave me space to hide some of the cables).


----------



## Irisservice

Quote:


> Originally Posted by *Zeus*
> 
> Below are a few images for my new NAS I've build. It in a Cooler Master Stacker 915F case. The specs are: -
> 
> Motherboard: Gigabyte H97N-WIFI ITX
> CPU: Intel i5 4440
> RAM: TeamGroup Elite Black 16GB
> Cooler: Cooler Master Hyper 612 V2
> RAID Controller: LSi MegaRAID SAS 9260-8i
> Port Expander: Intel RES2SV240 Controller
> 
> The host O/S is Windows ServerCore 2008 Hyper-V.


Very nice...love the 2.5" drives..why did you choose those versus 3.5?


----------



## Zeus

Quote:


> Originally Posted by *Irisservice*
> 
> Very nice...love the 2.5" drives..why did you choose those versus 3.5?


I'm using 2.5" drives because I couldn't get the level of redundancy that I wanted with 3.5" drives in the space I had available. In the past I have suffered a double disk failure in a RAID5 (8 X 2TB disks) which took over 40hrs to fix (lost 300MB out of 7TB) so I didn't want to go through that again.

The disks are currently configured as follows:

1 x RAID6 (6 disks): Read/Write @ 450MB/s / 125MB/s
2 x RAID10 (4 disks each): Read/Write @ 210MB/s / 193MB/s
1 x RAID0 (2 SSDs): Read/Write @ 930MB/s / 860MB/s

Another reason is that the total power usage & noise from the drives is a lot less.

I do plan to change the 40mm fans in the drive cages to quieter ones so the whole system will be silent (sub-22dB). The downside is more cables to manage


----------



## PuffinMyLye

Quote:


> Originally Posted by *Zeus*
> 
> I'm using 2.5" drives because I couldn't get the level of redundancy that I wanted with 3.5" drives in the space I had available. In the past I have suffered a double disk failure in a RAID5 (8 X 2TB disks) which took over 40hrs to fix (lost 300MB out of 7TB) so I didn't want to go through that again.
> 
> The disks are configured in the current way: -
> 
> 1 x RAID6 (6 disks) Read/Write @ 450MBs/125MBs
> 2 x RAID10 (4 disks each) Read/Write @ 210MBs/193MBs
> 1 x RAID0 (2 SSD's) Read/Write @ 930MBs/860MBs
> 
> Another reason is that the total power usage & noise from the drives is a lot less.
> 
> I do plan to change the 40mm fans in the drive cages to quieter ones so the whole system will be silent (sub 22db). The down side is more cables to manage


Why on earth is your RAID10 read speed less than half your RAID6 read speed?


----------



## andyroo89

Ok guys, so I just SSH'd into my Dell PowerEdge, ran htop, and noticed it's only detecting 4GB of RAM (I have 8GB). The only thing I can think of is that something happened when I opened the PowerEdge to put the plastic HDD bracket back into the server..

edit: nevermind, I think I realized the problem. I had two other desktop "servers" on the same extension cord, and I guess the Dell server wasn't getting enough power and only used 4GB instead of 8GB?


----------



## cones

Quote:


> Originally Posted by *andyroo89*
> 
> Ok guys, so I just ssh'd into my dell poweredge, and I ran htop and noticed its only detecting 4gb of ram (have 8gb) the only thing I can think of is what happened is when I opened the dell power edge to put the plastic hdd bracket back into the server..
> 
> edit; nevermind I think I realized the problem, I had to other desktop "servers" on same extension cord, and I guess the dell server wasn't getting enough power and only used 4gb instead of 8gb?


Check the BIOS first and make sure you installed a 64 bit OS.


----------



## beers

Quote:


> Originally Posted by *andyroo89*
> 
> edit; nevermind I think I realized the problem, I had to other desktop "servers" on same extension cord, and I guess the dell server wasn't getting enough power and only used 4gb instead of 8gb?


That's not really how these things work...
Quote:


> Originally Posted by *cones*
> 
> Check the BIOS first and make sure you installed a 64 bit OS.


This. You should be able to see the full quantity in BIOS. If not, something is wrong.
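On the Linux side, one quick check (a sketch; the parsing is split out so it also works on sample text) is to read `MemTotal` from `/proc/meminfo`, which reports what the kernel actually sees. If that already shows 4GB, the shortfall is below the OS (BIOS, DIMM seating, or a disabled bank) rather than in it:

```python
# Report MemTotal from /proc/meminfo-style text, in GB, so it can be
# compared against the physically installed amount.
def mem_total_gb(meminfo_text):
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kb = int(line.split()[1])     # /proc/meminfo values are in kB
            return kb / 1024 / 1024
    raise ValueError("MemTotal not found")

# On a live box: mem_total_gb(open("/proc/meminfo").read())
sample = "MemTotal:        4045892 kB"
print(f"{mem_total_gb(sample):.1f} GB")   # -> 3.9 GB, well short of 8GB
```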


----------



## andyroo89

Quote:


> Originally Posted by *cones*
> 
> Check the BIOS first and make sure you installed a 64 bit OS.


I did, I had 32 bit on there (temp) then did full wipe and installed 64 bit.

edit; beers didn't know you were in the KC area, Wanna split and buy these?

http://gsaauctions.gov/gsaauctions/aucdsclnk?sl=51QSCI15450018

lol


----------



## jibesh

Quote:


> Originally Posted by *andyroo89*
> 
> edit; beers didn't know you were in the KC area, Wanna split and buy these?
> 
> http://gsaauctions.gov/gsaauctions/aucdsclnk?sl=51QSCI15450018
> 
> lol


You guys should get this instead









http://gsaauctions.gov/gsaauctions/aucalsrh/?sl=91QSCI15273601


----------



## Irisservice

Quote:


> Originally Posted by *Zeus*
> 
> I'm using 2.5" drives because I couldn't get the level of redundancy that I wanted with 3.5" drives in the space I had available. In the past I have suffered a double disk failure in a RAID5 (8 X 2TB disks) which took over 40hrs to fix (lost 300MB out of 7TB) so I didn't want to go through that again.
> 
> The disks are configured in the current way: -
> 
> 1 x RAID6 (6 disks) Read/Write @ 450MBs/125MBs
> 2 x RAID10 (4 disks each) Read/Write @ 210MBs/193MBs
> 1 x RAID0 (2 SSD's) Read/Write @ 930MBs/860MBs
> 
> Another reason is that the total power usage & noise from the drives is a lot less.
> 
> I do plan to change the 40mm fans in the drive cages to quieter ones so the whole system will be silent (sub 22db). The down side is more cables to manage


Very Nice..
My plan was to use 20 x 2tb Spinpoint 2.5"


----------



## EvilMonk

Quote:


> Originally Posted by *jibesh*
> 
> You guys should get this instead
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://gsaauctions.gov/gsaauctions/aucalsrh/?sl=91QSCI15273601


Damn, I'll take 10, but I doubt they ship to Canada







too bad...


----------



## herkalurk

Quote:


> Originally Posted by *andyroo89*
> 
> I did, I had 32 bit on there (temp) then did full wipe and installed 64 bit.
> 
> edit; beers didn't know you were in the KC area, Wanna split and buy these?
> 
> http://gsaauctions.gov/gsaauctions/aucdsclnk?sl=51QSCI15450018
> 
> lol


I probably drive right by the warehouse those are stored in daily....


----------



## EvilMonk

Quote:


> Originally Posted by *andyroo89*
> 
> I did, I had 32 bit on there (temp) then did full wipe and installed 64 bit.
> 
> edit; beers didn't know you were in the KC area, Wanna split and buy these?
> 
> http://gsaauctions.gov/gsaauctions/aucdsclnk?sl=51QSCI15450018
> 
> lol


I wonder what the full specs of those are...
The selling price of the auction is still reasonable for now, but it's gonna jump in the last minutes before ending... 21 servers; the $285 it's at now will likely jump 10x or more...


----------



## akshep

No pictures yet, but I just picked up a PowerEdge R610. Dual L5640s, 12GB RAM and 2 x 136GB SAS drives. Need to get some more storage for VMs. It will host my Minecraft server, website, and then different OSes for school.


----------



## bobfig

Just tried out Emby on my server, and now it's going to replace Plex.


----------



## cones

Quote:


> Originally Posted by *bobfig*
> 
> just tried out emby on my server and now its going to replace plex.


Just wait until they update it if you are using Linux


----------



## bobfig

Quote:


> Originally Posted by *cones*
> 
> Just wait until they update it if you are using Linux


Nope, Windows Server 2008.


----------



## Rbby258

Quote:


> Originally Posted by *bobfig*
> 
> just tried out emby on my server and now its going to replace plex.


What do you like better about it? I've not heard of it until now.


----------



## bobfig

Quote:


> Originally Posted by *Rbby258*
> 
> What do you like better about it? I've not heard of it until now.


Overall it's very similar, but I like the few other options it has, like the possibility to stream live TV, the ability to make accounts and restrict who watches what, and that it's mostly free except for the couple of premium apps and the sync feature, which I don't care about.

Here's a good list and read comparing the two:

http://www.htpcbeginner.com/plex-vs-emby-comparison-with-kodi/


----------



## cones

Quote:


> Originally Posted by *Rbby258*
> 
> What do you like better about it? I've not heard of it until now.


I like how you don't need a subscription for it to be useful.


----------



## PuffinMyLye

Quote:


> Originally Posted by *cones*
> 
> I like how you don't need a subscription for it to be useful.


Does Emby support secure connections like Plex now does? I don't see that mentioned in the comparison so I'm wondering.


----------



## stumped

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cones*
> 
> I like how you don't need a subscription for it to be useful.
> 
> 
> 
> Does Emby support secure connections like Plex now does? I don't see that mentioned in the comparison so I'm wondering.

They do have HTTPS support, yes, but not in the same way Plex does (i.e. set up for you automagically, with a valid SSL cert, without you doing anything). You have to provide your own cert and tell it where the parts of the cert to use are.


----------



## PuffinMyLye

Quote:


> Originally Posted by *stumped*
> 
> They do have HTTPS support, yes, but it's not in the same way plex does (i.e. it's setup for you automagically and with a valid ssl cert without you doing anything). You have to provide your own cert, and tell it where the parts of the certs to use are.


I see. How easy is it to setup so that remote users have no work to do on their end?


----------



## stumped

Well, it's somewhat similar to setting up any server with an SSL cert: you tell it where the cert and private key are (and possibly an intermediate cert), where/how you get the cert being up to you, and then tell the server which port to serve SSL on (in Emby's case, 443). The clients should see it without any intervention (except for maybe needing a confirmation to accept a self-signed cert, if that's the route you chose).

However, that's just my best (educated) guess, as I don't use SSL with emby as it's only serving stuff locally (I have UPnP disabled on it).


----------



## PuffinMyLye

Quote:


> Originally Posted by *stumped*
> 
> Well, it's somewhat similar to setting up any server with an ssl cert: You tell it where the cert and private key are (and possibly intermediate cert) (and where/how you get the cert is up to you) and then tell the server to serve over what port it uses ssl for (and in emby's case, it's 443). The clients should see it without any intervention (except for maybe needing a confirmation to use a self-signed cert if that's the route you chose).
> 
> However, that's just my best (educated) guess, as I don't use SSL with emby as it's only serving stuff locally (I have UPnP disabled on it).


I see. I've been using Plex for quite some time and have a good amount of users on it, so making the switch would be hard, but I'm always interested in alternatives, so maybe I'll throw up an Emby docker on my unRAID box and give it a try.


----------



## cones

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Does Emby support secure connections like Plex now does? I don't see that mentioned in the comparison so I'm wondering.


Yes, it's kind of a recent thing. Here is a short how-to; it doesn't explain everything, though. Last time I tried it, it added so much overhead that the pages loaded slower.


----------



## tycoonbob

Quote:


> Originally Posted by *cones*
> 
> Yes it is kinda a recent thing. Here is a short how-to, it doesn't explain everything though. Last time i tried it though it added to much overhead that the pages loaded slower.


While I have never used Emby, and know little about it (other than the research I've done in the last week or so), I'd imagine you might get better performance by using a reverse proxy and putting your SSL there, instead of configuring it on the local Emby web server as outlined in that link you shared. It seems Emby uses some sort of light built-in web server, and maybe having that local web server do SSL is too heavy. Offload that to a reverse proxy, such as Nginx.

I do my PlexMediaServer over SSL through my Nginx Reverse Proxy, and it works great. Though, I'm very interested in trying out Emby, but trying to wait for a Vizio TV app. My sister wouldn't let me take my PlexMediaServer offline if she couldn't reach it from her TV. Seems like a Vizio app may happen sometime, but no idea when.
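As a rough illustration of that arrangement (everything here is a placeholder: the hostname, the cert paths, and the backend port, which you should check against your own install), an Nginx server block terminating SSL in front of a local media server might look like:

```nginx
# Hypothetical reverse proxy: Nginx terminates HTTPS, the media server
# behind it keeps serving plain HTTP on localhost only.
server {
    listen 443 ssl;
    server_name media.example.com;                       # placeholder hostname

    ssl_certificate     /etc/ssl/certs/media.crt;        # your cert chain
    ssl_certificate_key /etc/ssl/private/media.key;      # your private key

    location / {
        proxy_pass http://127.0.0.1:8096;                # backend port: verify for your app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

This way only the proxy pays the TLS cost, and the app's built-in web server never has to touch certificates at all.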


----------



## andyroo89

Anyone rocking an IKEA LACK rack (side table or coffee table) as a server rack?


----------



## bobfig

Quote:


> Originally Posted by *andyroo89*
> 
> Anyone rocking ikea lack rack(side table or coffee table) as a server rack?


I would be if I had something to mount in the one I have now. I may be able to snag one of the big UPSes I'm taking out of schools around here, so I wonder how well it does holding up 200lbs of batteries.


----------



## beatfried

So I got myself a nice HP server SSD (800GB) for a really good price. The problem is: I only have Dell servers :/
Now, what should I do? Sell the SSD and buy a Dell one (or more than one for the same price -.-)?
Tear it apart and use it in the Dell server? Is that possible? Could I run into problems? I know that you CAN get problems with HP controllers and non-HP disks, but the other way around?


----------



## tiro_uspsss

Quote:


> Originally Posted by *beatfried*
> 
> So I got myself a nice HP Server SSD (800GB) for a really good price. The problem is: I only got Dell servers :/
> No, what should I do? Sell the SSD and buy a (or more then one for the same price -.-) Dell one?
> Tear it apart and use it in the Dell server? Is that possible? Can I get problems? I know that you CAN get problems with HP-Controllers and non-HP Disks, but the other way around?


It's just an SSD with HP stickers on it; there won't be any problems putting it in any other PC, whether self-built or pre-built (e.g. Dell).


----------



## andyroo89

Hey guys, I was wondering if someone knows what the problem is? I googled it but all I got was Dell's website.

I have a Dell PowerEdge SC1435. I had this problem before: I had to go into recovery mode (forgot to modify the firewall for my new SSH port) and noticed it said I only had 4.0GB memory, and then there was output: *warning: DIMM 5 6 7 8 are disabled*. Anyone know how to re-enable them?


----------



## beers

Quote:


> Originally Posted by *andyroo89*
> 
> hey guys, I was wondering if someone knows what the problem is? I googled it but all I gt was dell's website.
> 
> I have dell poweredge sc1435. I had this problem before but I had to go into recovery mode (forgot to modify firewall for my new ssh port) and noticed it said I only had 4.0gb memory, and then there was output *warning dimm 5 6 7 8 are disabled* anyone know how to re enable it?


Do you only have one CPU installed?


----------



## beatfried

Quote:


> Originally Posted by *tiro_uspsss*
> 
> its just a SSD with HP stickers on it, there won't be any problems putting it in any other PC, whether self-built or pre-built (ie Dell).


Yes, that's exactly what I thought about HDDs, and I was so wrong. After three restores of 16TB of data, which I lost on two different HP RAID controllers (got a new one after the second restore) with "normal" WD RE disks, I learned my lesson and now ask if someone knows of any problems








BTW: same disks, same controller but without the HP branding (and firmware) -> no problems at all.


----------



## andyroo89

Quote:


> Originally Posted by *beers*
> 
> Do you only have one CPU installed?


No, there are two CPUs in the server.


----------



## TheBloodEagle

Lack of pictures in this thread.


----------



## andyroo89

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Lack of pictures in this thread.


----------



## Junior82

Addition to the server rack is on its way 

Got a killer deal, or at least I think I did.
Plans are to add another ESXi virtual host.
Specs:
x2 Intel E5-2620v2 (12 cores / 24 threads)
72GB DDR3 ECC 1333
Intel I340-T4 quad 1Gb NIC
16-bay 2.5" drive chassis
x2 250GB SSD
x2 120GB SSD
x5 500GB HDD


----------



## EpicAMDGamer

Quote:


> Originally Posted by *Junior82*
> 
> addition to the server rack is on its way
> 
> Got a killer deal or at least I think I did.
> Plans to add another ESXi virtual host.
> Specs
> x2 Intel E5-2620v2 12cores 24 Threads
> 72GB DDR3 ECC 1333
> Intel l340-T4 Quad 1Gb NIC
> 16 Bay 2.5" Drives
> x2 250GB SSD
> x2 120GB SSD
> x5 500GB HDD


Certainly has some awesome specs, but exactly what kind of a deal did you get ($)?


----------



## Junior82

I'll have about $1500 into it when all is said and done; I have the drives laying around, so I saved quite a bit there. The barebones server, which included 1 E5-2620v2 CPU, 8GB, and an H700 w/ battery RAID controller, was $700. I got an additional E5-2620v2, 72GB (18 x 4GB 2Rx4) DDR3 ECC 1333, and ended up with the quad-port Intel I350-T4 NIC and (9) 2.5" trays. I have room for expansion on the drives and will be able to add another 7 disks in the future, for a total of 16 2.5" drives. I'll post more pictures once parts start coming in.


----------



## andyroo89

Maybe this will be interesting for someone?
http://www.ebay.com/itm/Lenovo-ThinkServer-TS140-5U-Tower-Server-Intel-Xeon-E3-4GB-DDR3-500GB-HDD/291140796467?hash=item43c9580033


----------



## swingarm

Behold my awesome reused Dell computer. Dad's a computer tech and earlier this year he had some computers and a bunch of hard drives sitting around so I made this file server....

OS: Openmediavault(Linux)
Case: Dell Optiplex 330
CPU: Intel Core 2 Duo E4500 @ 2.20GHz
Motherboard: Socket 775
Memory: 2 x 512MB DDR2 PC2-6400
PSU: Antec VP-450 450Watt
OS HDD: Kingspec SATA II 8GB SSD
Storage HDD(s): Western Digital 80GB, Western Digital 4TB, Seagate 80GB, Seagate 160GB, and Western Digital 500GB
Other: Promise 4 port SATA II PCI Card, Evercool Dual 5.25" Drive Bay to Triple 3.5" HDD Cooling Box, and power is plugged into a Eaton 3S750 UPS
Server Manufacturer: Dell




The 8GB SSD OS drive is in the space below the 2 lower hard drives; no need to fasten it to anything. I have space for 1 more HDD if needed. One of the 80GB drives has an IDE interface, so I had to find an IDE-to-SATA converter for it. Tried to make the cables look nice, but it wasn't that important to me. 1GB of memory may seem really small, but the OS only uses about 10% of it. The 4TB drive wasn't "laying around"; I bought it separately because I wanted at least one drive with a lot of capacity. Replaced the Dell OEM 300Watt(?) PSU with the Antec because I didn't trust it to power everything. The Promise card is there because there were only 4 SATA ports on the motherboard, and the Evercool drive bay because I could fit three 3.5" hard drives in a dual 5.25" space.


----------



## fasttracker440

Hey all, just found this thread. My current setup is a work in progress, but aren't they all lol. The heart of my setup is a Dell R710 and a Supermicro 24-bay JBOD box. The Dell's basic specs: dual Xeon 5570s, 48GB ECC RAM, dual 870W PSUs. I replaced the crap PERC RAID card with an Adaptec ASR-52445, and the JBOD box runs off an Adaptec 5805. In the JBOD box I currently have a mix of 11 2TB HDDs in RAID 5, which gives me about 18TB usable. In the Dell are 5 Intel 40GB SSDs in RAID 5 for the OS. The last 3 slots have 500GB drives in RAID 0 for downloads and other temp stuff. I currently use it mostly for media storage/streaming and DNS, with Plex for my media. I have one crappy pic on my phone so will post more later.


----------



## Gunfire

Quote:


> Originally Posted by *fasttracker440*
> 
> 01001001 00100000 01101100 01101111 01110110 01100101 00100000 01100010 01101001 01100111 00100000 01110100 01101001 01110100 01110011












Nice set-up though


----------



## fasttracker440

Quote:


> Originally Posted by *Gunfire*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nice set-up though


someone finally looked at my childish signature lol


----------



## Pawelr98

My HP ML350 G5 codename "MIR"
Exact specs:
-2xE5420
-20GB FB-DIMM (2x2GB + 4x4GB)
Storage:
-3x143GB 10K RPM SAS Standalone-storage/VMs
-2x73GB 15K RPM SAS RAID0-Arma III server
-2x40GB 7.2K RPM SATA RAID0-OS drive

I had to perform a lot of mods to make it run properly:
-Bigger heatsinks for the RAID controller and southbridge
-DIY backup battery to enable the 128MB cache on the RAID controller
-HD5450 on a PCIe x4 riser, to make the OS less laggy and to host modern game servers
-FB-DIMM cooling mod, lowered RAM temperature by 5-10°C (depending on the stick)


----------



## cones

I've been seeing that DIY battery backup more often lately, how often do you have to change them?


----------



## Pawelr98

Quote:


> Originally Posted by *cones*
> 
> I've been seeing that DIY battery backup more often lately, how often do you have to change them?


I created this battery pack just recently.
The cells are pretty old (a few years) but still good enough to keep the cache running.


----------



## andyroo89

Can anyone recommend me a power consumption meter for an electrical outlet, so I can test whether I can leave my servers running 24/7 without a huge spike in the electric bill?


----------



## cones

Quote:


> Originally Posted by *andyroo89*
> 
> Can anyone recommend me a power consumption meter for an electrical outlet, so I can test whether I can leave my servers running 24/7 without a huge spike in the electric bill?


http://www.amazon.com/gp/aw/d/B000RGF29Q/ref=mp_s_a_1_2?qid=1443158222&sr=8-2&pi=SY200_QL40&keywords=p3+p4400+kill+a+watt&dpPl=1&dpID=41nMQyqE75L&ref=plSrch


----------



## andyroo89

Quote:


> Originally Posted by *cones*
> 
> http://www.amazon.com/gp/aw/d/B000RGF29Q/ref=mp_s_a_1_2?qid=1443158222&sr=8-2&pi=SY200_QL40&keywords=p3+p4400+kill+a+watt&dpPl=1&dpID=41nMQyqE75L&ref=plSrch


Yeah, I was looking at that one. I might pick one up at Lowes or Walmart; they're about 17 bucks or so.
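Once you have the reading off the meter, the bill math is simple. A quick sketch (the 150W draw and 12¢/kWh rate are example numbers; plug in your own):

```shell
# Monthly cost of a box running 24/7.
# Example inputs: 150W average draw, 12 cents per kWh.
watts=150
rate_cents_per_kwh=12

# A 30-day month is 720 hours; kWh = watts * hours / 1000.
monthly_kwh=$(( watts * 720 / 1000 ))
cost_cents=$(( monthly_kwh * rate_cents_per_kwh ))

echo "${monthly_kwh} kWh/month, about \$$(( cost_cents / 100 )).$(( cost_cents % 100 )) on the bill"
```

At those numbers that's 108 kWh and about $13 a month, so a $17 Kill A Watt pays for itself quickly.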


----------



## tiro_uspsss

Quote:


> Originally Posted by *Pawelr98*
> 
> -FB-DIMM cooling modification, lowered ram temperature by 5-10°C (depending on stick)


whats the mod? just the fan or..?


----------



## Pawelr98

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Pawelr98*
> 
> -FB-DIMM cooling modification, lowered ram temperature by 5-10°C (depending on stick)
> 
> 
> 
> whats the mod? just the fan or..?

The fan.

Without this fan the server will use the nearby CPU fan to cool it down (by cooling it down I mean keeping it under 75°C). This creates a lot of noise.


----------



## cones

Quote:


> Originally Posted by *andyroo89*
> 
> Yeah I was looking at that one, I might pick one up at lowes or walmart they're about 17 bucks or so.


Check the versions first, I believe it's 4420 or 4400; the one I linked is newer and has slightly more features than the older one for not much more.


----------



## EvilMonk

Wish I lived in the US, because I'm going to be practically giving away some G5s quite soon: DL360, DL380, DL385 and a StorageWorks MSA20, to make room in my rack for the 2 DL160 G6 and 3 DL380 G7 servers coming in along with my new 16-bay SAS2/SATA3 external enclosure. The G5s are loaded with Harpertown CPUs (including the lower-power E5450 quad-core 3GHz variant) with 32GB of FB-DIMM. All have either 512MB Smart Array P800 or P400 SAS RAID controllers; one even has a 1GB Smart Array P410. I really just wish someone could give me a small something and pay shipping to get them out of my house, so I won't end up piling them up in the garage...

I have 2 DL380 G5s, 2 DL360 G5s, and a DL385 G5 with 2 quad-core 2.3GHz Opterons and 32GB of DDR2 ECC, with a 512MB Smart Array P800 and a second U320 Smart Array 6404 (320MB) connecting the StorageWorks MSA20 SAN, which supports up to 24TB of HDDs (12x 2TB SATA).


----------



## NKrader

Quote:


> Originally Posted by *Pawelr98*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> My HP ML350 G5 codename "MIR"
> Exact specs:
> -2xE5420
> -20GB FB-DIMM (2x2GB + 4x4GB)
> Storage:
> -3x143GB 10K RPM SAS Standalone-storage/VMs
> -2x73GB 15K RPM SAS RAID0-Arma III server
> -2x40GB 7.2K RPM SATA RAID0-OS drive
> 
> I had to perform a lot of mods to make it running properly.
> -Bigger radiators for RAID controller and SouthBridge
> -DIY backup battery to enable 128MB cache for RAID controller
> -HD5450 with PCIE X4 riser-to make OS less laggy and to run host modern games servers
> -FB-DIMM cooling modification, lowered ram temperature by 5-10°C (depending on stick)


This makes me so happy.

Also, I should soon have a new server to post; building it currently.


----------



## andyroo89

Anyone running a Raspberry Pi torrent box? If so, how is yours set up? I have some extra internal HDDs and an extra Pi lying around.


----------



## twerk

Any HP server buffs in the house? I'm thinking of buying a DL series for home use and want to spend under £1000.

After quite a bit of research the DL80 seems like the best option. This config specifically:

http://www.uk.insight.com/en-gb/productinfo/servers/0004144830

8x 3.5" hot-swap bays

1.6GHz 6-core Xeon

4GB 2133MHz RAM

Dual GbE

They are doing a buy one get one free offer at the moment. If you buy two servers, HP will give you cash back for the second one. If I then sell one, I'm effectively only paying the tax for one server.

http://www.uk.insight.com/content/dam/insight/EMEA/uk/promotions/PA0151_-_BOGOF_Gen9_Servers_Sept_15_v1_2.pdf

Thoughts?


----------



## zanginator

Quote:


> Originally Posted by *andyroo89*
> 
> Anyone running raspberrypi torrent box? If so, how is yours set up? I have some extra internal HDD, and extra rpi lying around.


Have Pis, have a torrent box. Just not a Pi torrent box. In terms of software I use nothing special, just rTorrent with a ruTorrent web front end running on top of Debian.

My only concern with the Pi and HDDs is power. The USB ports are very limited in what amperage they can supply, and it's usually just enough for a 2.5" drive in an external caddy. I wouldn't risk two drives unless you can power them externally.
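On the rTorrent side there's very little to configure. Here's a minimal sketch of the relevant ~/.rtorrent.rc lines (directive names are the modern rTorrent 0.9-style ones; the paths and SCGI port are made-up examples), written to a local file here just so it can be inspected:

```shell
# Minimal rTorrent config sketch for a small seedbox.
# The paths and port below are examples; create the session dir first.
cat > rtorrent.rc.example <<'EOF'
directory.default.set = /mnt/usb/downloads
session.path.set = /mnt/usb/.rtorrent-session
network.scgi.open_port = 127.0.0.1:5000
EOF

grep -c '=' rtorrent.rc.example   # 3 directives written
```

ruTorrent then talks to rTorrent over that SCGI socket through the web server.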


----------



## stumped

Quote:


> Originally Posted by *zanginator*
> 
> Quote:
> 
> 
> 
> Originally Posted by *andyroo89*
> 
> Anyone running raspberrypi torrent box? If so, how is yours set up? I have some extra internal HDD, and extra rpi lying around.
> 
> 
> 
> Have Pi's, have a torrent box. Just not a Pi Torrent Box. In terms of software I use nothing special, just rTorrent with a ruTorrent Web-Front end running on top of Debian.
> 
> My only concern with the Pi and HDD's is power. The USB ports are very limited to what amperage they can supply and its usually just enough for a 2.5" drive in an external caddy. But I wouldn't risk two drives, unless you can externally power them.

The other issue (at least with the Pi, not sure about the Pi 2) is that the ethernet and USB run on the same bus, so they fight each other for resources. This can make the network unstable, and HDD throughput can suffer. If you're looking for a torrent box using an ARM chip, maybe look at something that isn't a Pi (some have dedicated buses for the ethernet and the USB controller).
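The bus contention is easy to measure before you commit. A crude sequential-write check with dd shows what the disk path can actually sustain (64MB test file; `conv=fdatasync` makes dd include the flush to disk in its timing):

```shell
# Crude sequential-write benchmark: writes a 64MB file, dd prints the MB/s.
# Run on the Pi's USB disk, then on another machine, and compare the rates.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
```

Delete ddtest.bin afterwards. For the network half of the story, iperf between the Pi and another box will show what the ethernet manages while the disk is busy.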


----------



## EvilMonk

Quote:


> Originally Posted by *twerk*
> 
> Any HP server buffs in the house? I'm thinking of buying a DL series for home use and want to spend under £1000.
> 
> After quite a bit of research the DL80 seems like the best option. This config specifically:
> http://www.uk.insight.com/en-gb/productinfo/servers/0004144830
> 
> 8x 3.5" hot-swap bays
> 1.6GHz 6-core Xeon
> 4GB 2133MHz RAM
> Dual GbE
> 
> They are doing a buy one get one free offer at the moment. If you buy two servers, HP will give you cash back for the second one. If I then sell one, I'm effectively only paying the tax for one server.
> http://www.uk.insight.com/content/dam/insight/EMEA/uk/promotions/PA0151_-_BOGOF_Gen9_Servers_Sept_15_v1_2.pdf
> 
> Thoughts?


I consider myself to be an HP enthusiast. I have about 10 Proliant G6 and G7 servers that are a mix of DL160 G6, DL180 G6, DL360 G6 & G7 and DL380 G7 series.

The deal you're talking about is a really good one, I think. I don't own any G9-series servers, but we have them at work and they've been really good so far. This particular DL80 G9 seems good, but it will need upgrades after you purchase it: more RAM, hard drives, and there's no real RAID controller, as the one specified is the Smart Array B140i (aka Intel built-in iRST), which doesn't support SAS or any advanced RAID functions. In the end you'll pay a lot more to get the server to a decent performance level. If you want to add a second CPU you'll need to buy a specific HP heatsink made for that server and a matching CPU, and those usually don't go cheap for new servers. The HP Smart Array SAS controllers are a little expensive as well if you look for the version designed for that specific server, and you'll want a decent amount of cache and a battery on it to help with I/O. So I guess in the end all those small amounts you spend on the missing components will add up to roughly what you save from selling the other server and getting the cash back from HP. But you'll have the advantage of a brand-new server, which I think is a big plus, along with a lot more performance and the latest tech in it.


----------



## twerk

Quote:


> Originally Posted by *EvilMonk*
> 
> I consider myself to be an HP enthusiast. I have about 10 Proliant G6 and G7 servers that are a mix of DL160 G6, DL180 G6, DL360 G6 & G7 and DL380 G7 series.
> 
> The deal you are talking about is a really good one I think, I don't own any G9 series server but we have them at work and they are really good so far and this particular DL80 G9 server seem to be good but will require to be upgraded after you purchased it (Like more RAM, hard drives, there is no real raid controller as the one specified is the Smart Array B140i aka Intel built in iRST which does not support SAS or any advanced raid functions.) At the end you will end up paying a lot more to get the server to a performance level (If you want to add a second CPU you'll need to buy a specific HP heatsink made for that server and a the matching CPU... Usually they don't go cheap for new servers. Then the HP Smart Array SAS controllers are a little expensive as well if you look for the version that is designed for that specific server. You will want to have a decent amount of cache and a battery on it as well to help with I/O) so I guess at the end you end up paying back the money you saved from selling the other server and getting the cash back from HP... All those small amounts you'll end up spending to put those missing components in the server will add up to the value you'll save but you will have the advantage of getting you a brand new server which is I think a big plus as well as having a lot more performance and the latest tech in it.


Awesome, thank you for your input. For the moment I think I'll stick with the single CPU and upgrade to 8GB of RAM, so that shouldn't cost me much. I'll probably grab an H240 HBA for SAS and RAID 5 capability; it's £182.39 at the moment. It's not great, but it's good enough for my current uses.

Do you know if HP servers come with the hot-swap caddies, or do you have to buy them separately?

I had a pretty hard time picking a server; they have quite a few models in the offer. The ones in red have 2.5" bays, and I already have some 3.5" drives I can use, so sticking with 3.5" will save me quite a bit of cash. The DL80 seemed like the best option simply because the 8x 3.5" bays allow for more expansion in the future. If you disagree, please let me know if there's a better buy.





----------



## EvilMonk

Quote:


> Originally Posted by *twerk*
> 
> Awesome, thank you for your input. For the moment I think I'll stick with the single CPU and upgrade to 8GB of RAM, so that shouldn't cost me much. I'll probably grab a H240 HBA controller for the SAS and RAID 5 ability. It's £182.39 at the moment. It's not great but it's good enough for my current uses.
> 
> Do you know if HP servers come with the hot-swap caddies, or do you have to buy them separately?
> 
> I had a pretty hard time picking a server, they have quite a few models in the offer. The ones in red have 2.5" bays, I already have some 3.5" drives I can use so it will save me quite a bit of cash. The DL80 seemed like the best option simply because of the 8 x 3.5" bays allowing for more expansion in the future. If you disagree then please let me know if there's a better buy.
> 
> 
> 


Hi,
I agree with you the 3.5" drives are the easiest to go with and I always try to go with those even when I get a server now (I got 2 DL380 G7 in the last month and went for the 3.5" versions).
You will also need to get the caddies to fit your drives in the server, yes. They're not that hard to find, so you can probably get some on eBay for a decent price. I got myself more than I needed, both for 2.5" and 3.5" drives, and now I have a box full of them. Luckily they're the same for the DL series from G5 to G7, so I kept them when drives failed.

Let me know if you have other questions







have a nice day.


----------



## twerk

Quote:


> Originally Posted by *EvilMonk*
> 
> Hi,
> I agree with you the 3.5" drives are the easiest to go with and I always try to go with those even when I get a server now (I got 2 DL380 G7 in the last month and went for the 3.5" versions).
> You will also need to get the caddies to fit your drives in the server yes. they are not so hard to find so you can probably get some on ebay for a decent price. I got myself more than I needed both for 2.5" and 3.5" drives and now I have a box full of them. Luckily they are the same for DL series from G5 to G7 so I kept them when drives failed.
> 
> Let me know if you have other questions
> 
> 
> 
> 
> 
> 
> 
> have a nice day.


I've had a look around for official HP caddies and they don't seem to exist.

After some research, it seems they want you to use only their drives. This thread was pretty discouraging:

http://h30499.www3.hp.com/t5/ProLiant-Servers-ML-DL-SL/ML350p-Gen8-Hard-Drives-SBS-and-Controller/td-p/5598551#.VhFmbPlVhBc

People are saying you have to use HP drives in Gen8 and Gen9 servers. Is this true?


----------



## EvilMonk

Quote:


> Originally Posted by *twerk*
> 
> I've had a look around for official HP caddies and they don't seem to exist.
> 
> After some research it seems they want to you only use their drives. This thread was pretty discouraging:
> http://h30499.www3.hp.com/t5/ProLiant-Servers-ML-DL-SL/ML350p-Gen8-Hard-Drives-SBS-and-Controller/td-p/5598551#.VhFmbPlVhBc
> 
> People saying that you have to use HP drives in Gen8 and Gen9 servers. Is this true?


It's not the case for Gen6 and Gen7 servers, but the Gen8 and Gen9 servers we have at work all use HP-branded SSDs and HDDs, so I can't really say for the newer gens. It might be the case, but I doubt it... I'll take a look later this afternoon and let you know.


----------



## twerk

Quote:


> Originally Posted by *EvilMonk*
> 
> Its not the case for Gen6 and Gen7 servers but the Gen8 and Gen9 servers we have at work all use HP branded SSDs and HDDs so I can't really tell for the newer gens. It might be the case but I doubt it... I'll try to take a look later this afternoon and I'll let you know.


Alrighty thanks! I can find some knock-off caddies on eBay and other sites which would work. It's just a case of whether third party drives work or not.


----------



## andyroo89

Quote:


> Originally Posted by *zanginator*
> 
> Have Pi's, have a torrent box. Just not a Pi Torrent Box. In terms of software I use nothing special, just rTorrent with a ruTorrent Web-Front end running on top of Debian.
> 
> My only concern with the Pi and HDD's is power. The USB ports are very limited to what amperage they can supply and its usually just enough for a 2.5" drive in an external caddy. But I wouldn't risk two drives, unless you can externally power them.


I have a powered usb hub I was planning to use to power the HDD.


----------



## BugBash

Quote:


> Originally Posted by *twerk*
> 
> I've had a look around for official HP caddies and they don't seem to exist.
> 
> People saying that you have to use HP drives in Gen8 and Gen9 servers. Is this true?


G8/G9 caddies have a collection of lights on the front to indicate what they're doing or if they've failed.
The server will report that it detects non-HP drives in the HP Smart Storage Administrator or Array Config Utility.

As the G8 servers start to run out of warranty, the caddies will start to appear.
I can't remember if you could get 36GB drives for G8s; if you can, those will be worth peanuts now.
Maybe buy a few and swap out the drives.

EDIT:
Maybe I should look on eBay before posting. There are millions of G8 3.5" caddies on there! They're expensive, but you can put whatever drives you want in them.

http://www.ebay.co.uk/sch/i.html?_odkw=hp+36gb+g8&_osacat=0&_from=R40&_trksid=p2045573.m570.l1313.TR0.TRC0.H0.Xhp+g8+3.5%22.TRS0&_nkw=hp+g8+3.5%22&_sacat=0


----------



## EvilMonk

Quote:


> Originally Posted by *BugBash*
> 
> G8/G9 Caddies have a collection of lights in the front to indicate what they are doing or if they have failed.
> The server will report that it detects Non-HP drives installed in the HP Smart Storage Admistrator or Array Config Utility
> 
> As the G8 Servers start to run out of warranty, the caddies will start appear.
> I cant remember if you could get 36GB drives for G8`s, if you can, these will be worth peanuts now.
> Maybe buy a few and swap out the drives.
> 
> EDIT:
> Maybe I should look on ebay before posting
> 
> 
> 
> 
> 
> 
> 
> There are millions of G8 3.5" caddies on there!
> They are expensive but you can put whatever drives you want in there.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.ebay.co.uk/sch/i.html?_odkw=hp+36gb+g8&_osacat=0&_from=R40&_trksid=p2045573.m570.l1313.TR0.TRC0.H0.Xhp+g8+3.5%22.TRS0&_nkw=hp+g8+3.5%22&_sacat=0


One of my friends has 2 DL320 G8 v2 servers with third-party SSDs (4x Crucial MX100 512GB) in each server, on HP Smart Array P420 1GB FBWC SAS controllers, and he confirms they work fine with eBay caddies. So I guess we just need to find out whether the same is true for G9 servers with the P430 and P440 arrays...


----------



## twerk

Well, the DL80 Gen9s arrived! I bought an HP H240 Smart Host Bus Adapter thinking the onboard controller wouldn't handle the 8 hot-swap drives. It turns out it does, so the card will probably be going back.

My only problem is the noise. The fans are constantly at 50%+, which seems completely unnecessary and is ridiculously loud. I understand it's a server and it's meant for a data center environment, but all it has to cool is a single 85W CPU; even at 5% fan speed the temperatures would be perfectly fine.

Anyone know if I can turn the fans down, and if so, how?


----------



## EvilMonk

Quote:


> Originally Posted by *twerk*
> 
> Well, the DL80 Gen9's arrived! I bought a HP H240 Smart Host Bus Adapter thinking that the onboard controller wouldn't handle the 8 hot-swap drives. It turns out it does, so the card will probably be going back.
> 
> My only problem is the noise. The fans are constantly at 50%+ which just seems completely unnecessary and it ridiculously loud. I understand it's a server and it meant for a data center environment. All it has to cool is a single 85W CPU, even at 5% fan the temperatures would be perfectly fine.
> 
> Anyone know if I can turn down the fans and if so how?


There should be an option in the BIOS that lets you choose between a quiet thermal setting and an optimal thermal setting. On the G9-series servers it's probably in the advanced setup part of the UEFI setup screen, but I'm not going in to work this weekend so I can't check for you. I'm also not sure it's the same UEFI as on the DL380 and DL360 Gen9 servers, but you can probably look it up on the HP website or google it. You have to know this server is still going to be noisy, though.


----------



## twerk

Quote:


> Originally Posted by *EvilMonk*
> 
> Should have an option in the bios that allow you to choose either quiet thermal setting and optimal thermal setting, where this setting is on the G9 series server is probably in the advanced setup part of the UEFI setup screen but I am not going to work this weekend so I wouldn't be able to tell you. I am not sure as well it is going to be the same UEFI there is on the DL380 and DL360 gen9 servers but you can probably look it up on the HP website or google it. You still have to know this server is going to still be noisy.


Thanks. Yeah I had a look at that, the 'slowest' option was Optimal Cooling which is the default.

I know it's not going to be the quietest machine in the world but running both fans at over 8000rpm at idle seems a bit silly to me.


----------



## Rbby258

Quote:


> Originally Posted by *twerk*
> 
> Thanks. Yeah I had a look at that, the 'slowest' option was Optimal Cooling which is the default.
> 
> I know it's not going to be the quietest machine in the world but running both fans at over 8000rpm at idle seems a bit silly to me.


Changing the fans is really the only way.


----------



## fasttracker440

Quote:


> Originally Posted by *twerk*
> 
> Thanks. Yeah I had a look at that, the 'slowest' option was Optimal Cooling which is the default.
> 
> I know it's not going to be the quietest machine in the world but running both fans at over 8000rpm at idle seems a bit silly to me.


In my Supermicro case I installed something like these: http://www.amazon.com/Computer-Noise-Reduce-Resistor-Adapter/dp/B00880S6KA . Mine came from some Corsair fans I had, but the idea is the same. It does throw an enclosure fault on my RAID card, but temps are fine.


----------



## EvilMonk

Just to mention, the fans are clip-in inserts in an HP-specific form factor, and tweaking the fan connector to fit another fan will void your warranty... HP is quite difficult about its warranties, as we found out at work when we used non-HP external SAS cables between our HP Ultrium G2 1/8 tape backup and the P222 SAS controller: all backups were screwing up on the tape drive and we needed their support to come out, and we even have a 4-hour emergency support contract.


----------



## Pawelr98

Quote:


> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *EvilMonk*
> 
> Should have an option in the bios that allow you to choose either quiet thermal setting and optimal thermal setting, where this setting is on the G9 series server is probably in the advanced setup part of the UEFI setup screen but I am not going to work this weekend so I wouldn't be able to tell you. I am not sure as well it is going to be the same UEFI there is on the DL380 and DL360 gen9 servers but you can probably look it up on the HP website or google it. You still have to know this server is going to still be noisy.
> 
> 
> 
> Thanks. Yeah I had a look at that, the 'slowest' option was Optimal Cooling which is the default.
> 
> I know it's not going to be the quietest machine in the world but running both fans at over 8000rpm at idle seems a bit silly to me.

Plug the fans into 7V? Plus into 12V and minus into 5V gives 7V across the fan.

However, before doing anything to the fans I would check whether other parts need cooling.
My ML350 G5 had its fans running fast because of hot FB-DIMMs.

Your server is newer (no uber-hot FB-DIMMs), so I would check the chipsets.
If no temperature sensors are visible in HWiNFO64/HWMonitor, HP's remote management will probably be able to read them.

And when that fails, a finger can be enough to tell.
If you can't hold it on a heatsink for more than 2 seconds without serious pain, it's running hot.


----------



## EvilMonk

Quote:


> Originally Posted by *Pawelr98*
> 
> Plug the fans into 7V ?
> Plus into 12V and minus into 5V.
> 
> However before doing anything to fans I would check if there are other parts that require cooling.
> My ML350 G5 had fans running fast because of hot FB-Dimm.
> 
> Your server is a newer one (no uber-Hot FB-Dimms) so I would check the chipsets.
> When there are none temperature sensors visible in HWinfo64/Hwmonitor then HP remote managing will probaly be able to read them.
> 
> And when that fails then using finger can be enough to tell.
> If you cannot hold it on radiator for more than 2 seconds without serious pain then it means it runs hot.


New HP servers just run with high fan speed even with the lowest fan setting selected, even when the temperatures are not that hot. The DL360 and DL380 Gen8 and Gen9 servers we have at work (around 60 of them) do, at least... It's the same for the couple of DL160 G6, DL180 G6, DL320 G6 & G7, DL360 G7 and DL380 G7 servers I have at home: when I look at the temps through iLO they are all way under the warning and critical thresholds, and they're set to the lowest fan speed setting in the BIOS, but they still manage to make a good amount of noise. With the way the slide-in fan connector is built into the newer G9 servers, I doubt he could modify the wiring to change the voltage from 12V to 7V without cutting wires, which would make visible, irreversible changes HP support would spot and void his warranty.


----------



## Pawelr98

Quote:


> Originally Posted by *EvilMonk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Pawelr98*
> 
> Plug the fans into 7V ?
> Plus into 12V and minus into 5V.
> 
> However before doing anything to fans I would check if there are other parts that require cooling.
> My ML350 G5 had fans running fast because of hot FB-Dimm.
> 
> Your server is a newer one (no uber-Hot FB-Dimms) so I would check the chipsets.
> When there are none temperature sensors visible in HWinfo64/Hwmonitor then HP remote managing will probaly be able to read them.
> 
> And when that fails then using finger can be enough to tell.
> If you cannot hold it on radiator for more than 2 seconds without serious pain then it means it runs hot.
> 
> 
> 
> New HP servers are just running with high fan speed even at with the lowest fan setting selected, even when the temperatures are not that hot. The DL360s and DL380s Gens 8 and 9 we have at work (around 60) are at least... It's the same for the couple of DL160 G6, DL180 G6, DL320 G6 & G7, DL360s G7 and DL380s G7 I have at home, when I look at the temps through iLO they are all way under the warning and critical temp threshold and are set on the lowest fan speed setting in the bios but they still manage to make a good amount of noise. I doubt that with the way the fans slide in connector is made with the power built in on newer G9 servers that he could modify anything to the wiring to change the voltage of the for the 12v and 7v without having to cut any wires and then void the warranty or make visible and irreversible changes that would be visible to HP support and void his warranty.

I mean just making a simple connector for the fan.

A simple Molex->2/3-pin cable with the minus wire put into 5V.
There are also adapters that already do that.
This is the safest method.

I had no problem connecting a standard 3-pin fan to the 6-pin HP fan connector in my server.
All you have to do is find the right pins (ground, power, RPM) and use a bit more force.
4-pin is the same, but you also need to locate the PWM pin.
The plastic bends a bit, and the fan still makes contact.

Remove the fan and there's no evidence of mods (if done carefully).


----------



## EvilMonk

Quote:


> Originally Posted by *Pawelr98*
> 
> I mean just doing simple connector for the fan.
> 
> A simple molex->2/3pin cable and put the minus wire into 5V.
> There are also adapters which already do that.
> This is the most safe method.
> 
> I had no problems with connecting standard 3 pin fan into 6pin HP fan connector in my server.
> All you have to do is find the right pins(ground,power,rpm) and use bit more force.
> 4pin is the same but you also need to locate PWM pin.
> The plastic thing will bend a bit and fan still catches connection.
> 
> Remove the fan and no evidence of mods(if done carefully).


The connectors are a lot smaller and there's a lot less room in the G9 servers now; that's what I meant, sorry if my explanation wasn't clear.


----------



## levontraut

Since my last 2 posts I have made a huge change (or am in the process of one). I am busy upgrading my kit and relocating it all.
Quote:


> Originally Posted by *levontraut*
> 
> I have just upgraded my games rig and turned it into a Main server for myself.
> 
> the Specs are in my Sig.
> 
> here is a brief look aqt it though
> 
> Mobo:
> gigabyte 990fxa ud7
> 
> CPU
> 8350
> 
> RAM
> 32 gig 1866
> 
> HDD:
> lots ( can not fit anymore in the case)
> 
> OS
> Server2012
> 
> It is taking a lot of time to set it up correctly, the file sharing is done, Teamspeak server is done now to do the backup etc...


Please see below pics.








In the pic is my daughter in front of the rack, for size comparison. The rack is a little 27U Fusion (800x1000x1430).
I will be putting all 3 machines in there (so my cases will be up for sale soon).
The cases I am looking at are 2x 4U chassis with 16x 3.5" hot-swap bays,
then something that will take my water cooler, 4 SSDs, and the rest of my games rig.

I am hoping to make some or most of my money back from the sale of my cases, as they are still in very good condition.

Will keep you posted as I go along.


----------



## cdoublejj

A buddy's, with 1TB of RAM. Model unknown; MAY be an M920.


----------



## cones

Quote:


> Originally Posted by *cdoublejj*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> A buddies, 1TB of ram. Model unknown. MAY be an M920.


That must be expensive. What would you need that much RAM for, or is it for a business?


----------



## cdoublejj

Quote:


> Originally Posted by *cones*
> 
> That must be expensive. What would you need that much RAM for, or is it for a business?


Virtualizing entire office buildings/companies, converting almost all desktops and devices to dumb terminals. If a terminal breaks it gets replaced; everything really lives on the server, which never breaks.


----------



## twerk

Anyone have any insight on software vs hardware RAID?

I can't decide whether to use the integrated HP Dynamic Smart Array B140i on the motherboard, or just do it in software via Ubuntu.

There will be one RAID 1 array for the boot drive (hardware RAID of course) but there will also be three 3TB disks in RAID 5 for storage, which I can't decide on.
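For context on the capacity side: RAID 5 keeps one disk's worth of parity, so my three 3TB disks work out as below. A quick Python sketch (the helper name is just mine, for illustration):

```python
def raid5_usable_tb(num_disks: int, disk_tb: float) -> float:
    """RAID 5 stores one disk's worth of parity, so usable space is (n - 1) * size."""
    if num_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (num_disks - 1) * disk_tb

# Three 3 TB disks -> 6 TB usable; one disk's worth is lost to parity.
print(raid5_usable_tb(3, 3.0))  # 6.0
```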


----------



## Aximous

Quote:


> Originally Posted by *twerk*
> 
> Anyone have any insight on software vs hardware RAID?
> 
> I can't decide whether to use the integrated HP Dynamic Smart Array B140i on the motherboard, or just do it in software via Ubuntu.
> 
> There will be one RAID 1 array for the boot drive (hardware RAID of course) but there will also be three 3TB disks in RAID 5 for storage, which I can't decide on.


Honestly, I would go for ZFS instead of anything hardware RAID. That way you won't get tied to a specific card and can migrate to any platform that fits your hardware. You'll also get more features that help you protect your data.

This video sums it up pretty well; well worth watching on this topic:


----------



## NKrader

Quote:


> Originally Posted by *Aximous*
> 
> Honestly I would go for ZFS instead of anything hardware RAID. That way you won't get tied to that specific card and can migrate to any platform that fits your hardware. *Also you'll get more features that help you protect your data*.
> 
> This video sums it up pretty good, well worth watching on this topic:


Yep


----------



## cdoublejj

Quote:


> Originally Posted by *Aximous*
> 
> Honestly I would go for ZFS instead of anything hardware RAID. That way you won't get tied to that specific card and can migrate to any platform that fits your hardware. Also you'll get more features that help you protect your data.
> 
> This video sums it up pretty good, well worth watching on this topic:


Oooohhhh, great video. +Rep for that one, sir. Been a while since I watched it; aren't there some performance losses by going with ZFS, or is that dependent on the controller? Also, I would like to note that drive failure and data rot can possibly be fought on RAID 5/6 by replacing drives early or annually, so all the drives are not the same age or have the same amount of hours.


----------



## stumped

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Aximous*
> 
> Honestly I would go for ZFS instead of anything hardware RAID. That way you won't get tied to that specific card and can migrate to any platform that fits your hardware. Also you'll get more features that help you protect your data.
> 
> This video sums it up pretty good, well worth watching on this topic:
> 
> 
> 
> 
> 
> 
> 
> Oooohhhh, great video. +Rep for that one, sir. Been a while since I watched it; aren't there some performance losses by going with ZFS, or is that dependent on the controller? Also, I would like to note that drive failure and data rot can possibly be fought on RAID 5/6 by replacing drives early or annually, so all the drives are not the same age or have the same amount of hours.
Click to expand...

If you have ECC RAM, data rot on ZFS is a non-issue. As for drive failure, that's an issue you'll be fighting no matter what you choose.

Performance-wise, ZFS depends on the hardware resources allocated to it. The only downside is that once you hit ~75% of total capacity you take rather large performance hits, whereas something like ext4 only starts hitting that limit at about 85% (or even up to 90%).
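Those thresholds are rules of thumb rather than hard limits, but they're easy to turn into numbers for a given pool. A quick Python sketch (the function name is mine; the 75%/85% figures are the rough guidance above):

```python
def space_before_slowdown_tb(pool_tb: float, threshold: float) -> float:
    """Capacity you can fill before hitting the rule-of-thumb performance knee."""
    return pool_tb * threshold

# An 8 TB pool: ZFS rule of thumb (~75%) vs. ext4 (~85%).
print(space_before_slowdown_tb(8.0, 0.75))  # 6.0
print(space_before_slowdown_tb(8.0, 0.85))  # 6.8
```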


----------



## EvilMonk

As I mentioned earlier, the HP Smart Array B140i is not a dedicated hardware RAID chip but an Intel RST-based RAID. You won't get the real performance level of a hardware SoC-based card, but since you will use ZFS I guess you won't be needing it; you will be using software RAID anyway, so you don't really have to worry about anything.


----------



## cdoublejj

Quote:


> Originally Posted by *stumped*
> 
> If you have ECC RAM, data rot on ZFS is a non-issue. As for drive failure, that's an issue you'll be fighting no matter what you choose.
> 
> Performance-wise, ZFS depends on the hardware resources allocated to it. The only downside is that once you hit ~75% of total capacity you take rather large performance hits, whereas something like ext4 only starts hitting that limit at about 85% (or even up to 90%).


So ZFS is totally software? Would my CPU usage take a hit, or can my hardware RAID controller accelerate some of that?


----------



## NKrader

Quote:


> Originally Posted by *cdoublejj*
> 
> So ZFS is totally software? Would my CPU usage take a hit, or can my hardware RAID controller accelerate some of that?


Adding hardware RAID would cause problems and compromise the data. Yes, it uses more CPU, but most of our servers have wickedly overpowered CPUs anyway.

look at this
http://www.freenas.org/hardware-requirements/
https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/


----------



## Zen00

What would I have to add to convert my current rig into a server? What software?


----------



## twerk

Quote:


> Originally Posted by *Zen00*
> 
> What would I have to add to convert my current rig into a server? What software?


Depends entirely on what you want to do with it.

Mail server? Apps server? Proxy server? Media server...?


----------



## Zen00

Can I do all of the above?


----------



## xxpenguinxx

Quote:


> Originally Posted by *Zen00*
> 
> Can I do all of the above?


I don't see why not. A server is just a piece of hardware dedicated to network applications. You might need more storage if you plan on using it as a media server.


----------



## cdoublejj

Quote:


> Originally Posted by *NKrader*
> 
> Adding hardware RAID would cause problems and compromise the data. Yes, it uses more CPU, but most of our servers have wickedly overpowered CPUs anyway.
> 
> look at this
> http://www.freenas.org/hardware-requirements/
> https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/


You misunderstand, or I miscommunicated. RAID cards have processing units and RAM of their own; perhaps ZFS utilizes this to bring down overhead, I don't know. I know a hardware RAID card can be used in non-RAID mode to connect drives.


----------



## Pawelr98

But remember not to make the same mistake as me:
don't install FreeNAS on a USB flash drive.

After 1.5 years, the pendrive was dead.
The lack of wear leveling is the source of the problem.

So either use a small HDD, or flash storage with hardware-level wear leveling (unless FreeNAS has some tool to do this in software).

Also, I had no problems running FreeNAS on 2GB of RAM.
Samba + CIFS + SMART + Transmission.
FreeNAS was sometimes using 1-1.5MB of swap.
No problems maxing out HDD transfer rates on such a config.

But if you want to use ZFS (as well as more plugins/services), then prepare a lot of RAM.

An alternative to FreeNAS is unRAID.
More open IMO, but I've never used it.


----------



## stumped

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *NKrader*
> 
> Adding hardware RAID would cause problems and compromise the data. Yes, it uses more CPU, but most of our servers have wickedly overpowered CPUs anyway.
> 
> look at this
> http://www.freenas.org/hardware-requirements/
> https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/
> 
> 
> 
> You misunderstand, or I miscommunicated. RAID cards have processing units and RAM of their own; perhaps ZFS utilizes this to bring down overhead, I don't know. I know a hardware RAID card can be used in non-RAID mode to connect drives.
Click to expand...

You would have to use the RAID card in "pass-through mode" or, lacking that, set each drive to be its own RAID 0 volume.

ZFS is not just a filesystem: it is a volume manager, disk manager, filesystem, and probably more that I cannot currently remember. The more you put in ZFS's way, the better chance you have of corrupted or lost data. So no, you wouldn't be able to use the CPU of the RAID card; however, you might be able to use some of the cache on the RAID card (which the RAID card would be responsible for).


----------



## cdoublejj

Quote:


> Originally Posted by *stumped*
> 
> You would have to use the RAID card in "pass-through mode" or, lacking that, set each drive to be its own RAID 0 volume.
> 
> ZFS is not just a filesystem: it is a volume manager, disk manager, filesystem, and probably more that I cannot currently remember. The more you put in ZFS's way, the better chance you have of corrupted or lost data. So no, you wouldn't be able to use the CPU of the RAID card; however, you might be able to use some of the cache on the RAID card (which the RAID card would be responsible for).


Sounds like it's better to have a dedicated machine set up as a NAS or SAN with ZFS, so the actual server doesn't suffer an overhead penalty.


----------



## stumped

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *stumped*
> 
> You would have to use the RAID card in "pass-through mode" or, lacking that, set each drive to be its own RAID 0 volume.
> 
> ZFS is not just a filesystem: it is a volume manager, disk manager, filesystem, and probably more that I cannot currently remember. The more you put in ZFS's way, the better chance you have of corrupted or lost data. So no, you wouldn't be able to use the CPU of the RAID card; however, you might be able to use some of the cache on the RAID card (which the RAID card would be responsible for).
> 
> 
> 
> Sounds like it's better to have a dedicated machine set up as a NAS or SAN with ZFS, so the actual server doesn't suffer an overhead penalty.
Click to expand...

It honestly depends on the use case.

I have a desktop/fileserver with an i3-2100 and 16GB of RAM, and I don't notice overhead issues on the equivalent of a RAID 6.

What are your server's specs, and what is its purpose?


----------



## cdoublejj

Quote:


> Originally Posted by *stumped*
> 
> It honestly depends on the use case.
> 
> I have a desktop/fileserver with an i3-2100 and 16GB of RAM, and I don't notice overhead issues on the equivalent of a RAID 6.
> 
> What are your server's specs, and what is its purpose?


Dual-socket 1366 quads; lab use and multi-purpose use. Some of those i3s are wicked little dual cores, depending on the computations being done. In Nintendo Wii emulation they can beat out older HEAVILY OCed quads, if it's the right i3.


----------



## stumped

Quote:


> Originally Posted by *cdoublejj*
> 
> Dual-socket 1366 quads; lab use and multi-purpose use. Some of those i3s are wicked little dual cores, depending on the computations being done. In Nintendo Wii emulation they can beat out older HEAVILY OCed quads, if it's the right i3.


You've got *more* than enough CPU power for ZFS. The only thing is RAM, as ZFS likes to use about as much RAM as it can get its hands on; however, there is a plateau effect (as with all things), and you can tweak ZFS to limit the amount of RAM it can use for caching.

Hell, I used to run raidz2 (the RAID 6 equivalent) on this Core i3 with only 4GB of RAM, in a VirtualBox VM on a Windows 7 host, without a hiccup (granted, the VirtualBox VM was just a NAS, and I did pass the block device through to VirtualBox). I noticed no issues whatsoever. And now that this file server is on a dedicated machine, it runs even better.

One of the other cool things is you can create what is called a "zvol", which is a ZFS-backed block device, and pass that to a VM (depending on configuration and hypervisor), getting the CoW filesystem with data integrity checks, compression, and any other ZFS features you have enabled.


----------



## cdoublejj

Quote:


> Originally Posted by *stumped*
> 
> You've got *more* than enough CPU power for ZFS. The only thing is RAM, as ZFS likes to use about as much RAM as it can get its hands on; however, there is a plateau effect (as with all things), and you can tweak ZFS to limit the amount of RAM it can use for caching.
> 
> Hell, I used to run raidz2 (the RAID 6 equivalent) on this Core i3 with only 4GB of RAM, in a VirtualBox VM on a Windows 7 host, without a hiccup (granted, the VirtualBox VM was just a NAS, and I did pass the block device through to VirtualBox). I noticed no issues whatsoever. And now that this file server is on a dedicated machine, it runs even better.
> 
> One of the other cool things is you can create what is called a "zvol", which is a ZFS-backed block device, and pass that to a VM (depending on configuration and hypervisor), getting the CoW filesystem with data integrity checks, compression, and any other ZFS features you have enabled.


What about dual-socket 771 quads? I'm building a second server; maybe once I do that I can back everything up, so I can back up ESXi 6 (I run ESXi 6 on my RAID 5). I also use enterprise drives.

EDIT: certain models of those Core i3s can murder Core 2 Quads in certain computations and APIs.


----------



## Pawelr98

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *stumped*
> 
> You've got *more* than enough CPU power for ZFS. The only thing is RAM, as ZFS likes to use about as much RAM as it can get its hands on; however, there is a plateau effect (as with all things), and you can tweak ZFS to limit the amount of RAM it can use for caching.
> 
> Hell, I used to run raidz2 (the RAID 6 equivalent) on this Core i3 with only 4GB of RAM, in a VirtualBox VM on a Windows 7 host, without a hiccup (granted, the VirtualBox VM was just a NAS, and I did pass the block device through to VirtualBox). I noticed no issues whatsoever. And now that this file server is on a dedicated machine, it runs even better.
> 
> One of the other cool things is you can create what is called a "zvol", which is a ZFS-backed block device, and pass that to a VM (depending on configuration and hypervisor), getting the CoW filesystem with data integrity checks, compression, and any other ZFS features you have enabled.
> 
> 
> 
> What about dual-socket 771 quads? I'm building a second server; maybe once I do that I can back everything up, so I can back up ESXi 6 (I run ESXi 6 on my RAID 5). I also use enterprise drives.
Click to expand...

If it runs on FB-DIMM RAM, then skip it.
You will need really fast fans to keep those sticks cool.
Not sure if there are dual-771 boards that run on normal DDR2.

Not to mention that these CPUs are pretty slow compared to modern ones.
2x E5420 (80W, 2.5GHz) in my case gets ~6.0 in Cinebench 11.5;
a Thuban @ 4GHz gets 7.2.
A Sandy or newer Xeon with HT can easily beat those 771s.


----------



## cdoublejj

Quote:


> Originally Posted by *Pawelr98*
> 
> If it runs on FB-DIMM RAM, then skip it.
> You will need really fast fans to keep those sticks cool.
> Not sure if there are dual-771 boards that run on normal DDR2.
> 
> Not to mention that these CPUs are pretty slow compared to modern ones.
> 2x E5420 (80W, 2.5GHz) in my case gets ~6.0 in Cinebench 11.5;
> a Thuban @ 4GHz gets 7.2.
> A Sandy or newer Xeon with HT can easily beat those 771s.


Everything was free except the mobos, and I can handle making a good cooling system.


----------



## Pawelr98

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Pawelr98*
> 
> If it runs on FB-DIMM RAM, then skip it.
> You will need really fast fans to keep those sticks cool.
> Not sure if there are dual-771 boards that run on normal DDR2.
> 
> Not to mention that these CPUs are pretty slow compared to modern ones.
> 2x E5420 (80W, 2.5GHz) in my case gets ~6.0 in Cinebench 11.5;
> a Thuban @ 4GHz gets 7.2.
> A Sandy or newer Xeon with HT can easily beat those 771s.
> 
> 
> 
> Everything was free except the mobos, and I can handle making a good cooling system.
Click to expand...

From my experience:
One fan (server grade): 75-80°C
Two fans (server grade + 2000RPM normal): 60-70°C

3000-4000RPM fans may be useful.

Watercooling however would be the best and the quietest method.


----------



## Trinergy

Ghetto Plex and backup server built using a bunch of old hard disks cobbled from various old workstations (even an old 500GB IDE drive that won't die after 10 years). Originally built with Windows Home Server; upgraded to Windows Server 2012 Essentials in 2012 so I could play with Storage Spaces. The original case was a now-15-year-old Antec SX1000. This month I moved everything over to a Fractal Design Node 804 that I picked up when Newegg had a sale for $69.99. I crossflashed a Dell H200 to LSI 9211-8i in IT mode to give me 8 SATA ports. Also added a Noctua NF-A15 PWM fan to replace the original CM fan on the Hyper 212+. Will be installing Windows 10 and a new Seasonic G-450 PSU to add game streaming (XB1 and Steam) and DVR duties for the family room. Will also be trying my hand at MediaPortal and DVR functionality using an SD HDHomeRun Prime. I am switching to StableBit's DrivePool to handle the drives and volumes.

Specific server parts for this build:
1. Onboard SATA cables: I used NZXT 4-SATA-to-4-SATA sleeved 90-degree connector cables (very stiff; I had to cut the sheathing to allow some slack).
2. Dell PERC H200: crossflashed to LSI 9211-8i in IT (Initiator Target) mode instead of IR (Initiator RAID) mode (allows use of the card with single drives, without RAID restrictions).
3. Silverstone CP06 4-port SATA power cable with capacitor (uses inline power connectors that can be taken off and repositioned to match the drive cage layout perfectly; this will leave wire cuts that should be taped).
4. StarTech PY04SATA 4-port SATA cable (similar to the above without the cap, but longer, so you can add your own inline SATA power connector; I did this to add a 5th connector for the Node's fan controller, which was routed behind the rear hard drive cage).

Using inline SATA power splitters makes it very easy to customize to drive cage spacing and keep wires at bay. I only need to add an extension to feed the SSD in the front panel power.

OS: Windows Server 2012 Essentials (Soon Windows 10 Pro)
Case: Fractal Design Node 804
CPU: AMD Phenom II X2 555 @ 3.7GHz (all four cores unlocked)
Motherboard: Gigabyte GA-MA785GM-US2H Rev 1.1
Memory: 8GB DDR2 800MHz
PSU: CM XPP 500w (replacing with Seasonic G-450)
OS HDD (If you have one): Samsung OEM PM810 128GB
Storage HDD(s): 4 Seagate 1TB, 1 WD 1TB, 1 HGST 4TB, 1 Seagate 1.5TB, 2 Seagate 2TB, Seagate 500GB
Server Manufacturer (Ex: Dell, HP, You?): Me


----------



## stumped

Quote:


> Originally Posted by *cdoublejj*
> 
> What about dual-socket 771 quads? I'm building a second server; maybe once I do that I can back everything up, so I can back up ESXi 6 (I run ESXi 6 on my RAID 5). I also use enterprise drives.
> 
> EDIT: certain models of those Core i3s can murder Core 2 Quads in certain computations and APIs.


You are putting *way* too much emphasis on the CPU. There isn't *that* much CPU computation involved; dual 771 quads will run it just fine. Your limitation will be the amount of RAM more than anything.


----------



## EvilMonk

Quote:


> Originally Posted by *stumped*
> 
> You are putting *way* too much emphasis on the CPU. There isn't *that* much CPU computation involved; dual 771 quads will run it just fine. Your limitation will be the amount of RAM more than anything.


Yup, you're right... still, those old CPUs are not going to be worth the trouble to get running. I took my 2 DL360 G5s (dual quad E5450 3GHz, 32GB DDR2 FB-DIMM, 6x 146GB SAS 10k, P400 512MB, RAID 6) and my 2 DL380 G5s (dual quad X5450 3GHz, 32GB DDR2 FB-DIMM, 8x 146GB SAS 10k, P800/P400 512MB) out of my rack because they were starting to make too much heat and cost too much to run compared to the DL360 G6 and DL380 G7 I got to replace them. They are good servers, I'm not saying they aren't; it's just that they are now too expensive to run and generate too much heat compared to the much more powerful 12-core Westmere-EP G6 and G7, which draw around half the power and dissipate a lot less heat while giving more than twice the computing power, for not that much more money if you know where to look.


----------



## JoeChamberlain

Using FreeNAS, non-ECC RAM and a ZFS array.

Loving my server; best thing I've ever bought and built. Not using it to back up anything, so I couldn't really care less about paying more for ECC RAM! Haven't had a URE in over a year. *touch wood*

Rig is in my sig as "For Serving"!


----------



## cdoublejj

Quote:


> Originally Posted by *Pawelr98*
> 
> From my experience.
> One fan(server grade) - 75-80°C
> Two fans(server grade + 2000RPM normal)- 60-70°C
> 
> 3000-4000RPM fans may be useful.
> 
> Watercooling however would be the best and the quietest method.


Might help to cool the back of the mobo too. WC might be possible because the chips are close... or that may work against it. It might be possible to AS5-epoxy plates onto the sides and tops of the RAM, lol.


----------



## NKrader

Quote:


> Originally Posted by *JoeChamberlain*
> 
> Using FreeNAS, *non-ECC* RAM and *ZFS* Array.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Loving my server, best thing I've ever bought and built. Not using it to backup anything so I couldn't really care less about paying more for ECC ram! Not had a URE in over a year. *touch wood*
> 
> Rig as "For Serving" sig!


----------



## Rbby258

How much does freenas store in ram? What if the power goes out?


----------



## JoeChamberlain

Quote:


> Originally Posted by *NKrader*


----------



## EpicAMDGamer

Quote:


> Originally Posted by *Rbby258*
> 
> How much does freenas store in ram? What if the power goes out?


I've never used FreeNAS or ZFS, but I thought there was a rule that you should have 1GB of RAM per TB of ZFS storage.
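For what it's worth, that rule of thumb is easy to plug numbers into. A hedged Python sketch (the function name and the 8GB baseline, which is the minimum commonly cited on the FreeNAS forums, are my assumptions, not official requirements):

```python
def freenas_ram_estimate_gb(pool_tb: float, base_gb: float = 8.0) -> float:
    """Rule of thumb: a fixed baseline plus ~1 GB of RAM per TB of ZFS pool."""
    return base_gb + 1.0 * pool_tb

# A 9 TB pool under the "8 GB baseline + 1 GB per TB" guidance:
print(freenas_ram_estimate_gb(9.0))  # 17.0
```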


----------



## Rbby258

Quote:


> Originally Posted by *EpicAMDGamer*
> 
> I've never used Freenas or ZFS but I thought there was a rule when using it that you should have 1GB of ram per TB of zfs storage.


Yeah, but what happens if there's a sudden loss of power?


----------



## cloudbyday

Quote:


> Originally Posted by *Rbby258*
> 
> Yeah, but what happens if there's a sudden loss of power?


It is recommended that you have a backup source of power. I have a UPS that will keep my FreeNAS on for 30 minutes, and a script running on the FreeNAS that will shut it down when running on the UPS.


----------



## Pawelr98

Quote:


> Originally Posted by *Rbby258*
> 
> How much does freenas store in ram? What if the power goes out?


In my case, a few outages killed the jails. After the 9.2.1.7 -> 9.2.1.9 update it worked again.

And that was with UFS. Get a UPS.
However, remember to get one that is compatible with FreeNAS;
mine was not compatible. But now I run Debian, which has software provided by the UPS manufacturer.


----------



## stumped

Quote:


> Originally Posted by *Rbby258*
> 
> Yeah, but what happens if there's a sudden loss of power?


ZFS does caching in RAM. The defaults usually have writes happen as soon as possible, but as with other filesystems you can tweak it to write to disk later (which means potential for data loss).

For the most part a sudden power loss just means the cache is gone, but writes should be safe (except for writes that were in transit at the time, but this is also true for *any* filesystem).

Quote:


> Originally Posted by *Pawelr98*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Rbby258*
> 
> How much does freenas store in ram? What if the power goes out?
> 
> 
> 
> In my case, a few outages killed the jails. After the 9.2.1.7 -> 9.2.1.9 update it worked again.
> 
> And that was with UFS. Get a UPS.
> However, remember to get one that is compatible with FreeNAS;
> mine was not compatible. But now I run Debian, which has software provided by the UPS manufacturer.
Click to expand...

I have heard UFS is very bad in sudden power-loss situations compared to ZFS.


----------



## cdoublejj

Does XenServer run on ZFS? I do not think ESXi runs on ZFS.


----------



## tycoonbob

Quote:


> Originally Posted by *cdoublejj*
> 
> Does XenServer run on ZFS? I do not think ESXi runs on ZFS.


Neither Xen, ESXi, nor any other hypervisor "runs" on ZFS. ZFS is merely a file system.

ZFS datasets can be exported via NFS or iSCSI, so a ZFS dataset CAN essentially be used as storage for VMs running under KVM/QEMU, Xen, ESXi, Hyper-V, or any other hypervisor.

If you are looking to run ZFS (for file storage) on the same box as a hypervisor (to run VMs), then I'd highly suggest checking out Proxmox. It's Debian-based, runs KVM/QEMU and LXC (for containers), and includes the ZFSonLinux package, which works great.

I'm currently using Proxmox 3.4 with a 10 x 5TB ZFS mirror, running 20-ish VMs, and have set ZFS's ARC (Adaptive Replacement Cache) to only use 48GB; I am getting all the performance I need to saturate gigabit. This leaves 48GB of RAM for the VMs, and I've got about 8GB of free RAM on the system currently. I may drop the ARC to 40GB to free up room for more VMs.
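For anyone wanting to cap the ARC the same way: on ZFS on Linux the limit is the `zfs_arc_max` module parameter, given in bytes. A small sketch of the conversion (the 48GB figure mirrors my setup above; the helper name is just for illustration):

```python
def arc_limit_bytes(gib: int) -> int:
    """Convert a GiB ARC cap into the byte value the zfs_arc_max parameter expects."""
    return gib * 1024 ** 3

# e.g. /etc/modprobe.d/zfs.conf would contain:
#   options zfs zfs_arc_max=51539607552
print(arc_limit_bytes(48))  # 51539607552
```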


----------



## cdoublejj

Quote:


> Originally Posted by *tycoonbob*
> 
> Xen, ESXi, nor any other hypervisor "run" on ZFS. ZFS is merely a file system.
> 
> ZFS datasets can be exported via NFS or iSCSI, so essentially a ZFS dataset CAN be used for storage of VM's running KVM/QEMU, Xen, ESXi, Hyper-V, or any other hypervisor.
> 
> If you are looking to run ZFS (for file storage) on the same box as something running a hypervisor (to run VM's), then I'd highly suggest checking out Proxmox. It's Debian based, runs KVM/QEMU and LXC (for containers), and includes the ZFSonLinux package which works great.
> 
> I'm currently using Proxmox 3.4 with a 10 x 5TB ZFS mirror, running 20-ish VMs, and have set ZFS's ARC (Adaptive Replacement Cache) to only use 48GB; I am getting all the performance I need to saturate gigabit. This leaves 48GB of RAM for the VMs, and I've got about 8GB of free RAM on the system currently. I may drop the ARC to 40GB to free up room for more VMs.


And can you still do HW passthrough, just like Xen and ESXi?


----------



## tycoonbob

Quote:


> Originally Posted by *cdoublejj*
> 
> And can you still do HW passthrough, just like Xen and ESXi?


Can you do HW passthrough with Proxmox, to run ZFS in a VM? I believe you can, but there would be no point: you would be running ZFS on the bare metal, and the bare metal would also be running VMs. No need for passthrough.

Now, if you want to run ESXi, you can pass through your HBA and run ZFS in a VM (FreeNAS/FreeBSD/whatever), but I just don't get the point when you can do it all on bare metal.


----------



## cones

Quote:


> Originally Posted by *cdoublejj*
> 
> And can you still do HW passthrough, just like Xen and ESXi?


KVM/QEMU has pass through support.


----------



## zanginator

Regarding the talk of running ZFS file systems from within a VM (with hardware passthrough), this topic on the FreeNAS forum is worth a read: LINK

Specifically, points 6, 7 and 8 seem to apply to what is being discussed here. Please also bear in mind that in a VM you may run into I/O wait issues with the hardware, potentially causing data corruption.

If you are looking to experiment, sure, why not? If you are looking for reliable storage (which ZFS is all about), do it on a dedicated box.


----------



## NKrader

Quote:


> Originally Posted by *zanginator*
> 
> Regarding the talk of running ZFS file systems from within a VM (with hardware passthrough), this topic on the FreeNAS forum is worth a read: LINK
> 
> Specifically, points 6, 7 and 8 seem to apply to what is being discussed here. Please also bear in mind that in a VM you may run into I/O wait issues with the hardware, potentially causing data corruption.
> 
> If you are looking to experiment, sure, why not? If you are looking for reliable storage (which ZFS is all about), do it on a dedicated box.


They ALSO say that if you do it right (pass through EVERYTHING) it's the same as dedicated hardware, and there is no difference. That linked post is 2+ years old; many things have changed.

https://forums.freenas.org/index.php?threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/


----------



## twerk

Can anyone recommend a rack that will take my DL80 Gen9? The server is 60cm deep. 6U or 7U.

I can't afford the HP stuff. I just want something as cheap as possible. Thanks.


----------



## levontraut

Quote:


> Originally Posted by *twerk*
> 
> Can anyone recommend a rack that will take my DL80 Gen9? The server is 60cm deep. 6U or 7U.
> 
> I can't afford the HP stuff. I just want something as cheap as possible. Thanks.


Have you looked on fleabay for a 27U rack? The server you are talking about will fit in any std rack.

When I get home I will post a few links. What part of the UK do you live in?


----------



## twerk

Quote:


> Originally Posted by *levontraut*
> 
> Have you looked on fleabay for a 27U rack? The server you are talking about will fit in any std rack.
> 
> When I get home I will post a few links. What part of the UK do you live in?


Live in Birmingham.

Do the HP rails fit in standard racks?

Thanks


----------



## levontraut

Quote:


> Originally Posted by *twerk*
> 
> Live in Birmingham.
> 
> Do the HP rails fit in standard racks?
> 
> Thanks


M8, the rails are all a std size. Not many people have or use the extra-length racks that let you mount servers at both the front and rear of the rack.
I forgot to look on fleabay for something, but will look tomorrow for you.
What budget do you have? How many Us must it have, or what are your plans and requirements?


----------



## twerk

Quote:


> Originally Posted by *levontraut*
> 
> M8, the rails are all a std size. Not many people have or use the extra-length racks that let you mount servers at both the front and rear of the rack.
> I forgot to look on fleabay for something, but will look tomorrow for you.
> What budget do you have? How many Us must it have, or what are your plans and requirements?


Alright cool.

I'm putting these in it:
2U server
1U switch
1U router
1U PDU

So 6U+

Budget is <£250 but less than £100 would be preferred.


----------



## levontraut

Quote:


> Originally Posted by *twerk*
> 
> Alright cool.
> 
> I'm putting these in it:
> 2U server
> 1U switch
> 1U router
> 1U PDU
> 
> So 6U+
> 
> Budget is <£250 but less than £100 would be preferred.


OMG... you have more than enough money to get a very nice rack.

I will post some stuff tomorrow. Have a look at my rack... I paid £75.
Quote:


> Originally Posted by *levontraut*
> 
> Since my last 2 posts I have made a huge change (or am in the process of one). I am busy upgrading my kit and relocating it all.
> Please see below pics.
> 
> 
> 
> 
> 
> 
> 
> 
> In the pic, my daughter is standing in front of the rack as a size comparison. The rack is a little 27U Fusion (800x1000x1430).
> I will be putting all 3 machines in there (so my cases will be up for sale soon).
> The cases I am looking at are 2x 4U 16x3.5" hot-swap chassis,
> then something that will take my watercooler, 4 SSDs and the rest of my games rig.
> 
> I am hoping to make some or most of my money back from the sale of my cases as they are still in very good condition.
> 
> Will keep you posted as I go along.


----------



## Petrol

Hey guys, I usually lurk in the Linux/*nix subforum but have been checking out this area lately, thought I'd post some of my home network to show how I'm managing on a budget











On the right is a 2TB NAS (RAID 1) running OpenMediaVault (Debian). I picked the rig up after a neighbour threw it out and tore out the toasted/useless components but kept the CPU, mobo and RAM* and refurbished the box with an old SSD, 2TB WD Green drives and a modular PSU (I went a bit fancy because it was on sale). The case was modded a bit to get rid of the optical drive mounts and I just bolted the 3.5" mounts back to the holes left after drilling the rivets out. It's not much to look at but it's functional and quiet.

On the left is a Pi running Archlinux with lighttpd serving some dynamic content, and underneath it is a 600W PFC UPS. Not pictured is the modem/router/AP up top but that's also plugged into the UPS along with the Pi and NAS, and altogether the load comes to 42W at idle.

I'm currently waiting on a dual-port NIC to arrive so I can turn another old 775 rig into a filtering bridge, still working out how to tie it all together but that should be happening sometime in the next few weeks.

*Core 2 Duo E4600 w/ 800MHz DDR2 on a Gigabyte GA-G31M-S2L mobo, so not bad for something that might have ended up in a landfill
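For anyone curious what a 42W idle load works out to on the bill, a quick sketch; the tariff figure is an assumption, swap in your own:

```python
# Annual running cost of a small always-on setup at idle.
IDLE_WATTS = 42
PRICE_PER_KWH = 0.15  # assumed tariff (currency units per kWh); adjust to taste

kwh_per_year = IDLE_WATTS / 1000 * 24 * 365   # watts -> kWh over a year
cost_per_year = kwh_per_year * PRICE_PER_KWH

print(f"{kwh_per_year:.0f} kWh/year, ~{cost_per_year:.2f}/year")
```

Cheap enough to leave running around the clock, which is rather the point of a Pi plus low-power NAS combo.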


----------



## twerk

Just setting up Windows Server 2012 R2 on my HP DL80 Gen9...

I've set up a RAID 1 array in hardware (HP B140i) and installed the OS on that. Now I'm trying to set up my 3 x 3TB WD Red drives in RAID 5 from within software. I figured the on-board RAID controller is pretty slow and it'd be faster this way.

It's been 2 hours since it started building/formatting the array and it's only at 1%! This can't be normal surely?
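For what it's worth, if the build really does progress linearly from that 1%-in-2-hours pace, the projection is grim; a quick back-of-envelope sketch:

```python
# Linear extrapolation of the array build time from progress so far.
hours_elapsed = 2
percent_done = 1

# 1% per 2 hours -> 100% in ~200 hours
total_hours = hours_elapsed / (percent_done / 100)
print(f"~{total_hours:.0f} hours (~{total_hours / 24:.1f} days)")
```

Real builds often speed up or slow down over the disk surface, so treat this as a rough floor rather than a promise.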


----------



## NKrader

still so much left to do, but figured you guys would appreciate what I do have done.




http://www.overclock.net/t/1575005/build-log-pizza-under-the-sea-nas-caselabs-seasonic-supermicro


----------



## jibesh

Quote:


> Originally Posted by *NKrader*
> 
> still so much left to do, but figured you guys would appreciate what I do have done.


What case and drive bays are you using?


----------



## NKrader

Quote:


> Originally Posted by *jibesh*
> 
> What case and drive bays are you using?


CaseLabs - X2M
SuperMicro CSE-M35T-1B


----------



## jibesh

Quote:


> Originally Posted by *twerk*
> 
> Just setting up Windows Server 2012 R2 on my HP DL80 Gen9...
> 
> I've set up a RAID 1 array in hardware (HP B140i) and installed the OS on that. Now I'm trying to set up my 3 x 3TB WD Red drives in RAID 5 from within software. I figured the on-board RAID controller is pretty slow and it'd be faster this way.
> 
> It's been 2 hours since it started building/formatting the array and it's only at 1%! This can't be normal surely?


Using RAID 5 within Windows is a bad idea. Either use Storage Spaces within Windows or RAID 5 on the HP controller.


----------



## NKrader

Quote:


> Originally Posted by *twerk*
> 
> It's been 2 hours since it started building/formatting the array and it's only at 1%! This can't be normal surely?


I set up software RAID 1 with a pair of 2TB Blacks and it took around 2 days.


----------



## broadbandaddict

Quote:


> Originally Posted by *twerk*
> 
> Just setting up Windows Server 2012 R2 on my HP DL80 Gen9...
> 
> I've set up a RAID 1 array in hardware (HP B140i) and installed the OS on that. Now I'm trying to set up my 3 x 3TB WD Red drives in RAID 5 from within software. I figured the on-board RAID controller is pretty slow and it'd be faster this way.
> 
> It's been 2 hours since it started building/formatting the array and it's only at 1%! This can't be normal surely?


Yes, that's normal. Windows' built-in RAID is very slow.

With Server 2012 you want to put them in either a parity (RAID 5-like) or mirrored (RAID 1-like) storage pool. The GUI for Storage Spaces is in Server Manager > File and Storage Services > Volumes > Storage Pools.


----------



## CloudX

Lovely build @NKrader


----------



## NKrader

Quote:


> Originally Posted by *CloudX*
> 
> Lovely build @NKrader


Come and join the fun
http://www.overclock.net/t/1575005/build-log-pizza-under-the-sea-nas-caselabs-seasonic-supermicro


----------



## Petrol

wow that's a pretty sweet case and drive bay combo! should turn out great


----------



## Shrak

@NKrader the babies that would come from your X2M server and my M8 server would be glorious.

Been thinking about downsizing mine to an X2M as well, and yours may have me committing to it. I would only lose 1 drive bay (same as yours) and my optical drive, but I could take the opportunity to swap out the 2TB disks for 4-6TB disks at the same time, so I wouldn't lose anything, but gain at least 50% more space...

Sigh, thinking about spending money hurts sometimes


----------



## NKrader

Quote:


> Originally Posted by *Shrak*
> 
> @NKrader the babies that would come from your X2M server and my M8 server would be glorious.
> 
> Been thinking about downsizing mine to an X2M as well, and yours may have me commit to it. Would only lose 1 drive bay ( same as yours ) and my optical drive, but I could take the opportunity to swap out the 2TB disks for 4-6TB disks at the same time, so I wouldn't lose anything, but gain at least 50% more space...
> 
> Sigh, thinking about spending money hurts sometimes


Use Icy Dock bays; these SuperMicro ones are going to take a bit of work to get fitted. They are really big and just slightly out-of-spec in size.


----------



## bobfig

What's y'all's thoughts on me picking up an HP Z800 workstation/server with dual 4-core processors (I think with HT), 16GB of RAM, and no HDD for ~$450 + tax? I would be adding a PERC 5/i and eventually some WD Red drives and an SSD for the boot drive.

It would be replacing my current server, which has a Q8400 and 4GB of RAM.


----------



## xxpenguinxx

Why not go dual 6-cores?


----------



## bobfig

money and honestly not needed.


----------



## tiro_uspsss

Quote:


> Originally Posted by *bobfig*
> 
> whats yall thought on me being able to get a hp z800 workstation/server with dual 4 core processor i think with HT, 16gb of ram, and no hdd for ~$450+ tax? would be adding a perc 5 i and eventually some WD red drives and ssd for boot drive.
> 
> would be replacing my server with a q8400 and 4gb of ram.


The PERC 5/i is restricted to 2TB HDDs; I wouldn't bother with it


----------



## bobfig

Quote:


> Originally Posted by *tiro_uspsss*
> 
> PERC5i is restricted to 2TB HDDs, wouldn't bother with it


I know about that. I would be getting it off the ground first, as I don't want to spend $1000 right off the bat on something I don't need right away.


----------



## burksdb

Quote:


> Originally Posted by *bobfig*
> 
> i know about that. would be getting it off the ground first as i don't want to spend $1000 right off the bat as its something i don't need right away.


Honestly I would pass up the PERC card and pick up an IBM M1015. The cost isn't that much more and you will be much better off.


----------



## bobfig

Quote:


> Originally Posted by *burksdb*
> 
> honestly i would pass up the perc card and pickup an Ibm 1015 cost isnt that much more and you will much better off


I understand; I already have the PERC 5/i in my old server now. I was actually looking at the 3ware 9650SE-8LPML for around the same price.

If I get the server, my plan is to run it with the current setup and migrate over to a new card and drives later.


----------



## KyadCK

New primary ESXi box, hosting my primary Domain Controller, DNS server, media/file server, and whatever random junk I wanna throw on it.

OS: ESXi 6.0U1 (VMs = Server 2012 x3, a few linux distros, etc)
Case: CoolerMaster HAF XM
CPU: AMD FX-8320
Motherboard: Gigabyte 990FXA-UD5 Rev 3.0
Memory: 4x8GB 1600Mhz Corsair Vengeance
PSU: Corsair TX750
OS HDD: Sandisk 8GB SD chip
Network capacity: 4x 1gbps NICs
Storage: 3x HP SmartArray P410/512MB BBWC RAID cards + Onboard chipsets

Card 1: (Cache 25%/75% Read/Write)
4x 480GB Sandisk SSDs in RAID5 (1.36TB usable)
4 blanks

Card 2: (Cache 25%/75% Read/Write)
4x 500GB HGST HDDs in RAID5 (1.4TB usable)
4 blanks

Card 3:
8 blanks

Onboard:
2x WD Reds 2TB (Mirror)
2x WD Black Ent (Mirror)
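The usable-capacity figures above follow the usual RAID 5 rule of thumb: n drives give (n - 1) drives' worth of space. A quick sanity-check sketch, including the decimal-GB to binary-TiB conversion that makes the OS report less than the sticker capacity (real controllers also reserve a little metadata, so reported numbers vary slightly):

```python
def raid5_usable_tib(n_drives: int, drive_gb: int) -> float:
    """Usable space of an n-drive RAID 5 set: (n - 1) drives' capacity,
    converted from decimal GB (as sold) to binary TiB (as reported)."""
    usable_bytes = (n_drives - 1) * drive_gb * 1000**3
    return usable_bytes / 1024**4

print(round(raid5_usable_tib(4, 480), 2))  # the 4x 480GB SSD array
print(round(raid5_usable_tib(4, 500), 2))  # the 4x 500GB HDD array
```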







Array tests, 1st is SSDs, second is Cache only.



Main server is having a fun time packing all our main VMs until the others come back online.










Those drives in the front can only be 2.5" 7mm. My hope is that before I need to expand much more, SSDs become cheap enough to never need to load it with HDDs.

Second and Third servers (also ESXi, using vCenter for clustering woo) are being rebuilt as well to handle this as the new primary.


----------



## NKrader

Quote:


> Originally Posted by *KyadCK*
> 
> New primary ESXi box, hosting my primary Domain Controller, DNS server, media/file server, and whatever random junk I wanna throw on it.
> 
> OS: ESXi 6.0U1 (VMs = Server 2012 x3, a few linux distros, etc)
> Case: CoolerMaster HAF XM
> CPU: AMD FX-8320
> Motherboard: Gigabyte 990FXA-UD5 Rev 3.0
> Memory: 4x8GB 1600Mhz Corsair Vengeance
> PSU: Corsair TX750
> OS HDD: Sandisk 8GB SD chip
> Network capacity: 4x 1gbps NICs
> Storage: 3x HP SmartArray P410/512MB BBWC RAID cards + Onboard chipsets
> 
> Card 1: (Cache 25%/75% Read/Write)
> 4x 480GB Sandisk SSDs in RAID5 (1.36TB usable)
> 4 blanks
> 
> Card 2: (Cache 25%/75% Read/Write)
> 4x 500GB HGST HDDs in RAID5 (1.4TB usable)
> 4 blanks
> 
> Card 3:
> 8 blanks
> 
> Onboard:
> 2x WD Reds 2TB (Mirror)
> 2x WD Black Ent (Mirror)
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Array tests, 1st is SSDs, second is Cache only.
> 
> 
> 
> Main server is having a fun time packing all our main VMs until the others come back online.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Those drives in the front can only be 2.5" 7mm. My hope is that before I need to expand much more, SSDs become cheap enough to never need to load it with HDDs.
> 
> Second and Third servers (also ESXi, using vCenter for clustering woo) are being rebuilt as well to handle this as the new primary.


Aren't those 8x hot-swap bays awesome? I wish I could get some more drives in mine faster... building my file server at the moment, then a whole bunch of SSDs for the desktop!


----------



## KyadCK

Quote:


> Originally Posted by *NKrader*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> New primary ESXi box, hosting my primary Domain Controller, DNS server, media/file server, and whatever random junk I wanna throw on it.
> 
> OS: ESXi 6.0U1 (VMs = Server 2012 x3, a few linux distros, etc)
> Case: CoolerMaster HAF XM
> CPU: AMD FX-8320
> Motherboard: Gigabyte 990FXA-UD5 Rev 3.0
> Memory: 4x8GB 1600Mhz Corsair Vengeance
> PSU: Corsair TX750
> OS HDD: Sandisk 8GB SD chip
> Network capacity: 4x 1gbps NICs
> Storage: 3x HP SmartArray P410/512MB BBWC RAID cards + Onboard chipsets
> 
> Card 1: (Cache 25%/75% Read/Write)
> 4x 480GB Sandisk SSDs in RAID5 (1.36TB usable)
> 4 blanks
> 
> Card 2: (Cache 25%/75% Read/Write)
> 4x 500GB HGST HDDs in RAID5 (1.4TB usable)
> 4 blanks
> 
> Card 3:
> 8 blanks
> 
> Onboard:
> 2x WD Reds 2TB (Mirror)
> 2x WD Black Ent (Mirror)
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Array tests, 1st is SSDs, second is Cache only.
> 
> 
> 
> Main server is having a fun time packing all our main VMs until the others come back online.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Those drives in the front can only be 2.5" 7mm. My hope is that before I need to expand much more, SSDs become cheap enough to never need to load it with HDDs.
> 
> Second and Third servers (also ESXi, using vCenter for clustering woo) are being rebuilt as well to handle this as the new primary.
> 
> 
> 
> arent those 8x hotswap bays awesome? I wish that I could get some more drives in mine faster.. building my fileserver at the moment, then a whole bunch of SSD for desktop!
> 
> 
> Spoiler: Warning: Spoiler!
Click to expand...

They are actually really, really nice. I was half expecting plastic but nope, solid sturdy metal throughout. Very reassuring.

Then I put lightweight, flimsy plastic SSDs in them.


----------



## twerk

Wondering if anyone can help me out of my predicament...

I have an HP DL80 Gen9 with 8 LFF 3.5" bays up front, all running off the on-board B140i controller.

I have 3x WD Red drives in RAID 5, fitted in HP caddies that work fine. The machine doesn't even moan that they aren't genuine HP drives.

I also have 2x Sandisk Extreme 120GB drives in RAID 1 for boot. They have been sat without caddies for a while until I bought 2.5" to 3.5" adapters, in this state they work fine apart from the "these aren't genuine HP drives" message on boot. I put them in the adapters, then into the caddies and now the controller reports them as broken or non existent. I thought it may be an issue with the adapter so I put the drive in with the caddy minus the adapter, still the same problem.

So the drives work fine on their own, but not in conjunction with the caddy. Any ideas? Thanks.


----------



## stumped

Quote:


> Originally Posted by *twerk*
> 
> Wondering if anyone can help me out of my predicament...
> 
> I have HP DL80 Gen9 with 8 LFF 3.5" bays up front. All running of the on-board B140i controller.
> 
> I have 3x WD Red drives in RAID 5, fitted in HP caddies that work fine. The machine doesn't even moan that they aren't genuine HP drives.
> 
> I also have 2x Sandisk Extreme 120GB drives in RAID 1 for boot. They have been sat without caddies for a while until I bought 2.5" to 3.5" adapters, in this state they work fine apart from the "these aren't genuine HP drives" message on boot. I put them in the adapters, then into the caddies and now the controller reports them as broken or non existent. I thought it may be an issue with the adapter so I put the drive in with the caddy minus the adapter, still the same problem.
> 
> So the drives work fine on their own, but not in conjunction with the caddy. Any ideas? Thanks.


Maybe try moving the SSDs into the caddies one at a time?


----------



## twerk

Quote:


> Originally Posted by *stumped*
> 
> Maybe try moving the SSDs into the caddy 1 at a time?


Tried doing that, it just doesn't detect the drive in the caddy and the RAID becomes degraded.


----------



## levontraut

Hi All

I have an HP DL180 G6. I was making changes in the BIOS, and while it was busy saving, my little one thought he would be funny and switched the power off at the wall. Now when I boot up, the server does not seem to have saved anything: the fans stay at 20%, then after 3 or so seconds ramp up to 100%. Nothing shows on the monitor and I do not get any options to do anything... How FUBAR is this, or is there a way I can recover it with an HP flash stick?

Cheers
Levon


----------



## levontraut

Quote:


> Originally Posted by *levontraut*
> 
> Hi All
> 
> I have a HP DL180 G6.. I was making changes in the bios and then it was busy saving then my little one thought he would be funny and switch off the power by the wall. now when i boot up the server does not seem to have saved anything the fans will stay on at 20% then after 3 or so seconds ramp up to 100%. Nothing is showing on the monitor and I do not get any options to do anything... how FUBAR is this or is there away that i can recover it with a HP flash stick?
> 
> Cheers
> Levon


Resolved.

Reset the server by removing the CMOS battery.


----------



## DaveLT

Quote:


> Originally Posted by *levontraut*
> 
> Resolved.
> 
> Reset the server by removing the cmos battery


Standard procedure these days lol. Fixed a non-booting X58 mobo that I thought was truly dead for months, fixed a non-booting X99. Told my friend to remove the CMOS battery on his X58 and it worked.


----------



## Sodalink

Quote:


> Originally Posted by *twerk*
> 
> Anyone have any insight on software vs hardware RAID?
> 
> I can't decide whether to use the integrated HP Dynamic Smart Array B140i on the motherboard, or just do it in software via Ubuntu.
> 
> There will be one RAID 1 array for the boot drive (hardware RAID of course) but there will also be three 3TB disks in RAID 5 for storage, which I can't decide on.


So far I've been loving my Server 2012 R2 software raid. I've replaced the hardware 3 times already and I've not had any problems in the last 3-4 years. Also I just didn't have money to buy an expensive raid card.


----------



## KyadCK

Quote:


> Originally Posted by *Sodalink*
> 
> Quote:
> 
> 
> 
> Originally Posted by *twerk*
> 
> Anyone have any insight on software vs hardware RAID?
> 
> I can't decide whether to use the integrated HP Dynamic Smart Array B140i on the motherboard, or just do it in software via Ubuntu.
> 
> There will be one RAID 1 array for the boot drive (hardware RAID of course) but there will also be three 3TB disks in RAID 5 for storage, which I can't decide on.
> 
> 
> 
> So far I've been loving my Server 2012 R2 software raid. I've replaced the hardware 3 times already and I've not had any problems in the last 3-4 years. Also I just didn't have money to buy an expensive raid card.
Click to expand...

Nah man, big boy RAID cards can cost as little as $30, or $55 if you want some write-back cache. That's how much my P410s cost each.









All servers online, network "overhaul" (I got off my ass and set it up). No more silly 1 or 2 1gbps NICs...








Those fibre lines are 4Gbps Emulex HBAs; the 2nd and 3rd servers will be using them to access the RAID arrays. Otherwise the servers are set up with 6/3/3 1Gbps links to a Catalyst 2970. My own desktop gets 2x 1Gbps links to the switch as well.


----------



## Master__Shake

Quote:


> Originally Posted by *KyadCK*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Na man, big boy RAID cards can cost as little as $30, $55 if you want some writeback cache. That how much my P410s cost each.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All servers online, network "overhaul" (I got off my ass and set it up). No more silly 1 or 2 1gbps NICs...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Those fibre lines are 4gbps EMULEX HBAs, the 2nd and 3rd servers will be using them to access the RAID arrays. Otherwise the servers are setup as 6/3/3 1gbps links to a Catalyst 2970. My own desktop gets 2 1gbps links to the switch as well.


4gbps is pretty cool.



10 is better










you said your cards are emulex cards.

what is the model number?

i have 2 and i am wondering if they are the same.


----------



## KyadCK

Quote:


> Originally Posted by *Master__Shake*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Na man, big boy RAID cards can cost as little as $30, $55 if you want some writeback cache. That how much my P410s cost each.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All servers online, network "overhaul" (I got off my ass and set it up). No more silly 1 or 2 1gbps NICs...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Those fibre lines are 4gbps EMULEX HBAs, the 2nd and 3rd servers will be using them to access the RAID arrays. Otherwise the servers are setup as 6/3/3 1gbps links to a Catalyst 2970. My own desktop gets 2 1gbps links to the switch as well.
> 
> 
> 
> 4gbps is pretty cool.
> 
> 
> 
> 10 is better
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> you said your cards are emulex cards.
> 
> what is the model number?
> 
> i have 2 and i am wondering if they are the same.
Click to expand...

Mine will perform better at their designated task because they're point to point SAN, not LAN.


----------



## t00sl0w

Quote:


> Originally Posted by *KyadCK*
> 
> Na man, big boy RAID cards can cost as little as $30, $55 if you want some writeback cache. That how much my P410s cost each.


Couple of questions about the P410s:
Can these be used as plain SATA expansion, or do the drives have to be in RAID?
Can you use SAS-to-SATA cables, or is something else needed?


----------



## KyadCK

Quote:


> Originally Posted by *t00sl0w*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Na man, big boy RAID cards can cost as little as $30, $55 if you want some writeback cache. That how much my P410s cost each.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> couple questions about the p410s.
> can these be used as sata expansion only or does it have to be raid?
> can you use sas to sata cables or is something else needed?
Click to expand...

I used these: http://www.amazon.com/gp/product/B001L9DU88

I do not recommend them; the sleeving is not very good (loose at the ends). The connectors (SFF-8087 to SATA) are standardized, however, and are what the P410 uses.

I did not attempt to make them anything other than RAID 5 or RAID 5+0, which is all I wanted. I do not think I saw any options for non-RAID.


----------



## twerk

After fighting with HP to get my rebate on the BOGOF server offer, I lost... so I'm now left with a second server and a lot of money down.

I'm selling it off to try and recoup my losses. If anyone wants a nosy, link in my sig.


----------



## NKrader

Quote:


> Originally Posted by *twerk*
> 
> After fighting with HP to get my rebate on the BOGOF server offer, I lost... so I'm now left with a second server and a lot of money down.
> 
> I'm selling it off to try and recoup my losses. If anyone wants a nosy, link in my sig.


That would make a nice little server; it's got some nice specs.


----------



## LuckyJack456TX

Quote:


> Originally Posted by *bobfig*
> 
> whats yall thought on me being able to get a hp z800 workstation/server with dual 4 core processor i think with HT, 16gb of ram, and no hdd for ~$450+ tax? would be adding a perc 5 i and eventually some WD red drives and ssd for boot drive.
> 
> would be replacing my server with a q8400 and 4gb of ram.


Well bobfig, I'm actually in the same area you are. I have a Dell workstation similar to what you want. Refer to the BlackPearl sig, as that's the system I'm parting with. PM me for details.


----------



## Pawelr98

My server has a new add-on:
a Quantum LTO-3 tape drive and a SCSI PCI-X card (needed for the tape drive).


Now what remains is to buy some LTO-2/3 tapes (read/write, plus read-only support for LTO-1) and check that it works properly (driver-wise everything is OK).

The total cost was 301.5 PLN for the tape drive + 14.5 PLN shipping, and 19.99 PLN for the Adaptec 29320LP SCSI card + 7.99 PLN shipping.
That's less than 100 USD total at the current (and also back then) PLN/USD exchange rate.

I needed some cheap storage (I can get LTO-3 tapes for 30-50 PLN) while my brother wanted reliable storage for photos, so the tape drive fills both needs.
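Adding those up (the PLN/USD rate here is an assumption, roughly where it sat at the time):

```python
# Totalling the tape drive purchase and converting to USD.
costs_pln = [301.5, 14.5, 19.99, 7.99]  # drive, shipping, SCSI card, shipping
PLN_PER_USD = 3.8  # assumed exchange rate; adjust for the actual date

total_pln = sum(costs_pln)
total_usd = total_pln / PLN_PER_USD
print(f"{total_pln:.2f} PLN ~= {total_usd:.0f} USD")
```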


----------



## Prophet4NO1

My FreeNAS server. Backups and Plex are its main uses. Want to set up a cloud function at some point for mobile devices as well, rather than uploading to Google or MS.


----------



## NKrader

Current Server -
Fractal Define XL
Supermicro H8DMi-2
2x 1.6 Ghz 6 core Opteron
OEM AM2 AMD Heatsink - Noctua 80mm PWM
4GB ECC ddr2
Intel Dual 10/100/1000 PCIe card
Seasonic 500w Bronze PSU

Drives, (Not all pictured)
2x WD 2TB Black
2x WD 640GB black
1TB WD Green
500GB WD RE4


----------



## twerk

Looking for a RAID card that'll do RAID 5 for my ProLiant DL80 Gen9. Would an HP Smart Array P410 be a good match? I can get it with battery and 256MB cache for £40.

Will it work, given it's an old card in a current-gen machine?


----------



## LockoutNex

Here's the start of a Plex server a friend and I are making. Going to be putting in a 5th 3TB drive and 2x 1TB drives once I get our old server back from the data center where it's hosted, and going to put the GT 740 from the old server in it too. The case is going to be an NZXT Source 220, and we'll be getting it re-hosted at the same data center as our other one because they take mid-towers too. If anyone would like to make a recommendation, I'd love it.

Here's the part list: http://pcpartpicker.com/p/PvrXK8


----------



## cones

Quote:


> Originally Posted by *LockoutNex*
> 
> Here the start of a plex server a friend and I are making. Going to be putting in a 5th 3TB and 2x 1TB drives once I get our old server back from the data center we have it hosted at and going to put a GT 740 in it too from the old server. The case is going to be a NZXT Source 220 and we'll be getting it re-hosted at the same data center we have our other one at because they take mid-towers too. If anyone would like to make a recommendation, I would love it.
> 
> Here the part list: http://pcpartpicker.com/p/PvrXK8
> 
> 
> Spoiler: Warning: Spoiler!


Why the GPU, especially if it's going in a data center?


----------



## beers

Quote:


> Originally Posted by *cones*
> 
> Why the GPU especially if it's going in a data center?


Also, why not rackmount if it's going in a data center?


----------



## LockoutNex

Quote:


> Originally Posted by *cones*
> 
> Why the GPU especially if it's going in a data center?


Once upon a time the original server (the one currently being hosted) was used for Clash of Clans bots plus the Plex server. My friend is the one who bought the server and wanted to bot the game with his friends, and I, the one managing it, didn't care what they did because I got to use the server for whatever I wanted and have as many VMs as I wanted too. Once they stopped playing CoC, I started using the GPU on the server for projects, like using the League of Legends API to build a program that would check if someone was in a game and start streaming it to Twitch or saving it locally for later. We needed the GPU so that the VMs could use accelerated 3D graphics.

Edit: So I'm going to keep it, so I can continue using it for projects that need accelerated graphics.
Quote:


> Originally Posted by *beers*
> 
> Also, why not rackmount if it's going in a data center?


Saves on the monthly bill, really. Hosting a mid-tower at the data center we use is only $40 a month, while to host something racked at this size we would need 3-4U of rack space, which is about $70-80 a month.


----------



## levontraut

Quote:


> Originally Posted by *twerk*
> 
> Looking for a RAID card that'll do RAID 5 for my ProLiant DL80 Gen9. Would a HP Smart Array P410 be a good match? I can get it with battery and 256MB cache for £40.
> 
> Will it work as it's an old card with a current gen machine?


Where do you live? (Hampshire area?)
I have a spare card that you could use to test with before you go out and buy.


----------



## NKrader

Quote:


> Originally Posted by *LockoutNex*
> 
> Save on the monthly bill really, *to host a mid tower at the data center we use* it's only $40 a month and to host something racked at this size we would need a 3-4U rack which is about $70-80 a month.


How would one go about finding a data center that would host my server?


----------



## LockoutNex

Quote:


> Originally Posted by *NKrader*
> 
> how would one go about finding a datacenter that would host my server?


The place we use is Joe's Datacenter in Kansas City, MO; their site is https://joesdatacenter.com. Just go to the colocation area to see the rates for mid/full towers and racks. They do have requirements on what size the case can be.
Quote:


> Case Requirements for Mid-Tower:
> - Must be sold as a "Mid-Tower" or a smaller type such as a "Mini-Tower"
> - Maximum Height: 18″ inch / 45.72 cm
> - Maximum Width: 8″ inch / 20.32 cm
> - Maximum Length: 20″ inch / 50.80 cm
> - No more than 8 Expansion Card Slots
> - No more than 8 Internal 3.5″ Drive Bays
> - No more than 5 External 5.25″ Drive Bays
> 
> Case Requirements for Full-Tower:
> - Must be sold as a "Full-Tower" or a smaller type. Server cases sold as "Super Tower" or "Pedestal" are NOT allowed.
> - Maximum Height: 22″ inch / 55.88 cm
> - Maximum Width: 9″ inch / 22.86 cm
> - Maximum Length: 22″ inch / 55.88 cm
> - No more than 10 Expansion Card Slots
> - No more than 10 Internal 3.5″ Drive Bays
> - No more than 7 External 5.25″ Drive Bays
> 
> Dual Power: Full-Tower colocation can only have dual power connections not Mid-Tower.


----------



## DaveLT

Wow, those are some pretty strict requirements. No more than 8 3.5" drive bays and no more than 8 expansion slots... ._.


----------



## cones

I wonder why. If it can fit within those dimensions, why does it matter?


----------



## LockoutNex

Quote:


> Originally Posted by *DaveLT*
> 
> Wow those are some pretty strict requirements. No more than 8 3.5" drive bays and no more than 8 expansion slots ... ._.


Quote:


> Originally Posted by *cones*
> 
> I wonder why on that, if it can fit within those dimensions why does it matter?


No idea why they are so strict with the requirements. I wish they would allow a little leeway when it comes to dimensions: I had a case less than 1/4" over the 8" limit and they said it wouldn't count as a mid-tower and would be billed as a full tower. Oh well, not a lot of people host mid-towers, so I just had to buy a new case.


----------



## beatfried

First: I really wouldn't call a P410 a "big boy" RAID card...
Quote:


> Originally Posted by *LockoutNex*
> 
> No idea why the dimensions matter or why they are so strict. I asked if a 8.23" width mid-tower would be fine and they said not or they'll count it as a full even if it is sold sold a mid-tower, but not a lot of people host mid-towers.


lol... really?
Alcohol is sold at the age of 18 here; I'm 17, can I still get some?
Marijuana is illegal here, but it's only 1g, that's okay, isn't it?
That's a 10A fuse, I can use it with 14A, right?
I mean... really?


----------



## DaveLT

Quote:


> Originally Posted by *beatfried*
> 
> First: I really wouldn't call a p410s a "big boi" raid card...
> lol... really?
> Alcohol is sold at the age of 18 here, i'm 17, can I still get some?
> Marijuana is illegal here, but its only 1g, thats okay, isn't it?
> Thats a 10A fuse, I can use it with 14A, right?
> i mean.. really?


What on earth are you on about?

The P410 was. Was.
It was a proper card used in HP servers. How much more big boy can it get? Does it need 4x SFF-8087 SAS3 connectors to count as a big boy RAID card?


----------



## xxpenguinxx

Quote:


> Originally Posted by *LockoutNex*
> 
> No idea why they are so strict with the requirements. I wish they would have a little leeway when it comes to dimensions, had a case less than 1/4" over the 8" and they said it doesn't count as a mid-tower and would be a full tower, but oh well not a lot of people host mid-towers, so just had to buy a new case.


The size limits make sense; they're probably put on a rack shelf, so the dimensions have to stay below a fixed maximum.

It says nothing about 2.5" drives, though. Could someone build a mid-tower with five 8x 2.5" drive cages?

Also, how are hardware problems handled? If you have a drive fail, do they send the server back, or can you send them the part and have them swap it? Edit: just read the FAQ. They'll keep your parts on site or allow you to ship parts to them, and it's $25 an hour to replace parts.


----------



## Prophet4NO1

Seems way easier to run a server out of the house or rent one from a service. This seems more expensive and a bigger pain than anything else.


----------



## bobfig

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Seems way easier to run a server out of the house or rent one from a service. This seems more expensive and a bigger pain then anything else.


While I agree, not all of us have a 1Gbps+ connection with redundant backup power available at home.


----------



## Prophet4NO1

Quote:


> Originally Posted by *bobfig*
> 
> while i agree not all of us have 1gbps+ connection with multiple backup power protection available at home.


Again, just rent space on a server and save the parts costs. Depending on your needs, the monthly bill will be roughly the same. Since you are just a small part of a bigger server, you don't have to worry about drive failures and other hardware issues costing you more, and you don't pay $25/hr for the work. Rent your space and you are good to go. Cheaper and easier.


----------



## NKrader

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Again, just rent space on a server. Save the parts costs. Depending on needs the monthly bill will be roughly the same. Since you are just a small part of a bigger server you don't have to worry about drive failures and other hardware issues costing you more. You also don't pay $25/hr for the work. Rent your space and you are good to go. Cheaper and easier.


That would work for small servers like voice chat or game servers.

I found that I could get a 1000/1000 connection for my server with 5 TB of bandwidth for $75 a month.

To rent the 16 cores with 64 GB of RAM and 30 TB of drive space that my server has, I feel like it would cost significantly more over the 5+ years I plan on operating this server without changes (not an arbitrary 5 years; the shortest warranty on the hardware is 5 years). And drive failures don't cost more unless you keep the drives longer than the warranty.

Also, $25/hr for what work? I would have 24/7 access to my server to work on as I saw fit.

I'm really pondering colocating my server, because after the power savings and the savings from being able to turn my internet speeds WAY down, this service would only end up costing me about $30 a month. Rent a few VMs to friends for misc personal or clan servers and end up making money, or at least breaking even.
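The 5-year comparison above can be sketched as back-of-envelope Python. Every dollar figure below is an illustrative assumption, not a real quote; plug in your own numbers:

```python
# 5-year cost of colocating an owned box vs renting equivalent capacity.
# All numbers are illustrative assumptions, not real quotes.

def five_year_cost(upfront, monthly):
    """Total spend over a 5-year operating window."""
    return upfront + monthly * 12 * 5

colo = five_year_cost(upfront=3000, monthly=75)  # own the hardware, pay colo fees
rent = five_year_cost(upfront=0, monthly=300)    # rent a comparable dedicated box

print(colo, rent)  # 7500 18000
```

The upfront hardware cost dominates early on, but over a long enough window a fixed monthly rental for big-iron specs tends to overtake it, which is the point being argued.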


----------



## Prophet4NO1

Quote:


> Originally Posted by *NKrader*
> 
> That would work for small servers like voice chat or game servers.
> 
> I found that I could get a 1000/1000 connection for my server with 5 TB of bandwidth for $75 a month.
> 
> To rent the 16 cores with 64 GB of RAM and 30 TB of drive space that my server has, I feel like it would cost significantly more over the 5+ years I plan on operating this server without changes (not an arbitrary 5 years; the shortest warranty on the hardware is 5 years). And drive failures don't cost more unless you keep the drives longer than the warranty.
> 
> Also, $25/hr for what work? I would have 24/7 access to my server to work on as I saw fit.
> 
> I'm really pondering colocating my server, because after the power savings and the savings from being able to turn my internet speeds WAY down, this service would only end up costing me about $30 a month. Rent a few VMs to friends for misc personal or clan servers and end up making money, or at least breaking even.


Hmmm, prices have shot up since I was last shopping for servers. That, or we stumbled into some really good deals with ours by accident.

The $25/hr was for on-site maintenance, the stuff you cannot do remotely, like drive swaps. Sometimes they charge for the time the arrays spend rebuilding too. At least, that is what I have seen.


----------



## ChRoNo16

If they charge for rebuild time I would drop them in an instant. That's a bad deal, considering a rebuild really requires nothing from the staff.


----------



## Xtreme21

Quote:


> Originally Posted by *Pawelr98*
> 
> 
> 
> 
> My server has a new addon.
> Quantum LTO-3 tape drive and a SCSI PCI-X card (because of the tape drive).
> 
> 
> Now what remains is to buy some LTO-2/3 tapes (read/write; read-only for LTO-1) and check if it works properly (driver-wise everything is OK).
> 
> The total cost was 301.5 PLN for the tape drive + 14.5 PLN shipping, and 19.99 PLN for the Adaptec SCSI card 29320LP + 7.99 PLN shipping.
> This is less than 100 USD total at the current (and also back then) PLN/USD exchange rate.
> 
> I needed some cheap storage (I can get LTO-3 tapes for 30-50 PLN) while my brother wanted reliable storage for photos, so the tape drive fills both needs.


Very interesting, I figured it would cost more than 100 USD to get into tape backups. Thanks for sharing!


----------



## TheBloodEagle

Always wanted a tape drive just for the hell of it. Very cool.


----------



## levontraut

LTO-3s are very cheap these days.

I am having issues configuring my switch. Any helpers, pretty please?

UPDATE:
Managed to resolve this. I was making a very noob error: I didn't set the console port speed to a compatible baud rate, and once logged in I didn't enter "enable" to configure the switch.
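For anyone who hits the same two snags, the recipe looks roughly like this. This is a sketch assuming a Cisco-style CLI, which many managed switches imitate; exact prompts, default baud rate, and commands vary by vendor:

```
# 1. Set your terminal emulator to the switch's console speed (commonly 9600 baud, 8N1).
# 2. Log in, then elevate before trying any configuration:
Switch> enable              # enter privileged EXEC mode
Switch# configure terminal  # only now are config commands accepted
```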


----------



## Jeci

Here's my contribution:

My FreeNas Box:

2 x 4 Core LGA 2011 CPU's
32GB ECC RAM
6 x WD RED 2TB's - RAID 5 (Yes yes, i know...)
4 x WD RED 4TB's - RAID 5 (Yes yes, i know...)



Here's where it lives:



It resides with a HP N54L and a pair of Dell 1u Servers (8CPU's/16GBRAM/1TBDisk).


----------



## levontraut

This is my setup at the moment



*Servers*

HP DL 180G6
SPEC:
> 2 x Intel Xeon E5530 Quad Core 2.4GHz CPU
> 12GB DDR3 RAM
> 25 x 2.5" SAS/SATA Hard Drive Bays
> HP Smart Array P410i SATA RAID Controller (RAID 0, 1, 1+0, 5, 6)
> HP NC362i Integrated Dual Port Gigabit Server Adapter
> Dual Redundant 750W Common Slot Power Supplies

The others are in my Sig

Switch is a TP Link SG3424 and a few other TP link things

Quote:


> Originally Posted by *Jeci*
> 
> Here's my contribution:
> 
> My FreeNas Box:
> 
> 2 x 4 Core LGA 2011 CPU's
> 32GB ECC RAM
> 6 x WD RED 2TB's - RAID 5 (Yes yes, i know...)
> 4 x WD RED 4TB's - RAID 5 (Yes yes, i know...)
> 
> 
> 
> Here's where it lives:
> 
> 
> 
> It resides with a HP N54L and a pair of Dell 1u Servers (8CPU's/16GBRAM/1TBDisk).


How loud is that? My 180 is stupid loud.


----------



## Pawelr98

Quote:


> Originally Posted by *levontraut*
> 
> LTO-3s are very cheap these days.
> 
> I am having issues configuring my switch. Any helpers, pretty please?
> 
> UPDATE:
> Managed to resolve this. I was making a very noob error: I didn't set the console port speed to a compatible baud rate, and once logged in I didn't enter "enable" to configure the switch.


For me LTO-3 is not that cheap.

The usual price for the cheapest LTO-3 drive is 500-600 PLN, which by my standards is not cheap (I paid 301.5 PLN). LTO-1/2 costs about 200 PLN (around 50 USD assuming ~4 PLN/USD).
However, compared to the prices of new LTO-5/6 drives, these are indeed cheap.

The cheapest tape drives are DDS-3 or DAT72 drives. Those can go for as low as 50-100 PLN (12-25 USD). Of course you need SCSI for those as well.

For now I am waiting for a SATA-to-USB adapter to boot openSUSE. Any Windows newer than Server 2003 has no integrated/freeware software to record to tapes without limitations. Linux has plenty of freeware software and can record to tapes from just the console.
The HP ML350 G5 is very problematic when it comes to booting.

On the integrated SAS controller, the array with the most drives has boot priority over the other drives.
I have 2 15K RPM drives in RAID 0 for intensive workloads (for now, Arma 3 hosting, which has 65 GB of mods atm; soon I also plan to host Arma 2 from this array). The rest is three standalone 10K RPM drives.
To boot from this controller I would have to build an array with at least 3 drives.

My Adaptec SATA controller has only 2 SATA ports, which right now run 2x 40GB in RAID 0 for the OS.

SCSI drives are not that expensive, but the adapters to connect them to my SCSI card are (drives are 20-30 PLN while adapters are 40-50 PLN + cables).

I tried to add a cheap PCI SATA card, but the server won't load the controller's BIOS, therefore not allowing me to choose the proper boot device within the controller.
The bonus from this attempt is a burned-out I/O power pin in the PCI-X slot, because some idiot designed this "universal" PCI card with I/O power connected to the 5V pins.
I/O power can be either 3.3V or 5V. Most standard PCs have 5V PCI slots, but PCI-X is strictly 3.3V. 3.3V shorted with 5V, and the current was high enough to melt a gold pad on the PCI card, burn out the trace, and melt the plastic of the PCI-X slot.
I removed the burned-out pin from the slot. The cards still work properly, as there are more power pins in the slot.
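On the Linux side, the "record to tapes from the console" bit really is just streaming a tar archive at the tape device node. A minimal Python sketch; the /dev/st0 path is the conventional first SCSI tape device on Linux (an assumption about the setup), and because tarfile writes to any path, you can exercise the same code against a plain file first:

```python
import tarfile

# First SCSI tape device on a typical Linux box (assumption; check dmesg).
TAPE_DEVICE = "/dev/st0"

def backup_to_tape(source_dir, device=TAPE_DEVICE):
    """Stream source_dir onto the tape as an uncompressed tar archive."""
    # tarfile happily writes to any path, so point `device` at a regular
    # file to dry-run the backup before aiming it at the real drive.
    with tarfile.open(device, mode="w") as tar:
        tar.add(source_dir, arcname=".")
```

Restoring is the same idea with mode="r" and extractall(); from the shell, the classic `tar -cvf /dev/st0 /photos` does the identical thing.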


----------



## levontraut

Quote:


> Originally Posted by *Pawelr98*
> 
> For me LTO-3 is not that cheap.
> 
> The usual price for the cheapest LTO-3 drive is 500-600 PLN, which by my standards is not cheap (I paid 301.5 PLN). LTO-1/2 costs about 200 PLN (around 50 USD assuming ~4 PLN/USD).
> However, compared to the prices of new LTO-5/6 drives, these are indeed cheap.
> 
> The cheapest tape drives are DDS-3 or DAT72 drives. Those can go for as low as 50-100 PLN (12-25 USD). Of course you need SCSI for those as well.
> 
> For now I am waiting for a SATA-to-USB adapter to boot openSUSE. Any Windows newer than Server 2003 has no integrated/freeware software to record to tapes without limitations. Linux has plenty of freeware software and can record to tapes from just the console.
> The HP ML350 G5 is very problematic when it comes to booting.
> 
> On the integrated SAS controller, the array with the most drives has boot priority over the other drives.
> I have 2 15K RPM drives in RAID 0 for intensive workloads (for now, Arma 3 hosting, which has 65 GB of mods atm; soon I also plan to host Arma 2 from this array). The rest is three standalone 10K RPM drives.
> To boot from this controller I would have to build an array with at least 3 drives.
> 
> My Adaptec SATA controller has only 2 SATA ports, which right now run 2x 40GB in RAID 0 for the OS.
> 
> SCSI drives are not that expensive, but the adapters to connect them to my SCSI card are (drives are 20-30 PLN while adapters are 40-50 PLN + cables).
> 
> I tried to add a cheap PCI SATA card, but the server won't load the controller's BIOS, therefore not allowing me to choose the proper boot device within the controller.
> The bonus from this attempt is a burned-out I/O power pin in the PCI-X slot, because some idiot designed this "universal" PCI card with I/O power connected to the 5V pins.
> I/O power can be either 3.3V or 5V. Most standard PCs have 5V PCI slots, but PCI-X is strictly 3.3V. 3.3V shorted with 5V, and the current was high enough to melt a gold pad on the PCI card, burn out the trace, and melt the plastic of the PCI-X slot.
> I removed the burned-out pin from the slot. The cards still work properly, as there are more power pins in the slot.


I was speaking of the tapes... they are cheap as chips. The drive is another story, even second-hand.


----------



## DaveLT

Quote:


> Originally Posted by *levontraut*
> 
> I was speaking of the tapes... they are cheap as chips... the drive is another story even for second hand.


Makes good sense if you are prepared to keep a whole room of tapes, but make sure it's not humid or the tapes will be goners. But damn, the drive prices and their maintenance, as far back as I can remember, are exhausting... Really exhausting. Be prepared to get a second drive to diagnose the first one.


----------



## levontraut

Quote:


> Originally Posted by *DaveLT*
> 
> Makes good sense if you are prepared to keep a whole entire room of tapes but make sure it's not humid or the tapes will be a goner. But damn the drive prices and their maintenance as far back as I can remember is exhausting ..... Really exhausting. Be prepared to get a second drive to diagnose the first one.


I know all about tape libraries and the devil they can be.


----------



## Jeci

Quote:


> Originally Posted by *levontraut*
> 
> How loud is that? My 180 is stupid loud.


It can't be heard with the door shut, but bear in mind the Dell 1U boxes are powered down. With them powered on it's pretty loud.


----------



## DaveLT

Quote:


> Originally Posted by *Jeci*
> 
> It can't be heard with the door shut, but bear in mind the Dell 1U boxes are powered down. With them powered on it's pretty loud.


Wait till you own a Dell R720. The damn buggers have a top fan speed of 13,300 RPM!


----------



## CloudX

Quote:


> Originally Posted by *DaveLT*
> 
> Wait till you own a Dell R720. Damn buggers have a top speed of 13300rpm!


I have several of those suckers deployed, they are pretty intense!


----------



## tiro_uspsss

Very slowly rebuilding / transplanting my server...

What are you looking at?

The expansion cards, all of which have had their cooling modded to include a ThermalRight IFX-10 backplate cooler (http://www.performance-pcs.com/thermalright-ifx-10-motherboard-backplate-heatpipe-cpu-cooler.html). The fin array has been bent to allow fitment.

Motherboard has two chipsets, both cooled by CoolJag Falcon Mini (http://lib.store.yahoo.net/lib/directron/falconmini01.jpg)

Cards are:

Chenbro CK23601 (36 port SAS expander)
Mellanox Connect-X2 (10GbE)
IBM M1015
IBM M5014
Hotlava 6 port 1GbE Intel NIC

main parts:

2x Intel Xeon X5650
SuperMicro X8DTH
12x Samsung 4GB DDR3-1333 ECC+REG

& truckloads of fast, loud fans.


----------



## LuckyJack456TX

Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> very slowly rebuilding / transplanting my server.............
> 
> what are you looking at?
> 
> The expansion cards, all of which have had their cooling modded to include a ThermalRight IFX-10 backplate cooler (http://www.performance-pcs.com/thermalright-ifx-10-motherboard-backplate-heatpipe-cpu-cooler.html). The fin array has been bent to allow fitment.
> 
> Motherboard has two chipsets, both cooled by CoolJag Falcon Mini (http://lib.store.yahoo.net/lib/directron/falconmini01.jpg)
> 
> Cards are:
> 
> Chenbro CK23601 (36 port SAS expander)
> Mellanox Connect-X2 (10GbE)
> IBM M1015
> IBM M5014
> Hotlava 6 port 1GbE Intel NIC
> 
> main parts:
> 
> 2x Intel Xeon X5650
> SuperMicro X8DTH
> 12x Samsung 4GB DDR3-1333 ECC+REG
> 
> & truckloads of fast, loud fans.


HOLY HEATSINKS BATMAN!!!


----------



## jibesh




Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> very slowly rebuilding / transplanting my server.............
> 
> what are you looking at?
> 
> The expansion cards, all of which have had their cooling modded to include a ThermalRight IFX-10 backplate cooler (http://www.performance-pcs.com/thermalright-ifx-10-motherboard-backplate-heatpipe-cpu-cooler.html). The fin array has been bent to allow fitment.
> 
> Motherboard has two chipsets, both cooled by CoolJag Falcon Mini (http://lib.store.yahoo.net/lib/directron/falconmini01.jpg)
> 
> Cards are:
> 
> Chenbro CK23601 (36 port SAS expander)
> Mellanox Connect-X2 (10GbE)
> IBM M1015
> IBM M5014
> Hotlava 6 port 1GbE Intel NIC
> 
> main parts:
> 
> 2x Intel Xeon X5650
> SuperMicro X8DTH
> 12x Samsung 4GB DDR3-1333 ECC+REG
> 
> & truckloads of fast, loud fans.













Why? How hot were these expansion cards getting?


----------



## mbreitba

Quote:


> Originally Posted by *jibesh*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Why? How hot were these expansion cards getting?


I think he likes heatsinks. Those cards shouldn't get hot. I know I've used the ConnectX-2 cards in blade servers, and they don't need anything like that for cooling.


----------



## burksdb

My new lab / play build.

This is OCN, right?

Dual L5520s
12 GB RAM, I have more to add if needed
24x 128 GB SSDs in the hotswap bays, with 3 more hooked up internally (not shown)
Intel 10Gb NIC
IBM M1015 - flashed to IR mode
Intel 24-port SAS expander - RES2SV240NC

Missing an Intel SAS expander and cables right now, but they should be arriving in a few days.


----------



## akshep

Quote:


> Originally Posted by *burksdb*
> 
> My new lab / play build.
> 
> This is Ocn right?
> 
> Dual L5520's
> 12gb ram i have more to add if needed
> 24 128gb ssd's in the hotswap bays with 3 more hooked up internally (not shown)
> Intel 10gb
> IBM M1015 - flashed to IR mode
> Intel 24 port Sas expander - RES2SV240NC
> 
> Missing an intel sas expander and cables right now, but they should be arriving in a few days.


How do those H60s (I'm guessing) handle those Xeons? Very nice looking setup, btw.


----------



## burksdb

Quote:


> Originally Posted by *akshep*
> 
> How do those H60s (I'm guessing) handle those Xeons? Very nice looking setup, btw.


They are H55s that I had on them in my old setup.
Slight pain to get them installed; I have to remove the fan bracket in order to attach them. Using the stock fans that came with the case for now. I will upgrade them if needed.

Was worried that the case would have come with the 80mm bracket, but got lucky with the 120mm one. I also didn't have any other heatsink that would have worked if these didn't, and I didn't want to buy any atm, so I got lucky!

I haven't had a chance to test temps in this case but it's on my list of things to do. I don't remember them breaking 60C at load in my previous setup. They probably get less airflow now, so we shall see.


----------



## akshep

It will be interesting for sure. I thought about using aio water coolers in a server before, but i've been too lazy to put one in.


----------



## burksdb

Quote:


> Originally Posted by *akshep*
> 
> It will be interesting for sure. I thought about using aio water coolers in a server before, but i've been too lazy to put one in.


I got lucky since the Corsair mounting bracket screwed directly into the backplate of the Asus board. Helps keep the noise down also.


----------



## burksdb

Quote:


> Originally Posted by *akshep*
> 
> How do those H60s (I'm guessing) handle those Xeons? Very nice looking setup, btw.


Idle temps



Load temps using Prime95; granted, I don't believe these chips run all that hot to begin with.


----------



## akshep

Those numbers look nice considering the rads didn't have direct access to "fresh air"


----------



## DaveLT

Quote:


> Originally Posted by *akshep*
> 
> Those numbers look nice considering the rads didn't have direct access to "fresh air"


And that the temperature of the air coming out of them is gonna be very low...
Quote:


> Originally Posted by *burksdb*
> 
> Idle temps
> 
> 
> 
> Load temps using prime - granted i dont believe these chips run all that hot to begin with


Did you put Deltas on?







This sort of situation seems a bit bad; I would have just gone with a normal air cooler setup, lol. You could boost airflow by velcroing another fan behind the AIO, or just go push-pull.








Seeing as you put in SSDs, there's no heat to worry about, lol.

Hey, you just gave me an idea to put 4 480GB SSDs in an array (well, they're cheap at 229 SGD and with $20 off instantly... yeah. Zotac Premium SSDs are really cheap. Or an AMD R7 SSD could do well.)


----------



## burksdb

Quote:


> Originally Posted by *DaveLT*
> 
> And that the air coming out of them is gonna be very low ... Are the HDDs roasting? Or did you put fast fans
> 
> 
> 
> 
> 
> 
> 
> This sort of situation seems a bit bad I would have just gone with a normal air sink setup lol


They are SSDs and so far are running pretty cool, but I will keep an eye on them.

Tad overkill, but that's what we do on OCN, right?









Nah, I had them and didn't want to buy new air coolers for the system if I didn't have to.


----------



## DaveLT

Quote:


> Originally Posted by *burksdb*
> 
> they are ssds and so far are running pretty cool, but i will keep an eye on them.
> 
> tad overkill but thats what we do on ocn right
> 
> 
> 
> 
> 
> 
> 
> 
> 
> nah i had them and didnt want to buy new air coolers for the system if i didnt have to


I edited the post lol.


----------



## broadbandaddict

Hey question for you guys that know more than me. I just got a C2100 and I'm trying to figure out how to hook all my drives up and which controller(s) to use. The server came with an H700 hooked up to the 12 front 3.5" bays (using two SAS cables). I want to have the following setup:


2x 120GB SSD (RAID 1) Boot
2x 500GB SSD (Mirror) VM1
8x 5TB Toshibas + 4x 256GB SSDs (Tiered Storage) Storage and VM2 [+4x 5TB Toshibas at a later date]
OS will be Server 2012 R2 Datacenter. I would like to run BitLocker on the system, so the boot array will need to be an actual RAID array through the H700, I assume. I can run the 12x 3.5" bays off an H200, right? Or maybe the optional SAS mezzanine card available for the C2100? If I hook the other SSDs up through the H700, can I pass them through _without_ creating a RAID array? I've had great experiences with Storage Spaces and I'd like to stick with it when possible, as it makes troubleshooting and expansion a breeze.

Thanks to anyone who can give me any advice, I know a few people around here have the C2100.


----------



## Goldn3agle

Just thought I'd post my "new" server here.
I got my hands on an Intel S5520HC and a pair of Xeon X5650s and cannibalised the chassis from my xw8400 workstation that was running a pair of E5345s.
I've got it running as a PLEX server and a video encoding machine.

It's a bit of a mess but I don't have the money for one of those fancy Intel server chassis, and I had to mangle the rear I/O shield to get the motherboard to fit properly because my dremel isn't working and I need a new one.
I got the 48GB of RAM at a good price from the same bloke I got the motherboard from.


If you've got a super sharp eye, you'll notice none of the fan connectors are connected to the board. The bloody board blasts the fans at 100% because I don't have one of those fancy Intel server chassis, so they're connected to a 4-channel fan controller instead.










I've got the storage running in two RAID arrays: one RAID 10, one RAID 0, plus two drives on their own.


----------



## Clos

Hey everyone, I'm back again! Finally went ahead and ordered the L5640 procs and 64 GB of memory for my Dell T710. I am curious about something, and hopefully one of you guys can chime in.

Does anyone know if Dell's CPU cooler mounting pattern, especially on the T710, is standardized? I.e., if I remove all of the OEM brackets, could I, for example, bolt on a Corsair AIO using the X58 bracket? Trying to find some intricate ways to shut this thing up. It's not horribly loud, especially in my now server closet, but I'd definitely like to quiet it down a bit more if possible.

I forgot to previously, but once my RAM shows up and I have everything installed, I'll snap some pictures of my server in the closet.


----------



## OliG

I recently upgraded my server, so I'm adding my humble contribution to this thread. I started my server journey 2 years ago when I bought an el cheapo Rackable Systems 2U :



Specs :
Intel S5000PSL motherboard
2x xeon L5420
16 GB DDR2 FB-DIMM
450W psu

For a home server, I was satisfied with the overall performance of this machine, on which I had 4 VMs running, hosted in Server 2008 R2, but there were some clouds in the sky:

I soon realised that 4 HDD bays were not much for storage, and I was soon limited in this regard. I had 2 RAID 1 arrays, one for VMs and one for storage. To maximise the space, I had the host booting from a USB stick with all paging/writing disabled, as advised by Microsoft.
The server was incredibly loud, blasting 3x 80mm fans at full throttle all the time. I tried to modify the BMC to allow the motherboard to control the speed, but nothing worked. I ended up "5-volting" these fans to reduce the speed, but the PSU was still quite loud. As you can see in the first picture, I located the server in my house's "engine room" near the furnace and electrical panels, where the furnace noise was hiding the server sound.
The FB-DIMM memory is not only slow, it runs hot and is power hungry! The whole system at idle was drawing around 180 W with 8x 2GB sticks. I replaced those sticks with 4x 4GB, which lowered the consumption to around 160 W, but this is not ideal for a 24/7 machine. I also suspected the PSU to be highly inefficient.
I ended up upgrading as I really needed to add disk space and, after some shopping, ordered a used Supermicro Opteron-based tower server. For my needs, this time the match is perfect. I bought the complete system first, and liked it so much that I completed it with a CPU/RAM upgrade, the NICs / RAID cards, and some spare parts to replace the usual failing suspects. I now need to find good deals on high-capacity HDDs to add storage.






Specs :
Supermicro SC745TQ-R920B case
Supermicro H8DGi-F motherboard
Single AMD Opteron 6276 (16 cores @ 2.3ghz)
32 GB DDR3 ECC (4x 8GB)
Adaptec 5805 RAID card + BBU
HP NC364T quad ports Gbit ethernet
Supermicro PWS-920P-1R 920W Platinum PSU
2x 256gb SSD in Raid1 for Host and VMs boot drives
3x 1TB hard drives ; 2 in Raid1 for Storage, and 1 for backup / snapshots

I've seen a lot of consumer-grade tower cases, but this server tower is in its own class. This is some serious stuff: built of heavy steel, the case alone weighs 28 kg without anything inside. Hot-swappable redundant PSUs / PWM fans / SAS/SATA drives. Airflow is impressive because cable management is child's play, the air shroud aims air directly at the CPU/memory, and finally, the 5 San Ace fans actually move some air! Best of all, under normal load it is not louder than a gaming rig.

The motherboard is also quite solid from my point of view, as it supports 2 CPUs and a ridiculous 512 GB max of ECC RAM. The nicest feature on it is the IPMI: the screen next to the server was never used; I'm able to do everything from my office two floors up. And I mean everything: power on/off, insert boot disk/USB, play in the BIOS, monitor hardware, RAID setup... Exactly as if you were next to it, and all you need is a web browser!

My only regret is not finding this kind of setup on an Intel platform. Still, those Opterons are really good value, considering that mine scored around 9k in PassMark for a chip I paid 60 bucks for. Also, the idle power consumption without the PCI cards is 90 W, 105 W with the cards... Not a Xeon D, but not bad either!

All that to say I'm now a Supermicro fan









----------



## xKIAxMaDDoG

OS: Windows 10 Pro (64)
Case: IBM x3650 case
CPU: Two Intel Xeon E5420 2.5 GHz quad-cores
Motherboard: IBM board
Memory: 12x 1 GB (12 GB) DDR2 667 MHz
PSU: Redundant 835 W PSUs
Storage HDD(s): Six 15K RPM 146.3 GB IBM SAS drives in RAID 0
Server Manufacturer: IBM

I also added an MSI R7 260X 2 GB because I have always wanted to game on a server. Yes, I know people tell me how inefficient it is, but it is fun.


----------



## cuppycake

Case: Bitfenix Shinobi
MOBO: ASRock E3C224-V+
CPU: Intel Xeon E3 1230
RAM: 32gb Kingston ECC
PSU: 430w Corsair
GFX: EVGA 8400
Storage:

1x 1TB WD Green
2x 1TB WD Red
1x 120GB Corsair SSD
2x 120GB Samsung 850 EVO SSDs in Raid 0
OS: CentOS 6.3

Server use is primarily game servers for my gaming community. Minecraft, ARK, CS:GO, that sort of stuff. Started hosting on my desktop back in the day and realized I couldn't keep hosting a server on my gaming rig 24/7 (though neither the server nor I ever had any performance issues), and with some donations from the community I was able to build our first server, which was little more than desktop parts with Ubuntu running on it. At that time we were only hosting 1 or 2 Minecraft servers, a TeamSpeak server, and some websites, so it did the job. Branching into different games and more modded Minecraft, we needed some more power, and my community, being amazing as always, donated almost 3/4 of this machine...


----------



## Xtreme21

Here is my PowerEdge R410, going to be used as my main ESXi node. Still need to pick up some ECC RAM, either 64 GB or 128 GB, not sure yet.

So far the specs outside of onboard stuff:
2x Xeon X5570s
PERC 6/i
2x 40 GB Intel 320 SSDs in RAID 1 for ESXi
Planned 64 GB or 128 GB of DDR3-1333 ECC RAM

Haven't addressed my storage needs yet, but I'm looking in the direction of a FreeNAS build with ZFS pools as an iSCSI target. Have the Dell C2100 under the R410 as a potential candidate for the ZFS storage; it has 12x 3.5" drive bays and supports SATA/SAS.


----------



## KyadCK

Quote:


> Originally Posted by *Xtreme21*
> 
> Here is my PowerEdge R410, going to be used as my main ESXi node. Still need to pick up some ECC RAM, either 64 GB or 128 GB, not sure yet.
> 
> So far the specs outside of onboard stuff:
> 2x Xeon X5570s
> PERC 6/i
> 2x 40 GB Intel 320 SSDs in RAID 1 for ESXi
> Planned 64 GB or 128 GB of DDR3-1333 ECC RAM
> 
> Haven't addressed my storage needs yet, but I'm looking in the direction of a FreeNAS build with ZFS pools as an iSCSI target. Have the Dell C2100 under the R410 as a potential candidate for the ZFS storage; it has 12x 3.5" drive bays and supports SATA/SAS.


Suggestion.

Do not run ESXi on the HDDs/SSDs at all. It's a 300MB OS, boot from USB, save all HDD slots for VMs. Makes it easier to upgrade to new versions, and not being on RAID makes it even more hardware agnostic, which is the whole point of VMs.

ESXi does not save VM configs to the OS drive, so failure is not a concern. It saves them with the VMDKs on the datastores, i.e. the HDDs where the VMs' "drive" files are located. Separate ESXi from them as much as you can.

Heck, being an R410 it may even have an SD slot on the motherboard explicitly for ESXi. Use that. Save the SSDs for a little SQL database to mess with or something.


----------



## Dalchi Frusche

So here's my humble budget home server. The build is complete, and now it's just a matter of testing these older HDDs and getting all my services installed. Powered it on just to make sure everything spins up and was pleasantly surprised to hear... nothing (almost). It is almost dead quiet; I have to listen really closely to hear a soft hum from the fans. (Yes, I know the case is pink; it was an old mod that I did for the wife. She let it collect dust, so I stole it for a server instead.)




Specs(be gentle)
CPU: Athlon II X2 220 Dual Core 2.8 Ghz
Mobo: GIGABYTE GA-MA785GM-US2H
Memory: 12GB DDR2 800mhz
HDD: Whatever I can scrounge up
PSU: Raidmax RX 530
Onboard Gigabit NIC - Minecraft Server traffic
PCI Gigabit NIC - Data Sharing / Media Streaming
PCI N300 Wireless Adapter - Administrative functions/general use
OS: Ubuntu Server latest stable build

Services(Planned)

- OpenVPN
- SSH
- Craftbukkit Server
- PLEX Media?
- Samba Sharing
- SFTP
- Webmin?


----------



## OliG

Is the PSU really being tilted by the weight of the HDD sitting on the power cords??!


----------



## Dalchi Frusche

Quote:


> Originally Posted by *OliG*
> 
> Is the PSU really being tilted by the weight of the HDD sitting on the power cords??!


Haha, no. The case never came with a PSU bracket, so I have to craft one this weekend. The HDD is cradled by some other cables and the IDE cable hooked into it. That is also temporary until I can test all those HDDs to see which are worth keeping. After which I will rerun some cables and throw some zip ties at it.


----------



## Sodalink

So I want to make my server more energy efficient while trying to spend the least possible, since my energy bill is $150+ even after some discounts. I'm cutting corners everywhere I can to save energy. What suggestions do you guys have?

Current specs:

NZXT H2 Case with 3x120mm fans running
Hitachi 7200rpm 5x2TB Raid 5
40GB OS SSD
Corsair 430w v2/3?
Intel G3258 3.2Ghz cpu
MSI Motherboard
Patriot 1600 4x4GB DDR3

I was thinking of maybe replacing the 5x 2TB drives with 4x 4TB to gain 4 TB of storage and use more energy-efficient drives, which at this point I'm not sure what those will be; I've been out of the hardware market for a while. Also, since I have the server in the basement and it is cold most of the time, I might remove 1 of the 120mm fans, considering that it really doesn't get hot, but that might be a bad idea or might not make a big difference.
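Side note on that drive swap: RAID 5 loses one drive's worth of space to parity, which is where the 4 TB gain comes from. A quick sketch:

```python
# Usable space in a RAID 5 array: one drive's worth of capacity goes to parity.
def raid5_usable_tb(n_drives, drive_tb):
    return (n_drives - 1) * drive_tb

current = raid5_usable_tb(5, 2)  # 5x 2TB -> 8 TB usable
planned = raid5_usable_tb(4, 4)  # 4x 4TB -> 12 TB usable

print(planned - current)  # 4
```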

This server is on 24/7 and is used as a Minecraft or ARK server, security camera NVR, data storage, and Plex server.


----------



## cones

Quote:


> Originally Posted by *Sodalink*
> 
> So I want to make my server more energy efficient while trying to spend the least possible since my energy bill is $150+ even after some discounts. I'm cutting corners everywhere I can to save energy. what suggestions do you guys have?
> 
> Current specs:
> 
> NZXT H2 Case with 3x120mm fans running
> Hitachi 7200rpm 5x2TB Raid 5
> 40GB OS SSD
> Corsair 430w v2/3?
> Intel G3258 3.2Ghz cpu
> MSI Motherboard
> Patriot 1600 4x4GB DDR3
> 
> I was thinking of maybe replacing the 5x2TB drives with 4x4TB to gain 4TB of storage and use better energy efficient drives which at this point I'm not sure what those will be. I've been out of the hardware market for a while. Also since I have the server in the basement and it is cold most of the time I might remove 1 of the 120mm fans considering that it really doesn't get hot, but that might be a bad idea or might not make a big difference.
> 
> This server is on 24/7 and is used as a, Minecraft Server or ARK server, Security camera NVR, Data storage, Plex server.


I think you may be better off looking somewhere else to lower your bill. I'd bet any money you would spend to help lower the bill would at most lower it by $10 just for swapping some of those parts. I'd first suggest picking up a Kill-A-Watt to help see what stuff is using.


----------



## KyadCK

Quote:


> Originally Posted by *cones*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Sodalink*
> 
> So I want to make my server more energy efficient while trying to spend the least possible since my energy bill is $150+ even after some discounts. I'm cutting corners everywhere I can to save energy. what suggestions do you guys have?
> 
> Current specs:
> 
> NZXT H2 Case with 3x120mm fans running
> Hitachi 7200rpm 5x2TB Raid 5
> 40GB OS SSD
> Corsair 430w v2/3?
> Intel G3258 3.2Ghz cpu
> MSI Motherboard
> Patriot 1600 4x4GB DDR3
> 
> I was thinking of maybe replacing the 5x2TB drives with 4x4TB to gain 4TB of storage and use better energy efficient drives which at this point I'm not sure what those will be. I've been out of the hardware market for a while. Also since I have the server in the basement and it is cold most of the time I might remove 1 of the 120mm fans considering that it really doesn't get hot, but that might be a bad idea or might not make a big difference.
> 
> This server is on 24/7 and is used as a, Minecraft Server or ARK server, Security camera NVR, Data storage, Plex server.
> 
> 
> 
> I think you may be better off looking somewhere else to lower your bill. I'd bet any money you would spend too help lower the bill would at most lower it by $10 just for swapping some of those parts. I'd first suggest picking up a kill-a-watt to help see what stuff is using.

What he said.

Replacing the lights you use all the time with LED ones will cut their power usage by roughly 85% too; 4-5 old bulbs can draw as much as a strong computer.
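Those LED numbers hold up, assuming a typical 60 W incandescent replaced by a 9 W LED (the wattages are assumed, typical values):

```python
incandescent_w, led_w = 60, 9          # assumed typical bulb wattages
savings = 1 - led_w / incandescent_w   # fraction of power saved per bulb
print(round(savings * 100))            # 85 (percent), matching the "like 85%" figure
print(5 * incandescent_w)              # 300 W: five old bulbs, roughly a strong PC under load
```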


----------



## Sodalink

Quote:


> Originally Posted by *cones*
> 
> I think you may be better off looking somewhere else to lower your bill. I'd bet any money you would spend too help lower the bill would at most lower it by $10 just for swapping some of those parts. I'd first suggest picking up a kill-a-watt to help see what stuff is using.


Quote:


> Originally Posted by *KyadCK*
> 
> What he said.
> 
> Replacing lights you use all the time with LED ones will drop their power usage by like 85% too, 4-5 lightbulbs can equal a strong computer.


I guess it won't be worth it then. I already have a Kill-A-Watt, but I only used it on the server a year or two ago and forgot I had it... I'll start using it on most of my stuff now. I've replaced all my light bulbs with 8/9W LEDs and the outdoor lights with solar LED lights. Since my house was built in 1930, I'm starting to think something old is eating up the wattage. I just can't stop thinking about my power bill being around $300 after my discount program ends next year.


----------



## cones

Quote:


> Originally Posted by *Sodalink*
> 
> I guess it will not be worth it to do it then. I got a kill-a-watt already, but only used it on the server like a year or 2 and forgot I had it... I will start using it on most of my stuff now. I've replaced all my light bulbs with LED 8/9w ones and replaced the outdoor lights with solar LED lights. Since I have a house built in 1930 I'm starting to think there is something old eating up the wattage usage. I just can't stop thinking about my power bill being $300~ after my discount program ends next year


Now I see you have a much better climate than I do. For me the biggest use is the HVAC, then the water heater. All my apartments have been all-electric, and electric heat is about the worst thing to run all the time. You could also look at things like your fridge. It also depends on your price of electricity; for me it's around $0.08 per kWh.


----------



## Sodalink

I don't know my rate off the top of my head, but a quick google says "The average residential electricity rate in Santa Cruz is 15.59¢/kWh." There are 4 tiers of usage, and I believe the higher the tier, the more they charge per kWh. I'm at tier 4. The only things I'm running beyond the average house are 1 extra fridge, 2 extra TVs, a 24/7 server, and 2 extra lights. I wouldn't think that would multiply my bill to 5x what people I know get charged per month. Everything listed is energy efficient.


----------



## wiretap

I'm in the same boat... I even work for the power company and don't get a discount.
My rates are:
Power Supply Charges:
First 17 kWh per day. . . . . . . . . . . . . . . . . . . . . . . . . . 6.912¢ per kWh
Additional kWh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.257¢ per kWh
Delivery Charges:
Service Charge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . $6.00 per Month
Distribution kWh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.003¢ per kWh

My computer equipment really isn't a factor in my electricity costs when you consider the AC and pool pump. I range anywhere from 615 kWh/mo during the winter to 2200 kWh/mo during the summer (~2000 sq. ft. home, quad level).
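Plugging those published rates into a quick sketch gives ballpark bills for the usage figures above (30-day month assumed; taxes and surcharges ignored):

```python
def monthly_bill_usd(kwh, days=30):
    # Power supply charges: first 17 kWh/day at 6.912 cents, the rest at 8.257 cents
    first_block = min(kwh, 17 * days)
    extra = max(kwh - first_block, 0)
    supply = first_block * 0.06912 + extra * 0.08257
    # Delivery charges: $6.00 service charge plus 5.003 cents on every kWh
    delivery = 6.00 + kwh * 0.05003
    return supply + delivery

print(round(monthly_bill_usd(615), 2))   # winter month: ~80.69
print(round(monthly_bill_usd(2200), 2))  # summer month: ~290.86
```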


----------



## Prophet4NO1

Picked up a Dell 2900 with dual 2.5Ghz quad cores today.


----------



## herkalurk

I guess I should update what I've got. I recently purchased an older HP desktop server.

HP ML350 G6

OS: Centos 7 x86_64
CPU: 2 X Intel X5650 6 core @ 2.67 Ghz (with hyperthreading 24 logical cores)
RAM: 4X4 ECC ram (16 GB)
HD: 2 X 300 GB WD Raptor (OS HW Raid 1), 2 X 146 GB 10K SAS (came with server), 2 X 300 GB 10K SAS (came with server), 4 X 1 TB (Storage in the top hotswap, mdadm raid 5)
Hotswap tray: Rosewill 3 CD bays to 4 HDD bays Link

It's my everything server. I host websites, a MySQL DB, a Splunk indexer, backups for my cloud VPS, MythTV recording, a GitLab server, NZB downloading (SABnzbd+, Sick Beard, CouchPotato), a fileserver, and other stuff I'm not thinking of. The 4 SAS drives that came with the server are all independent and used by Myth as scratch disks for recordings. If one dies I have another that came with it that I took out, plus a spare SATA drive to drop in and keep recording; I can always re-record a show. I bought it with only a single quad-core CPU in it; eBay provided the pair of 6-core CPUs for $200. I know 24 cores sounds like overkill for just recording, but there are a few shows my wife likes to watch over and over again. She'll marathon Bones or Castle for a day while sewing or just lounging, so I use HandBrake to down-convert those shows to 720p MKV, and that's when the 24 cores really shine.

Overall cost of upgrade is just shy of $500. $200 for the server from craigslist, $200 for cpus, $10 for 2nd identical power supply (just in case), $10 for 2nd HP heatsink, $2 for HP fan to fill last fan slot, and $50 for the HD cage.
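For anyone curious what an mdadm RAID 5 setup like the one above looks like, here is a minimal command sketch. The device names (/dev/sdb through /dev/sde), md number, and mount point are placeholders, so check lsblk on your own box first:

```shell
# Create a 4-disk RAID 5 array (device names are hypothetical; verify with lsblk)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial parity sync
cat /proc/mdstat

# Persist the array config so it assembles on boot (CentOS path)
mdadm --detail --scan >> /etc/mdadm.conf

# Put a filesystem on it and mount
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
```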


----------



## bobfig

Welp, my server has been upgraded to an E3-1230 v1 and a Supermicro X9SCM-F, from a cheap Biostar 775 board and a Q8400. Now I can host VMs and stuff better.

Next up, when I get to it, is some more storage. Probably 3-4 3TB WD Reds.


----------



## twerk

Anyone know if you can add drives to an existing RAID5 array with HP Smart Array controllers? The Smart Array P440 to be specific. I'd like to expand from 5 drives to 8.

Sorry for the basic question!


----------



## herkalurk

You should be able to. Do you have the hpssacli application installed on the server? Also, what OS is it running? On Linux, drivers installed with that app will rescan the devices for you; on Windows you should just be able to do a rescan in Disk Management.

EDIT: you can

http://www8.hp.com/h20195/v2/GetPDF.aspx/c04346277.pdf
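For the CLI route, an expand looks roughly like this. The slot number and drive addresses below are made-up placeholders, so list your own config first and treat the HP document above as the authoritative reference:

```shell
# Show controllers, arrays, and unassigned drives (slot/drive IDs below are hypothetical)
hpssacli controller all show config

# Add three unassigned physical drives to existing array A; data is preserved,
# but the transformation can take a long time and a backup is still wise
hpssacli controller slot=0 array A add drives=1I:1:6,1I:1:7,1I:1:8

# Afterwards, grow the logical drive into the new space
hpssacli controller slot=0 logicaldrive 1 modify size=max forced
```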


----------



## twerk

Quote:


> Originally Posted by *herkalurk*
> 
> You should be able to, do you have the hpssacli application installed on the server? Also, what OS is the server? In linux there are drivers that get installed with that app so it will rescan the devices for you, in windows you should just be able to do a rescan in the disk management screen.
> 
> EDIT: you can
> 
> http://www8.hp.com/h20195/v2/GetPDF.aspx/c04346277.pdf


Thanks. I have the HP SSA GUI installed, not the CLI. I'm assuming it has the functionality, someone please correct me if I'm wrong. I'm running Server 2012 R2.

I'm guessing expanding the array won't involve losing any data already on it?


----------



## herkalurk

Quote:


> Originally Posted by *twerk*
> 
> Thanks. I have the HP SSA GUI installed, not the CLI. I'm assuming it has the functionality, someone please correct me if I'm wrong. I'm running Server 2012 R2.
> 
> I'm guessing expanding the array won't involve losing any data already on it?


Yeah, hpssacli is the Linux command-line utility. As far as I know a resize won't lose data, but it's best to have a backup regardless. Is the server under support? Give HP a quick email and ask their advice.


----------



## EvilMonk

Quote:


> Originally Posted by *twerk*
> 
> Anyone know if you can add drives to an existing RAID5 array with HP Smart Array controllers? The Smart Array P440 to be specific. I'd like to expand from 5 drives to 8.
> 
> Sorry for the basic question!


Yes you can, and it's quite easy. The help HP provides with the SSA is well made and should guide you through it, and I found many users giving step-by-step instructions on the HP forums as well. I've done it twice on my DL360 and DL380 G7 with Smart Array P420 controllers, and once on my DL380 G7 with a P812 controller hooked to a StorageWorks MSA60. It's quite straightforward.


----------



## Prophet4NO1

Couple of upgrades. The file server has a new CPU, a Xeon E3-1241 v3. It handles Plex transcodes much better than the little Pentium did; overkill for everything else. I also got a new Noctua cooler to replace the gigantic Silver Arrow cooler I had put on after the stock fan started making noises. I think this will work better for my plan to move into a rack case later; I should be able to just drop it in as-is.


----------



## cones

Took me longer than it should have to find the RAM.


----------



## OliG

Quote:


> Couple upgrades. Have a new cpu in the file server, Xeon E3 1241 V3.


Good single-CPU choice, paired with a nice motherboard. I have an E3-1230 v3 in my current desktop and it handles everything I throw at it. Definitely overkill for a file server... Well done!

I also upgraded my already recently upgraded server (see post 3410). While looking for a second Opteron 6276 to complete my 2P system, I found out in another thread that very good deals could be had on E5-2670 v1 CPUs. I also realised that 2x E5-2670 v1 were actually twice as fast as 2x 6276, while using quite a bit less power. So I took the plunge and bought two Xeons with heatsinks, plus a dual-LGA2011 motherboard. The board is the big chunk of the budget, costing nearly 3 times what I paid for the two CPUs. I comfort myself with the thought that at least I sold my NIC and previous motherboard for a good price, and removing the PCIe card saved me 20 watts.

The motherboard is a Supermicro X9DRI-LNF4+. I chose this one because it's on my case's compatibility list, and I'm quite happy with the layout; you can clearly tell Supermicro put a lot of thought into their board layouts to ensure good airflow.

Now comes the hardest part: allocating those 16 new cores to my VMs!


----------



## Prophet4NO1

Quote:


> Originally Posted by *OliG*
> 
> Good single cpu choice, paired with a nice motherboard. I have a E3-1230 v3 in my current desktop and it handles everything I throw at it. Definitely overkill for a file server... Well done !
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I also upgraded my already recently upgraded server (see post 3410)... While looking for a second opteron 6276 to complete my 2P system, I found out in another thread that very good deals could be had on E5-2670 v1 cpus. I also realised that 2x E5-2670v1 where actually 2 times faster than 2x 6276, while using quite a bit less power. So I took the plunge and bought 2 xeons / heatsinks plus a dual LGA2011 motherboard. The main board is the big chunk in the budget, costing nearly 3 times what I bought the 2 cpus for. I confort myself by thinking that at least I sold my nic card and previous motherboard for a good price, and the removal of the pcie card saved me 20 watts.
> 
> Motherboard is a Supermicro X9DRI-LNF4+, I choose this one because it is in the compatibility list of my case, and I'm quite happy with the layout. You can clearly tell than Supermicro put a lot of thinking in their boards layout to ensure good airflow.
> 
> 
> 
> Nows come the hardest part : allocating thoses 16 new cores to my VMs !


Thanks. I've been wanting a modern dual-CPU machine to play with. I have a dual-chip Dell 2900, but it's just used as a game server.

The Pentium G3260 I had before did a fine job, up until more than one 1080p stream was running and transcoding on the fly. Now I use about 20-30% CPU in the same situation.


----------



## herkalurk

Quote:


> Originally Posted by *Prophet4NO1*
> 
> 
> 
> Couple upgrades. Have a new cpu in the file server, Xeon E3 1241 V3. Handles Plex transcode much better then the little pentium did. Overkill for everything else. Also go a new Noctua cooler to replace the gigantic Silver Arow coller i had put on after the stock fan started making noises. I think this will work better for my plans to go in a rack case later. Should be able to just drop it in as is.


I have CX430M's in both of my home built servers. They work great, reliable company, and not very expensive.


----------



## Prophet4NO1

Quote:


> Originally Posted by *herkalurk*
> 
> I have CX430M's in both of my home built servers. They work great, reliable company, and not very expensive.


I have never had an issue with one. I wouldn't use it in a heavy-load application, but for this kind of machine they seem like a really good option.


----------



## LuckyJack456TX

Quote:


> Originally Posted by *Prophet4NO1*
> 
> 
> 
> Couple upgrades. Have a new cpu in the file server, Xeon E3 1241 V3. Handles Plex transcode much better then the little pentium did. Overkill for everything else. Also go a new Noctua cooler to replace the gigantic Silver Arow coller i had put on after the stock fan started making noises. I think this will work better for my plans to go in a rack case later. Should be able to just drop it in as is.


Nice board and CPU combo. I have a 1230 v2 and it's awesome for transcoding and Hyper-V.


----------



## TheBloodEagle

Aren't the rear fan and CPU fan competing with each other? Wouldn't it make more sense to reverse the CPU fan? I see this on a lot of builds, but it doesn't make overall sense to me.

"Pulling" the heat off the CPU so it flows directly into the path of the rear exhaust seems more logical. The front intake fans would be sending air over the motherboard anyway.


----------



## DaveLT

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Isn't the rear fan and CPU can competing with each other? Wouldn't it make more sense to reverse the CPU fan? I see this often on a lot of builds but it doesn't make overall sense to me.
> 
> "Pulling" the heat off the CPU that flows directly into the flow path of the rear is more logical. The front intake fans would be sending air over the motherboard anyway.


It's logical, but for some reason flipping it over creates lots of turbulence and an odd whine that's extremely irritating.

The CPU fan is far enough away that it doesn't really matter, though. As it blows toward the board, the heated air just dissipates around the board and is exhausted by the rear fan, making the exit less turbulent. The air might also be a lot more stale if the CPU exhaust faced the side panel, because a Noctua fan moves air at fairly low velocity.


----------



## Prophet4NO1

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Isn't the rear fan and CPU can competing with each other? Wouldn't it make more sense to reverse the CPU fan? I see this often on a lot of builds but it doesn't make overall sense to me.
> 
> "Pulling" the heat off the CPU that flows directly into the flow path of the rear is more logical. The front intake fans would be sending air over the motherboard anyway.


Having zero cooling issues. The CPU sits in the 30s most of the time and only reaches the 50s under load.


----------



## TheBloodEagle

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Having zero cooling issues. The CPU sits in the 30s most of the time and onliy in the 50s durring load.


Keep in mind I'm not trying to be heavily critical of your specific build; this is more of a general discussion, since I see it a lot. It just seems like a waste of energy and effort on the part of the fans; there's an efficiency loss. It most likely doesn't show, but if you ran an airflow simulation in SolidWorks or similar, you'd just see a lot of turbulence. I over-critique this type of thing, though; I know most of the time it's sufficient to do the job and not a big deal.


----------



## DaveLT

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Keep in mind I'm not trying to be heavily critical of your specific build, just more like a general discussion since I see it a lot. But it seems like a waste of energy & effort on part of the fans; there's an efficiency loss. It doesn't show most likely but it seems like it. If you could show an airflow simulation in Solidworks or similar, it would just be a lot of turbulence. I overly critique this type of things though. I know most of the time it's sufficient to do the job & not a big deal.


What's worse is the turbulence caused by flipping the fan around; the whine is just intolerable at any RPM.


----------



## Prophet4NO1

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Keep in mind I'm not trying to be heavily critical of your specific build, just more like a general discussion since I see it a lot. But it seems like a waste of energy & effort on part of the fans; there's an efficiency loss. It doesn't show most likely but it seems like it. If you could show an airflow simulation in Solidworks or similar, it would just be a lot of turbulence. I overly critique this type of things though. I know most of the time it's sufficient to do the job & not a big deal.


I think this is one area where people overthink things. There is such a large volume of air in the case that the fans won't really be fighting each other. Maybe if the fans were higher speed it might be an issue, but these low-speed fans won't have much of a problem.


----------



## Paul17041993

Quote:


> Originally Posted by *Prophet4NO1*


Quote:


> Originally Posted by *TheBloodEagle*
> 
> Isn't the rear fan and CPU can competing with each other? Wouldn't it make more sense to reverse the CPU fan? I see this often on a lot of builds but it doesn't make overall sense to me.
> 
> "Pulling" the heat off the CPU that flows directly into the flow path of the rear is more logical. The front intake fans would be sending air over the motherboard anyway.


Pushing air into a heatsink is generally more effective than pulling, and while air recycling back into the CPU heatsink may sound like an issue, in systems like these it isn't.
My only concern is that a small amount of air would be sucked in through the open top vents; really, that back fan isn't even needed.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Paul17041993*
> 
> Pushing air into a heatsink is generally more effective than pulling, while the air recycling back into the CPU heatsink may sound like an issue, in systems like these it isn't.
> The only concern I have is a small amount of air would be sucked in through the open top vents, really that back fan isn't even needed.


Drive temps are lower with the rear fan. The hot air doesn't seem to find its way out as well without it. I tried it for a day, then put the fan back in.


----------



## Paul17041993

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Drive temps are lower with the rear fan. The hot air does not seem to find its way out as well with out it. I tried it for a day then put it back in.


Huh, weird, those front fans must be just on the edge of enough power...


----------



## Prophet4NO1

Quote:


> Originally Posted by *Paul17041993*
> 
> Huh, weird, those front fans must be just on the edge of enough power...


I think it has a lot to do with the case being a wide-open space. Like a river slowing down when it widens, without something giving it direction the air just spreads out and slows down, slowly finding its way out the back or top. The fan keeps a current moving through the case, I think, and a lot of the heat from the drives goes with it. Seems logical.


----------



## danilon62

_*WHITE SWAN*_

Usage: File server

*OS:* FreeNAS
*Case:* PowerMac G5
*CPU:* AMD A4 5300
*Motherboard:* Gigabyte F2A75M-D3H
*Memory:* 2x2GB Corsair vengeance LP 1600MHz (Black)
*PSU:* Tooq 480W
*OS HDD:* 8GB SanDisk USB Thumb Drive
*Storage HDDs:* 1xSeagate Barracuda 7200 500GB; 1xHGST 5400 1TB.
*Server Manufacturer:* Me









*PICS PICS PICS:*


----------



## EvilMonk

Quote:


> Originally Posted by *danilon62*
> 
> _*WHITE SWAN*_
> 
> Usage: File server
> 
> *OS:* FreeNAS
> *Case:* PowerMac G5
> *CPU:* AMD A4 5300
> *Motherboard:* Gigabyte F2A75M-D3H
> *Memory:* 2x2GB Corsair vengeance LP 1600MHz (Black)
> *PSU:* Tooq 480W
> *OS HDD:* 8GB SanDisk USB Thumb Drive
> *Storage HDDs:* 1xSeagate Barracuda 7200 500GB; 1xHGST 5400 1TB.
> *Server Manufacturer:* Me
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *PICS PICS PICS:*


As a G5 Quad and G5 dual 2.7GHz owner, I have to say: awesome mod on the G5 case to fit a PC inside. Great work, +REP, well done!

How hard was the fitting? Thanks!


----------



## CloudX

Had some fun last night with my home office Hyper-V box. I had 4x 1TB drives in there. This box runs Hyper-V 2012 and hosts my pfSense and a DC. I have a persistent VPN across 3 sites; two of them use the same Hyper-V setup, and it works insanely well. I was throwing away an old monster (DL380 G5), so I decided to take some goodies from it: I pulled the P400 card and the 5.25" hot-swap 8-bay box, which holds 8x 146GB 10k SAS drives. I got it all wired up and working on my HP 8200 Elite MT in RAID 5, updated the firmware, and it's all running very well. It seems a little slower than the 4-drive RAID 10 I had on the Intel RAID, but it's not bad at all, and I like having hot-swap capability now. I do have about twenty 300GB 10k SAS spares that are for our main Hyper-V server in our datacenter, so I could always rebuild this with some of those.

Anyways I think I just wanted an excuse to nerd out. It was fun.





PS. I did wipe it down and dust everything; this server lives in my garage on a shelf with a switch and a 1500VA UPS. It gets dusty in the garage.


----------



## CJston15

Just to clarify...

Your HP 8200 MT is your Hyper-V host? Is that running an i7 2600-series processor? You have the 4x 1TB drives plus the additional 8-drive bay from the DL380 G5, all in the 8200, giving you 12 drives total?


----------



## CloudX

Yes, this is one of our satellite office builds. About 2 years ago we started using pfSense very heavily with our clients, so we started thinking of ways to package things together using Hyper-V 3.0, which works very well with Linux or, in the case of pfSense, FreeBSD. For almost all clients we use a more robust small 1U or tower server with at least a RAID 1 and dual PSUs. We then install Hyper-V 2012 as the host and run a virtual Server 2012 domain controller and a pfSense VM. It's an i7 2600K with 8GB RAM and 3x 1Gbit NICs; the pfSense VM gets 2 of the NICs.

So yesterday I removed the 4x 1TB drives. Now it has the HP Smart Array with the 8-bay cage of 2.5in drives; I didn't need that much space and could put the 1TB drives to use elsewhere. This box handles firewall/VPN and domain controller duties in a neat package. It was only pulling 55 watts with the 4x 1TB, but with the 8-bay it's about 110 watts now.
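That 55 W jump is easy to put in dollar terms; a quick sketch using the ~$0.08/kWh figure mentioned earlier in the thread (your rate will differ):

```python
extra_watts = 110 - 55                          # added draw from the 8-bay SAS cage
kwh_per_year = extra_watts * 24 * 365 / 1000    # extra consumption per year in kWh
annual_cost = kwh_per_year * 0.08               # assumed rate: $0.08/kWh
print(round(annual_cost, 2))                    # ~38.54 dollars/year
```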


----------



## G33K

Cheap little home Plex/"learning how to server/linux" server



i5 650
8gb DDR3
750gb Scorpio main hard drive
3tb Caviar Green storage drive
500gb IDE drive to hold up the Scorpio (previous build in the case was an AM2+ with IDE and SATA)
Debian 8


----------



## beatfried

just a little preview of my new storage


----------



## danilon62

Quote:


> Originally Posted by *EvilMonk*
> 
> As a G5 Quad and G5 dual 2.7 Ghz owner I have to say awesome mod on the G5 case to fit a PC inside
> 
> 
> 
> 
> 
> 
> 
> Great work + REP well done
> 
> 
> 
> 
> 
> 
> 
> how hard was the fitting? thanks


It wasn't "hard" per se, just really tedious. It seems Apple likes to make their stuff as hard to disassemble as possible; I think I had to use 5 different types of screws to take it apart (Allen, Torx, Phillips, etc.).
Once the case was empty it was only a matter of engineering and creativity. Here's a build log of the thing: http://www.overclock.net/t/1594495/white-swan-powemac-g5-build-log
I pretty much designed everything on the go, although the log might work as a rough guide. If you have any questions I'd be more than happy to answer them!


----------



## mcdoc77

Name: Yggdrasil... because it is basically the root of my network
OS: *FreeNAS*
Case: *Fractal Define R5*
CPU: *Intel Core i3 6100*
Motherboard: *Supermicro X11SSM-F*
Memory: *2x 16GB (16384MB) Crucial CT16G4WFD8213 DDR4-2133 ECC DIMM CL15 Single*
PSU: *Corsair CX600* plus UPS *CyberPower Value Series 800 VA / 480 Watt Tower*
OS HDD (If you have one): Old 500GB 2.5" I-don't-know-and-I-don't-care
Storage HDD(s): *8x 3000GB WD Purple WD30PURX 64MB @ RAIDZ2*
Server Manufacturer: *me*









Pre-build and build



Ok, the second drive cage isn't standard. I just had a spare one, since my 9yr old daughter would not use more than 3 HDs in her Corsair C70









I never attached a keyboard, mouse, monitor, or optical drive to this PC. IPMI is awesome.
Some pics of accessing the BIOS (yes, it's UEFI, but it looks like the good old BIOS. Hey man, this is server-grade stuff!) and running Memtest *remotely*. An optical drive can even be simulated via the IPMI web interface: simply load the ISO and you are good to go.




I am still testing, but till now I must admit: It works great! #nerdporn

WD Purple... well. First: they are rated for 24/7 operation and are very reliable hard disks.
But I guess the real question is "Why not WD Red?"
What is the distinction between Red and Purple? Mainly TLER support. What TLER does is interrupt error recovery so a hard disk won't get kicked out of an array. This only makes sense if you have a controller that can handle it (and would kick the drive out) and a need for really quick responses. I have neither.
OK, ZFS could handle this, yes, but I prefer an error recovery that does its job. Remember: this is a SOHO server. It is not designed as a SAN for ultra-high-availability databases or something like that.

Because I have 9 HDs (8 storage + boot), I added a *Delock 89395 4-port PCIe x4* controller and connected 2 HDs to it. Works fine.

I tested the UPS... 45 minutes on idle! I never expected that!









[Edit: USV/UPS False Friends]
[Edit: Forgot about the drive cage]
[Edit: added Answer about WD Purple-HDs]
[Edit: Forgot to mention the HD Controller]
[Edit: Minor corrections plus UPS time]
[Edit: RAIDZ2]
I'll keep adding information to this post if the discussion raises interesting issues, just to have everything available in the main post.
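For reference, the raw usable space of that RAIDZ2 pool works out as follows (before ZFS metadata overhead and the usual TB-vs-TiB difference, so real-world usable space will be somewhat lower):

```python
def raidz2_usable_tb(n_drives, drive_tb):
    # RAID-Z2 keeps two drives' worth of parity, like RAID 6
    return (n_drives - 2) * drive_tb

print(raidz2_usable_tb(8, 3))  # 18 TB raw usable from 8x3TB
```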


----------



## Prophet4NO1

Quote:


> Originally Posted by *mcdoc77*
> 
> OS: *FreeNas*
> Case:*Fractal Define R5*
> CPU:*Intel Core i3 6100*
> Motherboard:*Supermicro X11SSM-F*
> Memory: *2x 16GB 16384MB) Crucial CT16G4WFD8213 DDR4-2133 ECC DIMM CL15 Single*
> PSU: *Corsair CX600* plus UPS *Cyberpower Value Serie 800 VA / 480 Watt Tower*
> OS HDD (If you have one): Old 500GB 2,5" I-don't_know_and_I_don't_care
> Storage HDD(s): *8x 3000GB WD Purple WD30PURX 64MB*
> Server Manufacturer: *me*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pre-build and build
> 
> 
> 
> Ok, the second drive cage isn't standard. I just had a spare one, since my 9yr old daughter would not use more than 3 HDs in her Corsair C70
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I never attached a Keyboard, mouse, Monitor or optical drive to this PC. IPMI is awesome.
> Some Pics accessing Bios (Yes, it is an UEFI, but it looks like the good old BIOS. Hey man, this is server grade stuff!) and doing Memtest via *REMOTE* . The Optical Drive can be simulated via IPMI/Web-Interface. Simply load the ISO and you are good to go.
> 
> 
> 
> 
> I am still testing, but till now I must admit: It works great! #nerdporn
> 
> [Edit: USV/UPS False Friends]
> [Edit: Forgot about the drive cage]


Nice setup! IPMI is pretty sweet. It's part of the reason I like Supermicro server boards for any machine I won't actively be using. Hoping ASRock's IPMI works just as well for my pfSense box. Is ECC support on the i3 new for these chips? I think it was only Pentiums and Xeons the last couple of generations.

Also, why are you not running off a flash drive? FreeNAS was designed to be installed on a flash drive. It basically loads everything into RAM, rarely touching the OS storage unless being shut down.


----------



## CloudX

That's a nice build!


----------



## bobfig

Quote:


> Originally Posted by *mcdoc77*
> 
> OS: *FreeNas*
> Case:*Fractal Define R5*
> CPU:*Intel Core i3 6100*
> Motherboard:*Supermicro X11SSM-F*
> Memory: *2x 16GB 16384MB) Crucial CT16G4WFD8213 DDR4-2133 ECC DIMM CL15 Single*
> PSU: *Corsair CX600* plus UPS *Cyberpower Value Serie 800 VA / 480 Watt Tower*
> OS HDD (If you have one): Old 500GB 2,5" I-don't_know_and_I_don't_care
> Storage HDD(s): *8x 3000GB WD Purple WD30PURX 64MB*
> Server Manufacturer: *me*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pre-build and build
> http://www.overclock.net/content/type/61/id/2747109/width/350/height/700
> http://www.overclock.net/content/type/61/id/2747110/width/350/height/700
> 
> Ok, the second drive cage isn't standard. I just had a spare one, since my 9yr old daughter would not use more than 3 HDs in her Corsair C70
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I never attached a Keyboard, mouse, Monitor or optical drive to this PC. IPMI is awesome.
> Some Pics accessing Bios (Yes, it is an UEFI, but it looks like the good old BIOS. Hey man, this is server grade stuff!) and doing Memtest via *REMOTE* . The Optical Drive can be simulated via IPMI/Web-Interface. Simply load the ISO and you are good to go.
> 
> http://www.overclock.net/content/type/61/id/2747138/width/350/height/700
> http://www.overclock.net/content/type/61/id/2747139/width/350/height/700
> 
> I am still testing, but till now I must admit: It works great! #nerdporn
> 
> [Edit: USV/UPS False Friends]
> [Edit: Forgot about the drive cage]


Why Western Digital Purple? Aren't those aimed at surveillance because they're optimized more for writing, among other things?


----------



## Master__Shake

Mine got ugly after the move.

So I made it pretty again.


----------



## seross69

Subbed to add mine later.


----------



## PuffinMyLye

Just got my new server rack in today. Still waiting for more parts, but I threw in what I've got (mainly just the empty chassis) to get it all cleaned up before the wifey gets home tomorrow.


----------



## seross69

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Just got my new server rack in today. Still waiting for more parts but just threw in what I've got (mainly just the empty chassis') to get it all cleaned up before the wifey gets home tomorrow
> 
> 
> 
> 
> 
> 
> 
> .


what do you use yours for??


----------



## PuffinMyLye

Quote:


> Originally Posted by *seross69*
> 
> what do you use yours for??


Building out a 3-node vSAN cluster. You can check out the build log I started the other day *here*.


----------



## Prophet4NO1

I really want to move my stuff into a rack, but I can't find one for a decent price, even used.


----------



## seross69

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Really want to move my stuff into a rack. Can not find them, even used, for a decent price.


Not on eBay or at a salvage yard? What are you willing to pay? I have a lot of contacts.


----------



## Prophet4NO1

Quote:


> Originally Posted by *seross69*
> 
> Not on ebay or salvage yard?? What you willing to pay i have lot of contacts


15 or 24 U would be enough for what I have and future plans. I check craigslist and I have tried ebay but don't see much so far. There is a FreeGeek here, they get servers but no racks.

Shooting for about $100 as my max. In the middle of another project right now, so not able to buy one for a few weeks if one does crop up.


----------



## Master__Shake

Quote:


> Originally Posted by *Prophet4NO1*
> 
> 15 or 24 U would be enough for what I have and future plans. I check craigslist and I have tried ebay but don't see much so far. There is a FreeGeek here, they get servers but no racks.
> 
> Shooting for about $100 as my max. In the middle of another project right now, so not able to buy one for a few weeks if one does crop up.


anywhere near Albany?










http://www.ebay.com/itm/SERVER-RACKS-/381572014698?hash=item58d776e26a:g:HwoAAOSwoBtW6qAe

damn cheap right now too


----------



## cones

Quote:


> Originally Posted by *Master__Shake*
> 
> anywhere near Albany?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.ebay.com/itm/SERVER-RACKS-/381572014698?hash=item58d776e26a:g:HwoAAOSwoBtW6qAe
> 
> damn cheap right now too


So if you only need one and you "win" the bid for $30 they are stuck with the rest of them?


----------



## Prophet4NO1

MN. lol


----------



## cones

Quote:


> Originally Posted by *Prophet4NO1*
> 
> MN. lol


If you're talking to me, I don't need any. Just thought that was weird in the ad.


----------



## Prophet4NO1

Quote:


> Originally Posted by *cones*
> 
> If you're talking to me, I don't need any. Just thought that was weird in the ad.


No, to Master Shake's post.


----------



## mcdoc77

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Nice setup! IPMI is pretty sweet. Part of the reason I like Supermicro server boards for any machine I won't actively be using. Hoping ASRock IPMI works just as well for my pfSense box. Is ECC support on i3 new for these chips? Think it was only Pentiums and Xeons the last couple generations.


As far as I know, it isn't that new. Let's have a look..
http://ark.intel.com/de/products/90729/Intel-Core-i3-6100-Processor-3M-Cache-3_70-GHz --> ECC Yes
http://ark.intel.com/de/products/77480/Intel-Core-i3-4130-Processor-3M-Cache-3_40-GHz --> ECC Yes
http://ark.intel.com/de/products/65693/Intel-Core-i3-3220-Processor-3M-Cache-3_30-GHz --> ECC No

So it hasn't always been the case, but they have for quite a while. BTW: Intel announced it, I think, some months after they released the C236 chipset. So at first, they didn't tell anyone
Quote:


> Also, why are you not running off a flash drive? FreeNAS was designed to be installed on a flash drive. It basically loads everything into RAM. Rarely ever touching the OS storage unless being shut down.


Just because I don't trust USB thumb drives that much, and I had the old HDD from my wife's notebook lying around


----------



## mcdoc77

WD Purple...well. First: They are rated for 24/7 and are very reliable hard disks.

But I guess the real question is "Why not WD Red?"
What is the distinction between Red and Purple? Mainly TLER support. What TLER does...well, it interrupts the error recovery so a hard disk doesn't get kicked out of an array. This only makes sense if you've got a controller which can handle that (and would kick the HD out) and a need for really quick responses. I have neither.
Ok, ZFS could handle this, yes, but I prefer an error recovery that does its job. Remember: This is a SOHO server. It is not designed as a SAN for ultra-high-availability databases or something like that.
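Side note for anyone who wants to check this on their own drives: TLER goes by SCT ERC in the ATA spec, and on many WD/HGST drives it can be queried and tuned with smartctl. A rough sketch, assuming a Linux box; the device path and timeout values here are placeholders, not a recommendation for any particular setup:

```shell
# Query the current SCT Error Recovery Control (TLER) setting.
# Times are reported in tenths of a second; "disabled" means the
# drive may spend minutes on internal error recovery.
smartctl -l scterc /dev/sda

# Set read and write recovery timeouts to 7.0 seconds (70 deciseconds),
# a common choice for drives sitting behind a RAID controller.
smartctl -l scterc,70,70 /dev/sda
```

On many drives the setting does not survive a power cycle, so people usually re-apply it from a boot script.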


----------



## Prophet4NO1

I am using Toshiba 7200rpm drives in my NAS. Good reliability and relatively cheap.


----------



## PuffinMyLye

Quote:


> Originally Posted by *Prophet4NO1*
> 
> I am using Toshiba 7200rpm drives in my NAS. Good reliability and relatively cheap.


8TB Seagate SMR drives here. Can't beat the price per GB/TB on them, and their performance downfalls are mitigated by the software array (unRAID) I use. I don't think they'd work well in any striped array environment.


----------



## seross69

Anyone interested in building a server, take a look at this ad. I also have a copy of Windows Server 2012 OEM. PM me about the server software.

http://www.overclock.net/t/1596272/mediapc-case-with-motherboard-cpu-psu-and-intel-nics


----------



## Cyclops

I re-did my server and doubled the storage capacity as well as the memory, because ZFS.

It might be pretty boring to you HARDCORE LIFT BROS, but it serves its purpose quite well.

Specs under the "File Server" thingy in my Sig Rig but to sum it up:

CPU: E5-2670
Mobo: Supermicro X9SRA
RAM: 65GB Samsung ECC 1333 MHz
HDD (Boot): 32GB Mushkin Ventura Pro
HDD (Storage): 10 * 6TB HGST Deskstar NAS
Cooling: Corsair H60 -_-
Case: Fractal Design Define S
PSU: Corsair RM650
OS: FreeNAS 9.10

I'm running RAID Z2 so two redundant drives. 60TB total storage. After everything is done I am left with...................40.2TB of usable storage.

Total cost of the server is........... $4665 CAD which is roughly $23 USD.

CPU and memory were bought used for $300 CAD. Majority of the money went to the hard drives. Now I'm left with 8 * 4TB WD Reds that I have no use for whatsoever.
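For reference, the 60TB-raw to 40.2TB-usable drop is mostly the decimal-TB vs binary-TiB unit gap plus the two RAID-Z2 parity drives. A rough sketch of the math (drive count and size from the specs above; the remaining gap is ZFS metadata and reserved space, which is approximate):

```shell
# RAID-Z2 across 10 drives = 2 drives of parity, 8 drives of data.
# Drives are sold in decimal TB (10^12 bytes); FreeNAS reports TiB (2^40).
awk 'BEGIN {
    data_bytes = 8 * 6 * 10^12        # 8 data drives x 6 TB each
    printf "%.1f TiB\n", data_bytes / 2^40
}'
# prints 43.7 TiB; ZFS metadata and reserved slop eat the rest
# down toward the ~40.2TB shown in the UI
```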


----------



## Gunfire

Quote:


> Originally Posted by *Cyclops*
> 
> Total cost of the server is........... $4665 CAD which is roughly $23 USD.


LOL

Nice set-up though


----------



## tiro_uspsss

major hardware changes since the last burner rig!








previously had a single socket s1366 system, now a dual s771 rig.
oh... & more burners







previously 10, now 19









specs:

2x Xeon L5410
Supermicro X7DAE
8x 4GB FB-DDR2 ECC+REG
NV 7300LE
ThermalTake TP XT 875W
4x Silicon Image 3114
Asmedia SATA3 PCIE
Lian Li PC-A77
Lian Li PC-A77F
Intel 330 120GB
Intel 520 60GB

the two cases are bolted together to form one 'super tower'


----------



## OliG

Quote:


> Originally Posted by *tiro_uspsss*
> 
> major hardware changes since the last burner rig!
> 
> 
> 
> 
> 
> 
> 
> 
> previously had a single socket s1366 system, now a dual s771 rig.
> oh... & more burners
> 
> 
> 
> 
> 
> 
> 
> previously 10, now 19
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> 2x Xeon L5410
> Supermicro X7DAE
> 8x 4GB FB-DDR2 ECC+REG
> NV 7300LE
> ThermalTake TP XT 875W
> 4x Silicon Image 3114
> Asmedia SATA3 PCIE
> Lian Li PC-A77
> Lian Li PC-A77F
> Intel 330 120GB
> Intel 520 60GB
> 
> the two cases are bolted together to form one 'super tower'


What is this build used for?

To be honest, I use what is left of my DVD collection as drink coasters in my office, so I'm curious as to why you need so many DVD burners?


----------



## bobfig

Quote:


> Originally Posted by *Cyclops*
> 
> . Now I'm left with 8 * 4TB WD Reds that I have no use for whatsoever.


I'll suffer and take them off your hands if they take up too much room.


----------



## maddangerous

Quote:


> Originally Posted by *bobfig*
> 
> I'll suffer and take them off your hands if they take up too much room.


Me too


----------



## Nizzen

Quote:


> Originally Posted by *Cyclops*
> 
> I re-did my server and doubled the storage capacity as well as the memory, because ZFS.
> 
> It might be pretty boring to you HARDCORE LIFT BROS, but it serves its purpose quite well.
> 
> Specs under the "File Server" thingy in my Sig Rig but to sum it up:
> 
> CPU: E5-2670
> Mobo: Supermicro X9SRA
> RAM: 65GB Samsung ECC 1333 MHz
> HDD (Boot): 32GB Mushkin Ventura Pro
> HDD (Storage): 10 * 6TB HGST Deskstar NAS
> Cooling: Corsair H60 -_-
> Case: Fractal Design Define S
> PSU: Corsair RM650
> OS: FreeNAS 9.10
> 
> I'm running RAID Z2 so two redundant drives. 60TB total storage. After everything is done I am left with...................40.2TB of usable storage.
> 
> Total cost of the server is........... $4665 CAD which is roughly $23 USD.
> 
> CPU and memory were bought used for $300 CAD. Majority of the money went to the hard drives. Now I'm left with 8 * 4TB WD Reds that I have no use for whatsoever.


Do you use an HBA for the HDD, or do you connect the disks direct to the MB ?


----------



## Prophet4NO1

Quote:


> Originally Posted by *OliG*
> 
> What is this build used for?
> 
> To be honest, I use what is left of my DVD collection as drink coasters in my office, so I'm curious as to why you need so many DVD burners?


My guess: ripping movies to sell them. We had some Hmong guys that came into the store when I worked at Micro Center that would buy cases of DVDs. As in shipping cases. Hell, sold one guy a whole pallet of them once. lol All for selling illegal movies.


----------



## Cyclops

Quote:


> Originally Posted by *bobfig*
> 
> I'll suffer and take them off your hands if they take up too much room.


Quote:


> Originally Posted by *maddangerous*
> 
> Me too


I think I'm gonna be the one to bear the burden. I'm that selfless, so I think I'll be keeping that 32TB of storage... mess.
Quote:


> Originally Posted by *Nizzen*
> 
> Do you use an HBA for the HDD, or do you connect the disks direct to the MB ?


The Supermicro board has 10 onboard SATA ports, and since I have 10 hard drives, they are all occupied.

The server is at its limit whichever way you look at it. 65GB of RAM, which is the recommended amount for a 60TB pool of storage. All SATA ports are in use, and I dislike using SATA-to-PCIe adapters and such.

The case itself has all 10 bays full, so I'm maxed there too. As you can see, the kool-aid jar is truly full.


----------



## cones

Quote:


> Originally Posted by *Prophet4NO1*
> 
> My guess: ripping movies to sell them. We had some Hmong guys that came into the store when I worked at Micro Center that would buy cases of DVDs. As in shipping cases. Hell, sold one guy a whole pallet of them once. lol All for selling illegal movies.


Did that pallet go into a mini van or a really small car? I'm picturing someone in the parking lot with a pallet putting each one into their car.


----------



## Prophet4NO1

Quote:


> Originally Posted by *cones*
> 
> Did that pallet go into a mini van or a really small car? I'm picturing someone in the parking lot with a pallet putting each one into their car.


Hand loaded it into a minivan.


----------



## cones

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Hand loaded it into a minivan.


That's better than what I was picturing. They just bought them so they could sell bootleg movies, must have been selling a lot to want that much.


----------



## Prophet4NO1

Quote:


> Originally Posted by *cones*
> 
> That's better than what I was picturing. They just bought them so they could sell bootleg movies, must have been selling a lot to want that much.


If you go to any of the Hmong or Somalian areas around Minneapolis/St Paul you will find stores packed full of bootleg movies. Most are from their native parts of the world. Though Hmong don't really have a "home" any more. So, no one really cares. Not a lot of Hollywood movies.


----------



## cones

Quote:


> Originally Posted by *Prophet4NO1*
> 
> If you go to any of the Hmong or Somalian areas around Minneapolis/St Paul you will find stores packed full of bootleg movies. Most are from their native parts of the world. Though Hmong don't really have a "home" any more. So, no one really cares. Not a lot of Hollywood movies.


Can't say I've ever been in any of those stores before, though we have the largest Hmong population now. I guess that makes sense if they aren't selling US movies.


----------



## Prophet4NO1

Quote:


> Originally Posted by *cones*
> 
> Can't say I've ever been in any of those stores before, thought we have the largest Hmong population now. I guess that makes sense if they aren't selling US movies.


I worked for TruGreen for a while doing sales. Spent a good chunk of time in Little Canada and the surrounding area. Lots of Hmong. One of the guys I worked with was also Hmong, so we would go to the stores and get drinks and food when in the area. There would be racks of DVDs. Right next to the bread and candy. lol


----------



## twerk

Anyone have any recommendations for a free AV for Server 2012 R2? Struggling to find one at the moment. I have MBAM Free installed but it doesn't offer real time protection.


----------



## bobfig

I had Bitdefender on Server 2008 before I went to 2012. I was able to get Norton on 2012 with an extra license I have, but it said it didn't like the idea lol.


----------



## twerk

Yeah, I probably will end up having to pay for Kaspersky or Bitdefender. It would just be nice if there was a decent free alternative. Sadly none of the free consumer AVs work on server.


----------



## Cyb3r

Yeah, atm I prefer Bitdefender over Kaspersky. I've had too many performance issues with Kaspersky lately, and with every new release they promise to fix the performance, yet it takes them nearly till the end of the lifetime of that license to get it right.


----------



## herkalurk

I use clam win, only really because it works natively with my email server app. I have kaspersky around the rest of my house.


----------



## Prophet4NO1

ESET basically got its start on business and server machines. Nice and light.


----------



## digitalbirth

My home NAS and networked cable tv tuner. Mostly all the components left over from my main rig after upgrading to a new Skylake setup.
Corsair Air 540 case (modded)
Gigabyte G1 Sniper 3 mobo (custom painted purple pearl)
Intel i7 2600K CPU
8GB KLEVV Genuine memory
AMD Radeon R9 380 GPU (needed for the TV tuner)
Ceton InfiniTV 4 networked TV tuner with CableCARD from Verizon
3x 1TB HDDs
120GB Intel SSD (OS and programs drive)
Corsair 860i PSU
Custom watercooled and cables sleeved


----------



## maddangerous

Quote:


> Originally Posted by *digitalbirth*
> 
> My home NAS and networked cable tv tuner. Mostly all the components left over from my main rig after upgrading to a new Skylake setup.
> Corsair Air 540 case, (modded)
> Gigabyte G1 Sniper 3 mobo, (custom painted purple pearle)
> Intel i7 2600k cpu
> 8 gigs KLEVV Genuine memory
> Amd radeon r9 380 gpu, (needed for the tv tuner)
> Ceton infiniTv 4 networked tv tuner with cable card from verizon
> 3 1 terabyte hdds
> 120 gig intel ssd, (OS and programs drives)
> Corsair 860i psu,
> Custom watercooled and cables sleeved


that's pretty sweet! Might I ask how you did the custom mobo paint?


----------



## digitalbirth

A lot of patience, and some [email protected] lol I had a group of people screaming at me over text, "nooooo, you're gonna destroy it!!" "You're crazy!!" But I'm always doing something crazy.
It took me 2 and a half hours just taping it off.


----------



## EvilMonk

Quote:


> Originally Posted by *digitalbirth*
> 
> A lot of patients, and some [email protected] lol I had a group of people screaming at me over text, "nmoooo, you're gonna destroy it!!" "You're crazy!!" But i'm always doing some thing crazy.
> It took me 2 and a half hours just taping it off.


Wow that is sweet!!!







Nice job!!!


----------



## digitalbirth

Quote:


> Originally Posted by *EvilMonk*
> 
> Wow that is sweet!!!
> 
> 
> 
> 
> 
> 
> 
> Nice job!!!


Thanks man!! If you're interested, look up my YouTube channel. It's CyberDen Systems. I'm working on a mini-ITX build. Custom case and custom waterblocks.


----------



## darksideleader

Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> major hardware changes since the last burner rig!
> 
> 
> 
> 
> 
> 
> 
> 
> previously had a single socket s1366 system, now a dual s771 rig.
> oh... & more burners
> 
> 
> 
> 
> 
> 
> 
> previously 10, now 19
> 
> 
> 
> 
> 
> 
> 
> 
> 
> specs:
> 
> 2x Xeon L5410
> Supermicro X7DAE
> 8x 4GB FB-DDR2 ECC+REG
> NV 7300LE
> ThermalTake TP XT 875W
> 4x Silicon Image 3114
> Asmedia SATA3 PCIE
> Lian Li PC-A77
> Lian Li PC-A77F
> Intel 330 120GB
> Intel 520 60GB
> 
> the two cases are bolted together to form one 'super tower'


Holy crap, what a monstrosity. I know automated DVD duplicators are stupidly expensive, but how long do you have to stick around when you have a "job" to do?


----------



## nerdalertdk

Hi all

Doing a worklog on my server project.
http://www.overclock.net/t/1596901/worklog-server-x-case-214


----------



## Prophet4NO1

PFSENSE parts are showing up.


----------



## CloudX

That's sick. I love pfsense!


----------



## Prophet4NO1

Quote:


> Originally Posted by *CloudX*
> 
> That's sick. I love pfsense!


I am excited. Graduating from DD-WRT.


----------



## seross69

what is pfsense?? Help this noob or dummy out!!????


----------



## Trogdor

Here's my LAN stuff. First is the pfSense box, second is the server. There will soon be a CLI CentOS box as well.





Quote:


> Originally Posted by *Prophet4NO1*
> 
> PFSENSE parts are showing up.


Which CPU did you choose?

I'm using a Pentium G3240, 4GB and an older Hitachi Deskstar HDD. It runs very well with a few packages installed.

Quote:


> Originally Posted by *seross69*
> 
> what is pfsense?? Help this noob or dummy out!!????


pfSense is an operating system based on FreeBSD that is designed to be a router with advanced functionality. It can run very well on older or current low power hardware.


----------



## seross69

Quote:


> Originally Posted by *Trogdor*
> 
> Which CPU did you choose?
> 
> I'm using a Pentium G3240, 4GB and an older Hitachi Deskstar HDD. It runs very well with a few packages installed.
> pfSense is an operating system based on FreeBSD that is designed to be a router with advanced functionality. It can run very well on older or current low power hardware.


Ohh, I see, said the blind man!! Never heard of this, as I am just using Windows Server 2012 Essentials!!


----------



## Prophet4NO1

Quote:


> Originally Posted by *Trogdor*
> 
> Here's my LAN stuff. First is the pfSense box, second is the server. There will soon be a CLI CentOS box as well.
> 
> 
> 
> 
> Which CPU did you choose?
> 
> I'm using a Pentium G3240, 4GB and an older Hitachi Deskstar HDD. It runs very well with a few packages installed.
> pfSense is an operating system based on FreeBSD that is designed to be a router with advanced functionality. It can run very well on older or current low power hardware.


G3260 that was in my file server before an upgrade.


----------



## seross69

Well here is my Server

It has an Asus Z87 WS motherboard
CPU is an i5-4670K
32GB of 2400MHz memory
2x 512GB Samsung Pro in RAID 0 for the OS
2x WD Red 6TB in RAID 0
1x WD Red 6TB; I need to find one more to make this a RAID 1
3x Seagate 3TB HDDs in RAID 0 on the Marvell 6G controller on the M/B
Areca ARC-1883-16 with 8GB cache
I have 4x SanDisk Ultra II 480s in RAID 0 and also have 3x WD RE SAS 4TB drives in RAID 5
Intel 2-port 10G NIC




Doesn't look like much, but I have room for 6 more HDDs and it is my media file server!!!

Here is the transfer rate of my RAID 0 SSDs


----------



## cones

Quote:


> Originally Posted by *Trogdor*
> 
> ...
> pfSense is an operating system based on FreeBSD that is designed to be a router with advanced functionality. It can run very well on older or current low power hardware.


Don't forget the firewalling.


----------



## ivoryg37

What can I do with a spare Supermicro C2758 ITX motherboard? I used to have it running pfSense but got tired of always having to open ports for each individual game for four roommates, so I switched back to a traditional router. Now it's just sitting in my closet. I was thinking of installing XPEnology on it. Any suggestions?


----------



## Trogdor

Quote:


> Originally Posted by *seross69*
> 
> Intel 2 port 10g NIC


Is that fiber or copper?

Quote:


> Originally Posted by *cones*
> 
> Don't forget the firewalling.


Yes! And the VOIP phoning, caching, VPNing, AVing and all the other ings this amazing OS does.


----------



## seross69

Quote:


> Originally Posted by *Trogdor*
> 
> Is that Fiber channel or copper?
> Yes! And the VOIP phoning, caching, VPNing, AVing and all the other ings this amazing OS does.


Copper!! the X540-T2


----------



## Trogdor

Quote:


> Originally Posted by *seross69*
> 
> Copper!! the X540-T2


Wow, nice. Have you picked up a 10G switch too?

I'm pretty jealous.


----------



## seross69

Quote:


> Originally Posted by *Trogdor*
> 
> Wow, nice. Have you picked up a 10G switch too?
> 
> I'm pretty jealous.


I did, but I sold it as it really is not needed with 3 PCs connected to the server. The 1Gb function of this NIC is a lot faster than the onboard solution, so I have teamed the 2 ports to my router and this is good enough!!


----------



## cones

Quote:


> Originally Posted by *ivoryg37*
> 
> What can I do with a spare Supermicro C2758 ITX motherboard? I used to have it running pfSense but got tired of always having to open ports for each individual game for four roommates, so I switched back to a traditional router. Now it's just sitting in my closet. I was thinking of installing XPEnology on it. Any suggestions?


You know you can enable UPNP on it if you wanted.


----------



## seross69

does anyone use Microsoft Server for their servers?? or all free ware??


----------



## ivoryg37

Quote:


> Originally Posted by *cones*
> 
> You know you can enable UPNP on it if you wanted.


I actually did enable UPnP, but for some reason it still required me to open and forward ports for each game. Xbox was constantly on a strict NAT type. LoL and CS:GO were experiencing extreme lag spikes until I opened ports for those as well, for each individual PC IP address. Also, I couldn't figure out a way to get my IPTV (multicast / IGMP routing) to work correctly on pfSense. I may try it again since I'm still new to pfSense


----------



## cones

Quote:


> Originally Posted by *ivoryg37*
> 
> I actually did enable UPnP, but for some reason it still required me to open and forward ports for each game. Xbox was constantly on a strict NAT type. LoL and CS:GO were experiencing extreme lag spikes until I opened ports for those as well, for each individual PC IP address. Also, I couldn't figure out a way to get my IPTV (multicast / IGMP routing) to work correctly on pfSense. I may try it again since I'm still new to pfSense


I will admit it is not the best at it. I believe you can make groups of IP addresses and then enter that in for the port destination to make it a little easier. It's also been a while since I've last used it.


----------



## Trogdor

Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


I use Server 2012 R2. The school I attend participates in the Microsoft Dreamspark program, so students in IT programs get keys to various versions of Windows included with their tuition.

If I wasn't using WSUS I'd be using CentOS for everything though.


----------



## beatfried

Server 2012 R2 for the infrastructure server (AD, DHCP, DNS, etc.) 2016 TP4 for the new fileserver.


----------



## herkalurk

Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


I have a Server 2012 R2 and Centos 7 server.


----------



## Vispor

Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


I'm a Microsoft Gold Partner so I use MS server products exclusively. Feel free to PM me any questions you may have.


----------



## Versa

Quote:


> Originally Posted by *beatfried*
> 
> Server 2012 R2 for the infrastructure server (AD, DHCP, DNS, etc.) 2016 TP4 for the new fileserver.


I quickly deleted TP4, although it was the SMB Essentials edition: I was pretty ticked that they included telemetry services in it. I'd still use something else even if it wasn't connected to an outside connection. Maybe Standard and Datacenter editions will be different?
Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


Used Datacenter for a while before swapping it. Reusing the same key for Hyper-V VMs is nice


----------



## Paul17041993

Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


I just use CentOS as I don't have a need for windows servers.


----------



## beatfried

Quote:


> Originally Posted by *Versa*
> 
> I quickly deleted TP4, although it was the SMB Essentials edition: I was pretty ticked that they included telemetry services in it. I'd still use something else even if it wasn't connected to an outside connection. Maybe Standard and Datacenter editions will be different?


It's a tech preview - they have to get the data, and I'm happy to help out there.


----------



## Zeus

Quote:


> Originally Posted by *seross69*
> 
> does anyone use Microsoft Server for their servers?? or all free ware??


I use Server 2008 Hyper-V Core for my NAS host


----------



## ebduncan

I guess I will bite

Personal server
Windows Server 2003
AMD FX [email protected], underclocked/undervolted
Gigabyte 990FXA-UD3 rev 1.0
Kingston DDR3 1600MHz ECC 32GB (8GB x4)
AMD FirePro 4300
Case: All-aluminum GlobalWin YCC-8870
Storage:
4x Seagate 4TB desktop drives
64GB OCZ SSD

Purpose: VMware, file server, 3D modeling.

Been thinking about upgrading it to something much faster, as I like to do lots of rendering on this machine. Figure in another year or two I can probably get a good deal on some older xeons (12c+)


----------



## mcdoc77

Maybe IPFire.
Nice firewall distro. Basically an IPCop fork, but focused more on usability... it's got a lot of plugins, for example.
https://en.wikipedia.org/wiki/IPFire


----------



## Rbby258

Quote:


> Originally Posted by *ebduncan*
> 
> I guess I will bite
> 
> Personal server
> Windows Server 2003
> AMD FX [email protected] underclocked/volted
> Gigabyte FXA990 UD3 rev 1.0
> Kingston DDR3 1600mhz ECC 32gb(8gb x4)
> AMD Firepro 4300
> Case: All Aluminum GlobalWin YCC-8870
> Storage:
> Seagate 4tb desktop drives x4
> 64gb OCZ SSD
> 
> Purpose: Vmware, file server, 3d modeling.
> 
> Been thinking about upgrading it to something much faster, as I like to do lots of rendering on this machine. Figure in another year or two I can probably get a good deal on some older xeons (12c+)


E5 2670's on ebay for like $50


----------



## mcdoc77

Name: Puk....it was meant to be a play-around-Linux system
OS: *Linux Mint Debian Edition*
Case:*BitFenix Neos Midi Tower*
CPU:*Intel J1900* (onboard)
Motherboard:*ASRock Q1900M Pro3*
Memory: *16GB G.Skill RipJawsX DDR3-1600 DIMM CL10 Dual Kit*
PSU: *430 Watt Corsair CX Series Non-Modular 80+ Bronze*
OS HDD (If you have one): 60GB Corsair Force Series SSD
Storage HDD(s): *1xWD Green 1TB, 1x WD Green 6TB*
Server Manufacturer: *me*









Old Hauppauge TV card (hence the "Pro" version of this MB) and a PCI SATA storage controller.

Ok, what's so special about this build? Basically nothing. That's the point.

I put together some old hardware, added a low-cost case, bought a MB with a quad-core CPU (~60€) and added a new PSU later. This system was meant to exist just for fun, for fiddling around with Linux and stuff.
Now I've added my 6TB Green and hence a storage controller.

Two weeks ago, I built my NAS (Build), so the 6TB in my desktop was no longer necessary.

But (since the idea for the NAS was born out of a 4-HD-loss disaster) I decided to add a second link to my backup chain. The plan is to rsync my files from the NAS periodically to that "server".

If I were to build this system from scratch, I'd rather go with an ASRock N3050M, 'cos who needs PCI anymore? Plus it would provide DVI/HDMI, which I use in dual-monitor. I got an old Radeon 5450 working ....

The case is a very simple build. I would not expect much. The side panels are flexing [....] Some solutions are kind of ghetto.... BUT it is a solid case with (rudimentary) cable management and dust filters in the front and for the PSU. Works for me. I would buy it again.




Edit - "1x WD Green 1TB, 1x WD Green 6TB" - well, I was kind of dreaming. No RAID-Z whatsoever.


----------



## ebduncan

Quote:


> Originally Posted by *Rbby258*
> 
> E5 2670's on ebay for like $50


Interesting choice, but I assembled my current server with basically leftover parts and it works well enough for my needs for now. Ideally any modern 2P system would improve my workflow, but it's hard to justify spending $$ when it could be used elsewhere









one day I hope to be able to afford something like the current dual 18c/36t haswells for a rendering machine.


----------



## Blackstare

Does my main gaming rig qualify as a server?

Uses: Gaming, 3D rendering, video editing, VM lab (ESXi on another drive), kitten video viewer, video streaming to Xbone and other devices. Everything goes to a GbE switch, and WiFi is handled by two Aruba IAP-105 access points with a virtual controller.

Xeon E5-2658v3 ES 12 core
Gigabyte X99M-Gaming 5
32GB Corsair Vengeance 2133Mhz
Couple SSDs and couple HDDs
Radeon R9 390


----------



## yawa77

I've been seeing tons of videos and posts about the E5-2670s. My current rig for Linux is an AMD 8120; I'd love a dual 2670 setup.


----------



## tiro_uspsss

Quote:


> Originally Posted by *ebduncan*
> 
> I guess I will bite
> 
> Personal server
> Windows Server 2003


why server '03??


----------



## cones

Quote:


> Originally Posted by *tiro_uspsss*
> 
> why server '03??


It's newer than XP. I would guess cost was why.


----------



## Master__Shake

Quote:


> Originally Posted by *yawa77*
> 
> I've been seeing tons of videos and post of the E5-2670s. My current rig for Linux is an AMD 8120, I'd love a dual 2670 setup.


the 2 e5-2670's i bought just came in










just need RAM, motherboard and coolers


----------



## Rbby258

Quote:


> Originally Posted by *yawa77*
> 
> I've been seeing tons of videos and post of the E5-2670s. My current rig for Linux is an AMD 8120, I'd love a dual 2670 setup.


They turbo to 3.2ghz


----------



## ondoy

E5-2670's are cheap, but the motherboards are expensive and hard to find...


----------



## KyadCK

I mean... It's not a server, but it's _for_ my servers...












Pair of WS-C4948-E's: Layer 3 48-port 1Gbps switches with 4x 10Gbps SFP+ ports and dual PSUs.


----------



## Blackstare

Quote:


> Originally Posted by *yawa77*
> 
> I've been seeing tons of videos and post of the E5-2670s. My current rig for Linux is an AMD 8120, I'd love a dual 2670 setup.


I used to have a 2670 before making the jump to x99, got it for 80 bucks on ebay, mobo was a MSI x79 GD45 also used for 100 bucks, cheap and powerful combination.


----------



## ebduncan

Quote:


> Originally Posted by *tiro_uspsss*
> 
> why server '03??


It was free :-D. I will probably upgrade the OS when I upgrade the hardware later down the road.
Quote:


> Originally Posted by *cones*
> 
> It's newer than XP. I would guess cost was why.


nailed it. I didn't feel like buying a new license for a newer os, for something I was basically doing as an experiment.


----------



## Prophet4NO1

Got my Intel 535 240GB SSD for the pfSense box today. All I need now is the cooler. Hate waiting.


----------



## Prophet4NO1

More PFSense parts. Waiting on the Noctua cooler and fans. Then, we build!



Quick question on data caching in pfSense. Is it best to have a dedicated drive for the cache? I would use another SSD, probably the same one. 240GB would be enough to cover a typical month. Also, is there a way to configure what is cached? Can I make sure it's only caching web traffic and not, for example, Netflix? This is a new OS for me, so I am not sure of all of its abilities. Coming from DD-WRT.


----------



## cones

If I remember right, you can set it to not cache files over a gig or something similar.
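For the curious: the pfSense Squid package exposes those settings in its GUI, and underneath they map to standard squid.conf directives. A sketch (the path and sizes here are placeholders, not recommendations):

```
# Largest single object Squid will keep in the cache:
maximum_object_size 512 MB

# On-disk cache: storage type, path, size in MB, L1/L2 directory counts.
cache_dir ufs /var/squid/cache 20000 16 256
```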


----------



## Versa

Quote:


> Originally Posted by *ondoy*
> 
> E5-2670's are cheap, but the motherboards are expensive and hard to find...


http://www.natex.us/Default.asp
If you are fine with the Intel S2600CP boards they are pretty cheap


----------



## lowfat

Quote:


> Originally Posted by *Prophet4NO1*
> 
> More PFSense parts. Waiting on the Noctua cooler and fans. Then, we build!
> 
> 
> 
> Quick question on data cache on PFSense. Is it best to have a dedicated drive for the cache? I would use another SSD, probably the same one. 240GB would be enough to cover a typical month. Also is there a way to configure what is cached? If I can make sure it's not caching Netflix for example and only web traffic for example? This is a new OS for me, so I am not sure of all of it's abilities. Coming from DD-WRT.


It would probably take squid a year to cache 240GB unless you have it set to caching videos, etc.

Also, for everyone: pfSense 2.3 was released yesterday. Complete UI overhaul. It looks significantly better.


----------



## Boyboyd

squid will only cache http by default iirc and not https
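For reference, the knobs being discussed map onto a handful of squid.conf directives. A rough sketch (on pfSense these are normally set through the Squid package GUI rather than edited by hand; the paths, sizes, and domain list here are illustrative):

```conf
# Put the cache on its own mount (e.g. a dedicated SSD); path is illustrative
# 100000 MB (~100GB) cache, 16 first-level and 256 second-level directories
cache_dir ufs /var/squid/cache 100000 16 256

# Skip huge downloads, per the "don't cache files over 1 gig" idea
maximum_object_size 1024 MB

# Don't cache streaming domains such as Netflix (domain list illustrative)
acl streaming dstdomain .netflix.com .nflxvideo.net
cache deny streaming
```

And as noted above, stock squid only sees plain HTTP; HTTPS traffic passes through uncached unless you go out of your way to intercept it.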


----------



## Prophet4NO1

Quote:


> Originally Posted by *lowfat*
> 
> It would probably take squid a year to cache 240GB unless you have it set to caching videos, etc.
> 
> Also for everyone, pfSense 2.3 was released yesterday. Comeplete UI overhaul. It looks significantly better.


Quote:


> Originally Posted by *Boyboyd*
> 
> squid will only cache http by default iirc and not https


OK, but for the second part of the question: is it best to use a second drive for the cache? If not, I might go ahead and get another drive for a RAID 1 setup and install everything on one array, cache included. I know the drive is a bit overkill size-wise for an install drive, but it was cheap enough that I did not care.


----------



## lowfat

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Ok, but for the second part of the question, best to use a second drive for the cache? If not I might go ahead and get another drive for a RAID1 setup. Install everything on one drive as well as the cache. I know the drive is a bit overkill size wise for the install drive. But, it was cheap enough I did not care.


I've always used the same drive as the boot drive. It will be much easier to set up.


----------



## Prophet4NO1

Quote:


> Originally Posted by *lowfat*
> 
> I've always used the same drive as the boot drive. Will be much easier to setup.


Fair enough. Might still do the RAID1 though. Leaving to take the kid to Disney on Friday. Last parts will arrive when I am gone. So, will work on it when I get back. Thanks.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Fair enough. Might still do the RAID1 though. Leaving to take the kid to Disney on Friday. Last parts will arrive when I am gone. So, will work on it when I get back. Thanks.


Look forward to seeing the completed build









Have an awesome time at Disney, we just went last month and had a blast. Safe travels.


----------



## Wildcard36qs

Quote:


> Originally Posted by *lowfat*
> 
> It would probably take squid a year to cache 240GB unless you have it set to caching videos, etc.
> 
> Also for everyone, pfSense 2.3 was released yesterday. Comeplete UI overhaul. It looks significantly better.


Sweet! Thanks for the heads up. Will be upgrading tonight.


----------



## techx86

Picked up this rack off Craigslist for $175, INCLUDING the Linksys RV082 VPN router, a Dell 4-tape drive thingy (won't be using it), an APC 1500 UPS which works, and 2 old Dell servers.
From top to bottom:

Old POS Cisco router thing
Dell PowerConnect 2748
Broken D-Link switch
Linksys RV082 VPN router
Another smaller Dell PowerConnect
A noisy Netgear (10/100)
HP ProCurve Managed 2650
Tyan AMD quad-core w/ 4GB DDR2 (used for whatever: testing drives, OSes, etc.)
1U Dell server, nothing special
Apple Xserve
1U IBM server (don't use it)
Dell tape drive thingy (don't use it)
Old P3 Dell server
Pull-out monitor and keyboard (I'm sure this has a name)
Compellent 8x SCSI, currently no mobo (just got this one here)
APC 1500
Another old H-E-A-V-Y Dell server (dual 604, 2GB RAM)
Chenbro dual processor, 8GB RAM, 8x SATA hot-swap (to be set up as 2nd backup)
Unnamed case, was my original do-it-all server. I think I posted it here a looong time ago. Set up for image rendering with DAZ and a Tesla card.
The rack is a Dell, I am guessing full depth since I have about a foot of working space behind my longest server.

The rack isn't really set up or running yet. I am waiting on rails, screws, and one more server. These will be used for setting up an actual physical environment for learning Windows 10 along with Windows Server 2012, with a domain and AD, and eventually a Sophos and a Zyxel for learning also. I work in IT and they require at least one M$ cert. I may be a Linux man, but the world isn't.

I was surprised I could nearly fill a rack with stuff I ALREADY had.




This one is my NEW do-it-all server. I have more info on this one since I just built it this year. Most parts are from eBay, but it's been working great so far.
Runs file server, TS3, VM host, P2P downloader, everything else.

Rosewill B2 Spirit case
Supermicro X8DTN+ LGA1366
48GB DDR3 FB-DIMM RAM (HP Hynix)
2x Intel Xeon E5530 (quad core w/ HT, 2.4 GHz - soon to be replaced with 2x 6-core Xeons)
1000W EVGA PSU
Total of 20TB of storage
10GbE SFP PCI card (direct from PC to server, no switch)
6x 1GbE RJ45 ports
Ubuntu 14.04.4




P.S. Racks are HEAVY AS F***. Damn near killed myself, my friend, and his truck getting it home (Not really, but still, heavy).


----------



## Versa

I'd strip everything outta that rack personally since it's junk (minus the UPS), though that rack at $175 is a steal


----------



## techx86

I agree, but junk is free!









I only have plans for the Tyan, the small Dell, the switches, the unnamed case, and the Chenbro. The rest is just kinda rack-filler until I can afford better hardware.
For now I shall continue scavenging the donation pile at work!


----------



## EvilMonk

Quote:


> Originally Posted by *techx86*
> 
> I agree, but junk is free!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I only have plans for the tyan, small dell, switches, the unnamed and the chenbro. The rest is just kinda rack-filler untill i can afford better hardware.
> For now i shall continue scavenging the donation pile at work!


Free is always good if you ask me. You just have to find a use for everything, and what's useless you just send to the recycling center to make room for a future server


----------



## Stupidfastwagon

Picked up a couple of things for free last month to setup my "shop" down in the basement.

ML150G6 (VM Box)



It originally came with a single 5506 and 2GB of RAM... so why not just stuff some extra crap into it! Here is where it sits now:

2x Intel Xeon 5540
12x 4GB DDR3 10600 ECC Registered
1x Samsung SSD 250GB
2x Seagate 250GB Drives (Raid 0)
P410 SAS Controller
Quad BroadCom Gb Nic
Windows 2012 R2 Datacenter

Besides being a freaking vacuum on startup, it's pretty quiet by my standards.

ML110G7 (Storage Host)

Intel i3-2120
2x 2GB DDR3 10600 ECC Unbuffered
1x 250GB WD RE (Boot)
3x 500GB WD Drives (Raid 5)
Windows 2012 R2 Standard

Dell OptiPlex GX620 (Torrent Box)
Pentium D
4x 1GB DDR2
1x 160GB
Quad BroadCom Gb Nic

My Asus RT68U (upstairs) feeds my Procurve 1410-24G Switch (basement)... Not a bad little setup in my eyes.


----------



## wiretap

On its way from Natex.. will post pictures when it arrives.. sorry, a little excited. Couldn't pass up the bundle deal. Only $451 after coupon code. lolll

Intel S2600CP Dual LGA 2011 Motherboard
128GB (16x8GB) Kingston 2Rx4 PC3L-10600R ECC RAM
2 x Intel Xeon E5-2670 2.60 Ghz. 20MB cache
2 x Passive Heat sinks (I will change these out with something else, but it came with the bundle)


----------



## Cyclops

Quote:


> Originally Posted by *wiretap*
> 
> On it's way from Natex.. will post pictures when it arrives.. sorry, a little excited. Couldn't pass up the bundle deal. Only $451 after coupon code. lolll
> 
> Intel S2600CP Dual LGA 2011 Motherboard
> 128GB (16x8GB) Kingston 2Rx4 PC3L-10600R ECC RAM
> 2 x Intel Xeon E5-2670 2.60 Ghz. 20MB cache
> 2 x Passive Heat sinks (I will change these out with something else, but it came with the bundle)


Good deal. For FreeNAS I presume?

14 x 8TB = 112TB. Close to the limit









----------



## Darklyric

Quote:


> Originally Posted by *Cyclops*
> 
> Good deal. For FreeNAS I presume?
> 
> 14 x 8TB = 112TB. Close to the limit
> 
> 
> 
> 
> 
> 
> 
> .


Limit? I didn't think anyone but the NSA could reach that one









Edit: for zfs***


----------



## wiretap

I'll just be using it as a VM box to mess around with. I'll be testing out some graphics cards for pass-through support to see if I can get 4 HTPC VMs up and running for all the TVs in my house. Currently I'm just using 4 independent HTPCs. My current file server is set up with SnapRAID in an ESXi Windows VM. I might move some new VMs to this new setup once I get it, to free up my current ESXi box for other applications.
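For anyone who hasn't seen SnapRAID: the whole setup is one small config file plus a scheduled sync, which is part of its appeal on mixed drives like a pile of WD Greens. A minimal sketch (drive mount points and names here are made up):

```conf
# snapraid.conf - parity lives on its own disk, at least as large as
# the largest data disk; content files are duplicated across disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data disks, each with a short label
data d1 /mnt/disk1/
data d2 /mnt/disk2/

exclude *.tmp
```

After that it's `snapraid sync` on a schedule to update parity, and `snapraid scrub` occasionally to verify the array.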


----------



## PuffinMyLye

Finished wiring my new server rack build (for now). Just waiting on the last motherboard to add the 3rd 10Gb DAC Twinax cable.


----------



## lowfat

Quote:


> Originally Posted by *wiretap*
> 
> I'll just be using it as a VM box to mess around with. I'll be testing out some graphics card for pass-through support to see if I can get 4 HTPC VM's up and running for all the TV's in my house. Currently I'm just using 4 independent HTPC's. My current file server is setup with SnapRAID in an ESXi Windows VM. I might move some new VM's to this new setup once I get it, to free up my current ESXi box for other applicaitions.


How does that work? HDMI cables running throughout the entire house? Or CAT6 to HDMI adapters?


----------



## wiretap

Quote:


> Originally Posted by *lowfat*
> 
> How does that work? HDMI cables running throughout the entire house? Or CAT6 to HDMI adapters?


I use Monoprice HDMI-over-Ethernet adapters for video/audio to each TV, and USB-over-Ethernet to each TV for the MCE IR remote (or wireless keyboard/mouse). So, each TV has three CAT6 cables running to it: two CAT6 for the HDMI adapter, and one CAT6 for the USB. It works flawlessly. In the several years I've been using them: no dropouts, and no dead adapters. I think my longest run is about 65 feet. I can turn the HTPCs on and off via the MCE remote since I have the keyboard wake setting turned on in the BIOS.


----------



## Prophet4NO1

Cooler for the PFSense router is finally here. Still waiting on fans.





The other parts and the tiny case.



Apparently the floods in TX are the cause of the shipping delays. My fans just shipped today. The only thing I need to decide on now is whether I want the fans as intake or exhaust. Thinking intake, just to keep positive pressure and help with dust.


----------



## Versa

I forgot how small that CPU fan is, any idea how loud it is? I'd love to build a quiet storage server; the SA120 DAS I have is pretty loud.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Versa*
> 
> I forgot how small that cpu fan is, any idea how loud it is? Love to build a quiet storage server, the SA120 DAS I have is pretty loud.


I have the bigger brother, 65mm thick, in my file server. It makes less noise than the three NF-F12 fans in the case with the LNA on them. Pretty much a silent machine. Since this cooler has the same fan I expect the same.

This is my file server. You can see the cooler pretty clearly in it.


----------



## wiretap

New ESXi build is starting.. parts are arriving. I think this should be enough resources for a few VM's.









16 cores / 32 threads, 128GB ECC


----------



## Cyclops

Quote:


> Originally Posted by *wiretap*
> 
> New ESXi build is starting.. parts are arriving. I think this should be enough resources for a few VM's.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 16 cores / 32 threads, 128GB ECC


You mean 130GB ECC.


----------



## EvilMonk

Quote:


> Originally Posted by *wiretap*
> 
> New ESXi build is starting.. parts are arriving. *I think this should be enough resources for a few VM's.*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 16 cores / 32 threads, 128GB ECC


Seriously, are you sure? I have doubts. Because I have 2 HP ProLiant DL380 G7s with 2x 6-core Xeon X5670 2.93GHz Westmere-EP (so 12 cores / 24 threads total), 96GB of DDR3 1333 ECC-R, and a 12x2TB external SAS2 MSA60 HP StorageWorks SAN, and I can hardly run more than 10 VMs on each server, so I think you're going to be limited there... not







lol seriously, if you doubt you have enough power with this server to run a few VMs, I don't know who does lol...







I started looking on eBay for boards to get me some of these E5-2670 Xeons a month ago, those things are sweet


----------



## cones

Quote:


> Originally Posted by *EvilMonk*
> 
> Seriously, are you sure? I have doubts? Because I have a 2 HP Proliant DL380 G7 with 2 6 cores Xeon X5670 2.93Ghz Westmere-EP (so 12 cores 24 threads total) ,96 Gb of DDR3 1333 ECC-R and a 12x2Tb external SAS2 MSA60 HP StorageWorks SAN I can hardly run more than 10 vm on each server so I think you're going to be limited there... not
> 
> 
> 
> 
> 
> 
> 
> lol seriously if you doubt you have enough power with this server to run a few VM I don't know who will be able to lol...
> 
> 
> 
> 
> 
> 
> 
> I started looking on eBay for boards to get me some of these E5-2670 xeons a month ago, those things are sweet


Depends on the OS of the VM. I mean, you could make a VM with 12 cores and 103GB of RAM just to say you have one


----------



## Versa




Quote:


> Originally Posted by *Prophet4NO1*
> 
> I have the bigger brother, 65mm thick, in my file server. It makes less noise then the three NF-F12 fans in the case with the LNA on them. Pretty much a silent machine. Since this cooler has the same fan I expect the same.
> 
> This is my file server. You can see the cooler pretty clearly in it.






That is pretty low profile, I could slap that on my 3930K or the mITX build with a G3240 lying around here





Quote:


> Originally Posted by *wiretap*
> 
> New ESXi build is starting.. parts are arriving. I think this should be enough resources for a few VM's.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 16 cores / 32 threads, 128GB ECC






Winning. I'm actually still debating whether to grab that bundle or start a new build with the 2630 v4s: that 10-core/20-thread per socket goodness


----------



## Prophet4NO1

Quote:


> Originally Posted by *Versa*
> 
> 
> That is pretty low profile, I could slap that on my 3930k or mITX build with a g3240 lying around here


A 3930K would not be the best idea if it's ever under full load for long. My E3-1241 v3 gets into the 65C range when transcoding video for my mobile devices. Transcoding on the fly barely stresses it, but when you dump videos for storage on a mobile device it maxes out the cores. Did 20 DVD MKV files in about 2 hours and sat at 65C, give or take a couple degrees, the whole time. Pretty sure a 3930K would be more than that poor cooler can handle.
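That "dump videos for a mobile device" workload is basically batch transcoding, which is why it pegs every core for hours. A sketch of that kind of job with HandBrakeCLI (paths and preset name are illustrative, not from the post):

```shell
# Batch-transcode every MKV in a folder for a mobile device.
# This saturates all CPU cores until the queue is done.
for f in /srv/media/dvds/*.mkv; do
    HandBrakeCLI -i "$f" \
        -o "/srv/media/mobile/$(basename "${f%.mkv}").mp4" \
        --preset "Fast 1080p30"
done
```

On-the-fly streaming only transcodes at slightly faster than playback speed, which is why it barely registers by comparison.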


----------



## Versa

I need to replace it; it's degraded over the years. It's holding up for 2 streams and stable at 3.9, but I wouldn't trust it for VMs or anything heavy right now. Looking to do a Xeon build for studies + Plex once I decide between 2670s or a 2630 v4


----------



## jibesh

Quote:


> Originally Posted by *Versa*
> 
> Looking to do a Xeon build for studies + plex once I decide for 2670s or 2630v4


Why an E5-2630 v4 build when you can build a complete dual E5-2670 system for the cost of one E5-2630 v4 most likely?


----------



## beatfried

Quote:


> Originally Posted by *jibesh*
> 
> Why an E5-2630 v4 build when you can build a complete dual E5-2670 system for the cost of one E5-2630 v4 most likely?


because the 2670 is 3 generations / 4 years old?
DDR4
14nm
AVX2
1.5TB RAM
TSX-NI
etc, etc.

but why am I even telling you this after you asked that question............


----------



## Versa

Quote:


> Originally Posted by *jibesh*
> 
> Why an E5-2630 v4 build when you can build a complete dual E5-2670 system for the cost of one E5-2630 v4 most likely?


Power, efficiency, and a lot of other differences between the v1s and v4s. Would prolly get a Xeon D-1540/1520 later for a 1U build somewhere down the line.


----------



## Dotachin

Quote:


> Originally Posted by *techx86*


Sir, you have the only picture on the internet of a Nanoxia Deep Silence 6 with a dual cpu motherboard.
+Rep









Just ordered one


----------



## Prophet4NO1

So, POST-testing my hardware before putting it in the case for the little pfSense box. Turns out the G3260 I had laying around is not supported by this mobo. I thought it was, but I may have mixed it up with one of the other boards I looked at. So, now I either need to go to Microcenter and get the Celeron that they have in stock and drop ECC support, or order a G3240 instead and keep ECC support. Kinda want to keep the ECC support since I had a 4GB ECC stick already. Guess the stick works either way, it just won't run as ECC with the Celeron. It's only a $25 difference in price. Decisions.


----------



## The Pook

I call my rig a server. Does that count?


----------



## Versa

Quote:


> Originally Posted by *Prophet4NO1*
> 
> So, post testing my hardware before putting it in the case for the little PFSENSE box. Turns out the G3260 I had laying around is not supported by this mobo. I thought it was, but I may have mixed it up with one of the other boards I looked at. So, no I either need to go to Microcenter and get the Celeron that they have in stock an drop ECC support. Or, order a G3240 instead and keep ECC support. Kinda want to keep the ECC support since I had a 4GB stick already. Guess the stick works either way. Just wont have ECC with the Celeron. It's only a $25 difference in price. Decisions.


The G3240 has ECC support? That seems weird when the 3260 doesn't
Quote:


> Originally Posted by *The Pook*
> 
> I call my rig a server. Does that count?


----------



## Prophet4NO1

Quote:


> Originally Posted by *Versa*
> 
> G3240 has ECC support? That seems weird when the 3260 doesn't


Yep.

http://ark.intel.com/products/80796/Intel-Pentium-Processor-G3240-3M-Cache-3_10-GHz

I think all, or at least most, of the Pentium chips, as well as a couple of i3 chips, support ECC. The 3260 I was trying to use does as well; it worked in my Supermicro board with 32GB of ECC before the E3 upgrade in the file server.
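ARK tells you the CPU supports ECC, but the board and BIOS have to play along too before it's actually active. One way to sanity-check a running Linux box (commands are standard; output obviously depends on the hardware):

```shell
# Does the firmware report an ECC-capable DIMM configuration?
# Expect something like "Error Correction Type: Single-bit ECC" when active.
sudo dmidecode --type memory | grep -i 'error correction'

# If the kernel's EDAC driver is loaded, corrected-error counters are
# exposed per memory controller; their mere presence means ECC is live.
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```

On a FreeBSD-based box like pfSense the dmidecode check works the same way; the EDAC sysfs path is Linux-only.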


----------



## Dalchi Frusche

Here are the humble beginnings of my server. It runs Samba shares, backups, two Minecraft servers, and a database to store all my wife's recipe information. Specs below.

*CPU* Phenom II X4 945 3.0Ghz
*Motherboard* Asus M5A88-V EVO
*RAM* ADATA 6GB
*Hard Drive* Hitachi Deskstar 160GB
*Hard Drive* Western Digital WD3200
*Hard Drive* Western Digital WD800
*OS* Ubuntu Server
*Optical Drive* Generic DVD RW
*Power* Raidmax RX 530W
*Graphics* Integrated
*Cooling* Xigmatek Loki
*Case* XCLIO A380BK
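The Samba side of a home box like this is just a few stanzas in /etc/samba/smb.conf. A minimal share sketch (share name, path, and user below are hypothetical):

```conf
[global]
   workgroup = WORKGROUP
   server string = home file server
   security = user

# Share name, path, and user are hypothetical examples
[recipes]
   path = /srv/samba/recipes
   valid users = dalchi
   read only = no
   browseable = yes
```

Add the user with `smbpasswd -a dalchi`, reload Samba, and the share shows up to Windows and Linux clients alike.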

Before:



After:


----------



## The Pook

Needs more pink


----------



## Prophet4NO1

Looks like the Celeron G1840 also supports ECC memory: http://ark.intel.com/products/80800/Intel-Celeron-Processor-G1840-2M-Cache-2_80-GHz When I get a chance I will head over to MC and pick one up to finish this build. Really annoying, but it's my fault for not double-checking the CPU support list.

Gotta find something to stick this G3260 in after pulling it from the file server.

Pic after post testing with the unusable CPU.


----------



## Prophet4NO1

This machine is ticking me off. So, got the Celeron G1840 today. Stuck it in, same issue. Got in contact with William over at ASRock. We will see what happens from here. Might try a stick of RAM from my file server and see if I can get it to boot at least.


----------



## Prophet4NO1

Swapped RAM sticks for one from my file server. Works now. Guess I got a bad stick. But now the ASRock IPMI console redirection won't run. It's always something. lol
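When the browser-based console redirect (usually a Java KVM applet) refuses to run, IPMI serial-over-LAN is a useful fallback that skips the browser entirely. A sketch with ipmitool (the BMC IP and credentials here are hypothetical):

```shell
# Check that SOL is enabled on the BMC
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sol info

# Attach to the serial console (needs console redirection to a COM port
# enabled in the BIOS); detach with the "~." escape sequence
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sol activate
```

This gets you BIOS screens and a text console over the network, which covers most of what the graphical redirect is used for.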


----------



## DogeTactical

OS: Ubuntu/LXDE
Case: OptiPlex GX270 SFF
CPU: Pentium 4
Motherboard: GX270
Memory: One 512MB stick and one 256MB stick








PSU: Stock
Storage HDD(s): 40GB HDD
Server Manufacturer (Ex: Dell, HP, You?): Dell

So I got it for free from my grandma and decided to see what I could do with it.
I use it as a download server and for pretty much just messing around in Linux









EDIT: and now 24/7 Folding


----------



## Prophet4NO1

Been talking with William over at ASRock. Looks like the EEPROM for the BMC is not working correctly, so I sent a BIOS screenshot with the two MAC addresses on it. He programmed and tested a new chip for me and put it in the mail today. Fingers crossed this finally gets me going. A simple router build just keeps having stupid issues holding me up. Good times. Lol


----------



## Dalchi Frusche

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Been talking with William over at Asrock. Looks like the EEPROM for the BMC is not working correctly. So i sent a BIOS screen shot with the two MAC addresses on it. He programmed and tested a new chip for me and put it in the mail today. Fingers crossed this finally gets me going. Simple router build just keeps having stupid issues holding me up. Good times. Lol


If it's not one thing, it's another. I have my fingers crossed for you, hoping that the new chip solves the issue. It is awesome that ASRock was able to do that for you. I'm looking forward to seeing that working router of yours.


----------



## Dark

Here's the server setup before we moved to a SFH last week--

Tripplite 12U case
Ubiquiti USG
Ubiquiti AC Pro
APC SC1500 UPS
Dlink 16port gigabit switch
Dell R730xd - 2x 6c, 64GB DDR4 ECC, 2x 256GB SSD, 6x 3TB 7200
Dell R320 - 1x 8c, 64GB DDR3 ECC, 1x 60GB, 4x 4TB 7200rpm

What isn't shown is the Tripplite slide out console KVM.

What's changing?
42U server rack
dedicated 2x 20amp circuits
additional APC SC1500
Ubiquiti Unifi 24-port 500w switch
2x Dell R710 servers


----------



## Prophet4NO1

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> If it's not one thing, it's another. I have my fingers crossed for you hoping that the new chip solves the issue. It is awesome that Asrock was able to do that for you. I'm looking forward to seeing that working router of yours.


Hope so as well. The main reason I got a server-grade board was for IPMI: never need a screen or a keyboard. It has not really worked out like that so far on this one. Lol


----------



## Prophet4NO1

Went ahead and stuffed the machine into the case. It is a tight fit, especially with all the extra cables for the front panel that I won't be using. I am still waiting for the EEPROM chip, but I should be able to just take the NIC out to get to it. So, some pics of the machine built and turned on.





The plan for now is to not use the mobo LAN at all; I will use the Pro 1000 to run it all. Bottom to top: WAN, LAN1 (main LAN), LAN2 (game server and work-area switch), WiFi, EMPTY. I have more ports than I need, but I figure it gives me more room to expand later. Maybe another WiFi, for example.


----------



## twerk

Hi all. Just bought a new RAID card - HP P440 w/4GB FBWC. Very happy with the performance, I only have 4 hard drives in RAID 5 currently and the speeds are insane. Looking to add another 2 next week and then another 2 not too far in the distant future.

Just wanted to know if my temperatures are normal. The card is sitting at 85-90C constantly; is this too high? I'm thinking of buying another 2 hot-swap fans to cool it down a bit.


----------



## Versa

What are your ambients? That's seriously high. My SA120 DAS (controller + drives) and the LSI card (inside a whitebox) never go above 30C.


----------



## Prophet4NO1

So, I discovered a problem with my Pro 1000 NIC. Turns out the heatsink is not making contact with the two chips under it, and it's soldered onto the PCB. This build is driving me nuts. So, now I have to desolder the heatsink (just four pins), put new thermal compound on, and solder it all back together. Really is one thing after another.


----------



## ChRoNo16

Prophet, how did you even notice? That seems like a tiny detail I would never see!


----------



## Prophet4NO1

Quote:


> Originally Posted by *ChRoNo16*
> 
> Prophet how did you even notice? that seems like a tiny detail I would never see!


Dumb luck. I was just looking at the card at my work desk, checking it out basically. I wanted to see how big the chip was under the heatsink, so I held it up to the light to look. That's when I could see it was not even touching. It's supposed to be direct contact, and neither chip has any.

This is the best picture I could get.



It's clear one chip is not contacting. The other one to the right looks like it might be touching a tiny bit, but I am not 100% sure.


----------



## twerk

Quote:


> Originally Posted by *Versa*
> 
> What are your ambients, thats seriously high
> 
> 
> 
> 
> 
> 
> 
> ? My SA120DAS (controller + drives) and the LSI card (inside a whitebox) never go above 30C.


Ambient is 22C.

All other components are ice cold. CPU is always <40C and chipset <50C even under load.

In the HP iLO the default warning alert for the card is 100C, which makes me think it's meant to run hot... not sure.


----------



## Cyclops

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Dumb luck. I was just looking at the card at my work desk. Checking it out basically. I wanted to see how big the chip was under the heat sink, so I held it up to the light to look. That's when I could see it was not even touching. It's direct contact. Both chips have no contact.
> 
> One chip it's clear is not contacting. The other one to the right looks like it might be touching a tiny bit, but I am not 100% sure.


Maybe they're just missing a thermal pad.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Cyclops*
> 
> Maybe they're just missing a thermal pad.


There is a thin layer of TIM on the heatsink. Not sure why the contact is so poor, but I will fix it. It will have to wait till next weekend; leaving Monday for work, gone all week.


----------



## cones

Quote:


> Originally Posted by *twerk*
> 
> Ambient is 22C.
> 
> All other components are ice cold. CPU is always <40C and chipset <50C even under load.
> 
> In the HP iLO the default warning alert for the card is 100C which makes me think it's meant to be hot... not sure.


Maybe it's not getting any air flow in the case?


----------



## DaveLT

Quote:


> Originally Posted by *Prophet4NO1*
> 
> There is a thin layer of TIM on the heatsink. Not sure why the contact is so poor. But, i will fix it. It will have to wait till next weekend. Leaving Monday for work. Gone all week.


Soldered at the wrong height?
Quote:


> Originally Posted by *cones*
> 
> Maybe it's not getting any air flow in the case?


They're designed to get A LOT of airflow, so that's probably not it.


----------



## Prophet4NO1

Quote:


> Originally Posted by *DaveLT*
> 
> Soldered at the wrong height?
> They're designed to get A LOT of airflow so that.


Possible. I suck at PCB soldering, but I will fix it.


----------



## ChRoNo16

That's for sure luck, I would never have noticed.


----------



## Dalchi Frusche

pfSense build update:
So I finally managed to get ahold of a trash-bound ATX case. A local gentleman had some cases that were headed to the scrap yard. Ended up getting an eMachines mATX and two slim ITX cases for free. I also bought a 1TB WD Black off him for $10 and he threw in an extra 60GB drive.









Parts acquired:



Time to begin:



Cable management = meh:



First power-on, everything works. Now to get pfSense installed. I was worried that running a PSU with only a 20-pin ATX connector and a 4-pin CPU connector would hold the build back. Didn't want to use the bulky non-modular 580W unit that I had because the cables would've been a rat's nest.



Specs:
Mobo: GIGABYTE GA-MA785GM-US2H
CPU: Athlon II X2 220 Dual Core 2.8GHz w/ stock cooler
PSU: I forget, just old and low power
HDD: 60GB WD Caviar WD600 IDE
NIC: Intel Pro Dual/Quad(still to purchase)

Next order of business:
- Add 80mm exhaust fan
- Order dual/quad Intel NIC
- Run CAT6 through the house
- Purchase a Gigabit switch
- Purchase patch panel
- Build my custom rack in the basement
- Install all parts in rack
- And much more…


----------



## DogeTactical

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Next order of business:
> - Add 80mm exhaust fan
> - Order dual/quad Intel NIC
> - Run CAT6 through the house
> - Purchase a Gigabit switch
> - Purchase patch panel
> - Build my custom rack in the basement
> - Install all parts in rack
> - And much more…


I would love to watch this build log as you add to it


----------



## Dalchi Frusche

Quote:


> Originally Posted by *DogeTactical*
> 
> I would love to watch this build log as you add to it


Currently typing up the first post, will edit a link to the build log of my Node Zero project once I submit.

EDIT:

Build log for my Node Zero to include PFSense, File, PLEX servers and necessary switches, patch panels, cables, etc.


----------



## wiretap

Got my dual Xeon put into a Phanteks Enthoo Pro case. Still a work in progress, and I have to figure out what I'm going to do for VM storage. I have about 10TB of WD Greens laying around, and two SSDs. Since I have so much RAM (128GB), I'll probably set up RAM caching using PrimoCache for a VM that needs some fast speed. I also dropped in my Highpoint DC7280, a 5-port USB 3.0 card, and a cheap graphics card. I'll probably try Hyper-V, and if that doesn't work out, use ESXi and migrate some OVF templates over from my other ESXi server.


----------



## DrockinWV

Here is my freshly built server; finished it up last week. My first FreeNAS experience.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *DrockinWV*
> 
> Here is my freshly build server finished it up last week, for my first Freenas experienec.


Looks good! Let us know how the FreeNAS experience goes.


----------



## DrockinWV

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Looks good! Let us know how the FreeNAS experience goes.


Thanks, I'm still pretty excited about it and learning FreeNAS as much as I can. I have printed out the 9.10 manual and am trying to get everything set up step by step, but it's still not fully up and running the way I'd like yet.


----------



## Prophet4NO1

Got the EEPROM chip today from ASRock. Still no functioning console redirect.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Got the EEPROM chip today from ASRock. Still no functioning console redirect.


Man, that sucks! I'm getting just as frustrated as you.







I hate to see when things don't work out for people. So what's the next step?


----------



## Prophet4NO1

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Man, that sucks! I'm getting just as frustrated as you.
> 
> 
> 
> 
> 
> 
> 
> I hate to see when things don't work out for people. So what's the next step?


Not 100% sure. Out of town this week for work, so I can't do anything till I get back.


----------



## Prophet4NO1

Finally had some success! I reinstalled Firefox and Chrome. Now I can get into console redirect via Firefox; Chrome still hates me. So now I just need to get a new soldering iron to fix the heatsink on the NIC. Looks like there is some flux residue on the PCB; someone may have dinked with it before. You don't usually see that out of the factory, so this could be interesting.


----------



## Turgin

My new FreeNAS server. CPU, RAM, NIC, and hard drives were salvaged from servers at work that were being thrown out.

CPU: 2 x Xeon X5670 6c/12t @ 2.93 GHz
RAM: 96GB ECC (12 x 8GB)
Drives: 10 x HGST 2TB Ultrastar SATA2
NIC1: dual port Intel gigabit NIC
NIC2: dual port Intel X520 10 gigabit

Motherboard: Supermicro X8DT6-F
CPU Heatsinks: Supermicro SNK-P0035AP4
CPU Fans: Noctua NF-B9
Case: Rosewill RSV-L4000
Power supply: EVGA G2-850
Drive cages: 2 x Rosewill RSV-SATA-Cage-34





My VM host will be a Cisco C220M3 server with dual Xeon E5-2650 CPU, 64GB RAM, and another 10Gb NIC.


----------



## Prophet4NO1

I have to ask, what are you doing on a FreeNAS box that can use that much power? My little E3 only ever gets taxed transcoding with Plex; nothing else I have used it for really uses any CPU. Nice machine, by the way.


----------



## Turgin

Quote:


> Originally Posted by *Prophet4NO1*
> 
> I have to ask, what are you doing on a freenas box that can use that much power? My little E3 only ever gets taxed transcoding with Plex. Nothing else I have used it for really uses any CPU. Nice machinne, by the way.


Thanks.

Nothing that needs that much CPU power, but they were free. I just had to spend ~$150 on the motherboard.

I will be using iSCSI over the 10Gb link to my VM host (ESXi or XenServer). The 96GB of RAM ought to give me plenty of ARC to make things as fast as possible. I actually have something like 40 of the 2TB drives, so plenty of shelf spares.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Turgin*
> 
> Thanks.
> 
> Nothing that needs that much CPU power, but they were free. I just had to spend ~$150 on the motherboard.
> 
> I will be using iSCSI over the 10Gb between my VM host (ESXi or Xenserver). The 96GB of RAM ought to give me plenty of ARC to make things as fast as possible. I actually have something like 40 of the 2TB drives so plenty of shelf spares.


No one ever gives me free hardware.









Lol


----------



## EvilMonk

Quote:


> Originally Posted by *Turgin*
> 
> My new FreeNAS server. CPU, RAM, NIC, and hard drives were salvaged from servers at work that were being thrown out.
> 
> CPU: 2 x Xeon X5670 6c/12t @ 2.93 GHz
> RAM: 96GB ECC (12 x 8GB)
> Drives: 10 x HGST 2TB Ultrastar SATA2
> NIC1: dual port Intel gigabit NIC
> NIC2: dual port Intel X520 10 gigabit
> 
> Motherboard: Supermicro X8DT6-F
> CPU Heatsinks: Supermicro SNK-P0035AP4
> CPU Fans: Noctua NF-B9
> Case: Rosewill RSV-L4000
> Power supply: EVGA G2-850
> Drive cages: 2 x Rosewill RSV-SATA-Cage-34
> 
> 
> 
> 
> 
> My VM host will be a Cisco C220M3 server with dual Xeon E5-2650 CPU, 64GB RAM, and another 10Gb NIC.


What the hell is FreeNAS doing on something that overkill??? lol


----------



## EvilMonk

Quote:


> Originally Posted by *Prophet4NO1*
> 
> No one ever gives me free hardware.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Lol


Me too friend, join the club lol


----------



## Turgin

Quote:


> Originally Posted by *EvilMonk*
> 
> What the hell is freenas doing on something that overkill??? lol


What's overkill about it other than the CPUs?


----------



## EvilMonk

Quote:


> Originally Posted by *Turgin*
> 
> What's overkill about it other than the CPUs?


The amount of RAM for the storage capacity installed in it lol...
I have 2 SANs, both professional HP solutions, and you are using more RAM than I do lol








My 2 SANs are HP StorageWorks MSA60s, each with 36TB in RAID6 (12x3TB), each on a dual-link SAS interface card (HP Smart Array P812 with 1GB FBWC) connected to a ProLiant DL360 G7 with 2 L5640s (low-power 6-core Westmere-EP at 2.26GHz) and 72GB of DDR3-1333 ECC Registered... and I have never loaded my RAM, even with a bunch of VMs working from these drives plus all the editing work I store and edit on them from my Mac Pros lol...
That's what I meant by overkill... You are using a badass overkill server to run FreeNAS off 10x2TB drives... I run mine, 24x3TB drives, on less, and can't get close to choking it with a lot more workload on this single HP server: iSCSI for all my VMware guests (close to 10 VMs), a Plex server, AD server, SharePoint, a basic Cisco ASDM Windows install console, a Backup Exec server/console, all my libraries for video and photo editing, and a couple of MySQL databases.
All behind a Cisco ASA 5505 on its own internet connection, for backups between the office and my house that run incrementally each night...

For reference, the RAM requirements for a business, taken from the FreeNAS website:
*Typical Requirements for Small and Medium Business:*
32GB ECC RAM minimum *(1GB per TB of storage is a good rule of thumb but might need to be adjusted depending on workload/application)*
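That quoted rule of thumb is easy to sanity-check against any build. A minimal sketch (the 32GB floor and 1GB-per-TB ratio come from the FreeNAS guidance quoted above; the function name and example figures are mine):

```python
def recommended_ram_gb(storage_tb, gb_per_tb=1, business_floor_gb=32):
    # FreeNAS rule of thumb: ~1GB of RAM per TB of storage,
    # with a 32GB minimum suggested for small/medium business use.
    return max(business_floor_gb, storage_tb * gb_per_tb)

# Turgin's box: 10 x 2TB = 20TB, so the 32GB floor dominates
# and his 96GB is well clear of the guideline.
assert recommended_ram_gb(20) == 32
# A 36TB array crosses the floor and the per-TB ratio takes over.
assert recommended_ram_gb(36) == 36
```

By that metric, both builds in this exchange have RAM to spare; the ratio only starts to bite on much larger pools.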


----------



## Turgin

Quote:


> Originally Posted by *EvilMonk*
> 
> The amount of RAM for the storage capacity installed in it lol...


It's just what I have. We decommissioned an old test environment consisting of a Cisco UCS blade chassis and a couple of NetApp filers. I took the CPU and RAM from the UCS blades and the drives from the NetApp disk shelves before they were thrown away. In further storage cleaning I rescued the 10Gb NICs as well as a few other odds and ends.

I'm open to trying other storage platforms. I've installed napp-it on OmniOS and NexentaStor but didn't like either as much as FreeNAS.


----------



## EvilMonk

Quote:


> Originally Posted by *Turgin*
> 
> Its just what I have. We decommissioned an old test environment consisting of a Cisco UCS blade chassis and a couple of NetApp filers. I took the CPU and RAM from the UCS blades and the drives from the NetApp disk shelves before they were thrown away. In further cleaning of storage I rescued the 10Gb NICs as well as a few other odds and ends.
> 
> I'm open to trying other storage platforms. I've installed Nappit on OmniOS and Nexentastor but didn't like either as much as FreeNAS.


What would you like to run besides storage?

There is plenty of power in there to run a lot of services and allocate resources to different tasks, if you ask me. My two 6-core Westmere-EP chips handle a lot more tasks, and they are low-power parts at 2.26GHz versus the two 2.93GHz X5670s in your server.


----------



## Turgin

Quote:


> Originally Posted by *EvilMonk*
> 
> What would you like to run beside storage?


As of right now I plan on some sort of home PVR backend like Myth or similar to serve probably 6 or 7 TVs, home automation control, home security DVR for 5 or so cameras, backup destination for 5 or so Windows clients as well as shared file storage for those same clients, internal root CA, SFTP, etc. My kids want a Minecraft server and a Teamspeak or Mumble server. I'll finalize the specifics of those once my server infrastructure is in place and I can test the various options.

My original intent was to run VirtualBox in a jail on FreeNAS for the guests I have in mind. I ditched that plan when I realized you can't (or at least *I* can't) present multiple physical NICs from FreeNAS into VB, and I intend to build a true DMZ with some of my guests. So my plan morphed into discrete storage and hypervisor servers.

I have a Cisco C220M3 server I plan to use as either an ESXi or XenServer host. It has 2 x E5-2650 8-core CPUs and 64GB RAM installed already, but it only holds 2.5" drives, and the ones I have several of are 76GB 15K SAS drives, thus the need for separate storage. iSCSI is my choice for now, and I will use the 10Gb NICs direct-cabled for that.

Installing Windows 2012 R2 on the storage server right now to give Storage Spaces a spin.


----------



## EvilMonk

Quote:


> Originally Posted by *Turgin*
> 
> As of right now I plan on some sort of home PVR backend like Myth or similar to serve probably 6 or 7 TVs, home automation control, home security DVR for 5 or so cameras, backup destination for 5 or so Windows clients as well as shared file storage for those same clients, internal root CA, SFTP, etc. My kids want a Minecraft server and a Teamspeak or Mumble server. I'll finalize the specifics of those once my server infrastructure is in place and I can test the various options.
> 
> My original intent was to run VirtualBox in a jail on FreeNAS for the guests I have in mind. I ditched that plan when I realized you can't (or at least *I* can't) present multiple physical NICs from FreeNAS into VB. I intend to build a true DMZ with some of my guests. So, then my plan morphed into discreet storage and hypervisor servers.
> 
> I have a Cisco C220M3 server I plan to use as either an ESXi or Xenserver host. It has 2 x E5-2650 8 core CPUs and 64GB RAM installed already. But, it only holds 2.5" drives which I have several of but they are 76GB 15K SAS drives thus the need for separate storage. iSCSI is my choice for now and I will use the 10Gb NICs direct cabled for that.
> 
> Installing Windows 2012 R2 on the storage server right now to give Storage Spaces a spin.


Sorry, I tried to find out what RAID controller you are using but couldn't find that information. Are you using the one built into the motherboard, or a hardware one in passthrough?
I run Windows Server 2012 R2 Datacenter on my storage server because of my RAID controller (HP Smart Array P812 with 1GB FBWC), which doesn't work as well for me under FreeNAS. I might be able to offer more ideas if you intend to run yours via the integrated SAS RAID controller on the mobo (LSI SAS2008), or via a combo of the Intel ICH10R and the LSI controller. Are you using an SSD as your OS install drive? At current prices you might want to invest in a cheap pair of SSDs as OS drives. I just replaced my previous Crucial M500 240GB with a pair of Intel 535 240GBs in RAID1 for the OS; you might not need RAID1, I just wanted the safety since it's mostly used for critical operations.

The 10 GbE between your 2 servers for the iSCSI will clearly be a big plus


----------



## Turgin

Quote:


> Originally Posted by *EvilMonk*
> 
> Sorry I tried to find out what raid controller you are using but I couldn't find that information. Are you using the one built in on the motherboard or an hardware one in passthrough?
> I run windows server 2012 r2 datacenter on my storage server because of my raid controller (HP Smart Array P812 1Gb FBWC) which isn't working as well for me under FreeNAS. I might be able to help you more with ideas if you intend to run yours via the integrated SAS raid controller on the mobo (LSI SAS2008) or via a combo of both the intel ICH10R and the LSI controller. Are you using an SSD as your OS install drive? For the price they are lately you might want to invest on a cheap pair of SSDs you'll use as your OS drives, I just replaced my previous crucial M500 240Gb by a pair of intel 535 240Gb in raid 1 to install the OS, but you might not need a raid 1, I just needed the safety since it's mostly used for critical operations.
> 
> The 10 GbE between your 2 servers for the iSCSI will clearly be a big plus


I've flashed the onboard LSI to IT mode per FreeNAS best practice. Right now, I have 6 drives connected to the Intel chipset ports in RAIDZ2 and 4 more split between the SAS channels in a mirrored pool. Plan is for the RAIDZ2 to be file level NFS/CIFS shares and the mirror to be iSCSI block storage. I have plenty of smallish enterprise SSDs too though: 2 x Intel 710 100GB SATA2 SSDs and 6 x Micron M500 128GB SATA3 SSDs. I'm using a pair of Sandisk 32GB flash drives to boot FreeNAS and I've got 4 of the SSDs on the other LSI ports where I've installed OmniOS and Windows Server 2012 to test for now.

I keep saying FreeNAS but I've not truly settled on it yet. Thinking I'm going to try a Linux distro and webmin to see what I think of that for storage. I've also thought about putting the 6 Micron drives on the SAS ports as a mirrored set for my block storage. That really ought to take advantage of the 10Gb!


----------



## twerk

Anyone know how to reliably test the performance of an array backed by a FBWC?

I've tried using CrystalDiskMark and ATTO and they just give me insane speeds; clearly they're testing the cache and not the drives.


----------



## jibesh

Quote:


> Originally Posted by *twerk*
> 
> Anyone know how to reliably test performance of an array backed up by a FBWC?
> 
> I've tried using CrystalDiskMark and ATTO and it's just giving me insane speeds, clearly it's just testing the cache and not the drives.


How much cache do you have? Set the total test length to something larger than your cache, i.e. if you have 2GB of cache, set it to 3 or 4GB.
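The idea is simply to make the working set too big for the cache to absorb, so sustained drive throughput dominates the result. A hypothetical helper along those lines (the 2x multiplier is my own rule of thumb, not from any benchmark tool's documentation):

```python
def benchmark_test_size_gb(cache_gb, multiplier=2):
    # Pick a total test length comfortably larger than the controller's
    # write cache, so the measured numbers reflect sustained array
    # throughput rather than cache absorption.
    return cache_gb * multiplier

# twerk's 4GB FBWC: test with at least 8GB of data.
assert benchmark_test_size_gb(4) == 8
```

Anything at or above that size should start to show realistic figures; going larger just averages out the cached portion further.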


----------



## twerk

Quote:


> Originally Posted by *jibesh*
> 
> How much cache do you have? Set the total length to an amount above your cache i.e. if you have 2GB cache, set it to 3 or 4GB.


4GB, so setting it to 6GB or so should do it? I'll give it a try later today, thanks!


----------



## KyadCK

Quote:


> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *jibesh*
> 
> How much cache do you have? Set the total length to an amount above your cache i.e. if you have 2GB cache, set it to 3 or 4GB.
> 
> 
> 
> 4GB, so setting it to 6GB or so should do it? I'll give it a try later today, thanks!
Click to expand...

Yup, just keep throwing more data at it until the cache is flooded.

Then again, that insane speed is the cache doing its job, no?


----------



## twerk

Getting 250MB/s read and write with larger transfer size, much more realistic and what I would expect.


----------



## jibesh

Quote:


> Originally Posted by *twerk*
> 
> Getting 250MB/s read and write with larger transfer size, much more realistic and what I would expect.


You can also go into the raid controller settings, disable caching temporarily and test that way.


----------



## twerk

Quote:


> Originally Posted by *jibesh*
> 
> You can also go into the raid controller settings, disable caching temporarily and test that way.


I still want to test with the cache, but get real-world performance numbers.

I had tested it without the cache a while ago and the numbers were absolutely abysmal: 60MB/s read, 15MB/s write, or something around there.


----------



## Tokkan

I managed to snatch up an HP Gen8 MicroServer for 100 euros, brand new with purchase receipt and a 2-year warranty as per EU standard. I believe it was a good deal, but I don't know. What do you guys think?
It came with 4GB of HP ECC memory and a Celeron 1820T. Already bought another 4GB stick.
I already put in a few hard drives I had lying around for storage: 3x500GB and 1x2TB. Will be ordering another 3x2TB and a Xeon E3-1230.
Currently it's running NAS4Free. I have looked into FreeNAS, but since I like underdogs I went with NAS4Free, just to get a feel for it.

My current goal is to make it a multi-functional server. I have no exact plan yet, but I'm working around the concept of running two OSes, NAS4Free being one of them and the only one with direct access to the HDDs, so for that I need VT-d.
Will also try to have a Linux distro available with an Nvidia GT 730 or a used low-profile Quadro for rendering on CUDA (Blender projects).

My biggest concern is the memory: currently I have half the maximum, 8GB. The max it can support is 16GB, which I wouldn't want to spend money on as of now.
Would it be feasible to get NAS4Free and Debian running side by side on ESXi, allocating most of the resources to Debian when it's on, in order to speed up the renders?
I have also looked around and it seems I could install only Debian and then run NAS4Free dedicated inside it, but I have disregarded that because I don't want to work inside the OS that manages my data.

This is my first venture into setting up a server, so any advice or guidance will be welcome, because I am in need of some.


----------



## bobfig

I'm going to have to say this was a tight fit. Added a Supermicro CSE-M35TQB; the fans are a little louder than I would like, but I have another one in the mail that should fix that.

As for @Tokkan, I think you got a fair deal. Sounds like you'll have a nice little file server going. I don't know much about them, but what you are going to end up with is close to what I have and should do you well. As for RAM, I would stay with 8GB for the moment, see how full it gets during use, and decide from there. As for the OS, I haven't really messed with the Linux side since I can get Windows Server 2012 for free, and it has been awesome to use.


----------



## PuffinMyLye

Finished my vSAN cluster build (for Plex redundancy and home labbing). Link to full post about finished build *here*.


----------



## Prophet4NO1

This pfSense box just does not want to happen. Finally got a chance to fix the heatsink problem on the NIC. Put it in, then went to finally install the OS. First, the virtual CD in the console won't accept the ISO. OK... so I download the USB image and try to load that into a virtual drive. LOADS! Try to boot, won't load the image. So I say screw it and try to write it to an actual USB stick. Won't mount; says the file is corrupted. Downloaded a new copy from another mirror. Same issue.

Been a really long time since I have had a build fight me every step of the way.

Not real sure what to try next. Might have to buy some CDs and try burning the ISO.


----------



## jibesh

Quote:


> Originally Posted by *Prophet4NO1*
> 
> This PFSense box just does not want to happen. Finally got a chance to fix the heatsink problem on the NIC. Put it in, then went to finnaly instal the OS. First the virtual CD in the consol wont accept the ISO. Ok... so I download the USB image and try to load that into a virtual drive. LOADS! Try to boot, wont load the image. So, I say screw it and try to mount it on an actual USB stick. Wont mount. Says the file is corrupted. Download a new copy from another mirror. Same issue.
> 
> Been a really long time since i have had a build fight me every step of the way.
> 
> Not real sure what to try next. Might have to buy some CDs and try burning the ISO.


Download this disk image file and extract it from the archive.

https://nyifiles.pfsense.org/mirror/downloads/pfSense-CE-memstick-2.3.1-RELEASE-amd64.img.gz

Follow the instructions in this guide afterwards and see if you can boot from this.

https://oitibs.com/pfsense-usb-install-guide-rufus/
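Since the image keeps reporting as corrupted, it may also be worth verifying the .gz itself before blaming the stick or the mirror. gzip stores a CRC32 and length in its trailer that get checked at end-of-stream, so a short Python sketch (a hypothetical helper, not part of any pfSense tooling) can tell a bad download apart from a bad write:

```python
import gzip
import zlib

def gzip_ok(path):
    # Stream-decompress and discard; gzip verifies the stored CRC32
    # and length at end-of-stream, so a truncated or corrupted
    # download raises an error here instead of passing silently.
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):  # read in 1MiB chunks
                pass
        return True
    except (OSError, EOFError, zlib.error):
        return False
```

If the archive passes this check but the written stick still won't boot, the problem is in the writing step, not the download.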


----------



## Prophet4NO1

Quote:


> Originally Posted by *jibesh*
> 
> Download this disk image file and extract it from the archive.
> 
> https://nyifiles.pfsense.org/mirror/downloads/pfSense-CE-memstick-2.3.1-RELEASE-amd64.img.gz
> 
> Follow the instructions in this guide afterwards and see if you can boot from this.
> 
> https://oitibs.com/pfsense-usb-install-guide-rufus/


Woot! Installing! Thanks. Somehow I missed the Rufus program, but then I was planning to use the virtual CD over IPMI the whole time.


----------



## PuffinMyLye

Quote:


> Originally Posted by *Prophet4NO1*
> 
> First the virtual CD in the consol wont accept the ISO


What exactly is the error you're getting here?


----------



## Prophet4NO1

Quote:


> Originally Posted by *PuffinMyLye*
> 
> What exactly is the error you're getting here?


Just said it could not connect the ISO when I hit connect. No more info than that.

At any rate, the machine is up and running. I will wait till tonight to finish the config and get started on my network. Planning to run each interface on the same subnet, just giving a different DHCP block to each port. I have also changed my planned config a bit: EM0/1 will be WAN/LAN1. The quad NIC will team two ports in LACP for the file server; pretty sure my Pro/1000 supports LACP. The other two will serve WiFi and LAN2. LAN2 has my game server and a switch that sits on the work desk; it might get its own subnet just because of the gaming server.

Finished box







The work space. Game server is in the closet.


----------



## seross69

Can someone explain the difference between users and CALs when using Windows Server 2012?

Do I have to buy a CAL for every device I want to stream or transcode movies to?

Thank you in advance!


----------



## Prophet4NO1

So, I have the pfSense box rolling and mostly set up how I want it. Went to hook up the FreeNAS server to it, and it's crapping the bed: really slow web UI that drops the connection, installed plugins are missing, the plugin page for downloading them will not load, and the update page won't even let me pick the update string I want. The whole thing seems borked... All I did was shut it down while setting up the new network.


----------



## Prophet4NO1

Kind of thinking the problem might be firewall related. It's blocking tons from the server. Using the easy rule from the firewall list is still not allowing them to pass. Anyone have a list of rules to get this working?


----------



## Prophet4NO1

Opened TCP and UDP on the server interface. No more firewall blocks, but still the same issues. I think the OS is just messed up.


----------



## Prophet4NO1

Think I have all the kinks worked out. FreeNAS is operating again with no reinstall or anything; it was a combination of Chrome all of a sudden not liking something about the server, plus a few other tweaks. The router seems pretty happy too. Been tweaking the DNS settings and some of the firewall settings. Next is to get some plugins going; thinking Squid and Snort for sure.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Prophet4NO1*
> 
> So, have the PFSense box rolling and mostly setup how I want it. Went to hook up the FreeNAS server to it and it's crapping the bed. Really slow web UI that drops connection. Installed plugins are missing. Plugin page for downloading them will not load. And the update page wont even let me pick the update string I want. The whole thing seems borked.... All I did was shut it down while setting up the new network.


Quote:


> Originally Posted by *Prophet4NO1*
> 
> Kind of thinking the problem might be firewall related. It's blocking tons from the server. Using the easy rule from the firewall list is still not allowing them to pass. Anyone have a list of rules to get this working?


Quote:


> Originally Posted by *Prophet4NO1*
> 
> Open TCP and UDP on the server interface. No more firewall blocks. But still have the same issues. I think the OS is just messed up.


Quote:


> Originally Posted by *Prophet4NO1*
> 
> Think I have all the kinks worked out. FreeNAS is operating again with no reinstall or anything. Combination of Chrome not liking something about the server all of the sudden and a few other tweaks. Router seems pretty happy too. Been tweaking the DNS settings and some of the firewall settings. Next is to start getting some plugins going. Thinking Squid and Snort for sure.


this thread is not your personal build log thread, seriously


----------



## Prophet4NO1

Guess I got a bit carried away.


----------



## cones

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Guess i got a bit carried away.


Not like anyone was posting much with pictures of their server.


----------



## CloudX

I certainly didn't mind the traffic in here!


----------



## DunePilot

Ok, I'll see myself out.


----------



## cones

Is it a rack or server thread?


----------



## tiro_uspsss

Quote:


> Originally Posted by *cones*
> 
> Is it a rack or server thread?


No rule on that, but if every IT pro started posting pics of their company's rack gear, it would be pretty lame. IMO if it's yours, post it, whether rack or otherwise.


----------



## cones

Quote:


> Originally Posted by *tiro_uspsss*
> 
> no rule on that, but if every IT pro starts posting pics of their companies rack gear, it would be pretty lame. IMO if it's yours, post it whether rack or other


Think the joke went over your head.


----------



## jibesh

Quote:


> Originally Posted by *cones*
> 
> Think the joke went over your head.


Lol was just about to post that


----------



## Versa

Quote:


> Originally Posted by *cones*
> 
> Is it a rack or server thread?


Those servers look racked to me


----------



## PuffinMyLye

Jokes aside this is def. not a rack thread. I posted my new home rack *here* and nothing. Makes sense though, after all this is an overclocking board and those interested in servers are more than likely on the entry level side.


----------



## DunePilot

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Jokes aside this is def. not a rack thread. I posted my new home rack *here* and nothing. Makes sense though, after all this is an overclocking board and those interested in servers are more than likely on the entry level side.


That's sick... what exactly do you do with that? I have a 24 thread Mac Pro and don't even make good use of it other than to store media on.


----------



## PuffinMyLye

Quote:


> Originally Posted by *DunePilot*
> 
> That's sick... what exactly do you do with that? I have a 24 thread Mac Pro and don't even make good use of it other than to store media on.


In short, I'm using it as a highly available Plex server, so it's fully redundant (both the Plex server itself and all the media). I'm also in the process of setting up a testing environment on it so I can try out different configurations and scenarios for use at work.

For more detailed info about it you can check out my build log *here*, which gets into more specifics.


----------



## EvilMonk

Quote:


> Originally Posted by *cones*
> 
> Think the joke went over your head.


Still, I think it's the right thing to remind people that the name of the thread is "Post *YOUR* Server"


----------



## EvilMonk

Quote:


> Originally Posted by *DunePilot*
> 
> That's sick... what exactly do you do with that? I have a 24 thread Mac Pro and don't even make good use of it other than to store media on.


I'm in the same boat as you, and my 2010 12-core Mac Pro already falls short on all the video encoding jobs I'm doing with Compressor and Handbrake. To be honest, my 2013 8-core Mac Pro is quite a bit faster now that Compressor supports newer instruction sets and GPU-assisted encoding.


----------



## DunePilot

Quote:


> Originally Posted by *EvilMonk*
> 
> I'm on the same boat as you and my 2010 12 cores Mac Pro is already short on all the video encoding jobs I'm doing with Compressor and Handbrake, to be honest my 2013 8 cores Mac Pro is quite faster now that compressor support new instruction sets and GPU assisted encoding.


Mine is overkill for what I need. I just use it for media and for Logic X (an audio DAW). Logic X is light enough on resources that you could run it on a laptop, so with 12 cores and 48GB of RAM I can run 30 channels of audio with 5 plugins apiece and hardly ever peak above 5% usage.

I use this btw as a resource monitor.

https://bjango.com/mac/istatmenus/


----------



## EvilMonk

Quote:


> Originally Posted by *DunePilot*
> 
> Mine is overkill for what I need. I just use it for media and for Logic X (audio DAW). Logic X is resourceful enough that you could run it on a laptop so with 12 cores and 48GB of ram I can run 30 channels of audio with 5 plugins a piece and never hardly peak above 5% usage.
> 
> I use this btw as a resource monitor.
> 
> https://bjango.com/mac/istatmenus/


That's very neat, thanks for this I'll start using it!







+REP


----------



## Dark

Finally got around to re-racking all the equipment yesterday (it had been stripped when we moved). Wired up in the basement until I can build a dedicated room/closet.

Shown:
Tripplite 12U rack
Tripplite 19" kvm
Dell R710 (sandbox) (4c/4t. 36GB. 6TB)
Dell R710 (sandbox) (4c/4t. 36GB. 6TB)
Dell R730xd (current Plex server, CrashPlan, Unifi) (12c/12t, 48GB, 18TB)
Dell R320 (future Plex server) (8c/16t, 64GB, 16TB)
APC SC1500 UPS

Not shown:
Ubiquiti USG
Ubiquiti Switch 8
Allied 24-port L3 switch


----------



## Tokkan

My Gen8 MicroServer I talked about a few posts back. Running FreeNAS currently.
Not fully operational like I want it to be, because I'm missing a gigabit router and a Xeon to run ESXi.

Was offline because I was plugging in disks at the time of the photo.


----------



## bobfig

Quote:


> Originally Posted by *Tokkan*
> 
> 
> 
> 
> My Gen8 Microserver I talked about a few posts back. Running FreeNAS currently.
> Not fully operational like I want it to be because I'm missing a gigabit router and a Xeon to run EXSi.
> 
> Was offline because I was plugging in disks at the time of the photo.


You may want to put some ESD mats down over it. Looks like you have a mouse infestation.


----------



## herkalurk

Quote:


> Originally Posted by *bobfig*
> 
> you may want to put some ESD mats down over it. looks like you have a mouse infestation.


Don't forget that lizard lookin thing...


----------



## Tokkan

Well yeah, when those 3 popped up I got the lizard, but it didn't do much, and now they all just hang there.









Struggling with setting up Active Directory, but I can't dedicate much time to it right now because of exams.


----------



## Liranan

Quote:


> Originally Posted by *rmp459*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tiro_uspsss;11804932*
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:
> 
> 
> 
> Most home servers are really just serving up files and doing menial hosting tasks... and it is far cheaper to just use older desktop components than going out of the way to get a server board and ecc ram.
> 
> If we were talking a 24/7 production environment w/ ram intensive applications or databases... that would be a different story.
> 
> For the most part even when using the server as a DC and letting it handle dhcp/dns for my network, filesharing, and backups, its barely going through the paces... not really too many places for memory errors.
> 
> Also ive been running this hardware for years and am very confident on what is 100% stable in terms of memory timings/voltages. Ive had my server board and ram paired together since like 2008.
Click to expand...

I know this isn't the right place to ask, but is the above still valid? I created a thread about my intention of building a file/media server and decided on getting a Xeon, 16GB of ECC RAM, and a board to go with it, which is basically what people recommend on the FreeNAS forums. I've seen threads in which people were almost lynched for stating they were going to use something other than server-class hardware, yet from what I've seen KyadCK uses normal consumer hardware (an 8320, non-ECC RAM, and a 'gaming' board) as a server.

So now I'm wondering whether someone who is just going to use his server as a film, music, and photo server really needs server-class hardware, or whether consumer-level equipment will also suffice.


----------



## Versa

You don't need server-class hardware/ECC, even with ZFS, for home use. Though I'm pretty sure if you mention non-ECC memory on the FreeNAS forums they will lynch you








Even have my plex server running on my rig below.


----------



## Liranan

Quote:


> Originally Posted by *Versa*
> 
> You don't need server class hardware/ECC even in ZFS for home use. I'm pretty sure if you mention non-ECC memory on FreeNAS forums and they will lynch you
> 
> 
> 
> 
> 
> 
> 
> 
> Even have my plex server running on my rig below.


You should take a look at their forums to see what they say. They stress over and over again that ECC is an absolute must, because a ZFS or RAID5/6 array can become irretrievably corrupted during a repair after even a single bad bit has been written to disk.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rmp459*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tiro_uspsss;11804932*
> 
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:
> 
> 
> 
> Most home servers are really just serving up files and doing menial hosting tasks... and it is far cheaper to just use older desktop components than going out of the way to get a server board and ecc ram.
> 
> If we were talking a 24/7 production environment w/ ram intensive applications or databases... that would be a different story.
> 
> For the most part even when using the server as a DC and letting it handle dhcp/dns for my network, filesharing, and backups, its barely going through the paces... not really too many places for memory errors.
> 
> Also ive been running this hardware for years and am very confident on what is 100% stable in terms of memory timings/voltages. Ive had my server board and ram paired together since like 2008.
> 
> Click to expand...
> 
> I know this isn't the right place to ask but is the above still valid? I created a thread about my intentions of building a file/media server and decided on getting a Xeon, 16GB ECC RAM and a board to go with it, which is basically what people recommend on the Freenas forums. I've seen threads in which people were almost lynched for stating they were going to use something other than server class hardware yet from what I've seen Kyadk uses normal consumer hardware (8320, non-ECC RAM and a 'gaming' board) as server.
> 
> So, now I'm wondering whether someone who is just going to use his server as film, music and photo server really needs to use server class hardware or whether consumer level equipment also will suffice.
Click to expand...

Normal ECC is 9-bit, the 9th bit recording whether the number of 1-bits in the other 8 is even or odd (parity), and the memory controller then compares those two values. If you read back "10011100" and the 9th bit doesn't match its parity, then the CPU knows data integrity failed. This kind of ECC actually does work in my 990FXA-UD5 and 970A-UD3 if I cared to use it. I'm fairly certain it will work in my X99-Deluxe when it moves to server duty. *Un*buffered ECC is fairly common, not too expensive, and is socket compatible with normal RAM.

The big-boy expensive RAM is Buffered (Registered) ECC, and those modules can pack 64GB onto a DIMM. They are also incredibly expensive, no matter how old.

I run the risk of data integrity by choosing to not run ECC, but my servers are almost 100% handmedown parts from my main rig. There is no 24/7 mission critical thing on my servers that I do not have a redundant copy of on the other server, including DHCP/DNS/AD.
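To illustrate the parity idea (just a toy sketch in Python, not how real ECC DIMMs are built; actual ECC uses Hamming-style SECDED codes that can also *correct* a single flipped bit, not just detect it):

```python
def parity_bit(byte: int) -> int:
    """Even parity: 1 if the byte has an odd number of 1-bits,
    so that all 9 bits together always hold an even count of 1s."""
    return bin(byte).count("1") % 2

stored = 0b10011100            # the data byte
check = parity_bit(stored)     # stored alongside it as the 9th bit

flipped = stored ^ 0b00000100  # a single bit flips in memory

# On read, recompute parity and compare against the stored check bit:
assert parity_bit(stored) == check    # intact data passes
assert parity_bit(flipped) != check   # a single-bit flip is detected
```

Note that plain parity like this only *detects* an odd number of flipped bits; it can't tell which bit flipped, which is why real ECC modules carry extra check bits.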
Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Versa*
> 
> You don't need server class hardware/ECC even in ZFS for home use. I'm pretty sure if you mention non-ECC memory on FreeNAS forums and they will lynch you
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Even have my plex server running on my rig below.
> 
> 
> 
> You should take a look at their forums to see what they say. They stress over and over again that ECC is an absolute must due to ZFS and Raid5/6 becoming irretrievably corrupted during repair after even a single bad bit has been written to the disk.
Click to expand...

Not to sound like a broken record, but yet another reason I use RAID cards and not software RAID. It isn't a viable solution for FreeNAS, but if my card dies I just plug the drives into another one. If data is written wrong to a drive, I take it out, wipe it, put it back in, and rebuild the array. Not that it has happened yet.


----------



## Liranan

Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.


----------



## CookieSayWhat

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rmp459*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tiro_uspsss;11804932*
> 
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:
> 
> 
> 
> Most home servers are really just serving up files and doing menial hosting tasks... and it is far cheaper to just use older desktop components than going out of the way to get a server board and ecc ram.
> 
> If we were talking a 24/7 production environment w/ ram intensive applications or databases... that would be a different story.
> 
> For the most part even when using the server as a DC and letting it handle dhcp/dns for my network, filesharing, and backups, its barely going through the paces... not really too many places for memory errors.
> 
> Also ive been running this hardware for years and am very confident on what is 100% stable in terms of memory timings/voltages. Ive had my server board and ram paired together since like 2008.
> 
> Click to expand...
> 
> I know this isn't the right place to ask but is the above still valid? I created a thread about my intentions of building a file/media server and decided on getting a Xeon, 16GB ECC RAM and a board to go with it, which is basically what people recommend on the Freenas forums. I've seen threads in which people were almost lynched for stating they were going to use something other than server class hardware yet from what I've seen Kyadk uses normal consumer hardware (8320, non-ECC RAM and a 'gaming' board) as server.
> 
> So, now I'm wondering whether someone who is just going to use his server as film, music and photo server really needs to use server class hardware or whether consumer level equipment also will suffice.
> 
> Click to expand...
> 
> Normal ECC is 9-bit, the 9th being on if the number of bits in the other 8 is even or odd (one of the two, can't remember which), and it then compares those two vaules. If you get a "10011100", and the 9th bit is "1", then the CPU knows it failed at data integrity. This kind of ECC actually does work in my 990FXA-UD5 and 970A-UD3 if I cared to do so. I'm fairly certain it will work in my X99-Deluxe when it moves to server duty. *Un*Buffered ECC is fairly common, not too expensive, and is socket compatable with normal RAM.
> 
> The big boy expensive RAM is Buffered ECC, and those chips can pack 64GB on a DIMM. They are also incredibly expensive, no matter how old.
> 
> I run the risk of data integrity by choosing to not run ECC, but my servers are almost 100% handmedown parts from my main rig. There is no 24/7 mission critical thing on my servers that I do not have a redundant copy of on the other server, including DHCP/DNS/AD.
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Versa*
> 
> You don't need server class hardware/ECC even in ZFS for home use. I'm pretty sure if you mention non-ECC memory on FreeNAS forums and they will lynch you
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Even have my plex server running on my rig below.
> 
> Click to expand...
> 
> You should take a look at their forums to see what they say. They stress over and over again that ECC is an absolute must due to ZFS and Raid5/6 becoming irretrievably corrupted during repair after even a single bad bit has been written to the disk.
> 
> Click to expand...
> 
> Not to sound like a broken record, but yet another reason I use RAID cards and not software RAID. It isn't a viable solution for FreeNAS, but if my card dies I just plug the drives into another one. If data is written wrong to a drive, I take it out, wipe it, put it back in, and rebuild the array. Not that it has happened yet.
Click to expand...

You still have the very real risk of bit rot. No hardware RAID controller can prevent that (as far as I know, at least). This is why ECC is a must with ZFS or Btrfs; without ECC, data integrity is questionable at best.

Rebuilding an array is great, unless you have bit rot; then you're just rebuilding an array of corrupted data.

Sent from my iPhone using Tapatalk


----------



## Liranan

Quote:


> Originally Posted by *CookieSayWhat*
> 
> You still have the very real risk of bit rot. No hardware raid controller can prevent that. (As far as I know at least) This is why Ecc is a must with zfs or btrs. Without the Ecc data integrity is questionable at best.
> 
> Rebuilding an array is great unless you have bit rot then you're just rebuilding an array of corrupted data.
> 
> Sent from my iPhone using Tapatalk


After reading about the benefits of ECC I have come to that same conclusion. Will start with 8GB registered ECC and see if that's enough.


----------



## KyadCK

Quote:


> Originally Posted by *CookieSayWhat*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rmp459*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tiro_uspsss;11804932*
> 
> I find it odd that so many 'servers' that ppl run dont have ECC ram :shrug:
> 
> 
> 
> Most home servers are really just serving up files and doing menial hosting tasks... and it is far cheaper to just use older desktop components than going out of the way to get a server board and ecc ram.
> 
> If we were talking a 24/7 production environment w/ ram intensive applications or databases... that would be a different story.
> 
> For the most part even when using the server as a DC and letting it handle dhcp/dns for my network, filesharing, and backups, its barely going through the paces... not really too many places for memory errors.
> 
> Also ive been running this hardware for years and am very confident on what is 100% stable in terms of memory timings/voltages. Ive had my server board and ram paired together since like 2008.
> 
> Click to expand...
> 
> I know this isn't the right place to ask but is the above still valid? I created a thread about my intentions of building a file/media server and decided on getting a Xeon, 16GB ECC RAM and a board to go with it, which is basically what people recommend on the Freenas forums. I've seen threads in which people were almost lynched for stating they were going to use something other than server class hardware yet from what I've seen Kyadk uses normal consumer hardware (8320, non-ECC RAM and a 'gaming' board) as server.
> 
> So, now I'm wondering whether someone who is just going to use his server as film, music and photo server really needs to use server class hardware or whether consumer level equipment also will suffice.
> 
> Click to expand...
> 
> Normal ECC is 9-bit, the 9th being on if the number of bits in the other 8 is even or odd (one of the two, can't remember which), and it then compares those two vaules. If you get a "10011100", and the 9th bit is "1", then the CPU knows it failed at data integrity. This kind of ECC actually does work in my 990FXA-UD5 and 970A-UD3 if I cared to do so. I'm fairly certain it will work in my X99-Deluxe when it moves to server duty. *Un*Buffered ECC is fairly common, not too expensive, and is socket compatable with normal RAM.
> 
> The big boy expensive RAM is Buffered ECC, and those chips can pack 64GB on a DIMM. They are also incredibly expensive, no matter how old.
> 
> I run the risk of data integrity by choosing to not run ECC, but my servers are almost 100% handmedown parts from my main rig. There is no 24/7 mission critical thing on my servers that I do not have a redundant copy of on the other server, including DHCP/DNS/AD.
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Versa*
> 
> You don't need server class hardware/ECC even in ZFS for home use. I'm pretty sure if you mention non-ECC memory on FreeNAS forums and they will lynch you
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Even have my plex server running on my rig below.
> 
> Click to expand...
> 
> You should take a look at their forums to see what they say. They stress over and over again that ECC is an absolute must due to ZFS and Raid5/6 becoming irretrievably corrupted during repair after even a single bad bit has been written to the disk.
> 
> Click to expand...
> 
> Not to sound like a broken record, but yet another reason I use RAID cards and not software RAID. It isn't a viable solution for FreeNAS, but if my card dies I just plug the drives into another one. If data is written wrong to a drive, I take it out, wipe it, put it back in, and rebuild the array. Not that it has happened yet.
> 
> Click to expand...
> 
> You still have the very real risk of bit rot. No hardware raid controller can prevent that. (As far as I know at least) This is why Ecc is a must with zfs or btrs. Without the Ecc data integrity is questionable at best.
> 
> Rebuilding an array is great unless you have bit rot then you're just rebuilding an array of corrupted data.
> 
> Sent from my iPhone using Tapatalk
Click to expand...

Media is on the disk, not in RAM. What exactly do you think I'm writing 24/7 that the bits will sit in RAM long enough to do something? What proof do you have of corrupted info that *actually continues to work?*

Do you have MD5 proof, or do your RAID arrays just die? Because frankly, if that were even remotely true, the data I have from servers over a decade old is either the biggest statistical anomaly in history, or my ISO of Brood War is just broken in ways I don't notice.

If ECC was a _requirement_ for either of those, Linux distros would not run them from bone-stock configs. Btrfs is OpenSUSE's standard "next next next" option. So that is 100% pure-grain false info. And while we're on the topic, what makes ZFS and Btrfs so prone to error, compared to say NTFS, that they would even _try_ to replace EXT with them? That logic makes no sense. There is nothing special about ZFS or Btrfs.

ECC is good and helpful. Pretending it's a requirement is lying to yourself. "Questionable at best" is a massive _massive_ exaggeration, and if it were true, we wouldn't have non-ECC anymore.
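For anyone who wants to collect their own MD5 proof, it's a few lines with Python's standard hashlib: hash your archives once, keep the sums somewhere safe, and re-check later (the file name here is just an example):

```python
import hashlib

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through MD5 so large ISOs never need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# First pass: record the sum. Later passes: compare against the record.
# baseline = md5_of("broodwar.iso")
# assert md5_of("broodwar.iso") == baseline, "file changed on disk"
```

If the sums ever disagree, you know *something* between RAM, controller, and platters altered the file, even if it still appears to open fine.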


----------



## Aximous

Quote:


> Originally Posted by *KyadCK*
> 
> Media is on the disk, not in RAM. What exactly do you think I'm writing 24/7 that the bits will sit in RAM long enough to do something? What proof do you have of corrupted info that *actually continues to work?*
> 
> Do you have MD5 proof, or do your RAID arrays just, die? Because frankly, if that were even remotely true, the data I have from servers over a decade old are either the biggest statistical anomaly in history, or my ISO of Brood War is just broken in ways I don't notice.
> 
> If ECC was a _requirement_ for either of those, Linux distros would not run them from bone stock configs. BTRS is OpenSUSE's standard "next next next" option. So that is 100% pure grain false info. And while we're on the topic, what makes ZFS and BTRS so prone to error as compared to, say, NTFS, that they would even _try_ to replace EXT with it? That logic makes no sense. There is nothing special about ZFS or BTRS.
> 
> ECC is good and helpful. Pretending it's a requirement is lying to yourself. "Questionable at best" is a massive _massive_ exaggeration, and if it were true, we wouldn't have non-ECC anymore.


No, your data is not sitting in the RAM, but the calculation of how to lay your data out on the disks (in the case of software RAID) has to use system RAM. If a flipped bit occurs there, your data will be written to the disk incorrectly, or corrupted if you will.

Watch the video I link below, and its second part, for MD5 proof and a detailed explanation of this topic. And yes, your Brood War ISO may be broken without you knowing it; it might appear completely fine, but if you check the MD5 sum or try to mount it, it might turn out to be corrupted.

NTFS is a completely different beast compared to ZFS and BTRFS. With NTFS you don't have to make any calculations about splitting data up and putting it back together to read and write, since you are only handling a single disk. ZFS and BTRFS, on the other hand, have to calculate this because they span multiple disks with striping. You could argue that ReiserFS spans disks too, but it addresses disks individually, so this doesn't become a problem there. It is also an issue with standard RAID controllers, as those have to make the same calculations and use RAM for them too; the only difference is that they have their own processor and RAM to do it with.

This is also where the advantage of ZFS and the like comes in: these file systems have built-in measures to detect and correct bit rot, which HW RAID controllers do not.
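The stripe math in question is easy to see in miniature. RAID5-style parity is just XOR across the data blocks, computed in RAM before anything hits the disks, so a bit flipped in RAM gets written out as if it were correct (a toy Python sketch, not any real controller's code):

```python
def xor_parity(blocks):
    """RAID5-style parity: byte-wise XOR across equal-sized blocks.
    Any one lost block is the XOR of all the survivors."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1 = b"\x10\x20", b"\x03\x04"
p = xor_parity([d0, d1])                 # parity computed in system RAM

# Recovery: rebuild the "lost" block d0 from the survivors.
assert xor_parity([d1, p]) == d0

# But if a bit flips in RAM *before* parity is written, the bad
# parity is stored as if it were correct -- the array can't tell:
bad = bytes([d0[0] ^ 0x01]) + d0[1:]
p_bad = xor_parity([bad, d1])
assert xor_parity([d1, p_bad]) != d0     # silent corruption on rebuild
```

That last assert is the whole argument in two lines: the parity math is self-consistent with the corrupted input, so nothing downstream flags it.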

The videos I mention:


----------



## wiretap

Just because someone hasn't experienced it doesn't mean it doesn't exist. I've been a victim of bit rot on my old Windows Home Server v1: I had several corrupt family photos, corrupt program installers, corrupt ISO images, and corrupt MP3 files. Luckily I had a backup on a set of dual-layer DVDs at the time. You may not even realize you have bit rot until months or years later, when you go to open a file that you really need... which is what happened to me.

I now use a home server with ECC RAM + SnapRAID + Backblaze off-site backup. Since the components required for ECC support don't really cost much more than a normal consumer-grade desktop system, I now build all my servers with ECC support.

Bit rot isn't some mythical creature, but it also doesn't happen all too often. When it does happen, though, it really sucks, and you'll probably wish you had spent a few extra dollars on a proper setup to prevent it. It's especially bad when it's in a RAID system and the parity gets calculated with bad data; then the changes are irreversible.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.


I cannot speak for AMD, but on Intel some Celeron and Pentium chips support ECC. My FreeNAS server was using a Pentium with ECC before the bigger Xeon was added for Plex. My pfSense router is also using ECC with a $30 Celeron. You just need to check the Intel ARK page before you buy. Pretty sure there are a few i3 chips too.


----------



## KyadCK

Quote:


> Originally Posted by *Aximous*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Media is on the disk, not in RAM. What exactly do you think I'm writing 24/7 that the bits will sit in RAM long enough to do something? What proof do you have of corrupted info that *actually continues to work?*
> 
> Do you have MD5 proof, or do your RAID arrays just, die? Because frankly, if that were even remotely true, the data I have from servers over a decade old are either the biggest statistical anomaly in history, or my ISO of Brood War is just broken in ways I don't notice.
> 
> If ECC was a _requirement_ for either of those, Linux distros would not run them from bone stock configs. BTRS is OpenSUSE's standard "next next next" option. So that is 100% pure grain false info. And while we're on the topic, what makes ZFS and BTRS so prone to error as compared to, say, NTFS, that they would even _try_ to replace EXT with it? That logic makes no sense. There is nothing special about ZFS or BTRS.
> 
> ECC is good and helpful. Pretending it's a requirement is lying to yourself. "Questionable at best" is a massive _massive_ exaggeration, and if it were true, we wouldn't have non-ECC anymore.
> 
> 
> 
> No your data is not sitting in the ram, but the calculation about how to put your data on the disks (in case of software raid) has to use the system ram. If a flipped bit occurs there your data will be written to the disk incorrectly or become corrupted if you will.
> 
> Watch the video I link below and it's second part for MD5 proof, and detailed explanation of this topic. And yes, your Brood War ISO may be broken without you knowing it, it might appear completely fine, but if you check the MD5 sum or try to mount it might turn up corrupted.
> 
> NTFS is a completely different beast compared to ZFS and BTRFS, with NTFS you don't have to make any calculations about splitting and putting data back to gather to read and write from the disk since you are only handling a single disk with that. ZFS and BTRFS on the other hand has to calculate stuff like this because they span across multiple disks with striping. You could argue that ReiserFS does this too but, that addresses disks individually too, so this doesn't become a problem. Also this is an issue with standard raid controllers too, as those have to make the same calculations and use ram for these too, the only difference is that those have their own processor and ram to do these.
> 
> Also this is where the advantage of ZFS and alike come in, these file systems have built-in measures to detect and correct bit rot, which the HW raid controllers do not.
> 
> The videos I meantions:
Click to expand...

Correct. Not for me though, because P410s.

Thank you for actual MD5 proof.

No, it isn't. You _can_ RAID0/stripe with NTFS. Again, if this were such a big problem, OpenSUSE would not have moved from EXT to Btrfs as the standard file system.

Standard RAID controllers with their own RAM use... Buffered ECC. In fact, the good ones even have battery backups (and they complain loudly if you disconnect them) and use flash for their cache instead of volatile memory, to ensure data loss due to power failure doesn't happen. Either way, see below.

So ZFS has error correction, making ECC *less* of a requirement, not more. So I was right. The big issue with using normal RAM for FreeNAS, to my understanding, is that it will use as much RAM as it can to cache data, meaning data DOES stay in RAM at all times.
Quote:


> Originally Posted by *wiretap*
> 
> Just because someone hasn't experienced it, doesn't mean it doesn't exist. I've been a victim of bit-rot on my old Windows Home Server v1. I had several corrupt family photos, corrupt program installers, corrupt ISO images, and corrupt MP3 files. Luckily I had a backup on a set of dual layer DVD's at the time. You may not even realize you have bit-rot until months/years later when you go to open a file that you really need.. which is what happened to me. I now use a home server with ECC RAM + SnapRAID + Backblaze off-site backup. Since the components required for ECC support doesn't really cost much more than a normal desktop consumer grade system, I now build all my servers with ECC support. Bit-rot isn't some mythical creature, but it also doesn't happen all too often. When it does happen though, it really sucks and you'll probably wish you had spent a few extra dollars on a proper setup to prevent it. It's especially bad when it's in a RAID system and the parity gets calculated with bad data.. then the changes are irreversible.


You're right, it isn't some mystical creature, but it also does not simply happen over time (and involve system RAM) unless you're defragging your disk and actually writing data, which you should not be doing when archiving. If you are losing data without writing anything, you have straight-up data loss due to the storage medium; your HDD/SSD/array is dying.

The funny thing to me is people arguing that this is something I wouldn't notice. You fail to understand my array.

My primary SSD array runs off a P410 with 1GB BBWC, presented to the OS (ESXi) as one large drive (~1.2TB; it and the HDD arrays are both 4x 500GB in RAID5). All RAID functionality is provided by the controllers (3 of them). This disk is presented to ESXi, which claims the partitioning with VMFS. From there I create VMs, which sit on top of the hypervisor, and their "disks" (files to ESXi) get presented to each VM. My main archiving server has two main "drives", located on different P410s. They are not set up as software RAID in Windows; the files are simply written to both, specifically to avoid software RAID failure, actually, and to let me just reassign the VMDKs to another OS if I want to upgrade or change my storage server's OS.

If the RAM were to corrupt the data on the way to a P410 (which does its RAID calculations without the system CPU/RAM, obviously), then the file wouldn't work upon the initial write. I have two copies, plus my original. If one failed, it would be deleted and another copy made.

If the main "file" (the VMDK) were corrupted on a P410, against the MD5 that ESXi compares it to automatically, it would throw a warning. But they haven't. If a file in the NTFS/Btrfs/FS-of-choice inside the VMDK failed, the file wouldn't work, and in the case of the archive, I'd make a copy from the other RAID array.

In the event all that fails, Servers 1 and 2 take the occasional VMDK backups of one another, which is good, because if either server fails, I can assign the VMX to the server's roster and fire it up in seconds.
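A layered setup like this can be sanity-checked with nothing fancier than hashing both copies and diffing the results (a hypothetical sketch; the directory names are made up and it's plain Python, nothing ESXi-specific):

```python
import hashlib
from pathlib import Path

def hash_tree(root: str) -> dict:
    """Map relative path -> SHA-256 for every file under root."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*") if p.is_file()
    }

def mismatches(copy_a: str, copy_b: str):
    """Files present in both trees whose contents no longer agree."""
    a, b = hash_tree(copy_a), hash_tree(copy_b)
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])

# e.g. mismatches("/mnt/array1/archive", "/mnt/array2/archive")
# -> list of files whose two copies have diverged
```

Run on a schedule, this catches a silently rotted copy while the other copy is still good, which is the window where having two arrays actually saves you.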
Quote:


> Originally Posted by *Prophet4NO1*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.
> 
> 
> 
> I can not speak for AMD, but on Intel some Celeron and Pentium chips support ECC. My freenas server was using a Pentium with ECC before the bigger Xeon was added for plex. My pfSense router is also using ECC with a $30 Celeron. You just need to check the Intel Ark page before you buy. Pretty sure there are a few i3 chips too.
Click to expand...

AMD's chips can, but you need a motherboard that supports it. All 900-series Gigabyte boards do, as far as I am aware.

Correct, but be careful: not all of them do. Be certain about the exact model you're buying.


----------



## Nightfallx

Quote:


> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.


I have tons of ecc ram if that's what you are needing.


----------



## DogeTactical

I want to eventually get an dual Opteron setup


----------



## Liranan

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.
> 
> 
> 
> I can not speak for AMD, but on Intel some Celeron and Pentium chips support ECC. My freenas server was using a Pentium with ECC before the bigger Xeon was added for plex. My pfSense router is also using ECC with a $30 Celeron. You just need to check the Intel Ark page before you buy. Pretty sure there are a few i3 chips too.
Click to expand...

All AMD CPUs since at least the Phenom era support ECC; the problems are that most motherboards don't, and the RAM has to be unbuffered rather than registered. I've found a lot of registered RAM for extremely cheap (12 USD for 8GB), but UDIMMs are almost three times the price, so I would rather go with a cheap Xeon system and save money in the long term than save money on the CPU and spend more because the RAM is more expensive.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Prophet4NO1*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.
> 
> 
> 
> I can not speak for AMD, but on Intel some Celeron and Pentium chips support ECC. My freenas server was using a Pentium with ECC before the bigger Xeon was added for plex. My pfSense router is also using ECC with a $30 Celeron. You just need to check the Intel Ark page before you buy. Pretty sure there are a few i3 chips too.
> 
> Click to expand...
> 
> All AMD CPU's since the Phenom era at least support ECC, the problems are that most motherboards don't and the RAM has to be unbuffered rather than registered. I've found a lot of registered RAM for extremely cheap (12 USD for 8GB) but UDIMM's are almost thrice the price so I would rather go with a cheap Xeon system and save money in the long term than save money on the CPU and spend more because the RAM is more expensive.
Click to expand...

x2, since FreeNAS loves RAM more than CPU anyway.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Liranan*
> 
> All AMD CPU's since the Phenom era at least support ECC, the problems are that most motherboards don't and the RAM has to be unbuffered rather than registered. I've found a lot of registered RAM for extremely cheap (12 USD for 8GB) but UDIMM's are almost thrice the price so I would rather go with a cheap Xeon system and save money in the long term than save money on the CPU and spend more because the RAM is more expensive.


The unbuffered ECC I am using is cheap, about the same as normal RAM. The registered ECC in my game hosting server (it came with it) is something like twice the price. But in the end it all comes down to usage. If I did not have Plex on my FreeNAS box, the Pentium would have been fine; it never went past 20-50% load for anything and usually sat at 0-2%. The quad-core Xeon is just helpful for transcoding in Plex, that's it. What is the point of a really powerful CPU if all the server does is home storage? You are basically paying more for nothing, whereas spending a little extra for ECC actually buys you added data security/reliability. Or you can just go my route and do the Xeon and the ECC. ECC and a lower-spec server/workstation board with ECC support are a small price to pay for data integrity.

But that's me.


----------



## Liranan

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> All AMD CPU's since the Phenom era at least support ECC, the problems are that most motherboards don't and the RAM has to be unbuffered rather than registered. I've found a lot of registered RAM for extremely cheap (12 USD for 8GB) but UDIMM's are almost thrice the price so I would rather go with a cheap Xeon system and save money in the long term than save money on the CPU and spend more because the RAM is more expensive.
> 
> 
> 
> The unbuffered ECC i am using is cheap. About the same as normal RAM. The regustered ECC in my game hosting server (came with it) is something like twice the price. But it is, in the end all down to usage. If i did not have plex on my freenas box, the pentium would have been fine. Never used more than 20-50% load for anyrhing. Usually sat at 0-2%. The quad core xeon is just helpfull for transcoding in plex. Thats it. What is the point of a really powerfull cpu if all the server is for is home storage? You are basically paying more for nothing. Where as spending a little extra for ECC actually is a good thing for that added data security/reliability. Or you can just go my route and do the xeon and the ECC. ECC and a lower spec server /workstation board with ECC support are a small price to pay for data integrity.
> 
> But, thats me.
Click to expand...

8GB registered 1333MHz for 75 RMB.

Samsung:

https://item.taobao.com/item.htm?scm=1007.10009.31621.100200300000001&id=524256561945&pvid=6cadf820-8492-46a6-b224-ff9d8dd0118e

Hynix:

https://item.taobao.com/item.htm?scm=1007.10009.31621.100200300000001&id=525988905870&pvid=918a28c5-abc3-446f-9775-6ccc9508e1c4

8GB unregistered Hynix 1600MHz for 200:

https://item.taobao.com/item.htm?id=35150425532&ns=1&abbucket=18#detail

I've given you the prices so you don't need to wait half an hour for the pages to load. As you can see, RDIMM is far cheaper than UDIMM, which is why I'm getting RDIMM instead. Potentially I could get a G6950 for 5 USD (they are that cheap) over an X3430 (about 22 USD), but then the price difference in RAM would make the G6950 more expensive, especially if I need more than 8GB. The other problem is that with UDIMMs I would be limited to a 16GB maximum instead of the 32GB maximum I would have with RDIMMs, so to me it's an obvious choice.

Edit: this is the board I'm intending to get. The board supports 1333MHz only, and speed isn't nearly as important to me as price and amount. While I'm not sure I will be using FreeNAS (Win 8.1 Storage Spaces seems so easy to use), I still want the ability to add more RAM just in case, and flexibility in the platform.

http://www.supermicro.com/products/motherboard/Xeon3000/3400/X8SIL.cfm?IPMI=Y


----------



## Liranan

Quote:


> Originally Posted by *Nightfallx*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Found a lot of really cheap registered/buffered ECC RAM at less than 15 USD but need Opteron or Xeons for them. Non-registered/buffered RAM is more expensive but can still be found for around the same price as regular RAM and as I need more RAM for my server I think I'll get some anyway.
> 
> 
> 
> I have tons of ecc ram if that's what you are needing.

I don't know where you live but I assume shipping costs would exceed the price I can get the DIMM for here.


----------



## nexxusty

Xeon E3-1230 (Sandy Bridge)
Supermicro X9SCM (IPMI FTW)
8GB DDR3-1333 ECC
LSI 8-port SAS HBA
6x 2TB WD Red in RAID 5
2TB WD Green
60GB SSD

Running Proxmox (Debian).


----------



## bobfig

Quote:


> Originally Posted by *nexxusty*
> 
> Xeon E1230 (Sandy)
> Supermicro X9SCM (IPMI FTW)
> 8gb DDR3 1333 ECC
> LSI 8 port SAS HBA
> 6x2TB WD Red in RAID 5
> 2tb WD Green
> 60gb SSD
> 
> Running Proxmox (Debian).


Nice, should be a good server. Just like mine, but with slightly different hard drives.


----------



## nexxusty

Quote:


> Originally Posted by *bobfig*
> 
> nice should be a good server. just like mine but slightly different hard drives.


Thanks man!

In terms of VM's and file serving it does a wonderful job. Having all the VM's on the SSD helps a lot.

Love the IPMI too, best feature.


----------



## swingarm

Maybe I'll post pictures later, as I finally got it running just the way I like it.

Antec P190 Case
Supermicro PWS-665-PQ Power Supply
Supermicro X8DTE Motherboard
FreeNAS 9.10 Stable
2x Intel Xeon X5650 LGA1366 CPU
2x Noctua NH-U9DXi4 Heatsink/Fan
Hynix 24GB (6x 4GB) PC3-10600R DDR3-1333MHz ECC Registered RAM
Adaptec 1430SA 4-port SATA PCIe card
2x Evercool Armor Dual 5.25" Bay HDD Cooler (for future hard drives)

Hard Drives:
Kingspec 8GB SSD SATA Drive (boot/OS)
2x Western Digital WD800JD 80GB SATA Drive*
Seagate ST380021A 80GB IDE (w/ SATA adapter) Drive*
Western Digital WD4000F9YZ 4TB SATA Drive
Western Digital WD4003FZEX 4TB SATA Drive
*3x 80GB drives configured in FreeNAS as a 240GB drive


----------



## Liranan

According to the WD website their Reds have some sort of ECC built into the drive. Can someone shed some light on this for me, please? I don't understand what the benefits are and how it works.


----------



## Naz

My old i5 750 based home backup/ file/plex server finally bit the dust, so time for a new 24/7 workhorse:


----------



## EvilMonk

Quote:


> Originally Posted by *Prophet4NO1*
> 
> The unbuffered ECC I am using is cheap, about the same as normal RAM. The registered ECC in my game hosting server (came with it) is something like twice the price. But in the end it all comes down to usage. If I did not have Plex on my FreeNAS box, the Pentium would have been fine; it never used more than 20-50% load for anything and usually sat at 0-2%. The quad-core Xeon is just helpful for transcoding in Plex, that's it. What is the point of a really powerful CPU if all the server does is home storage? You are basically paying more for nothing, whereas spending a little extra on ECC actually buys you added data security/reliability. Or you can just go my route and do the Xeon and the ECC. ECC and a lower-spec server/workstation board with ECC support are a small price to pay for data integrity.
> 
> But that's me.


If you take some time to look on eBay you can find some sweet deals on registered ECC memory. I got 12 dual-rank (2Rx4) 8GB Registered HP PC3-10600R (DDR3-1333) Hynix sticks for $120 with shipping... so 96GB of DDR3-1333 Registered ECC RAM for $120 shipped to my door. That was pretty sweet.


----------



## Tokkan

Bought an Asus RT-AC1200G+, which has gigabit capabilities. Reading from the server at a steady 105MB/s, no port aggregation configured yet.
Will buy a switch that supports it; I already have one in my sights:

TP-Link TL-SG108E

I bought this router because it supports a VPN server, 5GHz and AC, and obviously it's a gigabit router.

Now I've got to ask a question: if a single port runs at gigabit speeds, how will aggregating two of them help with speed? Or is it specifically for failover?


----------



## Liranan

Quote:


> Originally Posted by *Tokkan*
> 
> Bought an Asus RT-AC1200G+, has GBit capabilities. Reading from the server at a steady 105MB/s, no port aggregation configured yet.
> Will buy a switch that supports it, already got one under my sights.
> 
> TP Link - TL-SG108E
> 
> Bought this router because it supports VPN server, 5Ghz and AC and obviously its a gigabit router.
> 
> Now I gotta ask a question, if a single port runs at gigabit speeds how will aggregating two of them help me in speed issues? Or is it for failproofing specifically?


Unless the router supports speeds in excess of a gigabit, aggregating them is just a failsafe in case one cable or port fails, as the switch simply can't go over those speeds anyway.


----------



## Tokkan

Okay, yeah, you're right. Reading the FreeNAS forums I found a document where someone explains that for a single load it makes no difference, but for parallel loads it could help.
I don't really know how much I even care to configure that right now. Currently FreeNAS recognises both NICs, they both have a cable plugged in, and they both have IP addresses assigned.
Connecting to either gives me the web GUI.
Speed is about what's expected, so I guess I'll finish the media streaming part of FreeNAS.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Tokkan*
> 
> Bought an Asus RT-AC1200G+, has GBit capabilities. Reading from the server at a steady 105MB/s, no port aggregation configured yet.
> 
> Will buy a switch that supports it, already got one under my sights.
> 
> TP Link - TL-SG108E
> 
> Bought this router because it supports VPN server, 5Ghz and AC and obviously its a gigabit router.
> 
> Now I gotta ask a question, if a single port runs at gigabit speeds how will aggregating two of them help me in speed issues? Or is it for failproofing specifically?
> 
> 
> 
> *Unless the router supports speeds in excess of a gigabit* aggregating them is just a failsafe in case one cable or port fails as the switch simply can't go over those speeds anyway.

That isn't how a switch works. The internals are FAR beyond "gigabit", and usually enough to handle at least half the maximum traffic even in cheap models; a 5-port gigabit + wifi will probably have at minimum a 6gbps capacity internally. Good ones can handle full everything. Teaming, especially if it supports the feature, should never be a concern. My 4849Es can handle a lot more throughput than the physical connections (48x 1gbps, 4x 10gbps, full duplex) can. Several _times_ (just over x4 for "normal" bandwidth) more actually.

There are also several types of link aggregation, and as it so happens, my motherboard (and all my Server NICs, and my L3 switches) supplies several of them. Aggregation as a tech absolutely will increase total available bandwidth, but depending on which version used, it _may_ not be for one "single threaded" network connection (and some can; I use them). Some forms of teaming are failover only, but aggregation isn't one of them, and if the router supports the feature it will be more than capable of backing it up.

Regardless, 2x 1gbps teamed links to the router does you zero good unless either the thing doing it is being accessed by sources which are capable of saturating a single link, or you have another target you would like to access at those speeds, which requires both sides AND the entire network between them to be set up that way. If it is only your desktop, you will not see any real benefit from teaming at all.

On the flip side, I can copy at 2gbps to my RAID array and my servers have 4/6gbps total links to the switch, allowing any VMs on one server to access the full bandwidth to transfer something to another VM (or HV to HV, whichever) on the other server. My switches also have a 4gbps link between them to stop my desktop's 2gbps from killing other connections to the server on this switch. Backbones are fun.
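The reason one "single threaded" connection usually can't exceed a single link, as described above, is that most aggregation modes pin each flow to one slave via a transmit hash. A toy sketch of that behaviour (hypothetical addresses and ports; a real bond XORs address/port bits per its `xmit_hash_policy`, but any deterministic hash shows the effect):

```python
# Toy model of LACP-style transmit hashing: each flow (src/dst/port tuple)
# hashes to exactly one slave link, so a single TCP stream never exceeds
# one link's speed, while many parallel flows can spread across both links.
def pick_link(src_ip, dst_ip, src_port, dst_port, n_links=2):
    # Stand-in for a layer3+4-style hash policy; deterministic per flow.
    return hash((src_ip, dst_ip, src_port, dst_port)) % n_links

# A single flow always lands on the same link, packet after packet:
flow = ("10.0.0.2", "10.0.0.10", 49152, 445)
assert len({pick_link(*flow) for _ in range(1000)}) == 1

# Many distinct flows (different source ports) spread over both links:
links = {pick_link("10.0.0.2", "10.0.0.10", p, 445) for p in range(49152, 49252)}
print(sorted(links))  # both link indices show up across 100 flows
```

So teaming helps when several clients (or several flows) hit the box at once, which matches the point about needing sources capable of saturating a single link.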


----------



## nexxusty

Quote:


> Originally Posted by *KyadCK*
> 
> That isn't how a switch works. The internals are FAR beyond "gigabit", and usually enough to handle at least half the maximum traffic even in cheap models; a 5-port gigabit + wifi will probably have at minimum a 6gbps capacity internally. Good ones can handle full everything. Teaming, especially if it supports the feature, should never be a concern. My 4849Es can handle a lot more throughput than the physical connections (48x 1gbps, 4x 10gbps, full duplex) can. Several _times_ (just over x4 for "normal" bandwidth) more actually.
> 
> There are also several types of link aggregation, and as it so happens, my motherboard (and all my Server NICs, and my L3 switches) supplies several of them. Aggregation as a tech absolutely will increase total available bandwidth, but depending on which version used, it _may_ not be for one "single threaded" network connection (and some can; I use them). Some forms of teaming are failover only, but aggregation isn't one of them, and if the router supports the feature it will be more than capable of backing it up.
> 
> Regardless, 2x 1gbps teamed links to the router does you zero good unless either the thing doing it is being accessed by sources which are capable of saturating a single link, or if you have another target you would like to access at those speeds, Which requires both sides AND the entire network between them to be set that way. If it is only your desktop, you will not see any real benefit from teaming at all.
> 
> On the flip side, I can copy at 2gbps to my RAID array and my servers have 4/6gbps total links to the switch, allowing any VMs on one server to access the full bandwidth to transfer something to another VM (or HV to HV, whichever) on the other server. My switches also have a 4gbps link between them to stop my desktop's 2gbps from killing other connections to the server on this switch. Backbones are fun.


Just bought an 802.3ad switch for some 2Gbps action.

Should hold me over until MultiGIG Ethernet comes out. Lol.

Curious...How do you feel about MultiGIG?


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Tokkan*
> 
> Bought an Asus RT-AC1200G+, has GBit capabilities. Reading from the server at a steady 105MB/s, no port aggregation configured yet.
> 
> Will buy a switch that supports it, already got one under my sights.
> 
> TP Link - TL-SG108E
> 
> Bought this router because it supports VPN server, 5Ghz and AC and obviously its a gigabit router.
> 
> Now I gotta ask a question, if a single port runs at gigabit speeds how will aggregating two of them help me in speed issues? Or is it for failproofing specifically?
> 
> 
> 
> *Unless the router supports speeds in excess of a gigabit* aggregating them is just a failsafe in case one cable or port fails as the switch simply can't go over those speeds anyway.
> 
> Click to expand...
> 
> That isn't how a switch works. The internals are FAR beyond "gigabit", and usually enough to handle at least half the maximum traffic even in cheap models; a 5-port gigabit + wifi will probably have at minimum a 6gbps capacity internally. Good ones can handle full everything. Teaming, especially if it supports the feature, should never be a concern. My 4849Es can handle a lot more throughput than the physical connections (48x 1gbps, 4x 10gbps, full duplex) can. Several times (just over x4 for "normal" bandwidth) more actually.
> 
> There are also several types of link aggregation, and as it so happens, my motherboard (and all my Server NICs, and my L3 switches) supplies several of them. Aggregation as a tech absolutely will increase total available bandwidth, but depending on which version used, it may not be for one "single threaded" network connection (and some can; I use them). Some forms of teaming are failover only, but aggregation isn't one of them, and if the router supports the feature it will be more than capable of backing it up.
> 
> Regardless, 2x 1gbps teamed links to the router does you zero good unless either the thing doing it is being accessed by sources which are capable of saturating a single link, or if you have another target you would like to access at those speeds, Which requires both sides AND the entire network between them to be set that way. If it is only your desktop, you will not see any real benefit from teaming at all.
> 
> On the flip side, I can copy at 2gbps to my RAID array and my servers have 4/6gbps total links to the switch, allowing any VMs on one server to access the full bandwidth to transfer something to another VM (or HV to HV, whichever) on the other server. My switches also have a 4gbps link between them to stop my desktop's 2gbps from killing other connections to the server on this switch. Backbones are fun.

Thanks a lot for the explanation.


----------



## Tokkan

Yeah, I know having the two of them plugged into the router serves no real purpose on its own; I did that because I couldn't get the server online. I thought it could be one of the NICs, but it's a DNS issue. The router is using Yandex DNS while the server was set up with Google DNS, so the router doesn't give web access while that's the case. My jails are also offline for the same reason; FreeNAS tells me they are still using Google DNS and I don't know how I did that, lmao. I've already taken out all my data and am getting ready to restore FreeNAS.
Big learning experience.
Saw that it has a VirtualBox section. Possible voice/web/code/game server with Debian? Will venture into that later. Debian is my preferred Linux distro atm, btw; if anything works out better than Debian for this purpose, shoot.


----------



## wiretap

Quote:


> Originally Posted by *nexxusty*
> 
> Should hold me over until MultiGIG Ethernet comes out. Lol.


It has been out for quite a while. You can buy 10GigE NICs pretty cheap off Natex. Just throw one in each PC and link them up point to point if you don't feel like dropping $300-$400 on a 10GigE switch.


----------



## KyadCK

Quote:


> Originally Posted by *nexxusty*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> That isn't how a switch works. The internals are FAR beyond "gigabit", and usually enough to handle at least half the maximum traffic even in cheap models; a 5-port gigabit + wifi will probably have at minimum a 6gbps capacity internally. Good ones can handle full everything. Teaming, especially if it supports the feature, should never be a concern. My 4849Es can handle a lot more throughput than the physical connections (48x 1gbps, 4x 10gbps, full duplex) can. Several _times_ (just over x4 for "normal" bandwidth) more actually.
> 
> There are also several types of link aggregation, and as it so happens, my motherboard (and all my Server NICs, and my L3 switches) supplies several of them. Aggregation as a tech absolutely will increase total available bandwidth, but depending on which version used, it _may_ not be for one "single threaded" network connection (and some can; I use them). Some forms of teaming are failover only, but aggregation isn't one of them, and if the router supports the feature it will be more than capable of backing it up.
> 
> Regardless, 2x 1gbps teamed links to the router does you zero good unless either the thing doing it is being accessed by sources which are capable of saturating a single link, or if you have another target you would like to access at those speeds, Which requires both sides AND the entire network between them to be set that way. If it is only your desktop, you will not see any real benefit from teaming at all.
> 
> On the flip side, I can copy at 2gbps to my RAID array and my servers have 4/6gbps total links to the switch, allowing any VMs on one server to access the full bandwidth to transfer something to another VM (or HV to HV, whichever) on the other server. My switches also have a 4gbps link between them to stop my desktop's 2gbps from killing other connections to the server on this switch. Backbones are fun.
> 
> 
> 
> Just bought a 802.12ad switch for some 2gbps action.
> 
> Should hold me over until MultiGIG Ethernet comes out. Lol.
> 
> Curious...How do you feel about MultiGIG?

http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/catalyst-multigigabit-switching/multigigabit-ethernet-technology.pdf
Quote:


> Multiple speeds: Cisco multigigabit technology supports auto-negotiation of multiple speeds on switch
> ports. The supported speeds are 100 Mbps, 1 Gbps, 2.5 Gbps, and 5 Gbps on Cat 5e cable and up to 10
> Gbps over Cat 6a cabling


Quote:


> Q. Will a multigigabit switch port auto-negotiate the link speed?
> A. Yes, when the far-end device is also multigigabit-capable, both multigigabit switch ports will auto-negotiate the
> highest speed they can support over the cable you use (Cat 5e, Cat 6, Cat 6a). For example, if the far-end
> device is 10 Gbps-capable and the cable can support the speed, the two devices would negotiate to the 10
> Gbps speed.


tl;dr, they took a 10gbps port, and instead of making it 100/1k/10k they made it 100/1k/2.5k/5k/10k based on cable and what it thinks it can do.

Cool tech, but until support is on basically everything, it won't be very helpful. What good does it do if your non-Cisco NIC won't negotiate 2.5Gbps? Then again, I'm speaking from a position of having multiple 10Gbps jacks at my disposal, so less than that has less appeal to me than it would to someone with 1Gbps everywhere.

My current opinion is "neat, but useful only for places that run pure Cisco (and compatible), do not already use fiber/Cat 7 10Gbps, and do not want to replace the cables". I suppose someone who wired their house would fit in this category, but you'll need to buy network cards around the concept unless Realtek, Killer, and Intel start supporting it.
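The negotiation rule in the quoted FAQ amounts to "highest speed that both endpoints and the cable support". A toy sketch of that logic (the tables are illustrative, loosely following the whitepaper's Cat 5e/6a limits, not Cisco's actual state machine):

```python
# Toy model of multigigabit auto-negotiation: the link comes up at the
# highest rate supported by BOTH endpoints AND the cable in between.
CABLE_MAX = {"cat5e": 5000, "cat6a": 10000}  # Mbps, per the whitepaper

def negotiate(port_a_speeds, port_b_speeds, cable):
    common = set(port_a_speeds) & set(port_b_speeds)
    usable = [s for s in common if s <= CABLE_MAX[cable]]
    return max(usable) if usable else None

mgig = [100, 1000, 2500, 5000, 10000]  # mGig-capable port
gig_nic = [100, 1000]                  # ordinary gigabit NIC

print(negotiate(mgig, mgig, "cat5e"))    # 5000: cable caps a 10G-capable pair
print(negotiate(mgig, mgig, "cat6a"))    # 10000
print(negotiate(mgig, gig_nic, "cat6a")) # 1000: non-mGig NIC limits the link
```

The last case is exactly the objection above: without mGig support on the NIC side, the fancy switch port just falls back to plain gigabit.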


----------



## swingarm

Quote:


> Maybe I'll post pictures later as I just got it running just the way I like.
> 
> Antec P190 Case
> Supermicro PWS-665-PQ Power Supply
> Supermicro X8DTE Motherboard
> Freenas 9.10 Stable
> 2X Intel Xeon X5650 LGA1366 CPU
> 2X Noctua NH-U9DXi4 Heatsink/Fan
> Hynix 24GB (6x 4GB) PC3-10600R DDR3-1333MHz ECC Registered Ram
> Adaptec 1430SA 4 port SATA PCIE card
> 2X Evercool Armor Dual 5.25" Bay HDD Cooler(for future hard drives)
> 
> Hard Drives:
> Kingspec 8GB SSD SATA Drive(boot/OS)
> 2X Western Digital WD800 JD 80GB SATA Drive*
> Seagate ST380021A 80GB IDE(w/ SATA adapter) Drive*
> Western Digital WD4000F9YZ 4TB SATA Drive
> Western Digital WD4003FZEX 4TB SATA Drive
> *3 80GB Drives configured in Freenas as a 240GB Drive


A little more detail, then some pictures. Relocated the Antec 200mm side fan/filter to the outside to get more space inside the case. The case also has a USB-powered work light that's next to useless. Put a Prolimatech 120mm x 15mm fan in front of the lower 4-HDD bay for improved cooling. No pics of the front; it has 2 Noctua 120mm filtered intakes, and I removed the front door. 2 Noctua 140mm filtered fan intakes on the top, a Noctua 120mm fan between the lower 4-HDD bay and PSU, and a rear Noctua 120mm fan for exhaust. Replaced the original Antec dual-PSU system as it was too poorly designed to power this build. Lastly, just for fun, a picture of my modified 512GB Buffalo LinkStation (now 2TB).


----------



## Paul17041993

Somewhat odd that 10Gb is still rare and expensive considering how old it is now, with WiFi getting as much as 2Gb or more in the right conditions. I'd love to run 10Gb everywhere if it were cheaper and readily available...

That being said, I'm pretty sure both my switch and router support aggregation, but that still means getting compatible cards for the devices that would make use of it...


----------



## nexxusty

Quote:


> Originally Posted by *KyadCK*
> 
> http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/catalyst-multigigabit-switching/multigigabit-ethernet-technology.pdf
> 
> tl;dr, they took a 10gbps port, and instead of making it 100/1k/10k they made it 100/1k/2.5k/5k/10k based on cable and what it thinks it can do.
> 
> Cool tech, but until support is on basically everything, it wont be very helpful. What good does it do if your non-Cisco NIC wont negotiate 2.5gbps? Then again, I'm speaking from a position of having multible 10gbps jacks at my call, so less than that has less appeal to me than someone with 1gbps everywhere.
> 
> My current opinion is "Neat, but useful only for places that both run pure cisco (and compatible), do not use fiber/CAT7 10gbps already, and do not want to replace the cables". I suppose someone who wired their house would fit in this category, but you'll need to buy network cards around the concept unless Realtek, Killer, and Intel start supporting it.


Heh, my good friend just finished wiring his house with Cat 6. He's planning on going Multi-GIG. That's the best thing about it, really: existing wires can do 2.5/5Gbps links. Or so they say....

I don't mind having to buy Cisco gear; I usually only buy Cisco and Netgear anyway. Also, the PCs connected with it will be two, at most. I just hate bottlenecks and I can't really afford 10Gbps.

The guy who mentioned two 10Gbps NICs and a crossover cable had a decent idea; however, the PC I'd need a fast link from/to is my server... I'd have to create a different network/subnet just to do that and still keep them connected to the internet and my current network. Less than ideal, unfortunately. I'd need a switch that can do both 1Gbps and 10Gbps, and I doubt one of those can be had for a decent price, so..... Multi-GIG it is.

The waiting is killing me though... I really want to see some damn Multi-GIG NICs! The switches are out.... let's go with the NICs already!


----------



## Paul17041993

Quote:


> Originally Posted by *nexxusty*
> 
> Heh, my good friend just finished wiring his house with CAT6. He's planning on going Multi-GIG. That's the best thing about it really, existing wires can do 2.5g/5gbps links. Or so they say....
> 
> I don't mind having to buy Cisco gear, I usually only buy Cisco and Netgear anyway. Also the PC's connected with it will be two, at most. I just hate bottlenecks and I can't really afford 10gbps.
> 
> The guy who mentioned 2 10gbps NIC's and a crossover cable had a decent idea, however the PC I'd need a fast link from/to is my server... I'd have to create a different network/subnet just to do that and still keep them connected to the internet and my current network. Less than ideal unfortunately. I'd need a switch that can do 1gbps and 10gbps. I doubt one of those can be had for a decent price so..... Multi-GIG it is.
> 
> The waiting is killing me though... I really want to see some damn Multi-GIG NIC's! Switches are out.... lets go with the NIC's already!


Smart switches with 2-4 10Gb SFP+ ports aren't much more expensive than SFP+ PCIe cards, additionally you could use SFP+ pass-through cables if the distance is short enough. However it's all still pretty costly at least here in AU...


----------



## tiro_uspsss

Quote:


> Originally Posted by *Paul17041993*
> 
> Smart switches with 2-4 10Gb SFP+ ports aren't much more expensive than SFP+ PCIe cards, additionally you could use SFP+ pass-through cables if the distance is short enough. However it's all still pretty costly at least here in AU...


I bought my 10Gb SFP+ cards for ~AUD$40.. what are you smoking??

edit, here:

http://www.ebay.com.au/itm/Lot-of-2-Mellanox-ConnectX-2-Single-Port-SFP-10GBE-Network-Card-MNPA19-XTR-/301689994552?hash=item463e1ffd38:g:hEAAAOSwgQ9VqDx2


----------



## Paul17041993

Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> I bought my 10Gb SFP+ cards for ~AUD$40.. what are you smoking??
> 
> edit, here:
> 
> http://www.ebay.com.au/itm/Lot-of-2-Mellanox-ConnectX-2-Single-Port-SFP-10GBE-Network-Card-MNPA19-XTR-/301689994552?hash=item463e1ffd38:g:hEAAAOSwgQ9VqDx2


Huh, that's interesting, though I suppose those cards lack full certification, testing and warranty...

But for home use they should do a good job, just need a server set up as a switch to connect everything together, unless you can find super cheap SFP+ switches too?

edit; quite a lot of cheap single and dual SFP+ cards on ebay, didn't think of that...


----------



## KyadCK

Quote:


> Originally Posted by *nexxusty*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/catalyst-multigigabit-switching/multigigabit-ethernet-technology.pdf
> 
> tl;dr, they took a 10gbps port, and instead of making it 100/1k/10k they made it 100/1k/2.5k/5k/10k based on cable and what it thinks it can do.
> 
> Cool tech, but until support is on basically everything, it wont be very helpful. What good does it do if your non-Cisco NIC wont negotiate 2.5gbps? Then again, I'm speaking from a position of having multible 10gbps jacks at my call, so less than that has less appeal to me than someone with 1gbps everywhere.
> 
> My current opinion is "Neat, but useful only for places that both run pure cisco (and compatible), do not use fiber/CAT7 10gbps already, and do not want to replace the cables". I suppose someone who wired their house would fit in this category, but you'll need to buy network cards around the concept unless Realtek, Killer, and Intel start supporting it.
> 
> 
> 
> Heh, my good friend just finished wiring his house with CAT6. He's planning on going Multi-GIG. That's the best thing about it really, existing wires can do 2.5g/5gbps links. Or so they say....
> 
> I don't mind having to buy Cisco gear, I usually only buy Cisco and Netgear anyway. Also the PC's connected with it will be two, at most. I just hate bottlenecks and I can't really afford 10gbps.
> 
> The guy who mentioned 2 10gbps NIC's and a crossover cable had a decent idea, however the PC I'd need a fast link from/to is my server... I'd have to create a different network/subnet just to do that and still keep them connected to the internet and my current network. Less than ideal unfortunately. *I'd need a switch that can do 1gbps and 10gbps. I doubt one of those can be had for a decent price* so..... Multi-GIG it is.
> 
> The waiting is killing me though... I really want to see some damn Multi-GIG NIC's! Switches are out.... lets go with the NIC's already!

Quote:


> Originally Posted by *Paul17041993*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> I bought my 10Gb SFP+ cards for ~AUD$40.. what are you smoking??
> 
> edit, here:
> 
> http://www.ebay.com.au/itm/Lot-of-2-Mellanox-ConnectX-2-Single-Port-SFP-10GBE-Network-Card-MNPA19-XTR-/301689994552?hash=item463e1ffd38:g:hEAAAOSwgQ9VqDx2
> 
> 
> 
> Huh, that's interesting, though I suppose those cards lack full certification, testing and warranty...
> 
> But for home use they should do a good job, just need a server set up as a switch to connect everything together, *unless you can find super cheap SFP+ switches too?*
> 
> edit; quite a lot of cheap single and dual SFP+ cards on ebay, didn't think of that...

Define cheap I guess? My 4849Es were $350 each.

That's the fun of buying business-class stuff, there's always a huge used market when they move to the new toys.


----------



## lowfat

Quote:


> Originally Posted by *tiro_uspsss*
> 
> 
> 
> 
> 
> 
> 
> 
> I bought my 10Gb SFP+ cards for ~AUD$40.. what are you smoking??
> 
> edit, here:
> 
> http://www.ebay.com.au/itm/Lot-of-2-Mellanox-ConnectX-2-Single-Port-SFP-10GBE-Network-Card-MNPA19-XTR-/301689994552?hash=item463e1ffd38:g:hEAAAOSwgQ9VqDx2


Damn, those have gotten cheap. I'd consider moving from 10GbE to 10Gb IB if ESXi supported a virtual switch with one while still keeping RDMA.


----------



## xxpenguinxx

Wish 10Gb switches would come down in price. Was going to buy a few of those cards.


----------



## ivoryg37

What is the best way to test a new hard drive? I decided to rebuild my NAS, so I bought 4x 4TB HGST drives and want to test all of them before putting them into the build with the XPEnology OS and giving Synology Hybrid RAID a go.


----------



## EvilMonk

Quote:


> Originally Posted by *ivoryg37*
> 
> What is the best way to test a new hard drive? I decided to rebuild my NAS so I bought 4x4TB HGST drives and want to test all of them before putting them into the build with xpenology OS and giving synology hybrid raid a go.


What do you mean test a drive?
You want to test them individually?
I run 2 SANs on SAS cards, one with 12x 2TB Hitachi Travelstar 7.2k drives and one with 16x 3TB Toshiba X300 7.2k drives, both in RAID 6, and I get that you might want to test 4 drives... You can low-level format them, and if they survive that you kind of have your answer; that would be a good test to do. Besides that, if one fails, redundancy is there for that: claim it on warranty and just replace it... That's what it's meant for, no?


----------



## ivoryg37

Well, I just bought 4 disks from Newegg and received them yesterday. I wanted to make sure all four survived the shipping before putting them to use. The only thing I've done so far is check them with CrystalDisk. I was thinking of doing a slow Windows format on each one as well; I just figured I'd ask to see if there are any stress-test programs to try.


----------



## cones

I'd just check SMART.


----------



## broadbandaddict

Quote:


> Originally Posted by *ivoryg37*
> 
> What is the best way to test a new hard drive? I decided to rebuild my NAS so I bought 4x4TB HGST drives and want to test all of them before putting them into the build with xpenology OS and giving synology hybrid raid a go.


I always do a full read, full zero, full read with the manufacturer's software, or WD Lifeguard if they don't have one. I started doing that about 5 years ago when I used unRAID, and it's been a very reliable way to pick out bad drives. You can also/alternatively use software like StableBit Scanner to actively watch the drives' SMART and filesystem health.
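The "full zero, full read-back" pass can be sketched in a few lines. This toy version runs against a scratch file rather than a raw device; on a real drive you'd point the manufacturer tool, `badblocks`, or `dd` at the block device itself, as root, which destroys all data on it:

```python
# Toy version of a "full zero, full read-back" surface test, run against a
# scratch file instead of a real block device. The idea is the same: write
# a known pattern across every sector, then read it all back and compare.
import os
import tempfile

SECTOR = 512

def zero_and_verify(path, n_sectors):
    pattern = b"\x00" * SECTOR
    with open(path, "wb") as f:          # full zero pass
        for _ in range(n_sectors):
            f.write(pattern)
    bad = []
    with open(path, "rb") as f:          # full read-back pass
        for i in range(n_sectors):
            if f.read(SECTOR) != pattern:
                bad.append(i)
    return bad                           # sector indices that mismatched

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    scratch = tmp.name
try:
    print("bad sectors:", zero_and_verify(scratch, 1024))  # expect: []
finally:
    os.remove(scratch)
```

Any sector that doesn't read back as written is exactly the kind of early failure you want to catch inside the return window.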


----------



## tiro_uspsss

Quote:


> Originally Posted by *EvilMonk*
> 
> What do you mean test a drive?
> You want to test them individually?
> I run 2 SANs each on SAS cards with 12 2tb Hitachi Travelstar 7.2k and 16 3tb Toshiba x300 7.2k drives both in raid 6 and I get that you might want to test 4 drives... You can low level format them and if they hold the run you kind of have you answer... that would be a good test to do. Beside that if one fails, redundancy is there for that, claim it on warranty then just replace it... That's what it's meant for no?


This. You simply put them into production, as that is the exact environment they need to survive in. Your backups and redundancy should cover any premature failures, which you then claim on warranty.


----------



## nexxusty

Quote:


> Originally Posted by *ivoryg37*
> 
> What is the best way to test a new hard drive? I decided to rebuild my NAS so I bought 4x4TB HGST drives and want to test all of them before putting them into the build with xpenology OS and giving synology hybrid raid a go.


Zeroing a drive is THE ONLY way to test it. Anything else is not as thorough.


----------



## ivoryg37

Thanks for all the answers. I will try zeroing all four drives and then checking them. Figured I'd at least check them now while I can still return them to Newegg within 30 days, then put them into a system and rely on the manufacturer's warranty. At least this way I have both the Newegg return window and the warranty if one turns out to be bad within the 30 days.


----------



## CJston15

I have 2 physical servers. Below are the specs...

Server 1
Intel Xeon E3-1230v3
16gb RAM
2 x 120gb SSD Raid 1
2 x 3TB HDD Raid 1
I also have a 4TB external drive for backing up my Raid 1 storage drive that contains media files mostly.

Server 2
Intel Xeon E5-1620v3
16gb RAM
2 x 120gb SSD Raid 1
I have a variety of 300GB Velociraptor drives, 500, 750, and 1tb WD Black drives. Couple 1.5tb WD Green drives. Can put whatever in this if need be.

What I would like to do is have one physical DC and then use the other for VMs running Plex, backups (recommendations?), a TS3 server, VPN, an MDT imaging server, etc...

A couple of questions: which would be better suited for the DC and which for VMs? And should I also make the DC a DHCP server, so that's physical too? DNS would then be a VM, correct?

Any suggestions or recommendations are much appreciated. Always looking for cool new things to play around with!

Thanks


----------



## loud681

Server 2 could be your DC and Server 1 could host your VMs and whatnot, since it has more hard drive space. Typically a DC would also run DHCP and DNS. You could also run a secondary VM DC on the other server and have failover for DNS and DHCP.


----------



## littleredwagen

Quote:


> Originally Posted by *CJston15*
> 
> I have 2 physical servers. Below are the specs...
> 
> Server 1
> Intel Xeon E3-1230v3
> 16gb RAM
> 2 x 120gb SSD Raid 1
> 2 x 3TB HDD Raid 1
> I also have a 4TB external drive for backing up my Raid 1 storage drive that contains media files mostly.
> 
> Server 2
> Intel Xeon E5-1620v3
> 16gb RAM
> 2 x 120gb SSD Raid 1
> I have a variety of 300GB Velociraptor drives, 500, 750, and 1tb WD Black drives. Couple 1.5tb WD Green drives. Can put whatever in this if need be.
> 
> What I would like to do is have one physical DC and then use the other for VM's running Plex, Backups (Recommendations?), TS3 server, VPN, MDT imaging server, etc...
> 
> Couple questions. Which would be suited better for the DC and which for VMs? And should I also make the DC a DHCP server so it's also physical. DNS would then be a VM correct?
> 
> Any suggestions or recommendations is much appreciated. Always looking for cool new things to play around with!
> 
> Thanks


Personally, I would use both as Hyper-V hosts and run the DCs virtually. The process would be: set up Hyper-V on one box and install a VM; on that VM, install AD DS, DNS, and DHCP (which makes it easier to find the DC without manually entering DNS settings). Join the Hyper-V host to the domain, then set up the second Hyper-V host and join it to the domain; you can then set up multiple VMs. A couple of tips: in Hyper-V Manager you can add the second host so you can see both servers, and if you grant Kerberos delegation you can use shared-nothing live migration to move VMs from one server to another while they are still running. Also, if you want to learn and play at spinning up server VMs, I would consider creating a VM template of Server 2012, patched and updated, since patching from scratch takes a lot of time.


----------



## KyadCK

Quote:


> Originally Posted by *CJston15*
> 
> I have 2 physical servers. Below are the specs...
> 
> Server 1
> Intel Xeon E3-1230v3
> 16gb RAM
> 2 x 120gb SSD Raid 1
> 2 x 3TB HDD Raid 1
> I also have a 4TB external drive for backing up my Raid 1 storage drive that contains media files mostly.
> 
> Server 2
> Intel Xeon E5-1620v3
> 16gb RAM
> 2 x 120gb SSD Raid 1
> I have a variety of 300GB Velociraptor drives, 500, 750, and 1tb WD Black drives. Couple 1.5tb WD Green drives. Can put whatever in this if need be.
> 
> What I would like to do is have one physical DC and then use the other for VM's running Plex, Backups (Recommendations?), TS3 server, VPN, MDT imaging server, etc...
> 
> Couple questions. Which would be suited better for the DC and which for VMs? And should I also make the DC a DHCP server so it's also physical. DNS would then be a VM correct?
> 
> Any suggestions or recommendations is much appreciated. Always looking for cool new things to play around with!
> 
> Thanks


Virtualize both (please not with Hyper-V), make _everything_ in VMs, and put/move VMs between servers as required. There is no good reason to go OS-to-drive unless you want to run something like FreeNAS. No domain/DHCP/MDT etc. server needs enough CPU or RAM to justify dedicating an entire CPU to it.

Obviously anything requiring storage goes where the HDDs are.


----------



## littleredwagen

Quote:


> Originally Posted by *KyadCK*
> 
> Virtualize both (please not with Hyper-V), make _everything_ in VMs, put/move VMs between servers as required. There is no good reason to go OS-to-drive unless you want to run something like FreeNAS. No Domain/DHCP/MDT etc server needs enough CPU or RAM to justify dedicating an entire CPU to it.
> 
> Obviously anything requiring storage goes where the HDDs are.


The only thing Hyper-V doesn't do well is pass graphics through to VMs easily. 2012 R2 Hyper-V can run Linux VMs and has several advantages over other hypervisors when working with Windows Server VMs. Plus, it is free if you just want Hyper-V Server. It all depends on the environment you run.


----------



## CJston15

I would prefer ESXi since I have experience with it, but I don't believe either server is supported for ESXi 6. I have never done it before, but I assume I can run Hyper-V from a USB stick, correct? Also, are the days of having a physical DC/DHCP server instead of virtualizing it a thing of the past? Sounds like it.


----------



## littleredwagen

Quote:


> Originally Posted by *CJston15*
> 
> I would prefer ESXI since I have experience with it but I don't believe either server is supported for ESXI 6. I have never done it before but I assume I can run Hyper-V from a USB stick correct? Also, are the days of having a physical DC/DHCP server instead of it being virtualized a thing of the past - sounds like it?


You can indeed run Hyper-V Server from a USB drive; see this TechNet link:
https://technet.microsoft.com/en-us/library/jj733589%28v=ws.11%29.aspx?f=255&MSPPError=-2147217396

Though it might be easier the first time, at a similar price point, to put a 120GB SSD in the machine and run a familiar Windows front end. Once it's set up, you can always remove the GUI and switch to Server Core if you like.

Yes, they pretty much are. I can't think of any reason to waste a box on just AD DS/DNS/DHCP, even in a small environment. Since you can separate services out into other VMs, it keeps each VM light instead of crowding boxes, plus it makes replication and disaster recovery that much easier and simpler.


----------



## loud681

I tend to prefer a dedicated physical computer as a DC, since it is the core of a domain. But since this isn't a production environment, I would have a VM DC on each server for failover... just a thought.


----------



## KyadCK

Quote:


> Originally Posted by *littleredwagen*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Virtualize both (please not with Hyper-V), make _everything_ in VMs, put/move VMs between servers as required. There is no good reason to go OS-to-drive unless you want to run something like FreeNAS. No Domain/DHCP/MDT etc server needs enough CPU or RAM to justify dedicating an entire CPU to it.
> 
> Obviously anything requiring storage goes where the HDDs are.
> 
> 
> 
> The only thing Hyper-V doesn't do well is pass graphics to VMs easily. 2012 R2 Hyper-V can run linux VMs and has Several advantages over other Hypervisors when working with Windows Server VMs Plus it is free if you want just Hyper-V server. It all depends on the environment you run.

ESXi is free as a whole, and I'd love for you to try to name some of those advantages; I can think of 20 off the top of my head in favor of ESXi.
Quote:


> Originally Posted by *CJston15*
> 
> I would prefer ESXI since I have experience with it but I don't believe either server is supported for ESXI 6. I have never done it before but I assume I can run Hyper-V from a USB stick correct? Also, are the days of having a physical DC/DHCP server instead of it being virtualized a thing of the past - sounds like it?


One of my servers is a 960T on a 970A-UD3, and I'm on ESXi 6 update 2. Worth a shot.

Also, yes: install ESXi/Hyper-V to a USB stick, not one of your HDDs. That way you can swap sticks and re-assign the VMX files when you want to upgrade, and you still have your old stick in case it fails.








Quote:


> Originally Posted by *littleredwagen*
> 
> Quote:
> 
> 
> 
> Originally Posted by *CJston15*
> 
> I would prefer ESXI since I have experience with it but I don't believe either server is supported for ESXI 6. I have never done it before but I assume I can run Hyper-V from a USB stick correct? Also, are the days of having a physical DC/DHCP server instead of it being virtualized a thing of the past - sounds like it?
> 
> 
> 
> You can indeed run Hyper-V Server from a USB Drive See this Technet Link
> https://technet.microsoft.com/en-us/library/jj733589%28v=ws.11%29.aspx?f=255&MSPPError=-2147217396
> 
> though it might be easier for the first time and similar price points to put a 120gb SSD in the machine a run a familiar windows front end. once setup you can always remove the GUI and switch to server core if you like
> 
> Yes they pretty much are, can't think of any reason why you waste a box just ad ds dns dhcp in even in a small environment since you can seperate out services to other VMs it keeps the VMs running on the light side instead of crowded boxes plus it makes replication and disaster recovery that much easier and simpler

Yup. Virtualization is at the point now where there is no longer a concept of a "computer", just abstracted resource pools. In addition to replication and recovery, simply being able to move your entire datastore to another, newer computer and clicking go is a big deal.
Quote:


> Originally Posted by *loud681*
> 
> I tend to prefer to have a dedicated physical computer as a DC since it is the core of a domain. But since this isn't a production environment i would have a VM DC on each server for failover.....just a thought


Why would you want a dedicated box that is tied to system drivers and thus to the motherboard? The entire point of VMs is to remove hardware dependency and add redundancy, which is exactly what you need for a primary never-fail server.


----------



## loud681

I've seen more VMs fail than dedicated boxes...


----------



## PuffinMyLye

Quote:


> Originally Posted by *loud681*
> 
> I've seen more VMs fail than dedicated boxes...


Enter hyperconverged clusters.


----------



## herkalurk

Quote:


> Originally Posted by *loud681*
> 
> I've seen more VMs fail than dedicated boxes...


Who's building those crap VMs....


----------



## Versa

Quote:


> Originally Posted by *loud681*
> 
> I've seen more VMs fail than dedicated boxes...


How is this possible? I've never seen a production VM just fail; any failures in my test labs or elsewhere have been due to software testing or user error.

Also, snapshots are a thing.


----------



## loud681

Let's just say that there are people out there who don't think spending money on their IT infrastructure is a top priority until critical systems fail, lol.


----------



## littleredwagen

Quote:


> Originally Posted by *KyadCK*
> 
> ESXi is free as a whole, and I'd love you to try to name some of those advantages. I can think of 20 off the top of my head in favor of ESXi.


I have used ESXi and it is quite good; I'm not arguing that. When running Server 2012 as a VM, the biggest advantage is around checkpoint/snapshot issues: Hyper-V will issue a VM Generation ID that helps guard against split-brain (yes, I know snapshots are no bueno on DCs). There are also the Integration Services it auto-assigns to the guest. Server 2012 also does native NIC teaming, so you no longer need to assign dedicated NICs to each VM; even if one NIC is rarely used and the others are maxed out, it will allow access to those NICs. Personally, Hyper-V being Active Directory integrated is the best part for me, as it allows central management of all Windows Servers. If you run a mixed environment with, say, more Linux than Windows, a lot of those advantages go away, so ESXi may be the better option. In the end, we all use what we are most familiar with; both ESXi and Hyper-V are great hypervisors for enterprise/server applications.

I also forgot that Hyper-V 2012 supports domain controller cloning. There are a few steps, but it works very well and makes it easy to spin up another DC without starting from scratch.
Quote:


> Originally Posted by *loud681*
> 
> Lets just ay that there are people out there that don't think spending money on there IT infrastructure is a top priority until critical systems fail lol


The worst I've seen was when some IT company went to a former customer of mine (I had left that job, so they found someone else). There was a failed disk in the RAID 5 storage array, and they did not replace it with the same model disk; as a consequence, a VM would not start on that storage array. They of course blamed SharePoint, Windows, and everything else software-wise. They got in touch with me, I went and took a look, realized what they had done, moved the VM to another disk, and boom, the VM started instantly. So most of the VM failures I've seen are due to host issues.


----------



## littleredwagen

duplicate post


----------



## herkalurk

Quote:


> Originally Posted by *littleredwagen*
> 
> I also forgot Hyper-V 2012 supports Domain Controller Cloning, there are few steps but it works very well. makes it easy to spin up another DC without starting from scratch
> The worst i've had was when some IT company went to a former customer of mine, left that job so they found someone else. There was a failed disk in the RAID 5 storage array. They did not replace it with the same model disk, and as a consequence the VM would not start on that storage array. They of course blamed sharepoint, windows and everything else software wise. They get in touch with me I go take a look realized what they did moved the VM to another disk and boom the VM started instantly. So most of the VMs I've seen failed are due to host issues


I worked for a company that liked to cheap out on everything infrastructure. The only thing they purchased for their hosting backend was a pair of new switches (two Ciscos in a virtual chassis, only $7,000), and only because the current switches blew up and cost us two days of downtime for our clients; they lost about 20% of their clients because of it.

When I left, they asked what one of my reasons was, and it was that the place I was going wasn't skimping on infrastructure. They tried to offer me more money, and I asked: if you're willing to pay me more, why not put more into the equipment? They didn't have a good response.


----------



## littleredwagen

Quote:


> Originally Posted by *herkalurk*
> 
> I worked for a company that liked to cheap out on everything infrastructure. The only thing they purchased for their hosting backend was a pair of new switches(2 Ciscos in a virtual chassis, only $7000), only because the current switches blew up and cost us 2 days of downtime for our clients, and they lost about 20% of their clients because of it.
> 
> When I left they asked what one of the reasons was, and it was the place I was going wasn't skimping on infrastructure. They tried to offer me more money and I asked if you're willing to pay me more why not put more into the equipment? They didn't have a good response.


Wow, good move.


----------



## herkalurk

Yeah, this move suited me long term. That company was small and they were good to their employees, but for my job they didn't really invest in infrastructure even though they hosted on their own. To be fair, my boss believed that in a few years they would just cloud-host everything instead of self-hosting, to reduce the need for admins.


----------



## nexxusty

Quote:


> Originally Posted by *littleredwagen*
> 
> I have used ESXI and it is quite good not arguing that. When running 2012 Server as VMs the biggest is the checkpoint/snapshot issues. HyperV will issue a VM Generation ID that help guard against the split brain (yes I know snaphots are no bueno on DCs) The integrated Integration services it auto assigns to the guest. Server 2012 also does native nic teaming so you no longer need to assign dedicated nics to each VM even if one is rarely used and the others are maxed out, It will allow access to those nics. Personally for me HyperV being Active directory integrated is the best it allows for central management of all Windows Servers. If you run a mixed environment with say more linux than windows a lot of those go away so ESXI be a better option. In the end we all use what we are most familiar with Both ESXI and HyperV are great Hypervisors for enterprise/server applications.
> 
> I also forgot Hyper-V 2012 supports Domain Controller Cloning, there are few steps but it works very well. makes it easy to spin up another DC without starting from scratch
> The worst i've had was when some IT company went to a former customer of mine, left that job so they found someone else. There was a failed disk in the RAID 5 storage array. They did not replace it with the same model disk, and as a consequence the VM would not start on that storage array. They of course blamed sharepoint, windows and everything else software wise. They get in touch with me I go take a look realized what they did moved the VM to another disk and boom the VM started instantly. So most of the VMs I've seen failed are due to host issues


Try Proxmox.


----------



## EvilMonk

Quote:


> Originally Posted by *loud681*
> 
> I tend to prefer to have a dedicated physical computer as a DC since it is the core of a domain. But since this isn't a production environment i would have a VM DC on each server for failover.....just a thought


Seriously? At work all our DCs run on VMware with vMotion/DRS. Running a DC in a VM isn't going to slow it down if you properly prioritize the resources allocated to it.


----------



## EvilMonk

Quote:


> Originally Posted by *loud681*
> 
> I've seen more VM's fail then dedicated boxes.......


Well, those VMs were probably built by someone who doesn't know what he's doing with servers / server OSes and has never heard of vMotion / DRS / high-availability architectures...


----------



## herkalurk

Quote:


> Originally Posted by *EvilMonk*
> 
> Well those VM were probably built by someone who doesn't know what he's doing with servers / server OSes and never heard of VMotion / DRS / high availability architectures...


Some people don't want to spend the money. I worked for a customer that had HA but no DRS. Their cluster was so tight on RAM that they kept a spreadsheet with each VM, its memory, the total available memory on each host, and a list of which VMs belonged to which host, so if a host went down, that's where you put them back. I asked them about VMware updates, since you need to restart hosts to apply them. They hadn't updated VMware since 5.0, at a point when 5.5 had been out for over a year...
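
That spreadsheet workflow (each VM's memory, each host's free RAM, where everything goes back after a failure) is essentially manual bin packing, which is the part DRS automates. Here's a toy first-fit sketch of the idea; the VM and host names and sizes are made-up illustrative numbers, not from the post.

```python
def place_vms(vms, hosts):
    """Assign each VM (name -> GB of RAM) to the first host with enough free RAM.

    `hosts` maps host name -> total available GB. Raises if nothing fits,
    which is the manual-spreadsheet equivalent of an over-committed cluster.
    """
    free = dict(hosts)  # remaining RAM per host
    placement = {}
    # Place the biggest VMs first, as a simple first-fit-decreasing heuristic.
    for vm, ram in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, avail in free.items():
            if avail >= ram:
                placement[vm] = host
                free[host] -= ram
                break
        else:
            raise RuntimeError(f"no host has {ram} GB free for {vm}")
    return placement


print(place_vms({"dc1": 4, "plex": 8, "ts3": 2}, {"esx1": 10, "esx2": 8}))
# {'plex': 'esx1', 'dc1': 'esx2', 'ts3': 'esx1'}
```

Real DRS of course also weighs CPU, affinity rules, and live load, but the core placement problem is the same.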


----------



## Liranan

I was going to post photos of my server, but I've had to send everything back. I couldn't get the board, CPU, and RAM to work together, so after trying several different sticks (registered and unbuffered) and even two CPUs (an X3430 and a G6950), I'm getting a refund and going with an AMD AM3 CPU and an Asus board. The FreeNAS forums can claim AMD is no good all they like, but the one stick of unbuffered RAM works just fine with my FX-8320 and not with the Intel system, so after two weeks of fighting with it I am going with an AMD system. As I intend to run RAID 5, I might just install Windows 8.1 and run Storage Spaces with parity.


----------



## vaeron

I've updated my servers / am in the process of upgrading them. I just moved into a new place that doesn't have room for my 48U rack... I'm kind of bummed about it. That being said, this setup will work for now.



So let's get started (from bottom up)

*Dell PowerEdge R710*

2x Quad Core Xeon
64 GB DDR3 *work in progress*
6 x 600 GB 15k 3.5" SAS drives *work in progress*
ESXi 6
*IBM System X3690 X5*

2x Intel Xeon E7-2803 6-core @ 1.73GHz
24x 8 GB DDR3 ECC (Total 192 GB)
IBM ServeRaid M1015
8x 146 GB 15k RPM 2.5" Hard Drives *work in progress, have 2*
*Dell PowerEdge 2850*

2x Dual Core Xeon
16 GB DDR2 ECC *work in progress, have 8 GB*
34x 300 GB 10k RPM 3.5" SCSI (6 in the 2850, 14x in each Powervault 220s)
*Powervault 220s*

*Imaging Station*

Going from the bottom up: there is currently no OS installed on the PowerEdge R710. I don't have the drives for it yet, so I don't have a way to run anything; I'm hoping that after I finish paying some medical bills I'll be able to get back to my toys. The System x3690 has ESXi 6 with several different VMs running: I'm running my domain through a couple of Windows Server 2012 R2 installs, I have an Ubuntu server running and authenticating through AD, and a Minecraft server set up for now. Every VM has its own physical connection, and I've barely touched the system resources on this beast. The PowerEdge 2850 has FreeNAS installed and is directly connected to the x3690, serving the storage over two gigabit network connections. I'm running a Cisco 24-port managed switch with PoE and a Meraki MR12 PoE AP.

My imaging station is used to rebuild workstations for my clients. It's very basic now, as I don't have my normal setup anymore, but it's still functional. I'm hoping to get my 8-port KVM set up again so I can rack them all. That's all for now. If y'all have any suggestions or


----------



## nexxusty

Quote:


> Originally Posted by *vaeron*
> 
> I've updated my servers / in the process of upgrading servers. I just moved in to a new place that doesn't have room for my 48u rack...I'm kind of bummed about it. That being said, this set up will work for now.
> 
> 
> 
> So let's get started (from bottom up)
> 
> *Dell PowerEdge R710*
> 
> 2x Quad Core Xeon
> 64 GB DDR3 *work in progress*
> 6 x 600 GB 15k 3.5" SAS drives *work in progress*
> ESXi 6
> *IBM System X3690 X5*
> 
> 2x Intel Xeon E7-2803 6-core @ 1.73GHz
> 24x 8 GB DDR3 ECC (Total 192 GB)
> IBM ServeRaid M1015
> 8x 146 GB 15k RPM 2.5" Hard Drives *work in progress, have 2*
> *Dell PowerEdge 2850*
> 
> 2x Dual Core Xeon
> 16 GB DDR2 ECC *work in progress, have 8 GB*
> 34x 300 GB 10k RPM 3.5" SCSI (6 in the 2850, 14x in each Powervault 220s)
> *Powervault 220s*
> 
> *Imaging Station*
> 
> Going from the bottom up, there is currently no OS installed on the PowerEdge R710. I don't have the drives in for it yet so don't have a way to run anything. I'm hoping that after I finish paying some medical bills I'll be able to get back to my toys. The System X3690 has ESXi 6 with several different VMs running. I'm running my domain through a couple of Windows Server 2012 R2 installs, I have an Ubuntu server running and authenticating through AD, and a Minecraft server set up for now. Every VM has its own physical connection. I've barely touched the system resources on this beast. The PowerEdge 2850 has FreeNAS installed and is directly connected to the X3690 running the storage over 2 gigabit network connections. I'm running a Cisco 24 port managed switch w/POE and a Meraki MR12 POE AP.
> 
> My imaging station is used to rebuild workstations for my clients. It's very basic now as I don't have my normal setup anymore, but it still functional. I'm hoping to get my 8 port KVM set up again so I can rack them all. That's all for now. If y'all have any suggestions or


I don't understand the fascination with ESXi.... I find Proxmox to be much better.


----------



## spinFX

Description / Usage: *Backups, Media & Network Storage*

OS: *Ubuntu Server 14.04 LTS* (headless)
Case: *Phobia Open-Air Bench Case*
CPU: *X5660* (not oc'd yet)
Motherboard: *Asus P6X58D-E*
Memory: *24GB (6x4GB) DDR3*
PSU: *Thermaltake Toughpower 750Watt*
OS SSD (If you have one): *Samsung 850 Evo 250GB*
SAS/HBA: LSI 9211-8i Host Bus Adapter (2x SAS = 8x SATA 6Gbps)
Storage HDD(s):
Currently have a SnapRAID pool set up with 1 parity disk (soon to be 2) and 5 data disks (one isn't in the system right now).

Samsung 840 Evo 250GB (For VM Vdisks)
2TB WD Blue (Data)
2TB WD Black (Data)
3TB WD Blue (Data)
2TB WD Green (Data)
4TB WD Black (Parity)
4TB WD Red (Data - Currently getting filled up from a mates server)
Server Manufacturer: *Me (re-purposing an old, but still very capable, rig)*

You can see in one of the pics there's a 2-port HP gigabit NIC with an x1 PCIe connection to go in the last free PCIe slot on the board, for a total of three gigabit ports. I'll probably dedicate one to the VM, one to the server for a direct connection to the net, and one for an always-on VPN connection that certain traffic is routed through.
Alternatively, I'll let the VM share the NIC with the host and aggregate the two ports on the HP NIC to get full bandwidth to multiple machines from the server. (If anyone has any tips or tricks for this on Ubuntu Server 14.04 LTS, my ears are open.)

Next plan is to set up a 6-disk ZFS array (three 2-disk mirror vdevs striped together), which is apparently the fastest setup as well as extremely reliable, for another backup server and fast network-attached storage. (SnapRAID performance is the same as a single disk, since there are no on-the-fly parity calculations and data is not striped across disks, and I would like some faster, secure, large-volume storage than that if possible.)
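
To make the trade-off concrete, here's a rough capacity sketch comparing the striped-mirror layout described above with a single-parity SnapRAID-style pool of the same disks. The disk size (4 TB) and count are illustrative assumptions, not figures from the post.

```python
def striped_mirrors_usable(disk_tb, n_disks, mirror_width=2):
    """Usable TB when disks are grouped into mirrors and the mirrors are striped.

    Each mirror contributes one disk's worth of space; a mirror survives
    the loss of any (mirror_width - 1) of its members.
    """
    vdevs = n_disks // mirror_width
    return vdevs * disk_tb


def single_parity_usable(disk_tb, n_disks, parity_disks=1):
    """Usable TB with dedicated parity disks (SnapRAID-style)."""
    return (n_disks - parity_disks) * disk_tb


print(striped_mirrors_usable(4.0, 6))  # 12.0 TB usable; faster, heavier redundancy
print(single_parity_usable(4.0, 6))    # 20.0 TB usable; single-disk performance
```

The mirrors give up capacity for speed and per-vdev redundancy, which is exactly the trade the post is weighing against SnapRAID's single-disk performance.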


----------



## vaeron

Quote:


> Originally Posted by *nexxusty*
> 
> I don't understand the fascination with ESXi.... I find Proxmox to be much better.


I've used ESXi for years and it's never let me down. Never used Proxmox. What do you find that is better about it?


----------



## nexxusty

Quote:


> Originally Posted by *vaeron*
> 
> I've used ESXi for years and it's never let me down. Never used Proxmox. What do you find that is better about it?


Same here for Proxmox. I haven't used ESXi as Proxmox has served me well.

I just don't understand everyone using ESXi for a VM box instead of Proxmox. Is ESXi free?


----------



## vaeron

Yes, it is. It also came embedded on all of my servers and just works.


----------



## EvilMonk

Quote:


> Originally Posted by *nexxusty*
> 
> I don't understand the fascination with ESXi.... I find Proxmox to be much better.


Well, for some of us sysadmins who work for businesses that pay for us to be certified with VMware, it's a lot easier to have our own servers on ESXi when we work with it on a daily basis.







We use vSphere at work, so it's running with a vCenter server, but it's roughly the same as ESXi minus all the paid features exclusive to vSphere.


----------



## vaeron

Quote:


> Originally Posted by *EvilMonk*
> 
> Well for some of us sys admins who are working in businesses that are paying for us to be certified with VMware, it's a lot easier to have our own servers on ESXi when we work with it on a daily basis
> 
> 
> 
> 
> 
> 
> 
> We use VSphere at work so it's running with a VCenter server but it's roughly the same as ESXi minus all the paid features exclusive to VSphere.


Also... this.


----------



## ondoy

waiting for my case to arrive...


----------



## ElectroGeek007

Finally bit the bullet and built a proper NAS for my media collection.









OS: FreeNAS 9.10
Case: Corsair 400R
CPU: Intel Pentium G3460 (3.5 GHz)
Motherboard: Supermicro MBD-X10SLL-F-O (Micro ATX)
Memory: Crucial 16GB (2 x 8GB) ECC Unbuffered DDR3L 1600MHz RAM (CT2KIT102472BD160B)
PSU: EVGA SuperNOVA 550 G2
OS HDD: 16GB SanDisk USB 3.0 Flash Drive
RAID Card: LSI 9211-8i (flashed to IT Mode)
Storage HDD(s): 7x6TB White Label Hard Drives (these ones)
Array Setup: ZFS RAIDZ2 (~38TB raw, 24.5TB usable)
Server Manufacturer: Me!
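
For anyone puzzling over the raw/usable figures above: drives are sold in decimal terabytes (10^12 bytes) while most tools report binary tebibytes (2^40 bytes), and RAIDZ2 reserves two disks' worth of space for parity. A quick back-of-the-envelope check:

```python
TB = 10**12   # decimal terabyte, as marketed
TiB = 2**40   # binary tebibyte, as most tools report


def raidz2_capacity(n_disks, disk_tb):
    """Return (raw TiB, pre-overhead data TiB) for a RAIDZ2 vdev."""
    raw_bytes = n_disks * disk_tb * TB
    data_bytes = (n_disks - 2) * disk_tb * TB  # two disks' worth of parity
    return raw_bytes / TiB, data_bytes / TiB


raw, data = raidz2_capacity(7, 6)
print(f"raw: {raw:.1f} TiB, pre-overhead usable: {data:.1f} TiB")
# raw: 38.2 TiB, pre-overhead usable: 27.3 TiB
```

The ~38 TB raw line matches 42 decimal TB expressed in TiB; the gap between the ~27.3 TiB of pre-overhead data space and the reported 24.5 TB usable comes from ZFS metadata and reservation overhead.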


----------



## herkalurk

Quote:


> Originally Posted by *ElectroGeek007*
> 
> Finally bit the bullet and built a proper NAS for my media collection.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> OS: FreeNAS 9.10
> Case: Corsair 400R
> CPU: Intel Pentium G3460 (3.5 GHz)
> Motherboard: Supermicro MBD-X10SLL-F-O (Micro ATX)
> Memory: Crucial 16GB (2 x 8GB) ECC Unbuffered DDR3L 1600MHz RAM (CT2KIT102472BD160B)
> PSU: EVGA SuperNOVA 550 G2
> OS HDD: 16GB SanDisk USB 3.0 Flash Drive
> RAID Card: LSI 9211-8i (flashed to IT Mode)
> Storage HDD(s): 7x6TB White Label Hard Drives (these ones)
> Array Setup: ZFS RAIDZ2 (~38TB raw, 24.5TB usable)
> Server Manufacturer: Me!


Looks like an effective little box. One question though: why did you buy a RAID card for ZFS? Isn't that software RAID?


----------



## stumped

Quote:


> Originally Posted by *herkalurk*
> 
> Looks like an effective little box, one question though, why did you buy a raid card for ZFS? Isn't that a software raid?


As noted, the card was flashed to IT mode, which means it can do pass-through (presenting each disk directly to the OS). Having a card like this can sometimes increase overall throughput. Also, it could be that the Supermicro board didn't have enough SATA ports for the seven drives being used.


----------



## ndoggfromhell

Why the "white label" hard drives? I know they're less money, but a one-year warranty on a no-name drive makes me nervous. Are you storing stuff that isn't important if it's lost? I noticed no optical drive listed, so I'm guessing you're not backing up to disc media, and obviously not tape either.
Quote:


> Originally Posted by *ElectroGeek007*
> 
> Finally bit the bullet and built a proper NAS for my media collection.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> OS: FreeNAS 9.10
> Case: Corsair 400R
> CPU: Intel Pentium G3460 (3.5 GHz)
> Motherboard: Supermicro MBD-X10SLL-F-O (Micro ATX)
> Memory: Crucial 16GB (2 x 8GB) ECC Unbuffered DDR3L 1600MHz RAM (CT2KIT102472BD160B)
> PSU: EVGA SuperNOVA 550 G2
> OS HDD: 16GB SanDisk USB 3.0 Flash Drive
> RAID Card: LSI 9211-8i (flashed to IT Mode)
> Storage HDD(s): 7x6TB White Label Hard Drives (these ones)
> Array Setup: ZFS RAIDZ2 (~38TB raw, 24.5TB usable)
> Server Manufacturer: Me!


----------



## nexxusty

Quote:


> Originally Posted by *ndoggfromhell*
> 
> Why the "white label" hard drives? I know they're less money, but a 1 year warranty and a no-name drive makes me nervous. Are you storing stuff that's not important if it's lost? I noticed no optical drive listed, so i'm guess you're not backing up to disc media and obviously not tape either.
> Quote:
> 
> 
> 
> Originally Posted by *ElectroGeek007*
> 
> Finally bit the bullet and built a proper NAS for my media collection.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> OS: FreeNAS 9.10
> Case: Corsair 400R
> CPU: Intel Pentium G3460 (3.5 GHz)
> Motherboard: Supermicro MBD-X10SLL-F-O (Micro ATX)
> Memory: Crucial 16GB (2 x 8GB) ECC Unbuffered DDR3L 1600MHz RAM (CT2KIT102472BD160B)
> PSU: EVGA SuperNOVA 550 G2
> OS HDD: 16GB SanDisk USB 3.0 Flash Drive
> RAID Card: LSI 9211-8i (flashed to IT Mode)
> Storage HDD(s): 7x6TB White Label Hard Drives (these ones)
> Array Setup: ZFS RAIDZ2 (~38TB raw, 24.5TB usable)
> Server Manufacturer: Me!


They're Seagate drives.... that's really all that needs to be said.

I do not understand why people support Seagate anymore. They are a joke. ALWAYS buy WD. Always.


----------



## ElectroGeek007

These WL drives are most likely non-branded WD Red drives according to what I've read, not sure where you got Seagate from (I agree with your opinion of Seagate btw, just had one of their drives start failing on me the other day). I chose the WL drives pretty much because of the cost, and I gambled on most drives that fail probably failing within the first year. Time will tell if that was a smart choice or not.







Most of the stuff I have on this system is replaceable/re-downloadable, and everything important is backed up elsewhere anyway. And yes, the RAID card is there because the motherboard didn't have enough ports, and this model was the recommended solution over on the FreeNAS forums (I need the SMART data for each drive to be accessible, which may not be the case with other types of SATA cards).
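For anyone puzzling over the "~38TB raw, 24.5TB usable" numbers in the build above, they roughly line up once you account for decimal-TB drives being reported in binary TiB, plus RAIDZ2's two parity drives. A back-of-envelope sketch (this ignores ZFS metadata, padding, and reserved space, which is where the remaining gap from ~27.3 down to 24.5 comes from):

```python
def raidz_capacity_tib(num_drives, drive_tb, parity=2):
    """Rough raw and usable capacity of a RAIDZ vdev, in TiB.

    Drives are sold in decimal TB (10**12 bytes) but reported in
    binary TiB (2**40 bytes), and `parity` drives' worth of space
    goes to parity (2 for RAIDZ2). Ignores ZFS metadata, padding,
    and reserved space, so real usable numbers run a few TiB lower.
    """
    tib_per_tb = 10**12 / 2**40
    raw = num_drives * drive_tb * tib_per_tb
    usable = (num_drives - parity) * drive_tb * tib_per_tb
    return raw, usable

# 7x 6TB in RAIDZ2, as in the build above:
raw, usable = raidz_capacity_tib(7, 6)
print(round(raw, 1), round(usable, 1))  # 38.2 27.3
```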


----------



## nexxusty

Quote:


> Originally Posted by *ElectroGeek007*
> 
> These WL drives are most likely non-branded WD Reds, according to what I've read; not sure where you got Seagate from (I agree with your opinion of Seagate btw, just had one of their drives start failing on me the other day). I chose the WL drives mostly because of the cost, gambling that most drives that fail will do so within the first year. Time will tell if that was a smart choice or not.
> 
> 
> 
> 
> 
> 
> 
> Most of the stuff I have on this system is replaceable/re-downloadable, and everything important is backed up elsewhere anyway. And yes, the RAID card is there because the motherboard didn't have enough ports, and this model was the recommended solution over on the FreeNAS forums (I need the SMART data for each drive to be accessible, which may not be the case with other types of SATA cards).


Looks like a Seagate logo in the picture?

***, am I crazy here? Lol.


----------



## ElectroGeek007

I don't want to pull one out of my server right now, so here is a picture of it I found. Not a Seagate logo


----------



## nexxusty

Quote:


> Originally Posted by *ElectroGeek007*
> 
> I don't want to pull one out of my server right now, so here is a picture of it I found. Not a Seagate logo


That would be what I saw... man I totally thought it was a Seagate.

My bad there.


----------



## loud681

Never seen that brand of hard drive before


----------



## parityboy

Quote:


> Originally Posted by *loud681*
> 
> Never seen that brand of hard drive before


It's a Western Digital, _definitely_.


----------



## Zeus

Quote:


> Originally Posted by *ElectroGeek007*
> 
> I don't want to pull one out of my server right now, so here is a picture of it I found. Not a Seagate logo


That looks like a WD 6TB Red (NASware 3.0 version). Drive on the left of the image below


----------



## Aussiejuggalo

Just rebuilt my games "server"








CPU: Intel Xeon E3 1245 V5.
Motherboard: MSI C236M Workstation.
RAM: Crucial CT8G4WFD8213 8GB DDR4 ECC 2133MHz x2.
SSD 1: Samsung 750 EVO 120GB (Windows).
SSD 2: Samsung 750 EVO 120GB (Servers).
HDD 1: Western Digital Red.
PSU: Corsair VS350 350w.
Case: Fractal Design Core 1000.
CPU Cooler: Corsair H80i V2.
OS: Windows 10 Enterprise 64 Bit.

Fitting the H80i was a bit of a pain; I had to put the block in first, then the rad, but I got it in. The tubes only just touch the panel as well.



It's not as neat as I would have liked, but there's nowhere to hide the cables. The red SATA cable is so I know which drive is Windows.



Hid all the useless cables (front USB 2.0, audio & excess USB 3.0) as well as most of the power / LED cables (the power button doesn't work for some reason on mine).



Next to my NAS. It was a pain squeezing the old Fractal R4 in, but the Core 1000 fits no problem.



Pretty happy with the temps I'm getting from the H80i; on the silent profile with the CPU at 100% it's not cracking 50°C. This thing folds 24/7, so these temps are perfect.


----------



## stevef9432203

Box I've built up for fun and therapy. Had a stroke, did this as physical therapy.

CPUs: Xeon E5-2690 x 2
Mobo: Asus Z10PE-D16 WS
RAM: Crucial 16GB registered 2133 x 8
PSU: Corsair 860i
GPU: Gigabyte GTX 970 G1 x 2
Dual Antec H2O 1250 CLC with custom software
Drives: mixed bag, dual Seagate SSHD hybrid boot drives (Win10, Fedora 24)
Seagate 4TB data volume using bcache against a Samsung 250GB drive as a caching drive
USB3 drive stack for misc Seagate drives, for Serviio DLNA video use
Front LCD: Crystalfontz 635 + SCAB fan controller
Multimedia bay + Blu-ray drive
Extra USB ports out the ass for all the extras














Dual Xeon 12 core, 128GB RAM, Win10 & Linux,
BOINC and other projects


----------



## nexxusty

Quote:


> Originally Posted by *stevef9432203*
> 
> Box I've built up for fun and therapy. Had a stroke, did this as physical therapy.
> 
> CPUs: Xeon E5-2690 x 2
> Mobo: Asus Z10PE-D16 WS
> RAM: Crucial 16GB registered 2133 x 8
> PSU: Corsair 860i
> GPU: Gigabyte GTX 970 G1 x 2
> Dual Antec H2O 1250 CLC with custom software
> Drives: mixed bag, dual Seagate SSHD hybrid boot drives (Win10, Fedora 24)
> Seagate 4TB data volume using bcache against a Samsung 250GB drive as a caching drive
> USB3 drive stack for misc Seagate drives, for Serviio DLNA video use
> Front LCD: Crystalfontz 635 + SCAB fan controller
> Multimedia bay + Blu-ray drive
> Extra USB ports out the ass for all the extras
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Dual Xeon 12 core, 128GB RAM, Win10 & Linux,
> BOINC and other projects


Noice box bro!

Sorry to hear of your health troubles. You'll bounce back, just don't push things too hard.

Great idea building a box and vegging out for awhile.

To your health!


----------



## Unknownm

This bad boy is running an Atheros AR9344 @ 560MHz. Got my USB HDD hooked up, sharing my files across my network.


----------



## nexxusty

Quote:


> Originally Posted by *Unknownm*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This bad boy is running Atheros AR9344 @ 560MHz . Got my USB HDD hooked up sharing my files across my network


Lol. Technically a server...


----------



## CSCoder4ever

what firmware/OS though?


----------



## Unknownm

Quote:


> Originally Posted by *CSCoder4ever*
> 
> what firmware/OS though?


Soon to be OpenWrt, once I figure out how to install the GUI


----------



## cones

Quote:


> Originally Posted by *Unknownm*
> 
> Soon to be open wrt once I figure out how to install the GUI


It's just a command through SSH.


----------



## CSCoder4ever

Quote:


> Originally Posted by *Unknownm*
> 
> Quote:
> 
> 
> 
> Originally Posted by *CSCoder4ever*
> 
> what firmware/OS though?
> 
> 
> 
> Soon to be open wrt once I figure out how to install the GUI

neato. Does this mean my routers are also servers if I connect some form of storage to them?








Quote:


> Originally Posted by *cones*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Unknownm*
> 
> Soon to be open wrt once I figure out how to install the GUI
> 
> 
> 
> It's just a command through SSH.

yep.


----------



## Unknownm

Quote:


> Originally Posted by *CSCoder4ever*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Unknownm*
> 
> Quote:
> 
> 
> 
> Originally Posted by *CSCoder4ever*
> 
> what firmware/OS though?
> 
> 
> 
> Soon to be open wrt once I figure out how to install the GUI
> 
> 
> neato. Does this mean my routers are also servers if I connect some form of storage on them?
> 
> 
> 
> 
> 
> 
> 
> 
> 

Quote:


> Originally Posted by *Jtvd78*
> 
> Just post a picture of your server setup so everyone here can see your Amazing servers. The more pictures the better
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The post format should be as follows:
> 
> Description / Usage(Print server, backups, file server, etc.)
> 
> *OS*:
> *Case*:
> *CPU*:
> *Motherboard*:
> *Memory*:
> *PSU*:
> *OS HDD *(If you have one):
> 
> *Storage HDD(s)*:
> *Server Manufacturer *(Ex: Dell, HP, You?)
> :
> 
> PICS PICS PICS!!!!


My N750 has everything listed here

WD OS (Soon Open WRT, Linux)
WD Case
ARM CPU
Unknown Motherboard
128MB DDR2
External PSU
OS ROM (16MB)

It's configured as a DHCP server on a static IP, and it does have an HDD with FTP access. Unless "server" is defined as custom-built only; I don't see that in the first post, so I think routers are fine to post up!


----------



## ondoy

all BOINC...


----------



## unothegreat

Quote:


> Originally Posted by *ondoy*
> 
> 
> 
> 
> 
> all BOINC...


***......is that a dual xeon MATX board???? WHERE DID YOU GET IT!!!???? I MUST HAVE IT!!!


----------



## TheBloodEagle

=O It's so rare to even find an ATX version now but an M-ATX? Wow. I'd love that.


----------



## nexxusty

Have never seen mATX dual cpu.

That's impressive.

*edit*

Looking closer, I'm sure it's ATX. It just looks smaller because of the heatsinks. Not to sound like a pompous ass, but if I don't know about it, it usually doesn't exist.

Lol.


----------



## Blackstare

Look at where the first PCIe slot is; I don't think that's a mATX board.


----------



## nexxusty

Quote:


> Originally Posted by *Blackstare*
> 
> Look where the first PCIe slot is, I dont think that's a matx board.


Agreed.


----------



## Liranan

It's definitely ATX.


----------



## ondoy

it's an ATX board from Supermicro, the X10DAL


----------



## littleredwagen

So I had intended to post an actual picture of my server(s) but it was too dark, so I will post later
But:

Server 1: HP Proliant DL380 G6
CPU: 2 x Xeon X5560 (8 Cores / 16 Threads)
Memory: 48GB DDR3R
Drives 8 x 146GB 10k SAS Drives (in Raid 5)
Nic: 4 Port Broadcom Server Nic (all Teamed together)

Server 2: Intel S5520HCT based Server
CPU: 2 x Xeon E5649 (12 Cores / 24 Threads)
Memory: 64GB DDR3R
Drives: 2 x 1TB 7200 (in Mirror) 4 x 2TB 7200 (in Raid 5)
Nic 1 x 4 Port Intel Server Nic, 2 x Onboard Intel PHYs, (all Teamed together)

Both servers run Windows Server 2012 R2 w/ Hyper-V, joined to the domain controllers on the VMs. I use these for testing and learning. My Plex is hosted on my Synology for now.


----------



## nexxusty

Quote:


> Originally Posted by *littleredwagen*
> 
> So I had intended to post an actual picture of my server(s) but it was too dark, so I will post later
> But:
> 
> Server 1: HP Proliant DL380 G6
> CPU: 2 x Xeon X5560 (8 Cores / 16 Threads)
> Memory: 48GB DDR3R
> Drives 8 x 146GB 10k SAS Drives (in Raid 5)
> Nic: 4 Port Broadcom Server Nic (all Teamed together)
> 
> Server 2: Intel S5520HCT based Server
> CPU: 2 x Xeon E5649 (12 Cores / 24 Threads)
> Memory: 64GB DDR3R
> Drives: 2 x 1TB 7200 (in Mirror) 4 x 2TB 7200 (in Raid 5)
> Nic 1 x 4 Port Intel Server Nic, 2 x Onboard Intel PHYs, (all Teamed together)
> 
> Both Servers run Windows 2012 R2 Server /w Hyper-V. Joined to the Domain Controllers on the VMs. Use these testing and learning. My Plex is hosted on my Synology for now


DSM is the best. You'd better be VPN'ing and Videostation'ing off with that.

Heh.


----------



## littleredwagen

Quote:


> Originally Posted by *nexxusty*
> 
> DSM is the best. You'd better be VPN'ing and Videostation'ing off with that.
> 
> Heh.


I have no desire to move Plex to the Windows servers; as of right now I do very limited VPN'ing into my home network.


----------



## nexxusty

Quote:


> Originally Posted by *littleredwagen*
> 
> I have no desire to move plex to the windows servers, as of right now I do very limited VPN'ing into my home network.


Get rid of Plex and use Kodi....


----------



## littleredwagen

Quote:


> Originally Posted by *nexxusty*
> 
> Get rid of Plex and use Kodi....


Eventually


----------



## micul

Intel DQ67SW
Intel 2500S
16GB Corsair
1 X 120GB Kingston SSD for OS Windows Essentials 2012
3 X 1TB Seagate HDD for Backup - Two-Way Mirror
2 X 1TB WD Blacks - Two-Way Mirror
1 X 1TB WD Blue
1 X Syba 2 Port Sata card
1 X Syba 4 Port Sata card
I am using it mainly for backups, file serving, and hosting a media server using Plex.
Future upgrades will be replacing the 3 Seagates with 2TB WD Reds, a RAID card, and possibly a NIC.


----------



## bobfig

Quote:


> Originally Posted by *micul*
> 
> 
> 
> Intel DQ67SW
> Intel 2500S
> 16GB Corsair
> 1 X 120GB Kingston SSD for OS Windows Essentials 2012
> 3 X 1TB Seagate HDD for Backup - Two-Way Mirror
> 2 X 1TB WD Blacks - Two-Way Mirror
> 1 X 1TB WD Blue
> 1 X Syba 2 Port Sata card
> 1 X Syba 4 Port Sata card
> I am using it mainly for backups , file , hosting a media server using Plex
> Future upgrades will be replacing the 3 Seagates with 2TB WD REDs , raid card and possibly a NIC


Nice start for a server. The only thing I would say is to go with some 3TB+ drives, as it is much nicer having more space; when I was redoing my drives, the 3TB size had the best TB per $. WD Reds are a good choice, but if you want more speed, go with the HGST drives for around $20 more.

As for the RAID card, a nice cheap one is the LSI 9650SE-8LPML, which should fit and reads 4TB+ drives once the firmware is updated to the latest. I have one in my server and it works perfectly. I have an extra one that should work fine, but it doesn't have the backup battery, and to buy that alone they want your first born and maybe a kidney. That's why I ended up getting another that came with one.

Here is one that they may ship to you, though it's only a 4-drive version: http://www.ebay.com/itm/AMCC-3Ware-PCI-E-9650SE-4-8LPML-RAID-Controller-W-BBU-Module-03-04-/112020890984?hash=item1a14f72568


----------



## Dalchi Frusche

Quote:


> Originally Posted by *nexxusty*
> 
> Get rid of Plex and use Kodi....


I picked Plex over Kodi because Plex transcodes on the server and not on the end devices.


----------



## bobfig

Still emby > plex.


----------



## nexxusty

Quote:


> Originally Posted by *bobfig*
> 
> Still emby > plex.


Just from the feature set comparison, you seem correct.

Emby hmm?


----------



## cones

Yup like Emby also.


----------



## herkalurk

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> I picked PLEX over kodi because PLEX transcodes on the server and not the end devices.


Since when does kodi transcode? Kodi just plays the source.


----------



## loud681

Plex only transcodes media when syncing to a portable device


----------



## cones

Quote:


> Originally Posted by *loud681*
> 
> Plex only transcodes media when syncing to a portable device


It should whenever the device doesn't support the format of the media. Syncing or streaming doesn't matter; if it is not in a format the device can use, it will transcode it. There is probably also a setting to always transcode when syncing.
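For what it's worth, the rule described here boils down to a simple compatibility check. A hypothetical sketch (this is not Plex's actual code, and the `roku` profile below is made up for illustration):

```python
# Illustrative sketch of a server's direct-play vs transcode decision:
# direct-play when the client supports the container and both codecs,
# transcode otherwise. Not Plex's real logic or API.
def playback_mode(media, client_profile):
    if (media["container"] in client_profile["containers"]
            and media["video_codec"] in client_profile["video_codecs"]
            and media["audio_codec"] in client_profile["audio_codecs"]):
        return "direct play"
    return "transcode"

# Hypothetical client profile for illustration:
roku = {"containers": {"mkv", "mp4"},
        "video_codecs": {"h264"},
        "audio_codecs": {"aac", "ac3"}}

print(playback_mode({"container": "mkv", "video_codec": "h264",
                     "audio_codec": "aac"}, roku))   # direct play
print(playback_mode({"container": "avi", "video_codec": "mpeg4",
                     "audio_codec": "mp3"}, roku))   # transcode
```

Whether you're syncing or streaming, the same check applies; the only difference is a possible "always transcode on sync" override.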


----------



## mbmumford

Speaking of Plex......

*Custom Plex / Folding Server*

*OS:* Windows 10 Pro
*Case:* Silverstone GD08
*CPU:* 2x Intel Xeon E5-2620 V4
*Motherboard:* Asus Z10PE-D8 WS
*Memory:* Crucial 16GB (4x 4GB) ECC Registered DDR4 2133 MHz
*PSU:* Corsair AX860
*OS HDD:* Samsung 950 PRO 256GB M.2 SSD
*Storage HDDs:* 3x WD 6TB Red 5400 RPM (Intel RSTe RAID5)
*Server Manufacturer:* Custom made by me.

I run this as a headless server operating 24/7 for streaming Plex to a few friends (and myself), and for CPU folding. With all 32 threads running 100% 24/7, I'm pulling about 180W and getting about 60K PPD.

Although this board has 2 ethernet ports, I'm only using 1 since I'm running a VPN. Considering how long it took to get the various programs to work with the VPN (especially Plex for remote access), I kind of regret not looking into how to direct traffic from specific programs to a specific port. Maybe in the future...

After spending nearly $5000 on this, I decided to hold off on getting a GPU for folding (1060 vs 1070 anyone?), and will be installing additional 6TB Red drives as needed. I was originally planning on installing 32GB of RAM, but honestly don't think I would ever need that much for my use.

(I'm sorry, but it's not pretty).

 
 
 
 
 
 


I had to cut the ODD bay in order to fit the cooler on CPU 2.


----------



## Versa

Quote:


> Originally Posted by *mbmumford*
> 
> Speaking of Plex......
> 
> *Custom Plex / Folding Server*
> 
> *OS:* Windows 10 Pro
> *Case:* Silverstone GD08
> *CPU:* 2x Intel Xeon E5-2620 V4
> *Motherboard:* Asus Z10PE-D8 WS
> *Memory:* Crucial 16GB (4x 4GB) ECC Registered DDR4 2133 MHz
> *PSU:* Corsair AX860
> *OS HDD:* Samsung 950 PRO 256GB M.2 SSD
> *Storage HDDs:* 3x WD 6TB Red 5400 RPM (Intel RSTe RAID5)
> *Server Manufacturer:* Custom made by me.
> 
> I run this as a headless server operating 24/7 for streaming Plex to a few friends (and myself), and for CPU folding. With all 32 threads running 100% 24/7, I'm pulling about 180W and getting about 60K PPD.
> 
> Although this board has 2 ethernet ports, I'm only using 1 since I'm running a VPN. Considering how long it took to get the various programs to work with the VPN (especially Plex for remote access), I kind of regret not looking into how to direct traffic from specific programs to a specific port. Maybe in the future...
> 
> After spending nearly $5000 on this, I decided to hold off on getting a GPU for folding (1060 vs 1070 anyone?), and will be installing additional 6TB Red drives as needed. I was originally planning on installing 32GB of RAM, but honestly don't think I would ever need that much for my use.
> 
> (I'm sorry, but it's not pretty).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had to cut the ODD bay in order to fit the cooler on CPU 2.


This is exactly what I wanted to do as a 2620 V4 build for an ESXi whitebox. What's the power draw at idle or during just a Plex transcode, and how's the noise?

Pretty nice setup


----------



## mbmumford

Idle is about 100W. I never measured it during a transcode; however, I barely see the CPU usage increase when I do transcode.

As for noise, I specifically chose Noctua for the 60mm PWM fans, along with the Low Noise Adaptors, to reduce the noise as much as possible. I was planning on replacing the 120mm stock case fans too; however, they are not bad at all. The PSU was my biggest issue, with coil whine at very low loads; however, that has since gone away.

Overall the unit is nearly silent. My headboard is directly on the other side of the wall and I can't hear it at all. My Asus G55VW-DH71 is louder.


----------



## TheBloodEagle

What are you using to control all 6 of the 4pin PWM fans? Just the auto based on temp in BIOS? Does the board have 6 4-pin connectors?

EDIT: NM, looked up the manual. It has 7 4-pin connectors for chassis fans; that's awesome. Are they true PWM signals?


----------



## LuckyJack456TX

Quote:


> Originally Posted by *mbmumford*
> 
> Speaking of Plex......
> 
> *Custom Plex / Folding Server*
> 
> *OS:* Windows 10 Pro
> *Case:* Silverstone GD08
> *CPU:* 2x Intel Xeon E5-2620 V4
> *Motherboard:* Asus Z10PE-D8 WS
> *Memory:* Crucial 16GB (4x 4GB) ECC Registered DDR4 2133 MHz
> *PSU:* Corsair AX860
> *OS HDD:* Samsung 950 PRO 256GB M.2 SSD
> *Storage HDDs:* 3x WD 6TB Red 5400 RPM (Intel RSTe RAID5)
> *Server Manufacturer:* Custom made by me.
> 
> I run this as a headless server operating 24/7 for streaming Plex to a few friends (and myself), and for CPU folding. With all 32 threads running 100% 24/7, I'm pulling about 180W and getting about 60K PPD.
> 
> Although this board has 2 ethernet ports, I'm only using 1 since I'm running a VPN. Considering how long it took to get the various programs to work with the VPN (especially Plex for remote access), I kind of regret not looking into how to direct traffic from specific programs to a specific port. Maybe in the future...
> 
> After spending nearly $5000 on this, I decided to hold off on getting a GPU for folding (1060 vs 1070 anyone?), and will be installing additional 6TB Red drives as needed. I was originally planning on installing 32GB of RAM, but honestly don't think I would ever need that much for my use.
> 
> (I'm sorry, but it's not pretty).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had to cut the ODD bay in order to fit the cooler on CPU 2.


Would you mind making one of these servers for me while you are at it?


----------



## mbmumford

Quote:


> Originally Posted by *TheBloodEagle*
> 
> What are you using to control all 6 of the 4pin PWM fans? Just the auto based on temp in BIOS? Does the board have 6 4-pin connectors?
> 
> EDIT: NM, looked up the manual. It has 7 4-pin connectors for chassis fans; that's awesome. Are they true PWM signals?


1) I tried various BIOS options for the fan speeds, and decided to leave it on AUTO.

2) Although the board has numerous fan connectors, I'm using Y-splitter cables on the 3 "Front Fan" connectors on the board. This allows me to power the 2x 120mm stock fans, and future fans if I so decide.

3) The manual claims they are PWM signals, and I have no information to dispute that, as I didn't throw an oscilloscope on to check. Even with all fans running full speed, the case is still nearly silent.


----------



## mbmumford

Quote:


> Originally Posted by *LuckyJack456TX*
> 
> Would you mind making one of these servers for me while you are at it?


I would love to build another one! This was actually my first ever build, and I'm rather surprised how well it came together. The biggest benefit of building this is that it allows me to turn off my laptop rather than run it 24/7, and when I upgrade my laptop I can get an ultrabook rather than a large ROG gaming laptop. More money now = much lower costs later. This unit was designed to future proof my life!


----------



## EvilMonk

Quote:


> Originally Posted by *littleredwagen*
> 
> So I had intended to post an actual picture of my server(s) but it was too dark, so I will post later
> But:
> 
> Server 1: HP Proliant DL380 G6
> CPU: 2 x Xeon X5560 (8 Cores / 16 Threads)
> Memory: 48GB DDR3R
> Drives 8 x 146GB 10k SAS Drives (in Raid 5)
> Nic: 4 Port Broadcom Server Nic (all Teamed together)
> 
> Server 2: Intel S5520HCT based Server
> CPU: 2 x Xeon E5649 (12 Cores / 24 Threads)
> Memory: 64GB DDR3R
> Drives: 2 x 1TB 7200 (in Mirror) 4 x 2TB 7200 (in Raid 5)
> Nic 1 x 4 Port Intel Server Nic, 2 x Onboard Intel PHYs, (all Teamed together)
> 
> Both Servers run Windows 2012 R2 Server /w Hyper-V. Joined to the Domain Controllers on the VMs. Use these testing and learning. My Plex is hosted on my Synology for now


Quote:


> Originally Posted by *cones*
> 
> It should when ever the device doesn't support the format of the media. Syncing or streaming doesn't matter, if it is not in a format the device can use it will make it. Now there is probably a setting to always transcoded when syncing.


This is right


----------



## CookieSayWhat

Spoiler: Freenas Box



Case - Mercury S8, CaseLabs
PSU - EVGA SuperNova G850
Motherboard - Supermicro X10SRL-F
CPU - Xeon E5-1650 V3
Memory - 128GB (4X32)DDR4 Samsung ECC R-DIMM
Boot Device - 2X Crucial 16GB USB Drives
Hard Drives - 12 HGST Deskstar NAS 4TB Mirrored VDevs
Hard Drives - 6 WD Red 6TB RaidZ 2
SSD's - 4 Samsung 850 EVO 500GB Mirrored VDevs
NIC - X520-DA2 10Gb, Intel
HBA - LSI-9207 8i
FreeNAS-9.10-STABLE-201606270534 (dd17351)



Used as my main file server, Plex Server, Transmission Server, VB Host, IP Camera VR and Minecraft server.


Spoiler: Backup FreeNas



Case - Supermicro CSE-827HD-R1400B FAT Twin
PSU - Redundant 1400W
Motherboard - X8DTT-HF+
CPU - 2x Xeon E5645
Memory - 96GB DDR3 ECC R-DIMMS
Boot Device - Crucial 16GB USB Drive
Hard Drives - 6 WD Red 6TB RaidZ 2
NIC - X520-DA2 10Gb, Intel
HBA - LSI-9207 4i4e
FreeNAS-9.10-STABLE-201606270534 (dd17351)



Used mainly as a backup target for the other servers, also hosts an Open VPN jail, VB Host, and Transmission Plugin.


Spoiler: ESXI Server



Case - Supermicro CSE-827HD-R1400B FAT Twin
PSU - Redundant 1400W
Motherboard - X8DTT-HF+
CPU - 2x Xeon X5680
Memory - 96GB DDR3 ECC R-DIMMS
Boot Device - Crucial 16GB USB Drive
Hard Drives - 4 WD Red 2TB
SSD - 2 Samsung 850 EVO 1TB
NIC - X520-DA2 10Gb, Intel



Finally my ESXI Server. It's the other half of the twin that the back up Freenas is on. Used for my Windows 10 VM for Blue Iris, Windows 8.1 VM for Testing, Windows 2012 Server, CentOS Server, Freenas VM, Sophos UTM VM, pfSense for home lab, Mint VM, and a bunch of other small stuff.


Spoiler: pfSense



Case - Supermicro SC512F-350B
PSU - 350 Watt
Motherboard - A1SRM-2558F
CPU - Intel Atom C2558
Memory - 16GB DDR3 ECC
Hard Drives - WD RE 80 GB
NIC - X520-DA2 10Gb, Intel



Firewall, OpenVPN Server, Router and etc.

I do have plans for one more server using a Supermicro SC846E16-1200RB but that's still a ways away. I'll post pictures when I get home hopefully.


----------



## Master__Shake

specs are in my sig


----------



## bobfig

Quote:


> Originally Posted by *Master__Shake*
> 
> 
> 
> 
> specs are in my sig


Im guessing you updated the server?


----------



## Master__Shake

Quote:


> Originally Posted by *bobfig*
> 
> Im guessing you updated the server?


nope, the second box is attached to the first via an SFF-8088 cable.

the motherboard is just for powering the Intel expander.


----------



## nexxusty

Very, very simple for me with Plex vs Kodi....

Plex (AFAIK) does not play videos within archives. To me it's useless because of that.

Having a bunch of videos as single files on your server is high level n00batry IMO.


----------



## Bingbang

Dual Opteron quad cores
Dell PowerEdge SC1435 motherboard
And a Radeon graphics card... it turns out this motherboard has an adaptor that powers the optional PCIe configuration.
Said adaptor and two 250GB enterprise SATA II hard disks are in the mail.

Cooling is my topic right now. This will be for featherweight applications, probably to live in my closet.

This motherboard has IDE ports and supports virtualization; any learning links on using RAID (beneath?) with XenServer will be watched and liked.


----------



## jibesh

Quote:


> Originally Posted by *nexxusty*
> 
> Very, very simple for me with Plex vs Kodi....
> 
> Plex (AFAIK) does not play videos within archives. To me it's useless because of that.
> 
> Having a bunch of videos as single files on your server is high level n00batry IMO.


K...you enjoy Kodi and us Plex users will enjoy our high level n00batry


----------



## nexxusty

Quote:


> Originally Posted by *jibesh*
> 
> K...you enjoy Kodi and us Plex users will enjoy our high level n00batry


LOL sounds good to me.


----------



## nerdalertdk

Quote:


> Originally Posted by *nexxusty*
> 
> Very, very simple for me with Plex vs Kodi....
> 
> Plex (AFAIK) does not play videos within archives. To me it's useless because of that.
> 
> Having a bunch of videos as single files on your server is high level n00batry IMO.


What ??

100 MKVs are easier to manage than 40 zips x 100


----------



## bobfig

What's there to manage? I just plop whatever movie file into the "movie" folder and Emby takes care of the rest. Not all that hard to take care of.


----------



## Callist0

Quote:


> Originally Posted by *nexxusty*
> 
> Very, very simple for me with Plex vs Kodi....
> 
> Plex (AFAIK) does not play videos within archives. To me it's useless because of that.
> 
> Having a bunch of videos as single files on your server is high level n00batry IMO.


Hold up...this is possible? To have all your media content in an archive (.tar.gz or something) and such an application exists to extract it and play it on the fly?


----------



## herkalurk

gzip wouldn't do much for already-compressed video anyway; it seems a bit overkill to gzip something that's already in a compact format like H.264.


----------



## twerk

Quote:


> Originally Posted by *herkalurk*
> 
> gzip wouldn't do much for already compressed video anyway, it seems a bit overkill to gzip something that's already a smaller format like h264.


Exactly. When you transcode video to H.264 or another lossy format, it goes through a Huffman coding process as well as lossy compression, and that entropy-coding step is essentially the same process as compressing a file via zipping. You may see a very small decrease in size, if any.
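This is easy to demonstrate: random bytes are a decent stand-in for entropy-coded video, and gzip can't shrink them, while highly redundant data collapses. A quick check with Python's stdlib gzip (exact ratios vary slightly run to run):

```python
import gzip
import os

# Entropy-coded video is close to random bytes; gzip can't shrink it.
# Random data stands in for an H.264 stream here.
video_like = os.urandom(1_000_000)
compressed = gzip.compress(video_like)
print(len(compressed) / len(video_like))  # slightly over 1.0 -- it grew

# Highly redundant data, by contrast, collapses:
redundant = b"frame" * 200_000
print(len(gzip.compress(redundant)) / len(redundant))  # well under 0.01
```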


----------



## beatfried

Quote:


> Originally Posted by *herkalurk*
> 
> gzip wouldn't do much for already compressed video anyway, it seems a bit overkill to gzip something that's already a smaller format like h264.


Quote:


> Originally Posted by *twerk*
> 
> Exactly. When you transcode video to H.264 or another lossy format, it goes through a Huffman coding process as well as lossy compression, and that entropy-coding step is essentially the same process as compressing a file via zipping. You may see a very small decrease in size, if any.


naaah - you're just n0000bs.


----------



## cones

I was also wondering what he was talking about. Everything is already compressed, or else he has multiples of the same video in different formats.


----------



## herkalurk

Unless the source is, like, uncompressed MPEG-2... but even then, just set up HandBrake and start going smaller.

I had to do that with my recordings from MythTV. A 1-hour show is 3-4 GB; after I remove commercials and down-convert to 720p H.264 MKV, it's about 800-900 MB.
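A quick sanity check on those numbers in terms of average bitrate (the ~42-minute post-commercial runtime below is an assumption for illustration, not from the post):

```python
# Back-of-envelope average bitrate from file size and runtime,
# treating sizes as decimal GB (10**9 bytes).
def avg_mbps(size_gb, minutes):
    return size_gb * 8 * 1000 / (minutes * 60)

# ~3.5 GB for a 60-minute MPEG-2 recording:
print(round(avg_mbps(3.5, 60), 1))   # 7.8 Mbit/s source
# ~0.85 GB for ~42 minutes left after cutting commercials:
print(round(avg_mbps(0.85, 42), 1))  # 2.7 Mbit/s H.264 output
```

So roughly a 3x bitrate drop from the codec change on top of the ~30% of runtime removed with the commercials.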


----------



## cones

Quote:


> Originally Posted by *herkalurk*
> 
> Unless the source is like, uncompressed mpeg2...., but even then, just setup handbrake and start going smaller.
> 
> I had to do that with my recordings from mythtv. 1 hour show is 3-4 GB. After I remove commercials and down convert to 720p h264 mkv, it's about 800-900 MB.


Just curious, is the commercial removal automatic?


----------



## herkalurk

Quote:


> Originally Posted by *cones*
> 
> Curious do you have the commercial removal automatic?


When I'm not planning to keep the recorded video (so I'm not going to convert it to MKV), the commercials aren't removed, but they are marked, and the player reads a list of frames to skip. The result is that playback jumps forward X minutes over the skipped area. When I convert to MKV, another program uses that skipped-frames list and actually removes those parts of the video, so after conversion the skip markers no longer apply.
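The cutlist idea can be sketched in a few lines: given spans marked as commercials, the converter keeps the complement of those spans. This is illustrative only; the (start, end) frame-pair format and helper below are assumptions, not comskip's actual output format:

```python
# Illustrative sketch: turn a list of marked commercial spans
# (start_frame, end_frame) into the spans of video to keep.
def kept_segments(total_frames, commercials):
    """Return the (start, end) frame spans that survive a cutlist."""
    kept, pos = [], 0
    for start, end in sorted(commercials):
        if start > pos:
            kept.append((pos, start))
        pos = max(pos, end)
    if pos < total_frames:
        kept.append((pos, total_frames))
    return kept

# 30 minutes at ~30 fps with two commercial breaks marked:
print(kept_segments(54000, [(9000, 14400), (32400, 37800)]))
# [(0, 9000), (14400, 32400), (37800, 54000)]
```

A player uses the marked spans directly (jump over them); a converter feeds the kept spans to the encoder so the commercials are physically gone from the MKV.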


----------



## cones

Quote:


> Originally Posted by *herkalurk*
> 
> When I'm not planning to keep the recorded video (so I'm not going to convert it to MKV), the commercials aren't removed, but they are marked, and the player reads a list of frames to skip. The result is that playback jumps forward X minutes over the skipped area. When I convert to MKV, another program uses that skipped-frames list and actually removes those parts of the video, so after conversion the skip markers no longer apply.


That's using comskip?


----------



## nexxusty

Quote:


> Originally Posted by *Callist0*
> 
> Hold up...this is possible? To have all your media content in an archive (.tar.gz or something) and such an application exists to extract it and play it on the fly?


.RAR is the compressed file type used usually.

Yes, even with multi part rars the video will play fine. Videos in archives have nothing to do with compression....


----------



## herkalurk

Quote:


> Originally Posted by *cones*
> 
> That's using comskip?


Yes


----------



## cones

Quote:


> Originally Posted by *herkalurk*
> 
> Yes


How accurate is it? I've been curious to try it, but I don't really have a need; most info on it is a few years old now.


----------



## herkalurk

It has its moments. I recorded the entire series of Bones and Castle from TNT, and I only had to re-record probably 20 episodes out of 300+ because it completely left in a commercial break. To be fair, some channels just don't transition from commercials as cleanly; I get more commercials missed from shows on USA than any other channel.

The way the program works is that it tries to find some consistency in the frames to identify where commercials start and end, using things like network logos. So it always fails on the main networks (ABC, NBC, CBS) during weather issues, because they have that weather ticker on throughout the show and the commercials, so you just have to skip manually. Same with ESPN; they keep the score bar running through everything. For the most part, with sitcoms you should be fine. You may have to skip the occasional commercial, but meh.


----------



## cones

Thanks hopefully I'll be able to try it out sometime.


----------



## Prophet4NO1

Quick update of the network/server area. Not much has changed aside from the new Cisco SGE2000 switch I snagged for $50, new, on Craigslist.


----------



## legoman786

My homelab;

Brother MFC-8460N (it's a champ, even with the kids)
Planar 17" monitor for quick troubleshooting (keyboard and mouse are stored outside of this picture)
Netgear WNDR3400v2 for WAP
Motorola SURFBoard SB6121 DOCSIS 3.0 cable modem
XION XON-310 case holding my pfSense router with an HP NC360T dual-port NIC (can't remember other specs off the top of my head)
HP Gen2 Z400 Xeon W3520 12GB RAM is my ESXi host (1x 16GB Sandisk Cruzer boot drive, 2x 250GB 2.5" HDDs for datastores, 4x 1TB drives and 2x 2TB drives in a mirrored vdev ZFS pool for 8TB RAW/4TB usable)
Ubuntu 16.04.1 as my Plex/File/Torrent server
24-port TP-Link TL-SG1024DE semi-managed switch

I'm currently undertaking my BSIT in InfoSec. I gotta have something available at home to mess around with.

EDIT: Please ignore the obvious mess. We just redid this room and have yet to finalize it. Also, you can see my work laptop (Dell Latitude E6520) and my first LCD monitor, a Gateway FHD2401.


----------



## spice003

plex and VM server
specs in the sig


----------



## Deathjester1

I only have the one picture at the moment but I'll post more later:

I have a storage server which I'll post later, but this is my Hyper-V server; all the VM's sit on my storage server.

OS: Server 2012 R2 Hyper-V Core
Case: Phanteks Enthoo Luxe
CPU: 2x Intel Xeon E5-2670
Motherboard: Supermicro X9DRi-F
Memory: 8x 8GB Samsung ECC registered 1333MHz DDR3
PSU: Silverstone Strider 1000
OS HDD (If you have one): 2x 120GB SSDs
Storage HDD(s): N/A
Server Manufacturer (Ex: Dell, HP, You?): Me


----------



## emissary42

Some close-ups of my Convey HC-2ex


----------



## Liranan

My server:

955BE @ 3.2 (already had)

2x4GB Samsung DDR3 1333 ECC (40 USD)

Asus M5A78L-M/USB3 (30 USD)

Corsair H70 with one fan on 7V (already had)

Super Flower 400W (30 USD)

1 TB Toshiba HDD (7 years old and still in perfect condition according to SMART)

2TB WD Green (5 or 6 years old)

2TB Seagate (reallocated sector count increasing rapidly)

Arrived today: 3x 3TB WD Reds at 8 USD each. I was wondering why they were so cheap until I tested one: there is no SMART data at all, not a single entry, so I am wondering whether I should keep them or return them. They are half the price of regular Reds, so I am hesitant to return them. These drives will replace the dying 2TB Seagate, which now runs at 30MBps at times, and will be used in a RAID 5-style setup with SnapRAID (I've come to really like SnapRAID).

Total cost of parts I didn't have: 340 USD. The board is second hand but the RAM and PSU are new.


----------



## bobfig

imo if you don't trust the drives i wouldn't run them.


----------



## Liranan

Quote:


> Originally Posted by *bobfig*
> 
> imo if you don't trust the drives i wouldn't run them.


The problem is the lack of SMART data; I worry I won't be able to tell when things start to go wrong.


----------



## cones

No data at all because they haven't been used before?


----------



## Liranan

Quote:


> Originally Posted by *cones*
> 
> No data at all because they haven't been used before?




SMART doesn't even report the size properly, but the drives are 3TB; Disk Management and even Linux report the size correctly (tested in Mint).

Edit: Data from Aida.





Only SMART is empty.

Copying over USB 3 these drives manage 145MBps, so I assume they're in good condition, unlike the Seagate, which is just awfully slow.


----------



## cones

That's odd, never seen that before.


----------



## Liranan

Quote:


> Originally Posted by *cones*
> 
> That's odd, never seen that before.


I assume they're factory rejects; that would explain why they're half price. They come in factory-sealed anti-static bags and carry the standard warranty.


----------



## axipher

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cones*
> 
> No data at all because they haven't been used before?
> 
> 
> 
> 
> 
> SMART doesn't even report the size properly but the drives are 3TB as Disk Management and even Linux report the size correctly (tested in Mint).
> 
> Edit: Data from Aida.
> 
> 
> 
> 
> 
> Only SMART is empty.
> 
> Copying over USB 3 these drives copy at 145MBps so I assume they're in good condition, unlike the Seagate that is just awfully slow.

That first screenshot shows *USB (Serial ATA)*. Most USB-to-SATA adapters I've used either don't pass SMART data through at all or report incorrect SATA data. Have you tried plugging the drives directly into a SATA port?


----------



## cdoublejj

So Plex could keep my movie folders in sync on all my HTPCs? I assume that if the internet is fast enough, it can stream to my devices like my own personal Netflix? White-label drives are also refurbished; more and more companies have capabilities and clean rooms like OnTrack does.


----------



## bobfig

Quote:


> Originally Posted by *cdoublejj*
> 
> So Plex could keep my movie file folders in sync on all my HTPCs? I assume if the inet is fast enough it can stream to my devices like my own personal netflix? White Lable drives are also refurbished, more and more companies have capabilities and clean rooms like OnTrack does.


imo Emby > Plex, but both of those are kinda like personal Netflix at home. the way my Emby is set up, it watches a folder that i put all the movies in, and by file name it tries to match each one with what's on IMDb and imports all the info and pics. makes for a pretty sweet setup. if you want to see what it looks like, let me know and i could set up a temp account so you can look at mine.


----------



## cdoublejj

Quote:


> Originally Posted by *bobfig*
> 
> imo emby>plex but both of those are kinda like personal netflix at home. the way my emby is set up is so that it watches a folder that i put all the movies in and by the file name tries to match it with whats on imdb and imports all the info and pics. makes for a pretty sweet setup. if you want to see what it looks like let me know and i could set up a temp account so you can look at mine.


my HTPCs have 5 and 6 TB HDDs respectively. can it also sync movies down to them when one is added to the server?


----------



## bobfig

Quote:


> Originally Posted by *cdoublejj*
> 
> my HTPCs have 5 and 6 tb hdds respectively. can it also sync movies down to them when one is added to the server.


i haven't tried that feature or plugin on Emby. from a quick look, you either need to try a plugin to sync folders or donate for the built-in one. either way it looks like the movies need to be populated on the server first, then it will sync to a network or external drive.

what Emby/Plex is is a server application that serves the videos for playback; it's not really meant to be put on the HTPC itself.


----------



## Liranan

Quote:
Originally Posted by *axipher* 

That first screenshot shows *USB (Serial ATA)*. Most USB to SATA adapters have never given me SMART data, or gives incorrect SATA data. Have you tried them plugged in directly to a SATA port?

I have considered this, but since, for the first time in years, I don't have any spare SATA cables, I can't test them connected directly to the motherboard. I will buy a few in a few hours and test again; meanwhile, the spare 500GB 2.5" drive I use as an offline backup doesn't show this SMART anomaly.

I will report back in a few hours regardless as I need to connect all three to the motherboard anyway.

Edit: corrected my awful English.


----------



## Rbby258

Quote:


> Originally Posted by *cdoublejj*
> 
> my HTPCs have 5 and 6 tb hdds respectively. can it also sync movies down to them when one is added to the server.


Why would you want to do that? Just have one system that stores all the videos and connect to it with a low-power client. Even most smart TVs have a Plex app nowadays.


----------



## cones

Quote:


> Originally Posted by *cdoublejj*
> 
> my HTPCs have 5 and 6 tb hdds respectively. can it also sync movies down to them when one is added to the server.


Quote:


> Originally Posted by *Rbby258*
> 
> Why would you want to do that? Just have one system that stores all videos and connect to it with a low power client. Even most smart tv's have a plex app nowadays.


Was wondering that also. I agree that Emby is better than Plex, but neither would really be good for what you want to do. Kodi plus Syncthing sounds better for that. Still, I agree: throw all those drives in one PC, set up Emby on it, and use the others as clients with small HDDs.


----------



## cdoublejj

Quote:


> Originally Posted by *Rbby258*
> 
> Why would you want to do that? Just have one system that stores all videos and connect to it with a low power client. Even most smart tv's have a plex app nowadays.


Offline use; one of my HTPCs is also a Steam box/LAN rig. Plus redundancy.
Quote:


> Originally Posted by *bobfig*
> 
> i haven't tied to use that feature or plug in on emby. from a quick look you need to ether try a plug in to sync folders or donate for the built in one. ether way it looks like the movies need to be populated in the server then it will sync to a newtwork or external drive.
> 
> what emby/plex is is a server application that serves the videos to play and not really meant for being put on the htpc.


So i saw comments about server encoding. Would it be possible for, say, my original Xbox (OG) to play any movie from Emby/Plex? Would it re-encode it to SD or a codec the OG Xbox supports?

also sounds like, worst case, i can set up my own file sync if need be.
Quote:


> Originally Posted by *cones*
> 
> Was wondering that also. I agree that Emby is better than Plex. Neither of those would really be good for what you want to do though. Kodi and Syncthing sounds better for what you want. But i do agree throw all those drives in one PC and setup Emby on it and use the others as clients with small HDDs.


Well, i'll have one unRAID server as backup/file storage, and another server where i can run all my game servers, Plex/Emby servers, etc. Except i actually run this at a homemade / ghetto colocation. The HTPCs are at home, and one HTPC travels with me. i'd like the 2 HTPCs to have local copies for internet outages, as backup, and for when i travel, but i'd also like to be able to stream from anywhere, if my phone or device and/or internet can handle the streaming. Sounds a little weird, but it's also more flexible, and i'm better protected in the event of, say, a fire or a lightning strike. Kind of like a private cloud service and backup, if you will.

My unRAID box has several HBAs and RAID cards; I think i can fit 20-some drives, and if i make them all 1-5TB a pop, i'm hoping to have 15TB+ to 30TB+ of storage. Then again, that's not much, since the server has 7TB which also needs to be backed up, likely onto the unRAID.

EDIT: Then again, I could just be completely bass-ackwards and bonkers crazy and have no idea what i'm talking about.


----------



## cones

All of what you said is possible. Look into things like Emby/Plex/Kodi/Syncthing. The OG Xbox would really only be good for standard definition though, plus you'd have to mod it.


----------



## Liranan

Connecting the drives to SATA ports rather than through USB doesn't change the SMART data, it's still blank. These have got to be factory rejects that still work fine but can't be sold at full price.
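If anyone wants to script this kind of check across several drives, one approach is to test whether `smartctl -A` returns any rows in the attribute table at all. A rough sketch of the parsing side (the sample outputs below are made up for illustration, not captured from these drives):

```python
# Rough sketch: decide whether a `smartctl -A` text dump actually
# contains SMART attribute rows. The attribute table begins right
# after the header line that starts with "ID#".
def has_smart_attributes(output):
    in_table = False
    rows = 0
    for line in output.splitlines():
        if line.lstrip().startswith("ID#"):
            in_table = True
            continue  # header itself is not a data row
        if in_table and line.strip():
            rows += 1
    return rows > 0

# Illustrative dumps: one healthy drive, one with an empty table
normal = ("ID# ATTRIBUTE_NAME        FLAG   VALUE WORST RAW_VALUE\n"
          "  5 Reallocated_Sector_Ct 0x0033 200   200   0\n")
blank  = "ID# ATTRIBUTE_NAME        FLAG   VALUE WORST RAW_VALUE\n"
print(has_smart_attributes(normal), has_smart_attributes(blank))
```

In practice you'd feed this the output of `subprocess.run(["smartctl", "-A", "/dev/sdX"], ...)` and flag any drive that comes back empty, like these Reds apparently do.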


----------



## NBrock

Quote:


> Originally Posted by *mbmumford*
> 
> Speaking of Plex......
> 
> *Custom Plex / Folding Server*
> 
> *OS:* Windows 10 Pro
> *Case:* Silverstone GD08
> *CPU:* 2x Intel Xeon E5-2620 V4
> *Motherboard:* Asus Z10PE-D8 WS
> *Memory:* Crucial 16GB (4x 4GB) ECC Registered DDR4 2133 MHz
> *PSU:* Corsair AX860
> *OS HDD:* Samsung 950 PRO 256GB M.2 SSD
> *Storage HDDs:* 3x WD 6TB Red 5400 RPM (Intel RSTe RAID5)
> *Server Manufacturer:* Custom made by me.
> 
> I run this as a headless server operating 24/7 for streaming Plex to a few friends (and myself), and for CPU folding. With all 32 threads running 100% 24/7, I'm pulling about 180W and getting about 60K PPD.
> 
> Although this board has 2 ethernet ports, I'm only using 1 since I'm running a VPN. Considering how long it took to get the various programs to work with the VPN (especially Plex for remote access), I kind of regret not looking into how to direct traffic from specific programs to a specific port. Maybe in the future...
> 
> After spending nearly $5000 on this, I decided to hold off on getting a GPU for folding (1060 vs 1070 anyone?), and will be installing additional 6TB Red drives as needed. I was originally planning on installing 32GB of RAM, but honestly don't think I would ever need that much for my use.
> 
> (I'm sorry, but it's not pretty).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had to cut the ODD bay in order to fit the cooler on CPU 2.


I bet that thing will like the new a7 WUs. Should pick up a good bit more ppd. I get them here and there (don't need beta flag).


----------



## bobfig

Quote:


> Originally Posted by *cdoublejj*
> 
> Offline, one of my HTPCs is steam box/lan rig also redundancy.
> So i saw comments about server encoding. Would it possible for say my original xbox (OG) to play any movie fomr the emby/plex? would it re encode it to SD or a supported codec for the XBOG to play?
> 
> also sounds like worst case i can setup my own file sync, if need be.
> Well i'll have one unraid server as back up/file storage. another server server, which i can run all my game servers and plex/emby servers etc etc. Except i actually run this at home made / ghetto co location. The HTPCs are at home and one HTPC travels with me. i'd like the 2 HTPCs to have local copies for internet outages and as back up and for when i travels but, i'd also like to be able to stream form any where if my phone or device and or internet can handle the streaming. Sounds a little wierd but, it's also more flexible and i'm better protect in the event of a say fire or lightening strike something. Kind of like a private cloud service and back up if you will.
> 
> My UnRaid has several HBAs and raid cards, I think i can fit 20 some + drives and if i make them all 1-5tb a pop, i'm hoping to have like 15TB+ to 30TB+ of storage. Then agian that's not much since the server has 7TB which also needs backed up and likely on the unraid.
> 
> EDIT: Then again I could just be complete bass ackwards and bonkers crazy and have no idea what i'm talking about.


why don't you download it and play with it? that's about all you can do right now. i'm not a salesperson trying to get you to use it, but it seems to have the features you want and it's free to try. the folder-sync feature you do have to pay for, but most everything else to get up and running is free.


----------



## herkalurk

Quote:


> Originally Posted by *mbmumford*
> 
> Speaking of Plex......
> 
> *Custom Plex / Folding Server*
> 
> *OS:* Windows 10 Pro
> *Case:* Silverstone GD08
> *CPU:* 2x Intel Xeon E5-2620 V4
> *Motherboard:* Asus Z10PE-D8 WS
> *Memory:* Crucial 16GB (4x 4GB) ECC Registered DDR4 2133 MHz
> *PSU:* Corsair AX860
> *OS HDD:* Samsung 950 PRO 256GB M.2 SSD
> *Storage HDDs:* 3x WD 6TB Red 5400 RPM (Intel RSTe RAID5)
> *Server Manufacturer:* Custom made by me.
> 
> I run this as a headless server operating 24/7 for streaming Plex to a few friends (and myself), and for CPU folding. With all 32 threads running 100% 24/7, I'm pulling about 180W and getting about 60K PPD.
> 
> Although this board has 2 ethernet ports, I'm only using 1 since I'm running a VPN. Considering how long it took to get the various programs to work with the VPN (especially Plex for remote access), I kind of regret not looking into how to direct traffic from specific programs to a specific port. Maybe in the future...
> 
> After spending nearly $5000 on this, I decided to hold off on getting a GPU for folding (1060 vs 1070 anyone?), and will be installing additional 6TB Red drives as needed. I was originally planning on installing 32GB of RAM, but honestly don't think I would ever need that much for my use.
> 
> (I'm sorry, but it's not pretty).
> SNIPPED IMAGES
> I had to cut the ODD bay in order to fit the cooler on CPU 2.


Just wondering: if you're only pulling 180W at load, seems like you could have saved some cash on the PSU.....


----------



## Rbby258

Quote:


> Originally Posted by *cdoublejj*
> 
> Offline, one of my HTPCs is steam box/lan rig also redundancy.
> So i saw comments about server encoding. Would it possible for say my original xbox (OG) to play any movie fomr the emby/plex? would it re encode it to SD or a supported codec for the XBOG to play?
> 
> also sounds like worst case i can setup my own file sync, if need be.
> Well i'll have one unraid server as back up/file storage. another server server, which i can run all my game servers and plex/emby servers etc etc. Except i actually run this at home made / ghetto co location. The HTPCs are at home and one HTPC travels with me. i'd like the 2 HTPCs to have local copies for internet outages and as back up and for when i travels but, i'd also like to be able to stream form any where if my phone or device and or internet can handle the streaming. Sounds a little wierd but, it's also more flexible and i'm better protect in the event of a say fire or lightening strike something. Kind of like a private cloud service and back up if you will.
> 
> My UnRaid has several HBAs and raid cards, I think i can fit 20 some + drives and if i make them all 1-5tb a pop, i'm hoping to have like 15TB+ to 30TB+ of storage. Then agian that's not much since the server has 7TB which also needs backed up and likely on the unraid.
> 
> EDIT: Then again I could just be complete bass ackwards and bonkers crazy and have no idea what i'm talking about.


Look at Resilio Sync. I use it to sync the download folder on my MacBook to the download folder on my server as a backup, and it also picks up torrent files to download automatically. Seems like exactly what you want, and it's free.


----------



## axipher

Quote:


> Originally Posted by *Liranan*
> 
> Connecting the drives to SATA ports rather than through USB doesn't change the SMART data, it's still blank. These have got to be factory rejects that still work fine but can't be sold at full price.


Hmm, that is odd. You could try contacting the manufacturer with the serial numbers to see if they can shed some light on whether there is supposed to be SMART data or not.


----------



## Liranan

Quote:


> Originally Posted by *axipher*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Connecting the drives to SATA ports rather than through USB doesn't change the SMART data, it's still blank. These have got to be factory rejects that still work fine but can't be sold at full price.
> 
> 
> 
> Hmm, that is odd. You could try contacting the manufacturer with the serial numbers to see if they can shed some light on if there is supposed to be SMART data or not.

Good idea, I will email WD later. Thanks for the advice.

After copying terabytes of data from the old HDDs to these new ones, I can honestly say I'm impressed. They may only be 5400rpm, but they are superior to both the 2TB Seagate and the 1TB Hitachi (both 7200rpm). I now need to set up SnapRAID and designate the third drive as parity.

Copying to and from the drives from my main PC saturates my gigabit connection to about 90-95% (100-115MBps), as opposed to the abysmal 30-50MBps I used to get with the old drives. It's amazing how one or two old drives can slow the entire system down; the old 2TB Green was most likely the culprit, as throughput shot up to 100MBps or more once I had disabled that drive in Disk Management.

I'm very happy with these drives and don't see the need for any form of RAID 5 beyond SnapRAID, as the drives are more than fast enough and SnapRAID offers enough redundancy.
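For reference, the layout described above (two data drives plus one parity drive) maps to a snapraid.conf along these lines; the paths and mount points here are illustrative, not Liranan's actual setup:

```
# Hypothetical SnapRAID config: 2 data drives + 1 parity drive
# Parity file lives on the dedicated parity drive
parity /mnt/parity1/snapraid.parity

# Content files (metadata); SnapRAID wants copies on multiple drives
content /var/snapraid/snapraid.content
content /mnt/data1/snapraid.content

# Data drives to protect
data d1 /mnt/data1/
data d2 /mnt/data2/
```

After editing the config, `snapraid sync` builds the parity and `snapraid scrub` periodically verifies it; unlike real-time RAID 5, protection only covers data as of the last sync.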


----------



## Rbby258

Quote:


> Originally Posted by *Liranan*
> 
> Good idea, I will email WD later. Thanks for the advice.
> 
> After copying terabytes of data from the old HD's to these new ones I can honestly say I'm impressed. They may be 5400rpm only but they are superior to either the 2TB Seagate or the 1TB Hitachi (both 7200rpm). I now need to set up SnapRAID and set the third drive as parity.
> 
> Copying to and from the drives from my main PC saturates my gigabit connection by about 90-95% (100-115MBps) as opposed to the abysmal 30-50MBps I used to get with the old drives. It's amazing how one or two old drives can slow the entire system down, with the old 2TB Green being most likely the culprit as the system shot up to 100 or more once I had disabled the drive in Drive Management.
> 
> I'm very happy with these drive and don't see the need for any other form of RAID 5 than SnapRAID as the drives are more than fast enough and SnapRAID offers enough redundancy.


https://support.wdc.com/warranty/warrantystatus.aspx?lang=en

Put the serial numbers in; if they're in warranty it might be worth just getting them RMA'd.

Edit: says you're in warranty until 04/24/2019


----------



## mbmumford

Quote:


> Originally Posted by *NBrock*
> 
> I bet that thing will like the new a7 WUs. Should pick up a good bit more ppd. I get them here and there (don't need beta flag).


Up until I bought a Gigabyte GTX 1070 Mini OC about 2 weeks ago, I was folding on all 32 threads 24/7 for about 3 months and only getting about 50-60K PPD. Now that the 1070 is installed, I've stopped folding on the CPUs. With the 1070 running up to 79°C (fan at 100%), the CPUs sit at 60°C (CPU 0) and 50°C (CPU 1) at idle, due to the poor airflow in my case and the GPU fan blowing directly on CPU 0 (something I am actively looking into when I have time).

Quote:


> Originally Posted by *herkalurk*
> 
> Just wondering, you're at load only using 180 W, seems like you could have saved some cash on the PSU.....


You are absolutely right, I could have, but for the sake of an extra ~$100 in a $5000 build I wanted to overdo it. The idea behind this build was to let me upgrade as my needs progress; spending a little more upfront means I won't need to buy a larger power supply later.


----------



## vaeron

I just added 4 new servers to my setup and am getting ready to start building a rack. I'm going to do a DIY rack until I can move in to a bigger place for my 42U rack to be able to fit again.

What I've added:

3x IBM x3550 m3

Processor: 2x Intel Xeon 5640 @ 2.66GHz
Memory: 72 GB DDR3 PC3-10600 Registered
HDD/SSD: 2x 146 GB Seagate Savvio 15k SAS drives (2.5")
OS: No OS decided on yet


1x Dell PowerEdge R710

Processor: 2x Intel Xeon 5504 @ 2.66GHz
Memory: 96 GB DDR3 PC3-10600 Registered
HDD/SSD: 2x 146 GB Seagate Cheetah 15k SAS drives
OS: Windows Server 2008 R2


I'm going to leave it up to the rest of you to tell me what to do with them!


----------



## KyadCK

Quote:


> Originally Posted by *vaeron*
> 
> I just added 4 new servers to my setup and am getting ready to start building a rack. I'm going to do a DIY rack until I can move in to a bigger place for my 42U rack to be able to fit again.
> 
> What I've added:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 3x IBM x3550 m3
> 
> Processor: 2x Intel Xeon 5640 @ 2.66GHz
> Memory: 72 GB DDR3 PC3-10600 Registered
> HDD/SSD: 2x 146 GB Seagate Savvio 15k SAS drives (2.5")
> OS: No OS decided on yet
> 1x Dell PowerEdge R710
> 
> Processor: 2x Intel Xeon 5504 @ 2.66GHz
> Memory: 96 GB DDR3 PC3-10600 Registered
> HDD/SSD: 2x 146 GB Seagate Cheetah 15k SAS drives
> OS: Windows Server 2008 R2
> 
> 
> 
> 
> I'm going to leave it up to the rest of you to tell me what to do with them!


ESXi cluster. You'll need a bit more HDD space for that much RAM/compute, but unless you have an actual compute task in mind you'll need more HDD space anyway, and frankly there are too many benefits to virtualization in the server world for me to list them all here. After that, though, you'll have the same question: what do I do with them?

Hopping aboard the R710 train, I just got mine set up as well.


2x Xeon X5670 (6c/12t 2.93Ghz)
72GB (18x4GB)
1x Samsung 950 Pro NVMe 512GB
1x Samsung 840 Evo 250GB
6x 4TB (WD/HGST) in RAID 6 (15TB usable)
H700 RAID card, 512MB
iDRAC Enterprise (<3)
2x 870w PSU



Spoiler: Drives/RAID











Spoiler: Other









Oh and R710s can boot from 950 Pro NVMe drives;


I don't though, because ESXi is run off the USB stick on the internal slot. Kinda want one of the new FirePros to split it up between the VMs... First VM on the server is actually Unix7, which someone had suggested as a joke and now it works.









Either way, the server will be added to the rack, with more R710s (with lower HDD requirements) down the line as needs grow, or perhaps R720s if they weren't three times the price.

I love working in Dell servers. Everything makes a very satisfying _click._


----------



## vaeron

Quote:


> Originally Posted by *KyadCK*
> 
> ESXi cluster. Need a bit more HDD space though for that much ram/compute, but unless you have an actual compute task in mind you'll need more HDD space anyway, and frankly there's too may benefits of virtualization in the server world for me to list them all here. After that though, you'll have the same question; what do i do with them.
> 
> Hopping aboard the R710 train, I just got mine set up as well.
> 
> 
> 2x Xeon X5670 (6c/12t 2.93Ghz)
> 72GB (18x4GB)
> 1x Samsung 950 Pro NVMe 512GB
> 1x Samsung 840 Evo 250GB
> 6x 4TB (WD/HGST) in RAID 6 (15TB usable)
> H700 RAID card, 512MB
> iDRAC Enterprise (<3)
> 2x 870w PSU
> 
> 
> 
> Spoiler: Drives/RAID
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Other
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Oh and R710s can boot from 950 Pro NVMe drives;
> 
> 
> I don't though, because ESXi is run off the USB stick on the internal slot. Kinda want one of the new FirePros to split it up between the VMs... First VM on the server is actually Unix7, which someone had suggested as a joke and now it works.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Either way, server will be added to the rack and either more R710s (with lower HDD requirements) and perhaps 720s if they didn't cost three times as much down the line as needs grow.
> 
> I love working in Dell servers. Everything makes a very satisfying _click._


Awesome, welcome to the Dell R710 club! I would do an ESXi cluster, except I've already got an ESXi host: an IBM x3690 X5 with 2x Xeon E7-2803 (hex-core), 256GB of RAM, and 15TB of space on FreeNAS iSCSI (over dual 8Gb fiber cards), so I'm good on ESXi for now.


----------



## KyadCK

Quote:


> Originally Posted by *vaeron*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> ESXi cluster. Need a bit more HDD space though for that much ram/compute, but unless you have an actual compute task in mind you'll need more HDD space anyway, and frankly there's too may benefits of virtualization in the server world for me to list them all here. After that though, you'll have the same question; what do i do with them.
> 
> Hopping aboard the R710 train, I just got mine set up as well.
> 
> 
> 2x Xeon X5670 (6c/12t 2.93Ghz)
> 72GB (18x4GB)
> 1x Samsung 950 Pro NVMe 512GB
> 1x Samsung 840 Evo 250GB
> 6x 4TB (WD/HGST) in RAID 6 (15TB usable)
> H700 RAID card, 512MB
> iDRAC Enterprise (<3)
> 2x 870w PSU
> I don't though, because ESXi is run off the USB stick on the internal slot. Kinda want one of the new FirePros to split it up between the VMs... First VM on the server is actually Unix7, which someone had suggested as a joke and now it works.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Either way, server will be added to the rack and either more R710s (with lower HDD requirements) and perhaps 720s if they didn't cost three times as much down the line as needs grow.
> 
> I love working in Dell servers. Everything makes a very satisfying _click._
> 
> 
> 
> Awesome, welcome to the Dell R710 club! I would do an ESXi cluster except I've already got an esxi host that is an IBM x3690 x5 with 2x Xeon E7-2803 (hexcore), 256 GB of RAM and 15 TB of space on FreeNAS iSCSI (over dual 8gig fiber cards) so I'm good on the ESXi for now.

Folding@home, I guess, if you don't mind the power bill. Experiment with VDI for the experience, perhaps?

Old school racks are cheap, but most people really don't need the massive multi-thread compute power they afford, and suggestions other than "Run this compute program", "virtualize", or "host things" are few and far between unless you're building something like a NAS, router, or iSCSI/FC SAN. Besides a few things like Domain/DNS/DHCP, some game server/voice hosting, and a VM for finances/legal paperwork etc, (all of which can be virtualized on a single server with ease), the only real use I have is simple, fast, and easy learning experience.

Thing is, even if you have an ESXi cluster already, there is literally no reason to put a non-hypervisor OS on the hardware for server tasks. It isn't a gaming PC, and the hardware abstraction alone pays for any sub-1% performance hit.


----------



## vaeron

Quote:


> Originally Posted by *KyadCK*
> 
> Folding@home, I guess, if you don't mind the power bill. Experiment with VDI for the experience, perhaps?
> 
> Old school racks are cheap, but most people really don't need the massive multi-thread compute power they afford, and suggestions other than "Run this compute program", "virtualize", or "host things" are few and far between unless you're building something like a NAS, router, or iSCSI/FC SAN. Besides a few things like Domain/DNS/DHCP, some game server/voice hosting, and a VM for finances/legal paperwork etc, (all of which can be virtualized on a single server with ease), the only real use I have is simple, fast, and easy learning experience.
> 
> Thing is, even if you have an ESXi cluster already, there is literally no reason to put a non-hypervizor OS on the hardware for server tasks. It isn't a gaming PC, and the hardware abstraction alone pays for any sub-1% performance hit.


I'll look into Folding@home. I'm not concerned about power consumption here; it's dirt cheap. I've got my NAS, my firewalls, my domain with 3 domain controllers, and my site-to-site VPNs. I do more than just personal stuff with my setup, but I managed to bring home some more systems to have a little fun with. I'm not going to use VDI at home, as it's part of my job at my other work. I'm more looking for fun projects I might be able to do. I've got some home automation done, but may turn one of them into my home automation server. I've thought about hosting some pretty robust Minecraft servers, as I have dedicated lines coming in and 4-port gig network cards I can throw in (I have symmetrical gigabit fiber).

EDIT: I started Folding@home last night on one of the x3550 M3s.


----------



## Prophet4NO1

Upgraded the CPU in my pfSense router to a Xeon E3-1231 v3. ClamAV was hitting a brick wall whenever I pulled down anything too large and fast for it to process. So, more POWER!!!!! This replaced the dual-core Pentium G3260 I had in the machine, and it only chews up about 5-10 watts more power according to my UPS in most situations.



Quick vid I made showing results before and after the CPU upgrade.


----------



## ChRoNo16

five to ten watts is soooooooo worth the upgrade


----------



## Prophet4NO1

Quote:


> Originally Posted by *ChRoNo16*
> 
> five to ten watts is soooooooo worth the upgrade


Typical power draw from the UPS that the router, switch, and WiFi are plugged into was about 75-80 watts before; it stays around 80-85 now, bumping up to around 90 watts from time to time. When the CPU maxed out before, it pulled about 90-100 watts; now it jumps to about 120 watts. But that only happens when the full 300Mbps is coming in and ClamAV is trying to scan all of it. The highest CPU usage I've seen during that is 70%. So the bigger draws are fairly large but very short-lived; most of the time it's barely doing anything.


----------



## kgtuning

Hello gents, so I have never worked with or on a server but would like to tinker with one. Is a poweredge r610 or r710 decent? I see some good prices on ebay and it seems even the single cpu one can be upgraded to dual cpu. Any recommendations?


----------



## Prophet4NO1

Quote:


> Originally Posted by *kgtuning*
> 
> Hello gents, so I have never worked with or on a server but would like to tinker with one. Is a poweredge r610 or r710 decent? I see some good prices on ebay and it seems even the single cpu one can be upgraded to dual cpu. Any recommendations?


They are pretty decent. A lot depends on what you want to do with it though. Also, have you checked locally at all? I picked up a Dell 2900 at a local recycling/refurbishing facility for $100 with a pair of 15K SAS drives and 8GB of registered ECC memory. Well below the typical price on eBay. Worth a look.


----------



## kgtuning

Quote:


> Originally Posted by *Prophet4NO1*
> 
> They are pretty decent. A lot depends on what you want to do with it though. Also, have you checked locally at all? I picked up a Dell 2900 at a local recycling/refurbishing facility for $100 with a pair of 15K SAS drives and 8GB of registered ECC memory. Well below the typical price on eBay. Worth a look.


Oh, it'd just be to tinker with. I haven't looked locally yet, but thought I'd ask first before anything.


----------



## Prophet4NO1

There is not much a server does that is special if you are just planning to tinker. The word server basically explains itself: it's a device that serves something to the network, files for example. You can have multiple machines doing varied tasks, or one larger machine with VMs doing many things. Depending on the task at hand, consumer PCs can very easily do the same job. What you do tend to get is rock-solid stability, usually, with components built to run flat out all day long for years. As a result they tend to make a lot of noise, so keep that in mind. Fans are usually pretty high performance for the heavy loads servers are expected to run.

You do get some cool stuff though, like high-end LSI RAID controllers and things like that. Depending on what you want to do storage-wise, that can be fun to play with.


----------



## lowfat

If you aren't familiar w/ rack mount servers, they can be LOUD. So make sure you can store it in a well-ventilated closet or basement.


----------



## kgtuning

Noise level isn't an issue. My CaseLabs SMA8 had 12 EK FF4s at full speed for many months in my living room. But a closet or basement is always an option too. I am just kicking the idea around.


----------



## Rbby258

Quote:


> Originally Posted by *kgtuning*
> 
> noise level isn't an issue. My caselabs sma8 had 12 EK ff4s at full speed for many months in my living room. But closet or basement is always an option also. I am just kicking the idea around.


Servers are another level of loud though.


----------



## Prophet4NO1

That is an understatement. Lol. They can be painfully loud at times. You really don't want to be in the same room with them for long if you don't have to be. I use a heavy blanket and partially shut the folding doors of the closet my Dell sits in, just to lower the noise at lower RPM levels. It downright screams under load.


----------



## kgtuning

Quote:


> Originally Posted by *Prophet4NO1*
> 
> That is an understatement. Lol. They can be painfully loud at times. You really don't want to be in the same room with them for long if you don't have to be. I use a heavy blanket and partially shut the folding doors of the closet my Dell sits in, just to lower the noise at lower RPM levels. It downright screams under load.


OK, so let me rephrase what I said: I don't work on servers, but I'm around a few at work all day.


----------



## twerk

It really just depends on the server. If you pick wisely they can be pretty quiet, especially modern servers.

HP Gen8/Gen9 stuff in particular. I have a DL80 Gen9 in my front room and the 4 fans never go above 2000rpm, idling much lower than that. Even at 2000rpm they are still quiet because of how high-quality the bearings are.


----------



## ChRoNo16

Come on twerk, you know we want pictures and stats on it.


----------



## twerk

Quote:


> Originally Posted by *ChRoNo16*
> 
> Come on twerk, you know we want pictures and stats on it.


I'm away from home at the moment on business so no pics but here's the spec!

HP DL80 Gen9

Xeon E5-2620 v4 (8c16t Broadwell)

4x 8GB 2400MHz Micron DDR4

HP P440 RAID card w/4GB FBWC

2x Samsung PM863 128GB RAID 1 (boot)

6x WD Red 3TB RAID 5 (data)

4x HP fans (redundant)

In hindsight I really wish I went with 3TB HGST Deskstar NAS drives instead of WD Reds, oh well.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *twerk*
> 
> I'm away from home at the moment on business so no pics but here's the spec!
> 
> HP DL80 Gen9
> Xeon E5-2620 v4 (8c16t Broadwell)
> 4x 8GB 2400MHz Micron DDR4
> HP P440 RAID card w/4GB FBWC
> 2x Samsung PM863 128GB RAID 1 (boot)
> 6x WD Red 3TB RAID 5 (data)
> 4x HP fans (redundant)
> 
> In hindsight I really wish I went with 3TB HGST Deskstar NAS drives instead of WD Reds, oh well.


Hey twerk,

I was wondering your reasoning for wanting to go the Deskstar route instead of Reds? I have a friend who's been planning to do REDs for a NAS build and would love any advice to pass along.


----------



## twerk

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Hey twerk,
> 
> I was wondering your reasoning for wanting to go the Deskstar route instead of Reds? I have a friend who's been planning to do REDs for a NAS build and would love any advice to pass along.


In brief: they're faster and more reliable. Reds are only 5400rpm vs the Deskstar's 7200rpm.

The Deskstar NAS is comparable to the WD Red Pro in terms of performance but closer in price to the standard Red.


----------



## bobfig

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Hey twerk,
> 
> I was wondering your reasoning for wanting to go the Deskstar route instead of Reds? I have a friend who's been planning to do REDs for a NAS build and would love any advice to pass along.


Quote:


> In brief - they're faster and more reliable. Reds are only 5400rpm vs 7200rpm of the Deskstar.
> 
> The Deskstar NAS is comparable to the WD Red Pro in terms of performance but closer in price to the standard Red.


Everything he said. I have 3x 3TB HGST NAS drives and they've been awesome so far. Reds are a little cheaper, but the faster speed for a few bucks more is nice to have. It may not matter much if all you are doing is pulling stuff over the network, as 3-4 drives in RAID can saturate the connection. But if you are doing something like FreeNAS with drive pooling, where it may just be pulling files off one drive, you may see a difference.
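To put numbers on the "saturate the connection" point, here's a minimal sketch. The ~120 MB/s sequential figure per spinner is a ballpark assumption I'm supplying, not a measurement of these drives:

```python
# Compare striped-read throughput of n drives against a gigabit link.
# ASSUMPTION: ~120 MB/s sequential per 7200 rpm drive is a ballpark, not measured.
GBE_MBPS = 1000 / 8   # 1 GbE tops out around 125 MB/s of payload
PER_DRIVE = 120       # assumed MB/s per spinner

for n in range(1, 5):
    total = n * PER_DRIVE
    tag = "saturates GbE" if total >= GBE_MBPS else "does not saturate GbE"
    print(f"{n} drive(s): {total} MB/s -> {tag}")
```

Even two striped drives are past what gigabit can carry, which is exactly why a single-drive pool is where you would actually notice the slower disk.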


----------



## Dalchi Frusche

Quote:


> Originally Posted by *twerk*
> 
> In brief - they're faster and more reliable. Reds are only 5400rpm vs 7200rpm of the Deskstar.
> 
> The Deskstar NAS is comparable to the WD Red Pro in terms of performance but closer in price to the standard Red.


Quote:


> Originally Posted by *bobfig*
> 
> everything he said. i have 3 - 3tb hgst nas drives and they been awesome so far. reds are a little cheaper but for a few bucks more for faster speed is nice to have. may not mater much if all you are doing is pulling stuff over the network as 3-4 drives in raid can saturate the connection but if you are doing something like a freenas with drive pooling where it may just be puling files off one drive you may see a difference.


Thank you! Those are good things to know. I will pass them along and see what he says.


----------



## swingarm

Two Cryorig R1 Ultimates fit in my server... just barely. Looks like the case (Antec P180) was designed just for them.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *swingarm*
> 
> 2 Cryorig R1 Ultimate's fitting in my server,........just barely. Looks like the case(Antec P180) was designed just for them.


That is a tight fit. Almost as tight as my R9 290 in my case, haha. I think I ended up with about 1mm of clearance.


----------



## burksdb

my old decommissioned server

http://www.overclock.net/t/1618757/appraisal-nas-norco-4020-dual-6-cores/0_100#post_25727553


----------



## bobfig

Seems like my 4x 2GB RAM sticks in my server were giving me problems, so I replaced them with 2x 8GB sticks. All seems well for now.


----------



## swingarm

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> That is a tight fit. Almost as tight as my R9 290 in my case haha. I think i ended with about 1mm of clearance


That's about the same as the space between the front CPU cooler and the top 140mm fan, and the space between the two CPU coolers.


----------



## lowfat


Inside still looks mostly like this. Just w/ a Connect X2 10GbE and a 256GB Samsung 950 Pro + PCIe adapter installed as well.




VMWare server:
AMD Opteron 6168
Supermicro H8SGL-F
64GB Kingston ECC RDIMMs
LSI 9211-8i
256GB Samsung 950 Pro
6x3TB in RAIDZ2 w/ 2 hot spares.
Mellanox Connect X2 10GbE

Other:
Reprap Prusa i3 3d printer. Connected to a Raspberry Pi 2B running Octoprint as a print server.
Netgear GS108T
Ubiquiti AP AC PRO (not pictured, running POE to other part of home).

Running 5 VMs at the moment.

PfSense + Squid for the gateway and transparent adblocking / caching.
FreeNAS 9.10 for storage.
Ubuntu 14.04 server for Emby Server (Media Server).
Ubuntu 14.04 server for FOG imaging server. Enables me to create and restore images via PXE on my non-VM systems. Was able to create a 150GB image of my gaming rig in about 30 min; it ended up being around 100GB compressed. Deploying an image is quite a bit faster. I did a test restore on a clean install of Win 8.1 and it took just over 5 minutes.
Lubuntu 16.10, which runs LFTP to sync to my seedbox, as well as CouchPotato. Set this up when I didn't have much experience w/ Linux. I want to move to Ubuntu Server for this as well, but both LFTP and CouchPotato were such a pain to get working that I wouldn't want to spend the time to do it over.
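A quick sanity check on those FOG capture numbers (150GB in about 30 minutes):

```python
# Effective capture throughput of the imaging run described above.
captured_gb, minutes = 150, 30
mb_per_s = captured_gb * 1000 / (minutes * 60)
print(f"~{mb_per_s:.0f} MB/s")  # ~83 MB/s, roughly two-thirds of GbE's ~125 MB/s ceiling
```

That lines up with a gigabit network plus compression overhead being the bottleneck rather than the disks.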




Been working on a 2x E5-2670 rig for the last long while (years). Should hopefully have it done within a month so I can retire this aging Opteron rig.


----------



## Prophet4NO1

I have been toying with the idea of consolidating all my machines into VMs on one box, aside from my gaming rig. I just am not crazy about having all my eggs in one basket. Would cut power consumption a lot, though.


----------



## lowfat

It is true that if I have a hardware failure I have nothing. Server hardware is generally rather reliable, though, and you could also buy a server w/ multiple nodes for fault tolerance. Really, once you go VMs you'll never go back. It is so convenient, uses significantly less power, and you can restore from a backup in minutes. As long as you have enough RAM and storage you can run pretty much unlimited amounts of VMs. You can have a sick setup and not spend an arm and a leg on hardware, since you only really need to buy one system, or two if you don't like having storage + compute running on one rig.


----------



## Prophet4NO1

I was thinking of having storage, game hosting, and maybe remote rendering on VMs. Just leave my router out of the mix so the whole network won't go down if the machine craps out. Been looking at dual 2011 Supermicro boards on eBay. You can get 8-core chips so cheap, it's very compelling. I keep thinking about it. One thing I would likely do as well is run one NIC per VM, since I don't have 10GbE. Aside from VirtualBox and Hyper-V I have not really dived much into VMs.


----------



## lowfat

You don't really need 1 NIC per VM. That really is just a waste. You create a bunch of virtual switches. For communication from one VM to the next, you use a virtual switch w/o a physical adapter; this allows for 10GbE-speed communication between all VMs. Then another virtual switch connects all the VMs to the 1GbE adapter. On a home server you really won't be saturating more than one GbE link at a time anyways.

If your VM server is in the same room as your PC you can grab a pair of 10GbE cards + a cable for probably around $50 from eBay.

Example of what the networking on my server looks like. VMkernel = used to manage the host OS (ESXi).


----------



## Prophet4NO1

My thought was more for when the server is hit hard by the rest of the network, not so much for intercommunication. Especially for the game server: having the file server get pounded when I have 20 people in TeamSpeak and in at least one game server causes latency issues for the clients. Maybe at least a dual or quad port for a couple machines and virtual links between VMs?


----------



## lowfat

I think you'll likely need 100+ players connecting to a game server + TeamSpeak to saturate a 1GbE link. You'll be CPU or I/O limited way before that point.


----------



## herkalurk

Quote:


> Originally Posted by *lowfat*
> 
> I think you'll likely need 100+ players connecting to a game server + TeamSpeak to saturate a 1GbE link. You'll be CPU or I/O limited way before that point.


Way more than 100 players. Plus TeamSpeak is so low bandwidth, you'd need tens of thousands of players talking on TeamSpeak all at the same time. I run a TS3 server and the default codec bitrate is 5.71 KB/s, which is about 0.045 Mbit/s. That means you'd need roughly 22,000 concurrent TeamSpeak users talking all at once to saturate a 1Gbit connection. Same with game servers: they're only sending player position and action updates, somewhere around 100-200 kbit/s even on the highest-usage FPS games. Your CPU will die before you have enough users to use up a 1Gbit network connection.
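Spelling that arithmetic out: I'm reading the quoted 5.71 "KB" as KiB/s per talker, which is an assumption on the unit, but either interpretation lands in the low twenty-thousands.

```python
# How many simultaneous TeamSpeak talkers it takes to fill a 1 Gbit/s link.
# ASSUMPTION: the quoted 5.71 "KB" codec rate is treated as KiB/s per talker.
kib_per_s = 5.71
mbit_per_user = kib_per_s * 8 / 1024   # ~0.045 Mbit/s per talker
users = 1000 / mbit_per_user           # 1 Gbit/s = 1000 Mbit/s
print(f"{mbit_per_user:.3f} Mbit/s each -> ~{users:,.0f} talkers to fill the link")
```

About 22,400 concurrent talkers at that codec rate, so voice traffic is never the bottleneck on a home link.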


----------



## Prophet4NO1

Quote:


> Originally Posted by *lowfat*
> 
> I think you'll likely need 100+ players connecting to a game server + TeamSpeak to saturate a 1GbE link. You'll be CPU or I/O limited way before that point.


It's not the player saturation. It's the file server saturating the bandwidth and then timely game packets trying to get through. It would not be a constant issue, but it crops up. At least that was my experience when both machines were sharing one cable between a couple of switches, before I got the 24-port I have now. I guess I could mess with QoS to make things work better and keep a chunk of headroom always open.

I am using LACP with two connections on my file server now, and it maxes that out from time to time when a couple of machines are copying lots of files to the server. I can only read at about 1.5Gb/s from the drives currently in it, but it writes like crazy, caching to RAM first.


----------



## Blindsay

Quote:


> Originally Posted by *lowfat*
> 
> 
> Inside still looks mostly like this. Just w/ a Connect X2 10GbE and a 256GB Samsung 950 Pro + PCIe adapter installed as well.
> 
> 
> 
> 
> VMWare server:
> AMD Opteron 6168
> Supermicro H8SGL-F
> 64GB Kingston ECC RDIMMs
> LSI 9211-8i
> 256GB Samsung 950 Pro
> 6x3TB in RAIDZ2 w/ 2 hot spares.
> Mellanox Connect X2 10GbE
> 
> Other:
> Reprap Prusa i3 3d printer. Connected to a Raspberry Pi 2B running Octoprint as a print server.
> Netgear GS108T
> Ubiquiti AP AC PRO (not pictured, running POE to other part of home).
> 
> Running 5 VMs at the moment.
> 
> PfSense + Squid for the gateway and transparent adblocking / caching.
> FreeNAS 9.10 for storage.
> Ubuntu 14.04 server for Emby Server (Media Server).
> Ubuntu 14.04 server for FOG imaging server. Enables me to do create and restore images via PXE to my non-VM systems. Was able to create a 150GB image of my gaming rig in about 30 min, ended up being around 100GB compressed. Deploying an image is quite a bit faster. I did a test restore on a clean install of Win 8.1 and it took just over 5 minutes.
> Lubuntu 16.10 which run LFTP to sync to my seedbox as well as Couch Potato. Set this up when I didn't have much experience w/ linux. I want to move to Ubuntu server for this as well. Both both LFTP and Couch Potato were such a pain to get working I woudn't want to spend the time to do it over.
> 
> 
> 
> 
> Been working on a 2x E5-2670 rig for the last long while (years). Should hopefully have it done within a month so I can retire this aging Opteron rig.


Nice setup! I have a pair of 2670s myself. I would like to go back to VMware, but I hate how the free version limits VMs to no more than 8 vCPUs.


----------



## lowfat

Quote:


> Originally Posted by *Blindsay*
> 
> Nice setup! Have a pair of 2670s myself. Would like to go back to VMware but I hate how the free version limits VM's to no more than 8vcpu


What are you using for a hypervisor? Personally I don't need more than 8 vCPUs per VM. Actually I could probably get away w/ 4 cores total; none of my VMs really needs a powerful CPU. But I already had the motherboard laying around, which is why I'll be using it.


----------



## herkalurk

Quote:


> Originally Posted by *Blindsay*
> 
> Nice setup! Have a pair of 2670s myself. Would like to go back to VMware but I hate how the free version limits VM's to no more than 8vcpu


I'm not sure what home apps would even require 8 vCPUs. RAM is always the first thing you have contention issues with in virtualization, not CPU; CPU is down the list after disk I/O.


----------



## Blindsay

Quote:


> Originally Posted by *herkalurk*
> 
> I'm not sure what home apps even would require 8 CPU. Ram is always the first thing you have contention issues with in virtualization, not CPU. CPU is down the list after disk I/O.


My Plex server, which can be doing multiple full Blu-ray transcodes at a time (I have 14 people who use it).


----------



## KyadCK

Quote:


> Originally Posted by *Blindsay*
> 
> Nice setup! Have a pair of 2670s myself. Would like to go back to VMware but I hate how the free version limits VM's to no more than 8vcpu


Quote:


> Originally Posted by *Blindsay*
> 
> Quote:
> 
> 
> 
> Originally Posted by *herkalurk*
> 
> I'm not sure what home apps even would require 8 CPU. Ram is always the first thing you have contention issues with in virtualization, not CPU. CPU is down the list after disk I/O.
> 
> 
> 
> My Plex server which can be doing multiple full Blu-ray transcodes at a time (I have 14 people that use it)

You can always set up more than one and point to the same storage.


----------



## herkalurk

Quote:


> Originally Posted by *Blindsay*
> 
> My Plex server which can be doing multiple full Blu-ray transcodes at a time (I have 14 people that use it)


Quote:


> Originally Posted by *KyadCK*
> 
> You can always set up more than one and point to the same storage.


Or you put your Plex on hardware. I have an older HP ML350 with 2x 6-core Intel CPUs for 24 logical threads. Plex has no problems with that horsepower, plus it continues to do all the other stuff my Linux box is set up for.


----------



## Blindsay

Quote:


> Originally Posted by *herkalurk*
> 
> Or you put your plex on hardware. I have an older HP ML350 with 2 X 6 core intel cpus for 24 logical threads. Plex has no problems with that horsepower, plus it continues to do all the other stuff my linux box is setup for.


Not sure what you mean by that? If I run ESXi on the box, it would have to run within a VM.


----------



## herkalurk

Quote:


> Originally Posted by *Blindsay*
> 
> Not sure what you mean by that? If I run ESXi on the box it would have to be run within a vm


Don't run ESXi; install Linux directly on the server, install Plex, and give it all the cores. I'm a cloud consultant, and while most anything can be run in a VM, it doesn't mean everything should. Plenty of companies run certain apps on hardware because they need the power.


----------



## Blindsay

Quote:


> Originally Posted by *herkalurk*
> 
> Don't run esxi, install linux direct on the server, install plex, give it all the cores. I'm a cloud consultant, and while most anything can be run in a VM, it doesn't mean everything should. Plenty of companies run certain apps on hardware because they need the power.


ah gotcha.

Yeah, I just wanted to do it to get more experience with ESXi, and there are a few other things I wanted to run (that shouldn't be run on the same box).


----------



## TheBloodEagle

I really love the motherboard layout from lowfat's build. I really wish more gaming and workstation boards were like that. I think it's perfect in regards to airflow for a 1P system and I love how balanced it is aesthetically.


----------



## stevef9432203

I have a box with dual 2690s myself; memory is usually the limiting factor. 128GB of RAM does nicely.



----------



## Prophet4NO1

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I really love the motherboard layout from lowfat's build. I really wish more boards were like that. I think it's perfect in regards to airflow for a 1P system and I love how balanced it is aesthetically.


Most server boards are laid out similarly to this. It helps with cooling in rack cases.


----------



## TheBloodEagle

I've seen a few similar ones but usually the RAM is positioned on both sides of the cpu socket. I'm sure it's done for latency reasons and electrical in most cases. But I love how that one has all the RAM slots on one side. It's just so perfectly in balance with the PCIE slots. I haven't really dived too deep looking at server grade boards though, so it's probably common. But man, I wish some gaming/workstation boards did that.


----------



## lowfat

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I've seen a few similar ones but usually the RAM is positioned on both sides of the cpu socket. I'm sure it's done for latency reasons and electrical in most cases. But I love how that one has all the RAM slots on one side. It's just so perfectly in balance with the PCIE slots. I haven't really dived too deep looking at server grade boards though, so it's probably common. But man, I wish some gaming/workstation boards did that.


Intel LGA2011 boards have DIMMs on both sides of the CPU socket. Opteron boards have all 8 DIMMs lined up in a row.


----------



## Prophet4NO1

LGA115x boards also have the RAM on one side.

Supermicro in my FreeNAS box:


----------



## jieddo

Name: Aquitaine
OS: Win Server 2008R2
Case: Rosewill RSV-L4000 4U Rackmount
Server Rack: Tripp Lite 18U Rack
CPU: Xeon E3-1231 V3
GPU: Quadro K4000
Motherboard: SuperMicro MBD-X10SLM-F-O
RAM: 16GB Crucial DDR3 ECC
PSU: EVGA supernova modular (forgot how many watts)
Cooler: Corsair H90 (modified case to fit in 4U rackmount)
RAID Controller: Megaraid 9260 4i
HDD: WD RED 4TB x6 (RAID 10 array)
HDD: WD RED 4TB x1 (something of a cache drive)
HDD: WD Enterprise 4TB x2 (RAID 10 array cont.)
HDD: WD Purple 6TB (surveillance)
HDD: Toshiba 2TB (system)
HDD Storage: Silverstone FS305 Front Panel Storage x2
NIC: HP 4x 1Gb/s Intel NIC (LACP 4Gb/s)
Switch: Netgear 24 Port 1Gb/s managed switch (cheap on Ebay)
Firewall: Netgate FW-7535 PFSense rackmountable appliance
UPS: CyberPower 900W rackmount

Primary Uses: PLEX Media Server, IP Camera NVR, File Server
Users < 10
PLEX Movies = 1299
PLEX TV Shows = 55 Complete Series
PLEX Anime = 25 Complete Series/Movies
PLEX Disk Space 14TB
PLEX Disk Space Used 8.60TB


----------



## cookiesowns

Proxmox + ZFS based storage I maintain


----------



## slow3v

AMD A8-6600K
MSI FM2-A75MA-E35
8GB G.Skill Sniper
2x2TB WD RE4
1TB Misc Drive
160GB OS Drive
500GB "Torrent Cache" Drive (heavy read/write)
1TB External for Backup
NZXT Source 210
CM Gemini II S524 HSF
PCP&C Silencer MKIII 500w

Windows Server 2012 R2 set up, with all home PC's connected. Auto backups, the whole 9.


----------



## herkalurk

Quote:


> Originally Posted by *cookiesowns*
> 
> Proxmox + ZFS based storage I maintain
> 
> 
> 
> 
> 
> 
> 
> 
> 
> SNIPPED


I did a contract for a very big company, and the group I worked with had an extra $1 million to spend, or they would lose it back to the main budget if they didn't do something with it. They bought a new EMC Isilon NAS: 10 4U units, each able to hold 36 drives. Full to the brim with 3TB drives == about 1 petabyte...

Yeah, they bought a petabyte NAS because they didn't have anything else to spend the money on. To be fair, they were good with money otherwise; the default laptop we used was a top-of-the-line MacBook with an i7, IPS display, and 512GB SSD.
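The capacity claim checks out as a one-liner:

```python
# 10 chassis x 36 bays x 3 TB drives, as described above.
total_tb = 10 * 36 * 3
print(total_tb, "TB, i.e. about", total_tb / 1000, "PB")  # 1080 TB, ~1.08 PB
```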


----------



## rocklobsta1109

Quote:


> Originally Posted by *herkalurk*
> 
> I did a contract for a very big company, and the group I worked with had an extra $1 million to spend or they would lose it back to the main budget if they didn't do something with it. They bought a new EMC Isilon NAS. 10 4U units, each could hold 36 drives. Full to the brim with 3T drives == 1 Peta byte.....
> 
> Yeah, they bought a petabyte nas because they didn't have anything else to spend the money on. To be fair, they were good on money, they default laptop we used was a top of the line mac book with i7, IPS, and 512 SSD.


Super jelly. All the clients I worked for pinched pennies to the point of nearly blowing project timelines so far out that, many times, the projects were postponed or scrapped altogether.


----------



## herkalurk

Quote:


> Originally Posted by *rocklobsta1109*
> 
> Super jelly, all the clients I worked for pinched pennies to the point of nearly blowing the projects timeline so far out that many times, the projects were postponed or scrapped all together.


I left a job at a web development/web hosting company. They poured all their money into the web development but nothing into hosting. Too many times I had to fight fires because their equipment was outdated and they wouldn't let me take downtime to update the software on the servers. When the next job opened up, I took it.


----------



## tiro_uspsss

my website/torrent/cloud server:







self built (duh)
SuperMicro X7DCL-i
2x Intel Xeon X5460 with Noctua NH-U12DX heatsinks
6x 4GB Hynix DDR2-667/PC2-5300 ECC+REG
Areca 1210 (yellow SATA cables are from a DFI nF4 mobo!)
Mellanox ConnectX-2 (10GbE NIC)
Intel Desktop 1000/Pro (1GbE NIC)
CoolerMaster Silent Pro 1000W (overkill, yes, I know)
2x Hitachi 2TB (media for website)
Toshiba 3TB (torrents/cloud)
Crucial C300 128GB SSD (VMs)
Intel 330 120GB (OS: Windows Server 2012 R2)
Scythe fan controller (temp probe attached to northbridge heatsink)
Lian Li PC-A10B

The two mid-mounted fans are there to cool the Areca and the northbridge. The Areca didn't really need it, but the northbridge was getting rather toasty with no forced/direct airflow (~60C IIRC).


----------



## Liranan

Quote:


> Originally Posted by *cookiesowns*
> 
> Proxmox + ZFS based storage I maintain












Seriously... 96 HDDs... I wish I had that kind of storage; I'd download the internet.


----------



## herkalurk

Quote:


> Originally Posted by *Liranan*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Seriously... 96HD's... I wish I had that kind of storage, I'd download the internet.


Which part of it? The internet is many many exabytes.....


----------



## Liranan

Quote:


> Originally Posted by *herkalurk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Seriously... 96HD's... I wish I had that kind of storage, I'd download the internet.
> 
> 
> 
> Which part of it? The internet is many many exabytes.....

I was joking of course. With those 96 HD's I would set up my dream Plex/media server and even consider creating a streaming service.


----------



## cookiesowns

Quote:


> Originally Posted by *Liranan*
> 
> I was joking of course. With those 96 HD's I would set up my dream Plex/media server and even consider creating a streaming service.


There's way more than just 96 HDDs. There's also storage for the Proxmox VMs, and thus network storage as well.









Next phase of this project is migrating the ZFS arrays onto a dual-head-node JBOD setup with 3-4 4U60s (60-drive 4U JBODs).


----------



## Liranan

Quote:


> Originally Posted by *cookiesowns*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I was joking of course. With those 96 HD's I would set up my dream Plex/media server and even consider creating a streaming service.
> 
> 
> 
> There's way more than just 96HD's. There's storage for proxmox VM's and thus also network storage
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Next phase of this project is migrating the ZFS arrays onto a dual head node JBOD setup with 3-4 4U60's ( 60 drive 4U JBOD )

I'm dying of jealousy. Right now I am trying to save my media. The replacement of the hard disk I RMA'd is also dying: when copying to the drive, it gets stuck at 100% with no disk activity, which causes Windows to lock up. SMART doesn't report any errors, so I suspect the drive's board is defective rather than the platters. Of the three WD Reds, this is the third RMA. I am not buying these drives again; HGST and Toshiba from now on.





I'm thinking about asking for a refund rather than yet another drive, as this is getting ridiculous. I haven't bought WD drives in years because of quality control, and my first experience in seven years is turning out just splendidly.


----------



## herkalurk

Quote:


> Originally Posted by *Liranan*
> 
> I'm dying of jealousy. Right now I am trying to save my media. The replacement of the hard disk I RMA'd is also dying, when copying to the drive the drive gets stuck at 100% with no disk activity, which causes Windows to lock up. SMART doesn't report any drive errors so I suspect the drives board is defective rather than the platters. Of the three WD Red's this is the third RMA. I am not buying these drives again, HGST and Toshiba from now on.
> 
> SNIPPED
> 
> I'm thinking about asking for a refund rather than yet another drive as this is getting ridiculous. I haven't bought WD drives in years because of quality control and my first experience in seven years is turning out just splendidly.


The only thing I'll say is RAID...

If it's important, it's on RAID disks and backed up. Both of my servers run RAID 5 for storage, and the important stuff is backed up to the cloud with CrashPlan. I even have a RAID 1 of 1TB drives on my desktop.

I actually just upgraded my main server from 3x 2TB disks in RAID 5 to 3x 4TB disks in RAID 5, then used those 2TB disks and another one I bought to upgrade the 4x 1TB RAID 5 in my Linux server to 4x 2TB disks.
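For anyone following the drive shuffle, usable space in RAID 5 is just (n - 1) drives' worth, since one drive's worth of capacity goes to parity:

```python
# Usable capacity of an n-drive RAID 5 array (one drive's worth lost to parity).
def raid5_usable_tb(drives: int, size_tb: float) -> float:
    assert drives >= 3, "RAID 5 needs at least three drives"
    return (drives - 1) * size_tb

print(raid5_usable_tb(3, 4))  # the new 3x4TB array -> 8 TB usable
print(raid5_usable_tb(4, 2))  # the upgraded 4x2TB array -> 6 TB usable
```

So the main-server upgrade doubled usable space while freeing the 2TB disks for the Linux box.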


----------



## Zeus

Quote:


> Originally Posted by *Liranan*
> 
> The replacement of the hard disk I RMA'd is also dying, when copying to the drive the drive gets stuck at 100% with no disk activity, which causes Windows to lock up. SMART doesn't report any drive errors so I suspect the drives board is defective rather than the platters.


A few months ago I was getting the same thing, and I tracked it down to a data cable that had degraded. Once I replaced it, the problem hasn't returned.


----------



## shelter

Don't think I posted this before, but I recently got my Mikrotik RouterBOARD installed. Otherwise it's a 15TB Plex server, folding server, workstation, etc.


----------



## Liranan

Quote:


> Originally Posted by *Zeus*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> The replacement of the hard disk I RMA'd is also dying, when copying to the drive the drive gets stuck at 100% with no disk activity, which causes Windows to lock up. SMART doesn't report any drive errors so I suspect the drives board is defective rather than the platters.
> 
> 
> 
> A few months ago I was getting the same thing, and I tracked it down to a data cable that had degraded. Once I replaced it, the problem hasn't returned.

That's the first thing I did. The SATA cable I was using was new but I replaced it with another SATA cable and even tried another SATA port but the problem persists so I am left with the only conclusion that it's the drive. I've decided that if the next drive is defective too I will return them all, ask for refund and replace them with Toshiba DT01ACA300 3TB's.

The 3TB in my main PC is a DT01ACA300 and it's been quiet, cool and wonderful, so I will just get these instead. Backblaze have run thousands of these drives and only 7 have failed, so hopefully the 10 or 11 drives I will get will suffer very, very few failures. If they work well I will just keep buying them until I have filled my u4 dream chassis.

Edit: DT01ACA300's are made not by Hitachi/HGST but by Toshiba.


----------



## mbmumford

Quote:


> Originally Posted by *jieddo*
> 
> Name: Aquitaine
> OS: Win Server 2008R2
> Case: Rosewill RSV-L4000 4U Rackmount
> Server Rack: Tripp Lite 18U Rack
> CPU: Xeon E3-1231 V3
> GPU: Quadro K4000
> Motherboard: SuperMicro MBD-X10SLM-F-O
> RAM: 16GB Crucial DDR3 ECC
> PSU: EVGA supernova modular (forgot how many watts)
> Cooler: Corsair H90 (modified case to fit in 4U rackmount)
> RAID Controller: Megaraid 9260 4i
> HDD: WD RED 4TB x6 (RAID 10 array)
> HDD: WD RED 4TB x1 (something of a cache drive)
> HDD: WD Enterprise 4TB x2 (RAID 10 array cont.)
> HDD: WD Purple 6TB (surveillance)
> HDD: Toshiba 2TB (system)
> HDD Storage: Silverstone FS305 Front Panel Storage x2
> NIC: HP 4x 1Gb/s Intel NIC (LACP 4Gb/s)
> Switch: Netgear 24 Port 1Gb/s managed switch (cheap on Ebay)
> Firewall: Netgate FW-7535 PFSense rackmountable appliance
> UPS: CyberPower 900W rackmount
> 
> Primary Uses: PLEX Media Server, IP Camera NVR, File Server
> Users < 10
> PLEX Movies = 1299
> PLEX TV Shows = 55 Complete Series
> PLEX Anime = 25 Complete Series/Movies
> PLEX Disk Space 14TB
> PLEX Disk Space Used 8.60TB


I really like your setup! This is quite similar to how I want my setup to look in the long run, I just need to find a 4U case that will comfortably fit my SSI-EEB board.

Until then however, I'm off to buy more 6TB WD Red drives as my RAID array is just about full...


----------



## thymedtd

https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4500/dp/B0091IZ1ZG/ref=sr_1_1?ie=UTF8&qid=1484164416&sr=8-1&keywords=rosewill+server

This is actually a similar case to jieddo's and the same as my Eclipse server. I have a Tyan S7012, which is SSI EEB form factor, so you should have no trouble, mbmumford. Plenty of room to work in and plenty of cooling.


----------



## ChRoNo16

Shelter, I want details on that 2U box


----------



## Liranan

My two drives have become corrupted and I'm in the process of attempting to recover whatever I can before I send them back, demand my money back and replace them with Toshiba 3TB DT01ACA300's. My first experience with WD in years and it's been great: lost 5TB of data. I am not looking at WD again.


----------



## shelter

Quote:


> Originally Posted by *ChRoNo16*
> 
> Shelter I want details on that 2u box


It's an iStarUSA D-214-MATX. Link here. I use it as a Windows Blu-ray/DVD ripping/burning workstation (since all of my ripping software is Windows-based).


----------



## PuffinMyLye

Quote:


> Originally Posted by *ChRoNo16*
> 
> Shelter I want details on that 2u box


I use two of those myself.


----------



## mbmumford

Quote:


> Originally Posted by *thymedtd*
> 
> https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4500/dp/B0091IZ1ZG/ref=sr_1_1?ie=UTF8&qid=1484164416&sr=8-1&keywords=rosewill+server
> 
> This is actually a similar case to jieddo and the same as my eclipse server. I have a tyan s7012 which is ssi eeb form factor so you should have no trouble mbmumford. Plenty of room to work in and plenty of cooling.


Thanks for the info! I found that case and was wondering if I could make the board fit, but obviously it's not a problem. I'm leaning towards getting the RSV-L4500 and filling it with hot-swap bays, as it's cheaper than buying the RSV-L4412.

Either that or get a Norco 4020 or something similar.


----------



## herkalurk

Quote:


> Originally Posted by *Liranan*
> 
> My two drives have become corrupted and I'm in the process of attempting to recover whatever I can before I send them back, demand my money back and replace them with Toshiba 3TB DT01ACA300's. My first experience with WD in years and it's been great, lost 5TB of data. I am not looking at WD again.


So make a RAID array with disks of the same specs from different vendors. My Linux software RAID 5 is 4 disks: 2x Samsung 2TB, 1x Seagate 2TB, and 1x WD Blue 2TB.

None of the disks were bought at the same time, reducing the chance of QA issues with a batch. 3 of the 4 are out of warranty anyway, so if they start to fail I'll just buy another, probably whatever is on sale. I did much the same with the 3-disk RAID 5 in my Windows server: they're all Seagate 4TB drives, but from 3 different vendors, so there's no chance of a bad batch since they're all from completely different orders. I checked the serials while doing some quick tests before putting them into the array.


----------



## LazarusIV

Quote:


> Originally Posted by *herkalurk*
> 
> So make a raid array with disks of same stats from different vendors. My linux software raid 5 is 4 disks: 2 X samsung 2TB, 1 Seagate 2TB, and 1 WD Blue 2 TB
> 
> None of the disks were bought at the same time, reducing any QA issues with a batch. 3 of the 4 are out of warranty anyway, so if they start to fail, I'll just buy another, probably whatever is on sale. Sort of did the same with my 3 disk raid 5 in my windows server. They're all Seagate 4 T drives, but from 3 different vendors. No chance of a bad batch of disks since they are all from completely different orders. Checked the serials while doing some quick tests before putting them into the array.


Question: What performance difference do you see with a 4 HDD RAID 5 compared to a 3 HDD RAID 5? I ask because in the future I'm torn between a 3 or 4 HDD RAID 5 or a 4 HDD RAID 1+0...


----------



## Boulard83

My little thing in the closet.

Asrock EP2C602-4L/D16
Dual Xeon E5-2670 (8c/16t x2)
2x Phanteks PH-TC14S
32gb, 8x4gb Kingston DDR3 ECC
Plextor M5S 128gb - OS
Agility 4 128gb - something something
4x 2TB Red - Raid 1
1x 1TB Black - other stuff
Asus GTX 650 Ti
Corsair HX850
Rosewill RSV-R4000 RLT
OPTI-UPS Thunder Shield UPS 2000VA 1200W


----------



## herkalurk

Quote:


> Originally Posted by *LazarusIV*
> 
> Question: What performance difference do you see with a 4 HDD RAID 5 compared to a 3 HDD RAID 5? I ask because in the future I'm torn between a 3 or 4 HDD RAID 5 or a 4 HDD RAID 1+0...


It's not amazing performance, but they're slow disks (5400 RPM) meant to be big and cheap. It can take a minute or two to copy over a 10GB MKV movie file, depending on what other processes are using the I/O of that array. More spindles generally means faster, but RAID 10 will always win over RAID 5 with the same disks. We have a 10-disk RAID 10 at work with 2TB drives; it's rather quick even though they're only 7200 RPM drives.

You need to weigh cost/GB against performance needs. If money were no object, I'd have a RAID 10 array based solely on 1 or 2TB SSDs in an enterprise-grade iSCSI SAN. But I am not rich, and I'm conservative, so I bought 3x 4TB Seagates at $99 on Black Friday; 8TB usable is a nice amount of space. Plus I used the 3 old 2TB disks I migrated from, bought a fourth 2TB drive, and replaced the 4x 1TB array in my other server, so that server is now 4x 2TB. Then I used a couple of the 1TB drives in my desktop for a local RAID 1, and the other two are going elsewhere.
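The cost/GB side of that trade-off is easy to tabulate. A rough sketch (my own helper, not anything the posters used), assuming equal-size disks and ignoring filesystem overhead:

```python
def usable_tb(level: str, disks: int, tb_each: float) -> float:
    """Usable capacity for common RAID levels, assuming equal-size disks."""
    if level == "raid5":    # one disk's worth of parity
        return (disks - 1) * tb_each
    if level == "raid6":    # two disks' worth of parity
        return (disks - 2) * tb_each
    if level == "raid10":   # mirrored pairs; needs an even disk count
        return (disks // 2) * tb_each
    raise ValueError(f"unknown level: {level}")

# The 3x 4TB RAID 5 above vs. a hypothetical 4x 4TB RAID 10:
print(usable_tb("raid5", 3, 4))   # 8 TB usable, survives any single failure
print(usable_tb("raid10", 4, 4))  # 8 TB usable, but from 4 disks, and faster
```

Same usable space either way here; RAID 10 buys rebuild speed and performance at the cost of one extra disk.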


----------



## DVLux

Quote:


> Originally Posted by *Liranan*
> 
> My two drives have become corrupted and I'm in the process of attempting to recover whatever I can before I send them back, demand my money back and replace them with Toshiba 3TB DT01ACA300's. My first experience with WD in years and it's been great, lost 5TB of data. I am not looking at WD again.


And then the Toshiba drives get corrupted, and then you never buy Toshiba again.

Who will you buy from then?


----------



## Liranan

Quote:


> Originally Posted by *herkalurk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> My two drives have become corrupted and I'm in the process of attempting to recover whatever I can before I send them back, demand my money back and replace them with Toshiba 3TB DT01ACA300's. My first experience with WD in years and it's been great, lost 5TB of data. I am not looking at WD again.
> 
> 
> 
> So make a raid array with disks of same stats from different vendors. My linux software raid 5 is 4 disks: 2 X samsung 2TB, 1 Seagate 2TB, and 1 WD Blue 2 TB
> 
> None of the disks were bought at the same time, reducing any QA issues with a batch. 3 of the 4 are out of warranty anyway, so if they start to fail, I'll just buy another, probably whatever is on sale. Sort of did the same with my 3 disk raid 5 in my windows server. They're all Seagate 4 T drives, but from 3 different vendors. No chance of a bad batch of disks since they are all from completely different orders. Checked the serials while doing some quick tests before putting them into the array.
Click to expand...

Quote:


> Originally Posted by *DVLux*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> My two drives have become corrupted and I'm in the process of attempting to recover whatever I can before I send them back, demand my money back and replace them with Toshiba 3TB DT01ACA300's. My first experience with WD in years and it's been great, lost 5TB of data. I am not looking at WD again.
> 
> 
> 
> And then the Toshiba drives get corrupted, and then you never buy Toshiba again.
> 
> Who will you buy from then?
Click to expand...

What happened is that my drives went from being formatted to being RAW overnight. Before this the drives had been slowing down: instead of writing at 120 MB/s they had dropped to 100, then 80, and sometimes even 50 MB/s, so I know the drives were defective. Not surprising really, as they were cheap Reds at half price, but they were advertised as new so I took the chance. Since then I've come to believe these drives are factory rejects: working, but not in good condition, and sold off cheap.

The seller will refund me and I will buy three or four brand new 3TB Toshibas. The DT01ACA300's are 7200 RPM drives and have CCTL (TLER) enabled, and thus are suitable for RAID (I will use SnapRAID as it's really simple and easy to use).


----------



## parityboy

*@Liranan*

So those Reds were eBay drives? From a private seller? For me, that is the biggest "No no NO!!!" ever. If there's one thing I refuse to buy from private sellers it's storage. _Caveat emptor_.


----------



## LazarusIV

Quote:


> Originally Posted by *herkalurk*
> 
> It's not amazing performance, but they're slow disks (5400 RPM) meant to be big and cheap. It can take a minute or 2 to copy over a 10GB mkv movie file depending on what other processes are using the I/O of that array. More spindles generally means it's faster, but raid 10 will always win compared to raid 5 with same disks. We have a 10 disk Raid 10 at work with 10 2TB drives. It's rather quick even though they're only 7200 RPM drives.
> 
> You need to weigh cost/GB vs performance needs. If money were no object, I'd have a raid 10 array based solely on 1 or 2 TB SSDs in a enterprise grade iSCSI san. But I am not rich, and I'm conservative, so I bought 3 X 4 TB Seagate for $99 on black friday. 8 TB usable is a nice amount of space. Plus I used the 3 old 2 TB disks I migrated from to replace the 4 X 1 TB array in my other server, bought a 2nd 2 TB drive and now that server is 4 X 2 TB. Then I used a couple 1 TB drives in my desktop for a local Raid 1, and the other 2 are going else where.


Ah, I see. Thanks for the info, that's EXACTLY what I was wondering! Looks like I'll stick with RAID 1 for now and, depending on prices later, stay flexible between RAID 1+0 and RAID 5 in the future. Thank you!


----------



## Liranan

Quote:


> Originally Posted by *parityboy*
> 
> *@Liranan*
> 
> So those Reds were eBay drives? From a private seller? For me, that is the biggest "No no NO!!!" ever. If there's one thing I refuse to buy from private sellers it's storage. Caveat emptor.


Basically yes, they were from a private seller on China's eBay, Taobao. The Toshibas will be from an authorised seller, and if the service is good I will buy these drives exclusively, as the one in my PC is great.


----------



## PuffinMyLye

Installed one of the two new CoolJag BUF-E copper heatsinks on one of my two D-1537 motherboards. Idle temps dropped 15°C right off the bat. I'll be doing load testing later today, but it's safe to say I'm very happy with the purchase so far.


----------



## Rbby258

Software recommendations....

I'd like to be able to use my off-site server as if I were sitting in front of it. I currently use TeamViewer as it was the quickest to set up, but I'm after something that works and feels better. With TeamViewer I feel like I can't do much more than check on the server and move files around in File Explorer.

When I next upgrade my server this will help me decide what hardware to go for. It would be nice to be able to use video editing programs, Photoshop and various other software on the system, but in a lag-free, over-the-air way. Ideally I'd put all my compute horsepower elsewhere and live with a MacBook.


----------



## Prophet4NO1

I recently jumped over to VNC. Works great and runs on pretty much any OS.


----------



## twerk

Quote:


> Originally Posted by *Rbby258*
> 
> Software recommendations....
> 
> I'd like to be able to use my off-site server as if I was sitting in front of it. I currently use team viewer as it was the quickest to set up, but I'm after something that works and feels better. Team viewer I feel like I couldn't do much more than check on the server info and move files around in file explorer.
> 
> When I next upgrade my server this will help me decide what hardware to go for. It would be nice to be able to use video editing programs and things like photoshop and various other softwares on the system, but in a lag free but over the air solution. Ideally, putting all my compute horsepower elsewhere and living with a MacBook.


If the server is running Windows you can't beat RDP over VPN in my opinion.


----------



## Rbby258

Quote:


> Originally Posted by *Prophet4NO1*
> 
> I recently jumped over to VNC. Works great and runs on pretty much any OS.


I've just found AnyDesk and am testing it; it seems really good so far.


----------



## PuffinMyLye

Quote:


> Originally Posted by *twerk*
> 
> If the server is running Windows you can't beat RDP over VPN in my opinion.


^This.

And in terms of lag, that's going to be mainly determined by the speed of both your home's upload connection and your remote download connection.


----------



## MrGuvernment

Quote:


> Originally Posted by *LazarusIV*
> 
> Question: What performance difference do you see with a 4 HDD RAID 5 compared to a 3 HDD RAID 5? I ask because in the future I'm torn between a 3 or 4 HDD RAID 5 or a 4 HDD RAID 1+0...


Unless you're using SSDs, stop using RAID 5...

https://community.spiceworks.com/topic/356919-why-raid-5-sucks

https://community.spiceworks.com/topic/1324304-is-raid5-really-bad

http://www.smbitjournal.com/2012/12/the-history-of-array-splitting

http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage

http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013

http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count

http://www.smbitjournal.com/2012/11/hardware-and-software-raid

http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better

http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess

http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable

http://www.smbitjournal.com/2011/09/spotlight-on-smb-storage

http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805

http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

http://queue.acm.org/detail.cfm?id=1670144


----------



## Prophet4NO1

ZFS!!!!


----------



## parityboy

Quote:


> Originally Posted by *MrGuvernment*
> 
> Unless you're using SSDs, stop using RAID 5...


You raise a good point. With the performance advantage of SSDs, does RAID 5 get a stay of execution?


----------



## LazarusIV

Quote:


> Originally Posted by *MrGuvernment*
> 
> Unless you're using SSDs, stop using RAID 5...
> 
> https://community.spiceworks.com/topic/356919-why-raid-5-sucks
> 
> https://community.spiceworks.com/topic/1324304-is-raid5-really-bad
> 
> http://www.smbitjournal.com/2012/12/the-history-of-array-splitting
> 
> http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage
> 
> http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013
> 
> http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count
> 
> http://www.smbitjournal.com/2012/11/hardware-and-software-raid
> 
> http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better
> 
> http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess
> 
> http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable
> 
> http://www.smbitjournal.com/2011/09/spotlight-on-smb-storage
> 
> http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
> 
> http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
> 
> http://queue.acm.org/detail.cfm?id=1670144


Ah gotcha, I'll stick with RAID 1 for now, then RAID 1+0 when I get a couple more of these drives.


----------



## Liranan

Quote:


> Originally Posted by *MrGuvernment*
> 
> Quote:
> 
> 
> 
> Originally Posted by *LazarusIV*
> 
> Question: What performance difference do you see with a 4 HDD RAID 5 compared to a 3 HDD RAID 5? I ask because in the future I'm torn between a 3 or 4 HDD RAID 5 or a 4 HDD RAID 1+0...
> 
> 
> 
> Unless you're using SSDs, stop using RAID 5...
> 
> https://community.spiceworks.com/topic/356919-why-raid-5-sucks
> 
> https://community.spiceworks.com/topic/1324304-is-raid5-really-bad
> 
> http://www.smbitjournal.com/2012/12/the-history-of-array-splitting
> 
> http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage
> 
> http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013
> 
> http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count
> 
> http://www.smbitjournal.com/2012/11/hardware-and-software-raid
> 
> http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better
> 
> http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess
> 
> http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable
> 
> http://www.smbitjournal.com/2011/09/spotlight-on-smb-storage
> 
> http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
> 
> http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
> 
> http://queue.acm.org/detail.cfm?id=1670144
Click to expand...

Extremely sensationalist articles that do not apply to home users. For home use RAID 5 is more than adequate, depending on the array size. If the array is very large then RAID 6, triple parity, or even 4 or 5 parity drives are necessary, but for a 4-drive array RAID 5 is just fine.

FreeNAS has recently added triple parity (RAID-Z3, three parity drives) and unRAID has just moved to dual parity (RAID 6 style). Personally I don't like those parity counts, as I prefer n+1, but for my current 3-drive RAID 5 array there is no need to go further or to adhere to the articles above.


----------



## PuffinMyLye

Quote:


> Originally Posted by *Liranan*
> 
> Extremely sensationalist articles that do not apply to home users. For home use RAID 5 is more than adequate, depending on the array size. If the array is rather large then RAID 6, 7 and even 4 or 5 parity drives are necessary but for a 4 drive RAID array RAID 5 is just fine.
> 
> FreeNAS have just recently moved to RAID 7 (RAID Z3/3 parity drives) and UnRAID have just moved to RAID 6 (two parity). Personally I don't like these numbers of parity as I like to have n+1 but for my current 3 drive RAID 5 array there is no need to go n+1 or to adhere to the articles above.


The number of drives in the array isn't the concern (at least not mine); it's the size of the drives. If you are using drives larger than 1TB in a RAID 5 array, you are at high risk of hitting a URE (unrecoverable read error) during a rebuild. If that happens, say goodbye to your entire array.
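The math behind that claim can be sketched in a few lines. This is a worst-case model assuming the consumer-drive spec-sheet URE rate of one error per 10^14 bits read (the figure the commonly cited ZDNet article uses) and treating every bit as an independent trial; real drives often do better than their spec:

```python
import math

def rebuild_ure_probability(data_drives: int, drive_tb: float,
                            ure_rate: float = 1e-14) -> float:
    """Chance of hitting at least one URE while reading every surviving
    drive end-to-end during a RAID 5 rebuild (independent-bit model)."""
    bits_read = data_drives * drive_tb * 1e12 * 8  # TB -> bits
    # -expm1(n * log1p(-p)) == 1 - (1 - p)**n, but numerically stable
    return -math.expm1(bits_read * math.log1p(-ure_rate))

# Rebuilding a 4-drive RAID 5 of 4TB disks reads the 3 surviving drives:
p = rebuild_ure_probability(data_drives=3, drive_tb=4.0)
print(f"{p:.0%}")  # about a 62% chance of at least one URE
```

That 62% is the spec-sheet worst case, which is why the articles read so alarmingly against large RAID 5 arrays.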


----------



## herkalurk

Quote:


> Originally Posted by *PuffinMyLye*
> 
> The amount of drives in the array isn't the concern (at least not mine), it's the size of the drives. If you are using any larger than 1TB drives in a RAID5 array you are at a high risk of getting a URE (unrecoverable read error) during a rebuild. If that happens, say goodbye to your entire array.


How high of a risk?

I just rebuilt 2 arrays, a 3-disk and a 4-disk RAID 5, both with larger-than-1TB drives, with no issues. I've rebuilt both of them a few times.


----------



## PuffinMyLye

Quote:


> Originally Posted by *herkalurk*
> 
> How high of a risk?
> 
> I just rebuilt 2 arrays a 3 disk and 4 disk raid 5 both with larger than 1 tb drives, no issues. I've rebuilt both of them a few times.


http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/


----------



## beatfried

Also, there's bit rot: http://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/


----------



## parityboy

Quote:


> Originally Posted by *PuffinMyLye*
> 
> The amount of drives in the array isn't the concern (at least not mine), it's the size of the drives. If you are using any larger than 1TB drives in a RAID5 array you are at a high risk of getting a URE (unrecoverable read error) during a rebuild. If that happens, say goodbye to your entire array.


The number of drives can be a concern from a statistical perspective: the greater the number of drives in the array, the greater the chance of one being weak and failing, whether during normal operation or during a rebuild.

The speed of the drives is a concern too: the less time the array spends rebuilding, the less chance of a drive dying mid-rebuild from the mechanical stress. If I were going to build a RAID 5, I'd have no more than 7 drives, with each drive no more than 750GB in capacity. Even SSDs don't increase the drive count here, because apparently SSDs have a greater chance of data errors, even though they have a lower chance of outright drive failure.

The capacity of each drive is the third concern, with regard to hitting a URE during a rebuild.

As ever, _BAYF_ (Backups Are Your Friend).
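The rebuild-time side of this is easy to put in numbers. A back-of-envelope sketch (my own, not from the thread) that assumes a sustained sequential rate and ignores controller overhead and concurrent I/O; the 100 MB/s figure is purely illustrative:

```python
def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    """Best-case hours to write a replacement drive end-to-end."""
    return drive_tb * 1e12 / (mb_per_s * 1e6) / 3600

# A 750GB drive vs. a 4TB drive at the same assumed 100 MB/s sustained rate:
print(round(rebuild_hours(0.75, 100), 1))  # 2.1 hours
print(round(rebuild_hours(4.0, 100), 1))   # 11.1 hours
```

Every extra hour of rebuild is another hour the remaining drives spend under full-tilt read stress, which is the mechanical-failure window described above.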


----------



## PuffinMyLye

Quote:


> Originally Posted by *parityboy*
> 
> The number of drives can be a concern from statistical perspective - the greater the number of drives in the array, the greater chance of a drive being weak and failing, whether during normal operation or during a rebuild. The speed of the drives is a concern also - the less time the array spends rebuilding, the less chance of a drive dying during a rebuild, due to mechanical stresses. The capacity of each drive is the third concern, with regard to hitting a URE during a rebuild.
> 
> As ever, _BAYF_ (Backups Are Your Friend).


I agree with everything you said. My statement was only in reference to URE's.


----------



## parityboy

Quote:


> Originally Posted by *PuffinMyLye*
> 
> I agree with everything you said. My statement was only in reference to URE's.


I know.

I posted simply to clarify for other readers. Your post was a useful reference point, so I quoted it.


----------



## MrGuvernment

Quote:


> Originally Posted by *Liranan*
> 
> Extremely sensationalist articles that do not apply to home users. For home use RAID 5 is more than adequate, depending on the array size. If the array is rather large then RAID 6, 7 and even 4 or 5 parity drives are necessary but for a 4 drive RAID array RAID 5 is just fine.
> 
> FreeNAS have just recently moved to RAID 7 (RAID Z3/3 parity drives) and UnRAID have just moved to RAID 6 (two parity). Personally I don't like these numbers of parity as I like to have n+1 but for my current 3 drive RAID 5 array there is no need to go n+1 or to adhere to the articles above.


Not sensationalist at all: fact and statistics. Sure, some people have never had problems with their RAID 5, but just as many people have, for the exact reasons stated; this is why even OEMs have finally started removing RAID 5 from storage arrays or no longer recommend it.

I know people who run RAID 0 just to do it and have never had a drive failure; that doesn't mean it always works.


----------



## xxpenguinxx

I manage a few hundred servers at work. If it's more than 3 drives, we use RAID 6 at minimum. We've lost whole arrays after a single drive failure on RAID 5 a few times. We even came close to losing a RAID 6 array that had two drives fail around the same time and a third throw errors during the rebuild. You shouldn't be relying on RAID as a form of protection; it's there to keep the system running through failures, not to save you from them.

Now about networking. I might have gone over this here already.

I want to set up a 10Gb network between my desktop and server. Even on SATA II, my SSDs are limited by the 1Gbps link. I was thinking of getting two Mellanox ConnectX-2 10GbE NICs since they're cheap on eBay. Are there any vendor limitations on fiber modules? I'm also not familiar with fiber in general, like the nm ratings. I could always use direct-attach, but the cable is too thick to tuck under my rug, plus from what I've read it has more latency than fiber (still better than 1Gb though). Currently my server runs Server 2008 R2 and my desktop Windows 7 x64.
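The bottleneck arithmetic is straightforward. A quick sketch using raw line rates (no protocol overhead) and an assumed ~280 MB/s SATA II-limited SSD sequential speed; both figures are illustrative, not measured:

```python
def link_mb_per_s(gbps: float) -> float:
    """Raw line rate of a network link expressed in MB/s."""
    return gbps * 1000 / 8  # 1 Gbps = 125 MB/s before overhead

ssd_mb_s = 280  # assumed SATA II-limited SSD sequential speed (illustrative)
for gbps in (1, 10):
    net = link_mb_per_s(gbps)
    print(f"{gbps}GbE link: {net:.0f} MB/s, "
          f"effective transfer cap: {min(net, ssd_mb_s):.0f} MB/s")
```

On 1GbE the wire is the cap; on 10GbE the disk becomes the cap again, which is the whole point of the upgrade.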


----------



## burksdb

Quote:


> Originally Posted by *xxpenguinxx*
> 
> I manage a few hundred servers at work. If it's more than 3 drives, we use RAID 6 at minimum. We've lost whole arrays after a single drive failure on RAID 5 a few times. We even came close to losing a RAID 6 array that had 2 drives fail around the same time, and a 3rd throwing errors during the rebuild. You shouldn't be relying on RAID as a form of protection. It's there to help keep the system running during failures, not save you from failures.
> 
> Now about networking. I might of gone over this already here.
> 
> I want to setup a 10g network between my desktop and server. Even with Sata II my SSDs are limited by the 1gbps link. I was thinking of getting two Mellanox Connect X2 10GbE NICs since they're cheap on Ebay. Are there any vendor limitations on fiber modules? I'm also not familiar with fiber in general, like the nm ratings. I can always do direct attach but it's too thick to tuck under my rug, plus from what I've read it has more latency than fiber, still better than 1gb though. Currently my server runs on Server 2008 R2 and my desktop Windows 7 x64.


I've used the Intel X520-DA1 SFP+ cards (can be found for $50-60) with a couple of Brocade transceivers (linked here) and a multimode fiber run through my house into the garage to connect my server and desktop. It worked without any issues, and kept working after I picked up my Quanta LB4M switch (48 1Gb ports, 2 SFP+).

I've also tried this setup with the Brocade 1020 cards and everything works as well. I haven't tried a Mellanox ConnectX-2 card, but I might have one lying around here to test with.


----------



## Removed1

Quote:


> Originally Posted by *MrGuvernment*
> 
> Not sensationalist at all: fact and statistics. Sure, some people have never had problems with their RAID 5, but just as many people have, for the exact reasons stated; this is why even OEMs have finally started removing RAID 5 from storage arrays or no longer recommend it.
> 
> *I know people who run RAID 0 just to do it and have never had a drive failure; that doesn't mean it always works*.


Sorry for the off-topic. I don't own a server, but I run RAID 0, and this discussion about the reliability of RAID 5/6 is quite interesting.

So I'm running RAID 0 with 3 SAS disks on my gaming rig. When it fails, it fails, and the beep is so annoying and loud! But how can it fail and still work? I just force the array back online and my RAID boots again, followed by Windows; it repairs the errors and the job is done. If we're talking about a bit flip that destroys the array rebuild, I would have lost my entire filesystem/Windows. So how is it that my RAID 0 still has a working Windows after a lot of RAID failures? I'm even running without a battery (the failures happen mainly because a disk moves and loses contact with the backplane).

This is a noob question regarding my use case; I completely understand the reliability needed for business and company data management.


----------



## burksdb

Quote:


> Originally Posted by *Wimpzilla*
> 
> Sorry for the off-topic. I don't own a server, but I run RAID 0, and this discussion about the reliability of RAID 5/6 is quite interesting.
> 
> So I'm running RAID 0 with 3 SAS disks on my gaming rig. When it fails, it fails, and the beep is so annoying and loud! But how can it fail and still work? I just force the array back online and my RAID boots again, followed by Windows; it repairs the errors and the job is done. If we're talking about a bit flip that destroys the array rebuild, I would have lost my entire filesystem/Windows. So how is it that my RAID 0 still has a working Windows after a lot of RAID failures? I'm even running without a battery *(the failures happen mainly because a disk moves and loses contact with the backplane).*
> 
> This is a noob question regarding my use case; I completely understand the reliability needed for business and company data management.


A drive failing due to an actual hardware fault would be the difference. Your array goes "offline" when one of the disks disappears; once you bring it back online, everything is as it was. (Will this work every time? No. It sounds like you've been fortunate.)

If one drive had an actual hardware fault, you would be unable to rebuild the array to access the data.


----------



## Liranan

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Quote:
> 
> 
> 
> Originally Posted by *herkalurk*
> 
> How high of a risk?
> 
> I just rebuilt 2 arrays a 3 disk and 4 disk raid 5 both with larger than 1 tb drives, no issues. I've rebuilt both of them a few times.
> 
> 
> 
> http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
Click to expand...

A highly sensationalised article with lots of exaggerations.

https://www.high-rely.com/blog/why-raid-5-stops-working-in-2009-not/

The problem with RAID 5 isn't rot but rebuild times.


----------



## Liranan

My new drives have arrived and the old WD Reds are on their way back to the shop. I will take a 15% loss, but I'd rather lose 15% than lose all of my data (I already lost a lot of data thanks to the WD Reds).

Toshiba P300 HDWD130, manufacture date December 2016.


My cat was really interested in what had arrived.


Too bad you can't see it here: his eyes are green around the pupil. I do wish cats stayed kitten-sized; he used to sit on my shoulder and talk to me (he never shuts up, always talking), but now he's too big.

Edit: forgot to say these are the first three of a total of 12 3TB drives. My intention is to run them in RAID 7+1 (RAID 8?) with one hot spare and 7 data drives. I could skip the hot spare and just have 8 data drives, but it all depends on how much data I manage to amass.


----------



## xxpenguinxx

I knew this would happen after making that RAID 5 post. We just lost another array at work because it used RAID 5: one disk failed, and shortly after starting the rebuild a second one failed...

So, DO NOT use RAID 5. At minimum use RAID 6, and if possible, try to buy drives with varying manufacturing dates so they don't all start failing at the same time.

Ended up ordering two Mellanox MNPA19-XTR cards and a direct-attach cable. Might switch to fiber later down the road. I'll post results when they come in.


----------



## Prophet4NO1

Quote:


> Originally Posted by *xxpenguinxx*
> 
> I knew this would happen after making that raid 5 post. Just lost another array at work because they used raid 5. Had 1 disk fail, and shortly after starting the rebuild a second one failed...
> 
> So, DO NOT use Raid 5. At minimum use raid 6, and if possible, try to buy drives with varying manufacturing dates so they don't all start failing at the same time.
> 
> Ended up ordering two Mellanox MNPA19-XTR and a direct attach cable. Might switch to fiber later down the road. I'll post results when they come in.


Hope you had a backup.


----------



## xxpenguinxx

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Hope you had a backup.


I tried to recover the failed disks but no luck. Just sent out the confirmation, I'll find out in a few minutes...

On the bright side maybe they'll replace it already, it's probably 10 years old now.


----------



## PuffinMyLye

Quote:


> Originally Posted by *xxpenguinxx*
> 
> I tried to recover the failed disks but no luck. Just sent out the confirmation, I'll find out in a few minutes...
> 
> On the bright side maybe they'll replace it already, it's probably 10 years old now.


But did you not have an actual backup of the data on the array?


----------



## Prophet4NO1

Quote:


> Originally Posted by *xxpenguinxx*
> 
> I tried to recover the failed disks but no luck. Just sent out the confirmation, I'll find out in a few minutes...
> 
> On the bright side maybe they'll replace it already, it's probably 10 years old now.


Quote:


> Originally Posted by *PuffinMyLye*
> 
> But did you not have an actual backup of the data on the array?


Lesson to learn, RAID is not backup.


----------



## Liranan

Quote:


> Originally Posted by *xxpenguinxx*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Prophet4NO1*
> 
> Hope you had a backup.
> 
> 
> 
> I tried to recover the failed disks but no luck. Just sent out the confirmation, I'll find out in a few minutes...
> 
> On the bright side maybe they'll replace it already, it's probably 10 years old now.
Click to expand...

Any SMART errors prior to the failures?

RAID isn't backup. It's better than no backup, but it's not a backup. If the data is mission critical it should be in RAID 50/60 or a higher level of RAID. For static, less critical data such as media, RAID 5/6 are perfect as they allow high read speeds, letting a high number of people access the data. Some people don't even RAID their data; they just keep it on separate drives and replace a drive when they get SMART warnings.

That doesn't protect against sudden failures, which is what RAID 5/6 are supposed to cover, but you need to keep an eye on SMART at all times. Even though my drives are new I check their SMART data every day, sometimes several times a day. After the last three drives failed I am really paranoid.

Read this to see how catastrophic failures can be, a SnapRAID array recovered after four drive failures:

https://sourceforge.net/p/snapraid/discussion/1677233/thread/eca5ed3d/

Personally I would have used 5 parity drives with 1 hot spare. I would give up two more drives' worth of capacity, but once you lose data you realise that even the slightest bit of extra safety is better than nothing. He also didn't keep an eye on SMART, so he didn't see the failures coming.
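For a concrete example of the kind of daily check I mean, something like this flags the attributes that most often precede failure (the output below is a canned sample of smartctl's attribute table; in practice pipe in the real `smartctl -A /dev/sdX`):

```shell
# Flag any nonzero raw value among the failure-predicting SMART
# attributes. The sample is canned smartctl -A output; column 10
# is the raw value in smartmontools' usual table layout.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'
echo "$sample" | awk '$10 > 0 { print "WARNING: " $2 " raw=" $10 }'
```

A real run would loop over each drive; anything with a climbing Current_Pending_Sector count is a drive to replace.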


----------



## MrGuvernment

delete


----------



## MrGuvernment

Quote:


> Originally Posted by *Liranan*
> 
> Any SMART errors prior to the failures?
> 
> RAID isn't backup, it's better than no backup but it's not a backup. If the data is mission critical it should be in a RAID 50/60 or higher level of RAID. For static, not so critical data such as media RAID 5/6 are perfect as they allow high read speeds, allowing a high number of people to access the data. Some people don't even RAID their data, they just have them in separate drives and if they get SMART warnings they replace the drives.
> 
> That doesn't protect from sudden failures so that is what RAID 5/6 are supposed to protect from but you need to keep an eye on SMART at all times. Even though my drives are new I check their SMART data every day, sometimes several times a day. After the last three drives failed I am really paranoid.
> 
> Read this to see how catastrophic failures can be, a SnapRAID array recovered after four drive failures:
> https://sourceforge.net/p/snapraid/discussion/1677233/thread/eca5ed3d/
> 
> Personally I would have used 5 parity drives with 1 hot spare. I would lose two drives but once you lose data you realise that even the slightest bit of safety and security is better than nothing and he didn't keep an eye on SMART so he didn't see the failures coming.


Hot spares are almost useless: it is not really a "hot spare" in that the drive is just powered on, wearing out while not being used. You might as well make it part of the array and have it be useful, especially in RAID 5 or RAID 50, which nobody should be using nowadays. The exception is if your system is far away and getting to it takes time; then a hot spare could help you.

Having a hot spare in RAID 5 is not really something you want anyway. When a RAID array fails, you want to stop, see why, and work to recover data BEFORE any rebuild starts. As we know, parity RAID is slow to rebuild and strains the other disks immensely, which increases the chance of another failure = goodbye to all your data.

Use RAID 10 or RAID 6, put all drives to work, and have proper backups.


----------



## MrGuvernment

Quote:


> Originally Posted by *Wimpzilla*
> 
> Sorry for the off topic, i do not own a server, but own raid 0 and this discussion about reliability of raid 5/6 is quite interesting.
> 
> So i m running a raid 0 with 3 sas disk on my gaming rig, when it fail it fail, the bip is so annoying and loud!
> But how it could fail and still work, i just force the array back online and my raid boot again followed by windows, repair errors, job done.
> So there we are speaking about a bite flip that destroy the array rebuild, then i would have lost my entire filesystem/windows.
> How it could be with my raid 0 i still have windows working after a lot of raid fails, i'm even ruining without a battery (fails mainly because my disk move and lost the contact with the back plate).
> 
> This is a noob question regarding my use, i understand completely the reliability needed on business and company data management.


Firstly I will say, you are likely not seeing much difference with 3 SAS drives in RAID 0 for gaming. Games, like Windows, often load many small files, so with the exception of some games with massive textures or map loads it may not be much faster.

RAID 0 is ideal for large-file work and transfers, as that is where its true performance comes out; for things like apps, games and Windows, not so much.

As noted, it is likely the RAID controller dropping a drive, either due to a timeout or something else, so the array goes offline but no data is lost.


----------



## Liranan

Quote:


> Originally Posted by *MrGuvernment*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Any SMART errors prior to the failures?
> 
> RAID isn't backup, it's better than no backup but it's not a backup. If the data is mission critical it should be in a RAID 50/60 or higher level of RAID. For static, not so critical data such as media RAID 5/6 are perfect as they allow high read speeds, allowing a high number of people to access the data. Some people don't even RAID their data, they just have them in separate drives and if they get SMART warnings they replace the drives.
> 
> That doesn't protect from sudden failures so that is what RAID 5/6 are supposed to protect from but you need to keep an eye on SMART at all times. Even though my drives are new I check their SMART data every day, sometimes several times a day. After the last three drives failed I am really paranoid.
> 
> Read this to see how catastrophic failures can be, a SnapRAID array recovered after four drive failures:
> https://sourceforge.net/p/snapraid/discussion/1677233/thread/eca5ed3d/
> 
> Personally I would have used 5 parity drives with 1 hot spare. I would lose two drives but once you lose data you realise that even the slightest bit of safety and security is better than nothing and he didn't keep an eye on SMART so he didn't see the failures coming.
> 
> 
> 
> Hot spares are almost useless as it is not really a "hot spare" in that it is just on and your wearing out a drive while not using it, might as well make it part of the array and be useful, especially in a raid 5 or raid 50,. which now a days NO one should be using. Unless your system is far away and getting to it may take time and a hot spare could help you.
> 
> Having a hot spare in a Raid 5 is not really something you want, when a raid array fails, you want to stop and see why and work to recovery data BEFORE any rebuilds start, as we know, parity raid is slow to rebuild and strains other disk immensely, increases possible chance for another failure = good by to all your data.
> 
> Raid 10 or raid 6 and use all drives and have proper backups,
Click to expand...

I do not intend to use RAID 5. Once I have my 12 drives I will use three or even four parity drives. It's a little excessive, but once I need more space I will get a 4U 24-drive chassis, which is my intention anyway. Then I will have 24 drives, of which 5 or even 6 will be parity drives. That way rebuild times and the burden on the parity drives are reduced somewhat.

With 12 drives I will have 7 or 8 data drives and with 24 drives I will have 17 or 18 data drives, which is more than enough for a home server.
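For anyone curious what a multi-parity layout looks like, SnapRAID describes it declaratively. A hypothetical snapraid.conf for a build with 4 parity drives (all paths and mount points below are made up):

```shell
# Hypothetical snapraid.conf: 4 parity drives plus data drives.
# Every path here is illustrative; adjust to the real mounts.
cat > /tmp/snapraid.conf <<'EOF'
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
3-parity /mnt/parity3/snapraid.3-parity
4-parity /mnt/parity4/snapraid.4-parity
content  /mnt/data1/snapraid.content
data d1  /mnt/data1/
data d2  /mnt/data2/
EOF
# Sanity-check: count the parity definitions.
grep -cE '^([0-9]-)?parity' /tmp/snapraid.conf
```

SnapRAID supports up to six parity levels; `snapraid sync` computes the parity and `snapraid scrub` verifies it.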


----------



## Master__Shake

Long Live RAID 6!


----------



## Liranan

Quote:


> Originally Posted by *Master__Shake*
> 
> 
> 
> Long Live RAID 6!












I hate you man, you know how jealous I am of your setup so you just have to rub it in









Personally with all those drives I would do RAID 7 at least but seriously that is sexy









Have you attached all those drives to that one LSI card? As far as I can see it's only a 4 SATA port card so how did you do it? I don't really understand how LSI cards work as there are some with 4 and 8 ports. In the future I will need one myself so I would like to understand before I buy the wrong one.

https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9260-8i

Quote:


> Connect up to 128 SATA or SAS drives with eight internal 6Gb/s ports


I only see two ports, I am so confused.


----------



## Master__Shake

SAS expanders.

Card goes from it to the Intel 24-port expander, through an 8088 cable, then to another case with an HP 36-port SAS expander.

The Intel controls up to 20 drives and the HP can control up to 28, and then you can expand off them even further.

There's a pic on here of my setup.


----------



## Liranan

Quote:


> Originally Posted by *Master__Shake*
> 
> SAS expanders.
> 
> Card goes from it to the Intel 24-port expander, through an 8088 cable, then to another case with an HP 36-port SAS expander.
> 
> The Intel controls up to 20 drives and the HP can control up to 28, and then you can expand off them even further.
> 
> There's a pic on here of my setup.


Does it matter which RAID card I get if I don't intend to use the RAID capability and just want to use it for extra SATA ports?


----------



## tiro_uspsss

Quote:


> Originally Posted by *Liranan*
> 
> Does it matter which RAID card I get if I don't intend to use the RAID capability and just want to use it for extra SATA ports?


If you aren't going to run any RAID, just buy an IBM M1015 or similar..

there's also a Chenbro 36-port expander


----------



## Liranan

A search for the IBM M1015 brings up the LSI 9240, and a search for Chenbro brings up 4U chassis for 1000 USD. I have already found some 4U chassis for 200 USD, so I am not looking at those; I am just looking for a card that will pass the drives through to the OS without creating JBOD or any other form of RAID.

There are some cheap older LSI cards like the LSI MR 8708EM2. It seems to be a pretty weak card when it comes to RAID capability, but as I said I don't really care, I just need extra SATA ports.


----------



## PuffinMyLye

Quote:


> Originally Posted by *Liranan*
> 
> I do not intend to use RAID 5, once I have my 12 drives I will use three or even four parity drives. It's a little excessive but once I need more space I will get a 4u 24 drive chassis, which is my intention anyway. Then I will have 24 drives of which 5 or even 6 will be parity drives. That way rebuild times and the burden on the parity drives are reduced somewhat.
> 
> With 12 drives I will have 7 or 8 data drives and with 24 drives I will have 17 or 18 data drives, which is more than enough for a home server.


What software RAID solution do you plan on using to allow for those 4+ parity drives?

*EDIT*: Nvmd just saw you are using SnapRAID.


----------



## Master__Shake

Quote:


> Originally Posted by *Liranan*
> 
> Does it matter which RAID card I get if I don't intend to use the RAID capability and just want to use it for extra SATA ports?


yes it does matter what card you buy.

you need an HBA, something that just presents the drives to the OS.

all my RAID cards don't have that ability.

all LSI.


----------



## PuffinMyLye

Quote:


> Originally Posted by *Master__Shake*
> 
> yes it does matter what card you buy.
> 
> you need an HBA, something that just presents the drives to the OS.
> 
> all my RAID cards don't have that ability.
> 
> all LSI.


Not sure if you were saying all LSI cards can or can't, but all of mine can, so just clarifying that many LSI cards can do this. You just need to ensure they are flashed to IT mode.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Master__Shake*
> 
> yes it does matter what card you buy.
> 
> you need an HBA, something that just presents the drives to the OS.
> 
> all my RAID cards don't have that ability.
> 
> all LSI.


Quote:


> Originally Posted by *PuffinMyLye*
> 
> Not sure if you were saying all LSI cards can or can't but all my LSI cards can so just clarifying that many LSI cards can do this. You just need to ensure they are flashed in IT mode.


Yeah, an LSI RAID card with the IT mode flash will usually be far more stable than most HBA cards out there. And if you go used, you can usually get a really good deal.


----------



## Master__Shake

I should have phrased that better.

My LSI RAID cards (3x 9260-4i, a 9260-8i, 2x 8888ELP) cannot do JBOD.

Sorry.

I do however have an H310 that can be flashed to IT mode.

Great card too, and cheap.


----------



## PuffinMyLye

I have two IBM ServeRAID M1015 (Flashed to LSI 9211-8i IT mode) that I may be selling shortly. Will let you guys know if and when I do.


----------



## tiro_uspsss

Quote:


> Originally Posted by *Liranan*
> 
> A search for the IBM M1015 brings up the LSI 9240 and the Chenbro brings up 4U chassis for 1000 USD.


No offence, but your google-fu sucks if that's what you found from searching. If I type "chenbro sas expander" into Google (the phrase 'sas expander' has been mentioned a few times now already), both the first and second results are what you are looking for. The first is a list of expanders from Chenbro & the second is the particular model I referred to.


----------



## Liranan

Quote:


> Originally Posted by *tiro_uspsss*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> A search for the IBM M1015 brings up the LSI 9240 and the Chenbro brings up 4U chassis for 1000 USD.
> 
> 
> 
> no offence - your google-fu sucks if that's what you found from searching. If I type into google "chenbro sas expander" (the phrase 'sas expander has been mentioned a few times now already), both the first and second option are what you are looking for. The first is a list of expanders from chenbro & the second is the particular model I referred to.
Click to expand...

As I live in China I have to make do with what I can get here. I can order from abroad, but as I only want a home server I will not do that. Also, I have found where I was going wrong, so I now just need to find the right card.


----------



## jieddo

I just completed a few upgrades on the Aquitaine. My Netgate appliance decided to give up the ghost and took my entire network down with it. These things are expensive and I didn't want to shell out that much money again for something that only does one thing, so MikroTik RouterBOARD it is.
Picked up an RB3011UiAS-RM cheap on Amazon. I also couldn't resist those cheap Facebook servers flooding eBay, so I picked up two E5-2670 Xeons, an SM X9DR3-FO and 64GB of ECC RAM, and replaced my PoE injector with a PoE switch.
Also threw in a 1U 110 block to clean up the wiring a bit. I used the leftover parts from the original server to put together an ESXi virtualization box in an iStarUSA 2U rack chassis. I have a FreeNAS VM installed on it but I don't have any drives for it yet; gonna get two 14TB SMR drives and use those as redundancy/backup for my Plex library.


----------



## xxpenguinxx

ConnectX-2 10Gb NICs came in. I get about 940Mbps with jperf after tweaking the buffer sizes. Now I'm at the limits of the onboard SATA II; I get ~250MB/s transfer rate from my SSDs.
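The buffer tweaking matters because a TCP stream can't move more than one window per round trip; a rough sketch of the ceiling (window size and RTT below are assumed values, not measurements from this setup):

```shell
# TCP throughput is capped at roughly window / RTT.
window_bytes=524288   # 512 KiB socket buffer (assumed)
rtt_s=0.0004          # 0.4 ms round trip on a direct-attach link (assumed)
awk -v w="$window_bytes" -v r="$rtt_s" \
    'BEGIN { printf "max ~ %.1f Gbit/s\n", w * 8 / r / 1e9 }'
```

With sub-millisecond RTTs on a direct-attach link, even a modest window is enough to saturate 10GbE, which is why the disks become the bottleneck first.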


----------



## burksdb

Quote:


> Originally Posted by *xxpenguinxx*
> 
> ConnectX-2 10gb NICs came in. I get about 940mbps with jperf after tweaking the buffer sizes. Now I'm at the limits of the onboard Sata II. I get ~250MB/s transfer rate from my SSDs.


im pulling 9.90 GB/s without changing anything on the intel x520qda1 connected via copper.


----------



## tiro_uspsss

Quote:


> Originally Posted by *burksdb*
> 
> im pulling 9.90 G*b*/s without changing anything on the intel x520qda1 connected via copper.


corrected


----------



## TheOx

*Description / Usage:* Network Storage, WebDav, Backup, POSTBOOKS (xTuple) server.

*OS:* Ubuntu 16.04 Server
*Case:* Fractal Design Node 304
*CPU:* AMD A4-5000
*Motherboard:* ASROCK QC5000-ITX/PH
*Memory:* Kingston 8GB 1600MHz
*PSU:* Silverstone ST50F-PD 500W
*OS HDD*: OCZ Agility 3 120GB (My Intel 540 took a dive when setting up this server; this was lying around while waiting for the RMA)
*Storage HDD(s):* 2x Seagate SV25 1TB (RAID 1, important data), 2x WD7500BPKZ 750GB (RAID 1, POSTBOOKS), both backed up to external media weekly. 1x WD Green 1TB (random data)
*RAID Controller:* SUN X4150 w/ Battery
*UPS:* APC UPS BX1400U (apcupsd software)
*MISC:* My ghetto 3D printed cpu and raid fan brackets.
*Server Manufacturer:* Me, resides on my desk until I move in 2 weeks time.


----------



## mbmumford

Quote:


> Originally Posted by *jieddo*
> 
> I just completed a few upgrades on the Aquitaine. My netgate appliance decided to give up the ghost and took my entire network down with it, these things are expensive and I didnt want to shell out that much money again for something that only does one thing so Mikrotik Routerboared it is.
> Picked up an RB3011 UiAS-RM cheap on Amazon. I also could't resist those cheap Facebook servers flooding Ebay so I picked up two E5-2670 Xeons and a SM X9DR3-FO, 64GB of EEC RAM and replaced my POE injector with a POE switch.
> Also threw in a 1U 110 block to clean up the wiring a bit. I used the leftover parts from the original server to put together an ESXi virtualization box in a iStarUSA 2U rack server, I have a FreeNAS VM installed on it but I dont have any drives for it yet, gonna get two 14TB SMR drives and use those as a redundancy/backup for my plex library.


14TB drives?!?! I was not aware HDDs were available above 10TB yet.


----------



## Bitemarks and bloodstains

15TB SSDs are out https://www.scan.co.uk/products/15tb-samsung-pm1633a-enterprise-class-sas-30-12gb-s-ssd-25-3d-v-nand-mlc-145mm-195k-iops


----------



## Rbby258

Quote:


> Originally Posted by *Bitemarks and bloodstains*
> 
> 15TB SSDs are out https://www.scan.co.uk/products/15tb-samsung-pm1633a-enterprise-class-sas-30-12gb-s-ssd-25-3d-v-nand-mlc-145mm-195k-iops


404'd


----------



## Bitemarks and bloodstains

Ah, they have taken it down on the store and FB page for some reason, but it was £12K.


----------



## frostbite

For backups and media

OS: Advanced Server 2000
Case: Silverstone PS11B quiet
CPU: QX6800
Motherboard: P5K
Memory: 8GB Nanya DDR2
PSU: Cougar 650W
OS HDD (If you have one): 250GB 2.5" Hitachi
Storage HDD(s): 4x 1TB Seagate Barracuda
Server Manufacturer (Ex: Dell, HP, You?): me


----------



## budgetgamer120

Quote:


> Originally Posted by *TheOx*
> 
> *Description / Usage:* Network Storage, WebDav, Backup, POSTBOOKS (xTuple) server.
> 
> *OS:* Ubuntu 16.04 Server
> *Case:* Fractal Design Node 304
> *CPU:* AMD A4-5000
> *Motherboard:* ASROCK QC5000-ITX/PH
> *Memory:* Kingston 8GB 1600MHz
> *PSU:* Silverstone ST50F-PD 500W
> *OS HDD*: OCZ Agility 3 120GB (My intel 540 took a dive when setting up this server, this was laying around while waiting for RMA)
> *Storage HDD(s):* 2x Segate SV25 1TB (Raid 1,Important data), 2x WD7500BPKZ 750GB (Raid 1, POSTBOOKS), both backup to external media weekly. 1x WD Green 1TB (Random data)
> *RAID Controller:* SUN X4150 w/ Battery
> *UPS:* APC UPS BX1400U (apcupsd software)
> *MISC:* My ghetto 3D printed cpu and raid fan brackets.
> *Server Manufacturer:* Me, resides on my desk until I move in 2 weeks time.


Nice, I like this


----------



## burksdb

finally got a rack!



*Patch panel and PDU on order. Will be replacing the switch with something that uses less power and picking up a UPS soon

Top

Quanta Lb4m switch

Zues - 1u quanta server - dual e5-2670's, 64gb ram - Runs Server 2012R2 Setup with Hyper-V and 3 Vm's
Hermes - 4u Norco 4220 - dual L5640's, 24gb ram, 10tb running Unraid
Atlas - 4u Norco 422*4* - dual e5-2670's, 64gb ram, 35tb running Unraid


----------



## Dalchi Frusche

Quote:


> Originally Posted by *burksdb*
> 
> finally got a rack!
> 
> 
> 
> *Patch panel and pdu on order. Will be replacing the switch with something that uses less power and picking up a ups soon


Very nice on the rack, especially love your storage units. Can't wait to see the new pieces installed.


----------



## burksdb

Quote:


> Originally Posted by *Dalchi Frusche*
> 
> Very nice on the rack, especially love your storage units. Can't wait to see the new pieces installed.


Thanks, I'm pretty excited. I work for a public school's IT department and we have to auction off surplus equipment. I lucked out and won this one for $12.60.

It came with a UPS but I'm pretty sure it's toast; no response from it at all. Not sure if it's worth trying to replace the batteries or not.


----------



## Dalchi Frusche

Quote:


> Originally Posted by *burksdb*
> 
> Thanks im pretty excited. I work for a public schools IT department and we have to auction off surplus equipment. I lucked out and won this one for $12.60.
> 
> It came with a ups but im pretty sure its toast no response on it at all - not sure if its worth trying to replace the batteries on or not.


Good freakin steal man! That's cheaper than the cost of my DIY rack.


----------



## mrsmoke

Quote:


> Originally Posted by *burksdb*
> 
> Thanks im pretty excited. I work for a public schools IT department and we have to auction off surplus equipment. I lucked out and won this one for $12.60.
> 
> It came with a ups but im pretty sure its toast no response on it at all - not sure if its worth trying to replace the batteries on or not.


If the UPS doesn't turn on at all, even with a shot battery, the entire unit is dead.


----------



## frostbite

I won a Lian Li PC70 on eBay for approx $70.
It even comes with the original box.

I will be picking it up in a couple of days.
Not bad for a $250 case.


----------



## burksdb

Quote:


> Originally Posted by *mrsmoke*
> 
> If the UPS doesn't not turn on at all even with a shot battery, the entire unit is dead.


Yea, I figured that would be the case. Not too worried about it right now.


----------



## mbmumford

Quote:


> Originally Posted by *burksdb*
> 
> finally got a rack!
> 
> 
> 
> *Patch panel and pdu on order. Will be replacing the switch with something that uses less power and picking up a ups soon
> 
> Top
> 
> Quanta Lb4m switch
> 
> Zues - 1u quanta server - dual e5-2670's, 64gb ram - Runs Server 2012R2 Setup with Hyper-V and 3 Vm's
> Hermes - 4u Norco 4220 - dual L5640's, 24gb ram, 10tb running Unraid
> Atlas - 4u Norco 4220 - dual e5-2670's, 64gb ram, 45tb running Unraid


Correct me if I am wrong, but isn't "Atlas" in a 4U Norco 4224?


----------



## burksdb

Quote:


> Originally Posted by *mbmumford*
> 
> Correct me if I am wrong, but isn't "Atlas" in a 4U Norco 4224?


you are correct small typo on my part


----------



## ChRoNo16

How much electricity is that?


----------



## burksdb

Quote:


> Originally Posted by *ChRoNo16*
> 
> How much electricity is that?


With that switch it is around 480-500 watts.

I decided to pull that switch and swap in a temporary one for now, which has me sitting around 410-420 watts at idle. I'm planning on testing max load today.


----------



## burksdb

Idle the rack pulls 415 watts
max the rack pulls 900 watts

I can live with that.


----------



## budgetgamer120

Quote:


> Originally Posted by *burksdb*
> 
> Idle the rack pulls 415 watts
> max the rack pulls 900 watts
> 
> I can live with that.


....Wut 0_o


----------



## silvrr

Slowly virtualizing some of my needs.

Wife picked up a Dell Optiplex 9010 free from work. Added a 4 port Gb NIC and a spare SSD.

Dell Optiplex 9010
i5 3570
128 GB SSD
Intel PRO/1000 Quad Port Gigabit Ethernet Adapter LP - 45W1959

Running ESXi 6.5.0a

VM1 - pfSense
VM2 - Linux Mint -> Home Assistant ( https://home-assistant.io/ )

VM3(future) - NAS


----------



## frostbite

Finally, a 32-bit OS with 8 gigs of RAM.


----------



## budgetgamer120

Quote:


> Originally Posted by *silvrr*
> 
> Slowly virtualizing some of my needs.
> 
> Wife picked up a Dell Optiplex 9010 free from work. Added a 4 port Gb NIC and a spare SSD.
> 
> Dell Optiplex 9010
> i5 3570
> 128 GB SSD
> Intel PRO/1000 Quad Port Gigabit Ethernet Adapter LP - 45W1959
> 
> Running ESXi 6.5.0a
> 
> VM1 - pfSense
> VM2 - Linux Mint -> Home Assistant ( https://home-assistant.io/ )
> 
> VM3(future) - NAS


Man, how did I not know about this Home Assistant thing. Can it do everything the expensive stuff from ADT does, like opening doors, thermostat, etc?

How many cores and resources did you assign to the Home Assistant VM?

How many cores did you assign to pfSense?


----------



## silvrr

Quote:


> Originally Posted by *budgetgamer120*
> 
> Man, how did I not know about this Home Assistant thing. Can it do everything the expensive stuff from ADT does, like opening doors, thermostat, etc?
> 
> How many cores and resources did you assign to the Home Assistant VM?
> 
> How many cores did you assign to pfSense?


Take a look at the components page for everything you can link in. Thermostats and sensing doors opening and closing are easily doable; remote locks are possible too, but they are not cheap. It's more of a roll-your-own or homebrew option than the all-in-one packages that companies provide. However, it has no monthly service fees and you can keep everything in-house if you want: no dependencies on the "cloud", and you can mix and match a lot of technologies (WiFi, Z-Wave, etc.) rather than being stuck with what a single hub supports.

Home assistant has 2 cores and 2GB RAM, its on 8GB of disk. It will run happily on a Raspberry Pi so it doesn't need much power.

pfSense has 2 cores and 4GB of RAM.

I am going to let it run as is for a bit and see how things go. I know the pfSense instance can be trimmed down to 2GB of RAM or even lower. The HA instance could be lighter if I move to Ubuntu Server; I ran into a hiccup getting that running so I defaulted to Mint, which I know works.

I am hoping to free up a core and some RAM so I have hardware to experiment on and I want to squeeze a file server in somewhere. Just a single disk (backup is manual) so I may just create a file share on the HA instance. Was hoping to have them separate though.
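If the share does end up on the HA VM, it's only a few lines of Samba config anyway. A minimal sketch (share name, path and user below are placeholders, and in practice the stanza goes in /etc/samba/smb.conf followed by an smbd restart):

```shell
# Minimal Samba share stanza, written to a scratch file for
# illustration rather than the live /etc/samba/smb.conf.
cat > /tmp/smb-share.conf <<'EOF'
[files]
   path = /srv/files
   read only = no
   valid users = silvrr
EOF
grep -q '^\[files\]' /tmp/smb-share.conf && echo "share stanza written"
```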


----------



## budgetgamer120

Quote:


> Originally Posted by *silvrr*
> 
> Take a look at the components page for everything you can link in. Thermostat and sensing doors opening and closing is easily doable, remote locks are possible to but they are not cheap. Its more of a roll your own or homebrew option than some of the all-in-one packages that companies provide. However, it has no monthly service fees and you can keep everything in house if you want. No depenencies on the "cloud" and you can mix and match a lot of technologies (wifi, zwave, etc.) and aren't stuck with what a single hub can support.
> 
> Home assistant has 2 cores and 2GB RAM, its on 8GB of disk. It will run happily on a Raspberry Pi so it doesn't need much power.
> 
> pfSense has 2 cores and 4GB of RAM.
> 
> I am going to let it run as is for a bit and see how things go. I know the pfsense instance can be trimmed down to 2GB of RAM or even lower. The HA instance could be lighter if I can move to ubuntu server, I ran into a hiccup getting that running so I defaulted to Mint which I know works.
> 
> I am hoping to free up a core and some RAM so I have hardware to experiment on and I want to squeeze a file server in somewhere. Just a single disk (backup is manual) so I may just create a file share on the HA instance. Was hoping to have them separate though.


Yeah, a file share on the Linux Mint VM sounds better than a single-core NAS.


----------



## silvrr

Quote:


> Originally Posted by *budgetgamer120*
> 
> yeah a file share on the linux mint sounds better than single core NAS.


If I am reading the ESXi docs correctly, you can over-provision your CPU. ESXi does not give a VM direct access to specific cores as I thought; it instead receives the request and schedules that work onto whatever cores are available.

Given that both pfSense and HA barely touch a 3570, I think it's safe to over-provision. Now my only problem is RAM.


----------



## KyadCK

Quote:


> Originally Posted by *silvrr*
> 
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> yeah a file share on the linux mint sounds better than single core NAS.
> 
> 
> 
> If I am reading the ESXi docs correctly you can over provision your CPU. ESXi does not give direct access to a certain core(s) as I thought. It instead receives the request and sends that work to the available core(s) of the CPU.
> 
> Given that both PFsense and HA barely touch a 3570 I think its safe to over provision. Now my only problem is RAM.
Click to expand...

You tell it how many "cores" (threads) it is allowed to have. Beyond that you can limit its MHz usage from the total pool, so 4x 3GHz = a 12,000MHz pool. Giving it two "cores" and a 4,000MHz limit will still cap the VM at a max of 1/3rd of your compute even though you gave it "half" your cores.

That can be detrimental in latency-sensitive situations, but things like a NAS can take a back seat in priority to other, more important VMs.

Over-provisioning is fine provided you keep an eye on it and don't max yourself out. It's also much better form to give any particular VM at least 2 cores and limit the MHz than to actually give it just one core.
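The pool arithmetic above, spelled out (numbers taken straight from the example):

```shell
# Worked example of the MHz-pool math: a 4-core 3GHz host gives a
# 12,000 MHz pool; a 4,000 MHz per-VM limit caps it at a third.
cores=4
mhz_per_core=3000
limit_mhz=4000
pool=$((cores * mhz_per_core))
awk -v l="$limit_mhz" -v p="$pool" \
    'BEGIN { printf "VM cap = %.0f%% of host compute\n", 100 * l / p }'
```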


----------



## budgetgamer120

Quote:


> Originally Posted by *silvrr*
> 
> Take a look at the components page for everything you can link in. Thermostat and sensing doors opening and closing is easily doable, remote locks are possible to but they are not cheap. Its more of a roll your own or homebrew option than some of the all-in-one packages that companies provide. However, it has no monthly service fees and you can keep everything in house if you want. No depenencies on the "cloud" and you can mix and match a lot of technologies (wifi, zwave, etc.) and aren't stuck with what a single hub can support.
> 
> Home assistant has 2 cores and 2GB RAM, its on 8GB of disk. It will run happily on a Raspberry Pi so it doesn't need much power.
> 
> pfSense has 2 cores and 4GB of RAM.
> 
> I am going to let it run as is for a bit and see how things go. I know the pfSense instance can be trimmed down to 2GB of RAM or even lower. The HA instance could be lighter if I can move to Ubuntu Server; I ran into a hiccup getting that running, so I defaulted to Mint, which I know works.
> 
> I am hoping to free up a core and some RAM so I have hardware to experiment on and I want to squeeze a file server in somewhere. Just a single disk (backup is manual) so I may just create a file share on the HA instance. Was hoping to have them separate though.


I think I will use a Raspberry Pi for Home Assistant, so if I turn off the server for any reason Home Assistant will still work.


----------



## budgetgamer120

CPU-Z bench on my server









Pretty good for $40


----------



## zdude

I present the ugly duckling server....




8 3TB HDDs in a ZFS raidz2 array for 15TB of usable storage.
32GB RAM
Xeon E5-2697v2 ES CPU
256GB Enterprise Samsung SSD boot/VM drive
RX 480 originally intended for pass through, turns out my mobo won't do IOMMU groups








10Gb network going to the rest of the network.

The 140mm fan taped to the bottom of the H100 is there to keep the VRMs under 90C when encoding video. The H100 has push/pull fans to keep the CPU under 65C when encoding two videos at the same time. Cable management sucks, I know.

Software
2 always up VMs, one for streaming, one for vpn.
Plex
TS3 server
Mumble server
Ark server
Minecraft server (x4)
Host OS - Ubuntu 16.04


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> I present the ugly duckling server....
> 
> 
> 
> 
> 8 3TB HDDs in a ZFS raidz2 array for 15TB of usable storage.
> 32GB RAM
> Xeon E5-2697v2 ES CPU
> 256GB Enterprise Samsung SSD boot/VM drive
> RX 480 originally intended for pass through, turns out my mobo won't do IOMMU groups
> 
> 
> 
> 
> 
> 
> 
> 
> 10Gb network going to the rest of the network.
> 
> The 140mm fan taped to the bottom of the H100 is there to keep the VRM's under 90C when encoding video. The H100 has push/pull fans to keep CPU under 65C when encoding 2 videos at the same time. Cable management sucks I know.
> 
> Software
> 2 always up VMs, one for streaming, one for vpn.
> Plex
> TS3 server
> Mumble server
> Ark server
> Minecraft server (x4)
> Host OS - ubuntu 16.04


Does your VM for VPN require access to the same drives as your Plex? If so, what is your setup for that?


----------



## zdude

Quote:


> Originally Posted by *mbmumford*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I present the ugly duckling server....
> 
> 
> 
> 
> 8 3TB HDDs in a ZFS raidz2 array for 15TB of usable storage.
> 32GB RAM
> Xeon E5-2697v2 ES CPU
> 256GB Enterprise Samsung SSD boot/VM drive
> RX 480 originally intended for pass through, turns out my mobo won't do IOMMU groups
> 
> 
> 
> 
> 
> 
> 
> 
> 10Gb network going to the rest of the network.
> 
> The 140mm fan taped to the bottom of the H100 is there to keep the VRM's under 90C when encoding video. The H100 has push/pull fans to keep CPU under 65C when encoding 2 videos at the same time. Cable management sucks I know.
> 
> Software
> 2 always up VMs, one for streaming, one for vpn.
> Plex
> TS3 server
> Mumble server
> Ark server
> Minecraft server (x4)
> Host OS - ubuntu 16.04
> 
> 
> 
> Does your VM for VPN require access to the same drives as your Plex? If so, what is your setup for that?
Click to expand...

Both VMs have access to the ZFS array, as do Plex and various machines on my network. The ZFS pool is exported via Samba (most of the clients on the network share are Windows). For protection I take daily snapshots of the ZFS pool and its volumes, and different user groups own different folders. Logging in as me gives read/write access to my own directories and read-only access to all media and everyone else's folders. Logging onto the Samba share as a normal user gives full RWX permissions on your own folder, read-only access to the media folders, and no access to other folders on the share. To set it up I had to give my own Samba account root-level share permissions to get universal read access. The VMs each have their own work directory with RW permissions and read-only access to the media.

I don't know how well that explains it though.
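A minimal sketch of that layout in smb.conf terms (share names, paths, and the admin account here are hypothetical stand-ins, not zdude's actual config):

```shell
# Write a fragment of a Samba config implementing the scheme above:
# read-only media for everyone, read/write only for an admin account,
# and per-user home shares.
cat > smb-shares.conf <<'EOF'
[media]
   path = /tank/media
   read only = yes           ; everyone gets read-only access to media
   write list = admin        ; except the admin account

[homes]
   read only = no            ; each user has rw on their own folder
   browseable = no
EOF

# Daily snapshots for rollback safety (cron this; dataset name hypothetical):
echo "zfs snapshot tank/media@daily-$(date +%F)"
```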


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> Both VMs have access to the ZFS array as does Plex and various machines on my network. I have ZFS pool exported via SAMBA (I am using mostly windows clients on the network share). For security on the array I have daily snapshots of the ZFS pool and various volumes as well as different user groups owning different folders. This means that if logging in as me you have read only access to all media, and rw to my directories and read only to everybody elses. Logging onto the SAMBA share as a normal user you have access to your folder with full RWX permissions, read only to the media folders and no access to other folders on the share. In order to set it up I had to make my user account for samba share permissions with root to get universal read access. The VMs each have their own work directory with RW permissions and read only on the media.
> 
> I don't know how well that explains it though.


Interesting... What VPN service are you using?

I ask because my host is Windows 10, which is where my Plex is installed; however, my VPN (PIA) blocks Plex unless I set up a port proxy every time the forwarded port changes, and even then quality is severely limited.

I tried running a Mint 18 VM for the VPN, but was unable to establish a connection from Mint to the RAID array on the host with PIA active in the VM.

I'm going to have to open it up again to see where I went wrong. My biggest problem is that I know next to nothing about Linux... man, do I wish I could remember my college classes.


----------



## zdude

Quote:


> Originally Posted by *mbmumford*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Both VMs have access to the ZFS array as does Plex and various machines on my network. I have ZFS pool exported via SAMBA (I am using mostly windows clients on the network share). For security on the array I have daily snapshots of the ZFS pool and various volumes as well as different user groups owning different folders. This means that if logging in as me you have read only access to all media, and rw to my directories and read only to everybody elses. Logging onto the SAMBA share as a normal user you have access to your folder with full RWX permissions, read only to the media folders and no access to other folders on the share. In order to set it up I had to make my user account for samba share permissions with root to get universal read access. The VMs each have their own work directory with RW permissions and read only on the media.
> 
> I don't know how well that explains it though.
> 
> 
> 
> Interesting... What VPN service are you using?
> 
> I ask because my host is Windows 10 which is where my PLEX is installed, however, my VPN (PIA) blocks PLEX unless I use a port proxy every time the forwarded port changes, and even then it is severely limited quality.
> 
> I tried running a Mint18 VM to run the VPN but was unable to establish a connection from Mint to my RAID array on the host with PIA active in the VM.
> 
> I'm going to have to open it again to see where I went wrong. My biggest problem is that I know next to nothing about linux... Man do I wish I could remember my college classes.
Click to expand...

Ah, I see what you mean. I have an isolated virtual network which VMs connect to in order to reach local storage even when a VPN is active. I am actually running my own custom OpenVPN server on the VPN VM. It could run on the host Ubuntu install; however, I messed up some network configs setting it up the first time, so I put it in a VM for snapshot goodness and so I don't need to dig through 500 config files to find the problem again (the host quit talking over the network completely after I messed it up).


----------



## Liranan

Quote:


> Originally Posted by *zdude*
> 
> Both VMs have access to the ZFS array as does Plex and various machines on my network. I have ZFS pool exported via SAMBA (I am using mostly windows clients on the network share). For security on the array I have daily snapshots of the ZFS pool and various volumes as well as different user groups owning different folders. This means that if logging in as me you have read only access to all media, and rw to my directories and read only to everybody elses. Logging onto the SAMBA share as a normal user you have access to your folder with full RWX permissions, read only to the media folders and no access to other folders on the share. In order to set it up I had to make my user account for samba share permissions with root to get universal read access. The VMs each have their own work directory with RW permissions and read only on the media.
> 
> I don't know how well that explains it though.


How did you virtualise the VPN? I would like to know as it's extremely handy.


----------



## zdude

Quote:


> Originally Posted by *Liranan*
> 
> How did you virtualise the VPN? I would like to know as it's extremely handy.


I have a VM with an OpenVPN server running on it, port forwarded on 443 to the outside world. When a client connects, the server expects a particular RSA key; if it isn't provided, the connection is dropped. The network on that VM is then set up to forward connections, allowing complete access to my local LAN after connecting. If you wanted it to go the other way, I would just set it up so that there is an active VPN connection and connect from the local network. It really doesn't work any differently than a dedicated VPN box.

I know that isn't very clear but I am not entirely sure what you want to accomplish.
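A sketch of what such a server config might look like based on that description: TCP on 443, a shared tls-auth key so unauthenticated probes are dropped, and a pushed route into the LAN. All paths, subnets, and filenames here are hypothetical:

```shell
# Write a minimal OpenVPN server config matching the setup described above.
cat > server.conf <<'EOF'
port 443
proto tcp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
tls-auth ta.key 0                        # drop connections lacking this key
server 10.8.0.0 255.255.255.0            # subnet handed out to VPN clients
push "route 192.168.1.0 255.255.255.0"   # give clients the local LAN
EOF
```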


----------



## Liranan

Quote:


> Originally Posted by *zdude*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> How did you virtualise the VPN? I would like to know as it's extremely handy.
> 
> 
> 
> I have a VM running an OpenVPN server running on it. I have this server port forwarded on 443 to the outside world. When connecting to the port it expects a particular RSA key, if it isn't provided the connection is dropped. The newtwork on that VM is then set up to forward connections allowing complete access to my local lan after connecting. If you wanted it to go the other way I would just set it up so that there is an active VPN connection and connect from the local network. It really doesn't work any different than a dedicated VPN box.
> 
> I know that isn't very clear but I am not entirely sure what you want to accomplish.
Click to expand...

This is basically what I tried to accomplish with pfSense but failed, so I am wondering how you set it up.


----------



## beers

Quote:


> Originally Posted by *Liranan*
> 
> This is basically what I tried to accomplish with pfSense but failed so I am wondering how you set it up.


I have a similar setup at home also with the ta.key.

On the WAN edge you essentially have to forward your UDP VPN port (such as 1194) to the OpenVPN server. You also need to set up a route statement for your VPN subnet that you assign to users from the router out to the OpenVPN server. After that it's all just configuration on the server itself.
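Those two router-side pieces, sketched as the commands you would run on a Linux-based router (every address and interface name here is a hypothetical placeholder, and the commands are echoed rather than executed since they need root on the actual router):

```shell
VPN_SUBNET="10.8.0.0/24"    # subnet assigned to VPN clients
OVPN_HOST="192.168.1.50"    # LAN address of the OpenVPN server
# 1) forward the VPN port from the WAN edge to the OpenVPN server:
echo "iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1194 -j DNAT --to ${OVPN_HOST}"
# 2) the route statement: send traffic for the VPN subnet to that server:
echo "ip route add ${VPN_SUBNET} via ${OVPN_HOST}"
```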


----------



## Master__Shake

Quote:


> Originally Posted by *Bitemarks and bloodstains*
> 
> 15TB SSDs are out https://www.scan.co.uk/products/15tb-samsung-pm1633a-enterprise-class-sas-30-12gb-s-ssd-25-3d-v-nand-mlc-145mm-195k-iops


FINALLY

11 g's, pffft.

That's walking-around money.


----------



## shodan

Finally, after some years with my humble i5-750 as my main server, I decided to upgrade to a Dell R710 and a couple of HP DL360 G6s.

Description / Usage: Hyper-V, DC, Linux web server, mail, pfSense, etc.

DELL
OS: Windows Server 2016
Case: Dell R710
CPU: 2 x E5645 (6 cores / 12 threads each)
Memory: 8 x 8GB = 64GB
OS HDD: Micron 120GB SSD
Storage HDD(s): 3 x Toshiba 2TB DT01ACA200 RAID-5 (VMs and torrents), 3 x Seagate 6TB Enterprise Capacity V5 RAID-5 (main storage)
RAID Controller: PERC H700
UPS: APC UPS 1000 (apcupsd software)

HP
OS: vSphere 6.5
Case: HP DL360 G6
CPU: 1 x L5540
Memory: 8 x 4GB = 32GB
OS HDD: OCZ 60GB SSD
Storage HDD(s): 3 x 600GB Seagate Savvio 10K
RAID Controller: HP P410 with 512MB cache and battery


----------



## zdude

Quote:


> Originally Posted by *shodan*
> 
> Finally after some years with my humble i5-750 as my main server I decided to upgrade to a Dell R710 and a couple of HP DL360 G6
> 
> Description / Usage: HyperV,DC,Linux Web Server,Mail,Pfsense etc.
> 
> DELL
> OS: Windows Server 2016
> Case: DELL R710
> CPU: 2 x E5645 (6 Cores / 12 threads)
> Memory: 8 x 8GB = 64 GB
> OS HDD: Micron 120GB SSD
> Storage HDD(s): 3 x Toshiba 2TB DT01ACA200 Raid-5 (VMs and Torrents), 3 x Seagate 6TB Enterprise Capacity V5 Raid-5 (Main storage)
> RAID Controller: Perc H700
> UPS: APC UPS 1000 (apcupsd software)
> 
> HP
> OS: Vsphere 6.5
> Case: HP DL360 G6
> CPU: 1 x L5540
> Memory: 8 x 4GB = 32 GB
> OS HDD: OCZ 60GB SSD
> Storage HDD(s): 3 x 600GB Seagate savio 10K
> RAID Controller: HP P410 with 512MB and battery


Nice setup.

Is there a way to get some basic GPU acceleration in vSphere without buying a GRID card? I saw some references to vSGA when researching it, but nothing that makes me willing to set up a whole second server to try it out...


----------



## kradkovich

Hey guys, I just ordered the Rosewill RSV-Z2600 2U server chassis and will slowly build from it. This will be my first server build, so I want good-quality products that will last a while and perform well. My main usage is storing data like movies, torrents, etc., transferred from my PC. I plan to install FreeNAS once it is complete. I'd like feedback on what components I should use, since I'm clearly not a server expert.

Chassis: Rosewill RSV-Z2600 2U
Motherboard: (preferably below $150)
CPU: (preferably below $150)
CPU Cooler: (keep them temps low!)
RAM: (preferably 16GB)
PSU: (preferably below $100)
SSD: (will be from my PC - Crucial MX100 256GB)
HDD: (heard the WD Reds are good for servers)


----------



## deafboy

Things have changed a bit but it's still more or less like this, lol. The old green 4u box has been swapped with the Google 4u Box...


----------



## Liranan

Quote:


> Originally Posted by *deafboy*
> 
> Things have changed a bit but it's still more or less like this, lol. The old green 4u box has been swapped with the Google 4u Box...


What is the total storage in this beast, and why is Google painted on them? Or are these second-hand Google 4U devices?


----------



## ozlay

Quote:


> Originally Posted by *Liranan*
> 
> What is the total data space in this beast and why have you painted Google on them or are these second hand Google 4U devices?


They are second-hand Google servers; you can get them on eBay. They are pretty decent servers as-is, but the cases can also be reused.

The top blue one, the yellow one, and the green one are all Google servers in his picture. I am not sure about the others.


----------



## deafboy

The top blue box is an old Google Search Appliance with some slightly updated hardware that I had lying around, turned into a pfSense box.

The yellow box is also an old Google Search Appliance (Dell R710 gen 2) that is my ESXi box (dual L5640s and 144GB of RAM).

The green box is a Google Radio Appliance; super old hardware, currently in the process of being updated to something else for a backup server.

The 24-bay server is a Norco 4224 that is my FreeNAS box. It started out with 6 4TB drives, but currently has 12 4TB drives in RAIDZ2 (expanding in sets of 6).

Then there's the APC UPS, battery pack, and PDU for the rack, plus networking (SG300-10MPP + Ubiquiti AC APs).
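Expanding in sets of 6 like that reflects a ZFS constraint worth noting: a raidz2 vdev generally can't be widened after creation, so growth means adding another whole raidz2 vdev to the pool. A sketch (pool and device names hypothetical; echoed rather than run, since it needs a real ZFS host):

```shell
NEW_DISKS="/dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr"
echo "zpool add tank raidz2 ${NEW_DISKS}"    # run as root on the ZFS host
```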


----------



## zdude

Has anybody here done a Ceph cluster for home use? I am starting to run out of space on my 15TB ZFS array and am thinking about going to a ~60TB Ceph cluster that would be easier to expand...


----------



## shodan

Quote:


> Originally Posted by *shodan*
> 
> Finally after some years with my humble i5-750 as my main server I decided to upgrade to a Dell R710 and a couple of HP DL360 G6
> 
> Description / Usage: HyperV,DC,Linux Web Server,Mail,Pfsense etc.
> 
> DELL
> OS: Windows Server 2016
> Case: DELL R710
> CPU: 2 x E5645 (6 Cores / 12 threads)
> Memory: 8 x 8GB = 64 GB
> OS HDD: Micron 120GB SSD
> Storage HDD(s): 3 x Toshiba 2TB DT01ACA200 Raid-5 (VMs and Torrents), 3 x Seagate 6TB Enterprise Capacity V5 Raid-5 (Main storage)
> RAID Controller: Perc H700
> UPS: APC UPS 1000 (apcupsd software)
> 
> HP
> OS: Vsphere 6.5
> Case: HP DL360 G6
> CPU: 1 x L5540
> Memory: 8 x 4GB = 32 GB
> OS HDD: OCZ 60GB SSD
> Storage HDD(s): 3 x 600GB Seagate savio 10K
> RAID Controller: HP P410 with 512MB and battery


Added a Dell LTO4 drive for backup, with Veritas Backup Exec 15 as the backup software.


----------



## Prophet4NO1

For the pfSense users out there: 2.5 will require CPUs with AES-NI support.

http://www.overclock.net/t/1629516/netgate-pfsense-2-5-will-require-cpus-with-aes-ni
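A quick way to check an existing box before upgrading (on Linux; on pfSense/FreeBSD itself the flag shows up in dmesg instead):

```shell
# Look for the aes CPU flag that indicates AES-NI support:
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI: supported"
else
    echo "AES-NI: not detected"
fi
```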


----------



## budgetgamer120

Quote:


> Originally Posted by *Prophet4NO1*
> 
> For the pfSense users out there, 2.5 will require AES supporting CPU's.
> 
> http://www.overclock.net/t/1629516/netgate-pfsense-2-5-will-require-cpus-with-aes-ni


Well, there goes my plan to run pfSense. I planned on using a regular desktop CPU.


----------



## Prophet4NO1

Quote:


> Originally Posted by *budgetgamer120*
> 
> Well there goes my plan to pfSense. I planned on using a regular desktop cpu.


You still can. Here's the Intel ARK list filtered by AES-NI support:

http://ark.intel.com/search/advanced/?AESTech=true


----------



## deafboy

Quote:


> Originally Posted by *Prophet4NO1*
> 
> For the pfSense users out there, 2.5 will require AES supporting CPU's.
> 
> http://www.overclock.net/t/1629516/netgate-pfsense-2-5-will-require-cpus-with-aes-ni


Welp, time to look into upgrading the pfSense box, lol. Maybe snag an R210 II and call it a day.


----------



## Charles1

Nice servers here. Here is my home server, roughly 40TB, though I am now looking into going rackmount, as I want a central hub for all my PCs.


----------



## TheBloodEagle

Hey Charles, that's a pretty nice case. Any interior shots?


----------



## Charles1

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Hey Charles, that's a pretty nice case. Any interior shots?


Sure, I just haven't had time to do some cable management, and longer black SATA cables are on their way.


----------



## budgetgamer120

Quote:


> Originally Posted by *Charles1*
> 
> Sure just have not had time to do some cable management and longer black sata cables are on their way.


What case is that?


----------



## Charles1

Lian Li PC-343 modular aluminum cube. Note mine is the original version; I've seen another that has more holes for cable management and CPU backplate cutouts. I think that version is the 343B.


----------



## zdude

Quote:


> Originally Posted by *Charles1*
> 
> Lian-Li PC 343 Modular aluminum cube. Note mine is the original version. I am seeing another that has more holes for cable management and cpu backplates.
> 
> Think that vetsion is
> 
> 343B


HBAs?


----------



## Charles1

Quote:


> Originally Posted by *zdude*
> 
> HBAs?


I am using Vantec 4-channel, 6-port SATA cards.

I did not need hardware RAID; I am using FlexRAID (RAID-F) with no pooling. For record keeping, if a drive fails and for whatever reason I can't rebuild, I can refer to an Excel sheet for what was on that specific drive and then pull that data from my backup source.

Pooling the drives would not work for what I am using the server for.


----------



## akshep

Just picked up this 24U APC rack for 65 bucks on Craigslist. It came with a keyboard tray and one shelf.



Top to Bottom


Keyboard Tray
Cable Management Tray
Ubiquiti Edge Router POE
HP Managed Gigabit Switch
Custom ESXi Server (2 x L5640, 24GB RAM, Supermicro X8DTL, 480GB SSD)
Server 2012 R2 (Domain Controller, DHCP, DNS)
Server 2012 R2 (Plex Server/What ever game I feel like hosting)
Ubuntu Server 16.04LTS (Web/email Server)
Ubuntu Server 16.04LTS (PiHole)
FreePBX (Runs a couple of IP Phones I have)

Old Dell computer in an iStarUSA case (used with Remote Desktop to browse the internet at work)
The white box to the left is my unRAID server (1 x 3TB and 2 x 1TB... I need some drives, as this is full).
Not pictured are a couple of UAP-AC-Pros. This is nothing fancy, but I love it. It handles everything I need it to with no issues.


----------



## burksdb

Quote:


> Originally Posted by *akshep*
> 
> Just picked up this 24U APC rack for 65 bucks on craigslist.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Came with a keyboard tray and one shelf.
> 
> 
> 
> Top to Bottom
> 
> 
> Keyboard Tray
> Cable Management Tray
> Ubiquiti Edge Router POE
> HP Managed Gigabit Switch
> Custom ESXi Server (2 x L5640s 24gb ram Super Micro X8DTL, 480gb SSD)
> Server 2012 R2 (Domain Controller, DHCP, DNS)
> Server 2012 R2 (Plex Server/What ever game I feel like hosting)
> Ubuntu Server 16.04LTS (Web/email Server)
> Ubuntu Server 16.04LTS (PiHole)
> FreePBX (Runs a couple of IP Phones I have)
> 
> Old Dell Computer in an IStarUSA case (Used with Remote Desktop to Browse the interent at work)
> White Box to the left is my unraid server. (1 x 3TB and 2 x 1TB.... I need some drives as this is full)
> Not pictured are a couple of UAP-AC-Pros. This is nothing fancy, but I love it. It handles all it need it to with no issues.


Very Nice Find!


----------



## jestream

Hey Guys,

I wanted to share my own virtualized home server.
Hope you like it!

MOBO: Z10PE-d16
CPUs: dual 2623 v3
RAM: 32GB REG ECC Crucial
Monitoring VGA: Aspeed integrated VGA with 7" front display
VGAs: dual GTX Titan 6GB (Original Titan!)
NVME drive: Samsung 960 EVO 500GB
SSDs: dual Samsung 850 EVO 500GB
HDs: dual WD Velociraptor 600GB SATA3
Audio card: Creative Sound Blaster ZxR
Expansion card: EVGA HD02 (Teradici 1.1) streaming card
Watercooling: dual daisy chained EK Predator
Case: Corsair 780T with CoolZero and self-made modifications

OS: VMware ESXi 6, UnRAID, Windows 10 and others...


----------



## Charles1

Here is my new addition: a U-NAS server. For whatever reason my LSI 9211-8i card did not play nice with the ASRock A88X FM2+ board, so I decided to use what has worked for me: a 4-port SATA card plus 5 ports from the motherboard, and done deal. FYI, it's water cooled, lol.


----------



## deafboy

To replace my current pfSense box in prep for the pfSense AES-NI update... guess I'll convert the current pfSense box into a machine for cold backups.


----------



## budgetgamer120

Is it safe to use a USB HDD as backup for a home server?

What's preferred for backups?


----------



## twerk

Quote:


> Originally Posted by *budgetgamer120*
> 
> Is it safe to use usb hdd as backup for home server?
> 
> Whats preferred for backups?


It's fine as one layer of backup.

If your RAID controller or primary storage got corrupted, it would be a good backup for that. However, if it were plugged in at the time you picked up malware, you would potentially lose the backup too, so I would recommend unplugging it when not performing a backup.

You should still back up your important data off-site.


----------



## budgetgamer120

Quote:


> Originally Posted by *twerk*
> 
> It's fine as one layer of backup.
> 
> If your RAID controller or primary storage got corrupted, it would be a good backup for that. However if it was plugged in at the time of receiving malware, you would potentially lose the backup too. So I would recommend unplugging it when not performing a backup.
> 
> You should still backup your important data offsite.


Thanks. I had a bad disk, but a disk scan fixed it for now. I would have had to rebuild the VM if it hadn't.


----------



## shodan

Quote:


> Originally Posted by *shodan*
> 
> Added and a Dell LTO4 for backup with Veritas backup exec 15 as the software for backup




A new addition for backup: a tape library with one LTO4 fibre drive.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> Has anybody here done a ceph cluster for home use? I am starting to run out of space on my 15TB ZFS array, thinking about going to a ~60TB Ceph cluster that would be easier to expand....


Never heard of it.

Got my Roper Whitney punch in. Still putzing around with building my unRAID server. I also got an IronWolf Pro 10TB for the parity drive; I would like to get another for a second parity drive.


----------



## deafboy

Ditched the 2u coolers... I don't think these Noctuas could have been a more perfect fit!


----------



## TheBloodEagle

That looks awesome!


----------



## herkalurk

Quote:


> Originally Posted by *twerk*
> 
> You should still backup your important data offsite.


That's why I have CrashPlan. Data is in the cloud.


----------



## TheBloodEagle

I went with CrashPlan after trying out BackBlaze (disliked it that much). But anyway, I definitely agree that even if you have an extensive setup at home, a cloud backup is important for data you really can't lose.


----------



## shodan

Quote:


> Originally Posted by *TheBloodEagle*
> 
> I went Crashplan after trying out BackBlaze (disliked it so much). But anyway, I definitely agree that even if you have an extensive setup at home, a cloud backup is important if you really can't lose whatever data.


I disagree, as I believe in data privacy. The best way to back up is an LTO drive... and if you do not have the patience for changing tapes, buy a tape library; that is what I did.


----------



## TheBloodEagle

The best practice is 3-2-1: three copies of your data, two on different storage types, and one off-site. An LTO drive would count as a different storage type, but if it's on-site with the rest of your gear and you really cherish that data, a fire could wipe out all of it at once.

CrashPlan uses 256-bit AES encryption, and other services use similar. You can go further with your own keys, and further still by zipping up what you upload with an extra password on top. That is very difficult to ever break; if someone really wanted your data, it would be easier to track you down and threaten you until you shared your password, hah.

The key point is that it's off-site, and unless you trust setting up another box somewhere else, the cloud is fine. Not just any cloud service, of course, but CrashPlan and BackBlaze both care about security. In fact, when I closed my BackBlaze account it specifically told me it deletes EVERYTHING about you. That is hugely welcome, because they care about ownership and privacy, and so does CrashPlan.
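The extra-password-over-a-zip idea can be done with standard tools; a sketch using tar and OpenSSL (filenames and the passphrase are made up, and `-pbkdf2` assumes OpenSSL 1.1.1 or newer):

```shell
mkdir -p photos && echo "keepsake" > photos/note.txt
# encrypt the archive with AES-256 before handing it to any cloud service:
tar czf - photos | openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 -out photos.tar.gz.enc
# round-trip it to prove the backup actually restores:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in photos.tar.gz.enc | tar xzf - -C /tmp
cat /tmp/photos/note.txt
```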


----------



## cdoublejj

Trying to get a GOOD lab setup and the server overhauled yet again, along with stuff like battery backup, out-of-band management, 10Gbps networking, and unRAID.


----------



## iandroo888

What's a good, cheaper cooler for an LGA 2011 Xeon? I need two; they are partially offset, going into an HP Z820 case/system.

Think a Cooler Master Hyper T4 or Hyper 212 Evo is good enough? I'm getting mildly fed up with the bad 92mm fans and looking for a 120mm I can switch out easily, given the ample amount of 120s I have and my severe lack of 92s, lol.


----------



## bobfig

Quote:


> Originally Posted by *iandroo888*
> 
> whats a good, cheaper cooler for LGA 2011 xeon? need 2. they are partially offset. into a hp z820 case/system
> 
> think a Cooler Master Hyper T4 or Hyper 212 Evo is good enough? getting mildly fed up with the bad 92mm fans. lookin for a 120mm to use so i can switch it out easily with the ample amount of 120s i have and super lack of 92s lol


What is the max cooler height in that case? It looks like 120mm fan coolers are too tall for it.

Seems like this may be nice: http://www.cryorig.com/h7ql.php


----------



## iandroo888

Definitely don't need the LEDs, but the height is nice: 145mm mounted. Thanks for the rec!









The CM Hyper T4 is 152.3mm; the Hyper 212 Evo is 159mm, so that's definitely a no.

I tried sticking a ruler in and laying it on the socket bracket. I think I'm OK at under 150mm, but I'd need to check closely for the T4; it's cutting really close.


----------



## mcdoc77

One and a half years later, time for a recap.
Hardware-wise, everything worked out fine. I really like IPMI.
Software-wise, I updated to FreeNAS Corral, which was announced as discontinued three days later :-( It really annoyed me! Well, I backed up my data and changed to Fedora Server, which I like very much (Cockpit rules!). The WD drives run as intended. On my BTRFS RAID 10 I get 450 MiB/s on writes and 550 MiB/s on reads (with dd).
No SMART errors, no issues whatsoever.
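For anyone wanting to reproduce that dd test, it looks roughly like this (a crude smoke test, not a rigorous benchmark; the size is kept small here):

```shell
# write test: conv=fdatasync makes dd flush before reporting a speed
dd if=/dev/zero of=dd-test.bin bs=1M count=64 conv=fdatasync
stat -c '%s bytes written' dd-test.bin
# read test (beware: this can be served from the page cache and read high)
dd if=dd-test.bin of=/dev/null bs=1M
rm -f dd-test.bin
```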

Quote:


> Originally Posted by *mcdoc77*
> 
> Name: Yggdrasil....because it is basically the root of my network
> OS: *FreeNas*
> Case:*Fractal Define R5*
> CPU:*Intel Core i3 6100*
> Motherboard:*Supermicro X11SSM-F*
> Memory: *2x 16GB (16384MB) Crucial CT16G4WFD8213 DDR4-2133 ECC DIMM CL15 Single*
> PSU: *Corsair CX600* plus UPS *Cyberpower Value Serie 800 VA / 480 Watt Tower*
> OS HDD (If you have one): Old 500GB 2,5" I-don't_know_and_I_don't_care
> Storage HDD(s): *8x 3000GB WD Purple WD30PURX 64MB @RAIDZ2*
> Server Manufacturer: *me*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pre-build and build
> 
> 
> 
> Ok, the second drive cage isn't standard. I just had a spare one, since my 9yr old daughter would not use more than 3 HDs in her Corsair C70
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I never attached a Keyboard, mouse, Monitor or optical drive to this PC. IPMI is awesome.
> Some Pics accessing Bios (Yes, it is an UEFI, but it looks like the good old BIOS. Hey man, this is server grade stuff!) and doing Memtest via *REMOTE* . The Optical Drive can be simulated via IPMI/Web-Interface. Simply load the ISO and you are good to go.
> 
> 
> 
> 
> I am still testing, but till now I must admit: It works great! #nerdporn
> 
> WD Purple... well. First: they are rated for 24/7 and are very reliable hard disks.
> But I guess the real question is "Why not WD Red?"
> What is the distinction between Red and Purple? Mainly TLER support. What TLER does... well, it interrupts error recovery so a hard disk is not kicked out of an array. This only makes sense if you have a controller that can handle it (and would kick the HD out) and a need for really quick response. I do not have either.
> Ok, ZFS could handle this, yes, but I prefer an error recovery that does its job. Remember: this is a SOHO server. It is not designed as a SAN for ultra-high-availability databases or something like that.
> 
> Due to the fact that I have 9 HDs (8 storage + boot) I added a *Delock 89395 4 Port PCIe x4* controller and connected 2 HDs. Works fine.
> 
> I tested the UPS...45 Minutes on idle! I Never expected that!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Edit: USV/UPS False Friends]
> [Edit: Forgot about the drive cage]
> [Edit: added Answer about WD Purple-HDs]
> [Edit: Forgot to mention the HD Controller]
> [Edit: Minor corrections plus UPS time]
> [Edit: RAIDZ2]
> I'll keep adding some informations to this post, if the discussion leads to some interesting issues. Just to have all information in the main Post available.


----------



## bobfig

Quote:


> Originally Posted by *iandroo888*
> 
> def dont need the LEDs but height is nice. 145mm mounted height. THANKS FOR REC !
> 
> 
> 
> 
> 
> 
> 
> 
> 
> the CM Hyper T4 s at 152.3mm. Hyper 212 Evo is 159 so definitely no.
> 
> tried stickin a ruler in and laying it on the socket bracket. i think im ok at < 150. id need to check closely for the T4. cutting really close.


Yeah, I know LEDs are definitely not needed, but the fitment of the cooler seems good.


----------



## iandroo888

Quote:


> Originally Posted by *bobfig*
> 
> yah i know leds are def not needed but the fitment of the cooler seems to work good


yeah. definitely considering it. need to find some time to try to accurately measure the inside, then pull trigger(s), since i have to buy 2 T_T


----------



## bobfig

Quote:


> Originally Posted by *iandroo888*
> 
> yeah. definitely considering it. need to find some time to try to accurately measure the inside, then pull trigger(s), since i have to buy 2 T_T


You know, Cryorig has a printable ruler thing you can measure with: http://www.cryorig.com/depthchecker.php


----------



## deafboy

Quote:


> Originally Posted by *bobfig*
> 
> you know that cryorig has printout ruler stuff that you can measure with http://www.cryorig.com/depthchecker.php


Oooh, that's useful. Thanks! +rep


----------



## jieddo

Well, I am tired of running out of room on my NAS. I doubled the storage capacity, and in less than a year I am close to running out of room again, so I said no more. I am working on a ZFS storage server build using a Supermicro SC846TQ 24-bay chassis from eBay.



CPU: E5-2650v2 ES
Motherboard: Supermicro X9SRA
RAM: Crucial DDR3 1600 8x8GB ECC UDIMM
Storage: 2x SATADOM 16GB mirror (boot), Intel S3500 80GB (SLOG), generic NVMe 128GB drive (L2ARC)
HBA: LSI 9210-8i (flashed to IT mode), Intel RES2SV240 24 port SAS expander with 6 SFF-8087 breakout cables.
OS: FreeNAS 9.10.2

I have not purchased any drives for the pool, as that will be the most expensive part of this build. I plan to slowly purchase a few drives here and there each month. Using 4TB disks I will have 96TB of storage. I will most likely not use the SLOG and L2ARC drives, as they would probably hinder performance for this particular build.


----------



## zdude

Quote:


> Originally Posted by *jieddo*
> 
> Well I am tired of running out of room on my NAS, I have doubled the storage capacity and in less than a year I am close to running out of room again so I said no more. I am working on a ZFS storage server build using a Supermicro SC846TQ 24 bay chassis from ebay.
> 
> CPU: E5-2650v2 ES
> Motherboard: Supermicro X9SRA
> RAM: Crucial DDR3 1600 8x8GB ECC UDIMM
> Storage: 2x SATADOM 16GB mirror (boot), Intel S3500 80GB (SLOG), generic NVMe 128GB drive (L2ARC)
> HBA: LSI 9210-8i (flashed to IT mode), Intel RES2SV240 24 port SAS expander with 6 SFF-8087 breakout cables.
> OS: FreeNAS 9.10.2
> 
> I have not purchased any drives for the pool as that will be the most expensive part of this build. I plan to slowly purchase a few drives here and there per month. Using 4TB disks I will have 96TB of storage. I will most likely not use the SLOG and L2ARC drives as they probably will hinder performance for this particular build.


Just so you know, if you are going to add to an existing ZFS array, you need to add complete vdevs. For instance, on my server I use 5-drive vdevs, 4 data and 1 parity. When adding a vdev, the new ones REALLY REALLY should be the same size, so in my case I am limited to adding 5 drives at a time. With a 24-bay server I would go with 4 5+1 vdevs so you can lose 4 drives and still have reasonable performance.

Side note: I would use the SLOG and L2ARC. The SLOG is only used for sync writes, which are VERY slow to HDDs, and the L2ARC on an NVMe drive will reduce the latency of any files on it. (The biggest downside to ZFS is that it is a fairly high-latency file system.)
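To put rough numbers on the layout advice above, here's a quick raw-capacity sketch. The 4TB drive size is just the figure from the build plan; ZFS metadata and slop-space overhead are ignored:

```python
# Rough usable-capacity comparison for a 24-bay ZFS box.
# Hypothetical 4TB drives; ignores ZFS metadata and slop-space overhead.
def usable_tb(vdevs, drives_per_vdev, parity_per_vdev, drive_tb=4):
    """Raw usable TB: data drives per vdev, times vdev count, times drive size."""
    return vdevs * (drives_per_vdev - parity_per_vdev) * drive_tb

print(usable_tb(4, 6, 1))   # four 6-drive (5+1) RAIDZ1 vdevs -> 80
print(usable_tb(2, 12, 2))  # two 12-drive RAIDZ2 vdevs       -> 80
print(usable_tb(12, 2, 1))  # twelve 2-way mirrors            -> 48
```

Same raw capacity for the first two layouts, but very different failure tolerance and expansion granularity, which is the point being made about adding whole vdevs at a time.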


----------



## jieddo

I need to do more research on the SLOG drive. I was under the impression that a SLOG is only useful if you have a lot of writes going to your pool that slow down overall IOPS; the SLOG then acts as a write cache and periodically dumps the write data to the pool all at once, improving performance. I have read that the L2ARC is basically a spillover cache for the ARC (RAM) when it is completely saturated, to prevent data from being pulled from the slow pool drives.


----------



## zdude

Quote:


> Originally Posted by *jieddo*
> 
> I need to do more research on the SLOG drive as I was under the impression that SLOG is only useful if you have a lot of writes going to your pool and it is slowing down the overall IOPS so the SLOG drive acts as a write cache and periodically dump the write data to the pool all at once improving performance. I have read the L2arc is basically a cache for spillover for the ARC (RAM) when it is completely saturated to prevent data from being pulled from the slow pool drives.


Unless you are running VMs or the like off of the pool, a SLOG won't help much. With ZFS the SLOG is only used for sync writes. Sync writes block whatever program called them until the write is complete to persistent storage. Because ZFS by default writes into a RAM cache, a sync write must wait for all the disks in the pool to commit the information. This causes HDDs to "thrash" as they spend more and more time seeking, and pool performance degrades significantly. With a SLOG, sync writes are written to the SLOG and then copied to the pool with any async writes at the next write-cache flush.

http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

may explain a little more clearly.
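The blocking behaviour is easy to feel even outside ZFS. A minimal sketch comparing buffered writes against fsync-per-write, as a rough analogue of async vs. sync write semantics (this is plain POSIX file I/O, not ZFS itself):

```python
# Minimal illustration of why sync writes are painful: each fsync() blocks
# until data reaches stable storage, while buffered writes return immediately.
import os
import tempfile
import time

def write_records(path, n, sync):
    """Write n 4KiB records; optionally fsync after every write."""
    with open(path, "wb", buffering=0) as f:
        start = time.perf_counter()
        for _ in range(n):
            f.write(b"x" * 4096)
            if sync:
                os.fsync(f.fileno())  # this is what a sync write waits on
        return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    buffered = write_records(os.path.join(d, "async.bin"), 100, sync=False)
    synced = write_records(os.path.join(d, "sync.bin"), 100, sync=True)
    print(f"buffered: {buffered:.4f}s  fsync-per-write: {synced:.4f}s")
```

On spinning disks the gap is dramatic, which is why a fast SLOG device absorbing those synchronous commits helps so much.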


----------



## andyroo89

OK guys, it seems a friend of mine received a Dell PowerEdge 2800 at his job; someone dropped it off for recycling, and he put it next to the dumpster for me. I might end up removing the guts and using the case with newer hardware, or does anyone have another idea?

Edit: I don't mess with server hardware too much, but is it possible server mobos extend off the ATX screw layout? That would be nice. Unless each server company has its own standard.


----------



## cdoublejj

Quote:


> Originally Posted by *andyroo89*
> 
> OK guys, it seems a friend of mine received a Dell PowerEdge 2800 at his job; someone dropped it off for recycling, and he put it next to the dumpster for me. I might end up removing the guts and using the case with newer hardware, or does anyone have another idea?
> 
> Edit: I don't mess with server hardware too much, but is it possible server mobos extend off the ATX screw layout? That would be nice. Unless each server company has its own standard.


Not even close. There may be a Supermicro or Dell or HP mobo you could get that you could make fit, or that might bolt in for all I know.


----------



## andyroo89

Quote:


> Originally Posted by *cdoublejj*
> 
> Not even close. there may be a supermicro or dell or hp mobo you could get that you could make fit or might bolt in for all i know.


Oh, regardless, I have a newer Dell PowerEdge 4315 I'd like to use. But I'm gonna scrap it and sell parts if I can, but that's about it.


----------



## ComradeCommie

Dell Precision T7400. Everything in this, including the server itself, was free. So far...
Case: Dell Precision T7400
CPU: 1x Xeon x5450
Motherboard: Dell Precision T7400
Memory: 14GB FB-DDR2
PSU: 1KW Power Supply
OS HDD : Random 250GB HDD
Storage HDD(s): 4x Random 250GB HDD's
Server Manufacturer: Dell, 5% me
I plan to add a 2nd Xeon, make an HDD rack, and add 5 more RANDOM 250GB and 160GB HDDs in JBOD using the RAID card and random SATA cables coming out the back; I will add pics when I make the HDD rack. Currently hosting a 75-slot MC server on it, until I can figure out how to host a TF2 or GMod server.


----------



## ChRoNo16

They are power hungry, but I just got rid of one of those: dual quads w/ HT, 32GB RAM.

Throw some nice drives in it; it's a good solid server for general use/gaming.


----------



## deafboy

Shared this a while ago elsewhere but forgot to update on here. Got the new pfsense box in along with the new switch, finally have 10Gb to my FreeNAS, ESXi, and my gaming rig...


----------



## pvt.joker

Quote:


> Originally Posted by *deafboy*
> 
> Shared this a while ago elsewhere but forgot to update on here. Got the new pfsense box in along with the new switch, finally have 10Gb to my FreeNAS, ESXi, and my gaming rig...


What 10gb switch are you running? I'd been thinking about upgrading for the same things you have connected..

Sent from my Nexus 6P using Tapatalk


----------



## deafboy

Quote:


> Originally Posted by *pvt.joker*
> 
> What 10gb switch are you running? I'd been thinking about upgrading for the same things you have connected..
> 
> Sent from my Nexus 6P using Tapatalk


Using SFP+ for my 10Gb connections.

Using the H3C S5800-32C

28Gb ports
4 SFP+ ports

Then the back has an expansion bay if you want to add more SFP+ ports.


----------



## cdoublejj

Quote:


> Originally Posted by *deafboy*
> 
> Using SFP+ for my 10Gb connections.
> 
> Using the H3C S5800-32C
> 
> 28Gb ports
> 4 SFP+ ports
> 
> Then on the back has an expansion bay if you wanted to add more SFP+ ports.


Is it loud? Or would that even matter with a Noctua fan swap? Wondering if this switch, used for $200 USD, makes my purchase of the D-Link DGS-1510-28X @ $400 USD new dumb and ill-advised?


----------



## deafboy

It's louder than I'd like it to be, but I'm also used to silent... I ordered Noctua fans for it though, so we'll see how that goes. Right now it's the loudest thing in my rack, but it's still quieter than my old Netgear GS748Tv3.

I don't know much about that d-link. For a homelab I'm sure that'd be just fine depending on what you want to do with it

I bought mine for $122/shipped. Not counting the noctua fans I suppose though.

Edit: Note though it doesn't exactly sip power, it's a pretty power hungry switch, so in that sense you may be better off with the dlink.


----------



## cdoublejj

That actually does make me feel better; another advantage is it won't suck the UPS dry as quickly. By chance, is yours PoE? I figured I'd get a rack-mount PoE injector if I ever needed PoE.


----------



## deafboy

The top cisco switch is POE, the main switch isn't POE...

And yeah, the noctua fans made it a lot quieter.


----------



## pvt.joker

Thanks for the info! Was trying to find something 10-gig without the extra copper ports, but I can't seem to find much that's under $1000.


----------



## exwar

Hi, I have a stupid question: I got a RES2SV240 SAS expander card. Do I need a RAID card, or will it work without one?


----------



## burksdb

Quote:


> Originally Posted by *exwar*
> 
> Hi i have a stupid question I got raid expander card res2sv240 do i need raid card or will it works without it?


The card you have is a SAS expander, so you will need either a RAID card or an HBA behind it to use it.


----------



## exwar

Quote:


> Originally Posted by *burksdb*
> 
> The card you have is a SAS Expander which you will need either a raid card or an hba behind to use.


----------



## exwar

Another question: will a SAS expander work with an LSI 9211-8i if the card is in IT mode?
Or are there any suggestions?

I will use this for my Norco RPC-4020; the OS will be unRAID.


----------



## burksdb

Quote:


> Originally Posted by *exwar*
> 
> Another question will sas expander work with LSI 9211-8i if raid card is on IT mode?
> Or are there any suggestions?
> 
> I will use this for my norco rpc-4020 os will be unraid


I used that exact combo (hba, expander, Os and case) without any issues.


----------



## KyadCK

Quote:


> Originally Posted by *pvt.joker*
> 
> Thanks for the info! Was trying to find something 10gig without the extra copper ports, but i can't seem to find much that's under $1000


Depends if you buy new or not. There is a _lot_ of old used enterprise equipment on places like eBay. My G8124, for example, was $400 if I remember correctly. I was able to get it, three dual-port 10G NICs, a G8000 (with dual SFP+ module), the DACs needed in the rack, four SFP+ modules, and a pair of 75ft LC/LC fiber cables for the price you're seeing for just the switch. That's enough to hook up my servers, my rig, and the G8000 to the G8124 at 20Gbps.

Slightly old picture; The Cisco is going away in favor of the G8000 soon for that backbone since the Cisco is only 1g even on the SFPs.


If you're willing to give up the warranty and have a way to deal with the noise there's tons of stuff like this available. How many ports do you think you'll need?

Also note I do not recommend the G8000 due to its... _interesting_ required cabling.


----------



## deafboy

Something to keep an eye on:

Mikrotik, 16 SFP+, passive cooling, 42w, $399

https://mikrotik.com/product/crs317_1g_16s_rm


----------



## xxpenguinxx

Quote:


> Originally Posted by *deafboy*
> 
> Something to keep an eye on:
> 
> Mikrotik, 16 SFP+, passive cooling, 42w, $399
> 
> https://mikrotik.com/product/crs317_1g_16s_rm


I don't need it, I don't need it, I definitely don't need it...


----------



## pvt.joker

delete post..


----------



## pvt.joker

Quote:


> Originally Posted by *KyadCK*
> 
> Depends if you buy new or not. There is a _lot_ of old used Enterprise equipment on places like ebay. My G8124 for example was $400 if I remember correctly. I was able to get it, three dual 10g nics, a G8000 (with dual SFP+ module), the DACs needed in the rack, four SFP+ modules and a pair of 75ft LC/LC fiber cables for the price you;re seeing just the switch for. Thats enough to hook by my servers, my rig, and the G8000 to the G8124 at 20gbps.
> 
> Slightly old picture; The Cisco is going away in favor of the G8000 soon for that backbone since the Cisco is only 1g even on the SFPs.
> 
> 
> If you're willing to give up the warranty and have a way to deal with the noise there's tons of stuff like this available. How many ports do you think you'll need?
> 
> Also note I do not recommend the G8000 due to it's... _interesting_ required cabling.


What kind of "interesting" cabling do you mean? Ideally I'd want at least 8 10Gb ports; 10-12 would be best for a little expansion down the road.

Quote:


> Originally Posted by *deafboy*
> 
> Something to keep an eye on:
> 
> Mikrotik, 16 SFP+, passive cooling, 42w, $399
> 
> https://mikrotik.com/product/crs317_1g_16s_rm


If it was just a 10Gb switch without the router software etc. already loaded, that would be a tempting price range... if I hadn't just spent all my play money for the year on the AC repairs for the house.


----------



## KyadCK

Quote:


> Originally Posted by *pvt.joker*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Depends if you buy new or not. There is a _lot_ of old used Enterprise equipment on places like ebay. My G8124 for example was $400 if I remember correctly. I was able to get it, three dual 10g nics, a G8000 (with dual SFP+ module), the DACs needed in the rack, four SFP+ modules and a pair of 75ft LC/LC fiber cables for the price you;re seeing just the switch for. Thats enough to hook by my servers, my rig, and the G8000 to the G8124 at 20gbps.
> 
> Slightly old picture; The Cisco is going away in favor of the G8000 soon for that backbone since the Cisco is only 1g even on the SFPs.
> 
> 
> If you're willing to give up the warranty and have a way to deal with the noise there's tons of stuff like this available. How many ports do you think you'll need?
> 
> Also note I do not recommend the G8000 due to it's... _interesting_ required cabling.
> 
> 
> 
> What kinda of "interesting" cabling you mean? Ideally I'd want at least 8 10gb ports, 10-12 would be best for a little expansion down the road..
> 
> Quote:
> 
> 
> 
> Originally Posted by *deafboy*
> 
> Something to keep an eye on:
> 
> Mikrotik, 16 SFP+, passive cooling, 42w, $399
> 
> https://mikrotik.com/product/crs317_1g_16s_rm
> 
> Click to expand...
> 
> If it was just a 10gb switch and not router software etc already loaded in that would be a tempting price range.. IF i hadn't just spent all my play money for the year on the AC repairs for the house..
Click to expand...

The G8000 has no hardware reset switch for factory reset, and the only "serial" port (look between the four lower SFP slots on the right side, middle switch) is a proprietary, hard-to-find USB Mini to CAT5 to serial cable. The G8124 has a reset switch and a management/console port like any sane human being would create, but I don't see any under $500ish now.

Also ouch, but at least you'll be cool.


----------



## twerk

Bought myself a little present for my server. Love how it shows as "Genuine" Intel 0000.

It's an E5-2620v4.


----------



## redhat_ownage

Dell PowerEdge R710 LFF, two Xeon X5670 2.9GHz 6-core CPUs with HT, 48GB ECC reg RAM, four 240GB Corsair Force GT SSDs, two 2TB Seagate Constellation ES SAS disks, eight gigabit Ethernet ports, VMware ESXi 6.0 hypervisor.
Also not pictured is a Netgear GS724T v2 24-port switch.


----------



## deafboy

Deal was too good to pass up...although I'm still unsure whether I'm going to keep them or return them, lol.

Drive inside is the WD 8TB Red with 256MB cache

http://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401

Pickup in store is possible.

Thailand units are the 256MB cache and China units are the 128MB cache; look on the bottom of the box.


----------



## DerComissar

Whoa!

So much capacity there, and they're the 256MB cache version.

Decisions, decisions, lol.


----------



## xxpenguinxx

Here's my current server. Not a real server, but it's used as such. I have TeamSpeak, Vent, and Minecraft servers running on it, and it's used to back up my desktop files. Specs in sig. Only running 8GB currently since the motherboard was having POST issues. Been afraid to turn it off.


----------



## SimonOcean

Ugh, sometimes I wish I lived in the States. My buddy is visiting me from California, arriving tomorrow. I can't exactly ask him to take time off from work to visit Best Buy three times for me to buy 6 of these units and pack them in his luggage so that I can fill up my new NAS.


----------



## deafboy

He shouldn't have to make multiple trips, I bought all 6 of mine at once... they don't really care.


----------



## tiro_uspsss

Quote:


> Originally Posted by *deafboy*
> 
> Something to keep an eye on:
> 
> Mikrotik, 16 SFP+, passive cooling, 42w, $399
> 
> https://mikrotik.com/product/crs317_1g_16s_rm


is..is that 16x 10GbE ports??


----------



## twerk

Quote:


> Originally Posted by *tiro_uspsss*
> 
> is..is that 16x 10GbE ports??


Yup! Passively cooled too.


----------



## tiro_uspsss

Quote:


> Originally Posted by *twerk*
> 
> Yup! Passively cooled too.


Specs state 'active case cooling' and the pics show two fans at the rear :/

Re: specs, it says 'licence level: 6' - what does that mean?

Is there a catch?? It seems really cheap (USD $400?) for 16x 10GbE.


----------



## twerk

Quote:


> Originally Posted by *tiro_uspsss*
> 
> specs state 'active case cooling' and pics show two fans at the rear :/
> 
> re: specs, says 'licence level: 6' - what does that mean?
> 
> is there a catch?? it seems really cheap (USD$400?) for 16x 10GbE


Ah, sorry. I'm thinking of something else that has passive cooled 10GbE.

There's no catch, Mikrotik stuff is normally very good value. I prefer Ubiquiti myself but they don't currently offer any 10GbE switches close to this value.

MikroTik unlocks features in RouterOS based on a level system. Level 6 is the highest, meaning all features; level 1 is the lowest:

https://wiki.mikrotik.com/wiki/Manual:License


----------



## tiro_uspsss

Quote:


> Originally Posted by *twerk*
> 
> Ah, sorry. I'm thinking of something else that has passive cooled 10GbE.
> 
> There's no catch, Mikrotik stuff is normally very good value. I prefer Ubiquiti myself but they don't currently offer any 10GbE switches close to this value.
> 
> Mikrotik unlock features in their RouterOS based on a level system. Level 6 is the highest, meaning all features, level 1 being lowest:
> https://wiki.mikrotik.com/wiki/Manual:License


ah I see, cool thanks

You were partly right after all, though... the description states that the switch is passively cooled... until it gets too hot, then the fans kick in.


----------



## deafboy

No, Twerk is correct... well, kind of.

It is passively cooled UNTIL it hits a thermal threshold, then the fans kick in. At that wattage rating I wouldn't expect the fans to be on terribly often in a consumer setting.

Derp, just saw the last post... haha, you got it all sorted out.


----------



## exwar

I bought this rack cabinet for 60 euros. Do I need to buy mounting accessories for the back?


----------



## beatfried

Unless you only want to mount some lightweight switches, you really need the back support.


----------



## cdoublejj

http://www.overclock.net/t/1635847/has-anyone-else-been-eyeing-epyc-fora-new-build

http://www.overclock.net/t/1635329/mini-redundant-psus-in-atx-form-factor-say-it-aint-so

didn't wanna muck up this thread, but this thread is where all the /server chat goes down?

EDIT: i've been looking at 40Gbps Infiniband too,

http://www.ebay.com/itm/Mellanox-IS5022-40Gbps-8-Port-Infiniband-QDR-Switch-851-0167-01-w-Rackmount-Kit/201996949731?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2055119.m1438.l2649

they can be had cheaper, but really only cost effective if you can get your hands on PCIe 40Gbps NICs for cheap or free. whether or not i/you have any VMs with enough RAM to bog down vMotion over 10Gbps fibre is also worth considering. Could also add 40Gbps NICs on top of your 10Gbps if you have enough PCIe lanes that are fast enough, set it up server to server, and leave 10Gbps to communicate with the rest of the network.
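For the vMotion question, a back-of-envelope transfer-time estimate helps. The 80% line-rate efficiency and 64 GiB VM size below are made-up illustrative numbers, and pre-copy rounds and page dirtying are ignored:

```python
# How long moving a VM's RAM image takes at various link speeds.
# Assumes a flat 80% of line rate is achieved; ignores migration
# pre-copy rounds and page dirtying, so treat these as lower bounds.
def transfer_seconds(ram_gib, link_gbps, efficiency=0.8):
    bits = ram_gib * 2**30 * 8
    return bits / (link_gbps * 1e9 * efficiency)

for link_gbps in (1, 10, 40):
    t = transfer_seconds(64, link_gbps)
    print(f"{link_gbps:>2} Gbps: {t:7.1f}s for a 64 GiB VM")
```

Roughly a minute at 10Gbps vs. a quarter of that at 40Gbps for this VM size, so the 40Gbps gear only pays off if you migrate very large-memory VMs often.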


----------



## Dalchi Frusche

I just managed to snag this 42U cabinet for $35 USD!












*Edited to replace picture with the actual rack I received


----------



## Prophet4NO1

I never find deals that good for racks.


----------



## pvt.joker

Quote:


> Originally Posted by *Prophet4NO1*
> 
> I never find deals that good for racks.


I once traded a 12pk of beer for a 42U rack.. the guy even let me and my buddy stand around and help him drink it while we loaded the rack in my truck


----------



## Dalchi Frusche

Quote:


> Originally Posted by *Prophet4NO1*
> 
> I never find deals that good for racks.


I thought I'd never find a rack under $100. Keep an eye out on Craigslist... especially if you have a college nearby.
Quote:


> Originally Posted by *pvt.joker*
> 
> I once traded a 12pk of beer for a 42U rack.. the guy even let me and my buddy stand around and help him drink it while we loaded the rack in my truck


Haha, that's one hell of a find. The beer time was a nice bonus as well.


----------



## cdoublejj

Quote:


> Originally Posted by *burksdb*
> 
> I used that exact combo (hba, expander, Os and case) without any issues.


lol i just bought LSI's and flashed them all to IT mode, probably should have used my head and weighed the pros and cons. Oh well, at least it has more blinking lights.


----------



## burksdb

Quote:


> Originally Posted by *cdoublejj*
> 
> lol i just bought LSI's and flashed them all to IT mode, probably should have used my head and weighed the pros and cons. Oh well, at least it has more blinking lights.


For unRAID it won't make much of a difference, but if you ever switch to something like ZFS you're better prepared for it.


----------



## twerk

Anyone have any experience with the new Denverton based Supermicro boards?

I'm looking at the A2SDi-4C-HLN4F for a FreeNAS build, looks really nice.

https://www.supermicro.com/products/motherboard/atom/A2SDi-4C-HLN4F.cfm


----------



## cdoublejj

Quote:


> Originally Posted by *burksdb*
> 
> For unRAID it won't make much of a difference, but if you ever switch to something like ZFS you're better prepared for it.


is that due to extra load or something? does ZFS benefit from the extra controllers over an expander?


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> Anyone have any experience with the new Denverton based Supermicro boards?
> 
> I'm looking at the A2SDi-4C-HLN4F for a FreeNAS build, looks really nice.
> https://www.supermicro.com/products/motherboard/atom/A2SDi-4C-HLN4F.cfm


Level1Techs had nice things to say about the Intel Avoton Atom; this says 2017, so maybe it's a continuation of Avoton or a cut-down version?


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> level 1 techs had nice things to say about the intel Avoton Atom, this says 2017 so maybe it's continuation of Avoton or a cutdown version?


It's based on the new Atom architecture, very similar just a smaller lithography with more connectivity.

I think I'm going to use it in my build, I posted this in the FreeNAS forums but it would be great to see what you all think too:

Case: *Fractal Design Node 304* (FD-CA-NODE-304-BL)
Fans: *1x Noctua NF-A14 PWM & 2x Noctua NF-A9 PWM*
Power Supply: *Seasonic G-450W* (SS-450RM)
Storage Drives: *6x WD Red 3TB *(WD30EFRX) - I already own 3 otherwise I'd probably go higher capacity
Boot SSD: *Samsung PM961 128GB *(MZVLW128HEGR)
Motherboard/CPU:* Supermicro A2SDi-4C-HLN4F*


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> It's based on the new Atom architecture, very similar just a smaller lithography with more connectivity.
> 
> I think I'm going to use it in my build, I posted this in the FreeNAS forums but it would be great to see what you all think too:
> 
> Case: *Fractal Design Node 304* (FD-CA-NODE-304-BL)
> Fans: *1x Noctua NF-A14 PWM & 2x Noctua NF-A9 PWM*
> Power Supply: *Seasonic G-450W* (SS-450RM)
> Storage Drives: *6x WD Red 3TB *(WD30EFRX) - I already own 3 otherwise I'd probably go higher capacity
> Boot SSD: *Samsung PM961 128GB *(MZVLW128HEGR)
> Motherboard/CPU:* Supermicro A2SDi-4C-HLN4F*


----------



## cdoublejj




----------



## Pawelr98

Athlon II X2 250
1 core disabled, runs at 2GHz

Gigabyte 870A-USB3

4GB GoodRam 1333MHz

Ati X600 128MB

Pentagram SilentForce 460W

Ever ECO PRO 700 CDS (UPS)

HP P400 SAS controller:
-6xHitachi 2TB in RAID 6 (7.27TiB usable)
-2x160GB in RAID1

Onboard controller:
-40GB OS drive
-1TB Hitachi Ultrastar (dying)

All controlled by Debian 8.2.0 x64.

Case is some no-name unit which I modified to accommodate 10 HDDs.
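The 7.27TiB figure above checks out: RAID 6 spends two drives' worth of capacity on parity, and "2TB" on the label is decimal bytes while the OS reports binary TiB. A quick sanity check:

```python
# 6x 2TB in RAID 6: two drives' worth of parity remains, and "2TB" is
# decimal (10**12 bytes) while the OS reports binary TiB (2**40 bytes).
data_drives = 6 - 2
usable_bytes = data_drives * 2 * 10**12
usable_tib = usable_bytes / 2**40
print(round(usable_tib, 2))  # ~7.28, truncating gives the reported 7.27
```

The same decimal-vs-binary gap is why every "4TB" drive shows up as roughly 3.64TiB on its own.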


----------



## exwar

Hi, I need help: what power supplies fit the Norco RPC-270?


----------



## bobfig

Quote:


> Originally Posted by *exwar*
> 
> Hi i need help what power supplies for norco rpc-270?


https://www.newegg.com/Product/Product.aspx?Item=N82E16817338119

should work?!?


----------



## exwar




----------



## exwar

I am looking to get a switch:
http://www.ebay.com/itm/Cisco-Catalyst-WS-C3750-24PS-S-Managed-24-Port-Gigabit-Switch-Layer-3-VLAN-2xSFP-/263178852853?hash=item3d46ae99f5:g:vYcAAOSwx6FZsPWa

I have 2 10Gb (Base-T) devices and more are coming. So do I buy the Cisco switch, or is it better to get a 10Gb switch?


----------



## Simmons572

Quote:


> Originally Posted by *exwar*
> 
> I am looking to get switch
> http://www.ebay.com/itm/Cisco-Catalyst-WS-C3750-24PS-S-Managed-24-Port-Gigabit-Switch-Layer-3-VLAN-2xSFP-/263178852853?hash=item3d46ae99f5:g:vYcAAOSwx6FZsPWa
> 
> i have 2 10gb (base-T) device and more is coming. So do I buy cisco swich or is it better get 10 gb switch?


https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-3750-series-switches/product_data_sheet0900aecd80371991.html

According to the Cisco Datasheet, those 2 SFP ports are only rated for Gigabit. If you are looking for 10Gb, you will need to look at a different switch.

However, if you are happy with a Gigabit managed switch, it looks like a good option.


----------



## zdude

Just got my server converted over to proxmox and swapped to a 2p board. Pictures of the physical system to come.


----------



## cdoublejj

Does anyone here in SoCal ever look to buy used chassis like an m1000e or used 10goga bit modules and stuff like that?


----------



## herkalurk

Quote:


> Originally Posted by *cdoublejj*
> 
> Does any on here in SoCal ever look to buy used chassis like an m1000e or used 10goga bit modules and stuff like that?


First of all, where is MO, USA? Second, I would like to know more about 10 goga bit modules.


----------



## Prophet4NO1

Quote:


> Originally Posted by *herkalurk*
> 
> First of all, where is MO, USA? Second, I would like to know more about 10 goga bit modules.


MO = Missouri


----------



## cdoublejj

every time i have to use my phone to get on the internet i want to throw it and punch a baby in the face. in this case i have some gear in southern california, a little north of san diego. i have a dell m1000e chassis and 10 gig SFP+ modules and the power supplies that go with it.


----------



## deafboy

I don't live in SoCal but I do go to San Diego a lot, lmao.

Don't tempt me!


----------



## lowfat

Quote:


> Originally Posted by *zdude*
> 
> Just got my server converted over to proxmox and swapped to a 2p board. Pictures of the physical system to come.


Converted from what? I've been using ESXi for my home server for 4 years and am looking to ditch it for Proxmox so I don't have to run FreeNAS in a VM.

Not sure I'll be able to get any of my FusionIO drives working though.


----------



## cdoublejj

Quote:


> Originally Posted by *deafboy*
> 
> I don't live in SoCal but I do go to San Diego a lot, lmao.
> 
> Don't tempt me!


i got a fully loaded m1000e chassis, minus the servers, headed for the scrapper if we can't get a decent price for it. i also have the servers; think they are m620s or something like that. that and some 10GbE network modules, ethernet and SFP+. i'm posting on reddit hardwareswap atm.

i'm trying to feel out if i can sell this thing; barely had any time to grab photos of HALF of the stuff today,


http://imgur.com/Lrqoa


----------



## deafboy

Quote:


> Originally Posted by *cdoublejj*
> 
> i got a fully loaded m1000e chassis minus the servers headed for the scrapper if we can't get a decent price for it. and also have the servers, think they are m620s or something like that. that and some 10gbe network modules ethernet and SFP+. i'm posting on reddit hardwareswap atm.
> 
> i'm trying to feel out if i can sell this thing, barely had any time to grab any photos of HALF of the stuff today,
> 
> 
> http://imgur.com/Lrqoa


try out homelabsales.reddit.com as well


----------



## zdude

Quote:


> Originally Posted by *lowfat*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Just got my server converted over to proxmox and swapped to a 2p board. Pictures of the physical system to come.
> 
> 
> 
> 
> 
> Converted from what? I've been using ESXi for my home server for 4 years and am looking to ditch it for Proxmox so I don't have to run FreeNAS in a VM.
> 
> Not sure I'll be able to get any of my FusionIO drives working though.
Click to expand...

I went from a manually managed Ubuntu system to Proxmox. For storage I just imported my ZFS array, told Proxmox it was there, and everything worked just fine. The Ubuntu base was a mess because I run a lot of applications that didn't need a VM but should have been isolated, and I never went through the effort of setting up containers. Proxmox has allowed me to give each game server, Plex, and file server host (Samba, NFS, Ceph) its own container to run from. This way, when I break one it shouldn't break everything running on the server...


----------



## cdoublejj

Anyone know where i can find a Dell USB Remote Access Key ("RAK"), Dell part number DKTC7? They are likely older than dirt from what i can tell.


----------



## stevef9432203

Looks like eBay may have several card-and-key pairs for sale.


----------



## stevef9432203

Hello All

Stevef9432203 here.
Using an ASRock X99 Extreme4 to build basically a NAS storage appliance.

Model No. SST-CS380B


Xeon e5 2660 10 core (ebay)
8 Hitachi 3TB Enterprise drives (ebay)
32GB RAM, 512GB Samsung M.2 card (ebay)
Corsair CX750M PSU (on hand)
Asus Thunderbolt Tx3 card (ebay)
NetExtreme II dual port ether card (ebay)
Gigabyte Nvidia 960 card (on hand)

Fedora 26 as OS, mdraid, etc.


----------



## link1393

Hi guys,

I am looking to build an OPNsense firewall and I would like some suggestions for my motherboard.

Here is the hardware I am looking to use:

- Case : Supermicro SC505-203B Link

- MB : ASRock C2550D4I Link

- CPU : Intel Avoton C2550 (onboard)

- RAM : 4 or 8GB standard DDR3 (already owned)

- HDD : 500GB 2.5" (already owned)

I would like to drop the cost a little bit by changing the MB, but I can't find another MB with a fanless (onboard) CPU that supports AES and IPMI.

Any advice for a good motherboard?

I'm in Canada.

I'm now in my own house and I am going to build my homelab.


----------



## cdoublejj

Quote:


> Originally Posted by *stevef9432203*
> 
> Looks like eBay may have several card-and-key pairs for sale.


What search term did you use?


----------



## stevef9432203

dell remote access key


----------



## Rayleyne

Guys, I am looking for a cheap (as cheap as possible) rack-mountable case that will take ATX hardware and a crap-ton of drives. If anyone is able to provide model numbers that'd be great. It just needs to fit an ATX PSU and ATX board, as I already have my chosen NAS hardware.


----------



## Simmons572

Quote:


> Originally Posted by *Rayleyne*
> 
> Guys, I am looking for a cheap (as cheap as possible) rack-mountable case that will take ATX hardware and a crap-ton of drives. If anyone is able to provide model numbers that'd be great. It just needs to fit an ATX PSU and ATX board, as I already have my chosen NAS hardware.


Cheap as possible, you say?

In all seriousness, you're probably best off checking a surplus computer store, Craigslist, or eBay for a good deal on some pre-owned hardware. I just checked Newegg, and the cheapest 4U server chassis they have is a 20-drive Norco for $328.

https://www.newegg.com/Product/Product.aspx?Item=9SIA3912D86915

EDIT: I just noticed you are in Aussie land, and I imagine that prices are pretty extravagant down there.


----------



## Rayleyne

Quote:


> Originally Posted by *Simmons572*
> 
> Cheap as possible, you say?
> 
> In all seriousness, you're probably best off checking a surplus computer store, Craigslist, or eBay for a good deal on some pre-owned hardware. I just checked Newegg, and the cheapest 4U server chassis they have is a 20-drive Norco for $328.
> 
> https://www.newegg.com/Product/Product.aspx?Item=9SIA3912D86915
> 
> EDIT: I just noticed you are in Aussie land, and I imagine that prices are pretty extravagant down there.


Yeah, prices here are obscene. It actually costs less than 200 bucks for me to get a second-hand dual-hex system with 32GB DDR3 ECC, but finding a rack-mount chassis that supports ATX and has 16 drive bays? Hahahahahahahaha...ha...ha


----------



## zdude

It's not 16 drives and there's no hot swap, but I don't think you will get much cheaper.

https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4500/dp/B0091IZ1ZG/ref=sr_1_4?ie=UTF8&qid=1505232224&sr=8-4&keywords=4u+chassis


----------



## cdoublejj

Why's everyone switching to Proxmox? I know Wendell from Level1Techs likes it. I wonder if it can do Shared Virtual Graphics like ESXi can?


----------



## mbmumford

This past week or so I have been researching the *insert-favorite-expletive* out of Unraid and Proxmox to install as my host OS.

I'm leaning towards Proxmox myself. I'm running a nested version at the moment to try to get a feel for it.


----------



## zdude

At this time I don't believe that it can do shared graphics like ESXi; however, whenever AMD gets around to releasing their Instinct cards, Proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy Teslas for my server).


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> At this time I don't believe that it can do shared graphics like ESXi; however, whenever AMD gets around to releasing their Instinct cards, Proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy Teslas for my server).


Instinct cards? ....go on.....

EDIT: AMD fell off vSGA compatibility on ESXi 6.5. I have heard AMD is cooking up a new series and/or technology for virtualization, but I don't know if that's ESXi-specific or what.


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> instinct cards? ....go on.....
> 
> EDIT: AMD fell of vSGA compatibility on ESXi 6.5 i have heard AMD is cooking up a new series and or technology for virtualization but, idk if the that's ESXi specific or what.
Click to expand...

The cards AMD are going to be releasing are supposed to support SR-IOV, which means they should allow vGPU on arbitrary hypervisors, not just a single one, with true hardware sharing rather than vSGA. I will have to see it to believe it. I have worked with the Nvidia GRID stuff at work; all I can say about it is that if you are not paying me to deal with that licensing nightmare, I will not... I haven't ever worked with vSGA though; that is just a software GPU, correct?

Link to AMD's website:
https://instinct.radeon.com/en-us/product/mi/

I am hoping that one of the consumer-grade cards can be modded to run as an Instinct card. I have a Fury and an RX 480 I will be trying it on, and if it is possible I may pick up a Vega FE as well.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> The cards AMD are going to be releasing are supposed to support SR-IOV, that means that they should allow for vGPU on arbitrary hyper-visors not just any single one not vSGA and true hardware sharing, will have to see it to believe it. I have worked with the nvidia grid stuff at work, all I can say about it is if you are not paying me to deal with that licensing nightmare I will not... Haven't ever worked with vSGA though, that is just a software GPU correct?
> 
> Link to AMDs website
> https://instinct.radeon.com/en-us/product/mi/
> 
> I am hoping that one of the consumer grade cards can be modded to run as a Instinct card, I have a Fury and RX 480 I will be trying on and if it is possible I may pick up a Vega FE as well.


vSGA is when the hardware GPU's resources are shared among several guests. In my case the cores and vRAM on my W7000 are split up and given to the guests, which use a special driver. vSGA lacks the full set of graphics APIs, like the default software renderer does, but it lacks a lot less and is backed by a hardware GPU; this helps reduce CPU load too. Off topic, but on that note there is another device, a PCoIP hardware accelerator, that can hardware-accelerate converting all the frames and controls to PCoIP; look up the Teradici card. It seemed hit-and-miss on forums, so I didn't get one. Also, PCIe lanes and ports are limited on socket 1366. Ryzen or Epyc might be a cool platform, but all nodes would need upgrading, since you can't (or shouldn't) mix Intel and AMD nodes on ESXi or most hypervisors.

https://www.lewan.com/blog/2015/03/30/vgpu-vsga-vdga-software-why-do-i-care

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-horizon-view-graphics-acceleration-deployment.pdf

These concepts are the same but apparently going away; AMD is cooking up something to replace vSGA, if my memory serves correctly, based on a comment I received on a GPU forum where I asked.

I'll link some YouTube videos, but they are old and don't necessarily represent each scenario's or environment's real performance or use case.

This looks more promising to me: older games, with lower GPU and API demands. That's what I wanna play with. GRID does not lack APIs the way Soft 3D or vSGA do; Soft 3D lacks so much it's almost unusable. Another option would be to pass through a video card, which the SR-IOV cards will make easier, and install something like Steam streaming or Moonlight, but you won't get vMotion, and I assume you might not get live migration in other hypervisors either. vSGA, GRID and Soft 3D all get vMotion. Finally, when I asked about the Teradici card, I kept hearing that the Blast protocol is the way to go with VMware Horizon View. Just not the Sega Genesis kind.


----------



## zdude

I know that Proxmox doesn't support Horizon on its own, but there is no reason that Horizon can't be installed within the guests of a Proxmox host. As for vSGA (software GPU), vGPU (GRID), and vDGA (passthrough), here is my understanding:

vSGA -- A software layer that provides API call paths and forwards them to a physical GPU such as a Tesla or FirePro; fairly slow, with a fair amount of CPU overhead. GPU drivers are VMware-specific. vMotion supported.
vGPU -- A partitioning of the physical hardware. API calls go directly through Nvidia/AMD drivers to the hardware; fairly little CPU overhead. GPU drivers are GPU-vendor specific. vMotion not supported. (This is SR-IOV or GRID.)
vDGA -- Passthrough. A single GPU is connected directly to a single guest. The guest uses the vendor drivers and gets the highest performance of all the graphics options. vMotion not supported.

Getting the image to a client through things such as Horizon shouldn't be tied to the vGPU type at all.

Helpful little graphic from VMware



Let me know if this makes any sense.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> I know that Proxmox doesn't support horizon on its own, but there is no reason that horizon can't be installed within the guests of a proxmox host. As for the vSGA (software GPU), vGPU (GRID), vDGA (Passthrough) here is my understanding.
> 
> vSGA -- A software layer that provides API call paths and forwards those to a physical GPU such as a tesla or firepro, fairly slow with a fair amount of CPU overhead. GPU drivers are VMware specific. VMotion supported
> vGPU -- A partitioning of the physical hardware. API calls go directly through Nvidia/AMD drivers to hardware. Fairly little CPU overhead. GPU drivers are GPU vendor specific. VMotion not supported (This is SR-IOV or GRID)
> vDGA -- Passthrough. A single GPU is connected directly to a single guest. This guest uses the vendor drivers and has the highest performance of all graphics options. VMotion not supported
> 
> Getting the image to a client through things such as horizon shouldn't be tied to the vGPU type at all.
> 
> Helpful little graphic from VMware
> 
> 
> 
> Let me know if this makes any sense.


I didn't think Proxmox supported Horizon View; I thought XenServer and Proxmox had their own thin clients, a little thin-client OS too.

GPU stuff in Xen seemed like such a PITA. In hindsight I think it was/is the same steps as for KVM on Linux, which is/was nowhere near as easy as it is on ESXi, so I sprang for ESXi. Now you can get VMUG for $200 or less a year.


----------



## zdude

XenServer does have its own thin client OS, but there is no reason you can't install the Horizon server on a random guest OS and treat it just like you do on ESXi. I usually just end up using KVM paired with Splashtop for good streaming performance, to save the $200 per year for VMUG.


----------



## cdoublejj

Never heard of Splashtop. So Horizon View isn't built into VMware Tools and the VMware software suite or whatever? I guess I never thought about how it works or installs; I always had that on my to-do list.


----------



## Rbby258

AnyDesk, I found, works the best.


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> never heard of splash top. so Horizon view isn't built in to the VMware tools and vmware software suite or what ever? i guess i never thought about how it works or installs, always had put that on my to do list.


My understanding is that it is hardware agnostic. I know a VMware rep was trying to convince us to move our physical workstations into the "cloud" and use Horizon to access them at work. Based on that I would be led to believe that it can be installed on anything running Windows, but that was on the promise of a sales guy, so take it with a grain of salt.
Quote:


> Originally Posted by *Rbby258*
> 
> Anydesk i found works the best.


I will need to play with AnyDesk. Their website is a little sparse on actual technical details and a little heavy on shiny big bold letters for me to be overly confident it will work best, but who knows, they may live up to the promises they are making. Just wish they would say how...


----------



## herkalurk

Quote:


> Originally Posted by *zdude*
> 
> My understanding is it is hardware agnostic, I know a VMware rep was trying to convince us to move our physical workstations into the "cloud" and use Horizon to access them at work. Based on that I would be lead to believe that it can be installed on anything running windows, but that was on the promise of a sales guy so take it with a grain of salt.


In regards to VMware Horizon View, they have clients for Windows and Mac. You still need the infrastructure to run it. I've used the VDI connection on both and they work well.


----------



## cdoublejj

There is a Linux client too.
Quote:


> Originally Posted by *zdude*
> 
> My understanding is it is hardware agnostic, I know a VMware rep was trying to convince us to move our physical workstations into the "cloud" and use Horizon to access them at work. Based on that I would be lead to believe that it can be installed on anything running windows, but that was on the promise of a sales guy so take it with a grain of salt.
> I will need to play with anydesk, their website is a little sparse on actual technical details and a little heavy on its shiny in big bold letters for me to be overly confident it will work best, but who knows they may live up to the promises they are making there. Just wish they would say how...


The _client_ can be installed on anything, I know that much; I've never thought about researching whether the server side can be used with a non-VMware hypervisor.


----------



## t0adphr0g

*OS*:Windows 10 Professional
*Case*:Thermaltake Level 10 GT Snow Edition
*CPU*:Intel i5 3570k
*Motherboard*:ASUS P8Z77-V
*Memory*:16 GB Corsair DDR3
*PSU*: PC Power & Cooling Silencer Mark II 950 Watt
*OS HDD*: Plextor PX-256M5 Pro
*Storage HDD(s)*:4x 6TB WD Black(s)
*Server Manufacturer *:t0adphr0g (Me)


----------



## zdude

Now that is an interesting server build. How well do the single 120mm fans handle the heat when the server is under load? You have a GPU of some kind in there, which means it is probably ~500W under load...

Does anybody have any experience with these ECC DDR3 sticks?

http://www.ebay.com/itm/SAMSUNG-16GB-PC3L-10600R-DDR3-1333-REGISTERED-ECC-MEMORY-MODULE-M393B2K70DMB-YH9-/311719536272?epid=1001652468&hash=item4893eea690:g:ScwAAOSwHm5ZutOY

Just want to make sure they are not a power hog, thinking about putting either 8 or 16 of them into my server atm....


----------



## t0adphr0g

Quote:


> Originally Posted by *zdude*
> 
> Now that is an interesting server build. How well do the single 120mm fans handle the heat when the server is under load, you have a GPU of some kind in there which means it is probably ~500W under load....


Not shown are the 240mm exhaust and intake fans on the left and right of the cabinet. So far, under full load, I see temps not exceeding 75C. I have considered going with liquid cooling and having the rad outside the arcade cabinet.

Now, this is a Plex server with 5 max users (Anime, Movies, TV Shows, Pictures, and Music). I am probably not going to have the heat and throughput of most of the other servers posted in here.


----------



## zdude

Quote:


> Originally Posted by *t0adphr0g*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Now that is an interesting server build. How well do the single 120mm fans handle the heat when the server is under load, you have a GPU of some kind in there which means it is probably ~500W under load....
> 
> 
> 
> Not shown are the exhaust and intake fans 240mm on the left and right of the cabinet. So far under a full load I see temps not exceeding 75c. I have considered going with liquid cooling, and having the rad outside the arcade cabinet.
> 
> Now this is a Plex Server with 5 max users. (Anime, Movies, TV Shows, Pictures ,and Music). I am probably not going to have the heat and throughput as most of the other servers posted in here.
Click to expand...

Out of curiosity, what is the GPU and what is it used for in the server?


----------



## t0adphr0g

Quote:


> (SNIP!)...Out of curiosity, what is the GPU and what is it used for in the server?


Nvidia GeForce GTX 970 for the 22" 1080p AOC monitor; the smaller 17" USB (marquee) monitor needs no video card and runs through USB using DisplayLink.

The arcade cabinet sits in my basement (surrounded by MD red clay), upon a cold tile floor.


----------



## Liranan

Quote:


> Originally Posted by *zdude*
> 
> Does anybody have any experience with these ecc ddr3 sitcks?
> 
> http://www.ebay.com/itm/SAMSUNG-16GB-PC3L-10600R-DDR3-1333-REGISTERED-ECC-MEMORY-MODULE-M393B2K70DMB-YH9-/311719536272?epid=1001652468&hash=item4893eea690:g:ScwAAOSwHm5ZutOY
> 
> Just want to make sure they are not a power hog, thinking about putting either 8 or 16 of them into my server atm....


I use 2x4GB DDR3-1333 ECC sticks in my server and they work quite well. They are rated at 1.35V, so they use the same amount of power as the two non-ECC Kingston 1600MHz sticks I have in my gaming PC.

I don't know whether it's possible to check the number of corrected bits, but my server has had an uptime of 30 days, so I am happy with them. Sadly I had a power blackout a month ago, otherwise uptime would have been two months.

Sadly, one of my Hitachi ACA300s is broken and ticking badly, so once the replacement I've ordered arrives I will RMA the broken one. I don't really need another 3TB, as my media library hasn't grown much in the past while, but another 3TB is always nice to have.


----------



## Muskaos

Quote:


> Originally Posted by *zdude*
> 
> It is not 16 drives and no hot swap but I don't think you will get much cheaper.
> 
> https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4500/dp/B0091IZ1ZG/ref=sr_1_4?ie=UTF8&qid=1505232224&sr=8-4&keywords=4u+chassis


I use one of those as my rack-mount file server; see Polio in my signature. It is running Ubuntu Server atm, but it ran Windows Home Server 2011 for years before that.


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> Just got my server converted over to proxmox and swapped to a 2p board. Pictures of the physical system to come.


I'm trying to set up Proxmox in what looks to be the same way you have (Plex in a container, Windows install, ZFS pool for Plex media, etc.).

As I have almost no experience with Linux, I'm slowly forging my way through everything. Was there any guide you followed or saw that you would recommend?

I have Proxmox nested within VMware Player, and at the moment I have Plex installed in a container. As the Windows install should be a simple process, I'm currently trying to understand how to set up the ZFS pool, and how best to partition my SSD (which will ultimately host Proxmox once I figure all this out) to use as a cache drive for the ZFS pool.


----------



## zdude

Quote:


> Originally Posted by *mbmumford*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Just got my server converted over to proxmox and swapped to a 2p board. Pictures of the physical system to come.
> 
> 
> 
> 
> 
> I'm trying to setup Proxmox in what looks to be the same way you have (Plex in a container, Windows install, ZFS pool for Plex media , etc).
> 
> As I have almost no experience with Linux, I'm slowly forging my way through everything. Was there any guide you followed or saw that you would recommend?
> 
> I have Proxmox nested within VMware Player, and at the moment have Plex installed in a container. As the Windows install should be a simple process, I'm currently trying to understand how to setup the ZFS pool, and how best to partition my SSD (which will ultimately host Proxmox once I figure all this out) to use as a cache drive for the ZFS pool.
Click to expand...

I actually moved to Proxmox from a completely CLI-managed Ubuntu install, so I didn't really follow any set guides. The easiest way to get ZFS set up is to go to the shell window and run a zpool create command. Below is the one I used on my system...

zpool create tank raidz2 sda sdb sdc sde sdf sdg sdh sdi

zpool create is needed no matter what
tank is the name of the pool and means the pool is mounted in /tank
raidz2 means a vdev will be created with a raid6 style redundancy (double parity)
sd* are the devices in the raidz2 vdev.

You can then add it to the UI following this page

https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks

You will need to create a Samba container to export the pool so you can mount it as a network share on a Windows client as well.
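
A minimal sketch of what that Samba container's share config might look like, assuming the pool is bind-mounted into the container at /tank (the share and user names here are illustrative, not from the post):

```shell
# Install Samba and export /tank as a writable share.
apt-get install -y samba
cat >> /etc/samba/smb.conf <<'EOF'
[tank]
   path = /tank
   browseable = yes
   read only = no
   valid users = shareuser
EOF
smbpasswd -a shareuser      # give the share user a Samba password
systemctl restart smbd      # reload the new share definition
```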


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> I actually moved to Proxmox form a completely CLI managed Ubuntu install so I didn't really follow any set guides. The easiest way to get ZFS set up is to go to the shell window run a zpool create command. Below is the one I used on my system...
> 
> zpool create tank raidz2 sda sdb sdc sde sdf sdg sdh sdi
> 
> zpool create is needed no matter what
> tank is the name of the pool and means the pool is mounted in /tank
> raidz2 means a vdev will be created with a raid6 style redundancy (double parity)
> sd* are the devices in the raidz2 vdev.
> 
> You can then add it to the UI following this page
> 
> https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks
> 
> You will need to create a samba container to export the pool so you can mount it as a file-system on a windows client as well.


Thanks for the info!

By the looks of your zpool create command you did not use a cache drive. I'm assuming you converted your "Thor" build and installed Proxmox on the SSD; any reason you decided not to use a partition on your SSD for the cache?


----------



## zdude

I actually haven't updated my sig machines in a while; THOR has been retired for almost 6 months, since Ryzen came out. I am using a server chassis and 2x E5-2697 v2 CPUs. I chose not to use a cache drive because I am not actually booting VMs off of ZFS; I only store media and general files on my ZFS pool, so most access is sequential. The sequential performance on my pool is ~400MB/s read over the 10Gb network to my desktop.

Using an L2ARC has a few disadvantages. An L2ARC is always assumed to be empty on boot and is assumed to always be faster than the pool. Empty-on-boot means that if VMs are being run on the pool, the L2ARC won't help with boot-up speeds, and when actually running, most of the random reads are fairly repetitive and should fit within the L1ARC in memory. When the cache is actually full, if you have more than ~8 drives in the pool, the pool should read almost as fast sequentially as an SSD, meaning that the L2ARC is just consuming CPU cycles to manage and L1ARC space to index.
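
A quick sanity check before buying an L2ARC is to look at how well the L1ARC is already doing. On ZFS on Linux the counters live in `/proc/spl/kstat/zfs/arcstats`; the hit-ratio arithmetic itself is trivial (the numbers below are made-up sample values, not real measurements):

```shell
# Compute an ARC hit ratio from hit/miss counters. On a real system,
# read "hits" and "misses" out of /proc/spl/kstat/zfs/arcstats instead
# of using these made-up sample values.
hits=950000
misses=50000
ratio=$(( 100 * hits / (hits + misses) ))
echo "ARC hit ratio: ${ratio}%"
```

If the ratio is already high, an L2ARC has little left to cache.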

For VMs I am actually running a 4 drive Ceph node/cluster. Ceph allows for direct writes to the drives and reduces latency.


----------



## mbmumford

I had not yet seen someone explain the disadvantage of a cache drive for this purpose, but that made sense.

You just saved me a fair bit of aggravation, and I thank you.

Did you have to deal with setting up the PCIe pass through to your Windows VM?


----------



## zdude

I do not actually have the Windows VM on my system set up with passthrough, but I have done it several times. It has always been finicky for me; not something that is as easy as some people like to make it sound... But once it is working, it is usually set-and-forget.

I only use the Windows VM to host a Windows-only server application.
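
For reference, on Proxmox the finicky part is mostly host setup (IOMMU enabled, device unbound from host drivers); handing the device to the VM itself is one command. A hedged sketch (the PCI address 01:00.0 and VM ID 101 are examples, not from the post):

```shell
# Host kernel command line needs the IOMMU enabled first, e.g. on Intel:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then update-grub and reboot. After that, attach the device to VM 101:
qm set 101 -hostpci0 01:00.0,pcie=1
```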


----------



## cdoublejj

Quote:


> Originally Posted by *mbmumford*
> 
> I'm trying to setup Proxmox in what looks to be the same way you have (Plex in a container, Windows install, ZFS pool for Plex media , etc).
> 
> As I have almost no experience with Linux, I'm slowly forging my way through everything. Was there any guide you followed or saw that you would recommend?
> 
> I have Proxmox nested within VMware Player, and at the moment have Plex installed in a container. As the Windows install should be a simple process, I'm currently trying to understand how to setup the ZFS pool, and how best to partition my SSD (which will ultimately host Proxmox once I figure all this out) to use as a cache drive for the ZFS pool.


Does that ZFS pool require matching drives?


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *mbmumford*
> 
> I'm trying to setup Proxmox in what looks to be the same way you have (Plex in a container, Windows install, ZFS pool for Plex media , etc).
> 
> As I have almost no experience with Linux, I'm slowly forging my way through everything. Was there any guide you followed or saw that you would recommend?
> 
> I have Proxmox nested within VMware Player, and at the moment have Plex installed in a container. As the Windows install should be a simple process, I'm currently trying to understand how to setup the ZFS pool, and how best to partition my SSD (which will ultimately host Proxmox once I figure all this out) to use as a cache drive for the ZFS pool.
> 
> 
> 
> does that ZFS pool require matching drives?
Click to expand...

I do not have matching drives in the pool, or even same-capacity drives. One of the drives is a true 3.0TB drive; the rest show up as 2.7TB drives.


----------



## mbmumford

Quote:


> Originally Posted by *cdoublejj*
> 
> does that ZFS pool require matching drives?


From my understanding you can use different-sized drives; however, this requires partitioning the larger drives down to the size of the smallest.

Different-brand drives shouldn't ever matter, if that was what you were asking.

All of that said, I have yet to try building any ZFS pool, and this is all from what I have read. My personal setup will use 4x 6TB WD Red drives.

Edit: Looks like @zdude beat me to the punch in answering. Since he has experience in this, I defer to his knowledge.


----------



## Prophet4NO1

Found a rack for $50. Even comes with a couple 4U cases and a switch. No details yet, but I am picking it up on Sunday. Pics when I get it home!


----------



## Prophet4NO1

So, I managed to grab my rack today. Only issue: I can't get it in the house, lol. I'm going to pick up a powered stair-climber dolly tomorrow morning, which should help me get it in the house and up the stairs. The cases are pretty basic big 4U ones with no drive bays aside from optical bays; I need to look into whether I can get drive racks for the opposite side once I figure out the brand. The switch is a 3Com gigabit switch. I don't really need it since I have my Cisco one, but it never hurts to have one.


----------



## Rayleyne

I should probably get around to posting pics of the server cabinet I'm slowly filling, and of my obscene pfSense box, which no one can give me a straight answer on.


----------



## zdude

Quote:


> Originally Posted by *Rayleyne*
> 
> I should probably get around to posting pics of the server cabinet im slowly filling and my obscene pfsense box, to which no-one can give me a straight answer on.


I should post some too; instead I am writing up a thing on Ceph/ZFS for home use after all the discussion here the past week or so. I haven't really looked into what pfSense does; what benefit does it provide?

For the purposes of this guide/info post I am going to assume that a Linux distro is being used. My personal preference is either Ubuntu Server LTS or Proxmox VE. On these platforms there are five main redundancy-providing storage systems available, each with its own benefits and drawbacks: BTRFS, MDADM, hardware RAID, ZFS and Ceph.

BTRFS is eventually going to be "ZFS but better"; however, it is currently at a very early stage, and with drive failures it is possible to lose data when you shouldn't, so I don't recommend using BTRFS for anything beyond experimentation. I am not saying that you will lose your data if a drive fails, only that there are corner cases which are still not covered.

MDADM is the built-in Linux RAID system. I am not overly familiar with MDADM arrays, but to the best of my knowledge they do not provide any caching and can be more difficult to work with than a ZFS or RAID-card based array.
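
For comparison, a minimal MDADM setup for the same kind of 8-drive, double-parity array might look like this (device names are illustrative; this is a sketch, not a tested recipe):

```shell
# Create a RAID6 array from 8 disks, put a filesystem on it, persist config.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0                                # mdadm gives you a block device, not a filesystem
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # remember the array across reboots
cat /proc/mdstat                                  # watch the initial resync progress
```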

Hardware RAID is traditionally very stable and allows an arbitrary OS and any file-system to be placed on the storage. Hardware RAID can utilize caching and typically offers the full suite of redundancy options. However, if a RAID card fails, it must be replaced with an IDENTICAL RAID card, identical all the way down to the firmware version, or the array may not be imported correctly.

ZFS is a software RAID implementation originally developed at Sun Microsystems (later Oracle). ZFS provides the ability to make arbitrary RAID arrays from generic hard drives and offers multiple caching systems for increased performance. ZFS allows any system running ZFS to import a pool (ZFS calls an array a pool), meaning that as long as the physical disks survive a hardware failure, the data can be recovered. Downsides to ZFS include the need for a fair quantity of RAM, and inflexibility after the pool has been created.
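
The portability claim above is easy to see in practice. Assuming a pool named `tank` whose disks were moved to a new machine, recovery is roughly:

```shell
# Scan attached disks for importable pools, then import and verify one.
zpool import          # lists pools found on attached disks
zpool import tank     # imports the pool named "tank"
zpool status tank     # shows vdev layout and health after the move
```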

Ceph is the most flexible system that I have considered for home use. Ceph allows arbitrary hard drives to be added; it is possible to use 1TB through 12TB drives all in the same Ceph "cluster." Ceph is the most complex to implement and manage, but with multiple systems it allows for complete system fault tolerance.
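
A single-server Ceph setup like the one described needs to be told to replicate across OSDs rather than across hosts, since there is only one host. A hedged ceph.conf sketch (the values are illustrative, not a tested configuration):

```shell
# Allow a one-node Ceph "cluster" to report healthy by replicating
# across OSDs instead of across hosts (illustrative values).
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
osd pool default size = 2
osd crush chooseleaf type = 0
EOF
ceph status    # check cluster health after restarting the daemons
```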



Spoiler: Detailed ZFS Explanation



ZFS is a file-system that provides data stability first, performance second. This means that unless you change some very specific settings, ZFS will be one of the most reliable file-systems available today. When configuring ZFS for home use, there are unlikely to be more than a few people connected at a time, and almost certainly a 10Gb network at most. From my personal usage of ZFS, eight 7200RPM 3TB hard drives are enough to max out a 10Gb network on reads and writes, parity included. However, this will be affected by several things.


Spoiler: vdev configuration



ZFS organizes its pools into vdevs. A vdev can be thought of as a RAID array in its own right; however, because ZFS stripes across all available vdevs, if one vdev fails the whole pool fails. For raw performance it is better to use more, smaller vdevs, but for home use platter drives and slower networks are typical, meaning a single larger vdev may be better than many small ones. ZFS vdevs offer the following redundancy options:

mirror - Exactly what it says: if one drive in the vdev is functioning, the data is there; allows n drives to be mirrored.
raidz1 - Similar to RAID 5; can tolerate the failure of a single drive. If a second drive fails, the vdev has failed, and as a result the pool fails.
raidz2 - Similar to RAID 6; can tolerate the failure of two drives. If a third drive fails, the vdev has failed, and as a result the pool fails.
raidz3 - raidz2 with an additional parity drive; can tolerate the failure of three drives. If a fourth drive fails, the vdev has failed, and as a result the pool fails.

For example, with my 8-drive pool there are two ways the pool could be configured: either two 4-drive raidz1 vdevs, or a single 8-drive raidz2 vdev. On paper both options provide the same redundancy and speed, but in practice the second is slightly slower and more reliable. The two-vdev layout carries a significantly increased risk of failure if a second drive fails during a rebuild, and because a couple of my drives are the notorious ST3000DM001, that isn't a risk I want to take.
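
The two layouts described above, written out as hypothetical zpool commands (device names are illustrative; /dev/disk/by-id paths are generally safer than sdX names):

```shell
# Option 1: two 4-drive raidz1 vdevs, striped together by ZFS.
zpool create tank raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh

# Option 2: one 8-drive raidz2 vdev.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
```

Both give six drives' worth of usable space; they differ in which two-drive failures they can survive.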

Recommendations for vdevs

If you are using 10 drives or fewer, I recommend a single large raidz1 or raidz2 vdev. Beyond that it is up to you how to configure the vdevs; for 20 drives I would probably do two raidz2 vdevs and call it pretty safe. If you want a RAID 10-style system, ZFS is functional but may be inferior to Ceph for home use.
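To make the comparison concrete, here is a small Python sketch of the trade-off, using the 8x 3TB drives from the example above. It computes usable space for each layout and the fraction of two-simultaneous-failure scenarios each one survives:

```python
from itertools import combinations

DRIVE_TB = 3  # the 3TB drives from the example above

def usable_tb(drives, parity):
    """Usable capacity of a single raidz vdev, ignoring overhead."""
    return (drives - parity) * DRIVE_TB

# Layout 1: two raidz1 vdevs of 4 drives each (pool stripes across both).
# Layout 2: one raidz2 vdev of all 8 drives.
layout1 = 2 * usable_tb(4, parity=1)
layout2 = usable_tb(8, parity=2)
print(layout1, layout2)  # 18 18 -> identical usable space

# Fraction of two-drive failure combinations the pool survives.
# Two raidz1 vdevs survive only if the failures hit different vdevs;
# a raidz2 vdev survives any two failures.
vdev = {d: d // 4 for d in range(8)}  # drives 0-3 in vdev 0, 4-7 in vdev 1
pairs = list(combinations(range(8), 2))
survive_2x_raidz1 = sum(vdev[a] != vdev[b] for a, b in pairs) / len(pairs)
print(round(survive_2x_raidz1, 3))  # 0.571
print(1.0)                          # raidz2 survives all double failures
```

Identical capacity on paper, but a meaningful gap in how each layout handles a second failure during a rebuild.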





Spoiler: ARC Caches



ZFS provides two levels of read cache: L1ARC and L2ARC. For home use I don't know of any situation where an L2ARC provides a real benefit. When an L2ARC is activated, a portion of the L1ARC is set aside as an index so ZFS knows what counts as an L2ARC "hit" and where to fetch it from. This effectively shrinks the very fast L1ARC, which resides in system memory. On boot, the system assumes the L2ARC is empty, so the L2ARC will not help boot procedures even if VMs are stored on the ZFS pool.

Altogether this means that in a home scenario there is little benefit to an L2ARC outside of specific use cases. The money spent on an L2ARC drive would typically be better spent on more memory for a larger L1ARC.
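A rough sketch of that trade-off: every record cached on the L2ARC device needs a header kept in the L1ARC (system RAM). The per-record header size below is an assumed ballpark, not an exact constant (it varies between ZFS releases), but it shows why small-record workloads make an L2ARC expensive:

```python
GIB = 1024 ** 3
HEADER_BYTES = 70  # assumed per-record L1ARC bookkeeping cost; varies by release

def l2arc_ram_cost(l2arc_bytes, recordsize):
    """RAM consumed in the L1ARC just to index an L2ARC device."""
    return (l2arc_bytes // recordsize) * HEADER_BYTES

l2arc = 500 * 10 ** 9  # a hypothetical 500GB SSD used as L2ARC

# Large records (default 128K): modest cost.
print(round(l2arc_ram_cost(l2arc, 128 * 1024) / GIB, 2))  # 0.25 (GiB)
# Small records (8K, typical for VM/database workloads): the cost balloons.
print(round(l2arc_ram_cost(l2arc, 8 * 1024) / GIB, 2))    # 3.98 (GiB)
```

Either way, that RAM would have been serving cache hits directly had it stayed plain L1ARC.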





Spoiler: SLOG



This is ONLY relevant when VMs are booted from the ZFS array or other synchronous-write-heavy workloads are in play. Otherwise, just skip the SLOG.

If you must use a SLOG for sync writes, use a dedicated drive such as this one:

https://www.amazon.com/Crucial-MX300-275GB-Internal-Solid/dp/B01IAGSDJ0

The SLOG drive should be used only for the SLOG, both for the fastest performance and for endurance, because the SLOG WILL be written to every time a sync write is committed to the pool.
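On sizing, a commonly cited rule of thumb (an assumption here, not an official spec) is that the SLOG only ever holds the last couple of transaction groups of incoming sync writes, so even a small slice of a drive is plenty:

```python
TXG_SECONDS = 5  # default ZFS transaction group commit interval (assumed)

def slog_size_gib(link_gbps, txgs=2):
    """Rule-of-thumb SLOG size: a couple of txg's worth of line-rate writes."""
    bytes_per_sec = link_gbps / 8 * 1e9
    return bytes_per_sec * TXG_SECONDS * txgs / 1024 ** 3

# Even a 10Gb link can't queue more than about 12 GiB of pending sync writes.
print(round(slog_size_gib(10), 1))  # 11.6
```

So endurance and latency matter far more than capacity when picking a SLOG device.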








Spoiler: Detailed Ceph Explanation



Ceph was originally written for large-scale, high-performance applications; it is the storage system quite a few of the Top500 supercomputers use. At first glance it looks like complete overkill for home use, yet it is entirely possible to run Ceph on a single server with as few as two HDDs. Ceph can also be faster than ZFS, thanks to fewer software layers to traverse and better parallelization. Its drawback is that in practice the file-system must be run on a RAID 10-style (replicated) layout.


Spoiler: Ceph Redundancy Options



Ceph offers two redundancy options: an n-way mirror, internally referred to as "size" or replication, and a k+m parity scheme referred to as erasure coding. Documentation for these two redundancy levels can be found below

here for erasure
http://docs.ceph.com/docs/master/rados/operations/erasure-code/

here for replication
http://docs.ceph.com/docs/master/rados/operations/pools/

CephFS does not support an erasure-coded pool directly, which limits practical use to mirrors only. With some SSDs and REALLY clever use of the Ceph configuration an erasure pool can be made to work, but it is not recommended.
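The capacity trade-off between the two schemes is easy to quantify. A quick sketch (the size=3 and 4+2 profiles are illustrative choices; both tolerate two failed OSDs):

```python
def replication_efficiency(size):
    """Usable fraction of raw capacity with n-way replication."""
    return 1 / size

def erasure_efficiency(k, m):
    """Usable fraction of raw capacity with k data + m coding chunks."""
    return k / (k + m)

print(round(replication_efficiency(3), 3))  # 0.333 -> 3x raw cost per byte
print(round(erasure_efficiency(4, 2), 3))   # 0.667 -> 1.5x raw cost per byte
```

That 2x difference in usable space is why erasure pools are tempting despite the CephFS limitation above.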





Spoiler: Why Ceph?



Ceph allows for arbitrary hardware failure and expansion: while ZFS must be expanded by whole vdevs, Ceph can grow one drive at a time. Additionally, when it becomes impractical to add more drives to a single system, Ceph makes it easy to add a second file server, which can then be configured to tolerate complete system failures.





Spoiler: How to run a single node



See this page to configure Ceph to run on a single server:

https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/






I am posting this here first to get some feedback and expand on the ideas before creating a separate thread for it.


----------



## Prophet4NO1

Here it is. She is a bit war worn, but should do nicely. Especially for $50.







The two cases have no drive bays in them. So, not sure what I will do with them. Might just sell them. Or scrap them. The switch is only 10/100 with two 1gb uplink ports. So, again, kind of useless for me.


----------



## Gunfire

Sell each case for $20 and the switch for $10, BAM, made your money back


----------



## Rayleyne

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Here it is. She is a bit war worn, but should do nicely. Especially for $50.
> 
> 
> 
> 
> 
> 
> 
> The two cases have no drive bays in them. So, not sure what I will do with them. Might just sell them. Or scrap them. The switch is only 10/100 with two 1gb uplink ports. So, again, kind of useless for me.


funny enough i've been trying to hunt down cases like that on the cheap


----------



## lowfat

Quote:


> Originally Posted by *cdoublejj*
> 
> why's everyone switching to proxmox? I know wendell form levle1techs likes it. i wonder if it can do Shared Virtual Graphics like ESXi can?


For me it would be native ZFS so I'm not running my storage array in a VM.


----------



## Prophet4NO1

Quote:


> Originally Posted by *Rayleyne*
> 
> funny enough i've been trying to hunt down cases like that on the cheap


I would offer, but shipping would be killer.


----------



## cdoublejj

i think i just took one of the drive cages out of a case like that before we scrapped it.


----------



## hawkeye071292

I want to get one of those like 24U APC racks like this: https://www.cdw.com/shop/products/APC-NetShelter-SX-24U-Deep-Server-Rack-Enclosure-3006-lbs/1557398.aspx

It fits perfectly through door frames so it would go great in my literal closet.


----------



## cdoublejj

i've been running the apc su3000net and also the 2500va variant. the 3000va has a hookup for external batteries. probably just need to pick up 4 AGM car batteries or something.

it's tower style; all my stuff is non-rack.


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> I do not actually have the windows VM on my system set up with pass-through, but have done it several times. It has always been finicky for me. Not something that is as easy as some people like to make it sound... But once it is working it is usually set and forget.
> 
> I only use the windows VM to host a windows only server application.


You were not kidding when you said PCIe pass-through wasn't as easy as people make it sound. It took me at least 5 or 6 installs of Windows 10 to get the proper configuration that allowed me to finally pass-through my GPU. However, as of last night my whole system is up and running in a way I am happy with. Now I just need to transfer the Plex database files from my old Windows version of Plex to my Ubuntu container. Maybe this weekend I will tackle that...

Thanks for the pointers along the way. The ZFS cache explanation alone saved me a massive amount of pain.


----------



## zdude

Quote:


> Originally Posted by *mbmumford*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I do not actually have the windows VM on my system set up with pass-through, but have done it several times. It has always been finicky for me. Not something that is as easy as some people like to make it sound... But once it is working it is usually set and forget.
> 
> I only use the windows VM to host a windows only server application.
> 
> 
> 
> You were not kidding when you said PCIe pass-through wasn't as easy as people make it sound. It took me at least 5 or 6 installs of Windows 10 to get the proper configuration that allowed me to finally pass-through my GPU. However, as of last night my whole system is up and running in a way I am happy with. Now I just need to transfer the Plex database files from my old Windows version of Plex to my Ubuntu container. Maybe this weekend I will tackle that...
> 
> Thanks for the pointers along the way. The ZFS cache explanation alone saved me a massive amount of pain.

Glad to hear you got it working. Why do you need to transfer the database or do you mean the media files?

ALSO:
I am looking at getting a radeon instinct MI6 and trying to find a way to mod the RX480/580 into the MI6 for MxGPU pass-through (multiple VMs one GPU)


----------



## mbmumford

Quote:


> Originally Posted by *zdude*
> 
> Glad to hear you got it working. Why do you need to transfer the database or do you mean the media files?


It took some time to figure out how to transfer the media files (I ended up just installing windows and transferring it through samba), but I got it done.

I want to copy over the database files so that all the metadata, watched/unwatched status, naming, etc. doesn't need to be redone. 12+ TB of media takes a while to sift through to make sure it is all correct. This way I can just copy the old Plex files to the new install, and it is an exact copy of my old setup.
Quote:


> Originally Posted by *zdude*
> 
> ALSO:
> I am looking at getting a radeon instinct MI6 and trying to find a way to mod the RX480/580 into the MI6 for MxGPU pass-through (multiple VMs one GPU)


...And now I have to research why I don't need this. As my Plex & Samba are headless I don't have a need for this yet, however, I'm sure there will be a day it will be useful.


----------



## zdude

Quote:


> Originally Posted by *mbmumford*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> ALSO:
> I am looking at getting a radeon instinct MI6 and trying to find a way to mod the RX480/580 into the MI6 for MxGPU pass-through (multiple VMs one GPU)
> 
> 
> 
> ...And now I have to research why I don't need this. As my Plex & Samba are headless I don't have a need for this yet, however, I'm sure there will be a day it will be useful.

That's the beauty of MxGPU and vGPU: they both run headless. The video output from the VMs is streamed over the network using RDP or another in-home streaming protocol.


----------



## hawkeye071292

Quote:


> Originally Posted by *zdude*
> 
> That's the beauty of MxGPU or vGPU, they both run headless. The video output from the VMs are streamed over the network using RDP or another in home streaming protocol.


I don't think you could stream 4k over RDP like that.


----------



## zdude

Quote:


> Originally Posted by *hawkeye071292*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> That's the beauty of MxGPU or vGPU, they both run headless. The video output from the VMs are streamed over the network using RDP or another in home streaming protocol.
> 
> 
> 
> I don't think you could stream 4k over RDP like that.

The underlying GPU doesn't have the compute power to do anything more than playback at 4K, but with Nvidia vGPU it is very possible. Using Tesla M60s you can have up to four 4K streams per GPU. Each stream will only be ~30fps, but that is vGPU, not passthrough. MxGPU and vGPU are meant to fill the gap between no graphics on many clients and very fast graphics on a few clients. They provide a method to have meh graphics on a fairly large number of VMs per system.


----------



## hawkeye071292

Quote:


> Originally Posted by *zdude*
> 
> The underlying GPU doesn't have the compute power to do anything more than playback at 4k, but with nvidia vGPU it is very possible. Using Tesla M60's you can have up to 4 4k streams per GPU. Each stream will only be ~30fps but it is vGPU, not passthrough. MxGPU and vGPU are meant to fill the hole between no graphics on many clients and very fast graphics on a few clients. They provide a method to have meh graphics on a fairly large number of VMs per system.


Would you need something from the Quadro line to do that though? I didn't think the consumer-grade cards could do vGPU.


----------



## zdude

Quote:


> Originally Posted by *hawkeye071292*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> The underlying GPU doesn't have the compute power to do anything more than playback at 4k, but with nvidia vGPU it is very possible. Using Tesla M60's you can have up to 4 4k streams per GPU. Each stream will only be ~30fps but it is vGPU, not passthrough. MxGPU and vGPU are meant to fill the hole between no graphics on many clients and very fast graphics on a few clients. They provide a method to have meh graphics on a fairly large number of VMs per system.
> 
> 
> 
> Would you need something from like the Quadro line to do that though? I didnt think the consumer grade cards could do vGPU.

Not even Quadro works; it has to be a Tesla for vGPU. After Nvidia made it practically impossible to hard-mod the consumer cards into Teslas, it is not practical for consumers like us to use. Hence why, once the Instinct cards finally become available, I want to try modding the consumer cards into them.

It stands to reason it is possible, because the RX 480 can be modded into the RX 580 with just a BIOS flash.


----------



## Liranan

Currently my server is in a standard ATX case and I am thinking of getting the following 4U 15-bay drive case:

https://item.taobao.com/item.htm?spm=a230r.1.14.56.355c3afc9BUQTO&id=43526530469&ns=1&abbucket=17#detail









It's not hot-swappable, but the HDD bay swivels, allowing 'easier' access. Currently I have five 3TB HDDs and counting, so I would like a case I won't need to replace in the future. I have looked at hot-swappable cases, but they are several times more expensive than cases like this, and this one has a filter at the front.

I don't have a rack for it so it will be placed on its side in the desk I have here. The case will be well ventilated, so airflow is not a problem. My only concern is whether this case is suitable for a server, as the front fans are small, 8cm by the looks of it, with 12cm fans right behind the drives.

I will also replace the H70 cooler I have in the server as the fans are vibrating and I don't like it anyway.

I have looked at cases like this 4U 24 bay chassis but it's over four times more expensive than the one above:





What is your recommendation?


----------



## Rbby258

I have the same case and it's nice and easy to add extra drives; it also has plenty of room for hardware. It's by far the best bang for the buck.


----------



## Liranan

Quote:


> Originally Posted by *Rbby258*
> 
> I have the same case and its nice and easy to add extra drives and also has plenty of room for hardware. It's by far the best bang for the buck.


Can you show me photos of your case and drives? What worries me is drive replacement if one of them breaks.


----------



## beatfried

Quote:


> Originally Posted by *Liranan*
> 
> Can you show me photos of you case and drives? What worries me is drive replacement if one of them breaks.


you can find some pictures of mine here: http://www.overclock.net/t/731801/post-your-server/3440_20#post_25013494


----------



## Liranan

Quote:


> Originally Posted by *beatfried*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Can you show me photos of you case and drives? What worries me is drive replacement if one of them breaks.
> 
> 
> 
> you can find some pictures of mine here: http://www.overclock.net/t/731801/post-your-server/3440_20#post_25013494

Thanks, is it easy to remove and replace drives?


----------



## Prophet4NO1

Since many of you use pfsense.

UPDATE 2.4.0 is out!

32-bit is deprecated with this update, so for anyone still on 32-bit, it's the end of the line.

https://www.netgate.com/blog/pfsense-2-4-0-release-now-available.html


Spoiler: 2.4.0 Highlights



Highlights

Version 2.4.0 includes a long list of significant changes in pfSense software and in the underlying operating system and dependencies. Changes for pfSense 2.4.0 include:

- FreeBSD 11.1-RELEASE as the base Operating System
- New pfSense installer based on bsdinstall, with support for ZFS, UEFI, and multiple types of partition layouts (e.g. GPT, BIOS)
- Support for Netgate ARM devices such as the SG-1000
- OpenVPN 2.4.x support, which brings features like AES-GCM ciphers, speed improvements, Negotiable Crypto Parameters (NCP), TLS encryption, and dual stack/multihome
- Translation of the GUI into 13 different languages! For more information on contributing to the translation effort, read our previous blog post and visit the project on Zanata
- WebGUI improvements, such as a new login page, improved GET/POST CSRF handling, significant improvements to the Dashboard and its AJAX handling
- Certificate Management improvements including CSR signing and international character support
- Captive Portal has been rewritten to work without multiple instances of ipfw

Additional benefits of FreeBSD 11.0 and 11.1 include:

- Security enhancements such as address space guards to address Stack Clash
- New and updated drivers for a variety of hardware
- Updated 802.11 wireless stack
- Updated IPsec kernel implementation
- Support for Microsoft® Hyper-V™ Generation 2 virtual machines, and other Hyper-V support improvements
- Elastic Networking Adapter (ENA) support using the ena(4) FreeBSD driver for "next generation" enhanced networking on the Amazon® EC2™ platform


----------



## Rbby258

Quote:


> Originally Posted by *Liranan*
> 
> Thanks, is it easy to remove and replace drives?


My setup is currently in a bit of a mess. Drives are as easy as this.


----------



## zdude

Just realized that I haven't actually posted my physical box yet.





Specs
Case: Random old 3U 16-bay chassis w/ redundant PSUs from work
Mobo: ASRock EP2C602-4L/D16 SSI EEB
CPUs: Xeon E5-2697 v2
RAM: 4x 8GB DDR3 (needs to be upgraded to 128GB ECC)
Boot Drive: Samsung 850 EVO 250GB
ZFS drives: 8x assorted 3TB-ish drives
Ceph drives: 4x 1TB Seagates
OS: Proxmox
NIC: Mellanox ConnectX-2
HBAs: Supermicro proprietary


----------



## Liranan

Quote:


> Originally Posted by *Rbby258*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Thanks, is it easy to remove and replace drives?
> 
> 
> 
> My setup is currently in a bit of a mess. Drives are as easy as this.

You are right, it is great for the price. Cable management isn't perfect but it also doesn't cost 200 USD, which is what the 24 bay chassis I was looking at costs.


----------



## Master__Shake

you need to make some of these.


----------



## cdoublejj

Quote:


> Originally Posted by *Rbby258*
> 
> My setup is currently in a bit of a mess. Drives are as easy as this.


what case?


----------



## Rbby258

Quote:


> Originally Posted by *Liranan*
> 
> You are right, it is great for the price. Cable management isn't perfect but it also doesn't cost 200 USD, which is what the 24 bay chassis I was looking at costs.


Yeah, it does a good job for the price. The cables could be much cleaner; I've recently moved things around and not bothered tidying up again, as it works perfectly fine as is. Also, my SAS cables are massively long, which doesn't help; I think they're 1.5m cables and really only need to be around half that for this case. Also, the one you linked to has a mid-plate, which will help on the cable-management side of things. I got my case before they switched to having a mid-plate.
Quote:


> Originally Posted by *Master__Shake*
> 
> you need to make some of these.


Yes, they'd make a big difference; it's on my to-do list








Quote:


> Originally Posted by *cdoublejj*
> 
> what case?


Liranan posted a link for the case a few posts back. I'm not sure the exact model / make.


----------



## Liranan

Quote:


> Originally Posted by *cdoublejj*
> 
> what case?










It's this 4U 15 bay chassis.

https://item.taobao.com/item.htm?spm=a230r.1.14.56.355c3afc9BUQTO&id=43526530469&ns=1&abbucket=17#detail


----------



## Liranan

Quote:


> Originally Posted by *Master__Shake*
> 
> you need to make some of these.


I need to find power cables like that as I only have the ones that Robby has, which are pretty long but are messy.


----------



## Master__Shake

make them out of these

http://www.ebay.com/itm/4-Pin-Molex-to-2X-Twin-SATA-Power-Supply-Connector-Adapter-36cm-/130495594072?epid=1539053918&hash=item1e6224be58:g:vhkAAMXQCgpRwsKF

http://www.ebay.com/itm/10-x-Sata-Power-Connectors-complete-with-10-inline-4-end-caps-/152348407665?hash=item2378ac3771:g0YAAOSwEzxYSQqv
Quote:


> Originally Posted by *Liranan*
> 
> I need to find power cables like that as I only have the ones that Robby has, which are pretty long but are messy.


----------



## Jobotoo

I am planning on building my own NAS/file server and was wondering which method (RAID/JBOD) will allow me to add more drives to the array later while still having parity? So let's say I have 4x 4TB drives in my setup, and I want to add two more 4TB drives for more space, without having to completely rebuild the array. Is this even possible?

The purpose of my NAS/file server is to have a central repository of all my (my family's) files.

On a side note, my router and my PC both have 10GbE. Will I see a noticeable difference having and using 10GbE on my NAS, vs using 1Gbit?


----------



## twerk

Quote:


> Originally Posted by *Jobotoo*
> 
> I am planning on building my own NAS/File server and was wondering which method (RAID/JBOD) will allow me to add more drives later to the array while still having parity? So lets say I have 4 x 4TB drives in my setup, and I want to add more space and want to add two more 4TB drives, without having to completely rebuild the array. Is this even possible?
> 
> The purpose or my NAS/Fileserver is to have a central repository of all my(my family's) files.
> 
> On a side note, my router and my PC both have 10GbE. Will I see a noticeable difference having and using 10GbE on my NAS, vs using 1Gbit?


LVM on top of mdadm will enable you to do array expansion; it's what Synology SHR is based on, and in my experience it's really robust.

Some hardware controllers can do it too, but I'm not sure how well.

If you have 10GbE capability already, I would say it's worth hooking your NAS up via 10GbE too.


----------



## Liranan

Quote:


> Originally Posted by *Jobotoo*
> 
> I am planning on building my own NAS/File server and was wondering which method (RAID/JBOD) will allow me to add more drives later to the array while still having parity? So lets say I have 4 x 4TB drives in my setup, and I want to add more space and want to add two more 4TB drives, without having to completely rebuild the array. Is this even possible?
> 
> The purpose or my NAS/Fileserver is to have a central repository of all my(my family's) files.
> 
> On a side note, my router and my PC both have 10GbE. Will I see a noticeable difference having and using 10GbE on my NAS, vs using 1Gbit?


OpenMediaVault (Debian stable) with the SnapRAID plugin will give you extreme flexibility. unRAID will give you the same functionality but is paid software, so if you want a free solution I highly recommend OMV first, and then whichever Linux distro you are comfortable with.

I like Mint, so I run Mint XFCE on my server, but I will switch to OMV once version four is out, as OMV is just really easy to work with.


----------



## zdude

Quote:


> Originally Posted by *Jobotoo*
> 
> I am planning on building my own NAS/File server and was wondering which method (RAID/JBOD) will allow me to add more drives later to the array while still having parity? So lets say I have 4 x 4TB drives in my setup, and I want to add more space and want to add two more 4TB drives, without having to completely rebuild the array. Is this even possible?
> 
> The purpose or my NAS/Fileserver is to have a central repository of all my(my family's) files.
> 
> On a side note, my router and my PC both have 10GbE. Will I see a noticeable difference having and using 10GbE on my NAS, vs using 1Gbit?


If you are not afraid of diving into some very heavy enterprise stuff, Ceph would provide that ability and more. However, it is not for the faint of heart to set up, and there is not a single good UI available. In exchange, it provides the ability to add arbitrary hard drives (they don't even need to be the same size), to add more systems, and, with more than one system running, to tolerate the loss of a complete server.

A basic overview:
http://docs.ceph.com/docs/master/start/intro/

Quick start guide:
http://docs.ceph.com/docs/giant/start/quick-ceph-deploy/

Single node configuration guide:
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/


----------



## JedixJarf

Quote:


> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)


I passthrough an NVIDIA 1050 Ti through ESXi 6.5 to my HTPC Windows 10 VM just fine : )


----------



## zdude

Quote:


> Originally Posted by *JedixJarf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> I passthrough an NVDIA 1050 Ti through ESX 6.5 to my HTPC windows 10 VM just fine : )

That is pass-through, not vGPU. Pass-through can be made to work with some hacking on Nvidia cards, but the vGPU functionality is in the vBIOS and requires a Tesla of some kind.


----------



## Liranan

Quote:


> Originally Posted by *JedixJarf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> I passthrough an NVDIA 1050 Ti through ESX 6.5 to my HTPC windows 10 VM just fine : )

I didn't think passthrough worked in ESXi. Is there a tutorial for this?


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *JedixJarf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> I passthrough an NVDIA 1050 Ti through ESX 6.5 to my HTPC windows 10 VM just fine : )
> 
> 
> I didn't think passthrough worked in ESXi. Is there a tutorial for this?

Pardon it being in another language, but it's what Google provided. The layout's the same in any language:


That's ESXi 4.1, they've had it forever, using the vSphere client, which still works "mostly".

EDIT: This makes it "available". From there you need to open the VM settings and assign the card you passed. It can all be done in the GUI in like 5 minutes and a reboot.

EDIT2: Here's a video:


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *JedixJarf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> I passthrough an NVDIA 1050 Ti through ESX 6.5 to my HTPC windows 10 VM just fine : )
> 
> 
> I didn't think passthrough worked in ESXi. Is there a tutorial for this?
> 
> 
> Pardon it being in another language, but it's what google provided. Layouts the same in any language:
> 
> 
> That's ESXi 4.1, they've had it forever, using the vSphere client, which still works "mostly".
> 
> EDIT: This makes it "available". From there you need to open VM settings and assign the card you passed. It can all be done in GUI in like 5 minutes and a reboot.
> 
> EDIT2: Here's a video:

Fortunately I can read German









Thanks a lot, I will have to test this at some point.

Edit: is vSphere free?


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *JedixJarf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> At this time I don't belive that it can do shared graphics like ESXi, however whenever AMD gets around to releasing their instinct cards proxmox should fairly quickly pick up vGPU. Nvidia only allows it on Tesla cards (I won't buy teslas for my server)
> 
> 
> 
> I passthrough an NVDIA 1050 Ti through ESX 6.5 to my HTPC windows 10 VM just fine : )
> 
> 
> I didn't think passthrough worked in ESXi. Is there a tutorial for this?
> 
> 
> Pardon it being in another language, but it's what google provided. Layouts the same in any language:
> 
> 
> 
> That's ESXi 4.1, they've had it forever, using the vSphere client, which still works "mostly".
> 
> EDIT: This makes it "available". From there you need to open VM settings and assign the card you passed. It can all be done in GUI in like 5 minutes and a reboot.
> 
> EDIT2: Here's a video:
> 
> 
> 
> 
> 
> 
> Fortunately I can read German
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks a lot, I will have to test this at some point.
> 
> Edit: is vSphere free?

ESXi is re-branded as vSphere, and yes, it's free up to certain limits. See here, under Tech Specs, Specifications: https://www.vmware.com/products/vsphere-hypervisor.html

The vSphere client is technically deprecated, and is also free, downloadable from HTTP://YOUR.ESX.SERVER.IP


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> ESXi is re-branded to vSphere and yes it's free up to certain limits. See here, under Tech Specs, Specifications: https://www.vmware.com/products/vsphere-hypervisor.html
> 
> The vSphere client is technically depreciated, and is also free. And downloadable from HTTP://YOUR.ESX.SERVER.IP


The only problem I see with vSphere is the vCPU limit. While 8 virtual cores were not a problem in the past, now it's a serious limit, especially with Zen and Intel making 8 cores mainstream.


----------



## twerk

Quote:


> Originally Posted by *Liranan*
> 
> The only problem I see with vSphere is the vCPU limit. While 8 virtual cores were not a problem in the past now it's a serious limit, especially with Zen and Intel making 8 cores mainstream.


Remember, that's the limit on vCPUs assigned to a single VM, not physical cores in the machine. The limit on physical cores is 480. So you could have a 64-core machine with 8 VMs, each assigned 8 cores; that is allowed on the free license.

I've never assigned more than that many cores to a VM anyway.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> ESXi is re-branded to vSphere and yes it's free up to certain limits. See here, under Tech Specs, Specifications: https://www.vmware.com/products/vsphere-hypervisor.html
> 
> The vSphere client is technically depreciated, and is also free. And downloadable from HTTP://YOUR.ESX.SERVER.IP
> 
> 
> 
> The only problem I see with vSphere is the vCPU limit. While 8 virtual cores were not a problem in the past now it's a serious limit, especially with Zen and Intel making 8 cores mainstream.

Per VM.

It actually isn't possible to hit the ESXi CPU cap at the moment even with an 8P 28C Skylake-X server.

EDIT: I run two R710s with dual 6c/12t chips. Your main limitation is that your free licence is limited to three hardware servers total.


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> ESXi is re-branded to vSphere and yes it's free up to certain limits. See here, under Tech Specs, Specifications: https://www.vmware.com/products/vsphere-hypervisor.html
> 
> The vSphere client is technically deprecated, and is also free. And downloadable from HTTP://YOUR.ESX.SERVER.IP
> 
> 
> 
> The only problem I see with vSphere is the vCPU limit. While 8 virtual cores were not a problem in the past now it's a serious limit, especially with Zen and Intel making 8 cores mainstream.
> 
> Click to expand...
> 
> Per VM.
> 
> It actually isn't possible to hit the ESXi CPU cap at the moment even with an 8P 28C Skylake-X server.
> 
> EDIT: I run two R710s with dual 6c/12t chips. Your main limitation is that your free licence is limited to three hardware servers total.
Click to expand...

What is the performance penalty of running VMs in ESXi/vSphere using passthrough as opposed to running the OS natively? Is the performance similar to other Linux passthrough technologies? I'm asking as ESXi/vSphere are Linux based.


----------



## twerk

Quote:


> Originally Posted by *Liranan*
> 
> What is the performance penalty of running VM's in ESXi/vSphere using pass through as opposed to running the OS natively? Is the performance similar to other Linux pass through technologies? I'm asking as ESXi/vSphere are Linux based.


ESXi is not Linux or even Unix based; the kernel was built from the ground up.

The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> What is the performance penalty of running VM's in ESXi/vSphere using pass through as opposed to running the OS natively? Is the performance similar to other Linux pass through technologies? I'm asking as ESXi/vSphere are Linux based.


What @twerk said, it is not *Nix based.

Near as I can tell there is no penalty besides a couple hundred megabytes of RAM. I've never experienced one anyway. Benefits like vSwitch and hardware abstraction make up for the extra usage easily on their own.

The issue is that you need to pass through the GPU AND a USB controller if you want to do it that way. If you don't use HDMI for sound you'll also need a sound card or a USB DAC of some kind, and you'll only have those passed-through USB ports, so probably a hub too. Recent versions of ESXi no longer support consumer Realtek/Killer NICs either, so you may need a network card if you're doing this on consumer hardware, unless you'd like to try getting third-party drivers working (they do exist).

I actually have an ooooooooold youtube video of me splitting an 8320 and two 6970s into two VMs with ESXi, with functional graphics drivers and Win7, it always comes down to hardware.


----------



## hawkeye071292

Quote:


> Originally Posted by *Liranan*
> 
> Fortunately I can read German


Fortunately I can read esxi! xD


----------



## Liranan

Quote:


> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> What is the performance penalty of running VM's in ESXi/vSphere using pass through as opposed to running the OS natively? Is the performance similar to other Linux pass through technologies? I'm asking as ESXi/vSphere are Linux based.
> 
> 
> 
> ESXi is not Linux or in fact Unix based, the kernel was built from the ground up.
> 
> The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.
Click to expand...

The guy who sued them lost the case and I didn't realise it. The case was lost on procedural grounds rather than on its merits, which is why an appeal is pending.

If they have rewritten the code, as they said they would, then it is, indeed, not Linux based now.

If RAM overhead is all that there is to worry about then it's nothing. Adding a little more RAM is easy enough.

Edit: VMWare has the ability to virtualise an existing OS. Can this VM then be imported into vSphere?


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> What is the performance penalty of running VM's in ESXi/vSphere using pass through as opposed to running the OS natively? Is the performance similar to other Linux pass through technologies? I'm asking as ESXi/vSphere are Linux based.
> 
> 
> 
> ESXi is not Linux or in fact Unix based, the kernel was built from the ground up.
> 
> The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.
> 
> Click to expand...
> 
> The guy who sued them lost the case and I didn't realise it. The loss of the case was due to legality, and not due to the merit of the case itself, which is why an appeal is pending.
> 
> If they have rewritten the code, as they said they would, then it is, indeed, not Linux based now.
> 
> If RAM overhead is all that there is to worry about then it's nothing. Adding a little more RAM is easy enough.
> 
> Edit: VMWare has the ability to virtualise an existing OS. Can this VM then be imported into vSphere?
Click to expand...

All VMware VMs are a .vmx config file plus a .vmdk virtual disk, on both Workstation and ESXi, so yes. They can be copied between Workstation, Player, and vSphere, and are supported across versions.
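As a sketch of what "copying a VM" amounts to: it's just moving that pair of files. The paths and names below are made up for illustration; `vim-cmd` is the ESXi-side registration command and is shown commented out because it only exists on an actual ESXi host:

```shell
# A VMware VM is, at minimum, a directory holding a .vmx config and .vmdk disk(s).
mkdir -p /tmp/demo-vm
touch /tmp/demo-vm/demo.vmx /tmp/demo-vm/demo.vmdk

# Archive the directory to move it between hosts (scp, USB stick, whatever):
tar -czf /tmp/demo-vm.tar.gz -C /tmp demo-vm

# On the ESXi side you would unpack it into a datastore and register the .vmx,
# e.g. (only works on an ESXi host):
#   vim-cmd solo/registervm /vmfs/volumes/datastore1/demo-vm/demo.vmx
ls -l /tmp/demo-vm.tar.gz
```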


----------



## cdoublejj

Yeah, I guess vSGA is dead. I wonder if AMD will be able to do vGPU, or if licensing costs are a big hurdle. If vGPU comes to Linux-based or open-source hypervisors with easy setup for both NVIDIA and AMD, then ESXi is going to have some competition in that respect. I think they already have their own versions of vMotion.

EDIT: Good write-up on the "ESXi is not based on Linux" topic: https://www.v-front.de/2013/08/a-myth-busted-and-faq-esxi-is-not-based.html. It even explains where and why the notion became a thing.


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> yeah i guess vSGA is dead, i wonder if AMD will be able to do vGPU or if licensing cost are a big hurdle if vGPU comes to linux based or OS hypervisors with nivdia and AMD WITH easy setup then uh... ESXi is gonna have some competition, in that respect. i already think they have their own versions of Vmotion.
> 
> EDIT: good write up on the ESXi is not based on linux topic, https://www.v-front.de/2013/08/a-myth-busted-and-faq-esxi-is-not-based.html ,it even explains where/why the notion even is a thing.


My bet is that there will be vGPU on KVM (standard Linux) within a year. Whether that will be achievable with consumer hardware, I don't know yet.


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> What is the performance penalty of running VM's in ESXi/vSphere using pass through as opposed to running the OS natively? Is the performance similar to other Linux pass through technologies? I'm asking as ESXi/vSphere are Linux based.
> 
> 
> 
> ESXi is not Linux or in fact Unix based, the kernel was built from the ground up.
> 
> The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.
> 
> Click to expand...
> 
> The guy who sued them lost the case and I didn't realise it. The loss of the case was due to legality, and not due to the merit of the case itself, which is why an appeal is pending.
> 
> If they have rewritten the code, as they said they would, then it is, indeed, not Linux based now.
> 
> If RAM overhead is all that there is to worry about then it's nothing. Adding a little more RAM is easy enough.
> 
> Edit: VMWare has the ability to virtualise an existing OS. Can this VM then be imported into vSphere?
> 
> Click to expand...
> 
> All vmware vms are a .vmx config file, and a .vmdk virtual drive, both on workstation and esxi, so yes. They can be coppied between Workstation, Player, and vSphere, and are supported cross version.
Click to expand...

I will virtualise my current Windows 8.1 install and try GPU passthrough in vSphere this coming weekend.

Quote:


> Originally Posted by *zdude*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> yeah i guess vSGA is dead, i wonder if AMD will be able to do vGPU or if licensing cost are a big hurdle if vGPU comes to linux based or OS hypervisors with nivdia and AMD WITH easy setup then uh... ESXi is gonna have some competition, in that respect. i already think they have their own versions of Vmotion.
> 
> EDIT: good write up on the ESXi is not based on linux topic, https://www.v-front.de/2013/08/a-myth-busted-and-faq-esxi-is-not-based.html ,it even explains where/why the notion even is a thing.
> 
> 
> 
> My bet is that there will be vGPU on kvm (standard linux) within a year. If that will be achievable with consumer hardware I don't know yet.
Click to expand...

AMD makes passthrough relatively easy compared with NVIDIA, so it depends on whether they want to. Considering NVIDIA wants to sell Teslas and Grids, they definitely don't want to make it easy.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.


Word of advice. Unplug all your HDDs and install vSphere to a USB stick or SD card. You don't want your install bound to your drives and certainly not a raid array, as then you'll be forced to update instead of just making another stick with the new version and copying the configs. Only thing it will affect is vSphere boot time. Whole point of ESX is the abstraction after all.

Second word of advice: be sure to back up everything, as vSphere uses a proprietary partition format (VMFS). You can copy to/from it using any of the clients from any OS (the modern one is web based), but the partition itself will be largely unreadable and require a format. ESX will not read NTFS, FAT, EXT, or anything else.


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> 
> 
> Word of advice. Unplug all your HDDs and install vSphere to a USB stick or SD card. You don't want your install bound to your drives and certainly not a raid array, as then you'll be forced to update instead of just making another stick with the new version and copying the configs. Only thing it will affect is vSphere boot time. Whole point of ESX is the abstraction after all.
> 
> Second word of advice, be sure to back up everything as vSphere uses a proprietary partition format. You can copy to/from it using any of the clients from any OS (modern one is web based), but the partition itself will be largely unreadable and require a format. ESX will not read NTFS, FAT, EXT, or anything else.
Click to expand...

ESXi will wipe a drive and use the entire drive? If so I will need to back up all the data on my SSD first.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> 
> 
> Word of advice. Unplug all your HDDs and install vSphere to a USB stick or SD card. You don't want your install bound to your drives and certainly not a raid array, as then you'll be forced to update instead of just making another stick with the new version and copying the configs. Only thing it will affect is vSphere boot time. Whole point of ESX is the abstraction after all.
> 
> Second word of advice, be sure to back up everything as vSphere uses a proprietary partition format. You can copy to/from it using any of the clients from any OS (modern one is web based), but the partition itself will be largely unreadable and require a format. ESX will not read NTFS, FAT, EXT, or anything else.
> 
> Click to expand...
> 
> ESXi will wipe a drive and use the entire drive? If so I will need to back up all the data on my SSD first.
Click to expand...

ESXi itself will use whatever drive you give it, that's why I suggest installing it to a USB stick. A simple 4GB stick is more than enough, you can even get away with a 1GB stick.

As for file handling in ESX...
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-5AC611E0-7CEB-4604-A03C-F600B1BA2D23.html

Hardware level:
- USB drives
- SATA drives
- NAS drives
- SAN LUNs
- M.2/PCI-e drives
- Hardware RAID arrays

ESX level:
- ESX "drive"
- Datastores

VM level:
- VMDK files

Guest OS level:
- "Drives"

Datastores are like your NTFS or EXT partitions at the hypervisor level; to ESX they are your usable space. There can be multiple datastores on one drive, and presumably they will only use the space you tell them to, though I've never been in a situation where a datastore got anything less than the full drive. Datastores may be uploaded to or downloaded from using the client.

VMDKs are your virtual disks for the guest OS, and they can be configured as "thin" (only consuming the space actually written, up to a set cap) or "thick" (space is pre-allocated, even if only zeroed). VMDKs can move, and VMs can have multiple VMDKs from multiple drives, e.g. OS on SSD, data on HDD. You can effectively treat it like it's hardware level and assign as you wish; this shouldn't be different from other VM software aside from the terminology.

But you can also just straight up assign a physical drive directly to a VM if you want and skip the whole file system thing, it just isn't recommended because now you can't copy or back up the drive at all outside of traditional means.

vSphere just isn't designed for sharing drives with other OSes; it is designed to be the base OS on a server.
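Thin provisioning behaves much like a sparse file on an ordinary filesystem: the apparent size is the cap the guest sees, while blocks are only allocated as data is actually written. A rough analogy in plain Python (this creates a sparse file, not a real VMDK, and the block accounting assumes a Unix-like filesystem):

```python
import os

path = "/tmp/thin_demo.img"

# Create a "thin" 100 MiB file: seek to the end and write a single byte.
# No intermediate blocks are allocated by the filesystem.
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
apparent = st.st_size        # what the "guest" would see: 100 MiB
allocated = st.st_blocks * 512  # what the "datastore" actually spent

print(f"apparent size:     {apparent} bytes")
print(f"allocated on disk: {allocated} bytes")
# On most Unix filesystems the allocated size is far below the apparent size,
# which is exactly the thin-vs-thick trade-off described above.

os.remove(path)
```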


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> 
> 
> Word of advice. Unplug all your HDDs and install vSphere to a USB stick or SD card. You don't want your install bound to your drives and certainly not a raid array, as then you'll be forced to update instead of just making another stick with the new version and copying the configs. Only thing it will affect is vSphere boot time. Whole point of ESX is the abstraction after all.
> 
> Second word of advice, be sure to back up everything as vSphere uses a proprietary partition format. You can copy to/from it using any of the clients from any OS (modern one is web based), but the partition itself will be largely unreadable and require a format. ESX will not read NTFS, FAT, EXT, or anything else.
> 
> Click to expand...
> 
> ESXi will wipe a drive and use the entire drive? If so I will need to back up all the data on my SSD first.
> 
> Click to expand...
> 
> ESXi itself will use whatever drive you give it, that's why I suggest installing it to a USB stick. A simple 4GB stick is more than enough, you can even get away with a 1GB stick.
> 
> As for file handling in ESX...
> https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-5AC611E0-7CEB-4604-A03C-F600B1BA2D23.html
> 
> Hardware level:
> - USB drives
> - SATA drives
> - NAS drives
> - SAN LUNs
> - M.2/PCI-e drives
> - Hardware RAID arrays
> 
> ESX level:
> - ESX "drive"
> - Datastores
> 
> VM level:
> - VMDK files
> 
> Guest OS level:
> - "Drives"
> 
> Datastores are like your NTFS or EXT partitions at the hypervisor level, to ESX they are your usable space. There can be multiple datastores on one drive and assumedly they will only use the space you tell them to. I've never been in a situation where they would get anything less than the full drive. Datastores may be uploaded to or downloaded from using the client.
> 
> VMDKs are your virtual disks for the guest OS, and they can be configured as "Thin" (only use the space they actually use, up to a certain cap) or Thick (space is pre-allocated and used, even if only zeroed). VMDKs can move and VMs can have multiple VMDKs from multiple drives, IE OS on SSD, data on HDD. You can effectively treat it like it's hardware level and just assign as you wish, this shouldn't be different from other VM software aside from the terminology.
> 
> But you can also just straight up assign a physical drive directly to a VM if you want and skip the whole file system thing, it just isn't recommended because now you can't copy or back up the drive at all outside of traditional means.
> 
> vSphere isn't exactly designed to share is all, it is designed to be on a server as the base OS.
Click to expand...

So basically it acts just like VMware Workstation or VirtualBox and creates a VM file.


----------



## cdoublejj

Quote:


> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> AMD make passthrough relatively easy compared with nVidia so it depends on whether they want to and considering nVidia want to sell Tesla's and Grids they definitely don't want to make it achievable.


Yeah, but passthrough only powers one VM per GPU, and the more GPUs you add, the fewer slots you have for other stuff like RAID and 10 Gbps fibre cards.
Quote:


> Originally Posted by *KyadCK*
> 
> Word of advice. Unplug all your HDDs and install vSphere to a USB stick or SD card. You don't want your install bound to your drives and certainly not a raid array, as then you'll be forced to update instead of just making another stick with the new version and copying the configs. Only thing it will affect is vSphere boot time. Whole point of ESX is the abstraction after all.
> 
> Second word of advice, be sure to back up everything as vSphere uses a proprietary partition format. You can copy to/from it using any of the clients from any OS (modern one is web based), but the partition itself will be largely unreadable and require a format. ESX will not read NTFS, FAT, EXT, or anything else.



Also, if you wish to rebuild or create a new RAID array, you can just migrate the data instead of reinstalling the OS. SATA DOMs are also a faster alternative to SD cards and USB sticks.


----------



## hawkeye071292

You have to make sure the VM's virtual hardware version matches the ESXi host you are putting it on. If you are running ESXi 6.5 but your VMs are on a much older virtual hardware version, you should step-upgrade them. Being on the latest virtual hardware build is recommended, and make your datastore VMFS 6, not 5. VMFS 6 is much better, with more features such as space reclamation. I almost always thin provision my VMs as well, unless the VM is storing a database. If you are using Server 2016, I also recommend ReFS instead of NTFS.


----------



## cdoublejj

Does anyone know if enabling NUMA, at least on Nehalem, has any benefit over a normal setup with ESXi or any other hypervisor? I noticed a BIOS option in my T5500 to set up NUMA.


----------



## KyadCK

Quote:


> Originally Posted by *cdoublejj*
> 
> Does anyone know if setting up as NUMA at least on nehalem has any benefit of a normal setup with ESXi or any other hypervisor? i noticed i a bios option in my T5500 to setup NUMA.


I actually went to VMworld, and their answer was "unless you know exactly what you're doing, just change the number of CPUs and let ESX do its thing".









EDIT: ESXi's built-in NUMA recognition is competent.


----------



## Liranan

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> AMD make passthrough relatively easy compared with nVidia so it depends on whether they want to and considering nVidia want to sell Tesla's and Grids they definitely don't want to make it achievable.
> 
> 
> 
> Yeah but, passthrough will only power 1 VM and the more GPUs the less slots for other stuff like raid and 10gbps fiber.
> 
> also if you wish to rebuild or do a new RAID you can just migrate the data vs reinstall the OS. SATAdoms are also faster alternative to SD cards and USB.
Click to expand...

I have no use for RAID or 10GbE cards in my main PC, so I have free PCIe slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB of RAM, as 16 just isn't enough anymore.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> AMD make passthrough relatively easy compared with nVidia so it depends on whether they want to and considering nVidia want to sell Tesla's and Grids they definitely don't want to make it achievable.
> 
> 
> 
> Yeah but, passthrough will only power 1 VM and the more GPUs the less slots for other stuff like raid and 10gbps fiber.
> 
> also if you wish to rebuild or do a new RAID you can just migrate the data vs reinstall the OS. SATAdoms are also faster alternative to SD cards and USB.
> 
> Click to expand...
> 
> I have no use for RAID or 10GBE cards in my main PC, so I have free PCIE slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB RAM as 16 just isn't enough anymore.
Click to expand...

As long as you know you'll need a second PC to access any VM except the one you assign the GPU to, as well as for most of the config work. ESXi does no local video output besides a config GUI, and VMs that don't have a dedicated GPU are remote access only.


----------



## cekim

Quote:


> Originally Posted by *twerk*
> 
> ESXi is not Linux or in fact Unix based, the kernel was built from the ground up.
> 
> The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.


It's time for me to test this again... The last time this claim was made, reviews showed a 5-8% hit for compute, but I'm not sure I've seen tests on a Haswell Xeon or newer, which have been busy improving the IOMMU.

When I tested my personal apps (typical 1-10GB run-time memory image, IPC-heavy, multi-threaded) I saw as much as a 15-18% hit on the same machine, but that was with ESXi 6.0 and I honestly don't recall if that was Haswell or Sandy Bridge. It's been a while since I did that...

Such a pain to test apples:apples, but I'd love to see this overhead promise finally come true for my needs...

I know KVM showed improvement as recently as Broadwell Xeons, but still in the 10%+ range for my use case...


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> AMD make passthrough relatively easy compared with nVidia so it depends on whether they want to and considering nVidia want to sell Tesla's and Grids they definitely don't want to make it achievable.
> 
> 
> 
> Yeah but, passthrough will only power 1 VM and the more GPUs the less slots for other stuff like raid and 10gbps fiber.
> 
> also if you wish to rebuild or do a new RAID you can just migrate the data vs reinstall the OS. SATAdoms are also faster alternative to SD cards and USB.
> 
> Click to expand...
> 
> I have no use for RAID or 10GBE cards in my main PC, so I have free PCIE slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB RAM as 16 just isn't enough anymore.
> 
> Click to expand...
> 
> As long as you know you'll need a 2nd PC to actually access any VM but the one you assign the GPU to as well as most of the config work. It does not do any video output locally besides a config gui, and VMs that don;t have a dedi GPU are remote access.
Click to expand...

Will each GPU need to be connected to its own dedicated screen, or can all VMs share one screen by connecting it to one of the GPUs?


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> I will virtualise my current Windows 8.1 and GPU passthrough in vSphere this coming weekend.
> 
> AMD make passthrough relatively easy compared with nVidia so it depends on whether they want to and considering nVidia want to sell Tesla's and Grids they definitely don't want to make it achievable.
> 
> 
> 
> Yeah but, passthrough will only power 1 VM and the more GPUs the less slots for other stuff like raid and 10gbps fiber.
> 
> also if you wish to rebuild or do a new RAID you can just migrate the data vs reinstall the OS. SATAdoms are also faster alternative to SD cards and USB.
> 
> Click to expand...
> 
> I have no use for RAID or 10GBE cards in my main PC, so I have free PCIE slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB RAM as 16 just isn't enough anymore.
> 
> Click to expand...
> 
> As long as you know you'll need a 2nd PC to actually access any VM but the one you assign the GPU to as well as most of the config work. It does not do any video output locally besides a config gui, and VMs that don't have a dedi GPU are remote access.
> 
> Click to expand...
> 
> Will each GPU need to be connected to their own dedicated screen or can all VM's share the same screen by connecting the screen one of the GPU's?
Click to expand...

It's hardware passthrough. Nothing can even see the GPU except the VM it is assigned to, full stop, not even ESXi.

You'll need a screen per VM, you'll need a KB/mouse per VM, you'll need to assign USB cards per VM, and anything else you want them to have as well for any VM you plan to connect to physically instead of remotely. If you want to access two VMs this way then you'll need an absolute minimum of 4 PCI-e slots in use.

ESXi is server software, it's not designed to be accessed locally. Assigning cards like this is for giving your CAD server a GPU and still access it remotely, not to plug in. The good news is that doing so this way massively cuts latency compared to software redirects and sharing, much closer to native.
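The slot math above can be sketched as follows. The "one GPU plus one USB controller per locally used VM" rule is from this post; the helper function is just an illustration:

```python
def pcie_slots_needed(local_vms, extras_per_vm=0):
    """Minimum PCI-e slots for VMs used at the physical console under ESXi
    passthrough: one GPU plus one USB controller each, plus any extras
    (sound card, NIC, etc.) per VM."""
    per_vm = 2 + extras_per_vm  # GPU + USB controller
    return local_vms * per_vm

# Two locally used VMs need an absolute minimum of 4 slots, as stated above:
print(pcie_slots_needed(2))                   # 4
# Give each VM a sound card too and you're at 6:
print(pcie_slots_needed(2, extras_per_vm=1))  # 6
```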


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> It's hardware passthrough. Nothing can even see the GPU except the VM it is assigned to, full stop, not even ESXi.
> 
> You'll need a screen per VM, you'll need a KB/mouse per VM, you'll need to assign USB cards per VM, and anything else you want them to have as well for any VM you plan to connect to physically instead of remotely. If you want to access two VMs this way then you'll need an absolute minimum of 4 PCI-e slots in use.
> 
> ESXi is server software, it's not designed to be accessed locally. Assigning cards like this is for giving your CAD server a GPU and still access it remotely, not to plug in. The good news is that doing so this way massively cuts latency compared to software redirects and sharing, much closer to native.


So it's just like normal passthrough. I will definitely test this, as I would like to create two VMs: one Windows for gaming and another Linux for everything else. Sounds like great fun; I just need to deal with the two GPUs and peripherals. If this doesn't work well I will try regular Linux passthrough.


----------



## cekim

Quote:


> Originally Posted by *Liranan*
> 
> So it's just like normal passthrough. I will definitely test this as I would like to create two VM's, one Windows for gaming and another Linux for everything else. Sounds like great fun, I just need to deal with the two GPU's and peripherals. If this doesn't work well I will try regular Linux passthrough.


We are getting tantalizingly close to this dream, but as of now it's still very hit-or-miss. In particular, I've found that USB and sound connectivity, coupled with random GPU glitches, produce a "work in progress" experience that varies wildly with your hardware selection. Some hardware works better than others, whether it be motherboard, USB, sound, or GPU...

Regarding USB specifically, I have an ESXi VM that requires a USB hardware dongle for software licensing. It works fine _most_ of the time, but occasionally, randomly, it cannot find the dongle...

The very latest versions of their software have eliminated the dongle, but when you are dealing with multi-thousand-dollar software, you upgrade when you must, not just for convenience. My point was that if USB is dropping out there, it is dropping out elsewhere, which makes for interesting gaming problems given latency sensitivity.

I've been dreaming of a world where I don't have to dual boot or keep multiple machines for games and work, for 25 years now... Still dreaming, but so, so close. The distance now is stability and latency, not basic functionality (though the loss of SLI means it's hard to achieve 144-165Hz @ 1440p with current GPUs).


----------



## KyadCK

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> So it's just like normal passthrough. I will definitely test this as I would like to create two VM's, one Windows for gaming and another Linux for everything else. Sounds like great fun, I just need to deal with the two GPU's and peripherals. If this doesn't work well I will try regular Linux passthrough.
> 
> 
> 
> We are getting tantalizingly close to this dream, but as of now its still very hit-or-miss. In particularly I've found that usb and sound connectivity coupled with random glitches in GPU as well produce a "work in progress" experience that varies wildly with your hardware selection. Some works better than others whether it be MB, USB, Sound or GPU...
> 
> regarding USB specifically, I have an ESXi VM that requires a USB hardware dongle for licensing of software. It works find _most_ of the time, but occasionally, randomly, it cannot find the dongle...
> 
> The very latest versions of their software have eliminated the dongle, but when you are dealing with $multi-thousand software, you upgrade when you must, not just for convenience. My point was that if the USB is dropping out there, it is dropping out elsewhere which makes for interesting gaming problems given latency sensitivity.
> 
> I've been dreaming of a world where I did not have to dual boot or have multiple machines for games and work for 25 years now.... Still dreaming, but so, so close. The distance now is stability and latency, not basic functionality (though the loss of SLI means its hard to achieve 144-165Hz @ 1440p with current GPUs)

That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.









Also, who says you can't do SLI this way? You absolutely can, provided you have the money and the lanes.
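For anyone wondering what that looks like in practice, here's a rough command sketch of handing a whole USB controller to vfio-pci on a Linux host. The PCI address `0000:03:00.0` and the `1b21 1242` vendor:device pair are made-up examples; yours will differ, and these steps need root and IOMMU-capable hardware.

```shell
# Find the USB controller's PCI address and vendor:device IDs
lspci -nn | grep -i usb
# e.g. "03:00.0 USB controller [0c03]: ASMedia ... [1b21:1242]"

# Unbind it from its host driver and hand it to vfio-pci
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 1b21 1242 > /sys/bus/pci/drivers/vfio-pci/new_id
```

Once bound, the controller can be attached to the VM like any other passthrough PCI device, and every port on the card follows it.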


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also who says you can't do SLI this way? You absolutely can provided you have the money and the lanes.


Nvidia requires something like "Different SLI", which is iffy itself...


----------



## KyadCK

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also who says you can't do SLI this way? You absolutely can provided you have the money and the lanes.
> 
> 
> 
> Nvidia requires something like "different SLI" which is iffy itself...

IOMMU is an address-redirection table; nothing about it forces any hardware to work in an abnormal way.


----------



## zdude

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also who says you can't do SLI this way? You absolutely can provided you have the money and the lanes.
> 
> 
> 
> Nvidia requires something like "different SLI" which is iffy itself...
> 
> 
> IOMMU is an address redirect table, nothing about that forces any hardware to work in an abnormal way.

I can attest that SLI does work in passthrough, and so does CrossFire... I did it using KVM, but I see no reason it wouldn't work on another hypervisor.


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Liranan*
> 
> So it's just like normal passthrough. I will definitely test this as I would like to create two VM's, one Windows for gaming and another Linux for everything else. Sounds like great fun, I just need to deal with the two GPU's and peripherals. If this doesn't work well I will try regular Linux passthrough.
> 
> 
> 
> We are getting tantalizingly close to this dream, but as of now it's still very hit-or-miss. In particular I've found that USB and sound connectivity, coupled with random GPU glitches, produce a "work in progress" experience that varies wildly with your hardware selection. Some combinations work better than others, whether it be MB, USB, sound or GPU...
> 
> Regarding USB specifically, I have an ESXi VM that requires a USB hardware dongle for licensing of software. It works fine _most_ of the time, but occasionally, randomly, it cannot find the dongle...
> 
> The very latest versions of their software have eliminated the dongle, but when you are dealing with multi-thousand-dollar software, you upgrade when you must, not just for convenience. My point was that if the USB is dropping out there, it is dropping out elsewhere, which makes for interesting gaming problems given how latency-sensitive games are.
> 
> I've been dreaming of a world where I did not have to dual-boot or keep multiple machines for games and work for 25 years now... Still dreaming, but so, so close. The gap now is stability and latency, not basic functionality (though the loss of SLI means it's hard to achieve 144-165Hz @ 1440p with current GPUs).
> 
> 
> That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also who says you can't do SLI this way? You absolutely can provided you have the money and the lanes.

What is a USB expansion card?


----------



## xxpenguinxx

Quote:


> Originally Posted by *Liranan*
> 
> What is a USB expansion card?


Most likely you would need a PCIe to USB card. Something like this.

Link to one on Newegg with decent reviews: https://www.newegg.com/Product/Product.aspx?item=N82E16815287016


----------



## Liranan

Even with regular Linux passthrough I will need two GPUs and at least one PCIe USB card, so this is not a problem. The only problem is how to manage all of this.

I think I will need to connect both cards to my screen, one through VGA or HDMI and the other through DisplayPort, and then switch between inputs on the screen. I need to test whether this is possible with this screen; alternatively I will have to get a second cheap screen to run the non-gaming OS (Linux Mint, most likely). Keyboard and mouse are going to be a little messier, as I dislike wireless, but it might be the only option.

Fortunately GPUs can be had quite cheaply, so that's not a problem at all.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> I can attest that SLI does work in passthrough, and so does CrossFire... I did it using KVM, but I see no reason it wouldn't work on another hypervisor.


It's been about 9 months since I tried, but it was explicitly blocked by Nvidia's drivers, save for hacks like "Different SLI". Using KVM, which chipset option did you choose?

It's not a hardware issue, it's Nvidia blocking features in their drivers.

Yeah, I just did another quick search and all the same responses come back - I'd love to hear more about your setup for Nvidia SLI on KVM.

CrossFire is a different animal; as far as I know, it does not check for chipset compatibility or licensing, so you don't have to spoof it.


----------



## cdoublejj

Quote:


> Originally Posted by *KyadCK*
> 
> I actually went to VMWorld, and their answer was "unless you know exactly what you're doing, just change the number of CPUs and let ESX do it's thing".
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Built in NUMA recognition that is competent.


What do you mean by change the number of cores? Like on the VMs? Leave NUMA off?


----------



## KyadCK

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> I actually went to VMWorld, and their answer was "unless you know exactly what you're doing, just change the number of CPUs and let ESX do it's thing".
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Built in NUMA recognition that is competent.
> 
> 
> 
> What do you mean by change the number of cores? Like on the VMs? Leave NUMA off?

Not cores. CPUs.

Change the number of sockets, not the number of cores per socket. ESXi understands NUMA structures natively and can work around them; setting cores per socket to more than one disables this.
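In `.vmx` terms, that advice comes down to two settings (a sketch for an 8-vCPU guest; `numvcpus` is the total vCPU count, and leaving `cpuid.coresPerSocket` at 1 lets ESXi place vCPUs across NUMA nodes itself):

```
numvcpus = "8"
cpuid.coresPerSocket = "1"
```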


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> I actually went to VMWorld, and their answer was "unless you know exactly what you're doing, just change the number of CPUs and let ESX do it's thing".
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Built in NUMA recognition that is competent.
> 
> 
> 
> What do you mean by change the number of cores? Like on the VMs? Leave NUMA off?
> 
> 
> Not cores. CPUs.
> 
> Change number of sockets, not number of cores per socket. ESXi is coherent enough to understand NUMA structures natively and work around them. Making cores per socket more than one disables this.

I am confused as to what you mean. Does this mean that if you want to assign 8 cores you need to virtualise two sockets and assign 4 cores to each?


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I can attest that SLI does work in passthrough, and so does CrossFire... I did it using KVM, but I see no reason it wouldn't work on another hypervisor.
> 
> 
> 
> It's been about 9 months since I tried, but it was explicitly blocked by Nvidia's drivers, save for hacks like "Different SLI". Using KVM, which chipset option did you choose?
> 
> It's not a hardware issue, it's Nvidia blocking features in their drivers.
> 
> Yeah, I just did another quick search and all the same responses come back - I'd love to hear more about your setup for Nvidia SLI on KVM.
> 
> CrossFire is a different animal; as far as I know, it does not check for chipset compatibility or licensing, so you don't have to spoof it.

I did SLI with Quadros (P6000s) at work, so it may well be that they limit it on the GeForce line, just like they throw Code 43 any time a VM is detected.


----------



## KyadCK

Quote:


> Originally Posted by *Liranan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> I actually went to VMWorld, and their answer was "unless you know exactly what you're doing, just change the number of CPUs and let ESX do it's thing".
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Built in NUMA recognition that is competent.
> 
> 
> 
> What do you mean by change the number of cores? Like on the VMs? Leave NUMA off?
> 
> 
> Not cores. CPUs.
> 
> Change number of sockets, not number of cores per socket. ESXi is coherent enough to understand NUMA structures natively and work around them. Making cores per socket more than one disables this.
> 
> 
> 
> 
> 
> I am confused as to what you mean. Does this mean that if you want to assign 8 cores you need to virtualise two sockets and assign 4 cores to each?

Set sockets to 8 and leave cores per socket at one; ESXi will handle NUMA on its own. The picture is just there to provide wording examples, because it pays to be specific.


----------



## cdoublejj

Quote:


> Originally Posted by *KyadCK*
> 
> Not cores. CPUs.
> 
> Change number of sockets, not number of cores per socket. ESXi is coherent enough to understand NUMA structures natively and work around them. Making cores per socket more than one disables this.


Your wording still doesn't tell me whether I will get a performance increase by setting the NUMA option in the BIOS to enabled or disabled.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> I did SLI with Quadros (P6000s) at work, so it may well be that they limit it on the GeForce line, just like they throw Code 43 any time a VM is detected.


Correct - Code 43 you can work around with a QEMU setting, but SLI triggers a "certified for SLI" check of the motherboard chipset in the GeForce cards. There are hacks you can use to bypass this as well, but I did not find them to be very stable.

The Quadro cards/drivers enable multiple VM features that GeForce is trying to lock down, including SR-IOV (and multi-root MR-IOV) and SLI, it seems...

I just can't justify the price premium of any given Quadro-series card vs the comparable GeForce at this point, so I'm limited by what is available there.
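For reference, the QEMU setting usually used for the Code 43 workaround is a libvirt domain-XML fragment along these lines (a sketch, not anyone's exact config; the `vendor_id` value is arbitrary, it just has to not look like a known hypervisor):

```xml
<features>
  <hyperv>
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

This hides the KVM signature and spoofs the Hyper-V vendor string so the GeForce driver doesn't detect the VM; it does nothing about the separate SLI chipset check discussed above.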


----------



## bobfig

Welp, replaced my server CPU (E3-1230) with a lower-power E3-1260L in preparation for a U-NAS case. Seems to work exactly how I need it, and at half the TDP.


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I did SLI with Quadros (P6000s) at work, so it may well be that they limit it on the GeForce line, just like they throw Code 43 any time a VM is detected.
> 
> 
> 
> Correct - Code 43 you can work around with a QEMU setting, but SLI triggers a "certified for SLI" check of the motherboard chipset in the GeForce cards. There are hacks you can use to bypass this as well, but I did not find them to be very stable.
> 
> The Quadro cards/drivers enable multiple VM features that GeForce is trying to lock down, including SR-IOV (and multi-root MR-IOV) and SLI, it seems...
> 
> I just can't justify the price premium of any given Quadro-series card vs the comparable GeForce at this point, so I'm limited by what is available there.

Quadros do not do SR-IOV; Nvidia only enables SR-IOV on select Tesla cards. The only feature that Quadros provide for gaming in VMs that I am aware of (lots of documentation available) is the removal of a lot of Nvidia's software checks.


----------



## exwar

I have a problem: I got an LSI 9211-8i but I don't see my (SATA) drive in unRAID. Is this card SAS-only?


----------



## wiretap

Quote:


> Originally Posted by *exwar*
> 
> I have a problem: I got an LSI 9211-8i but I don't see my (SATA) drive in unRAID. Is this card SAS-only?


Did you flash the controller to IT mode? Otherwise, it would be looking for a virtual disk that isn't there. And yes, SATA drives will work on that controller.
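If you're not sure which firmware the card is running, LSI's `sas2flash` utility will tell you. An illustrative command sketch (run as root; controller numbering and output wording vary by firmware version):

```shell
# List every LSI SAS2 controller and its firmware; IT firmware shows a
# Firmware Product ID ending in "(IT)", IR firmware in "(IR)"
sas2flash -listall
sas2flash -list -c 0   # detailed info for controller 0
```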


----------



## exwar

Quote:


> Originally Posted by *wiretap*
> 
> Did you flash the controller to IT mode? Otherwise, it would be looking for a virtual disk that isn't there. And yes, SATA drives will work on that controller.


It was already in IT mode, so what now?


----------



## Liranan

Speaking of the LSI 9211-8i, do these cards have a drive-size limit? A lot of older cards can't read 3TB drives; can these cards read drives of 3TB and over?


----------



## bobfig

You may need to set that drive up as JBOD.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> Quadros do not do SR-IOV; Nvidia only enables SR-IOV on select Tesla cards. The only feature that Quadros provide for gaming in VMs that I am aware of (lots of documentation available) is the removal of a lot of Nvidia's software checks.


Oh, my mistake, I thought they did provide SR-IOV... but yes, the SLI check/lock is purely a driver issue that does not have the same easy QEMU workaround, since it's checking the chipset ID as far as I can tell. So if the Quadro drivers skip all of that, then they'd work better than GeForce...


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Quadros do not do SR-IOV; Nvidia only enables SR-IOV on select Tesla cards. The only feature that Quadros provide for gaming in VMs that I am aware of (lots of documentation available) is the removal of a lot of Nvidia's software checks.
> 
> 
> 
> Oh, my mistake, I thought they did provide SR-IOV... but yes, the SLI check/lock is purely a driver issue that does not have the same easy QEMU workaround, since it's checking the chipset ID as far as I can tell. So if the Quadro drivers skip all of that, then they'd work better than GeForce...

If I am not mistaken it is possible to spoof chipset IDs, but it may not be documented very well/at all.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> If I am not mistaken it is possible to spoof chipset IDs, but it may not be documented very well/at all.


and it is likely to break other things... If you tell the VM it has chipset foo, but the USB, SATA, etc. drivers find a 440BX or Q35, you are going to have a bad time.

You'd need to not only spoof it, but spoof it specifically for the Nvidia driver's compatibility check.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> Quadros do not do SR-IOV; Nvidia only enables SR-IOV on select Tesla cards. The only feature that Quadros provide for gaming in VMs that I am aware of (lots of documentation available) is the removal of a lot of Nvidia's software checks.


I thought SR-IOV was just for making it easier to pass through PCIe devices entirely?


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> I thought SR-IOV was just for making it easier to pass through PCIe devices entirely?


SR-IOV is used to present one physical GPU as multiple vGPUs allowing you to share physical hardware among multiple VMs.

Used a lot in VDI type scenarios. We are trialling it at work to replace Quadro workstations. Just ordered some Tesla M6s.

It's not simply a driver lock like many things, as the GPU has to do the thread scheduling.


----------



## zdude

Quote:


> Originally Posted by *twerk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cdoublejj*
> 
> I thought SR-IOV was just for making it easier to pass through PCIe devices entirely?
> 
> 
> 
> SR-IOV is used to present one physical GPU as multiple vGPUs allowing you to share physical hardware among multiple VMs.
> 
> Used a lot in VDI type scenarios. We are trialling it at work to replace Quadro workstations. Just ordered some Tesla M6s.
> 
> It's not simply a driver lock like many things, as the GPU has to do the thread scheduling.

I was under the impression that the Nvidia GRID drivers handled the scheduling. Changing the scheduling settings is a registry edit/config-file change rather than changing a value in a register on the GPU. I thought that vGPU as implemented on Nvidia cards was not true SR-IOV but a very efficient vSGA driver installed onto the hypervisor. If it were true SR-IOV, then the limiting factor on the number of clients would be the command engine, not the H.264 encoders as it stands today.

The above may be completely wrong, but it is my understanding of how Nvidia vGPU is implemented today.


----------



## cdoublejj

So I guess that was my next thought: vSGA and SR-IOV are apparently different. I thought it was all architecture and drivers.


----------



## cdoublejj

So I got one of these, brand new, never used, for the price of shipping!!!
https://www.asus.com/us/Commercial-Servers-Workstations/RS926E7RS8/
What would be nice is if I could find some CPUs under 95 watts that would work with it. Or would I just have to hope there is a power-saving mode?

Also got some pics of the DDR2 FB-DIMM G5 machine I built. I'm posting extra pictures for others who might find this; there are a lot of little tricks to fitting an SSI EEB mobo tray in a G5! I even had to shave one corner of the board with a file!


----------



## Prophet4NO1

Patch update that includes the fix for the WPA2 KRACK vulnerability.
Quote:


> Highlights
> 
> In case you missed the pfSense 2.4.0 release changes, see the 2.4.0 Release Notes and the previous 2.4.0 Release Highlights post.
> 
> pfSense software version 2.4.1 has a brief, but important, list of changes which include:
> 
> Fixes for the set of WPA2 Key Reinstallation Attack issues commonly known as KRACK
> Fixed a VT console race condition panic at boot on VMware platforms (especially ESXi 6.5.0U1) #7925
> Fixed a bsnmpd problem that causes it to use excess CPU and RAM with the hostres module in cases where drives support removable media but have no media inserted #6882
> Fixed an upgrade problem due to FreeBSD 11 removing legacy ada aliases, which caused some older installs to fail when mounting root post-upgrade #7937
> Changed the boot-time fsck process to ensure the disk is mounted read-only before running fsck in preen mode
> Changed the VLAN interface names to use the 'dotted' format now utilized by FreeBSD, which is shorter and helps to keep the interface name smaller than the limit (16) This fixes the 4 digit VLAN issues when the NIC name is 6 bytes long. This change was made not only to fix the name length issue, but also to reduce the differences between how FreeBSD uses VLANs and how they are used by pfSense interface functions.
> 
> These VLAN changes prevent PPP sessions from working on VLAN parent interfaces, see #7981
> Fixed setting VLAN Priority in VLAN interface configuration #7748


https://www.netgate.com/blog/pfsense-2-4-1-release-now-available.html


----------



## cdoublejj

I wonder how cheaply I can get a managed 24-port switch (preferably with SFP/SFP+). I don't know if the Dell PowerConnect 2624 will work, since it's unmanaged. Seeing as I'll be using 2 Ubiquiti AC Pros, I don't know if it would be smart enough to recognize the modem and APs?


----------



## CJston15

Quote:


> Originally Posted by *cdoublejj*
> 
> I wonder how cheaply I can get a managed 24-port switch (preferably with SFP/SFP+). I don't know if the Dell PowerConnect 2624 will work, since it's unmanaged. Seeing as I'll be using 2 Ubiquiti AC Pros, I don't know if it would be smart enough to recognize the modem and APs?


I snagged a 24port managed switch from Ubiquiti for less than $200 earlier this year. I did not get the PoE version and just used the PoE injectors that came with my AC PRO's. I also use a Ubiquiti USG and am about to order and install 4-5 Ubiquiti security cameras. I love the Unifi software and how everything is integrated - looks sharp and does more than I need for my home!


----------



## cdoublejj

Quote:


> Originally Posted by *CJston15*
> 
> I snagged a 24port managed switch from Ubiquiti for less than $200 earlier this year. I did not get the PoE version and just used the PoE injectors that came with my AC PRO's. I also use a Ubiquiti USG and am about to order and install 4-5 Ubiquiti security cameras. I love the Unifi software and how everything is integrated - looks sharp and does more than I need for my home!


Nice, sounds like I might as well snag a PoE version to save space.


----------



## twerk

Well... I've got some servers to buy/build... didn't think I'd win.


----------



## fg2chase

Here she is:


----------



## Charles1

Quote:


> Originally Posted by *fg2chase*
> 
> 
> 
> here she is


Very nice, bet those HDDs add to her weight lol. My media server is heavy, I opted for wheels to roll it around lol


----------



## fg2chase

Quote:


> Originally Posted by *Charles1*
> 
> Very nice, bet those HDDs add to her weight lol. My media server is heavy, I opted for wheels to roll it around lol


Oh yeah, she's pretty hefty, I'd say in the 70-80 lb range.


----------



## zdude

Does anybody here have an idea for cheap 1TB SSDs? I need some low-latency storage for my Ceph RBD and metadata pools.


----------



## Rbby258

Quote:


> Originally Posted by *zdude*
> 
> Does anybody here have an idea for cheap 1TB SSDs? I need some low-latency storage for my Ceph RBD and metadata pools.


Possibly black friday external drive deals.


----------



## nookkin

My home server setup. Utterly overkill for my actual needs but is fun to play around with. It runs numerous VMs on Hyper-V, the main ones being a DHCP/DNS server and a media server (SMB + Emby + custom internal-only streaming website). Initially it was meant to be my main workstation, hence the overkill specs for what it's actually being used for. It's effectively "free" to run in the winter since my apartment has electric heat.

*Main server*
OS: Windows Server 2012 R2
Case: Logisys rackmount
CPU: 2x Xeon X5670 (6 core / 12 thread each, 24 thread total)
Motherboard: Supermicro X8DT3-LN4F
RAM: 96GB (12x8GB DDR3 RDIMM)
PSU: Delta 1060W
Total storage: 12TB, 4TB inside the server
Network: 4x onboard gigabit, 1x Mellanox 10Gbps SFP+ with OM3 fiber running to my desktop (connected to Hyper-V virtual switch so I get 10Gbps between my desktop and any VM)

Excuse the cable mess, I will eventually do something about it...



*Storage enclosure*
Case: Ancient Intel server with 5 hot-swap SATA drive bays. I removed everything except the SATA backplane and added an ATX PSU.
Storage: 4x 2TB SATA drives
Connected via LSI HBA on the main server -> SFF-8088 cable -> adapter card -> fanout cable to the SATA drive bays themselves



*Network*
Allied Telesyn AT-9924T 24-port managed gigabit switch


----------



## MrBalll

For anyone in the NM area: saw this on Craigslist while looking myself. It's a 42U rack. https://roswell.craigslist.org/sop/d/42u-server-rack/6380191811.html
I was just looking around in here and saw a lot of people wanting racks, so figured I'd post to hopefully help someone. I'd never have use for it, so hopefully someone here will.


----------



## twerk

Another one to the collection... I need to stop and get therapy.


----------



## exwar

I got this switch, a WS-C3750-48PS-E. Is this only 100Mbps? Because when I look at the NIC settings it says the speed is 100Mbps.


----------



## silvrr

Quote:


> Originally Posted by *exwar*
> 
> I got this switch, a WS-C3750-48PS-E. Is this only 100Mbps? Because when I look at the NIC settings it says the speed is 100Mbps.


Looks like it. From here:
https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-3750-series-switches/product_data_sheet0900aecd80371991.html

It looks like the G model variants are 10/100/1000


----------



## KyadCK

Quote:


> Originally Posted by *exwar*
> 
> I got this switch, a WS-C3750-48PS-E. Is this only 100Mbps? Because when I look at the NIC settings it says the speed is 100Mbps.


Yes, though the SFPs should be 1Gbps, and it should be a PoE switch.


----------



## exwar

Quote:


> Originally Posted by *KyadCK*
> 
> Yes, though the SFPs should be 1gbps and it should be a PoE switch.


OK. Can you recommend a Cisco switch (24-48 ports) where every port is 1Gbps?


----------



## KyadCK

I personally got a WS-C4948-E, though all ports including the SFPs are 1Gbps only.

In the end I swapped it out for an IBM G8000F with a dual-SFP+ add-on and a G8124, as I expanded my backbone to 20Gbps.

Note that none of these are PoE.

Please don't assume these are the best options for you, as there may be others that are more cost-efficient. The easiest way to know what a switch can do is to search for the model and look at its product sheet, which will detail what each switch model is capable of. Be very specific about the model though; a WS-C4948-E and a WS-C4948E are different, for example (the second one has SFP+ jacks for 10Gbps uplinks).


----------



## twerk

Latest addition to my collection (top). DL380p Gen8 - 2x Xeon E5-2650 - 96GB DDR3 - Dual 10Gb NIC

Came with 4x 600GB 10k SAS drives which I managed to sell for £75 a piece. So all-in-all only cost me £130


----------



## deafboy

Nice grab! What do you plan on using those guys for?


----------



## twerk

Quote:


> Originally Posted by *deafboy*
> 
> Nice grab! What do you plan on using those guys for?


The DL80 Gen9 (bottom) now has FreeNAS on it and stores all my media.

The DL380p Gen8 has ESXi on it and runs all my VMs. I'd like to get another at some point to form a two node cluster.


----------



## zdude

Got all my data moved from ZFS to Ceph finally. Probably shouldn't have bought so many 8TB WD Reds on Friday, but impulse control, what is it?


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> 
> 
> Got all my data moved from ZFS to ceph finally. Probably shouldn't have bought so many 8TB WD reds on Friday but impulse control, what is it?


Can you share more about your Ceph cluster? How many hosts? What is the inter-node network - GigE, 10GigE, Infiniband? What's the write performance like? How do you expose its storage to the "client" side of the network?

Thanks.


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> 
> 
> Got all my data moved from ZFS to ceph finally. Probably shouldn't have bought so many 8TB WD reds on Friday but impulse control, what is it?
> 
> 
> 
> Can you share more about your Ceph cluster? How many hosts? What is the inter-node network - GigE, 10GigE, Infiniband? What's the write performance like? How do you expose its storage to the "client" side of the network?
> 
> Thanks.

At the moment I am running it on a single node. Looking at picking up two more nodes to implement full fail-over ability but the Mrs. would probably beat me to death with them right now.

I have two separate roots configured within the cluster. One is dedicated to storing data for CephFS (my client section); the other root is used for metadata for CephFS (size=3) and VM storage (size=2).



That is what my crush map looks like right now.

I get access to CephFS inside containers by mounting CephFS on the host and using bind mounts into the containers, so there is a single client serving everything on the host. For VMs, I connect over an internal network to mount CephFS; for Windows mounts, I have a Samba server in a container that has CephFS bound into it and re-exported over my 10Gb direct attach to my desktop.

Write speeds are okay for only having 6 data drives behind CephFS. In short bursts I will get ~700MB/s, but 3 of the 6 data drives are ST8000DM004 drives that slow down after a few GB of writes, limiting large transfers to ~100MB/s.
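As a rough sketch of the host-side plumbing described above (the monitor address, client name, and paths are placeholders, not the actual config):

```shell
# Mount CephFS once on the host using the kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Bind a subtree into a container's rootfs so the single host client
# serves everything (e.g. an LXC container named "media")
mount --bind /mnt/cephfs/media /var/lib/lxc/media/rootfs/srv/media
```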


----------



## PuffinMyLye

New backup server build which will be installed offsite in my parents house accessible via site-to-site VPN. Here it is sitting on top of my rack as I finish up data replication before moving it offsite. Full build log *here*.


----------



## twerk

So you're jimphreak on /r/homelab huh...


----------



## PuffinMyLye

Quote:


> Originally Posted by *twerk*
> 
> So you're jimphreak on /r/homelab huh...


Yes sir







.


----------



## bobfig

Quote:


> Originally Posted by *PuffinMyLye*
> 
> New backup server build which will be installed offsite in my parents house accessible via site-to-site VPN. Here it is sitting on top of my rack as I finish up data replication before moving it offsite. Full build log *here*.


i want one of those cases pretty badly just don't want to spend that much on something like that at this time.


----------



## twerk

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Yes sir
> 
> 
> 
> 
> 
> 
> 
> .


Very nice build. What 10Gb switch are you running?


----------



## PuffinMyLye

Quote:


> Originally Posted by *twerk*
> 
> Very nice build. What's 10Gb switch are you running?


Top switch is Dell X1052. Bottom switch is Cisco SG350XG-24F.


----------



## twerk

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Top switch is Dell X1052. Bottom switch is Cisco SG350XG-24F.


Those SG350XGs are so nice. Just picked up an SG300-52 for myself; can't afford anything more at the moment.


----------



## PuffinMyLye

Quote:


> Originally Posted by *twerk*
> 
> Those SG350XG's are so nice. Just picked up an SG300-52 for myself, can't afford anything more at the moment


Yea, it definitely set me back, but after hating my experience with the Ubiquiti ES-16-XG and thinking about how reliable Cisco switches have been for me at work, I'm hoping that having this switch for the next 8-10 years will make the investment worthwhile.


----------



## cdoublejj

Quote:


> Originally Posted by *PuffinMyLye*
> 
> New backup server build which will be installed offsite in my parents house accessible via site-to-site VPN. Here it is sitting on top of my rack as I finish up data replication before moving it offsite. Full build log *here*.


So that was you on reddit the other night.


----------



## PuffinMyLye

Quote:


> Originally Posted by *cdoublejj*
> 
> So that was you on reddit the other night.


Yes sir







.


----------



## pvt.joker

Quote:


> Originally Posted by *PuffinMyLye*
> 
> Yea it definitely set me back but after hating my experience with the Ubiquiti ES-16-XG and thinking about how reliable Cisco switches have been for me at work I'm hoping that having this switch for the next 8-10 years will make the investment worthwhile.


What issues did you run into with the Ubiquiti ES-16-XG? I was considering the jump to 10Gb at that price, but if it's more hassle than it's worth, I'll probably hold off.


----------



## zdude

Has anybody here worked with the Nvidia GRID M40? Wondering how it works for pass-through.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> Has anybody here worked with the Nvidia GRID M40?


i wish!


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Has anybody here worked with the Nvidia GRID M40?
> 
> 
> 
> i wish!

I found a pair of them for $300 each. From the looks of it, it's just a board with four 750 Tis crammed onto it. Could be something fun to play with.


----------



## twerk

Quote:


> Originally Posted by *zdude*
> 
> Has anybody here worked with the Nvidia GRID M40? Wondering how it works for pass-through.


Worked with the Tesla M6 in GPU pass through with ESXi. Works amazingly.


----------



## zdude

Finally have enough RAM to really start playing with the server now. Went from 56GB of RAM (4x 8GB sticks and 12x 2GB sticks) to 128GB of ECC. Still booting everything back up, but it's already running noticeably better!



Ceph cluster during initialization of various servers


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> I found a pair of them for $300 each. From the looks of it it is just a board with 4 750Tis crammed onto it. Could be something that is fun to play with.










Dang, I knew they could be had for a deal from time to time, but dang. For that cheap I think I'll ditch the vSGA and pick up 2 or 3.
Quote:


> Originally Posted by *twerk*
> 
> Worked with the Tesla M6 in GPU pass through with ESXi. Works amazingly.


As far as I know the Teslas don't support vGPU on ESXi. Is this not true?


----------



## parityboy

*@zdude*

I'm really starting to like the look of Ceph and Proxmox VE, although I would have a separate Ceph cluster rather than use hyper-convergence. Am I right in remembering that the RAM sizing for an OSD node is 1GB per TB of a single OSD?


----------



## cdoublejj

is vGPU on proxmox plug and play like it is on ESXi yet?


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> *@zdude*
> 
> I'm really starting to like the look of Ceph and Proxmox VE, although I would have a separate Ceph cluster rather than use hyper-convergence. Am I right in remembering that the RAM sizing for an OSD node is 1GB per TB of a single OSD?


I went from ZFS on manually-managed Debian with KVM, to Proxmox with ZFS, to Proxmox with Ceph, and am loving it. Ceph is far faster to boot VMs from and can handle more than a single workload type, unlike ZFS. Unfortunately I had to go to a mirrored-style pool to use Ceph. The rule I use at work is 3GB of RAM per HDD OSD and 5GB of RAM per SSD OSD. The 1GB of RAM per TB of storage rule is often quoted for ZFS.
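Those rules of thumb are easy to turn into a quick sizing check. A minimal sketch (the per-OSD figures are the rule-of-thumb numbers above, not official Ceph defaults):

```python
def ceph_osd_ram_gb(hdd_osds, ssd_osds, gb_per_hdd=3, gb_per_ssd=5):
    """RAM estimate for a Ceph node using the rule of thumb above."""
    return hdd_osds * gb_per_hdd + ssd_osds * gb_per_ssd

def zfs_ram_gb(total_tb):
    """The oft-quoted ZFS rule: ~1GB of RAM per TB of storage."""
    return total_tb * 1

# A node with 6 HDD OSDs and 3 SSD OSDs:
print(ceph_osd_ram_gb(6, 3))   # 6*3 + 3*5 = 33 GB
# Roughly 30TB of raw storage under the ZFS rule:
print(zfs_ram_gb(30))          # 30 GB
```

Interesting that the two rules land in the same ballpark for typical drive sizes; the Ceph rule just scales with drive count rather than capacity.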

Quote:


> Originally Posted by *cdoublejj*
> 
> is vGPU on proxmox plug and play like it is on ESXi yet?


Not out of the box no.


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> I went from ZFS and manual debian with KVM to Proxmox with ZFS to Proxmox with Ceph and am loving it. Ceph is far faster to boot VMs off of and can handle more than a single workload type unlike ZFS. Unfortunatly I had to go to a mirrored style pool to use Ceph. The rule that I use at work is 3GB of ram per HDD OSD and 5GB of ram per SSD OSD. The 1GB ram per TB of storage rule is often quoted for ZFS.


I found this page, which seems quite useful. Also, have you ever played with Ceph erasure coding? It's something I've been keeping an eye on, it's one of the reasons I've not used ZFS - the notion of having to add storage in large chunks does not appeal, and replication (while robust) can be storage-expensive. Currently I have ~20TiB of data which would have to go onto a Ceph cluster, which means 40TiB of storage minimum.


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I went from ZFS and manual debian with KVM to Proxmox with ZFS to Proxmox with Ceph and am loving it. Ceph is far faster to boot VMs off of and can handle more than a single workload type unlike ZFS. Unfortunatly I had to go to a mirrored style pool to use Ceph. The rule that I use at work is 3GB of ram per HDD OSD and 5GB of ram per SSD OSD. The 1GB ram per TB of storage rule is often quoted for ZFS.
> 
> 
> 
> I found this page, which seems quite useful. Also, have you ever played with Ceph erasure coding? It's something I've been keeping an eye on, it's one of the reasons I've not used ZFS - the notion of having to add storage in large chunks does not appeal, and replication (while robust) can be storage-expensive. Currently I have ~20TiB of data which would have to go onto a Ceph cluster, which means 40TiB of storage minimum.

I have tried erasure coding time and time again at work and have never had it actually work well. On a cluster of 5 nodes, each with 24 1TB SSDs, on a 100Gb network, Ceph will do ~3GB/s with erasure coding; the same 5 nodes will do ~27GB/s with replication and size = 2.

And that 1GB of RAM per TB is listed as applying only during recovery. I haven't ever really seen Ceph become memory bottlenecked; much like ZFS it likes to cache stuff, but unlike ZFS it doesn't really need to.

The only reason I was able to justify the cost of setting up Ceph for myself at home is that I got 6 8TB HDDs from Worst Buy for $130 each on Black Friday.
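The storage-overhead trade-off being weighed here works out like this. A quick sketch, where `k` and `m` are the erasure-coding data and coding chunk counts (the 20TiB figure is from the discussion above; the 4+2 profile is just an illustrative choice):

```python
def replicated_raw_tb(data_tb, size):
    """Raw capacity needed when every object is stored `size` times."""
    return data_tb * size

def erasure_raw_tb(data_tb, k, m):
    """Raw capacity for k data chunks + m coding chunks per object."""
    return data_tb * (k + m) / k

# ~20TiB of data:
print(replicated_raw_tb(20, 2))   # 40 TiB raw with replication, size=2
print(erasure_raw_tb(20, 4, 2))   # 30 TiB raw with EC 4+2, still surviving 2 failures
```

So erasure coding saves raw capacity for the same (or better) failure tolerance, which is exactly why the throughput penalty reported above is such a painful trade.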


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> I have tried erasure encoding time and time again at work and have never had it actually work well. On a cluster with 5 nodes with 24 1TB SSDs each and 100Gb network Ceph will do ~3GB/s with erasure encoding, the same 5 nodes will do ~27GB/s with replication and size = 2.
> 
> And that 1GB/TB of ram is listed as only during recovery. I haven't ever really seen Ceph become memory bottle-necked, much like ZFS it likes to cache stuff but unlike ZFS doesn't really need to.
> 
> The only reason I was able to justify the cost of setting up ceph for myself at home is I got 6 8TB HDDs from worst buy for $130 each on black friday.


Recovery is when you replace a failed OSD? Like resilvering a RAID array? Also, the number of OSD nodes must be an odd number for a majority vote, right? Or is that the number of monitor nodes?


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> 
> 
> 
> 
> 
> 
> 
> dang i knew they could be had fora deal from time to time but, dang. for that cheap I think i'll ditch the vSGA and and wait pick me up up 2 or 3.
> As far as i know that teslas don't support vGPU on esxi. is this not true?


They definitely do! You need the NVIDIA Grid VIB.


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> They definitely do! You need the NVIDIA Grid VIB.


I assume you mean they definitely do have plug and play, and I assume the VIB is a software package of sorts?
How's "vMotion" on Proxmox?


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I have tried erasure encoding time and time again at work and have never had it actually work well. On a cluster with 5 nodes with 24 1TB SSDs each and 100Gb network Ceph will do ~3GB/s with erasure encoding, the same 5 nodes will do ~27GB/s with replication and size = 2.
> 
> And that 1GB/TB of ram is listed as only during recovery. I haven't ever really seen Ceph become memory bottle-necked, much like ZFS it likes to cache stuff but unlike ZFS doesn't really need to.
> 
> The only reason I was able to justify the cost of setting up ceph for myself at home is I got 6 8TB HDDs from worst buy for $130 each on black friday.
> 
> 
> 
> Recovery is when you replace a failed OSD? Like resilvering a RAID array? Also, the number of OSD nodes must be an odd number for a majority vote, right? Or is that the number of monitor nodes?

Recovery is just like resilvering a RAID array. There can be any number of both OSDs and mon daemons. However only an odd number of mon daemons will ever be active.


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> Recovery is just like resilvering a RAID array. There can be any number of both OSDs and mon daemons. However only an odd number of mon daemons will ever be active.


What I was trying to ask re: monitor nodes is avoiding a lack of consensus/majority vote if you have an even number of monitors. Is there a relationship between the number of monitors and the number of OSD nodes? Also, you have a 100Gb/s network at work?







Is that single port or 2 x 50Gb? Or Infiniband?


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Recovery is just like resilvering a RAID array. There can be any number of both OSDs and mon daemons. However only an odd number of mon daemons will ever be active.
> 
> 
> 
> What I was trying to ask re: monitor nodes is avoiding a lack of consensus/majority vote if you have an even number of monitors. Is there a relationship between the number of monitors and the number of OSD nodes? Also, you have a 100Gb/s network at work?
> 
> 
> 
> 
> 
> 
> 
> Is that single port or 2 x 50Gb? Or Infiniband?

http://www.mellanox.com/page/products_dyn?product_family=201 -- these cards are awesome.

I have never noticed any particular requirements relating the number of monitors to the number of OSDs. I have run 150 OSD nodes with a single monitor and ~30 clients with no noticeable slowdowns/issues (that was not in a production environment; in production it is recommended to use an odd number of monitors).
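The odd-number recommendation comes straight from majority voting: an even monitor count adds a failure domain without adding any failure tolerance. A quick illustration:

```python
def quorum(mons):
    """Monitors needed for a majority vote."""
    return mons // 2 + 1

def tolerated_failures(mons):
    """Monitors that can die while a majority still survives."""
    return mons - quorum(mons)

for n in range(1, 7):
    print(n, "mons -> tolerates", tolerated_failures(n), "failures")
# 3 mons and 4 mons both tolerate only 1 failure, and 5 and 6 both
# tolerate 2, which is why even counts add risk for no gain.
```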


----------



## cdoublejj

So I figured for vMotion 40Gb/s would probably make a difference over 10Gb :-/ Kind of wish I had just gone for gold instead of getting a 24-port L2/L3 switch with 10Gb/s ports. BUT, 100Gb/s!?


----------



## parityboy

Quote:


> Originally Posted by *cdoublejj*
> 
> so i figured for vmotion 40gbs would probably make a difference over 10gb :-/ kind of wish i had just gone for gold instead of getting a 24 port l2/l3 switch with 10gbs ports BUT, 100 GBPS !?


If your VMs are on shared storage, I can't see 40Gb/s making that much difference - I've seen VM teleporting happen almost instantly on 10Gb/s - are you running a shared-nothing infrastructure?


----------



## cdoublejj

Quote:


> Originally Posted by *parityboy*
> 
> If your VMs are on shared storage, I can't see 40Gb/s making that much difference - I've seen VM teleporting happen almost instantly on 10Gb/s - are you running a shared-nothing infrastructure?


The 24 1Gb ports are for various workstations that burn CDs and computers needing to download updates, and the servers will go on the 4x 10Gb/s ports on the DGS-1510-28X. Might be helpful to find out whether or not those can be bonded for 20Gb/s or something like that, should I want to upgrade to a fabric switch or something. I say that because if I get more servers, I only have 4x 10Gb ports, so I'll need a bigger 10Gb or 40Gb switch.

Per Google's conversions, 10Gb/s is 1.25 gigabytes per second. So if a VM has 10GB of RAM...? Perhaps with HA and vMotion it keeps the VM's memory on standby on another server and transfers the difference? Maybe it also uses compression? My understanding is that vMotion helps when a server blows chunks, but what about when a server hard locks?
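That back-of-the-envelope math can be made concrete. A sketch of best-case wire time for moving a VM's memory, ignoring protocol overhead, pre-copy iterations, and compression:

```python
def transfer_seconds(ram_gb, link_gbps):
    """Best-case seconds to push ram_gb gigabytes over a link_gbps link."""
    gigabytes_per_sec = link_gbps / 8   # 10Gb/s ~= 1.25 GB/s on the wire
    return ram_gb / gigabytes_per_sec

print(transfer_seconds(10, 10))   # 8.0 s for 10GB of RAM on 10Gb/s
print(transfer_seconds(10, 40))   # 2.0 s on 40Gb/s
```

In practice, live migration copies memory while the VM keeps running and then re-sends the pages dirtied in the meantime, so the real benefit of a fatter pipe is shorter final stun time. As for the hard-lock case: live migration needs a responsive source host, so that scenario is handled by HA restarting the VM on another host, which is a different mechanism.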


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> I found a pair of them for $300 each. From the looks of it it is just a board with 4 750Tis crammed onto it. Could be something that is fun to play with.


Dude, you got beaucoup lucky; they're closer to $1K USD a pop from what I'm seeing. The "K" series are a little cheaper, but I'd have to research specs and compatibility. If the M40 is newer and it's basically 750 Tis, how ancient would the "K" series be? A 750 Ti would be a decent level of performance. vSGA, while I'd like to test it, limits me to ESXi 6.0 U3 :-/


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> I found a pair of them for $300 each. From the looks of it it is just a board with 4 750Tis crammed onto it. Could be something that is fun to play with.
> 
> 
> 
> dud you got buku lucky your closer to $1K USD a pop from what i'm seeing. the "K" series are little cheaper but, i'd have research on specs and compatibility. if the M40 is newer and it's basically 750Tis how ancient would the "K" series be. 750 ti would be decent level performance. vSGA while i'd like to test it limits me to ESXi 6.0 U3 :-/

The problem with the GRID M40s is that they were custom PCBs for somebody, and even NVIDIA enterprise support doesn't really know what they are. There is no guarantee that any drivers will work with them.







It could also be classified as a Tesla M10 with half the VRAM...


----------



## parityboy

*@zdude*

Quick question re: the dual-port VPI cards. If I were to use these cards to connect three Ceph OSD nodes together using Infiniband - and wanted to avoid the use of an Infiniband switch - how much of a performance penalty would there be?


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> *@zdude*
> 
> Quick question re: the dual-port VPI cards. If I were to use these cards to connect three Ceph OSD nodes together using Infiniband - and wanted to avoid the use of an Infiniband switch - how much of a performance penalty would there be?


If you can get the routing to work correctly, I would expect it to actually be faster. However, I am not sure you would be able to get it to work; it is my understanding that Ceph binds the OSD to a particular port and the OSD does not communicate on any other ports. So you would need to make a virtual interface that routes between the two physical interfaces to present a single virtual network on a single subnet.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> The problem with the GRID M40s is that they were custom PCBs for somebody and even nvidia enterprise support doesn't really know what they are. There is no garentee that any drivers will work with them
> 
> 
> 
> 
> 
> 
> 
> It could also be classified as a Tesla M10 with half the VRAM...


I plan on using it with ESXi, MAYBE Proxmox or Xen. You use the VIB and/or software packages for ESXi. VMware also has a compatibility guide:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsga

but it no longer lists vGPU at all, probably because it's a given? I say this because the options, i.e. pass-through or vSGA, vDGA, do not list the M40 as compatible, though some of the Teslas are. That doesn't necessarily mean the Teslas will do GPU acceleration for graphics in the guests, though; I think some of the Teslas are for compute.

EDIT: I may need to re-read this: https://www.lewan.com/blog/2015/03/30/vgpu-vsga-vdga-software-why-do-i-care

EDIT: probably because only the K1 and K2 have vGPU support, so why bother keeping that in the guide when only 2 models are compatible?

EDIT: I'm not seeing the M40 as compatible with ESXi, but I haven't checked XenServer.

EDIT: I have to wonder about the numbers for NVIDIA to make these; whoever it was must have ordered a butt-ton of them. This is all I could find:

http://images.nvidia.com/content/tesla/pdf/tesla-m40-product-brief.pdf

it's GM200


----------



## zdude

Some minor updates to my server this past week. Upgraded my RBD and metadata storage from ST1000DM001 HDDs to 3 Samsung 850 EVOs and noticed a ~15x speedup in the containers that boot from them. Paired with the 128GB memory upgrade I did recently, in the past 2 weeks my server's capacity has about doubled without any changes to the CPU.









Dashboard right now....


Seeing ~2k IOPS to the RBD pool after switching to SSDs, was only seeing ~130 with the HDDs


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> The problem with the GRID M40s is that they were custom PCBs for somebody and even nvidia enterprise support doesn't really know what they are. There is no garentee that any drivers will work with them
> 
> 
> 
> 
> 
> 
> 
> It could also be classified as a Tesla M10 with half the VRAM...
> 
> 
> 
> i plan on using with ESXi MAYBE proxmox or Xen. you use the VIB and or software packages for the ESXi. VMWare also has compatibility guide
> 
> https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsga
> 
> but, it not longer lists vGPU at all, probably because it's a given? i say this because of the options, IE pass-through or vSGA, vDGA do not list the M40 as compatible though some of the Teslas are. that doesn't necessarily mean the Teslas will do gpu acceleration for graphics in the guests though, i think some of the Teslas are for compute.
> 
> EDIT: i may need to re read this: https://www.lewan.com/blog/2015/03/30/vgpu-vsga-vdga-software-why-do-i-care
> 
> EDIT: probably because only the K1 and K2 have vGPU support so why bother dropping that in the guide when only 2 models are compatible?
> 
> EDIT: i'm not seeing the M$0 as compatible with ESXi but, i haven't check Xen server
> 
> EDIT: I have to wonder about numbers for nvidia to make who it was must have order a butt ton of them. this is all i could find
> 
> http://images.nvidia.com/content/tesla/pdf/tesla-m40-product-brief.pdf
> 
> it's GM200

https://www.servethehome.com/nvidia-grid-m40-4x-maxwell-gpus-16gb-ram-cards/

Like I said they are very poorly documented cards that were customized for some customer.


----------



## cdoublejj

I wonder if anyone else is running socket 1366 six-core Xeons. I wonder how many VMs I can actually run, especially if/when 50% of them are under load.


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> i wonder if anyone else is running socket 1366 xeon 6 cores. i wonder how few VMs i can actually run, especially when /if /with 50% of them under load.


It all depends on what the load is on those VMs. I personally wouldn't run anything older than Sandy Bridge at this point due to power consumption. It's even getting to the point that in a couple of years I will probably upgrade my two 12-core v2 CPUs...


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> If you can get the routing to work correctly I would expect it to actually work faster. However, I am not sure you would be able to get it to work, it is my understanding that Ceph binds the OSD to a particular port and the OSD does not communicate on any other ports. So you would need to make a virtual interface that then routes between the two physical interfaces to create a single virtual network on a single subnet.


You gave me an idea: if I use 10GbE instead of Infiniband, I might be able to use one of the ports on each card to create a 3-port distributed switch. The three other ports would each then bridge to its "local twin". Hmmm...or I could just experiment with basic bridging on each host and just assign an IP to each bridge. That should work. I don't know if Infiniband supports bridging though.
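A three-node switchless mesh like that needs one point-to-point link per node pair. A sketch of the addressing plan using /30 subnets (node names and the 10.10.0.0/24 range are hypothetical):

```python
from itertools import combinations
import ipaddress

hosts = ["ceph1", "ceph2", "ceph3"]          # hypothetical node names
base = ipaddress.ip_network("10.10.0.0/24")  # hypothetical link range

# Carve one /30 per host pair; each /30 has exactly two usable addresses.
links = []
for subnet, (a, b) in zip(base.subnets(new_prefix=30), combinations(hosts, 2)):
    usable = list(subnet.hosts())
    links.append((a, str(usable[0]), b, str(usable[1])))

for a, ip_a, b, ip_b in links:
    print(f"{a} {ip_a} <-> {b} {ip_b}")
# ceph1 10.10.0.1 <-> ceph2 10.10.0.2
# ceph1 10.10.0.5 <-> ceph3 10.10.0.6
# ceph2 10.10.0.9 <-> ceph3 10.10.0.10
```

Whether Ceph's daemons will happily live on per-link subnets rather than one flat public network is exactly the caveat raised earlier in the thread, so treat this as a starting point for experimentation rather than a known-good layout.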

Quote:


> Originally Posted by *zdude*
> 
> Some minor updates to my server this past week. Upgraded my RBD and metadata storage from ST1000DM001 HDDs to 3 Samsung 850 EVOs. Noticed a ~15x speedup in the containers that were booted from them. Paired with the 128GB memory upgrade I did recently, in the past 2 weeks my server's capacity has about doubled without any changes on the CPU
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Dashboard right now....
> 
> 
> Seeing ~2k IOPS to the RBD pool after switching to SSDs, was only seeing ~130 with the HDDs


Quick question re: RBD - does it support any kind of clustered block-level access like OCFS2 or would that have to be layered separately and above it? I'm thinking of a scenario where an RBD holds VMs and container images and it needs to be shared by more than one VM host, kind of like what VMFS can do.


----------



## pvt.joker

Quote:


> Originally Posted by *cdoublejj*
> 
> i wonder if anyone else is running socket 1366 xeon 6 cores. i wonder how few VMs i can actually run, especially when /if /with 50% of them under load.


I've been debating "upgrading" my file server.. i've got a dual supermicro 1366 board with xeon x5650's just collecting dust.. but the power consumption issue is a concern.. but then i wonder if it'd be any worse than my "old" dual socket 771 setup running currently.. Time will tell (and the power bill) i guess!


----------



## cdoublejj

Quote:


> Originally Posted by *pvt.joker*
> 
> I've been debating "upgrading" my file server.. i've got a dual supermicro 1366 board with xeon x5650's just collecting dust.. but the power consumption issue is a concern.. but then i wonder if it'd be any worse than my "old" dual socket 771 setup running currently.. Time will tell (and the power bill) i guess!


They make 65-watt six-cores for it that don't cost too much.

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors#"Westmere-EP"_(32_nm)_Efficient_Performance

though to compare

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5675+%40+3.07GHz

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+L5640+%40+2.27GHz (no passmark available for the L5645)

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5645+%40+2.40GHz (80 watts)

Quote:


> Originally Posted by *parityboy*
> 
> You gave me an idea: if I use 10GbE instead of Infiniband, I might be able to use one of the ports on each card to create a 3-port distributed switch. The three other ports would each then bridge to its "local twin". Hmmm...or I could just experiment with basic bridging on each host and just assign an IP to each bridge. That should work. I don't know if Infiniband supports bridging though.
> Quick question re: RBD - does it support any kind of clustered block-level access like OCFS2 or would that have to be layered separately and above it? I'm thinking of a scenario where an RBD holds VMs and container images and it needs to be shared by more than one VM host, kind of like what VMFS can do.


3 ports!? I haven't seen a 3-port 10Gb LC card before. I thought about that: 1 port each to the switch, and then maybe the second port of, say, the primary ESXi server to the second port of the secondary ESXi.


----------



## zdude

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> If you can get the routing to work correctly I would expect it to actually work faster. However, I am not sure you would be able to get it to work, it is my understanding that Ceph binds the OSD to a particular port and the OSD does not communicate on any other ports. So you would need to make a virtual interface that then routes between the two physical interfaces to create a single virtual network on a single subnet.
> 
> 
> 
> 
> 
> You gave me an idea: if I use 10GbE instead of Infiniband, I might be able to use one of the ports on each card to create a 3-port distributed switch. The three other ports would each then bridge to its "local twin". Hmmm...or I could just experiment with basic bridging on each host and just assign an IP to each bridge. That should work. I don't know if Infiniband supports bridging though.
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Some minor updates to my server this past week. Upgraded my RBD and metadata storage from ST1000DM001 HDDs to 3 Samsung 850 EVOs. Noticed a ~15x speedup in the containers that were booted from them. Paired with the 128GB memory upgrade I did recently, in the past 2 weeks my server's capacity has about doubled without any changes on the CPU
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Dashboard right now....
> 
> 
> Seeing ~2k IOPS to the RBD pool after switching to SSDs, was only seeing ~130 with the HDDs
> 
> 
> Quick question re: RBD - does it support any kind of clustered block-level access like OCFS2 or would that have to be layered separately and above it? I'm thinking of a scenario where an RBD holds VMs and container images and it needs to be shared by more than one VM host, kind of like what VMFS can do.

RBD is a storage method in Ceph itself, meaning it is inherently clustered.


----------



## KyadCK

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *pvt.joker*
> 
> I've been debating "upgrading" my file server.. i've got a dual supermicro 1366 board with xeon x5650's just collecting dust.. but the power consumption issue is a concern.. but then i wonder if it'd be any worse than my "old" dual socket 771 setup running currently.. Time will tell (and the power bill) i guess!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> they have 65 watt six cores for it that don't cost too much.
> 
> https://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors#"Westmere-EP"_(32_nm)_Efficient_Performance
> 
> though to compare
> 
> https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5675+%40+3.07GHz
> 
> https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+L5640+%40+2.27GHz (no passmark available for the L5645)
> 
> https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5645+%40+2.40GHz (80 watts)
> 
> Quote:
> 
> 
> 
> Originally Posted by *parityboy*
> 
> You gave me an idea: if I use 10GbE instead of Infiniband, I might be able to use one of the ports on each card to create a 3-port distributed switch. The three other ports would each then bridge to its "local twin". Hmmm...or I could just experiment with basic bridging on each host and just assign an IP to each bridge. That should work. I don't know if Infiniband supports bridging though.
> Quick question re: RBD - does it support any kind of clustered block-level access like OCFS2 or would that have to be layered separately and above it? I'm thinking of a scenario where an RBD holds VMs and container images and it needs to be shared by more than one VM host, kind of like what VMFS can do.
> 
> 
> *3 Ports!? i haven't seen a 3 port 10Gb LC before,* I though about that, 1 port each to switch and then maybe the second port of say the primary ESXi Server to the Second port of the Secondary ESXi.

He means using 3 dual-nic 10gb cards and using one nic from each.


----------



## parityboy

Quote:


> Originally Posted by *zdude*
> 
> RBD is a storage method in Ceph itself, meaning it is inherently clustered.


So a single RBD supports simultaneous read/write access from two different clients, with no data corruption? Do RBDs have to be formatted with a filesystem, similar to an iSCSI LUN?

*EDIT:*
I had a quick read here; it appears that RBDs need to be formatted with a filesystem, e.g. ext4.
Quote:


> Originally Posted by *KyadCK*
> 
> He means using 3 dual-nic 10gb cards and using one nic from each.


This. One port from each tied together to form a "switch", and the other three ports each "connected" to the "switch" and assigned an IP.


----------



## cdoublejj

Yeah, I guess no one ever talks about making your own router/switch with 10Gb, but idk how well that would work, since most (or all) x86 OSes aren't real-time like a legit switch. But that's probably neither here nor there.


----------



## deafboy

Would certainly be more overhead/latency

And really not terribly necessary with the options out there...

I'll actually more than likely be moving away from my 10Gb network as I don't utilize it as much anymore now that most things have been migrated to other servers haha


----------



## cekim

Quote:


> Originally Posted by *deafboy*
> 
> Would certainly be more overhead/latency
> 
> And really not terribly necessary with the option out there...
> 
> I'll actually more than likely be moving away from my 10Gb network as I don't utilize it as much anymore now that most things have been migrated to other servers haha


Blasphemy!









I've been fine with an 8x10G fast path to supplement a bigger 24x1G switch for a while, but the eBay devils have been whispering tales of 40G, fiber, and other evils into my head movies...

At least for now, power delivery has capped what I can run in my rack. I'd have to add a third circuit and/or 20A lines to do more.


----------



## parityboy

Quote:


> Originally Posted by *deafboy*
> 
> Would certainly be more overhead/latency
> 
> And really not terribly necessary with the option out there...
> 
> I'll actually more than likely be moving away from my 10Gb network as I don't utilize it as much anymore now that most things have been migrated to other servers haha


Yeah, well the options out there - i.e. a 10Gb Ethernet switch - are still somewhat expensive where I am if you want a known brand, even on eBay. Anyone heard of Quanta? Infiniband isn't much better, unfortunately.


----------



## deafboy

Quote:


> Originally Posted by *parityboy*
> 
> Yeah, well the options out there - i.e. a 10Gb Ethernet switch - are still somewhat expensive where I am if you want a known brand, even on eBay. Anyone heard of Quanta? Infiniband isn't much better, unfortunately.


Yeah, Quanta is okay...

All depends on the interface you want, how much power you're okay with, and how much noise you're okay with. All my stuff is SFP+ so it's quite a bit cheaper than RJ45. My 10Gb network upgrade was like $200-250 or something like that. Switch was like $100 or so (HP), the NICs were like $20 and that came with DAC cables, then I bought some fiber for a longer run.


----------



## cekim

Quote:


> Originally Posted by *parityboy*
> 
> Yeah, well the options out there - i.e. a 10Gb Ethernet switch - are still somewhat expensive where I am if you want a known brand, even on eBay. Anyone heard of Quanta? Infiniband isn't much better, unfortunately.


It seems like Netgear has priced it pretty much perfectly to make the trade-off 1:1 at 8 ports between copper and fiber... (cheaper switch vs. expensive cables/ports)

One way or another, it's ~$100/port before NICs.


----------



## cekim

Quote:


> Originally Posted by *deafboy*
> 
> Yeah, Quanta is okay...
> 
> All depends on the interface you want, how much power you're okay with, and how much noise you're okay with. All my stuff is SFP+ so it's quite a bit cheaper than RJ45. My 10Gb network upgrade was like $200-250 or something like that. Switch was like $100 or so (HP), the NICs were like $20 and that came with DAC cables, then I bought some fiber for a longer run.


hmm, I haven't been able to get per-port cost down that low... between ports and fiber, I was at ~$100 per port. Maybe I need to look again?

40Gb sounds pretty nice... (as does the latency improvement for NFS)


----------



## deafboy

How many 10Gb ports are you looking for?

I only needed 3 so I have one empty port and my switch has the option to add in another 4 ports via an expansion card/module


----------



## cekim

Quote:


> Originally Posted by *deafboy*
> 
> How many 10Gb ports are you looking for?
> 
> I only needed 3 so I have one empty port and my switch has the option to add in another 4 ports via an expansion card/module


I'm using all 8. I _could_ use a couple more... (bonded port to NFS server on this sub-net)


----------



## KyadCK

SFP+ and DACs is going to be far easier in the long run, and gives you more options.

Like @deafboy, my 10gbps upgrade was pretty cheap: $400 for a 24-port SFP+ switch (IBM Blade G8124), and I got cheap DACs to the servers and my 48-port 1gbps breakout switch. Each server, the breakout, and my main rig get 20gbps, and the most expensive one to implement was my rig, because 75ft DACs don't exist, forcing me to run fiber and get transceivers. Even then the fiber itself was laughably cheap.









EDIT: If you need it to not scream like a banshee, your options are _significantly_ reduced. Cheap used enterprise toys is where it's at.


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> SFP+ and DACs is going to be far easier in the long run, and gives you more options.
> 
> Like @deafboy My 10gbps upgrade was pretty cheap, $400 for a 24-port SFP+ switch (IBM Blade G8124), and I got cheap DACs to the servers and my 48-port 1gbps breakout switch. Each server, the breakout, and my main rig get 20gbps, and the most expensive one to implement was my rig because 75ft DACs don't exist, forcing me to run fiber and get transceivers. Even then the fiber itself was laughably cheap.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: If you need it to not scream like a banshee, your options are _significantly_ reduced. Cheap used enterprise toys is where it's at.


I have 2 10GbE lines to my office, but the rest lives in the basement, so it can make noise (within a wide margin of reason).

Presently held up by ram prices... I have some idle parts that I'd be using and some servers I'd be replacing, but prices being 2x what they were a year ago, I'm going to wait a bit.

I'm hoping xeon v3/v4 hardware comes down as well on that secondary market you mentioned. I have a pair of 2690v4's that need a new home and better purpose, but... that ram price... So, they keep doing what they are doing for now...

Once I have a better home with ECC, they will become a ZFS/VM server.


----------



## PuffinMyLye

Quote:


> Originally Posted by *pvt.joker*
> 
> What issues did you run into with the Ubiquiti ES-16-XG? I was considering the jump to 10gb for that price.. but if it's more hassle than it's worth, i'll probably hold off..


If you are planning to use optics you may not run into any issues. Using DAC cables will cause you to rip your hair out. On top of that, the WebUI did not live up to my standards (albeit I'm used to Cisco). And lastly, I was already using 10 of the 12 available SFP+ ports and wanted more room to expand in the future. That last point is obviously just a personal decision. When I mustered up the coin for the SG350XG-24F I justified it by saying to myself that this could be my core switch for the next 10 years.


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *deafboy*
> 
> Yeah, Quanta is okay...
> 
> All depends on the interface you want, how much power you're okay with, and how much noise you're okay with. All my stuff is SFP+ so it's quite a bit cheaper than RJ45. My 10Gb network upgrade was like $200-250 or something like that. Switch was like $100 or so (HP), the NICs were like $20 and that came with DAC cables, then I bought some fiber for a longer run.
> 
> 
> 
> hmm, I haven't been able to get per-port cost down that low... between ports and fiber, I was at ~$100 per port. Maybe I need to look again?
> 
> 40Gb sounds pretty nice... (as does the latency improvement for NFS)

You do realize that 40Gb has the same latency as 10Gb; there are just 4 channels rather than 1 on the QSFP port.


----------



## mbmumford

Quote:


> Originally Posted by *cekim*
> 
> I have 2 10GbE lines to my office, but the rest lives in the basement, so it can make noise (within a wide margin of reason).
> 
> Presently held up by ram prices... I have some idle parts that I'd be using and some servers I'd be replacing, but prices being 2x what they were a year ago, I'm going to wait a bit.
> 
> I'm hoping xeon v3/v4 hardware comes down as well on that secondary market you mentioned. I have a pair of 2690v4's that need a new home and better purpose, but... that ram price... So, they keep doing what they are doing for now...
> 
> Once I have a better home with ECC, they will become a ZFS/VM server.


I will give the pair of 2690 v4's a good home, and give you a pair of 2620 v4's in exchange.

It's Win-Win!


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> You do realize that 40Gb has the same latency as 10Gb, there are just 4 channels rather than 1 channel on the QSFP port.


It's my understanding that 10GBaseT has higher latency than SFP+ (significantly in relative terms).


----------



## cekim

Quote:


> Originally Posted by *mbmumford*
> 
> I will give the pair of 2690 v4's a good home, and give you a pair of 2620 v4's in exchange.
> 
> It's Win-Win!


mine,mine,mine all mine! My precious...









They are busy computing computes and computey things and will remain as such... They could just do more if ECC DDR4 was more reasonably priced...


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> You do realize that 40Gb has the same latency as 10Gb, there are just 4 channels rather than 1 channel on the QSFP port.
> 
> 
> 
> It's my understanding that 10GBaseT has higher latency than SFP+ (significantly in relative terms).

My understanding (could be wrong) is that 10GBaseT has similar latency as SFP+ which has identical latency to QSFP on short runs. The fiber solutions will be quicker over long distances but nobody here is running long distances.


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *mbmumford*
> 
> I will give the pair of 2690 v4's a good home, and give you a pair of 2620 v4's in exchange.
> 
> It's Win-Win!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> mine,mine,mine all mine! My precious...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> They are busy computing computes and computey things and will remain as such... They could just do more if ECC DDR4 was more reasonably priced...

That translated to mining XMR or Aeon in my mind....


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> That translated to mining XMR or Aeon in my mind....


Nothing so immediately profitable... more long-term profitable in the discovery of knowledge and bugs...


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> My understanding (could be wrong) is that 10GBaseT has similar latency as SFP+ which has identical latency to QSFP on short runs. The fiber solutions will be quicker over long distances but nobody here is running long distances.


0.2-0.3µs for SFP+ vs 2.5-3µs for 10GBaseT is the typical range I've always seen.

An order of magnitude in relative latency - which only matters when you have many small files, but guess what chokes NFS and random block access?
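To put numbers on that, here's a rough back-of-the-envelope sketch (my own illustrative math, not a benchmark) of what per-op latency does to synchronous small-block throughput on a 10Gb link:

```python
# Back-of-the-envelope: effective throughput for synchronous 4K operations
# under different per-op network latencies. Latency figures are the typical
# SFP+/DAC vs 10GBaseT PHY numbers; everything here is illustrative only.

def effective_mbps(op_bytes: int, latency_us: float, wire_gbps: float = 10.0) -> float:
    """Throughput seen by a client issuing one synchronous op at a time."""
    serialize_us = op_bytes * 8 / (wire_gbps * 1000)  # time on the wire, in µs
    total_us = latency_us + serialize_us
    return op_bytes / total_us  # bytes per µs is the same as MB/s

for name, lat in [("SFP+", 0.3), ("10GBaseT", 3.0)]:
    print(f"{name}: {effective_mbps(4096, lat):.0f} MB/s for sync 4K ops")
```

The serialization time is the same either way; the per-op latency is what separates the two, and it only washes out once the transfer size gets large.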


----------



## zdude

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> That translated to mining XMR or Aeon in my mind....
> 
> 
> 
> Nothing so immediately profitable... more long-term profitable in the discovery of knowledge and bugs...

Ah, so actually doing productive things. Unlike the spare cycles on my servers.









Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> My understanding (could be wrong) is that 10GBaseT has similar latency as SFP+ which has identical latency to QSFP on short runs. The fiber solutions will be quicker over long distances but nobody here is running long distances.
> 
> 
> 
> 0.2-0.3µs for SFP+ vs 2.5-3µs for 10GBaseT is the typical range I've always seen.
> 
> An order of magnitude in relative latency - which only matters when you have many small files, but guess what chokes NFS and random block access?

I did not know that, although I have done very little experimentation on 10GBaseT.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> Ah, so actually doing productive things. Unlike the spare cycles on my servers


You don't want to heap too much expectation on your servers. They need a nurturing environment: room to experiment, fail, get piercings and tattoos, and generally make poor choices. It's all part of developing into a mature production environment...


----------



## cdoublejj

Also, I keep hearing about this "Ceph", so I looked it up and it's "unified distributed storage". What in the name of hell is "unified distributed storage"? Sounds like buzzwords. Can't they just say it's a file system that incorporates multiple SANs and NASes? And that's just what I'm assuming it is.


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> also i keep hearing aobut this "ceph" so i looked it up and it's a "unified distributed storage", what the name of hell is "unified distributed storage" sounds like buzz words. can't they just say it's file system that incorporates multiple SANs and NASs? and that's just what i'm assuming it is.


This industry loves buzzwords; "hyperconverged" seems to be the flavour of the month at the moment.


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> This industry loves buzzwords, "hyperconverged' seems to be flavour of the month at the moment.


That's another word I don't get, and it hurts my head. No matter how much research I do, I cannot find out what it means - just more words I don't understand.


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> also i keep hearing aobut this "ceph" so i looked it up and it's a "unified distributed storage", what the name of hell is "unified distributed storage" sounds like buzz words. can't they just say it's file system that incorporates multiple SANs and NASs? and that's just what i'm assuming it is.


Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *twerk*
> 
> This industry loves buzzwords, "hyperconverged' seems to be flavour of the month at the moment.
> 
> 
> 
> That's another word i don't get and it hurts my head and no matter how much research i do, i can not find out what means, just more words i don't understand.

The reason they don't call it a file storage system is that it can provide block and object storage as well as a file system. Ceph is not and never was meant to be just a NAS file system; it just happens to be pretty good at it by accident.


----------



## Norlig

I've got 2 separate machines at the moment.

1x HTPC/server running Windows and 1x FreeNAS machine for media storage.

What would be the best way to move the hard drives from my FreeNAS box to a virtual machine running on the Windows HTPC?

I've got an IT-flashed SAS card, and FreeNAS boots from some USB thumb drives.

Would it work with Hyper-V?

I.E. Putting my HTPC components into this case:









And boot FreeNAS in a VM, while still using my HTPC as a normal PC.


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> The reason they don't call it a file storage system is it can provide block and object storage as well as a file-system. Ceph is not and never was meant to be just a nas file system, it just is pretty good at it by accident.


I don't know what object-level storage is, but block sure sounds like a reference to hard drive blocks. Does it emulate a singular storage device, such as a hard drive, across multiple SANs?


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> The reason they don't call it a file storage system is it can provide block and object storage as well as a file-system. Ceph is not and never was meant to be just a nas file system, it just is pretty good at it by accident.
> 
> 
> 
> I don't what object level storage is but, block sure sounds like a reference to hard drive blocks, does it emulate a singular storage device such as a hard drive over multiple sans?

Yes, block storage allows it to create virtual block devices, which can be used either as iSCSI boot targets or virtual machine drives (KVM supports this now). Object storage allows it to serve as a database storage system. CephFS actually works using object storage on the backend, with a metadata server (MDS) pointing the client to the correct files and presenting the file structure.



This is what my lsblk output looks like on my server. The rbd devices are block devices from the ceph cluster mounted on the system to provide boot disks for containers and virtual machines.
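For anyone wondering what a file system "built on object storage" even means, here's a toy Python sketch (purely illustrative, nothing to do with Ceph's actual code) of a file layer striping data across a flat object store, which is roughly what CephFS and RBD do on top of RADOS:

```python
# Toy illustration of the "unified" idea: one flat object store underneath,
# with a file-like layer that stripes data into fixed-size numbered objects.
# Ceph's default stripe size is 4 MiB; it's tiny here for demonstration.

CHUNK = 4

object_store: dict[str, bytes] = {}  # stand-in for a RADOS pool

def write_file(name: str, data: bytes) -> None:
    # Stripe the "file" across zero-padded, numbered objects.
    for i in range(0, len(data), CHUNK):
        object_store[f"{name}.{i // CHUNK:08x}"] = data[i:i + CHUNK]

def read_file(name: str) -> bytes:
    # Reassemble by collecting the file's objects in stripe order.
    chunks = sorted(k for k in object_store if k.startswith(name + "."))
    return b"".join(object_store[k] for k in chunks)

write_file("hello.txt", b"hello unified storage")
assert read_file("hello.txt") == b"hello unified storage"
```

Block devices work the same way underneath: an RBD image is just another set of numbered objects, which is why one cluster can serve block, object, and file clients at once.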


----------



## cdoublejj

Quote:


> Originally Posted by *zdude*
> 
> Yes, block storage allows it to create virtual block devices which can be used either as an ISCSI boot/target or virtual machine drives (KVM supports this now). Object storage allows it to serve as a data base storage system. CephFS actually works using object storage on the backend with a database server (MDS) pointing the client to the correct files and presenting the file structure.
> 
> 
> 
> This is what my lsblk output looks like on my server. The rbd devices are block devices from the ceph cluster mounted on the system to provide boot disks for containers and virtual machines.


any DB or specific types of DBs?


----------



## zdude

As a side note, I figure I will ask here. Has anyone figured out a way to run an ethereum node (geth or parity) that doesn't write obscene amounts of data to disk continuously? I am seeing ~2TB per day from geth AFTER it is caught up. Because of its huge disk I/O requirements it needs to be run on an SSD or it will not ever catch up to the rest of the network but over the course of a week and a half geth was able to use 1.2% of the lifetime writes on 3 1TB 850 EVOs....
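For a sense of scale, a quick endurance calculation. The 150 TBW rating is the figure I recall for a 1TB 850 EVO - check your drive's datasheet - and the rest is just arithmetic on the numbers above:

```python
# Rough SSD-endurance arithmetic for a chain node's write load.
# Assumed: 150 TBW rated endurance per 1TB drive (verify on the datasheet).

writes_per_day_tb = 2.0       # ~2TB/day observed after sync
drives = 3                    # writes spread across 3 drives
rated_tbw_per_drive = 150.0   # assumed endurance rating, per drive

per_drive_daily = writes_per_day_tb / drives
days_to_rated_limit = rated_tbw_per_drive / per_drive_daily
print(f"{days_to_rated_limit:.0f} days (~{days_to_rated_limit / 365:.1f} years) to the rated limit")
```

So at that write rate you'd chew through the rated endurance in well under a year, which is why the disk I/O behavior matters so much here.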


----------



## zdude

Quote:


> Originally Posted by *cdoublejj*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zdude*
> 
> Yes, block storage allows it to create virtual block devices which can be used either as an ISCSI boot/target or virtual machine drives (KVM supports this now). Object storage allows it to serve as a data base storage system. CephFS actually works using object storage on the backend with a database server (MDS) pointing the client to the correct files and presenting the file structure.
> 
> 
> 
> This is what my lsblk output looks like on my server. The rbd devices are block devices from the ceph cluster mounted on the system to provide boot disks for containers and virtual machines.
> 
> 
> 
> any DB or specific types of DBs?

I am not entirely sure, I have only used object storage for CephFS, not anything else. I would expect the application needs to be coded specifically for that.


----------



## KyadCK

Finally got around to putting my storage server on the backbone. Had to significantly boost its allotted RAM and CPU usage to get it to its peak.



Far more appropriate for shared storage now.


----------



## zdude

Just got another payout from crypto mining, trying to decide what to buy with it. Whatever I am going to buy needs to be bought before the end of the year. Any cool <$500 addons for a server? I already have 128GB RAM, 2 12-core Xeons, SSD VM storage and 6 8TB HDDs. Right now I will probably just add a few more 8TB hard drives for data storage.


----------



## cekim

Quote:


> Originally Posted by *zdude*
> 
> As a side note, I figure I will ask here. Has anyone figured out a way to run an ethereum node (geth or parity) that doesn't write obscene amounts of data to disk continuously? I am seeing ~2TB per day from geth AFTER it is caught up. Because of its huge disk I/O requirements it needs to be run on an SSD or it will not ever catch up to the rest of the network but over the course of a week and a half geth was able to use 1.2% of the lifetime writes on 3 1TB 850 EVOs....


I'm not well versed in this, but had run into this issue in playing around with 128G thumb-drive based installs for mining experiments.

There is -lite and -cache (-cache_size??? don't recall exact name) which can be used to drastically reduce disk consumption, but evidently -lite is more experimental than -cache.

Short version is I was able to make big improvements here doing what the google gods suggested after searching for
"geth disk usage"

I'm planning on doing some more experiments in the coming weeks, so I'll post anything I find in that process.
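For reference, the flags look roughly like this (from memory; run `geth help` to confirm exact names and defaults on your version):

```shell
# Light client: fetches headers and data on demand; tiny disk footprint,
# but more experimental than a full sync
geth --syncmode light

# Full node with a larger in-memory cache (in MB) so database writes
# get batched instead of hammering the SSD continuously
geth --syncmode fast --cache 4096 --datadir /mnt/ssd/ethereum
```

The cache size is the knob that made the biggest difference in my experiments so far.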


----------



## silvrr

Quote:


> Originally Posted by *zdude*
> 
> Just got another payout from crypto mining, trying to decide what to buy with it. Whatever I am going to buy needs to be bought before the end of the year. Any cool <$500 addons for a server? I already have 128GB RAM, 2 12-core Xeons, SSD VM storage and 6 8TB HDDs. Right now I will probably just add a few more 8TB hard drives for data storage.


Do you have a battery backup in place?


----------



## cdoublejj

Yeah, I'd stock up on those 8 and 10 TB drives for a few years from now, when a few conk out.


----------



## mouacyk

While most of these are classic servers stuffed with TBs of RAM and server-grade components, a well-tuned desktop PC can likewise be repurposed and is often cheaper:

Web/file/mysql server, portage rsync/bin/package server, media/streaming server, compile server, virtualization server

OS: Gentoo 64-bit Multi-Lib
Case: Lian-Li PC7HX
Cooling: XSPC Rasa CPU block, EK TF5, 240mm Swiftech radiator, D5 PWM pump
CPU: I7-3770 41x @ 4.3GHz/4.5GHz
GPU: Intel HD 4000, GTX 980 Ti
Motherboard: ASUS P8Z77-WS 105MHz
Memory: 2x8GB 2100MHz
PSU: SS XP2 660W
Storage HDD(s): 4x HD's + 2x External Backups
Server Manufacturer: Self-built

Awaiting E5-1660 V2 and X79 mobo to move to 6-core. Also adding 120mm radiator and using EK MX CPU block.


----------



## cekim

Quote:


> Originally Posted by *mouacyk*
> 
> While most of these are classic servers stuffed with TB's of RAM and server-grade components, a well-tuned desktop PC can likewise be re-purposed and often is cheaper:
> 
> Web/file/mysql server, portage rsync/bin/package server, media/streaming server, compile server, virtualization server
> 
> OS: Gentoo 64-bit Multi-Lib
> Case: Lian-Li PC7HX
> Cooling: XSPC Rasa CPU block, EK TF5, 240mm Swiftech radiator, D5 PWM pump
> CPU: I7-3770 41x @ 4.3GHz/4.5GHz
> GPU: Intel HD 4000, GTX 980 Ti
> Motherboard: ASUS P8Z77-WS 105MHz
> Memory: 2x8GB 2100MHz
> PSU: SS XP2 660W
> Storage HDD(s): 4x HD's + 2x External Backups
> Server Manufacturer: Self-built
> 
> Awaiting E5-1660 V2 and X79 mobo to move to 6-core. Also adding 120mm radiator and using EK MX CPU block.


Even though I have multiple and different types of backup and rsync with checksums, I occasionally have a flash of existential panic when I think of the lack of ECC on my primary, largest, fastest disk store....

I'm working toward migrating to ZFS on an ECC capable system as the primary big/fast store, but until then, we are living dangerously... Looking forward to some of the accelerations of remote sync with ZFS...

My main archival backup is an ECC system, so the issue is a bit error being pushed with an rsync from the primary store (raid 10) to the archival backup... I could either store a broken file or corrupt an existing one via a bogus time-stamp or an artificial diff caused by a bit error.

The other saving grace of my setup is that the overwhelming majority of my large data is recoverable from other sources, if painful. The longer term data that isn't is already on the archival system and its backup and isn't mirrored on the big/fast non-ECC raid10 store... So, not subject to a spurious over-write with bad data.

Short version: desktop hardware's disadvantage is that server hardware brings some bennies in stability and quality that come into play when the unexpected happens...
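One belt-and-suspenders option, sketched here as hypothetical Python (the names and layout are mine, not anyone's actual setup): keep a checksum manifest of the primary store and diff against it before letting rsync push the primary over the archive.

```python
# Minimal bit-rot tripwire: record sha256 sums of a tree, then later list
# files whose content no longer matches the recorded manifest.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    # Relative path -> digest for every regular file under root.
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def changed_files(root: Path, manifest_file: Path) -> list[str]:
    """Files whose content differs from the recorded manifest (new files are skipped)."""
    old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
    new = build_manifest(root)
    return [name for name, digest in new.items() if old.get(name, digest) != digest]
```

Anything `changed_files` reports that you didn't deliberately modify is a candidate bit-flip, so you can check it before it propagates to the archive.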


----------



## KyadCK

Quote:


> Originally Posted by *mouacyk*
> 
> While most of these are classic servers stuffed with TB's of RAM and server-grade components, a well-tuned desktop PC can likewise be re-purposed and often is cheaper:
> 
> Web/file/mysql server, portage rsync/bin/package server, media/streaming server, compile server, virtualization server
> 
> OS: Gentoo 64-bit Multi-Lib
> Case: Lian-Li PC7HX
> Cooling: XSPC Rasa CPU block, EK TF5, 240mm Swiftech radiator, D5 PWM pump
> CPU: I7-3770 41x @ 4.3GHz/4.5GHz
> GPU: Intel HD 4000, GTX 980 Ti
> Motherboard: ASUS P8Z77-WS 105MHz
> Memory: 2x8GB 2100MHz
> PSU: SS XP2 660W
> Storage HDD(s): 4x HD's + 2x External Backups
> Server Manufacturer: Self-built
> 
> Awaiting E5-1660 V2 and X79 mobo to move to 6-core. Also adding 120mm radiator and using EK MX CPU block.


The problem with using desktops for many of us (having done it), is the obscene amount of RAM required to do virtualization in a way that does not waste other resources, combined with consumer chipsets not supporting very much.

Also you can pick up strong old Dell servers for $200, and maxing out the ram is easy at $200-500. Compared to upgrading a 3770 to anything modern, having to move up to DDR4 and all the costs that brings about, it isn't actually that bad. Likewise desktop cases have bad airflow for things like RAID cards or good NICs, and typically don't support very many hotswap bays even with adapters, which also add to the cost depending on need. I would say both my servers together (minus HDDs, maybe, depending on what yours are) would be cheaper to buy than your re-purposed rig even today.

Either way though, yours will absolutely be quieter, use less power, and can actually fit a GPU, allowing you to keep it in a room without tearing your hair out.


----------



## cdoublejj

Quote:


> Originally Posted by *KyadCK*
> 
> The problem with using desktops for many of us (having done it), is the obscene amount of RAM required to do virtualization in a way that does not waste other resources, combined with consumer chipsets not supporting very much.
> 
> Also you can pick up strong old Dell servers for $200, and maxing out the ram is easy at $200-500. Compared to upgrading a 3770 to anything modern, having to move up to DDR4 and all the costs that brings about, it isn't actually that bad. Likewise desktop cases have bad airflow for things like RAID cards or good NICs, and typically don't support very many hotswap bays even with adapters, which also add to the cost depending on need. I would say both my servers together (minus HDDs, maybe, depending on what yours are) would be cheaper to buy than your re-purposed rig even today.
> 
> Either way though, yours will absolutely be quieter, use less power, and can actually fit a GPU, allowing you to keep it in a room without tearing your hair out.


Yeah, I definitely had to get creative with airflow in the last 2 tower cases I did builds in. Server builds demand, or at least can use, a little more.


----------



## Ziku

Quote:


> Originally Posted by *KyadCK*
> 
> Finally got around to putting my storage server on the backbone. Had to significantly boost it's allotted RAM and CPU usage to get it to it's peak.
> 
> 
> 
> Far more appropriate for shared storage now.


Wow :) That's great


----------



## cdoublejj

So before I go and get a switch like this: https://www.amazon.com/dp/B0723DT6MN/_encoding=UTF8?coliid=I37JXNEJVMU5XI&colid=1K5V20JIA8KJO&psc=0

any nominations for a USED 24-port switch with SFP+/10G uplinks? I'm going to be pushing anywhere from 7-16 IP cams on my server, not to mention file sharing for music/movie streaming. I feel like a bunch of 4K cameras, even @ 1080p, would kill the server's 1G connection, so I figured an SFP+ uplink would be in order.


----------



## zdude

I don't think you will be able to beat that price.


----------



## KyadCK

Agreed, you'd be hard pressed to beat that even on ebay.


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> Finally got around to putting my storage server on the backbone. Had to significantly boost it's allotted RAM and CPU usage to get it to it's peak.
> 
> 
> 
> Far more appropriate for shared storage now.


Have a link or any hints to your setup? I've been playing around with various approaches over the years... I can saturate 10GbE, but not with redundancy... so I rely on manually mirroring the important bits from a fast 30T partition to a slower 15T partition to save my bacon if something goes wrong. When a drive died (SMART error this summer) I still had the important data on the smaller raid 10, but had to rebuild the raid 0 manually. Not ideal, but functional and fast, so I am always on the lookout for better approaches.

I'm also not wowed by ZFS's cached/compressed performance, as my bottleneck is dumping out an enormous amount of data at some point and then reading it once later... not conducive to caching acceleration... I need raw throughput without the tricks that apply to serving lots of small static files to many users over and over...

I need lots of large dynamic files to perform... and by large, I mean larger than the memory of either client or server...


----------



## KyadCK

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Finally got around to putting my storage server on the backbone. Had to significantly boost it's allotted RAM and CPU usage to get it to it's peak.
> 
> 
> 
> Far more appropriate for shared storage now.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Have a link or any hints to your setup? I've been playing around with various approaches over the years... *I can saturate 10GbE but not with redundancy*.... so I rely on manually mirroring the important bits a fast 30T partition to a slower 15T partition to save my bacon if something goes wrong. When a drive dies (smart error this summer) I still have the important data on the smaller raid 10 but had to rebuild the raid 0 manually. Not ideal, but functional and fast so I am always on the look out for better approaches.
> 
> I'm also not wowed by ZFS's cached/compressed performance, as my bottleneck is dumping out an enormous amount of data at some point and then reading it once later... not conducive to caching acceleration... I need raw throughput without the tricks that apply to serving lots of small static files to many users over and over...
> 
> I need lots of large dynamic files to perform... and by large, I mean larger than the memory of either client or server...

SSDs and a RAID card (RAID50), OS is just Windows Server 2016. Also note that it is one big 16GB file, not many small ones, and I explicitly used Robocopy with 16 threads.

If you need looooots of small stuff, why not drop an NVMe SSD in there and have the OS do automated incremental backups to slower storage overnight? No HDD will ever be good at lots of small stuff regardless of config, but a single NVMe drive can do a lot.


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> SSDs and a RAID card (RAID50), OS is just Windows Server 2016. Also note that it is one big 16GB file, not many small ones, and I explicitly used Robocopy with 16 threads.
> 
> If you need looooots of small stuff, why not drop an NVMe SSD in there and have the OS do automated incremental backups to slower storage overnight? No HDD will ever be good at lots of small stuff regardless of config, but a single NVMe drive can do a lot.


No, I need lots of LAAARGE stuff... I'll have 4-8 10GbE compute nodes either creating or digesting giant DBs on a common NFS mount. As we speak, the disk I/O is not a limiting factor since there is enough optimization on the client side that upstream processing is now the limit, but I'm looking forward to Mr Amdahl during the next phase where those bottlenecks are removed...

Then it will be time for 40GbE ;-)


----------



## KyadCK

Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> SSDs and a RAID card (RAID50), OS is just Windows Server 2016. Also note that it is one big 16GB file, not many small ones, and I explicitly used Robocopy with 16 threads.
> 
> If you need looooots of small stuff, why not drop an NVMe SSD in there and have the OS do automated incremental backups to slower storage overnight? No HDD will ever be good at lots of small stuff regardless of config, but a single NVMe drive can do a lot.
> 
> 
> 
> No, I need lots of LAAARGE stuff... I'll have 4-8 10GbE compute nodes either creating or digesting giant DBs on a common NFS mount. As we speak, the disk I/O is not a limiting factor since there is enough optimization on the client side that upstream processing is now the limit, but I'm looking forward to Mr Amdahl during the next phase where those bottlenecks are removed...
> 
> Then it will be time for 40GbE ;-)

I mean, if you're making money off it anyway you could do the dream everyone wants and shove 24 2TB NVMe SSDs into 3 sets of 8-drive RAID-whatever and span them on an Epyc server and drop in whatever network cards you want... Even one NVMe drive is capable of reading at a few GB/s.

Realistically though, the progression from 7200rpm HDDs, to 15k HDDs, to SATA SSDs, to U.2 NVMe SSDs should leave enough headroom to keep stepping up the storage side if we're not talking 40gbps yet, provided the money is available. If it isn't, then distributed storage, so you simply have more spindles available across more servers, could help.

Alternatively; SAN in place of NAS. TCP/IP eats into your latencies and total bandwidth a lot, SAN would help streamline.

Every available option takes $$$$$$$ though.


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> I mean, if you're making money off it anyway you could do the dream everyone wants and shove 24 2TB NVMe SSDs into 3 sets of 8-drive RAID-whatever and span them on an Epyc server and drop in whatever network cards you want... Even one NVMe drive is capable of reading at a few GB/s.
> 
> Realistically though the difference between 7200 HDDs, to 15k HDDs, to SATA SSDs, to U.2 NVMe SSDs should leave enough headroom to continue stepping up storage side if we're not talking 40gbps yet provided the money is available, and if it isn't then perhaps distributed storage so you simply have more spindles available from more servers could help.
> 
> Alternatively; SAN in place of NAS. TCP/IP eats into your latencies and total bandwidth a lot, SAN would help streamline.
> 
> Every available option takes $$$$$$$ though.


That's the rub - I'm not making money off it day-to-day byte-by-byte... So, there are resource limits...

nvme ssds are great but...

1. random access is actually not impressive until you get to optane/cross-point - which is pretty ludicrously expensive per GB. Without them, shallow-queue access to 4K blocks can be ~50MB/s or worse... As I mentioned, this is already largely optimized out, as something as small as 4K is never requested, but the more random the access, the less amazing the throughput.

2. expensive per GB - I need to handle 10T at a minimum and would like to support 20-30T as 10T requires pretty aggressive house cleaning. Not to mention SSDs hate being full with a passion in terms of life-span reduction and performance. You don't want to have a 10T SSD system at 9T all the time. You will kill it.

3. 10T of SSD would presently be about $12K; 30T of HDD with redundancy is $1800-$2K... so... yeah, there's that...

I'm not looking for nose-bleed performance per se (yes, of course I want it all!) so much as making the most of 8-10 spinning disks at a time. Configuration choices can make or break performance and ease-of-use. I think I've just free'd up 4 850 pro SATA ssds, so those go into the experimentation pool as well...

I do have 2 4x NVMe cards (one active/PLX, one passive/bifurcated), so I have set up 2T NVMe arrays on them to play around with the highest tier of performance, but space is limited, so that's even more forward-looking than what I expect to do with the spinning disks (with or without SSD caches) right now.


----------



## parityboy

Quote:


> Originally Posted by *KyadCK*
> 
> Alternatively; SAN in place of NAS. TCP/IP eats into your latencies and total bandwidth a lot, SAN would help streamline.


Do you mean Fibre Channel rather than iSCSI, or perhaps iSCSI on a separate switch?


----------



## KyadCK

Quote:


> Originally Posted by *parityboy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Alternatively; SAN in place of NAS. TCP/IP eats into your latencies and total bandwidth a lot, SAN would help streamline.
> 
> 
> 
> Do you mean Fibre Channel rather than iSCSI, or perhaps iSCSI on a separate switch?

iSCSI still runs over TCP/IP, so it doesn't escape that overhead.

Actual Fibre Channel SANs are rated in IOPS directly, as there is near-zero overhead. The downside is needing specialized (and expensive) hardware. It also isn't nearly as versatile as iSCSI solutions.
Quote:


> Originally Posted by *cekim*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> I mean, if you're making money off it anyway you could do the dream everyone wants and shove 24 2TB NVMe SSDs into 3 sets of 8-drive RAID-whatever and span them on an Epyc server and drop in whatever network cards you want... Even one NVMe drive is capable of reading at a few GB/s.
> 
> Realistically though the difference between 7200 HDDs, to 15k HDDs, to SATA SSDs, to U.2 NVMe SSDs should leave enough headroom to continue stepping up storage side if we're not talking 40gbps yet provided the money is available, and if it isn't then perhaps distributed storage so you simply have more spindles available from more servers could help.
> 
> Alternatively; SAN in place of NAS. TCP/IP eats into your latencies and total bandwidth a lot, SAN would help streamline.
> 
> Every available option takes $$$$$$$ though.
> 
> 
> 
> That's the rub - I'm not making money off it day-to-day byte-by-byte... So, there are resource limits...
> 
> nvme ssds are great but...
> 
> 1. random access is actually not impressive until you get to optane/cross-point - which is pretty ludicrously expensive per GB. Without them - shallow queue access to 4K blocks can be ~50MB/s or worse... As I mentioned - this is already largely optimized out - as something as small as 4K is never requested, but the more random the access the less amazing the throughput.
> 
> 2. expensive per GB - I need to handle 10T at a minimum and would like to support 20-30T as 10T requires pretty aggressive house cleaning. Not to mention SSDs hate being full with a passion in terms of life-span reduction and performance. You don't want to have a 10T SSD system at 9T all the time. You will kill it.
> 
> 3. 10T of SSD would presently be about $12K; 30T of HDD with redundancy is $1800-$2K... so... yeah, there's that...
> 
> I'm not looking for nose-bleed performance per se (yes, of course I want it all!) so much as making the most of 8-10 spinning disks at a time. Configuration choices can make or break performance and ease-of-use. I think I've just freed up four 850 Pro SATA SSDs, so those go into the experimentation pool as well...
> 
> I do have two 4x NVMe cards (one active/PLX, one passive/bifurcated), so I have set up 2T NVMe arrays on them to play around with the highest tier of performance, but space is limited, so that's even more forward-looking than what I expect to do with the spinning disks (with or without SSD caches) right now.
Click to expand...

Dunno if you know this, but 4k blocks on HDDs don't even hit the MB/s mark most days. 4k is _exactly_ where SSDs of either kind excel.

Here's an example of 4k with one thread and a queue depth of one, on a 950 Evo NVMe on a 2.0 x4 riser vs 6x 4TB 7200RPM HDDs in RAID6 on an H700 RAID card, both on a Windows Server 2016 VM on ESXi 6.0;



You know, just 60 *times* the speed in a worst case scenario. But as you said, 4ks are rare... except NVMe murders spindles all day every day, in every metric.



SATA SSDs work too, it's all about not waiting for the spindle to catch up. If you have a good RAID card, then regular SATA SSDs can max your PCI-e lane on anything sequential and will be miles better than any spindle array you can make in small stuff.

I'm not trying to convince you to spend thousands of dollars, but asking to max a 10gbps connection with HDDs is asking too much. They can't even max their own 3/6gbps connections doing pure sequential, so you're reliant on caching.

Your alternatives are to:

- distribute the load over more nodes, giving you asymmetrical access and faster response times, since you can handle a higher number of requests at the same time, reducing seek latency and possibly adding more redundancy;
- move to storage mediums that are more capable of handling many different requests simultaneously;
- or both.

Unfortunately, all options are expensive, and even more so if you want enterprise anything.
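The "HDDs can't max 10gbps" point is easy to sanity-check with some back-of-envelope shell arithmetic (the per-drive rate below is an assumed round number, not a measurement):

```shell
# 10 Gbps is roughly 1250 MB/s of payload; a 7200RPM drive manages
# maybe ~200 MB/s sequential on its outer tracks (assumed figure).
link_mbps=1250
hdd_mbps=200
# Ceiling division: drives needed to saturate the link, assuming a
# perfect stripe with zero RAID or protocol overhead.
echo "drives needed: $(( (link_mbps + hdd_mbps - 1) / hdd_mbps ))"
```

And that is the best case: pure sequential with no overhead; random access makes the gap far worse.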


----------



## burksdb

Anyone run into issues where Ubuntu Server will not ramp the CPU up to full speed? Dual 2670s, but I can't seem to get above 1.2GHz.


----------



## cekim

Quote:


> Originally Posted by *burksdb*
> 
> Anyone run into issues where Ubuntu Server will not ramp the CPU up to full speed? Dual 2670s, but I can't seem to get above 1.2GHz.


I've had various updates over the years that saw the power saving and/or on-demand governor be too aggressive such that if only one of many cores was loaded, the clock would not ramp up.

There are various settings to tune these governors, but to see if this is your issue, you can try something like this:

sudo cpupower frequency-set -g performance

This should have your clocks bouncing to their peak under even a hint of load. It will consume more power at idle of course, but it will confirm that the issue is your performance governor.
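For reference, this is roughly what `cpupower` does under the hood; a minimal sketch (assuming a cpufreq-capable kernel and root) if the tool itself isn't installed:

```shell
# Set every core's governor to "performance" by writing the cpufreq
# sysfs knobs directly (run as root). Skips entries that aren't writable.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -w "$g" ] && echo performance > "$g"
done

# Read back the result for the first core to confirm it took.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```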


----------



## burksdb

Quote:


> Originally Posted by *cekim*
> 
> I've had various updates over the years that saw the power saving and/or on-demand governor be too aggressive such that if only one of many cores was loaded, the clock would not ramp up.
> 
> There are various settings to tune these governors, but to see if this is your issue, you can try something like this:
> 
> sudo cpupower frequency-set -g performance
> 
> This should have your clocks bouncing to their peak under even a hint of load. It will consume more power at idle of course, but it will confirm that the issue is your performance governor.


Doesn't look like that made any difference, unless I'm not looking at the right part. Running:


Spoiler: Warning: Spoiler!



Code:


grep MHz /proc/cpuinfo

comes up with

Code:

[email protected]:~$  grep MHz /proc/cpuinfo
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 1199.960
[email protected]:~$

Even Watching it live

Code:

watch -n.1 "cat /proc/cpuinfo | grep \"^[c]pu MHz\""

Frequency will never go above 1200



The only other thing I can think of is that the intel_pstate driver is having issues or something. I was going to try disabling it - I have all the power saving options in the BIOS disabled as well.
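As an aside, a `sort | uniq -c` histogram condenses that wall of identical frequency lines; demonstrated here on a captured three-line sample (on the live box you'd pipe in `grep '^cpu MHz' /proc/cpuinfo` instead):

```shell
# Collapse repeated per-core frequency lines into counts.
sample='cpu MHz         : 1199.960
cpu MHz         : 1199.960
cpu MHz         : 2600.000'
printf '%s\n' "$sample" | sort | uniq -c
```

One line per distinct clock, with a count in front, so a stuck-at-1200 machine shows up as a single row.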


----------



## cekim

Quote:


> Originally Posted by *KyadCK*
> 
> I'm not trying to convince you to spend thousands of dollars, but asking to max a 10gbps connection with HDDs is asking too much. They can't even max their own 3/6gbps connections doing pure sequential, so you're reliant on caching.
> 
> Your alternatives are to:
> 
> - distribute the load over more nodes, giving you asymmetrical access and faster response times, since you can handle a higher number of requests at the same time, reducing seek latency and possibly adding more redundancy;
> - move to storage mediums that are more capable of handling many different requests simultaneously;
> - or both.
> 
> Unfortunately, all options are expensive, and even more so if you want enterprise anything.


Aware of all of that... My comment was admittedly poorly formed and confusing as to what I expect and what I am trying to achieve...

I'm tuning not only the array - using different file-systems, hardware, number of disks, types of disks, etc... but the client side software as well in this grand experiment trying to understand as much as possible about what works and what does not.

Very generally speaking, I work hard to ensure that we never do 4K random and that, as much as possible, reads/writes are bundled together into very large blocks.

I've started experimenting with ssd caching more recently, so I'm trying to learn about as many setups as I can.


----------



## cekim

Quote:


> Originally Posted by *burksdb*
> 
> Doesn't look like that made any difference, unless I'm not looking at the right part. Running:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Code:
> 
> grep MHz /proc/cpuinfo
> 
> comes up with
> 
> Code:
> 
> [email protected]:~$  grep MHz /proc/cpuinfo
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> cpu MHz         : 1199.960
> [email protected]:~$
> 
> Even Watching it live
> 
> Code:
> 
> watch -n.1 "cat /proc/cpuinfo | grep \"^[c]pu MHz\""
> 
> Frequency will never go above 1200
> 
> 
> 
> The only other thing I can think of is that the intel_pstate driver is having issues or something. I was going to try disabling it - I have all the power saving options in the BIOS disabled as well.


Still true if you run an all-core load like stress-app-test?


----------



## burksdb

Quote:


> Originally Posted by *cekim*
> 
> Still true if you run an all-core load like stress-app-test?


yup.

Actually, it looks like disabling intel_pstate on boot solved my issue, which is odd since I had everything I could find related to P-states disabled, but I will take it.
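For anyone else hitting this: disabling intel_pstate at boot is usually done via the kernel command line. A sketch for an Ubuntu/GRUB2 setup (the exact file and the existing `quiet` option are assumptions; check your own /etc/default/grub):

```shell
# /etc/default/grub - add intel_pstate=disable to the kernel command
# line so the kernel falls back to the acpi-cpufreq driver:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"
```

Then run `sudo update-grub` and reboot for it to take effect.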


----------



## cekim

Quote:


> Originally Posted by *burksdb*
> 
> yup.
> 
> Actually, it looks like disabling intel_pstate on boot solved my issue, which is odd since I had everything I could find related to P-states disabled, but I will take it.


Hmm, I wonder if there's an issue in your BIOS setup? My Haswell, Broadwell, and Skylake machines all behave well with that enabled.


----------



## cdoublejj

It would be nice if I could find a 24-port switch with at least a 10G uplink that has VLANs and PoE for IP cams and Ubiquiti AC Pros.


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> It would be nice if I could find a 24-port switch with at least a 10G uplink that has VLANs and PoE for IP cams and Ubiquiti AC Pros.


I bought a T1700G-28TQ last week. Honestly very impressed with it. 24 gigabit ports and 4 10GbE SFP+ uplinks.

For the price you really can't go wrong. Fanless too!


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> I bought a T1700G-28TQ last week. Honestly very impressed with it. 24 gigabit ports and 4 10GbE SFP+ uplinks.
> 
> For the price you really can't go wrong. Fanless too!


How many watts is the PoE? Can it do 12, 24 and 48V?


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> how many watts is the POE? can it do 12, 24 and 48v?


It doesn't do PoE. If you want PoE and 10G uplinks, you are looking at a LOT of money.

You are better off using PoE injectors or just buying a cheap PoE switch.


----------



## cdoublejj

The MikroTik I linked a page back is 24 ports, managed, with 10G, no PoE, and only $135. I'm just wondering if 10+ PoE injectors would suck significantly more power than a single PSU/transformer. It would save me money since I have a pile of them.
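The injector-vs-switch power question mostly comes down to per-unit conversion overhead; a toy comparison with purely made-up overhead figures, just to frame it:

```shell
# Assumed numbers: each wall-wart injector wastes ~1.5W at idle in
# conversion losses; one shared PoE supply wastes ~6W total.
awk 'BEGIN {
    injectors = 10 * 1.5
    shared    = 6.0
    printf "10 injectors: %.1fW overhead, one shared PSU: %.1fW\n", injectors, shared
}'
```

With made-up but plausible per-unit losses, the pile of injectors is idling away roughly twice the overhead of one shared supply; measuring your actual injectors at the wall is the only way to know for sure.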


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> The MikroTik I linked a page back is 24 ports, managed, with 10G, no PoE, and only $135. I'm just wondering if 10+ PoE injectors would suck significantly more power than a single PSU/transformer. It would save me money since I have a pile of them.


I've not heard great things about the Mikrotik switches to be honest... the firmware is pretty flaky.

If you need as many as 10 PoE devices then I'd definitely get a PoE switch.

Alternatively you can get rackmount PoE injectors which will save you from having individual injectors messing up the place. Stuff like this:

https://www.wifi-stock.co.uk/details/12_port_gigabit_poe_rack_mount_panel.html


----------



## cdoublejj

oh man, +REP for that! did not know that was a thing!


----------



## beers

Quote:


> Originally Posted by *cdoublejj*
> 
> The MikroTik I linked a page back is 24 ports, managed, with 10G, no PoE, and only $135. I'm just wondering if 10+ PoE injectors would suck significantly more power than a single PSU/transformer. It would save me money since I have a pile of them.


A Cisco 3750E or 3560E can do the PoE you want (PS model) and they have 2x 10G uplinks. You can probably find a used one for less than the price you listed.


----------



## cdoublejj

Quote:


> Originally Posted by *beers*
> 
> A Cisco 3750E or 3560E can do the poe you want (ps model) and have 2x 10g uplinks. You can probably find a used one for less than you listed.


Since it's a Cisco, I assume it's safe to say I'll need several Noctua fans to quiet it down. Definitely added to the watch list.


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> Since it's a Cisco, I assume it's safe to say I'll need several Noctua fans to quiet it down. Definitely added to the watch list.


Catalysts aren't too bad, especially compared to Nexuses. They're loud but not unbearable; it depends where they are in your house.


----------



## Aximous

Quote:


> Originally Posted by *twerk*
> 
> I've not heard great things about the Mikrotik switches to be honest... the firmware is pretty flaky.
> 
> If you need as many as 10 PoE devices then I'd definitely get a PoE switch.
> 
> Alternatively you can get rackmount PoE injectors which will save you from having individual injectors messing up the place. Stuff like this:
> https://www.wifi-stock.co.uk/details/12_port_gigabit_poe_rack_mount_panel.html


I'm curious about your experience with MikroTik switches; I'm on the fence about buying that exact switch mentioned before.

Care to elaborate?


----------



## cdoublejj

Quote:


> Originally Posted by *twerk*
> 
> Catalysts aren't too bad, especially compared to Nexus'. They're loud but not unbearable, depends where they are in your house.


Slatted/vented closet in the main hallway across from the bedrooms.







lol
Quote:


> Originally Posted by *Aximous*
> 
> I'm curious about your experience with Mikrotik switches, I'm just on the fence on buying that exact switch mentioned before.
> 
> Care to elaborate?


The Spiceworks and ServeTheHome forums would also be great places to inquire about the MikroTik.

https://community.spiceworks.com/topic/2105298-are-mikrotik-switches-any-good-how-about-this-one

EDIT: maybe twerk can confirm, but it seems it perhaps has a crappy UI.


----------



## dir_d

Thinking about going super simple and getting a Dell R720xd with FreeNAS 11.1, using jails and some Windows Server 2016 VMs on bhyve. What do you guys think?


----------



## cekim

Quote:


> Originally Posted by *dir_d*
> 
> Thinking about going super simple and getting a Dell R720xd with FreeNAS 11.1, using jails and some Windows Server 2016 VMs on bhyve. What do you guys think?


I've been looking at those and the x3650 M4 but haven't been impressed with prices lately given the now-aged hardware. I assume it's DDR prices doing that.

I was looking more for a high-speed storage controller that could also do VMs. Ended up moving some things around and buying a little ECC DDR4 and some odds and ends to tide me over until prices drop, so I could divide up existing hardware.

Seems like reasonable hardware if you can find a good deal. Can't speak to bhyve. I played with FreeNAS, but at least when I did, I had issues getting 10GbE NFS performance where I wanted it. Could be me... I've had CentOS, Red Hat or Fedora running on something since the 90s, so change is hard, and matching my familiarity with a different OS is hard.

I also like having a hypervisor that stays well patched for security on anything that talks to the outside world. KVM is obtuse, but works well.


----------



## twerk

Quote:


> Originally Posted by *cdoublejj*
> 
> Slatted/vented closet in the main hallway across from the bedrooms.
> 
> 
> 
> 
> 
> 
> 
> lol
> The Spiceworks and ServeTheHome forums would also be great places to inquire about the MikroTik.
> 
> https://community.spiceworks.com/topic/2105298-are-mikrotik-switches-any-good-how-about-this-one
> 
> EDIT: maybe Twerk can confirm but, it seems perhaps a crappy UI.


I know how you feel... I live in a flat and have all my servers in a cupboard in the hallway. Have to keep things quiet. I run a fanless switch and have to spend a bit extra on modern Sandy Bridge+ servers that don't make too much noise.

Mikrotik routers are great, the UI has a steep learning curve but it's very powerful. The functionality is huge.

Their switches basically started out with a bodged version of RouterOS, and it really didn't work well. It would crash, the UI was buggy, etc. It's now transitioned into SwOS and is slightly better, but it still feels very much "Beta".


----------



## cekim

twerk said:


> Quote:Originally Posted by *cdoublejj*
> ...
> Their switches basically started out with a bodged version of RouterOS, and it really didn't work well. It would crash, the UI was buggy, etc. It's now transitioned into SwOS and is slightly better, but it still feels very much "Beta".


Don't sugar coat it - tell us how you really feel about it. ;-)


----------



## Mr Pink57

Just my little RPi 3 B+. Right now it's just a Squid cache; I am waiting on Gargoyle to finish compiling their new firmware to see how that will fare on this device.


----------



## DaveLT

Here's mine. Literally made out of spare parts 

i7 2600 @ full OC forgot what it was but should be 4GHz or so
Z77X-D3H
Intel 530 240GB SSD
CM G750M PSU
Deepcool Captain 240 AIO
Kingston HyperX 16GB 1600 ddr3 kit
And an old 3TB HDD that has nothing on it
NZXT Switch 810 that has repainted plastic panels and no grommets because they've gone back to earth (i.e carbon)

I do have a Dell H310 that I wanted to use with it, and two 600GB 2.5" SAS drives, but both of those drives are dead. What a shame.


----------



## cdoublejj

I wanna try an RPi 3B+ or an Asus Tinker Board


----------



## DaveLT

cdoublejj said:


> i wanna try a R Pi 3B+ or an Asus Tinker Board


Instead of an RPi (they don't match my performance requirements OR price) I went with an OrangePi... or more particularly a Zero.

At least I get four A7 cores, not the one sad single core I'd get if I bought the RPi Zero.

It really isn't bad at all though: it has WiFi, 100M LAN, and 1 USB port, which can be further expanded if you want. It has no HDMI, but my application will not need a display.


----------



## mbmumford

Well I placed the order for my custom rackmount case last night, and I should have the completed drawings in about 2 weeks. 

I can't wait to get my hands on this beast!


----------



## cekim

Started a major BDC (basement data center) re-work last summer... almost ready for the power outage we just had. ;-)

The 3rd-tier backup didn't shut down cleanly, as it was in the middle of incorporating a new disk into the array. Its battery held out like a champ for 73 minutes, but it wasn't long enough. Everything else is back up and running.


----------



## fg2chase

If you want to know how many drives a 750D can fit, the answer is 24...


----------



## deafboy

Nice! Why's pool 4 only have 5 disks?


----------



## fg2chase

deafboy said:


> Nice! Why's pool 4 only have 5 disks?


Because there is a 950 Pro occupying that port; you can see that SSD if you look hard enough.


----------



## Unknownm

And I thought having two bays was bad enough. Same case, damn









Sent from my HTC 10 using Tapatalk


----------



## watever44

fg2chase said:


> If you want to know how many drives a 750D can fit the answer is 24...


Wow
What do you stock on these?

Sent from my LG-M470 using Tapatalk


----------



## fg2chase

watever44 said:


> Wow
> What do you stock on these?
> 
> Sent from my LG-M470 using Tapatalk


stuff


----------



## Lady Fitzgerald

fg2chase said:


> stuff


:lachen:


----------



## fg2chase

watever44 said:


> Wow
> What do you stock on these?
> 
> Sent from my LG-M470 using Tapatalk












This stuff


----------



## shadow5555

My newly rebuilt Plex / Hyper-V / FlexRAID storage / anything-else-I-need server

Corsair 760T case
Supermicro X9DRE-LN4F
Dual hex-core Xeon CPUs (6 cores / 12 threads each)
Dual Noctua NH-D9DX
96GB DDR3 ECC RAM
Dual LSI HBAs
Dual 850 120GB SSDs that will go in RAID 0
480GB SSD for VM storage
X520 10Gb NIC
50+ TB of storage (carried over from the old storage server)


----------



## SystemTech

Here is my new baby:






















A Dell R720.
Specs are :

Dual Intel Xeon octa-core E5-2670 2.60GHz (16 cores, 32 threads total)
64GB RAM (DDR3 registered ECC DIMMs)
16 x hot-swap hard drive bays
Dell PERC H710 RAID controller with BBWC (supports RAID 0, 1, 10, 5, 50, 6, 60)
4 x Gigabit network ports (10/100/1000)
2 x redundant power supplies
4 x Crucial MX500 250GB SSDs in RAID 10 for OS + VHDs
4 x ST5000LM000 5TB HDDs in RAID 6 for storage (shucked from Seagate Backup Plus 5TB portables)

More details available in my thread : http://www.overclock.net/forum/18083-build-logs/1682193-build-log-32thread-64gb-ram-home-server.html
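A quick usable-capacity sanity check for the two arrays above (decimal units, before filesystem overhead): RAID 10 keeps half the drives' worth of space, and RAID 6 keeps n-2 drives' worth.

```shell
# 4x 250GB SSDs in RAID 10 and 4x 5TB HDDs in RAID 6.
echo "RAID 10: $(( 4 / 2 * 250 )) GB usable"
echo "RAID 6:  $(( (4 - 2) * 5 )) TB usable"
```

Note that a 4-drive RAID 6 gives the same usable space as RAID 10 but can survive any two drive failures, at the cost of parity-write overhead.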


----------



## cekim

SystemTech said:


> Here is my new baby:
> View attachment 154825
> 
> View attachment 154833
> 
> View attachment 154841
> 
> 
> A Dell R720.
> Specs are :
> 
> Dual Intel Xeon OCTA Core E5-2670 2.60Ghz (16 cores, 32 threads total)
> 64Gb RAM (DDR3 Registered ECC Dimms)
> 16 X Hot Swap Hard Drive Bays
> Dell PERC H710 Raid Controller with BBWC (Support RAID0,RAID1,RAID10,RAID5,RAID50,RAID6, RAID 60)
> 4 X Gigabit Network Ports 10/100/1000
> 2 X Redundant Dual Power Supplies
> 4 x Crucial MX500 250GB SSD's in RAID 10 for OS + VHD's
> 4 x ST5000LM000 5TB HDDs IN RAID 6 for storage (shucked from Seagate Backup Plus 5TB Portable)
> 
> More details available in my thread : http://www.overclock.net/forum/18083-build-logs/1682193-build-log-32thread-64gb-ram-home-server.html


Good stuff...


----------



## cekim

Finally removed the "beige" (well, white and blue) monstrosity from the rack and replaced it with some more respectable rack hardware for the same, but much improved purpose (KVM host and KVM/LXC experimentation). Sloooooooowly coming together.


----------



## iamdjango

*Quad Socket 192 Threads*

Here's my contribution:

CPU: 4 x E7-8894V4 ES 24 Core @ 2.3Ghz base, 2.7Ghz all, 3.2Ghz boost
RAM: 512GB DDR4 (16GB 2Rx4 DIMMs, mix of Micron and Samsung ebay specials as ram is stupid expensive these days)
MB: Supermicro X10QBL-4CT
NIC: X550T2 20Gbit Bonded 
NVMe: 256GB Samsung PM961
Case: Rosewill Blackhawk Ultra (heavily modified to fit non standard mb, riveted drive cages removed, top fan grill altered to accommodate push/pull cooling through AIOs)
Coolers: 4 x 92mm Asetek 545LC AIOs, 17+ Noctua fans of various sizes.
OS: Debian 9

That chassis picture was part way through my build and is a little old now. I switched the bottom intake for a 200mm, moved the old bottom 140mm to the front top and attached 2 additional 92mm to the motherboard to cool ram. System pulls around 1kW at load, idles around 160w. Added some pretty time series graphs and htop screenshot of it running too


----------



## TheBloodEagle

That looks insane and beastly as heck! Really interesting build. You might be able to fit a 140mm fan where the 5.25" slots are (if true 140mm spacing holes) since the edge of the bays seem to have a few holes still (possibly use washer & nut). A 5.25" bay is 133.35mm, so those holes might just be in reach. Just throwing ideas out there. There are 5.25" adapters of course but since you don't have the bay side walls anymore to screw in, I don't think it would work. Not to say you really need it.


----------



## Bill Owen

Nice Dual Xeon build!


----------



## iamdjango

Thanks  It's a Broadwell E7 V4 quad socket MB. It's very efficient given my space and cooling constraints. I have another dual socket 44 core E5-2696V4 OEM Linux machine I use as a network gateway and workstation (see attached, again an old picture and has changed a fair bit since that was taken). I prefer this quad socket, saves on a lot of desk space.



TheBloodEagle said:


> That looks insane and beastly as heck! Really interesting build. You might be able to fit a 140mm fan where the 5.25" slots are (if true 140mm spacing holes) since the edge of the bays seem to have a few holes still (possibly use washer & nut). A 5.25" bay is 133.35mm, so those holes might just be in reach. Just throwing ideas out there. There are 5.25" adapters of course but since you don't have the bay side walls anymore to screw in, I don't think it would work. Not to say you really need it.


Cheers, it's definitely different with its 4x all-in-one water cooling. Yep, that's where I moved the bottom 140mm fan to in the end, after fabbing two metal plates. It helps a lot with temperature, as the RAM gets cooked by the Jordan Creek II memory buffer heatsinks. I bought all my RAM second-hand off eBay and had several soft ECC errors. After replacing those bad DIMMs (3 out of 32, not a bad failure rate considering it was half the price of new; RAM is such a rip these days  ) and adding the additional fans, it's running great now.


----------



## cekim

iamdjango said:


> Cheers, it's definitely different with its 4x all-in-one water cooling. Yep, that's where I moved the bottom 140mm fan to in the end, after fabbing two metal plates. It helps a lot with temperature, as the RAM gets cooked by the Jordan Creek II memory buffer heatsinks. I bought all my RAM second-hand off eBay and had several soft ECC errors. After replacing those bad DIMMs (3 out of 32, not a bad failure rate considering it was half the price of new; RAM is such a rip these days  ) and adding the additional fans, it's running great now.


Awesome machine indeed... 

Not sure where this 1/2 price ram of which you speak lives on ebay? ;-)

I'm seeing lots of full-boat $200/16G for "pre-owned".... I'll keep looking.


----------



## iamdjango

cekim said:


> Awesome machine indeed...
> 
> Not sure where this 1/2 price ram of which you speak lives on ebay? ;-)
> 
> I'm seeing lots of full-boat $200/16G for "pre-owned".... I'll keep looking.


Your best bet is to search for obscure EOL modules from the big three manufacturers, such as m393a2g40db0-cpb and mta36asf2g72pz-2g1a.

I looked at SK Hynix too, but there was less of that around at the time of buying.

If you are prepared to wait a bit, take the risk of faulty DIMMs, and buy in small quantities (a few sticks at a time, or if you're lucky, several from "clean" server pulls), you can pick them up. As long as you're running the same model of memory per channel, you're fine; matching all memory is a myth. I'm running 8 DIMMs of Micron and 24 of Samsung, of two slightly different models, in my quad socket without issue.


----------



## Sean O

I built a 16TB unRAID server. It is used for Plex and creating backups of my other computers. It has been running for about 3 months now, no issues.

New unRAID build:

Case: HAF 912 
CPU: i5 3570K (O.C. 4.2GHz) Still putting in work.
Motherboard: ASUS P8Z77-V
RAM: 4x4GB G. Skillz DDR3 816MHz (16GB)
PSU: EVGA Supernova 550 G2 (ECO Mode)
Cache Drive: Samsung 840 PRO - 256GB
Parity Drive: WD 8TB Red 
HDD 1: WD 8 TB Red 
HDD 2: WD 8TB Red 
Boot Drive: USB 16GB x 2
SuperMicro CSE-M35T-1B
SATA 6G PCI Express Card
8 x Fans. Should stay cool


----------



## kirb112

This is amazing. Well done! What purpose will this serve?



iamdjango said:


> Here's my contribution:
> 
> CPU: 4 x E7-8894V4 ES 24 Core @ 2.3Ghz base, 2.7Ghz all, 3.2Ghz boost
> RAM: 512GB DDR4 (16GB 2Rx4 DIMMs, mix of Micron and Samsung ebay specials as ram is stupid expensive these days)
> MB: Supermicro X10QBL-4CT
> NIC: X550T2 20Gbit Bonded
> NVMe: 256GB Samsung PM961
> Case: Rosewill Blackhawk Ultra (heavily modified to fit non standard mb, riveted drive cages removed, top fan grill altered to accommodate push/pull cooling through AIOs)
> Coolers: 4 x 92mm Asetek 545LC AIOs, 17+ Noctua fans of various sizes.
> OS: Debian 9
> 
> That chassis picture was part way through my build and is a little old now. I switched the bottom intake for a 200mm, moved the old bottom 140mm to the front top and attached 2 additional 92mm to the motherboard to cool ram. System pulls around 1kW at load, idles around 160w. Added some pretty time series graphs and htop screenshot of it running too /forum/images/smilies/smile.gif


----------



## Prophet4NO1

I moved into a new home recently and have been working on migrating things into a rack. I have the hardest part done now, all the wiring. Today I finished it up and got my router moved into its own rack mount case.

Still need a case for my fileserver. But that will have to wait. All the ones I like are around $400-500. Unless someone has a line on a solid 3U or 4U case with hot-swap bays.


----------



## nismoskyline

Hi everyone, here is my 'server/media' PC. It has:


X3450 xeon cpu
16gb ddr3
p7p55d
2x680 in sli for gaming/streaming
1x1070ti for testing purposes 
2x500gb hdd
2x1tb hdd
1x10tb hdd
1x60gb ssd
cougar 1000w psu.


I am going to be building a Windows Server 2016 environment where I need a lot more processing power than this. I was thinking of buying a dual-socket X58/LGA1366 Supermicro board and loading it with the max RAM (192GB) and 2x Xeon L5520 for 24 cores. For the simulations/lab that will be running, the resources don't necessarily need to be blazing fast, but a lot of physical cores and RAM is needed for virtualization (more clients). Because the X58 Supermicro boards don't have driver updates for Windows Server 2016, will this affect the ability to use it, or will the older drivers install fine? Are these boards only good up to Server 2008/2012? Thank you for any reply or input.

edit: https://imgur.com/a/Qa4tiTK here is a picture, ocn picture service isn't letting me drag the photo, attach it, or accept the link for the photo lol..


----------



## Tadaen Sylvermane

I love the muscle builds here. Not sure if I posted mine in this thread. Don't always need a ton of muscle to get the job done.

AMD Kabini Quad on the AM1 socket. 8gigs of ram. 120gb ssd for containers / os. 2tb spinner for media storage. And 2tb usb 3.0 for snapraid parity. Direct boots to Kodi for media center in living room as well as provides pxe installers / ltsp pxe boot kodi boxes around the house, dhcp, dns, minecraft, mythtv-backend, apt caching, transmission, backup target for all laptops in the home. All on Ubuntu 16.04 LTS base.

Currently housing 530 movies, 1200 episodes of tv shows, 7-8 gigs of music

I need to get a can of air and clean it out. Otherwise running like a champ for the last couple years.


----------



## Mikecdm

Pics of my unraid build used pretty much for plex.

Using old LN2 benching hardware that has been re-purposed.

Asrock z87m OCF
4770k, de-lidded, great batch, terrible cold bug. 
16gb (4x4gb) random sticks
(4) 4tb hgst nas, 
(2) 128gb samsung 840 pro for cache
Fractal R5
seasonic x650
Prolimatech Armageddon - so big that heatpipes sit on heatsink. Can't install the other way either. 

Also have an LSI SAS 9207-8i that I'll add in when I add more drives. Wanted to add a hot-swap bay in the 5.25" bays but never found one that I really liked.


----------



## redhat_ownage

My mess of equipment


----------



## camry racing

Meet Blitzraid:
Asus X99 running a Xeon 2690 v3
40GB of DDR4 RAM (non-ECC)
850 Pro 512GB SSD for cache drive
2 HGST He8 8TB parity drives
3 4TB WD Reds
1 6TB WD Datacenter drive
1 Seagate Compute 8TB drive
Total of 26TB of space
Running unRAID
Houses over 700 movies and 78 series
DNS server
Homelab
Backup server; all computers in the house back up here


----------



## marcus556

camry racing said:


> Meet Blitzraid
> Asus X99 runnning a xeon 2690V3
> 40GB of DRR4 ram (non ecc)
> 850 pro 512gb ssd for cache drive
> 2 HGS HE8 8TB drive parity drives
> 3 4TB WD reds
> 1 6TB WD Datacenter drive
> 1 Seagate compute 8TB drive
> total of 26TB of space
> Running UNRAID
> Houses over 700 movies 78 series
> DNS server
> Homelab
> Backup server all computers in the house backup here


What case are you using?

Sent from my Pixel 2 XL using Tapatalk


----------



## camry racing

It's this one:
https://www.amazon.com/gp/product/B01BFX02QM/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1


----------



## bobfig

No pics of mine off hand, but all my stuff is stuffed into a U-NAS 810a. http://www.u-nas.com/xcart/product.php?productid=17640

About to make an NVR server for cameras, and that one's even smaller and super low power.

Specs:

CPU
Intel Xeon E3-1260L v1 

Motherboard
SuperMicro X9SCM-F 

RAM
Crucial 16GB 1333 ECC UDIMM

Hard Drive
Intel 320 120gb ssd

2x Samsung F3 HD102SJ 

3x HGST Deskstar NAS 3.5-Inch 3TB 

Power Supply
Seasonic 350w gold 1u ss-350m1u

Cooling
Noctua NH-L9i

Case
U-Nas NSC-810a

Operating System
Server 2012 Standard x64

Other
3Ware 9650SE-8LPML + BBU

Stock pics of the case, but everything is in there.


----------



## reezin14

bobfig said:


> no pics of mine off hand but all my stuff is stuffed into a U-Nas 810a. http://www.u-nas.com/xcart/product.php?productid=17640
> 
> about to make a NVR Server for cameras and that's even smaller and super low power.
> 
> has
> 
> CPU
> Intel Xeon E3-1260L v1
> 
> Motherboard
> SuperMicro X9SCM-F
> 
> RAM
> Crucial 16gb 1333 EEC UDIMM
> 
> Hard Drive
> Intel 320 120gb ssd
> 
> 2x Samsung F3 HD102SJ
> 
> 3x HGST Deskstar NAS 3.5-Inch 3TB
> 
> Power Supply
> Seasonic 350w gold 1u ss-350m1u
> 
> Cooling
> Noctua NH-L9i
> 
> Case
> U-Nas NSC-810a
> 
> Operating System
> Server 2012 Standerd x64
> 
> Other
> 3Ware 9650SE-8LPML + BBU
> 
> stock pics of the case but everything is in there


Nice build. Gathering the parts now to do something similar; thinking an E3-1245 V2 or the 1265L V2 with a 3U rack-mount case.


----------



## nycgtr

Can't do a server rack, but:

Corsair 780T
64GB 3200
1000W G3
7900X
2x 4TB Black
assorted 3TB drives
1TB 850 Evo
1TB 960 Evo
Server 2016 Datacenter

A spare KPE lying around.


----------



## mbmumford

I migrated my server Sunday night, and only lost an hour of folding time. 

1 year of debating which rackmount enclosure I wanted, 2 months building a custom enclosure, 1 month after the build before I picked it up, 10 hours of preparation, 1 hour to migrate, and 42 hours to transfer media back onto it.

I just need to get some SATA extensions, install new fans, do some cable management, and try not to add more to it for a while.

I will post new pictures once it is cleaned up.


----------



## rlwgone

Prophet4NO1 said:


> I moved into a new home recently and have been working one migrating things into a rack. I have the hardest parts done now, all the wiring. Today I finished it up and got my router moved into its own rack mount case.
> 
> Still need a case for my fileserver. But that will have to wait. All the ones I like are around $400-500. Unless someone has a line on a solid 3U or 4U case with hot-swap bays.


Not sure how many drive bays you need but maybe check this out: https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4500/dp/B00N9CXGSO/


----------



## R99photography

bobfig said:


> no pics of mine off hand but all my stuff is stuffed into a U-Nas 810a. http://www.u-nas.com/xcart/product.php?productid=17640
> 
> about to make a NVR Server for cameras and that's even smaller and super low power.
> 
> has
> 
> CPU
> Intel Xeon E3-1260L v1
> 
> Motherboard
> SuperMicro X9SCM-F
> 
> RAM
> Crucial 16gb 1333 EEC UDIMM
> 
> Hard Drive
> Intel 320 120gb ssd
> 
> 2x Samsung F3 HD102SJ
> 
> 3x HGST Deskstar NAS 3.5-Inch 3TB
> 
> Power Supply
> Seasonic 350w gold 1u ss-350m1u
> 
> Cooling
> Noctua NH-L9i
> 
> Case
> U-Nas NSC-810a
> 
> Operating System
> Server 2012 Standerd x64
> 
> Other
> 3Ware 9650SE-8LPML + BBU
> 
> stock pics of the case but everything is in there


Really nice and elegant.
It's not something you'd build and hide away in a closet or a dedicated server room; it could also run in a living room.
Great choice.


----------



## bobfig

R99photography said:


> really nice and elegant.
> It is not something like you build and leave it in a closet or a dedicated server room, but it could be running also in a living room.
> Great choice.


Thanks!

Now I'm getting the other server/NVR buttoned up and installed. Everything runs on a 1000VA UPS, and with the switch and the server maxed out with Prime it only pulls 53 watts from the wall.









CPU
Intel Xeon E3-1235L v5 ---> soon to be a 1260L v5

Motherboard
Asus P10S-I

RAM
Kingston 8GB DDR4-2400 ECC

Hard Drive
Crucial mx300 525gb ssd - boot
Seagate Skyhawk 4tb - camera storage

Power Supply
FSP 250W Bronze, included with the case

Cooling
Noctua NH-L9i

Case
Supermicro CSE-721TQ-250B 

Operating System
Server 2016 Standard x64


Cameras
3x Hikvision DS-2CD2155FWD-I 5mp cameras
1x Trendnet TV-IP315PI 4mp for front door


----------



## Prophet4NO1

Swapped out my Cisco SGE2000 switch for a UniFi PoE unit. Very happy with the switch so far. It's much easier to change configs on, and the PoE is great since I have UniFi AP units and will soon be adding some cameras as well.


----------



## cdoublejj

I like that more little NAS-style cases are starting to pop up.


----------



## Sgsi5512

*Ghetto Email Server*

I finally completed another project! All of my money goes toward college, so I made a few budget cuts.

The system is an old Dell OptiPlex 780 with 2GB of RAM and a 320GB HDD. I added: a metal box, a mini-PCIe RAID card, 2x SATA power splitter cables (the power supply only had one SATA power connector), and 2x WD SATA II 500GB HDDs. Running Postfix.
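A Postfix box like this needs surprisingly little in main.cf to receive mail for one domain. A minimal sketch, assuming local Maildir delivery; example.com is a placeholder, not the poster's actual domain:

```
# /etc/postfix/main.cf -- minimal single-domain receive setup (example.com is a placeholder)
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 127.0.0.0/8
home_mailbox = Maildir/
# refuse to relay for anyone outside mynetworks
smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
```

The last line matters most on a budget box: without `reject_unauth_destination`, an internet-facing Postfix instance becomes an open relay.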


----------



## R99photography

Hello gents,
I am looking for information about a future project I have in mind. I'd like to build a micro/mini server PC, something similar in size to a 4/6-bay NAS. I don't want a rack-mount server for many reasons, but a really compact home server instead.

My main concern is that an LGA2011 platform (hypothetically with an old Sandy Bridge Xeon) or an AMD Ryzen platform, the two choices in my mind, could be really noisy to run in my bedroom (at the moment I use a Synology DS918+, which is super silent). System noise is the most important factor in deciding whether to switch from my current NAS to a more powerful server system. Unfortunately I can't judge the noise of an air-cooled build like that without hearing it.

Could anyone who has built a compact server system share some suggestions and impressions?

Thank you.




Sent from my iPad using Tapatalk


----------



## bobfig

R99photography said:


> Hello gents,
> I am looking for information about a future project I have in my mind. I’d like to build a micro/mini server pc, so something similar to a 4/6 bay NAS size. I don’t want a server rack mount for many reasons, but a really compact home server instead.
> 
> My skepticisms are related to the fact that a LGA2011 (hypothetically with a old XEON Sandy Bridge) or a AMD Ryzen platform, which are two choices in my mind, could be really noisy to run in my bedroom (at the moment I use a Synology DS918+ which is super silent). The system noise is the most important factor to keep in mind if I will decide to switch from my current NAS to a more powerful server system. Unfortunately I cannot appreciate the system noise due to air cooling system of a kind of build.
> 
> Could anyone who built a compact server system give some suggestions and impressions?
> 
> Thank you.
> 
> 
> 
> 
> Sent from my iPad using Tapatalk


What are you planning on using the server for? And how much money is there to play with?

Both of mine at the top of this page are pretty nice, but you can't run a full-power CPU without it getting pretty toasty, as the cooling area is kind of small for a cooler. Both run a low-power 45W TDP i7 with a Noctua NH-L9i cooler; the NVR is Skylake-era and the NAS is Sandy Bridge.


----------



## R99photography

bobfig said:


> what are you planing on using the server for? and how much money is there to play with?
> 
> both of mine up top of this page are pretty nice but you cant run a full power cpu with out it getting pretty toasty as the cooling area is kind of small for a cooler. both run basically a low power 45w tdp i7 and a noctua l9-i cooler. the NVR is a skylake era and the NAS is a sandybridge.


Hello, thanks for your reply. I have seen your builds, and the first one is really cool, something I would like. Unfortunately that case (U-NAS) is not on the market here in Italy.
I am planning a file server plus AD and WSUS servers. At the moment I'm using a Synology DS918+, but running Windows Server 2016 in a virtual machine slows it down too much, so I am evaluating something more robust and powerful.
Budget is the least of my problems; I first need to understand other things: noise, temperatures, which CPU (some pre-owned Sandy Bridge Xeons are really powerful and cheap on eBay), which motherboard...

I see that a micro/mini-ATX system is quite difficult to build, mainly for lack of motherboards (I was looking at LGA2011).


----------



## bobfig

For those 3 things you listed I don't see much of a need to go LGA2011. I feel a nice E3-12xx series will be fine. Both of mine do file serving when needed and handle video streaming/transcoding just fine. Both are low-power enough to use a small cooler, but if you get a larger case then a larger cooler is a possibility. Sound-wise they are both a whisper; I barely know they are on other than when the lights start to flash.

The U-NAS one with the older Sandy Bridge is low power and can saturate the network when needed. It can stream videos while transcoding on the fly; I think it can handle 2-3 transcodes, but I now have all my videos in a format where that isn't needed any more. I also ran an ARK server on it for a little while and didn't have any issues with 4-5 people, though my internet was the bottleneck then.

The NVR setup in the Supermicro case is IMO pretty sweet, other than having to find an ITX motherboard. The CPU is a few generations newer, so it is faster, but you're still limited by how much space you have for a cooler. I did have the E3-1235L in it, since it had Intel graphics and was still a 4-core i5-class chip at 25W TDP. I was hoping to get Blue Iris to use the Intel graphics for encoding, but I guess the motherboard I got doesn't support the built-in graphics. Either way, with the Noctua cooler I was still "maxing" the core temp at 58°C. It was kind of weak and barely able to do the 4 cameras I wanted.

I tend to look first at motherboard brands like Supermicro, as they have a decent history with server gear, though I ended up going Asus on my last one just because of the specs it had. Really, everything starts with figuring out what parts you can get and what you want to put them into. I start by figuring out what CPU I want to run; the generation depends on how much money there is to spend. I just know that going any older than the E3/E5 series Xeons, the power usage of the system gets pretty high.



Pics of both running Prime for a few minutes. Both have the same cooler, just different cases. Temps are still good to go.

NAS









NVR


----------



## deafboy

Just moved, so it's lacking a bit of personality, lol, but it's functional.


----------



## cdoublejj

Wish I had known about that U-NAS. It might have filled up all my space at the bottom of the shelf, but I'd have more storage options.


----------



## Levelog

This is my and my roommate's mess.
I've got:
Dell Precision T5600 with an 8-core, 64GB of RAM, 12TB of HDD storage, and 2TB of SSD storage
Synology DS216 with a pair of 4TB drives in RAID 0

He's got:
Dell T420 with dual 6-cores, 64GB RAM, and a bunch of storage
Dell T320 with a single 6-core, 48GB RAM, and even more storage

I suppose it's pretty unnecessary, but we do some cool stuff!


----------



## NexusRed

cdoublejj said:


> Wish had known about that u-NAS. might have filled up all my space at the bottom of the shelf but, i'd have more storage options.


Ughhh, I love/hate this setup.

Love what you have.
Hate the way you took the pictures.

Just my opinion


----------



## Rbby258

NexusRed said:


> Ughhh I love/hate setup.
> 
> Love what you have.
> Hate the way you took the pictures.
> 
> Just my opinion


xD agreed


----------



## cdoublejj

NexusRed said:


> Ughhh I love/hate setup.
> 
> Love what you have.
> Hate the way you took the pictures.
> 
> Just my opinion



Yeah, that was not by choice. It's so small and tight that the camera can only get about 1/4 to 1/2 of it in any given shot, even with multiple shots and/or fisheye to see it all. And thank you for the compliment!  The whole thing isn't even 2 feet deep! It was not fun contorting my body in there to get stuff mounted, let alone the UPSes.


----------



## KyadCK

Eeeeeeyyyyyyy migration complete!

Whole rack pics:


New Specs/Perf:


Code:


2x 2.9GHz hex-core Sandy (to be upgraded later)
128GB (2x quad, room for a 3rd rank)
1x 960GB G.Skill Blade (passed to VM)
2x X520-DA2 (2x 10Gbps per)
1x HP P420i RAID card
- 4x 1TB 2.5in SSDs, RAID5 (3TB)
1x HP P421 SAS card
- HP D2700
- - 4x 1TB 7200 2.5in HDDs, RAID5 (3TB)
- - 4x 1TB 7200 2.5in HDDs, RAID5 (3TB)
- - 5x 500GB 7200 2.5in HDDs, RAID5+HS (1.5TB)
- HP D2600
- - 6x 4TB 7200 3.5in HDDs, RAID50 (14.5TB)

SSD write perf sucks hard because even on-SSD write caching is disabled, but maxing the 10gbps line to my rig without cache is fun and it's mainly for steam games and other read-only stuff anyway. The HDD arrays could probably be faster, but each DAS only gets a single 6gbps SAS for now.

The storage groups are separated by VM and physical layout. Each storage VM for each of the network drives gets its own disks as well as one 10Gbps link not shared with anything; everything else shares the final 10Gbps. This way I can absolutely max out the SSD array/10G while loading a game, and other PCs in the house that may be, say, archiving files or using hosted game servers won't even notice. Still got 12 open 2.5in slots in the D2700, 6 3.5in in the D2600, and 4 2.5in in the server itself, so I should have plenty of room for expansion, and if that isn't enough I can just keep throwing more DAS units at it until the problem goes away, since each card can chain 8 of them.
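The usable capacities in the spec list above follow from standard RAID arithmetic (one parity drive per RAID5 group; RAID50 stripes two such groups), and the odd-looking 14.5TB figure is just 16 decimal TB expressed in binary TiB. A quick sanity check of those numbers; this is my arithmetic, not anything from the build itself:

```python
def raid5_usable(n_drives: int, size_tb: float) -> float:
    """RAID5 loses one drive's worth of capacity to parity."""
    return (n_drives - 1) * size_tb

def raid50_usable(n_drives: int, size_tb: float, groups: int) -> float:
    """RAID50 stripes across several RAID5 groups, one parity drive per group."""
    return groups * raid5_usable(n_drives // groups, size_tb)

def tb_to_tib(tb: float) -> float:
    """Drives are sold in decimal TB; controllers and OSes report binary TiB."""
    return tb * 1e12 / 2**40

print(raid5_usable(4, 1.0))       # 3.0  -> the 4x1TB RAID5 arrays
print(raid5_usable(4, 0.5))       # 1.5  -> 5x500GB RAID5+HS (4 active, 1 hot spare)
print(raid50_usable(6, 4.0, 2))   # 16.0 -> the 6x4TB RAID50
print(round(tb_to_tib(16.0), 2))  # 14.55 -> shows up as the "14.5TB" in the list
```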

I plan to replace the CPUs with a pair of 10c/20t Ivy chips later, which will be fun. The R310 is still my pfSense box, and I still have the G8124 and G8000, though my WiFi got upgraded to a Ubiquiti UniFi AP AC Pro and I had to get a new modem to support 1Gbps internet (down only, thanks Comcast; 2Gbps symmetrical when?).

Also, going by the power strip, the whole assembly draws about 5 amps 24/7 at usual load, which is actually 1.5A lower than my previous setup despite the added storage and networking.
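That 5A figure is easy to turn into a rough running cost. A back-of-the-envelope sketch, assuming 120V mains and a hypothetical $0.15/kWh rate (neither value is stated in the post):

```python
amps, volts = 5.0, 120.0                # 120V assumed; the post only gives amps
watts = amps * volts                    # continuous draw
kwh_per_month = watts * 24 * 30 / 1000  # 30-day month
rate = 0.15                             # $/kWh, hypothetical

print(watts)                            # 600.0
print(kwh_per_month)                    # 432.0
print(round(kwh_per_month * rate, 2))   # 64.8 dollars/month at the assumed rate
```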


----------



## cdoublejj

Damn, nearing home-data-center levels. I like home racks like that.


----------



## Punjab

*Render Farm*

We just finished insulating our garage, and I got around to reorganizing the render farm. Had to minimize downtime.

The servers were all throwing amber faults in these pics because I only had one power supply plugged in.



Currently have 5x PowerEdge R610 and 4x PowerEdge 1950

The R610s all have dual X5670s, 48GB, and a single 146GB 15K SAS drive

The 1950s all have dual E5450s, 48GB, and a single 146GB 15K SAS drive

The Precision T5400 on the shelf has dual E5450s, 16GB, and I think a 500GB HDD

The switch is a 24-port, managed Dell PowerConnect 5324

The KVM is a 16-port Dell 2161DS-2

APC keyboard/monitor console

APC 42u rack


----------



## speed_demon

You must have cheap electricity!

My HP DL360p Gen8 with 24GB memory and dual E5-2609 v2s. Haven't gotten around to giving it a proper home yet. Running just a single 300GB 10K drive to get things rolling.


----------



## Punjab

I actually don't have cheap electricity, but the servers I posted only run when they're making money, so that cost is accounted for and offset in the project total.

The PE 1950s use a little more than the R610s, but honestly it's not that much different. If you take all the hard drives out and just run a small one for the OS, the only real difference is the newer processors using less wattage.

We have an upgrade path for replacing older with newer, but the main driver is just higher core count vs. cost to acquire. Rx10-series servers got really cheap over the last 5 years.

I'll probably recycle the 1950s soon and replace them with the Rx30 series.

Everyone else has awesome setups as well, and I easily waste hours looking here and at r/homelab and r/cableporn!


----------



## Nicolas Nico

*Modded DL380 G6, silent*

Hi,

Following this tutorial:
https://www.instructables.com/id/Convert-HP-DL380-G6-to-Cheap-Gaming-PC/

my DL380 G6, modded to be very silent:

2x 140mm Arctic Cooling fans blowing out
5x Noiseblocker 60mm fans blowing out
1x Zotac Nvidia GT 620
Adaptec 51245 RAID card
130GB RAM
HP P410i RAID card
LSI Logic SCSI card for an LTO-3 tape drive
2x Xeon X5675, 95W (the max TDP allowed), 2x 6 cores / 24 threads with HT


----------



## Bonz(TM)

My Christmas bonus project:

UniFi Switch Aggregation for many cheap 10Gbps ports - L2 only (😢)
4x R720s w/ 8x 1TB enterprise NVMe and a mix of 3TB and 6TB drives in them
Proxmox hyperconverged w/ Ceph
Tons of VMs and k8s deployments


----------



## Prophet4NO1

Added a Proxmox server and a UniFi 10-gig switch to the mix.


----------

