Tuesday, February 9, 2010

Server Room Extreme Makeover

While we consolidated all our project servers and MiMiC simulation equipment into a single location a while back and took care of the HVAC issues (thanks to wireless technology and our RegReports package), the space was in desperate need of a cleanup and makeover.

With our extensive use of VMware, we’ve been able to increase our server capacity by more than 50% without increasing our hardware footprint (not to mention our electrical load – how’s that for being green?). And now we’ve made the layout and access simpler for all our engineers. Easier to use = less time spent on non-value-add activities and better value for our customers.


One issue we always faced in the past was moving DeltaV controllers from one system to another as project requirements changed. With the new network and switch layout, we’ve eliminated all that confusion.

Scott Thompson did a great job getting all this organized (Ty Pennington’s been calling to see if Scott’s got any free time).

Oh, and just in case you were wondering how we handle all those SI dongles in a VMware environment, take a look at this:

Anybody jealous out there?

5 comments:

Erin and Aaron said...

Bruce - Thanks for sharing your setup! We're in the process of putting together our own setup (albeit much more modest for now). We're definitely going with ESX on the servers and a USB server for our SI dongles.

About how many ProPluses do you get out of each physical machine? Our first candidate is a dual Xeon quad-core with 4GB of RAM.

Unknown said...

Aaron - Memory is your friend when virtualizing. We currently have three ESXi boxes, each with a unique hardware configuration:

PE2900 #1: two Xeon dual-core w/10 GB RAM - 8 VMs (6 Pro+, 1 Control Desktop, 1 Syncade) - 1-day average CPU usage 17.5%

PE2900 #2: one Xeon dual-core w/12 GB RAM - 8 VMs (4 Pro+, 2 Batch Execs, 1 Iconics, 1 VM Stats (by VKernel)) - 1-day average CPU usage 40%

PE2850: two Xeon HT single-core w/6 GB RAM - 5 VMs (3 Pro+, 1 MiMiC 2.8, 1 vCenter Server) - 1-day average CPU usage 30%

VMware lets you over-allocate memory, so you can assign more memory to your VMs than you have physical memory in the host. But if you get into a situation where multiple machines need most of their memory at the same time, the host starts swapping just like a regular computer and performance goes way down.
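If it helps to see the arithmetic, here's a quick back-of-the-envelope check in Python (the per-VM assignments below are made up for illustration, not our actual configuration):

```python
# Rough memory over-allocation check (illustrative numbers only).

host_ram_gb = 12  # physical RAM in the ESXi host

# Assumed per-VM memory assignments (placeholders, not our real setup).
vm_ram_gb = {
    "proplus1": 2, "proplus2": 2, "proplus3": 2, "proplus4": 2,
    "batch_exec1": 2, "batch_exec2": 2,
    "iconics": 2, "vm_stats": 1,
}

assigned = sum(vm_ram_gb.values())
ratio = assigned / host_ram_gb

print(f"Assigned {assigned} GB across {len(vm_ram_gb)} VMs")
print(f"Host has {host_ram_gb} GB (overcommit ratio {ratio:.2f})")
if ratio > 1:
    print("Over-allocated: if every VM wants its full memory at once, "
          "expect host-level swapping and a big performance hit.")
```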

If your Pro+ VMs need to talk to real control networks, you'll need a NIC for each Pro+ in addition to the management/RT network.

With a dual Xeon quad-core you'll have more than enough horsepower for ~12 Pro+'s (if you have the hard-drive space for that many VMs). If you want more than two, though, you'll want to up the RAM. 12 GB of RAM is a good starting point for up to 8 Pro+'s, depending on your database size and how many users will be on each one.
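As a rough sketch of that sizing math (the per-Pro+ figure and overhead are assumptions to plug your own numbers into, not a hard rule):

```python
# Quick RAM sizing sketch -- assumes roughly 1.5 GB per Pro+ VM plus ~2 GB
# of hypervisor/overhead. Adjust both for your database size and user count.

GB_PER_PROPLUS = 1.5   # assumed working set per Pro+ VM
OVERHEAD_GB = 2.0      # assumed ESXi + management overhead

def ram_needed(num_proplus: int) -> float:
    return OVERHEAD_GB + num_proplus * GB_PER_PROPLUS

for n in (2, 8, 12):
    print(f"{n:>2} Pro+ VMs -> roughly {ram_needed(n):.0f} GB of host RAM")
```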

Unknown said...

Bruce - Thanks for your insight and the great pics. I'm working with Aaron on architecting a similar setup. What are you using for your USB network server? It looks like the keys are plugged into a standard USB hub?

Many thanks!

Unknown said...

David - That is a standard (but old) USB hub. We're using a software-only solution to share USB devices across the network called USB Over Network by Fabulatech. There are several solutions out there; some are hardware/software combinations. Our method uses a server/client architecture, with the USB Over Network server residing on a Windows host and the client installed on each VM. Licensing is by concurrent client device connection.

The solution has been reasonably stable. Every few months I'll have to reboot the computer that runs the server piece when it stops responding. All that happens to the DeltaV dongles is that downloads fail until the server piece comes back online.
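If anyone wants an early warning for that, a tiny reachability check like the sketch below will flag when the server piece has stopped answering (the host name and port are placeholders, not the product's documented settings):

```python
# Minimal TCP reachability check for the machine hosting the USB-sharing
# server. Host and port are hypothetical -- substitute your own.
import socket

USB_SERVER_HOST = "usb-server.example.local"  # placeholder host name
USB_SERVER_PORT = 33000                       # placeholder TCP port

def usb_server_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the USB server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not usb_server_up(USB_SERVER_HOST, USB_SERVER_PORT):
    print("USB server not responding -- downloads will fail until it's restarted.")
```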

Erin and Aaron said...

Thanks for your suggestions on all of this! We got our gold keys and virtual ProPlus working with our revamped ESXi box this afternoon.