I’d guess that 90% of hosting providers ‘oversell’. Essentially, a provider with 1,000GB of disk allocated might sell 15 packages of 100GB each to 15 customers, banking on the fact that no one will fully use their 100GB allocation – or sell 5 virtual machines with 256MB of RAM each on a 1GB host, assuming no one will use their full RAM allocation. This is bad because, although you’ll generally be able to confirm that you’ve been allocated the resources, benchmark tests will show that you’re just not getting them, and your environment will be sluggish and unresponsive. It’s the same as an airline selling 110 seats on a 100-seat plane: when the 101st paying customer shows up to claim his seat, he’s stuck without a flight.
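To make the overselling arithmetic explicit, here’s the disk example above worked through (a quick sketch, using the numbers from the example):

```shell
# 15 customers sold 100GB each against 1000GB of real disk:
sold=$(( 15 * 100 ))      # GB sold in total
real=1000                 # GB actually available
echo "sold ${sold}GB against ${real}GB real: $(( sold * 100 / real ))% committed"
```

Anything over 100% committed only works as long as customers stay well under their quotas.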
The general consensus is that a VPS is a cheaper, lower-grade option than a dedicated server; however, VPSs have a number of indisputable advantages over dedicated servers, and I’m going to discuss why almost all the dedicated machines I manage are hosts for a range of VPSs.
VPSs are not slow, except when they’re placed in an oversold hosting environment, in which case they’re only slow as a result of the system administrator’s actions. Here’s an example of one of my servers:
8GB RAM, 2x quad-core Intel 3.2GHz Xeon, RAID-5 (3x500GB 7200rpm hard drives [1TB of usable space]). Total cost: £2,000 about 2 years ago, and about £35/month to co-locate it. There’s a 10Mbit pipe connected to the rack, and I’ve allocated a dedicated 2Mbit with burst up to 5Mbit for this machine as a whole. The host machine itself has a clean, minimal Debian Etch install and the latest stable kernel, with the following modifications made from the default Debian kernel config:
– Tick rate from 250Hz to 1000Hz
– Memory support changed from 4GB to up to 64GB
– Processor type changed to Xeon family
– Xen support
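For reference, on a 2.6-era 32-bit tree those changes correspond roughly to the following .config options. The exact option names vary between kernel versions, so treat this as a guide rather than a recipe:

```
CONFIG_HZ_1000=y        # tick rate 1000Hz instead of the default 250Hz
CONFIG_HIGHMEM64G=y     # PAE: memory support up to 64GB instead of 4GB
CONFIG_MPENTIUM4=y      # Pentium-4/Xeon processor family
CONFIG_XEN=y            # Xen support
```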
Xen is a great virtualization technology and the one in use on this system. VMware also has its uses for me, but I’ll discuss those further down.
After the kernel upgrade, the system is rebooted and a number of virtual machines are set up. I’ll list their resources and briefly detail what they do.
ns3: Primary nameserver (bind), apache2-prefork/php5+eaccelerator/mysql+qcache server, exim4/courier/mysql-authlib/spamassassin mailserver. The machine has 2GB RAM, 120GB disk space and 2 of 8 processors allocated to it. Load is generally 0.20 < load < 0.50 (max: 0.10 < load < 1.00). It takes about 25k web impressions/day, 10k emails/day, and 150k bind requests/day.
vpn: The machine runs as a non-routing VPN endpoint for about 25 hosts to access remote services as local. 256MB RAM, 2 of 8 processors, 20GB HDD (pptp with mppc/mppe is great but needs CPU power).
client1: A client’s backup service [ssh/rsync], 40GB HDD, 256MB RAM and 1 of 8 processors.
client2: A client’s VPS web/email server with a similar setup to ns3. Takes about 8k impressions/day, connected via a virtual private interface to ‘client3’ below. 40GB HDD, 2GB RAM, 2 of 8 processors.
client3: A client’s internal mysql server bound privately with ‘client2’ above. Takes about 12k queries/day. 120GB HDD, 2GB RAM, 3 of 8 processors.
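To give a concrete flavour, each guest above is defined by a small config file. Here’s a hypothetical Xen 3.x-style example for the ‘vpn’ guest; the kernel path, LVM volume and bridge names are assumptions, not taken from my actual setup:

```
# /etc/xen/vpn.cfg (hypothetical)
kernel = '/boot/vmlinuz-2.6.18-xen'
name   = 'vpn'
memory = 256                          # MB, per the allocation above
vcpus  = 2                            # 2 of 8 processors
disk   = ['phy:/dev/vg0/vpn,xvda,w']  # 20GB logical volume
vif    = ['bridge=xenbr0']
root   = '/dev/xvda1 ro'
```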
We have 5 virtual machines here, with 8 of 8 processors/cores in use, each dedicated to its allocated VPS alone [we could probably reuse 2 of them on a machine with a usage profile similar to client2 or ns3 without any performance loss]. 6.5GB of 8GB RAM is in use. Neither the host machine nor any VM shows any swap space in use (which would be really bad in a VM environment, where disk access is probably the most scarce resource), and the full guaranteed sum of 340GB of disk space is allocated, leaving 660GB for the future. The general load on the machine is 0.60 < load < 2.60 (max: 0.50 < load < 4.20).
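Those totals are easy to sanity-check by summing the allocations from the VM list above:

```shell
# RAM in MB and disk in GB for ns3, vpn, client1, client2 and client3:
ram=$((  2048 + 256 + 256 + 2048 + 2048 ))
disk=$((  120 +  20 +  40 +   40 +  120 ))
echo "RAM allocated: ${ram}MB of 8192MB; disk allocated: ${disk}GB of 1000GB"
```

6,656MB is the 6.5GB quoted, and 340GB leaves 660GB of the array free.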
The host machine runs just great and the VPSs are all very responsive. The host itself generally uses between 2Mbit and 3.5Mbit of bandwidth at any given time, and would more than happily service another one or two similar VPSs. Consider some of the huge advantages here:
– Resources, or “virtual hardware”, can be strictly allocated and customized far beyond what any physical server alone allows, letting you utilize your resources fully.
– As your ‘servers’ live in a virtual machine environment, they can be ported/cloned to other physical machines entirely and -very- easily.
– Once upon a time you might have hosted these services on 5 physical machines, most of which would sit around costing you hosting fees while hardly being used to their full potential.
– You can easily partition your services so that each VM/service stays clean and efficient, rather than bogging down a single machine with huge amounts of software.
– While it’s undisputed that a set of services running inside a VPS will never match the benchmarks of that same set of services on the physical host machine, due to the added virtualization layer, with a good host and Xen setup the difference is really negligible and not of great concern.
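On the porting/cloning point: a Xen guest is essentially one disk image (or volume) plus one small config file, so moving it to another host is just copying two files across. A toy illustration with made-up paths; in practice you’d scp the image and config to the new host and run ‘xm create’ there:

```shell
mkdir -p /tmp/xen-demo/srv /tmp/xen-demo/newhost
echo "guest root filesystem" > /tmp/xen-demo/srv/client2.img   # stand-in for the real disk image
echo "memory = 2048"         > /tmp/xen-demo/srv/client2.cfg   # stand-in for /etc/xen/client2.cfg
# 'Cloning' the guest is nothing more than copying both files:
cp /tmp/xen-demo/srv/client2.img /tmp/xen-demo/newhost/
cp /tmp/xen-demo/srv/client2.cfg /tmp/xen-demo/newhost/
ls /tmp/xen-demo/newhost
```

Compare that with migrating a service off one physical server onto another.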
I think that’s a good set of advantages to be getting on with. As for disadvantages:
– Disk I/O will be the most heavily impeded resource: different VMs asking the physical hard disk to keep jumping all over the place to read and write data is not the most efficient use of the drive (note the drive’s ‘seek time’).
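You can get a feel for this on a real spindle with nothing more than dd: one sequential reader lets the disk stream, while two concurrent readers force the head to seek constantly. This is a rough sketch, not a benchmark; on a system where the file sits in cache, or on non-rotational storage, the two runs will look near-identical:

```shell
f=/tmp/seek-demo.dat
dd if=/dev/zero of=$f bs=1M count=64 2>/dev/null   # 64MB scratch file
# One sequential reader: the drive can stream
time dd if=$f of=/dev/null bs=1M 2>/dev/null
# Two concurrent readers, as with two busy VMs on one disk: constant seeking
time sh -c "dd if=$f of=/dev/null bs=1M 2>/dev/null & dd if=$f of=/dev/null bs=1M 2>/dev/null & wait"
rm -f $f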
I don’t know of any other valid disadvantages but would welcome any additions.
I normally extend this setup to what I call ‘a cluster of 3’. Another machine with the same setup runs alongside this one, plus a third, lower-specification machine (say a 2.0GHz Intel with 256MB RAM or more and a RAID-5 [3x720GB] disk array) acting as a backup server for the two other physical machines and their VPSs. This first-line backup is then mirrored from each ‘cluster of 3’ to a separate location.
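The second-stage mirror is straightforward. Sketched locally here with made-up paths; in practice it would be something like ‘rsync -a --delete’ over ssh to the remote location:

```shell
mkdir -p /tmp/firstline /tmp/offsite
echo "client1 weekly dump" > /tmp/firstline/client1.tar   # stand-in for a real backup
# Mirror the first-line backup to the separate location:
cp -a /tmp/firstline/. /tmp/offsite/
ls /tmp/offsite
```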
Where dedicated servers are required for customers, that’s always an option; however, I’ve consistently found that this is the most efficient way to use your resources in such an environment. I currently have about 5 ‘clusters of 3’ across various NOC racks, amounting to 15 dedicated servers (not counting other dedicated servers, routers and other equipment), where most others might use 50+ servers for the same and pay the associated costs.
VMware GSX/Free Server also has its uses, though it’s something I’ve generally moved away from [specifically as Free Server is for non-commercial use only]. It’s incredibly easy to set up and very user-friendly, but I find that the Xen virtualization layer offers far better performance than VMware Free Server, and Xen is really not that much harder to get set up.
I wrote this in response to a post I saw here: http://www.webhostingtalk.com/showthread.php?p=5311738 and will be writing my own simple howtos on VMware Free Server & Xen later.