Post date: Oct 29, 2012 4:04:19 PM
It seems that some managers are starting to make the connection between deploying new applications on their own servers and the seemingly endless process of server consolidation. They are seeing new applications being deployed to standalone servers at the same time as old applications are being consolidated onto single servers or onto virtual servers.
So this begs the question: why are we deploying new applications this way?
You had better have an answer to this question, because unless you have thought about it BEFORE it is asked, you will not like the answer. So this entry is going to explore some of the factors that go into answering it.
First, a little history and an exploration of why we got into the habit of deploying one application per server (hint: the same reason that a reboot is the recommended solution to all that ails a computer).
Back in the days when all applications ran on the mainframe there was no question of server consolidation; it was already done. But new applications were not put directly onto the mainframe production system. Instead, the mainframe was normally divided into a few different Logical Partitions (LPARs), typically development, test, and production. Each LPAR had logically separated devices, so if one LPAR had a problem it would normally not affect the others.
As the industry moved to Unix and VMS systems, which did not support this separation of resources as easily, it became common practice to have separate development, test, and production systems. These systems were also not as good at limiting processes and applications to specific resources: it was possible for a single process with a memory leak to take all of memory, or to use all of the disk space in a partition or filesystem. This led to new and untested applications being run on the test servers for longer periods of time to prove themselves.
By the early 90's, with the introduction of logical partitions and better resource control schemes under Unix, it became possible to use a mainframe-type approach to Unix applications. Many applications could now easily be run on a single Unix server, with each application and its processes limited to specific resources, so that a single process could no longer take all of the CPU, memory, or disk space.
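To give a concrete flavor of what those per-process resource controls look like, here is a minimal sketch using the standard Unix rlimit mechanism, expressed through Python's resource and subprocess modules. The application path is made up for illustration, and real systems would choose limits to suit the application and often use vendor-specific tools (projects, workload managers, or simple ulimit settings in a startup script) instead.

    import resource
    import subprocess

    def limit_resources():
        # Cap the child's address space at 512 MiB so a runaway
        # memory leak cannot consume all of the machine's memory.
        resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))
        # Cap the largest file the child may write at 1 GiB so it
        # cannot fill a shared partition or filesystem.
        resource.setrlimit(resource.RLIMIT_FSIZE, (1024**3, 1024**3))

    # "/opt/newapp/bin/newapp" is a made-up path standing in for the new application.
    subprocess.run(["/opt/newapp/bin/newapp"], preexec_fn=limit_resources)

With limits like these in place, a misbehaving application hits its own ceiling and fails on its own, rather than starving every other application on the shared server.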
Unfortunately, around this time a new operating system came up from the desktop that again had none of these features for limiting resources. Windows NT was designed as a multi-threaded and multi-processor system.
Because it was initially designed as an operating system for machines with very limited scale in CPU, memory, and disk, application developers got into some interesting habits. One of the worst was the replacement of system libraries (DLLs) with either newer versions or their own versions. This led to something called DLL Hell, where applications could not co-exist on the same machine because they used mutually incompatible DLLs.
So even without the issues of memory leaks and bugs in the operating system that limited uptime to less than 30 days, it was necessary to deploy every Windows NT application to its own server to avoid DLL Hell. This was acceptable because the server hardware was less reliable and the cost was lower, so the practice moved forward.
These problems have largely been corrected on the Windows platform, but the habits still remain. Server consolidation using tools like VMware is now proceeding, as these tools provide LPAR-like functionality.
Now, with the history complete, how do you justify getting new server hardware for the new application? The best answer is risk avoidance. With any new application there is a period of "teething" pain, when the application is unstable and its resource requirements are unclear. Only after the application has run for a period of time does its stability in production become clear, and only once the user community is fully up and running on the application are the true resource requirements known. During this period you do not want the new application taking down other applications on a shared server. That is just too risky.
The other problem is that the newest applications are often designed to use all of the resources of the newest "commodity" server. This means that if you want to run multiple applications on a single server, you will need to spend more on hardware. Once the applications have matured and the hardware they run on is due for replacement, that is the time to consider server consolidation: the newest generation of commodity hardware has more horsepower to run multiple applications, and by then how the application behaves is well understood.
So today we are consolidating the first generation of Windows-based applications onto the newest hardware running VMware, so that each application looks like it is getting its own server. The new applications are getting their own real servers, which avoids the risk of one of them taking down an entire shared server and allows us to use commodity hardware to deploy the latest applications.