Over the years I have used many different methods to duplicate system builds and to cut down on build times, especially after a catastrophic failure that requires a complete rebuild. Most of the issues that arise around duplicating systems depend on what the end result needs to be.
- Do you want an exact copy, or do you just need multiple systems with the same base build and slight configuration differences? Another thing to consider is how you want the system to look for each user as they log in.
- Do you want them to have the same look and feel? What about the same support directory structures?
- Should all of the servers have the same functionality?
The answers to these questions may lead you to use third-party block-based duplication software, to document the steps to manually build the base system with scripted post-build setups, or even to use bare-metal restore processes.
In some respects it really does not matter how you achieve your base build. The challenge is finding the easiest way to duplicate that build for the same type of functioning system. I have found that for deployments requiring large numbers of systems with 99% identical configurations, it is best to use some sort of block copy that lets you prepare disks in bulk and simply swap them in as hardware goes bad. Of course, this does not make the systems unique: a small script is still needed to apply the handful of changes that allow the system onto the network without conflicts. The method does have shortcomings; as the base build changes, the spare disks become out of date and need to be redone. Its biggest advantage is that you do not need to be technical to follow the process. Basic technicians anywhere in the world can get the base system back on the network, and once it is on the network it can be reconfigured remotely.
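The small post-clone script mentioned above could look something like the following sketch. The file paths, defaults, and hostname/IP values are illustrative assumptions, not taken from any real site; on a live clone you would run it as root with TARGET set to /.

```shell
#!/bin/sh
# post_clone.sh -- hypothetical sketch of the small script that makes a
# cloned disk unique enough to join the network without conflicts.
set -eu

NEW_HOST="${1:-web02}"          # unique hostname for this clone (demo default)
NEW_IP="${2:-10.0.0.12}"        # unique address for this clone (demo default)
TARGET="${3:-/tmp/clone-demo}"  # root of the cloned filesystem; / on a live clone

mkdir -p "$TARGET/etc"

# Stamp the new hostname onto the clone.
printf '%s\n' "$NEW_HOST" > "$TARGET/etc/hostname"

# Map the new address to the new name so local lookups resolve.
printf '%s\t%s\n' "$NEW_IP" "$NEW_HOST" >> "$TARGET/etc/hosts"

# On the live clone you would also regenerate SSH host keys so machines
# do not share an identity (needs root, so shown commented out):
# rm -f /etc/ssh/ssh_host_*_key* && ssh-keygen -A

echo "clone configured as $NEW_HOST ($NEW_IP) under $TARGET"
```

Because everything unique lives in this one script, a technician only has to supply the new name and address; the rest of the disk stays an exact block copy.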
One of my more interesting deployment methods was to build a *NIX system that each supporting site around the world would own. I am a big believer in getting as much hands-on involvement as possible during the initial build, so that everyone understands how the system works and how to support it. Without any hands-on time, how would anyone learn the system's functionality? It also refreshes skills that may have faded through infrequent use. I created a process for each site to perform its own builds, with documentation and scripts, which promoted both learning and consistency across all the systems created. Each install followed the base-build document, and the builders ran into similar issues, which were documented along with their resolutions. After the base build was complete, each builder modified a set of scripts to finish the build. These scripts ranged from driver installs to setting up profiles for different users, allowing each site to configure the system the way it operated while keeping the systems consistent between locations. The scripts also helped each builder understand what was being installed and which commands were used to build the system, and they helped the builders support it when issues arose.
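One of the per-site post-build scripts could be sketched roughly as below: a simple "user:role" list that each site edits, driving a loop that lays down the same profile skeleton for every user. The file format, user names, and directory layout are all hypothetical; the demo writes under /tmp so it runs without root, whereas a real build would target / and real home directories.

```shell
#!/bin/sh
# site_profiles.sh -- hypothetical per-site post-build script: read the
# site's user list and give every user the same directory layout and
# .profile, keeping the look and feel consistent across locations.
set -eu

PREFIX="${1:-/tmp/site-demo}"   # demo prefix; a real build would use /
SITE_USERS="$PREFIX/site_users.conf"

mkdir -p "$PREFIX"

# Each site maintains its own list; a demo list is created here so the
# script is self-contained.
cat > "$SITE_USERS" <<'EOF'
alice:admin
bob:operator
EOF

# One loop per user: same directories, same .profile, every site.
while IFS=: read -r user role; do
    home="$PREFIX/home/$user"
    mkdir -p "$home/bin" "$home/logs"
    printf 'export PATH="$HOME/bin:$PATH"\n' > "$home/.profile"
    echo "set up $user ($role) in $home"
done < "$SITE_USERS"
```

A site changes only the list, never the loop, so local differences stay in data while the build logic remains identical everywhere.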
With all the different products on the market and those supplied by the OS vendor, there are many ways to accomplish the same result. Since no single method answers all needs, there are a few main things to think about: who is doing the builds, will they be supporting the systems, how many systems do you need to create, and how often? Most importantly, document your process well to assist your support groups.