With the Windows Server 2003 end-of-life time bomb now passed on July 14th 2015, we thought we would take the time to document our advice for Server 2003 migration project planning, based on our own server migration experience. Download our Server 2003 Project Plan for free.
Application Owner Buy-In & Communications
It’s a common situation to discover that your application owners don’t even know which Operating System their application(s) run on, and it is highly likely that they are unaware of the impending security risk and the cost associated with extended support agreements. Whether it is an in-house or a vendor application, there is a large amount of work to undertake before you will be ready to migrate it to the selected new OS (whether 2008 R2, 2012 or 2012 R2). It is also likely that the application team has not factored any resource or budget into addressing the migration. Therefore, getting their buy-in to the problem, their ownership and accountability for testing, and their subsequent support for the migration is of paramount importance. This is especially true when remediation of the application is going to be required, whether in the form of an in-house re-code/re-develop or a new vendor version upgrade. We advise you to start here – communicate early, and ensure that you have the right level of participation when the time comes.
As with any migration, if your existing inventory data is garbage in, your project will be garbage out, and doomed to travel the road of increased resource, blown timelines and blown budgets. Whilst servers will, by their very nature, be more static than desktops in terms of change events, their complexity can be much higher, so it is critical that you have full and complete information about your estate. From the simple stuff like hardware specification and application installation data, to services, databases and application owner/vendor information, to application maps and server interdependencies, gathering the right data as the baseline for your project is key. The question is how best to collect it. Do you already have the information in an existing discovery tool like SCCM? Is it in a CMDB (or equivalent)? Can you utilise a new scanning agent, or do you have to resort to manual collection? One word of warning here: none of the currently available discovery tools will get you all of the data you need to fully manage your migration, although they are getting there with new releases. As such, be prepared to take a hybrid approach to your data discovery efforts.
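To illustrate the hybrid approach, here is a minimal sketch of merging two inventory sources – say an SCCM export and a manually collected spreadsheet – into one baseline, then flagging the servers that still lack the data you need. All field names and sample records here are illustrative assumptions, not a real SCCM schema.

```python
# Sketch: merge two hypothetical inventory exports into one baseline,
# preferring the primary (automated) source and filling gaps from the
# secondary (manual) source. Field names are illustrative assumptions.

REQUIRED_FIELDS = ["os", "cpu_cores", "ram_gb", "owner"]

def merge_inventories(primary, secondary):
    """Prefer primary-source values; fill blanks from the secondary source."""
    merged = {}
    for name in set(primary) | set(secondary):
        record = dict(secondary.get(name, {}))
        # Only overwrite with primary values that are actually populated.
        record.update({k: v for k, v in primary.get(name, {}).items() if v})
        merged[name] = record
    return merged

def missing_data_report(inventory):
    """List servers still missing any field needed to plan a migration."""
    return {name: [f for f in REQUIRED_FIELDS if not rec.get(f)]
            for name, rec in inventory.items()
            if any(not rec.get(f) for f in REQUIRED_FIELDS)}

sccm = {"web01": {"os": "Windows Server 2003", "cpu_cores": 4,
                  "ram_gb": 8, "owner": ""}}
manual = {"web01": {"owner": "Finance"}, "db01": {"os": "Windows Server 2003"}}

baseline = merge_inventories(sccm, manual)
print(missing_data_report(baseline))  # db01 still lacks hardware and owner data
```

The point of the sketch is the gap report: whatever tools you use, you need a repeatable way to see which servers are not yet fully described before you build a schedule on top of the data.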
Many of your servers will be low-hanging fruit (probably more than you realise) and relatively simple to migrate from a technical perspective. By categorising them quickly, you can begin to prepare your migration effort much faster than you might think. A fast pass of complexity, risk and criticality will show where these low hangers might lie:
- Complexity: low = single server, single application; medium = single server, multiple applications, or multiple servers, single application; high = multiple servers, multiple applications
- Risk: low, medium, high
- Criticality: low, medium, high
Next, take the [low, low, low] category servers and look at application compatibility. Can you migrate as-is to the new OS, or do you need to upgrade or re-code the application? Finally, categorise the target machine specification – is its hardware specification low, medium or high? At the end of this process, you should be able to build a problem and impact analysis for your project, and also list out the servers that you should be able to migrate quickly.
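The fast-pass triage above is simple enough to automate over your inventory. Below is a sketch using the exact complexity rules from the text; the server records and field names are illustrative assumptions.

```python
# Sketch of the fast-pass triage: rate complexity from server/application
# counts, then pull out the [low, low, low] quick wins.

def complexity(server_count, app_count):
    """Complexity rules as described in the text."""
    if server_count == 1 and app_count == 1:
        return "low"
    if server_count == 1 or app_count == 1:
        return "medium"  # single server/multi app, or multi server/single app
    return "high"

def quick_wins(servers):
    """Return servers rated low on complexity, risk and criticality."""
    return [s["name"] for s in servers
            if complexity(s["server_count"], s["app_count"]) == "low"
            and s["risk"] == "low" and s["criticality"] == "low"]

estate = [
    {"name": "print01", "server_count": 1, "app_count": 1,
     "risk": "low", "criticality": "low"},
    {"name": "erp01", "server_count": 3, "app_count": 5,
     "risk": "high", "criticality": "high"},
]
print(quick_wins(estate))  # ['print01']
```

Even a crude pass like this gives you an early deployment pipeline while the harder analysis continues on the complex estate.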
Understanding the dependencies between your servers is one major reason why server migration projects can take longer than expected. Large swathes of your server-based applications will not be nicely contained on a single box. You may have numerous applications split over multiple servers, covering web applications, databases, middleware, redundancy and disaster recovery. But where is this information held in your organisation? Who truly understands the architecture behind the application and can identify which components interact with which others? Discovery tools can help to identify where servers are talking to each other, but you need to remove the noise. For example, we don’t need to know that a server is talking to the default gateway or DNS servers, but we do need to know which database server(s) a web application is calling. Likewise, on shared servers with multiple applications, identifying which application is talking to which servers, and to which components on those servers, can be a challenge. Our advice: use the discovery tools to build the initial data map, then build your final application map with your application owner for verification. Fail to get this piece right, and you will introduce huge risk into your migration program.
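The noise-filtering step described above can be sketched very simply: take the raw connection tuples a discovery tool emits, drop anything pointing at known infrastructure addresses, and keep the rest as candidate dependencies. The addresses and connection data here are illustrative assumptions.

```python
# Sketch: strip infrastructure 'noise' (gateway, DNS, etc.) from raw
# discovery-tool connection data before building the application map.

NOISE = {"10.0.0.1",    # default gateway - assumed address
         "10.0.0.53",   # primary DNS     - assumed address
         "10.0.0.54"}   # secondary DNS   - assumed address

def dependency_map(connections):
    """connections: iterable of (source_server, dest_ip, dest_port) tuples."""
    deps = {}
    for src, dst, port in connections:
        if dst in NOISE:
            continue  # drop infrastructure chatter
        deps.setdefault(src, set()).add((dst, port))
    return deps

raw = [("web01", "10.0.0.1", 0),      # gateway - noise
       ("web01", "10.0.0.53", 53),    # DNS - noise
       ("web01", "10.0.2.15", 1433)]  # SQL Server port - real dependency
print(dependency_map(raw))  # {'web01': {('10.0.2.15', 1433)}}
```

The filtered map is only the starting point – as the text says, the output still goes to the application owner for verification before it becomes your migration plan.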
Application Compatibility & User Acceptance Testing
Get a server migration wrong and you could be taking out an entire department or key business function. Therefore, understanding whether your application will experience issues on the new target platform is of paramount importance. The good news is that there are ways of establishing this information quicker than you’d imagine. Let’s look at each application type:
- Vendor Application – if your application has been bought off the shelf, you should, of course, be able to find out this information pretty easily from the vendor. Equally, if you are doing it at scale, take a look at our friends FutureStateIT, who host a very useful database of application currency and supported versions/platforms
- MSI (Packaged) or Source Code Exists Application – if your application was installed from an MSI, you can analyse it using software from our friends at Dell, which can statically analyse your application package and tell you whether or not it will work on your chosen platform. The same is true if you have access to the original source media, as Dell’s software can create a package from the install media and analyse that for compatibility
- Non-Vendor, Non-Packaged, No Source Code Application – if your application was manually installed and you no longer have the source media, there is still a way to test without huge amounts of effort. In this instance, check out our friends at AppZero, who can place your old application into a bubble, copy it to the new OS for testing and then, if successful, dissolve it into the new platform as a migrated instance. We’re also looking forward to a future release of Dell’s software which can analyse the AppZero bubble for compatibility, simplifying things even further
Irrespective of whether you are automating the compatibility analysis or having to do it all manually, you will still need to get sign off for your migration from the application owners, and potentially end users. Having a slick process and system where you can try it out on the new OS is a key dependency and our friends at AppZero can help here too!
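The three-way triage above amounts to a simple decision function: route each application to a testing approach based on which artefacts survive. Here is a minimal sketch – the category names come from the text, but the record fields are assumptions for illustration.

```python
# Sketch: route each application to a compatibility-testing approach,
# following the three categories described in the text.

def compat_route(app):
    """Pick a testing approach from what artefacts exist for the app."""
    if app.get("vendor_supported"):
        return "check vendor support matrix"
    if app.get("has_msi") or app.get("has_source_media"):
        return "static package analysis"
    # No vendor, no package, no media: containerise and test on the target OS.
    return "containerise and test on target OS"

apps = [
    {"name": "payroll", "vendor_supported": True},
    {"name": "reports", "has_msi": True},
    {"name": "legacy-batch"},  # no vendor, no package, no source media
]
for a in apps:
    print(a["name"], "->", compat_route(a))
```

Encoding the routing like this also gives you a defensible audit trail for why each application took the path it did when sign-off time arrives.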
A common outcome of a server migration project will be to rationalise the number of physical and virtual servers running in your data centre. How to do this scientifically is a challenge faced by most project teams. The answer often lies within your server performance data rather than the existing specification of the hardware. Some tools (such as Microsoft’s MAP) will provide a recommendation on the rationalised estate. However, dig deeper and you will realise that such a tool is primarily looking at the current hardware specification to make its determination. Our experience is that a greater level of understanding may help your end design. For example, if you know the peak CPU and memory usage, and possibly the highest level of IOPS experienced on the disk, it can help properly size the virtual environment for co-existence. This is where tools like Lakeside SysTrack can provide more value, with the ability both to analyse performance over a period of time and to help make a determination on the target state for the new platform.
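Sizing from observed peaks rather than current spec can be sketched as follows. The headroom factor and sample data are illustrative assumptions – in practice you would feed in weeks of monitoring data and choose headroom per workload.

```python
# Sketch: size a target VM from observed performance peaks rather than
# the source machine's hardware spec. Headroom factor is an assumption.
import math

def size_from_peaks(samples, headroom=1.25):
    """samples: list of dicts of observed cpu_cores_used, ram_gb, iops."""
    peak = lambda key: max(s[key] for s in samples)
    return {
        "vcpus": math.ceil(peak("cpu_cores_used") * headroom),
        "ram_gb": math.ceil(peak("ram_gb") * headroom),
        "iops": math.ceil(peak("iops") * headroom),
    }

# Illustrative monitoring samples for one server.
perf = [{"cpu_cores_used": 1.2, "ram_gb": 3.1, "iops": 400},
        {"cpu_cores_used": 2.6, "ram_gb": 3.8, "iops": 950},
        {"cpu_cores_used": 1.9, "ram_gb": 3.5, "iops": 620}]
print(size_from_peaks(perf))  # {'vcpus': 4, 'ram_gb': 5, 'iops': 1188}
```

A server with 4 physical cores and 8 GB of RAM that never exceeds 2.6 cores and 3.8 GB is a candidate for a smaller VM than a spec-only tool would suggest – that delta, summed across the estate, is where the rationalisation gains come from.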
With Server 2012 and 2012 R2, you should be able to achieve a higher density of servers per physical host even on a like-for-like basis. Proper performance analysis will achieve an even better result, and you should be able to understand exactly which servers should be hosted on the same physical unit to achieve the highest levels of rationalisation. Of course, if you are also considering the cloud, and your application or environment is of the type that can be spun up and down dynamically, your target system needs to be sized appropriately, along with any up-time requirements.
Project Scheduling & Reporting
Server migration projects can burn a lot of time and resource before they are ready to achieve any significant deployment volumes. Therefore, it is crucial that your project is able to effectively report on the progress your project team is making, and on how ‘ready’ the servers are becoming for migration. Equally, what is holding you up? Is it a lack of response from an application owner, the data centre not being ready, the application packaging process, change control, or the hundreds of other moving parts? Successful projects have one thing in common: a central, common project engine, where resources can share and update information, and which provides a single source of truth for the project, is essential to running things as efficiently as possible. If reports and scorecards are taking you hours to generate in spreadsheets, think about whether investing in a data warehousing and project readiness/scheduling tracking tool would benefit your program. Of course, we would love you to consider Juriba’s Dashworks if you are looking at project tracking tooling.
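The kind of readiness rollup a central project engine automates can be sketched in a few lines: track the gating tasks per server and surface either "ready" or the specific blockers. The task names and states here are illustrative assumptions, not Dashworks fields.

```python
# Sketch: roll per-server readiness tasks up into a simple scorecard,
# surfacing exactly what is blocking each migration.

TASKS = ["owner_signoff", "app_compat", "target_built", "change_approved"]

def readiness(server_tasks):
    """Return 'ready', or name the tasks still blocking the migration."""
    blockers = [t for t in TASKS if server_tasks.get(t) != "done"]
    if not blockers:
        return "ready"
    return "blocked on: " + ", ".join(blockers)

project = {
    "web01": {"owner_signoff": "done", "app_compat": "done",
              "target_built": "done", "change_approved": "done"},
    "db01": {"owner_signoff": "done", "app_compat": "pending"},
}
for name, tasks in project.items():
    print(name, "->", readiness(tasks))
```

The value is less in the code than in the discipline: every server has the same gating tasks, and the blocker list answers "what is holding you up?" without hours of spreadsheet work.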
The final piece in the puzzle for a successful migration is the ‘how to’. Our friends at AppZero have a great tool for automating many of your migrations. Their clever software will link to the source server, copy the application and all of its settings into a virtual bubble, which can be tested and then dissolved into the target state when ready. Whilst not applicable to all server migrations (for example, some Microsoft technologies are not covered), you can significantly speed up the process by utilising some of this new technology to help with your testing and migration activities. Of course, a significant amount of testing is still required to ensure that your application is fully migrated, and as more and more companies complete their migrations, the software will continue to mature.
Of course, how you manage the migration will likely involve a hybrid mix of different methodologies. Our advice, wherever practical, is to build new, sync to old and cut over. This approach significantly de-risks the migration and reduces the number of moving parts. It’s not always this easy, but if you can build and test on the new environment, the likelihood of application migration failure is lower. Equally, if you are migrating infrastructure such as Active Directory or Exchange/Notes, make sure that you are using some of the great synchronisation tools out there, like Dell Quest Migration Manager or Binary Tree. Both of these companies provide software to ensure that the source and target are aligned, which will remove risk from a major migration such as this.
Did we miss something important? Your feedback is welcome in the comments below...