Over the past few years, we have seen our fair share of failed and dangerously inefficient enterprise attempts to keep Windows 10 and Office 365 Servicing up-to-date. Often, the organization took longer than anticipated to get through the initial migration, skipped an update to come up for air, only to drown under the workload of the next feature update rollout, for which it had vastly underestimated the risk and work effort. Even worse, these updates are often managed manually, with antiquated and inadequate project management practices!
[Please note, the article is part of the Definitive Guide to Successful Windows 10 Servicing series.]
On the bright side, we have also helped dozens of enterprises successfully ready their estates for rolling out smooth waves of Windows 10 and Office 365 Servicing updates, and one lesson stands out: the secret to achieving delivery velocity is a data-driven approach that lets you prioritize workload and resource activities.
While we have previously written about the importance of creating repeatable and scalable processes, we haven't explicitly outlined the data journey itself, which is full of decisions and potential pitfalls. Today, I want to walk you through the data decisions you have to make along the way and highlight what is involved, from a data perspective, in creating an efficient, streamlined, end-to-end feature update release process.
Your data journey can be broken up as follows:
Identify Your Data Repositories & Sources
The first step in your process, of course, is to identify all data sources and repositories required to support your upgrade planning. For example:
- Users (Active Directory)
- Computers (SCCM, Altiris, SysTrack, CMDB)
- Applications (SCCM, Windows Analytics, Altiris, SysTrack, CMDB)
- Office 365 ProPlus Apps (Plugins, Macros, VBA Files)
- Locations (HR Systems)
- Departments (HR Systems)
- Infrastructure (CMDB)
- Mobile Devices (Intune, AirWatch, MobileIron)
While you could export most of this data into multiple spreadsheets, we recommend loading it into an IT Transformation Management platform, like Dashworks, as you will need an always-up-to-date, centralized command and control center to manage your data journey effectively. Dashworks, for example, comes out of the box with robust data connectors that hook into almost any system, enabling you to see and analyze a live view of your estate right away.
Identify Data Linkages & Tie Assets To Your Users
Second, you will have to identify all linkages and dependencies that exist within the data. The simplest example of a data linkage: a user owns a device, and the device has certain applications installed. The user is part of a department and has a designated location, which has its own infrastructure.
However, you will have to go a lot deeper than that. For example, users might be entitled to applications beyond those installed on their device, hold different permissions and entitlements in the environment, or use mobile devices. You need to understand all of these relationships and link them back to the user object effectively. As you can see, things have already gotten a lot more complex, and creating actionable linkages between data objects is difficult in a spreadsheet or a hand-cranked database, so sophisticated data management is required.
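As an illustration, the joins described above can be sketched in a few lines of Python. The repository exports, field names, and sample values here are hypothetical, not a real connector schema:

```python
from collections import defaultdict

# Hypothetical flat exports from the repositories listed earlier.
users = [{"id": "u1", "department": "Finance", "location": "London"}]
devices = [{"id": "d1", "owner": "u1"}]
installs = [{"device": "d1", "app": "Excel Plugin X"}]
entitlements = [{"user": "u1", "app": "SAP GUI"}]  # apps granted outside the device

# Index installed applications by device for fast lookup.
apps_by_device = defaultdict(list)
for row in installs:
    apps_by_device[row["device"]].append(row["app"])

# Build one linked record per user: owned devices, installed apps, entitlements.
linked = {}
for u in users:
    owned = [d["id"] for d in devices if d["owner"] == u["id"]]
    linked[u["id"]] = {
        "department": u["department"],
        "location": u["location"],
        "devices": owned,
        "installed_apps": sorted({a for d in owned for a in apps_by_device[d]}),
        "entitled_apps": [e["app"] for e in entitlements if e["user"] == u["id"]],
    }
```

The point of the sketch is that every asset ends up reachable from the user object, which is what makes later scoping and scheduling queries cheap.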
Extract, Transform, Load & Cleanse
For all the data nerds and efficiency seekers out there, this is probably the most exciting part of the process: extracting, transforming, loading, and cleansing the data. It holds the biggest potential for improvement.
First, you categorize and rationalize your list of "Not Applicable" or "Irrelevant" applications. These are the hotfixes, patches, standard language packs, and runtimes that would never be packaged specifically for Windows 10. This can be a significant portion of the application estate at first, as much as 65%, but as you run through this process again and again, the percentage of irrelevant apps should diminish.
Next, you will define your device ownership rules, covering shared PCs and devices with no identified owner, as well as your location requirements. For this, you need to confirm your site list and mapping rules, and verify users and devices against the site mapping, before you can create your data cleansing rules.
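A minimal sketch of a "Not Applicable" rule set, assuming simple name-pattern rules; the patterns and inventory entries are illustrative, not a recommended list:

```python
import re

# Illustrative name patterns for applications that would never be packaged
# specifically for Windows 10: hotfixes, patches, language packs, runtimes.
NOT_APPLICABLE = [
    re.compile(p, re.IGNORECASE)
    for p in (r"^hotfix", r"^security update", r"language pack",
              r"redistributable", r"runtime")
]

def is_applicable(app_name: str) -> bool:
    """True if the app survives the 'Not Applicable' filter."""
    return not any(p.search(app_name) for p in NOT_APPLICABLE)

inventory = [
    "Hotfix for Windows (KB2533623)",
    "Microsoft Visual C++ 2015 Redistributable",
    "English Language Pack",
    "Contoso Finance Client 4.2",
]
in_scope = [a for a in inventory if is_applicable(a)]
```

In practice these rules live in the platform rather than in code, but the principle is the same: codify the filter once so every refresh of the inventory is cleansed identically.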
Identify In-Scope Assets & Onboard Users To Groups
Now that you have identified, linked, and cleansed your data sources, it is time to identify all of your in-scope assets, e.g. for Feature Release 1709, and associate your users and devices with default deployment groups, or Deployment Rings. This methodology allows for a staggered or phased deployment and helps you streamline your rollout process significantly. Then you onboard all of your in-scope users, computers, and associated applications into the Dashworks project groups, which happens automatically.
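Default ring assignment usually boils down to a small, ordered rule set. The sketch below assumes hypothetical device attributes and ring names; your own rules and ring count will differ:

```python
def assign_ring(device: dict) -> str:
    """Assumed ring rules: IT pilot first, then early adopters, then broad,
    with business-critical devices deferred to the last broad wave."""
    if device.get("it_owned"):
        return "Ring 0 - IT Pilot"
    if device.get("early_adopter"):
        return "Ring 1 - Early Adopters"
    if device.get("business_critical_apps"):
        return "Ring 3 - Late Broad"
    return "Ring 2 - Broad"

fleet = [
    {"name": "PC-001", "it_owned": True},
    {"name": "PC-114", "early_adopter": True},
    {"name": "PC-205", "business_critical_apps": True},
    {"name": "PC-902"},
]
rings = {d["name"]: assign_ring(d) for d in fleet}
```

Because the rules are deterministic, re-running the assignment after each data refresh keeps ring membership consistent as devices join or leave scope.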
Application Categorization & Ring Analysis
Now you have to turn your attention to one of the biggest practical roadblocks when it comes to Windows 10 and Office 365 Servicing: your application compatibility.
Although Microsoft touts that applications from Windows 7, Windows 8.1 and, of course, previous Windows 10 versions should generally be compatible and require little to no additional testing or repackaging, I know of no enterprise or large organization that would be willing to take that risk!
Therefore, it is prudent to take an inventory of your application estate and categorize the applications based on their volume, usage and risk factor for causing business disruption.
- High Volume Business Critical (High Risk) Apps
- Low Volume Business Critical (High Risk) Apps
- High Volume General (Low Risk) Apps
- Low Volume General (Low Risk) Apps
Once you know how many users require business-critical, low-volume applications versus high-volume, general apps, something magical happens: you can create velocity reporting! If you are using tools like Upgrade Readiness or Applicationreadiness.com, you can also feed in important compatibility information to help you plan further. For example, a device containing only low-volume, 'compatible' applications should be an easy target.
Instead of blindly working down a list, your testing and packaging team can prioritize the applications that unlock the most seats right away. This can be up to 80% of your environment. After some quick certification testing, you can usually be confident enough that these are ready to go live while you manage any remaining issues in flight.
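The prioritization itself is a simple counting exercise: for each device, track which applications still block it, then rank apps by how many seats each would unblock once certified. A sketch with made-up device and app names:

```python
from collections import Counter

# Hypothetical map of device -> set of apps still blocking its upgrade
# (i.e. untested, non-"Not Applicable" applications).
blockers = {
    "PC-001": {"AppA"},
    "PC-002": {"AppA"},
    "PC-003": {"AppA", "AppB"},
    "PC-004": {"AppC"},
    "PC-005": set(),  # nothing blocking: ready to schedule now
}

# Devices with no blockers can be upgraded immediately.
ready_now = [d for d, apps in sorted(blockers.items()) if not apps]

# Rank remaining apps by the number of seats each would unblock.
seat_counts = Counter(app for apps in blockers.values() for app in apps)
priority = seat_counts.most_common()  # highest seat count first
```

Certifying "AppA" first unblocks three seats at once, which is exactly the velocity effect described above.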
Application Validation & Readiness
At this point, you will know which applications you should investigate further and which don't need testing. But in order to get to your target list of apps effectively, you still need to make a few decisions. For example:
- Application owner/tester assignment
- Which applications to keep, retire, or upgrade
- Testing process agreement
- Test slot booking/release deployment
- Results tracking
- Low-impact apps: compatibility derived from rollout numbers
This will kick off your application testing and packaging process. Again, if this process is automated using smart workflows and a central command and control, you can create trigger-based migration schedules, resulting in an exponentially faster rollout.
Application Readiness, Scheduling, and Communication
Once the first batch of apps has cleared packaging & testing, you can move into app readiness.
From a data perspective, you should:
- Set low-impact apps to GREEN. Because you won't be testing your low-risk general applications, the readiness flag in Dashworks Project can be set to "Green" to indicate that these applications are ready to go. These can be set up as tollgates to trigger workflow automation.
- Receive test results for tested applications and set readiness status accordingly.
- Auto-update statuses as apps become ready, or as the deployed volume warrants a change in color.
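The readiness rules above amount to a small decision function. The sketch below assumes hypothetical field names and a 50-seat threshold for volume-based promotion; neither is a Dashworks default:

```python
def readiness_status(app: dict) -> str:
    """Assumed RAG rules: low-impact apps skip testing and go straight to
    GREEN; tested apps take their test result; apps with enough live seats
    and no failures get promoted to AMBER; everything else stays RED."""
    if app.get("low_impact"):
        return "GREEN"
    result = app.get("test_result")  # "pass", "fail", or None (untested)
    if result == "pass":
        return "GREEN"
    if result == "fail":
        return "RED"
    # Volume-derived compatibility: enough deployed seats without incident.
    if app.get("deployed_seats", 0) >= 50:
        return "AMBER"
    return "RED"
```

Encoding the rules this way is what makes the tollgates automatable: every data refresh re-evaluates the same function instead of waiting on a manual status update.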
To support your auto-upgrades, you will need to set up your automated, trigger activity-driven communications and allow users to initiate self-service scheduling. This can be done leveraging Dashworks' Self-Service capabilities and Automated Communication.
Now, we are finally ready to get the ball rolling! Most of these migrations will probably be in-place upgrades, which are done by placing the device in question in an SCCM collection that triggers the upgrade. The process also manipulates application entitlements if they need to change. For example, if version A doesn't work in the next feature release but version B does, and we need to move you from A to B, we would orchestrate that activity (e.g. Active Directory group adds/removes) as well as trigger the upgrade process.
So, in reality, you are using Dashworks to push the data to SCCM and Active Directory at the right moment to trigger deployments. Of course, first you would run tests within your IT organization or a small pilot group, and then do the low risk deployments before moving on to managed deployments of early adopters and broad deployment across your environment.
I cannot stress enough how absolutely critical it is that your team works on one central platform providing real-time data through a single pane of glass. Once you get over 1,000 seats, your twice-yearly Windows 10 and Office 365 Servicing will fail without the ability to run at least the following reports at the drop of a hat:
- Application readiness & testing status
- Feature release deployment completion status (by department)
- Scheduling status (by Deployment Ring)
- Plan vs. Actuals against completion date (FR EOL)
- Velocity reports (application prioritization)
- Summary scorecard
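As one example, a plan-vs-actuals report reduces to comparing each department's completion percentage with where a linear plan says you should be by now. All dates, departments, and seat counts below are made up for illustration:

```python
from datetime import date

start = date(2018, 5, 1)   # hypothetical rollout start
eol = date(2019, 4, 9)     # hypothetical end-of-service date for the release
today = date(2018, 11, 1)
progress = {"Finance": (420, 500), "HR": (50, 120)}  # (migrated, total seats)

# Where a linear migration plan says we should be by today.
expected_pct = (today - start).days / (eol - start).days

report = {}
for dept, (done, total) in sorted(progress.items()):
    actual_pct = done / total
    report[dept] = "ON TRACK" if actual_pct >= expected_pct else "BEHIND"
    print(f"{dept}: {actual_pct:.0%} vs plan {expected_pct:.0%} -> {report[dept]}")
```

A real plan would rarely be linear, but even this crude baseline flags departments that need attention before the feature release end-of-life date arrives.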
To successfully roll out twice-yearly feature updates for both Windows 10 and Office 365, it is essential to take a data-driven approach. It allows you to find low-hanging fruit that immediately accelerates your project, improves the cleanliness of your estate, and can even save you maintenance and license costs. To do so, you need to map out your exact data journey and data models and implement them in an IT Transformation Management tool that provides real-time data connectors, one central command and control platform, and much more.