Most of our customers are currently deciding 1) which Windows 10 upgrade management strategy to pursue (a 6-, 12-, 18-, or 24-month cycle), and 2) whichever cadence they choose, how to achieve the velocity needed to get everything ready, tested, and rolled out within that period.
To manage your Windows 10 upgrades efficiently and with maximum velocity, a lot of upfront activities have to be completed first, e.g., getting a new build image ready, testing your hardware, planning your deployment rings, and more.
As you might already know, Microsoft currently suggests four deployment rings: preview, targeted, broad, and critical. To define those rings, most organizations have traditionally started by looking at their devices and finding friendly business testers to bring into these rings.
This allows the project team to gather feedback on whether or not the applications work from testers who do not mind some associated inconvenience. Unfortunately, organizations are starting to realize that this process doesn't always work out as planned: there is no guarantee that the testers even did anything with the applications on the new release, and there is no way of actually measuring the feedback on those applications.
Often a tester might sign off after nothing more than launching an application. There is also no impact analysis to determine that successfully testing a specific set of applications makes X devices ready for migration, so the friendly-tester approach didn't quite achieve the coverage required to create a proper deployment plan.
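That missing impact analysis is straightforward to express in code. The sketch below is purely illustrative (the device inventory and application names are made up, and this is not Dashworks functionality): given a device-to-application inventory and a set of tested applications, it lists the devices whose entire application footprint is covered and which are therefore ready to migrate.

```python
# Illustrative impact analysis: which devices become upgrade-ready
# once a given set of applications has been tested and signed off?
# The device-to-application inventory is hypothetical sample data.

device_apps = {
    "LAPTOP-001": {"MS Office", "Chrome", "SAP GUI"},
    "LAPTOP-002": {"MS Office", "Chrome"},
    "DESKTOP-003": {"MS Office", "AutoCAD"},
}

tested_apps = {"MS Office", "Chrome"}

# A device is upgrade-ready when every app it carries has been tested.
ready = [d for d, apps in device_apps.items() if apps <= tested_apps]
print(f"Testing {len(tested_apps)} apps makes {len(ready)} device(s) upgrade-ready: {ready}")
```

Run against a real inventory, the same subset test shows exactly which devices each additional signed-off application unlocks.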
Recently, we tackled this problem by taking an application-based approach to defining deployment rings, with the goal of maximizing testing velocity.
The preview ring is still defined much like the friendly-tester approach, but now you select a statistically representative sample of devices within each business unit and in IT, give them an in-place upgrade, and have their users continue using their machines as is. Any issues are reported back to the project team, ideally through automated application health reporting such as that available in Microsoft's Desktop Analytics platform.
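One way to read "a statistically representative sample within each business unit" is a stratified random sample. Here is a minimal sketch, assuming a flat device inventory where each record carries a business-unit attribute (the field names and data are illustrative):

```python
import random
from collections import defaultdict

def preview_ring(devices, per_unit=2, seed=42):
    """Pick a fixed-size random sample of devices from each business unit."""
    rng = random.Random(seed)
    by_unit = defaultdict(list)
    for dev in devices:
        by_unit[dev["business_unit"]].append(dev["name"])
    ring = []
    for unit, names in sorted(by_unit.items()):
        # Sample per_unit devices per unit (or all of them if the unit is small).
        ring.extend(rng.sample(names, min(per_unit, len(names))))
    return ring

inventory = [
    {"name": "FIN-01", "business_unit": "Finance"},
    {"name": "FIN-02", "business_unit": "Finance"},
    {"name": "HR-01",  "business_unit": "HR"},
    {"name": "IT-01",  "business_unit": "IT"},
    {"name": "IT-02",  "business_unit": "IT"},
]
print(preview_ring(inventory, per_unit=1))  # one device per business unit
```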
What happens after the preview ring has now become a lot more formalized. Within the targeted ring, we take a hybrid approach: testing a large number of applications live by picking the machines that give us the most application coverage, while also testing a specific subset of applications more thoroughly by asking our official business unit testers to validate specific apps with approval and sign-off.
This gives us a larger targeted ring, but as a result we can plan the subsequent broad rings much more effectively. For example, if we can put these 30 devices in the targeted ring, we will have 100% critical application coverage, which opens up xx potential upgrade-ready machines.
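Picking the fewest machines that together cover the most applications is a classic set-cover problem, and a greedy heuristic is a common (if approximate) way to tackle it. The sketch below uses hypothetical inventory data; it illustrates the technique, not how Dashworks computes its rings:

```python
def targeted_ring(device_apps, target_apps):
    """Greedy set cover: repeatedly pick the device that adds the most
    not-yet-covered applications until every target app is covered."""
    uncovered = set(target_apps)
    chosen = []
    while uncovered:
        best = max(device_apps, key=lambda d: len(device_apps[d] & uncovered))
        gained = device_apps[best] & uncovered
        if not gained:          # remaining apps appear on no device
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

device_apps = {
    "D1": {"Office", "Chrome", "SAP"},
    "D2": {"Office", "AutoCAD"},
    "D3": {"Chrome", "Visio", "SAP"},
    "D4": {"AutoCAD", "Visio"},
}
ring, missing = targeted_ring(device_apps, {"Office", "Chrome", "SAP", "AutoCAD", "Visio"})
print(ring, missing)  # -> ['D1', 'D4'] set()  (two devices cover all five apps)
```

The greedy choice is not guaranteed optimal, but for typical estates it gets close while staying fast enough to re-run whenever the inventory changes.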
But in order to do that, we have to develop a holistic application testing strategy.
In the past, we have found five different types of app testing that our customers view as possible strategies for their Windows 10 upgrades, ranging from testing nothing at all to fully testing every application.
The problem is that most enterprises cannot afford to do it all (too lengthy, costly, and labor-intensive), but they also cannot afford to do nothing at all (too risky). To achieve enough velocity to get through yearly upgrade cycles, they need to understand the state of their application estate and take a pragmatic approach to the application testing cycle.
While none of our enterprise customers would risk barging ahead and testing none of their apps, we are finding that, more and more, large organizations do not necessarily want to fully test all of their applications either. However, they do want a feel for which applications might cause problems and which will work fine.
This can be accomplished through a combination of testing approaches.
You cannot achieve deployment velocity without appropriate planning. The important thing is to categorize your applications once, as critical, important, core (part of the base image, e.g., MS Office), or not important, before trying to figure out which apps effectively become part of each ring. Then you can determine how, within that application estate, you will plan to test each of those apps.
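As a sketch of that one-off categorization step: the four categories below come straight from the paragraph above, while the applications and the category-to-ring mapping are made-up examples.

```python
from collections import defaultdict

# Hypothetical one-off categorization; the categories are from the
# planning step above, the applications are illustrative.
apps = {
    "SAP GUI":   "critical",
    "AutoCAD":   "important",
    "MS Office": "core",          # part of the base image
    "7-Zip":     "not important",
}

# Illustrative mapping of category -> ring in which the app must be
# proven before the broad rollout can proceed.
RING_FOR = {
    "critical":      "targeted",
    "important":     "targeted",
    "core":          "preview",
    "not important": "broad",
}

ring_scope = defaultdict(list)
for app, category in apps.items():
    ring_scope[RING_FOR[category]].append(app)

for ring, scoped in ring_scope.items():
    print(f"{ring}: {scoped}")
```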
No matter which methodology or combination of methodologies you choose (a hybrid approach is industry best practice), you still need a central repository and command-and-control center to manage it all.
Within Juriba Dashworks, you can create a WaaS Application Ring Coverage Report that gives you a quick overview of how many of your applications are critical, important, and not important, along with the ability to filter by project, ring name, and criticality.
Using this information, you can set up automated project readiness, scheduling, and migration workflows. For example, a critical app needs to go through a proper UAT phase, get a sign-off, be distributed slowly to people, and more. On the other hand, for a low-impact application, you could create a workflow within Dashworks that rolls the app out, monitors it in the live environment, and turns it red if issues are reported.
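The branching logic behind such a workflow is simple to express. A hypothetical sketch (the step names are illustrative, not a Dashworks API):

```python
def workflow_for(app):
    """Route an app to a rollout workflow based on its criticality.
    Steps mirror the examples above; names are illustrative only."""
    if app["criticality"] == "critical":
        return ["UAT phase", "business sign-off", "phased distribution"]
    # Low-impact apps: push broadly, then watch the live environment.
    return ["deploy broadly", "monitor in production", "flag red on reported issues"]

print(workflow_for({"name": "SAP GUI", "criticality": "critical"}))
print(workflow_for({"name": "7-Zip", "criticality": "low"}))
```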
Dashworks gives you a highly effective and efficient hybrid approach that allows you to manage your entire application portfolio, define your deployment rings around the applications to get the most coverage, and therefore get through your upgrades with minimal hassle, resources, and time.