Get serious about client performance management

The old axiom "work expands to fill time" seems to parallel a truth about client computing: capabilities expand to consume available resources. As an Enterprise Services Architect responsible for Intel's IT client architecture, I have seen firsthand how IT shops, software vendors, and users manage to throw everything but the kitchen sink into client systems, only to be surprised by the resulting hit on performance. Let's face it: at most companies, client performance is an afterthought that becomes important only when people start screaming. Making matters worse, everybody wants to dictate what goes on the client: business, productivity, communication, and collaboration applications; security and manageability agents; connectivity managers; backup clients; personal user applications; and, more recently, virtualization applications.

It's time to break free of reactive performance management "tiger teams" and get serious about addressing client performance in a more proactive way. Intel IT has begun to take a much more comprehensive view of client performance and has instituted a new framework for client performance management. Here are ten key learnings that you may be able to use at your company.

1. Form a client performance virtual team (v-team).

This step is first on the list for a reason. Client performance cannot be approached from only one perspective; if it could, it wouldn't be such a difficult problem. Form a virtual team consisting of at least one person from each of the following areas: client platform engineering, security and manageability engineering, client support (helpdesk), release management, human factors engineering, and enterprise architecture. To keep us on task, our client performance v-team has the mission "to drive an intentional approach to client performance management".

2. Develop a process and identify tools to measure performance, establish benchmarks, and set performance targets.

This is a discipline that needs to be baked into your client engineering processes. To determine what performance should be expected from any generation of client in your environment, you first need a baseline for comparison. The best time to develop this baseline is when your next-generation client is ready for deployment and its performance has been optimized. Industry and/or third-party benchmarking tools can form part of the "performance profile" for your latest release. The choice of benchmarking tool(s) is less important than picking at least one and beginning to document the baseline performance of your clients. Data collected in a pilot under real-world conditions will also be useful in the baseline profile. The goal is to end up with a yardstick you can use later for performance troubleshooting on the same generation of clients and for setting targets for the next generation.
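As a concrete illustration of what the baseline step might look like in practice, here is a minimal Python sketch that times two toy workloads and records the medians as a "performance profile". The workloads, metric names, and generation label are stand-ins for a real benchmarking suite, not any particular tool:

```python
import json
import os
import statistics
import tempfile
import time

def time_cpu_benchmark(iterations=500_000):
    """Time a simple CPU-bound loop (a stand-in for a real CPU benchmark)."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def time_disk_benchmark(size_mb=8):
    """Time a sequential write of size_mb megabytes to a temporary file."""
    chunk = b"\0" * (1024 * 1024)
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    os.unlink(f.name)
    return time.perf_counter() - start

def build_baseline_profile(client_generation):
    """Run each benchmark several times and record the median as the baseline."""
    runs = 3
    return {
        "client_generation": client_generation,
        "cpu_seconds": statistics.median(time_cpu_benchmark() for _ in range(runs)),
        "disk_write_seconds": statistics.median(time_disk_benchmark() for _ in range(runs)),
    }

if __name__ == "__main__":
    # The profile would be archived alongside the release for later comparison.
    print(json.dumps(build_baseline_profile("2008-corporate-laptop"), indent=2))
```

The important design point is taking the median of repeated runs and archiving the resulting profile with the release, so the same numbers can serve later as the troubleshooting yardstick described above.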

3. Develop a process and identify tools for tracking and reporting platform performance on the installed base of clients.

This is related to, but different from, the last item, and it is more difficult to pull off without further impacting performance. Benchmarks and performance targets are important, but they are established in lab conditions or controlled tests and created only occasionally. Here we need to instrument the client and provide supporting infrastructure to collect data from live clients. To do this, you will probably need some sort of agent running on the client. The results of this data collection and aggregation ought to correlate with the feedback you are getting from the helpdesk about their top performance issues. One related idea we are considering is a tool that lets the user check a performance status indicator on their desktop and/or push a button that sends a snapshot of their system status to the helpdesk when they are experiencing a performance issue.
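To make the "push a button" idea concrete, here is a minimal Python sketch of such a snapshot: it gathers a few basic system facts with the standard library and serializes them for submission. The helpdesk endpoint and field names are hypothetical; a production agent would collect far richer data (CPU, memory, running processes) and post it over an authenticated channel:

```python
import json
import platform
import shutil
import socket
from datetime import datetime, timezone

def capture_performance_snapshot():
    """Collect a basic system snapshot. A real agent would add CPU, memory,
    and process data (for example via a library such as psutil)."""
    disk = shutil.disk_usage("/")
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "disk_total_gb": round(disk.total / 1024**3, 1),
        "disk_free_gb": round(disk.free / 1024**3, 1),
    }

def send_to_helpdesk(snapshot, endpoint="https://helpdesk.example.com/api/snapshots"):
    """Serialize the snapshot for submission; the endpoint is hypothetical.
    A real agent would POST this payload; here we just return it so the
    sketch stays self-contained."""
    return json.dumps(snapshot)
```

The value of the snapshot is that it captures the machine's state at the moment the user is feeling the pain, rather than forcing the helpdesk to reconstruct it after the fact.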

4. Dedicate resources to forward engineering of performance enhancing capabilities.

There are a number of emerging technologies and capabilities that can actually deliver improved performance on the client, mostly under the general heading of QoS. For example, some third-party products help with resource and process prioritization. There are also new capabilities baked into Vista that can be leveraged to improve performance, including I/O prioritization and client-side, policy-based network QoS.

5. Establish and maintain a strategic client performance capability roadmap.

A strategic capability roadmap for client performance will help by defining targets and setting context for engineering activities. Many of these capabilities are discussed above, including those that enable performance management and those that enhance performance. Such a roadmap can also be used to push application vendors to improve the performance of their applications.

6. Continuously validate platform performance against established benchmarks.

Modify your client release management process so that a comparative analysis can be done between your performance benchmarks/targets and the actual performance of the new client platform. You'll need to ensure that the benchmarking methodology you developed earlier can be reused exactly as-is by the QA team.
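The comparative analysis can be as simple as flagging any metric that regressed beyond a tolerance. This Python sketch assumes the baseline and the new build's results are both dictionaries of benchmark timings (lower is better); the metric names and the 10% threshold are illustrative, not a standard:

```python
def check_against_baseline(baseline, measured, tolerance=0.10):
    """Compare measured benchmark times against baseline values.

    Returns (metric, baseline_value, measured_value) tuples for every
    metric that regressed by more than `tolerance` (10% by default).
    """
    regressions = []
    for metric, base_value in baseline.items():
        if metric not in measured:
            continue  # metric not measured on this build; skip it
        if measured[metric] > base_value * (1 + tolerance):
            regressions.append((metric, base_value, measured[metric]))
    return regressions

# Illustrative numbers only: boot time held steady, app launch regressed.
baseline = {"boot_seconds": 45.0, "app_launch_seconds": 3.2}
new_build = {"boot_seconds": 44.1, "app_launch_seconds": 4.0}
for metric, base, now in check_against_baseline(baseline, new_build):
    print(f"REGRESSION: {metric} went from {base}s to {now}s")
```

A check like this can gate the release process: a build that trips the threshold goes back to engineering instead of out the door.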

7. Institute an ongoing continuous improvement process.

Establish a rhythm for taking what's learned about performance from PCs in the wild and incorporating the fixes, best-known methods (BKMs), enhancements, and optimizations into the client build engineering process for future releases.

8. Don't play chicken with your PC refresh cycle!

You know how people still do things that they know they're going to regret? Exactly. If you know what the right PC refresh cadence is for your environment, don't mess with it! The school of hard knocks has taught us that when we stretch the lifetime of our PC fleet, we pay for it in the end with spikes in helpdesk call volume, above-average failure rates, and complaints of performance degradation. The temptation is great to put off spending for a quarter or two, or to hold off to intercept a new version of an OS or a new hardware platform, but this is a losing strategy that will probably land you right back in tiger-team firefighting and will end up costing more in the long run.

9. Map out the client ecosystem and figure out where you can eliminate redundancy.

Take a fresh look at all the capabilities and products that are either installed on your client or that exist in the infrastructure and impact the client. That way you can identify redundancy that can be streamlined. For example, do you have two or more manageability agents with overlapping functions? Can you live with one even if it means giving up a feature or two? Don't forget infrastructure services that impact clients from afar. "Agentless" performance data collectors, for example, still have a performance impact on the client.

10. Establish client integration standards.

Assuming you have a healthy governance process, standards can act as both sword and shield to protect your client platform from being overrun by the barbarians. Like your city's building codes, these policies and related guidance set a bar for application owners and service providers so they understand what is required and expected of them before they try to land anything on the client. Some IT shops have developed a "minimum security specification" that stakes out the absolute bottom-line security controls that must be implemented in a given solution. Consider establishing a "minimum performance specification" to help educate application developers and vendors about performance optimization on your clients.