Sunday, April 5, 2009

Applying Data Center Recovery Principles to PCs

PCs have become the operational lifeblood of most organizations, and the number of viruses, malware variants and zero-day attacks continues to increase. Large security practices have been built up to address these problems, and yet the trend surges forward unabated. Are we simply stuck with paying ever more to protect our systems, or is there a point where PCs should be treated in the same manner as the servers in the data center?

Large companies installed mainframes in the 1960s to automate back office processes, gain efficiencies and enable new and larger business models. It soon became apparent that the data center needed to be well protected. Guards, keycard entry, UPS power and fire suppression systems became the norm. But investments to keep the data center safe were not enough, and the disaster recovery business was created to allow a company, at a fraction of the cost of running a duplicate data center, to recover its critical applications if the primary data center became unusable. You could never spend enough money to reduce the risk of losing the primary data center to zero.

Perhaps applying the same principles to recovering access to these same applications in the event of a massive virus outbreak, power blackout or communications failure would provide a cost-effective solution. These may well be the same applications that employees need while working remotely.

There are many ways to architect and design a solution, but the lowest common denominator in today's world is the web browser. If you're fortunate enough to have all your applications web-enabled, then you have a huge head start. Perhaps a dual-boot option on your corporate PCs with a Linux/Firefox partition is enough to get your users productive again. Another strategy is to have employees use their home computers, now almost ubiquitous, as their backup devices. A final option is to re-stage each PC, although this may take more time than the business can tolerate.

For those not fortunate enough to be fully web-enabled, which includes most of us, a solution to access those applications needs to be available, but it should not require a huge investment in hardware and software. The advent of pay-as-you-go Cloud Computing and more robust Open Source software come to the rescue. The idea is to build a ready-to-go desktop image in the Cloud (e.g. Amazon Web Services) using Linux, Firefox, native Linux applications and Windows applications running under the Wine environment. This image would have the VPN connectivity needed to reach the back-end services in your data center. Each user would spin up a copy of the image, with proper authentication of course, and be back in business in minutes. Alternatively, open source virtualization software could let multiple people use one Cloud server concurrently.
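As a rough illustration of what such a ready-to-go image might contain, here is a cloud-init style configuration fragment. Everything in it is an assumption for illustration: the package names, the VPN profile path and the placeholder endpoint are not from any tested build, and you would substitute your own VPN configuration and authentication scheme.

```
#cloud-config
# Sketch of a recovery-desktop image: a Linux instance with Firefox,
# Wine for Windows applications, and an OpenVPN client for the tunnel
# back to the corporate data center. Package names and the VPN profile
# below are hypothetical placeholders, not a working configuration.
packages:
  - firefox
  - wine
  - openvpn
write_files:
  - path: /etc/openvpn/client/corp.conf   # hypothetical VPN profile
    permissions: '0600'
    content: |
      client
      dev tun
      proto udp
      remote vpn.example.com 1194   # placeholder endpoint
      auth-user-pass                # each user authenticates individually
runcmd:
  - systemctl enable --now openvpn-client@corp
```

Baking this into a saved machine image means each user's copy boots with the browser, Wine and the VPN client already in place, leaving only per-user authentication at spin-up time.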

This image might also be used for home or hotel access, potentially avoiding the extra cost of providing laptops by leveraging personal and hotel business-center PCs. A copy of the image that provides isolated access during disaster recovery testing can also significantly reduce the networking effort those tests require. These are just a few of the possible uses for a solution architected in this manner.
