03 May 2010

IT Hotsite Best Practices

Introduction

A "hotsite" is a general term for unplanned downtime - a failing site, product, or feature that is having significant impact on revenue generation.  A problem is escalated to hotsite level when significant numbers of (potential) customers are affected and a business ability to earn money is significantly affected.  Hotsite handling may or may not be used if the problem is not under direct control of the team controlling a set of systems (e.g., a critical feature the systems depend on is provided by a remote supplier, such as a web service being used by a mashup).

Hotsites happen.  Costs increase without bound as you push your system design and management towards 100% uptime.  You can aspire to 100% uptime, but it's foolish to guarantee it (e.g., in an SLA).  Change can also cause service disruptions.  In general, the less change, the less downtime.  However, it's rarely commercially viable to strongly limit change.

This article isn't about reducing planned or unplanned downtime; it's a collection of tips, tricks, and best practices for managing an unplanned downtime after it has been discovered by someone who can do (or start to do) something about it.  I'll also focus on a particular type of downtime: one the people involved haven't seen before.

General strategy - the management envelope

For a major problem, it's important early on to separate technically solving the problem from managing the problem out into the wider business.  Because an unplanned downtime can be extremely disruptive to a business, keeping people informed about the event is often almost as important as solving the event itself.

Although that may feel like an odd statement, as a business grows there are people throughout it who are trying to manage risk and mitigate the damage caused by the downtime.  Damage control must be managed in parallel with damage elimination.

You want to shelter those who are able to technically solve the problem from those who are hungry for status updates and who slow down the problem solving process by "bugging" critical staff for information.  Technical problem solving tends to require deep concentration that is slowed by interruptions.

It is the management envelope's responsibility to:
  • Agree periods of "no interruption" time with the technical staff to work on the problem
  • Shelter the team from people who are asking for updates but are not helping to solve the problem
  • Keep the rest of the business updated on a regular basis
  • Set and manage expectations of concerned parties
  • Recognize if no progress is being made and escalate
  • Make sure the escalation procedure (particularly to senior management) is being followed
  • Make sure that problems (not necessarily root cause related) discovered along the way make it into appropriate backlogs and "to-do" lists

General strategy - the shotgun or pass-the-baton

Throughout the event, you have to strike a balance between consuming every possible resource that *might* have a chance to contribute (the "shotgun") versus completely serializing the problem solving to maximize resource efficiency ("pass-the-baton").

Some technologists, particularly suppliers who may have many customers like you, may not consider your downtime as critical as you do.  They will only want to be brought in once the problem has been narrowed down to their area, and won't want to "waste" their time helping to collaboratively solve a problem that isn't "their problem".

There is a valid argument here.  It is ultimately better to engage only the "right" staff to solve a problem so that you minimize impact on other deliverables.  Your judgment about who to engage will improve over time as you learn the capabilities of the people you can call on and the nature of your problems.

However, my general belief is that for a 24x7 service like an Internet gambling site that is losing money every second it is down, calling in whoever you think you might need to solve the problem is generally fully justified.  And if you're not sure, err on the shotgun side rather than passing the baton from one person to the next.

General strategy - the information flows and formats

Chat.  We use Skype chat with everyone pulled into a single chat.  Skype's chat is time stamped and allows a large number of participants (25+) in a single chat group.  We spin out side chats and small groups to focus on specific areas when the big group chat becomes too "noisy", although the big chat is still useful for logging information.  It gives us a version history to help make sure change management doesn't spin out of control.  We paste in output from commands and note events and discoveries.  Everything is time-threaded together.

The management envelope or technical lead should maintain a separate summary of the problem (e.g., in a text editor) that evolves as understanding of the problem/solution evolves.  This summary can be easily copy/pasted into chat to bring new chat joiners up to speed, keep the wider problem solving team synchronized, and be used as source material for periodic business communications.

Extract event highlights as you go.  It's a lot easier to extract key points as you go than to go through hours of chat dialogues afterwards.

Make sure to copy/paste all chat dialogues into an archive.

Email.  Email is used to keep a wider audience updated about the event so they can better manage outwards to partners and (potential) customers.  Send out an email to an internal email distribution list at least every hour or when a breakthrough is made.  Manage email recipients' expectations - note whether there will be further emails on the event or whether this is the last email of the event.

The emails should always lead off with a non-technical summary/update.  Technical details are fine, but put them at the end of the message.

At a minimum, send out a broad distribution email when:
  • The problem is first identified as a likely systemic and real problem (not just a one-off for a specific customer or a fluke event). Send out whatever you know about the problem at that time to give the business as much notice as possible. Don't delay sending this message while research is conducted or a solution is created.
  • Significant information is discovered or fixes are created over the course of the event
  • Any changes are made in production to address the problem that may affect users or customers
  • More than an hour goes by since the last update and nothing has otherwise progressed (anxiety control)
  • At the end of a hotsite event, covering the non-tech details on root cause, solution, and impact (downtime duration, affected systems, customer-facing effects)

Chain related emails together over time.  Each time you send out a broad email update, send it out as a Reply-All to your previous email on the event.  This gives newcomers a connected high-level view of what has happened without having to wade through a number of separate emails.

Phone.  Agree a management escalation process.  Key stakeholders ("The Boss") may warrant a phone call to update them.  If anyone can't be reached quickly by email and help is needed, they get called.  Keep key phone numbers with you in a format that doesn't require a network/internet connection.  A runbook with supplier support numbers on the shared drive isn't very useful when the network is down or the power has failed.

The early stage

Potential hotsite problems typically come from a monitor/alert system or customer services reporting customer problems. Product owners/operators or members of a QA team (those with deep user-level systems knowledge) may be brought in to make a further assessment on the scope and magnitude of the problem to see if hotsite escalation is warranted.

Regardless, at some point the first line of IT support is contacted.  These people tend to be more junior and make the best call they can on whether the problem is a Big Deal or not.  This is a triage process, and it is critical in determining how much impact the problem will have on a group of people.  Sometimes, a manager is engaged to make the call on whether to escalate an issue to hotsite status. Escalating a problem to this level is expensive as it engages a lot of resources around the business and takes away from on-going work. Therefore, a fair amount of certainty that an issue is critical should be reached before the problem is escalated to hotsite level.  The first line gets better at this escalation call with practice and retrospective consideration of how past events were handled.

Once the event is determined to be a hotsite, a hotsite "management envelope" is identified.  The first line IT support may very well hand all problem management and communications off to the management envelope while the support person joins the technology team trying to solve the problem.

All relevant communications now shift to the management envelope.  The envelope is responsible for all non-technical decisions that are made.  Depending on their skills, they may also pick up responsibility for making technical decisions as well (e.g., approving a change proposal that will/should fix the problem). The envelope may change over time, and who the current owner and decision maker is should be kept clear to all parties involved.

The technical leader working to solve the problem may shift over time as possible technical causes and proposed solutions are investigated.  Depending on the size and complexity of the problem, the technical leader and management envelope will likely be two different people.

Holding pages.  Most companies have a way to at least put up "maintenance" pages ("sorry server") to hide failing services/pages/sites.  Sometimes these blanket holding pages can be activated by your upstream ISP - ideal if the edge of your network or web server layer is down.  Even better is being able to "turn off" functional areas of your site/service (e.g., specific games, specific payment gateways) in a graceful way such that the overall system can be kept available to customers while only the affected parts of the site/service are hidden behind the holding pages.
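
As an illustration of the "turn off a functional area" idea, here is a minimal sketch of one common approach - a flag file that the web tier is assumed to check on each request, serving the holding page for that feature when the file exists.  The script, paths, and feature names below are hypothetical, not a description of any particular setup.

#!/bin/sh
# Sketch only: toggle a per-feature holding page by creating or removing a
# flag file.  Assumes the web tier checks $FLAG_DIR/<feature>.flag and shows
# a "sorry" page for that feature while the file is present.
FLAG_DIR=/var/www/holding-flags      # hypothetical location

case "$1" in
  on)  touch "$FLAG_DIR/$2.flag" && echo "holding page enabled for $2" ;;
  off) rm -f "$FLAG_DIR/$2.flag" && echo "holding page disabled for $2" ;;
  *)   echo "usage: $0 on|off <feature>" >&2; exit 1 ;;
esac

For example, "holding.sh on payments" would hide just the payment pages while the rest of the site stays available.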

Holding pages are a good way to give yourself "breathing room" to work on a problem without exposing the customer to HTTP 404 errors or (intermittently) failing pages/services.

Towards a solution

Don't get caught up in what systemic improvements you need to make in the future.  When the hotsite is happening, focus on bringing production back online and just note/table the "what we need to do in the future" items on the side.  Do not dwell on these underlying issues, and definitely no recriminations.  Focus on solving the problem.

Be very careful of losing version/configuration control.  Any in-flight changes to stop/start services or anything created at a filesystem level (e.g., a log extract) should be captured in the chat.  Changes of state and configuration should be approved in the chat by the hotsite owner (either the hotsite tech lead or the management envelope).  Generally, agree within the team where in-flight artifacts can be created (e.g., /tmp) and the naming conventions to use (e.g., a name-date directory under /tmp as a scratchpad for each engineer), along the lines of the sketch below.
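
A minimal sketch of that scratchpad convention (the directory layout and the log path are illustrative assumptions, not prescriptions):

# Each engineer works out of their own name-date directory under /tmp so
# in-flight artifacts (log extracts, config copies) stay separate and are
# easy to reference in the hotsite chat.
SCRATCH="/tmp/$(whoami)-$(date +%Y%m%d)"
mkdir -p "$SCRATCH"
cp /var/log/myapp/application.log "$SCRATCH/application.log.extract"   # hypothetical log path
echo "working files are in $SCRATCH"   # paste this into the chat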

All service changes up/down and all config file changes or deployment of new files/codes should be debated, then documented, communicated, reviewed, tested and/or agreed before execution.

Solving the problem

At some point there will be an "ah-ha" moment where a problem is found or a "things are looking good now" observation - you've got a workable solution and there is light at the end of the tunnel.

Maintaining production systems configuration control is critical during a hotsite. It can be tempting to whack changes into production to "quickly" solve a problem without fully understanding the impact of the change or testing it in staging.  Don't do it.  Losing control of configuration in a complex 24x7 environment is the surest way to lead to full and potentially unrecoverable system failure.

While it may seem painful at the time, quickly document the change and communicate it in the chat or by email to the parties that can intelligently contribute to it or at least review it.  This peer review is critical in helping to prevent making a problem worse, especially if it's late at night and you're trying to problem-solve on little or no sleep.

Ideally you'll be able to test the change out in a staging environment prior to live application.  You may want to invoke your QA team to health check around the change area on staging prior to live application.

Regardless, you're then ready to apply the change to production.  It's appropriate to have the management envelope sign off on the fix - certainly someone other than the person who discovered and/or created the fix must consider overall risk management.

You might decide to briefly hold off on the fix in order to gather more information to help really find a root cause.  It is sometimes the case that a restart will likely "solve" the problem in the immediate term, even though the server may fail again in a few days.  For recurring problems the time you spend working behind the scenes to identify a more systemic long term fix should increase with each failure.

In some circumstances (tired team, over a weekend) it might be better to shut down aspects of the system rather than fix it (apply changes) to avoid the risk of increasing systems problems.

Regardless, the step taken to "solve" the problem and when to apply it should be a management decision, taking revenue, risk, and short/long term thinking into account.

Tidying up the hotsite event

The change documentation should be wrapped up inside your normal change process and put in your common change documentation archive.  It's important you do this before you end the hotsite event in case there are knock-on problems a few hours later.  A potentially new group of people may get involved, and they need to know what you've done and where they can find the changes made.

Some time later

While it may be a day or two later, any time you have an unplanned event, IT owes the business a follow-up summary of the problem, its effects, and the solution.

When putting together the root cause analysis, keep asking "Why?" until you bottom out.  The answers may become non-technical and commercial in nature, and that's OK.  Regardless, don't be like the airlines - "This flight was late departing because the aircraft arrived late."  That's a pretty weak excuse for why the flight is running late.

Sometimes a root cause is never found.  Maybe during the event you eventually just restarted services or systems and everything came back up normally.  You can't find any smoking gun in any of the logs.  You have to make a judgment call on how much you invest in root cause analysis before you let go and close the event.

Other times the solution simply isn't commercially viable.  Your revenues may not warrant a super-resilient architecture or highly expensive consultants to significantly improve your products and services.  Such a cost-benefit review should be in your final summary as well.

At minimum, if you've not solved the problem hopefully you've found a new condition or KPI to monitor/alert on, you've started graphing it, and you're in a better position to react next time it triggers.
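
As a minimal sketch (the log path, pattern, threshold, and alert address below are all hypothetical placeholders - substitute whatever symptom you actually found), even a simple cron-able check buys you earlier warning next time:

# Count occurrences of the newly discovered symptom in recent log lines and
# alert the team if it crosses a rough threshold.
COUNT=$(tail -n 1000 /var/log/myapp/application.log | grep -c "connection reset by peer")
if [ "$COUNT" -gt 20 ]; then
    echo "symptom recurring: $COUNT hits in the last 1000 log lines" \
      | mail -s "hotsite follow-up alert" ops@example.com
fi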

A few more tips

Often a problem is found that is the direct responsibility of one of your staff.  They messed up.  Under no circumstances should criticism be delivered during the hotsite event.  You have to create an environment where people are freely talking about their mistakes in order to effectively get the problem solved.  Tackle sustained performance problems at a different time.

As more and more systems and owners/suppliers are interconnected, the shotgun approach struggles to scale as the "noise" in the common chat increases in proportion to the number of people involved.  Although it creates more coordination work, side chats are useful to limit the noise, bringing in just those you need to work on a sub-problem.

Google Wave looks like a promising way to partition discussions while still maintaining an overall problem collaboration document.  Unfortunately, while it's easy to insist that all participants use Skype (many do anyway), it's harder with Wave, which not many have used and for which many don't even have an account or invite available.

Senior leadership should reinforce that anyone (Anyone!  Not just Tech) in the business may be called in to help out with a hotsite event.  This makes the team working on the hotsite fearless about who they're willing to call for help at 3am.

Depending on the nature of your problem, don't hesitate to call your ISP.  This is especially true if you have a product that is sensitive to transient upstream interruptions or changes in the network.  A wave of TCP resets may cause all kinds of seemingly unrelated problems with your application.
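
If you suspect that kind of upstream trouble, a quick packet capture can give you evidence before you pick up the phone.  A sketch (the interface name is a placeholder for whatever faces your ISP):

# Watch for a burst of TCP resets arriving at the edge of your network.
tcpdump -ni en0 'tcp[tcpflags] & (tcp-rst) != 0'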

Conclusion

Sooner or later your technical operation is going to deal with unplanned downtime.  Data centres aren't immune to natural disasters, and regardless, their fault-tolerance testing and verification may be no more regular than yours.

When a hotsite event does happen, chances are you're not prepared to deal with it.  By definition, a hotsite is not "business as usual" so you're not very "practiced" in dealing with them.  Although planning and regular failover and backup verification is a very good idea, no amount of planning and dry runs will enable you to deal with all possible events.

When a hotsite kicks off, pull in whoever you might need to solve the problem.  While you may be putting a spanner into tomorrow's delivery plans, it's better to err on the shotgun (versus pass-the-baton) side of resource allocation to reduce downtime and really solve the underlying problems.

And throughout the whole event, remember that talking about the event is almost as important as solving the event, especially for bigger businesses.  The wider team wants to know what's going on and how they can help - make sure they're enabled to do so.

Using MobileMe's iDisk as an interim backup while traveling

Introduction

I use an Apple laptop hard disk as my primary (master) data storage device.  To provide interim backups while traveling, I use Apple's MobileMe iDisk for network backups to supplement primary backups only available to me when I'm at home.

Having dabbled with iDisk for a few years, I have two key constraints for using iDisk:
  • I don't always have a lot of bandwidth available (e.g., a mobile phone GPRS connection) and I don't want a frequent automatic sync to hog a limited connection.
  • I don't trust MobileMe with primary ownership of data or files.  Several years ago I switched to using the iDisk Documents folder (with local cache) for primary storage but then had several files magically disappear.

I've now evolved to using iDisk as a secondary backup medium, running the following manually when I have plenty of bandwidth available.  There are two steps:
  • rsync files/folders from specific primary locations to a named directory under iDisk
  • Sync the iDisk

How to do it

The rsync command I use looks like this:


# Back up selected home folders to the local iDisk cache, logging as we go.
for fn in Desktop dev Documents Sites; do
   # Record the size of each folder, then mirror it (deleting removed files) to the iDisk copy.
   du -sk "/Users/my_username/$fn" | tee -a ~/logs/laptop_name-idisk.rsync.log
   rsync -avE --stats --delete "/Users/my_username/$fn" "/Volumes/my_mobileme_name/laptop_name/Users/my_username" | tee -a ~/logs/laptop_name-idisk.rsync.log
done

The rsync flags in use:

-a         archive (-rlptgoD no -H)
           -r    recursive
           -l    copy symlinks as symlinks
           -p    preserve permissions
           -t    preserve times
           -g    preserve group
           -o    preserve owner
           -D    same as "--devices --specials" (preserve device and special files)
-v         verbose
-E         preserve extended attributes
--stats    detailed info on sync
--delete   remove destination files not in source


Explanation:
  • I'm targeting specific locations that I want to backup that aren't overly big but tend to change frequently (in this case several folders from my home directory: Desktop, dev, Documents, Sites)
  • A basic log is maintained, including the size of what is being backed up (the "du" command)
  • I use rsync rather than copy because rsync is quite efficient - it generally only copies the differences, not the whole fileset.
  • The naming approach on the iDisk keeps a backup per laptop name, allowing me to keep discrete backup collections over time.  My old laptop's backups sit beside my current laptop's backups.
  • The naming approach also means I don't use any of the default directories supplied by iDisk as I'm not confident that Apple won't monkey with them.
  • ~/Library/Mail is a high change area but not backed up here (see below for why)

The rsync updates the local iDisk cache.  Once the rsync is complete (after the first run, I find subsequent rsyncs take less than 10 seconds), manually kick off an iDisk network sync (e.g., via a Finder window, clicking on the icon next to iDisk).

An additional benefit to having a network backup of my important files and folders is that I can view and/or edit these files from the web, an iPhone, or a PC.  I find that being able to access email/IMAP from alternative locations is the most useful feature, but I have also had minor benefit from accessing files when my laptop was unavailable or inconvenient to access (e.g., a quick check of a contract term in the back of a taxi on an iPhone).

Other Backups

I have two other forms of backups:
  • Irregular use of Time Machine to a Time Capsule, typically once a week if my travel schedule permits.
  • MobileMe's IMAP for all email filing (and IMAP generally for all email).

Basically, if I'm traveling, I rely on rsync/iDisk and IMAP for backups.  I also have the ability to recover a whole machine from a fairly recent Time Machine backup.

Success Story

In June 2009 I lost my laptop HDD on a return flight home after 2 weeks of travel.  I had a Time Machine backup from right before I'd left on travel, and occasional iDisk rsyncs while traveling.

Once I got home I found an older HDD of sufficient size and restored from the Time Machine image on the Time Capsule.  This gave me a system that was just over 2 weeks "behind".  Once IMAP synchronized my mailboxes, that only left a few documents missing that I'd created while traveling.  Luckily I'd run an rsync and iDisk sync right before my return flight, so once I'd restored those files, I'd recovered everything I'd worked on over the two weeks of travel, missing only some IMAP filing I'd done on the plane.

Weakness

The primary flaw in my approach is that you have to have the discipline to remember to manually kick off the rsync and iDisk sync after you've made changes you don't want to lose.  I certainly don't always remember to run it, nor do I always have a good Internet connection available.  However, remembering sometimes is still better than having no recent backup at all.

Alternative Approaches

An obvious alternative is to use the MobileMeBackup program that is preloaded onto your iDisk under the Software/Backup directory.  Using this tool, you should be able to perform a similar type of backup to what I've done here.  I've not tried it as it was considered buggy back when I first started using iDisk for network backups.  I'll likely eventually try this and may shift to it if it works.

A viable alternative approach is to carry around a portable external hard drive, and make Time Machine backups to it more frequently than you would otherwise do over the network via iDisk.  You could basically keep a complete system image relatively up-to-date if you do this.  More hassle, but lower risk and easier recovery if your primary HDD fails.  However, if you get your laptop bag and external HDD stolen, you'll be worse off.

While on holiday recently, I was clearing images off of camera SD card memory as it filled up.  I put these images both on the laptop HDD and on an external HDD.  This protects me from laptop HDD failure, but wouldn't help if both the laptop and the external HDD were stolen.
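
A sketch of that two-destination copy (the volume names and target folder are placeholders for whatever your card and external drive mount as):

# Copy the camera card to both the laptop and the external drive in one pass.
for dest in "/Users/my_username/Pictures/holiday" "/Volumes/ExternalHDD/holiday"; do
   mkdir -p "$dest"
   rsync -av "/Volumes/CAMERA_SD/DCIM/" "$dest/"
done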

iDisk Comparison to DropBox

DropBox is a popular alternative to iDisk.  I find DropBox to be better at quickly and selectively sharing files, it has better cross-platform support (particularly with a basic Android client), and its sync algorithm seems to work better than the iDisk equivalent.  You could certainly do everything described here with DropBox.

The downside with DropBox is having to pay $120 per year for 50GB of storage versus $60-100 per year ($60 on promotion, e.g., with a new Apple laptop; otherwise $100) for 20GB of storage with MobileMe.  I find 20GB to be plenty for IMAP, iDisk and photos provided I filter out big auto-generated emailed business reports (stored on the laptop disk, not in IMAP) and only upload small, edited sets of photos.  I'll probably exhaust the 20GB in 2-3 more years at my current pace, but I'd expect Apple to increase the minimum by the time I would otherwise be running out of space.

MobileMe is of course more than just iDisk, so if you use more of its features, it increases in value relative to DropBox.

Both iDisk and DropBox are usable choices; the differences are not sufficiently material to strongly argue for one or the other.  I have seen iDisk improve over the last few years and I'd expect Apple to eventually catch up with DropBox.

Conclusion

While I'm not confident in using MobileMe's iDisk as a primary storage location, I have found it useful as a network backup.  Combined with normal backups using Time Machine and a Time Capsule, it provides high-confidence recovery of a damaged or lost primary-use laptop.