Lessons from Katrina: The Real Story, or Hope Is Not a Method
In recent peer group Forums I have met several bankers from companies that were struck by Katrina and Rita. They had a unique perspective on the disaster's impact on bank operations and customer welfare, as well as on the effectiveness of traditional Disaster Recovery planning during a true catastrophe. John Hairston, COO of Hancock Bank, headquartered in Gulfport, MS, recently shared some amazing insights with me and a group of CEOs. His presentation provided the backbone of the article below. In addition, Kevin Reed of Whitney Bank, headquartered in New Orleans, and Lois Ann Stanton of Texas Regional Bancshares, headquartered in Beaumont, TX, provided invaluable comments on what works and what doesn't during a disaster. Their thoughts are summarized and shared below.
Also, Laura Hansen of Mechanics Bank shared the following story with me after reading the last BirdsEye View article about problem resolution: "I have a personal experience where this process worked. I received a complaint phone call from a customer... who proceeded to tell me how he had been treated during a recent visit to one of our offices. I let the customer spill out all his frustrations, interjecting periodically to let him know I was listening. I took detailed notes on what he told me. When he came to the end of his story, I responded with what I heard him say and asked for confirmation of my understanding of the problem. I ended by letting him know that I understood his frustration and asked him what he would like me to do, since it wasn't clear from what he told me. He said, 'nothing'; he only wanted someone to listen and not to place blame on him. I was the first person, after three other bankers he had talked to, who didn't try to make excuses." Laura concludes, "I have never forgotten that experience. I truly came away from that call glad that I had taken it. Not all calls end this way, but even if only one in five do, look how many customers we can make a difference with." Well said!
As always, I'd be grateful for any thoughts and comments on what works and what doesn't in this all-important arena. Looking forward to hearing from you,
Lessons From Katrina: The Real Story, or Hope Is Not a Method
Katrina and Rita showed us all that Disaster Recovery isn't what we thought it was: a controlled test of a few items in a fully planned environment. Instead, banks learned that, while you can't plan for the unthinkable, you can be better prepared for those times when disaster strikes. John Hairston of Hancock Bank, Kevin Reed of Whitney Bank, and Lois Ann Stanton of Texas Regional Bancshares shared some of the main lessons their organizations learned during those excruciating times in late August 2005. It is admirable that all three companies have grown substantially (at least 20% balance sheet expansion) since the devastation, have not suffered serious loan losses, and have become even more integrated into their communities as a pillar and source of strength for all.
When disaster strikes, assume you are alone. The most basic needs that we take for granted become major obstacles to survival and business recovery in these situations. Food, water, lumber, tarp, tools (especially electric saws), medicine and fuel should be warehoused for short-term use (4-7 days). Keep your supplies, especially fuel, under wraps, or FEMA will appropriate them. As an additional precaution, place external suppliers on retainer so that they will deliver supplies automatically, with no contact or request, when disaster strikes. You will not be able to contact them when the time comes. At that time, trucks should start rolling immediately to pre-determined locations. The trucks should be marked EMERGENCY RESTORATION. Similarly, have a pre-determined source for currency delivery that will keep delivering until told otherwise.
One of the major obstacles to timely recovery is lack of fuel. Fuel arriving at the strike zone is likely to be seized by law enforcement for governmental use. Keep diesel and gasoline on hand to support your emergency generators and transport employees. Keep external delivery on retainer as well, with routes using roads other than interstates or highways, as those become choked with traffic (it took some people over 18 hours to drive 50 miles getting out of Houston in the wake of Rita). Have several trucks, trailers and many five-gallon gas cans at the ready.
Communication is another huge issue. Phones became unreliable, especially in Louisiana and Florida, when BellSouth towers collapsed. To ensure that communications work, each of your locations should have two-way communications, either satellite or storm-proven tower-based. All employees who are needed to get the business going again should have similar devices, with their names pre-keyed into directories.
Evacuation can be messy. In preparation for an orderly process, identify the employees who are mission-critical to short-term recovery (leadership, wire, technology, check processing, ACH, correspondent and cash letter reconciliation, other reconciliation personnel, etc.) and provide evacuation (bus, plane, etc.) to keep critical teams together at the recovery sites. Make plans to evacuate and deploy those employees first.
Plan evacuation ahead of the disaster to pre-tested, established sites for business continuity. Each employee should state whether they wish to be evacuated or stay and, if leaving, where they will go, how they can be contacted, and when to expect their return.
You will likely not have enough employees to open all key locations. Some branches might stay closed for a while, and some may never reopen. Again, being prepared is the way to go. Establish key locations as rallying points for returning bankers; each person should know where to report, along with one backup location.
The IT recovery center is the key location for IT and analytical staff. Consider sending mission-critical IT staff up ahead of the anticipated disaster to be ready to restore systems. Also make plans to deploy your reconciliation staff early; that will save you days of unwinding messy accounting trails later on.
In general, do not put all the corporate eggs in one basket, no matter how tempting.
It is safe to assume that the local telecommunications companies will not be functional and that backup circuits are unavailable as well. Satellite backup is recommended, as well as pre-ordered 800 numbers, call lists posted on the website, and pre-determined conference calls at a specific hour each day for all key employees. Losing employees and not being able to connect was one of the greatest and most damaging surprises of Katrina. Planning for it in the future is imperative.
ATMs, web banking, check processing, VRUs, call centers, cash management, and wire technology should all have redundancies in hot or warm sites for immediate activation. Test each critical server-based application across the disaster network. Even if the software application will run, it is useless if bandwidth is congested and response time is poor. For example, if it takes more than 15 minutes to warm up a large-scale device (e.g. a teller station), you have an infrastructure problem. It should be easy and take no time; don't let anyone tell you differently.
Also, test your backup sites for volume-appropriate sturdiness. Testing an item sorter with 500 items doesn't mean the equipment will hold up under 500,000 items, as was proven during last year's disasters.
Also, every branch should have basic customer account information on hand, such as name, address, identifier, account status and balance. During Katrina, banks cashed customer checks without knowing the balances, since all information was centrally housed and inaccessible. The reconciliation issues and the potential fraud exposure are huge in such cases. Ask your vendor or internal IT department to develop a simple application to provide this information. It IS possible (and doesn't cost an arm and a leg)!
Management oversight of Disaster Recovery plans
Set up a no-nonsense, "not here to make friends" steering committee comprised of experienced and objective front and back office professionals to ensure that every area has an adequate plan. The plans don't have to be thick, but testing and execution must be confirmed. Require an annual walk-through, a physical test (with appropriate volumes) and results reporting to the Board's Audit Committee. Give the committee teeth: failure of a recovery test or plan should be considered unacceptable and should bear serious consequences.
Business or operating unit responsibilities
Recovery plans must be simple, practical and executable. It should also be clear that each area bears the responsibility for plan development and identification of key employees, with pre-set understandings of evacuation strategy and dependencies. Lines of business are also responsible for ensuring that back-up facilities are in place, equipped and tested.
All IT dependencies should be tested by each business unit. This is not an IT function, but rather the responsibility of the unit to ensure that its systems work in a recovery mode.
Establish a business recovery coordinator for each unit who knows the bank's plan, and specifically the unit's plan, and is held accountable for making sure that each associate in that unit knows their role.
Key vendors and service providers should provide you with a plan and work with you to ensure they will deliver the expected service during a disaster. Assume that a key partner does not deliver: are they still the right partner for you? Similarly, coordinate with key customers to make sure they are not left in the lurch.
How much should you invest?
Don't be short-sighted when it comes to investment of capital and time in adequate planning. At the same time, recognize you can't plan for every eventuality; plan for the people and the resources you need; the rest will have to be improvised based upon the specific situation.
Below is a list of ideas that folks who have been on the firing line suggest. Keep them in mind as you plan for the "unplannable".
Make sure your reconciliation people are deployed back as early as possible.
Keep executive management visible to the employees.
Remember that the mail doesn't work, may not work for months, and the postal service will not necessarily acknowledge that there is a problem.
"The veneer of civility is very thin during dire times"; expect aberrational behavior.
Put bankers' names and their contact numbers on the front page of your website during a disaster.
Pre-set conference call numbers and times for the team to connect each day at a pre-determined hour. Buy 1-800 phone numbers for both customers' and employees' usage.
Cash providers might not be able to service you, nor will other typically reliable vendors.
Increase employees' credit card limits during an emergency; empower them to buy what it takes (paper, supplies, etc.) without going through the normal channels.
Know that text messaging sometimes works where voice communication doesn't.
In summary, we all know that lightning is unlikely to strike twice, but we have also seen it strike thrice this past year in certain parts of our country. Through the eyes of the local banks, we now know that what we deemed adequate preparedness isn't. The thoughts above can help each of you contemplate the steps you need to take to be better prepared for the next lightning strike, should it ever occur. While there is a cost to such preparedness, the cost of its absence is even greater.