Last month’s tsunami warning lit up our phones. Clients asked the same, very human questions: Do we shut everything down? What about the servers? Will staff be able to work if the office is inaccessible? If you felt that spike of adrenaline, you weren’t alone. Events like a tsunami warning are a reminder that in Hawaiʻi, planning beats panic. This post is our practical, plain-English guide to disaster preparedness and recovery—what to do before, during, and after an incident—with a special focus on moving data and critical services to the Cloud and maintaining backups so your business can keep going.
Our goal here is simple: help you build a business continuity plan that’s realistic for your team and budget, reduces downtime, and gives you confidence the next time the emergency alerts go off.
Why Hawaiʻi Businesses Need a Continuity Plan That’s “All-Hazards”
Disasters here don’t read calendars, and they don’t stick to one playbook. In Hawaiʻi, continuity planning should be all-hazards—built once, usable for many scenarios:
- Tsunamis & king tides: Flooding, saltwater intrusion, and extended building closures.
- Hurricanes & tropical storms: Wind damage, power outages, connectivity loss.
- Flooding & landslides: Facility access blocked, water damage to gear.
- Earthquakes & volcanic hazards (VOG/ashfall on some islands): Power instability; particulate risks to equipment.
- Wildfires: Fast-moving evacuations, poor air quality, widespread infrastructure disruption.
- Extended power or Internet outages: The most common “everyday disaster.”
- Supply chain interruptions: Replacement parts and new hardware can take longer to reach the islands.
Your plan should not be a binder that gathers dust. It should be a small set of checklists, communications templates, and decision rules that your team can actually use under stress—plus the right technology choices that let you avoid the scramble in the first place.
Business Outcomes First: Define RTO, RPO, and “What Good Looks Like”
Before talking about Cloud, backups, or gear, define the outcomes your business needs:
- RTO (Recovery Time Objective): How quickly must a system be back online? (e.g., “Email: 2 hours. EMR: same day. File shares: 24 hours.”)
- RPO (Recovery Point Objective): How much data loss is acceptable? (e.g., “We can lose at most 1 hour of changes.”)
- Critical processes & systems: Which workflows generate revenue or protect safety/compliance? Rank them.
- MBCO (Minimum Business Continuity Objective): The smallest workable version of your operations you can sustain during recovery (e.g., 50% of call center capacity, curbside service only, paper intake forms for one day).
If you set these targets first, technology choices get easier and budgets get clearer. You’re not paying for “the Cloud” or “backups”—you’re paying to hit an RTO/RPO your leadership agrees on.
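To make this concrete, targets like these can be written down as plain data and sanity-checked against your backup schedule. Here's a quick sketch in Python; the system names and numbers are hypothetical examples, not recommendations:

```python
# Hypothetical RTO/RPO targets (in hours) for ranked systems -- illustrative only.
targets = {
    "email":       {"rto_hours": 2,  "rpo_hours": 1},
    "file_shares": {"rto_hours": 24, "rpo_hours": 4},
}

# A backup schedule can only meet an RPO if backups run at least that often.
backup_interval_hours = {"email": 1, "file_shares": 24}

def rpo_gaps(targets, intervals):
    """Return systems whose backup interval is too long to meet the stated RPO."""
    return [name for name, t in targets.items()
            if intervals.get(name, float("inf")) > t["rpo_hours"]]

print(rpo_gaps(targets, backup_interval_hours))  # → ['file_shares']
```

A gap like the one flagged here (file shares backed up daily against a 4-hour RPO) is exactly the kind of mismatch leadership should see before a budget conversation, not after an incident.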
Cloud as a Resilience Strategy
Moving data and workloads to the Cloud is not just about convenience. Done right, it reduces single points of failure tied to a physical office and speeds up recovery. Done wrong, it can create new blind spots. Here’s the balanced view:
What the Cloud gives you
- Geographic resilience: Data and apps live in regions not in the same flood zone as your office.
- Anytime/anywhere access: If staff can get online, they can work (home, evacuation site, neighbor island, mainland).
- Elastic capacity: Scale up to handle backlogs after an outage.
- Managed services: Less to patch and less hardware to replace locally.
What it doesn’t automatically solve
- SaaS data loss: Deleted or corrupted data in Microsoft 365/Google Workspace still needs independent backups.
- Connectivity: If your office or home Internet is down and there’s no failover, Cloud access still stalls.
- Identity: If your identity provider (IdP) is misconfigured or down, nobody can log in. Redundancy matters.
Bottom line: Cloud reduces infrastructure risk, but you still need backups, identity resilience, and cybersecurity safeguards to keep people productive.
The Backup Rule We Recommend: 3-2-1-1-0
You’ve probably heard 3-2-1 (three copies of your data, two media types, one offsite). Modern threats add two more requirements:
- 3 copies of data (production + two backups)
- 2 different media or storage types
- 1 offsite copy
- 1 immutable or air-gapped copy (can’t be altered or encrypted by ransomware)
- 0 errors after automated backup verification and restore testing
What this looks like in practice:
- Production: Your live data (servers, SaaS, or cloud storage).
- Local backup: A fast-restore copy on NAS or backup appliance in the office.
- Offsite/Cloud backup: Encrypted, immutable copies in another region or provider.
- SaaS backup: Separate backups of Microsoft 365/Google Workspace (mail, OneDrive/Drive, SharePoint, Teams).
- Test restores: Quarterly bare-metal or VM restores; monthly file-level spot checks. Track and fix any errors.
If you can meet your RTO/RPO with this setup, you’re resilient against everything from flood-damaged servers to accidental deletion.
Connectivity and Power: The Two Pillars Everyone Forgets
You can have perfect Cloud backups and still be down if you can’t reach them. Invest in:
- Redundant Internet circuits: Primary fiber/cable + secondary (fixed wireless, LTE/5G).
- Automatic failover: SD-WAN or business-grade firewalls that switch over without manual intervention.
- Prioritized apps: QoS for voice/video and your line-of-business tools during degraded service.
- UPS and safe shutdown: Battery backup sized for orderly shutdown, not for running all day. Pair with scripts/agents that safely power down servers if outage exceeds X minutes.
- Generator (where feasible): For sites that must remain open (clinics, retail hubs), coordinate with building management.
For remote staff: provide work-from-anywhere kits—VPN, MFA, admin-approved personal device policies, and guidance on using mobile hotspots when the home ISP is down, all supported by ongoing IT oversight to keep systems reliable.
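The "power down servers if outage exceeds X minutes" rule from the UPS bullet above can be as simple as a timer check that a monitoring agent evaluates. A minimal sketch, with a hypothetical 10-minute threshold you would tune to your actual battery runtime:

```python
from datetime import datetime, timedelta

# Hypothetical threshold: begin orderly shutdown once utility power has been out
# longer than the UPS can comfortably sustain. Tune to your measured battery runtime.
SHUTDOWN_AFTER = timedelta(minutes=10)

def should_shut_down(outage_started: datetime, now: datetime) -> bool:
    """Decision rule: trigger an orderly shutdown once the outage exceeds the threshold."""
    return (now - outage_started) >= SHUTDOWN_AFTER
```

In practice, most UPS vendors ship agents that do this for you; the point is to decide the threshold deliberately rather than letting the batteries decide for you.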
What to Do Before, During & After Tsunami or Severe Weather
1) Before a warning (pre-season prep and quarterly hygiene)
- Document the plan: One-page quick start + detailed playbook. Store in the Cloud and share offline copies (PDF on phones).
- Assign roles: Incident lead, communications lead, facilities lead, IT lead, vendor contact lead.
- Verify backups: Immutable copy present? Test restores documented?
- Harden identity: MFA everywhere; emergency break-glass accounts stored securely offline.
- Check contact trees: Phone, SMS, Teams/Slack, personal email fallbacks. Run a quarterly check.
- Pre-stage Cloud access: Staff know how to reach files/apps away from the office; VPN split-tunnel rules are tested.
- Label and elevate: Servers/NAS not on the floor; power cables labeled; surge protection in place.
- Tabletop exercise: Walk through a 60-minute scenario quarterly. Who calls whom? Who decides to shut down gear?
2) When a warning is issued
- Safety first: Follow county/state guidance. People > hardware.
- Activate comms: Post updates in a single designated channel (email + Teams post) to avoid confusion.
- Reduce exposure:
- Shut down nonessential workstations.
- If time allows, cleanly shut down on-prem servers after confirming last good backup completed.
- Move any floor-level equipment to higher shelves.
- Switch to remote mode:
- Confirm staff can reach Cloud apps from home.
- Enable phone continuity (cloud telephony, mobile app softphones, auto attendants updated).
- Note the timestamp: Record when actions were taken. This matters for post-incident analysis.
3) After the all-clear
- Safety walkthrough: Facilities checks for water damage, tripped breakers, strange odors/sounds from gear.
- Power up methodically: Firewalls/routers → switches → storage → virtualization hosts → VMs → applications.
- Validate services: Can users authenticate? Are file shares, apps, voice services reachable?
- Run health checks: Verify backups still running; confirm no replication backlog; check server logs.
- Communicate status: “We’re back up. Here’s what to watch for. Here’s how to report issues.”
- Post-mortem: What worked? What should we change? Update the plan within 1–2 business days.
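That power-up order can also live as a tiny runbook script, so nobody has to remember the sequence under stress. A sketch, where the health probe is a placeholder you would replace with real checks (ping, port test, service API call):

```python
# The staged power-up order from the checklist above, expressed as a simple runbook.
POWER_UP_ORDER = [
    "firewalls/routers",
    "switches",
    "storage",
    "virtualization hosts",
    "VMs",
    "applications",
]

def staged_power_up(order, probe):
    """Bring tiers up in order; halt at the first tier that fails its health check.

    Returns (tiers brought up successfully, first failing tier or None).
    """
    brought_up = []
    for tier in order:
        if not probe(tier):           # placeholder: ping, port scan, service API, etc.
            return brought_up, tier   # stop here; fix this tier before continuing
        brought_up.append(tier)
    return brought_up, None
```

Stopping at the first failing tier matters: bringing VMs up before storage is healthy is how a bad morning becomes a bad week.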
Your Continuity Roadmap: Crawl, Walk, Run
Not every business can flip a Cloud-first architecture overnight. Here’s a staged path that works in Hawaiʻi:
Crawl (30–60 days)
- Backups first: Implement 3-2-1-1-0 with immutable cloud copies.
- SaaS backups: Turn on Microsoft 365/Google Workspace backup.
- MFA & break-glass accounts: Baseline identity resilience.
- Basic communications plan: One-page PDFs, contact tree, a single status update channel.
- Tabletop #1: Prove you can execute the plan.
Walk (60–120 days)
- Move file shares to SharePoint/OneDrive or Google Drive with DLP: Reduce on-prem dependency.
- Dual WAN with automatic failover: Keep Cloud reachable.
- Cloud telephony or survivable voice: Maintain inbound calls during outages.
- Harden endpoints: EDR, auto patching, disk encryption, MDM for laptops.
- Tabletop #2: Test partial office outage + remote-only operations for one business day.
Run (120–240 days)
- DRaaS or cloud-native for key apps: Critical servers replicated to Cloud; you can fail over within your RTO.
- Zero-trust access: Conditional access, per-app VPN or ZTNA, device compliance checks.
- Geo-redundant identity: Secondary IdP region/tenant recovery strategy.
- Procurement agreements: Pre-negotiated replacements and loaner equipment (accounting for island logistics).
- Full simulation: Schedule one planned “remote-only day” per quarter.
Special Considerations for Healthcare, Finance, and Regulated Teams
- HIPAA/PII: Ensure Cloud services have compliant BAAs; enable audit logs, retention, encryption. Don’t store PHI in personal clouds or unmanaged devices.
- eSign and paper fallback: Pre-approved temporary paper forms and scanning workflows if systems are offline.
- Chain of custody for devices: Clear procedures for water-damaged laptops/phones; document wipe/destruction for HIPAA.
- Incident documentation: Keep timelines, actions, and approvals in a centralized, access-controlled space (even if created after the fact).
Cloud Myths (and the Reality)
- Myth: “If we’re in Microsoft 365/Google Workspace, we don’t need backups.”
  Reality: SaaS platforms retain data only for limited windows and can’t save you from admin error, sync corruption, or malicious deletion. Use third-party backups with immutable retention.
- Myth: “Moving a server to the Cloud is automatically cheaper.”
  Reality: It’s often more resilient, but cost depends on workload patterns, storage needs, and egress. Use RTO/RPO to choose where it belongs (on-prem, hybrid, or cloud-native).
- Myth: “We’ll just ride out the outage.”
  Reality: Staff want to work, customers need updates, and regulators expect you to have a plan—especially one that aligns with compliance obligations. Continuity is a leadership discipline, not a luxury.
A Simple, Realistic Incident Playbook
Trigger: Tsunami warning or severe weather alert.
Decision Rule: If building access or power is uncertain, switch to remote-first operations immediately and begin controlled shutdowns of nonessential on-prem systems.
Checklist (Business/IT Lead):
- Post the “Incident Activated” message in your designated channel with timestamp.
- Confirm last successful backups (local + offsite + SaaS).
- Initiate workstation shutdown reminder to staff (template message).
- If time permits, perform safe shutdown of on-prem servers (scripted sequence).
- Verify Cloud access path (VPN/IdP) is healthy; flip to remote-first help desk workflows.
- Update phone system greetings/auto attendants for service continuity.
- Log actions and times.
Checklist (Comms Lead):
- Send customer-facing notice if service impact is expected (keep it calm and factual).
- Provide internal status updates at set intervals (e.g., every 60 minutes) to reduce rumor-mill noise.
- After all-clear, announce phased restoration and how to report lingering issues.
After Action (within 48 hours):
- Review timelines, what went well, what to change.
- Record total downtime vs. RTO targets; data loss vs. RPO targets.
- Update the plan and re-train.
Budgeting for Continuity (Without Guesswork)
Budget follows targets. To avoid overspending or underpreparing:
- Tie costs to RTO/RPO: Faster recovery and near-zero data loss cost more. Decide where speed pays off.
- Prioritize impact: Fund the top 3–5 high-impact controls first (immutable backups, redundant Internet, MFA/zero-trust access, SaaS backups, Cloud telephony).
- Consider island logistics: Keep a small inventory of loaner laptops, spare switches, and critical cables.
- Track TCO, not just CAPEX: Managed Cloud services often reduce the hidden costs of on-prem maintenance—especially when severe weather interrupts access.
How We Handle “What Should I Do Right Now?” Calls
When clients call during an active warning, here’s what we recommend:
- Safety first. We never ask staff to drive or stay in unsafe areas.
- Stabilize data. Confirm backup status; if time allows, perform clean shutdowns.
- Shift to remote. Verify users can reach Cloud apps and phones; enable emergency call routing.
- Communicate simply. One channel, periodic updates, short messages.
- Recover methodically. After all-clear, power-up sequence, validation checks, business resumption.
Every one of those steps becomes lighter, faster, and less stressful when your core data and collaboration tools already live in the Cloud and your backups are immutable and tested.
A Short Story: Two Offices, Two Outcomes
Office A kept file servers and phone systems fully on-prem. Backups ran nightly to a local NAS. When floodwater entered the ground floor, the NAS and a switch were damaged. Staff couldn’t access files or phones until the building reopened and replacement parts arrived. The team worked from personal email for two days—risky and chaotic.
Office B migrated files to SharePoint/OneDrive and phones to a Cloud provider with mobile apps. Immutable SaaS backups ran daily, with weekly image-level backups for the few remaining on-prem servers. When the warning came, they shut down nonessential gear and told staff to work from home. Phones and files worked the same from anywhere. Post-event, they powered up gear and resumed on-prem printing the next morning.
Same hazard, different outcomes. Cloud + backups + a simple plan make the difference.
Your Ready-to-Use Checklists
Quarterly Prep
- Test restore from backup (files + a full VM).
- Verify SaaS backups are current and immutable.
- Confirm dual-WAN failover works.
- Update contact tree; run a roll call.
- Tabletop exercise; update plan.
72-Hour Readiness
- Ensure laptops have current patches, VPN, and MFA.
- Verify key staff can log in from home and use softphone apps.
- Confirm power-down order and scripts are documented and accessible offline.
During Warning
- Announce “Incident Activated” in the designated channel with timestamp.
- Confirm last backup status; initiate safe shutdowns if appropriate.
- Switch to remote-first operations; update IVR/greetings.
- Send staff and customer updates at set intervals.
After All-Clear
- Facilities check and staged power-up.
- Application and identity validation.
- Communicate restored status and any known issues.
- Post-mortem: record lessons and update playbook.
Bringing It All Together
Disaster preparedness is not about predicting the next event—it’s about engineering your business to handle the unexpected. In Hawaiʻi, that means shaping a plan that works for tsunamis, storms, and everything in between. Practically, it looks like this:
- Define your outcomes (RTO/RPO, critical processes).
- Adopt Cloud where it adds resilience (files, phones, critical apps).
- Implement 3-2-1-1-0 backups (including SaaS).
- Harden identity and enable remote work (MFA, MDM, VPN/ZTNA).
- Build connectivity and power redundancy (dual-WAN, UPS, safe shutdown).
- Practice with short, realistic drills (tabletops, remote-day tests).
- Communicate simply before, during, and after.
Do those seven things and your next “What should we do right now?” call will be shorter, calmer, and—most important—effective.
Want Help Building or Testing Your Plan?
Intech Hawaii can help you prioritize, budget, and implement a right-sized continuity roadmap for your team—from Cloud migrations to immutable backups and dual-WAN failover. We can also run a 60-minute tabletop exercise with your leadership so everyone knows exactly what to do when the sirens sound.
Next steps:
- Ask us for a Continuity Quick Assessment (RTO/RPO targets + gap analysis).
- Schedule a backup restore test (we’ll document results and fix gaps).
- Get a Cloud & Connectivity plan (files/phones, dual-WAN, remote-work readiness).
When the ocean sends a warning, the best feeling in the world is knowing you’ve already done the hard work. Contact Intech Hawaii today to get started—so your business is ready before disaster strikes.