
Leadership & Strategy
Christopher Porter, Training Camp
12 min read

What Flying Taught Me About Risk Management

I fly. Have for years. And here’s the thing that keeps hitting me every time I’m up there: aviation already figured out the stuff we’re still banging our heads against in cybersecurity. Human error, communication breakdowns, that creeping complacency that sets in when you’ve done something a thousand times. The cascade of little screw-ups that turns into a catastrophe. Pilots dealt with all of it decades ago. We’re playing catch-up.

The numbers are weirdly similar. Aviation accidents? Somewhere between 70 and 80 percent trace back to human factors. Cybersecurity breaches? Depending on which study you believe, 68 to 95 percent involve human error. That’s not coincidence. Both fields run on complex systems that work beautifully right up until people get involved. And the fix isn’t replacing humans with machines. It’s designing around how people actually behave when they’re tired, distracted, stressed, or just convinced they already know the answer. Which, let’s be honest, is most of the time.

You can’t train away human error. Aviation figured that out the hard way. What you can do is build systems that catch the mistakes before they kill anyone. Cybersecurity is slowly getting there.


Swiss Cheese and Why Your Defenses Have Holes

Back in the late 80s, a psychologist named James Reason came up with what’s now called the Swiss Cheese Model. Simple concept, actually. Picture your organization’s defenses as slices of Swiss cheese stacked up. Each slice is a different barrier: training, procedures, technical controls, supervision, culture. And each slice has holes. Because nothing’s perfect. You get a disaster when all those holes happen to line up at the same moment, giving whatever’s going wrong a clear shot through.

Security folks love talking about “defense in depth” like we invented it. Aviation’s been doing this since before I was born. Every aircraft has redundant systems because engineers know any single system will eventually fail. Same idea applies to your security stack. Firewall gets misconfigured. Someone clicks a phishing link. A patch sits in a queue too long. MFA gets bypassed because an attacker smooth talked someone on the phone. Any one of those by itself shouldn’t be fatal. The breach happens when you get unlucky and three or four of them line up at once.

Here’s what took me a while to really internalize: big failures almost never come from one dramatic mistake. They’re the result of a bunch of little things that piled up. Mimecast put out a study in early 2025 showing 95 percent of data breaches had human error as a contributing factor. Not the only cause. Just one of the holes that happened to line up with all the other holes. Your job is catching those little failures before they stack up into something ugly.
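
To make the “holes lining up” idea concrete, here’s a back-of-the-envelope sketch in Python. The layer names and failure rates are invented for illustration, and it assumes the layers fail independently, which real attackers work hard to violate. The point is the arithmetic: every extra slice of cheese multiplies the odds down.

```python
# Swiss Cheese arithmetic: a breach needs every hole to line up at once.
# Layer names and failure probabilities are illustrative, not real data,
# and independence between layers is assumed for simplicity.
from math import prod

layers = {
    "email filter misses the phish": 0.05,
    "user clicks the link": 0.20,
    "endpoint control fails to block": 0.10,
    "MFA bypassed or not enrolled": 0.05,
}

p_all_fail = prod(layers.values())
print(f"P(every layer fails at once) = {p_all_fail:.6f}")  # 0.000050

# Drop one slice (say, MFA) and the odds jump twenty-fold:
p_without_mfa = prod(p for name, p in layers.items() if "MFA" not in name)
print(f"P(breach with no MFA layer)  = {p_without_mfa:.4f}")  # 0.0010
```

The independence assumption is generous to the defender, since a skilled attacker probes for correlated weaknesses. But even as a rough model it explains why removing one “redundant” control can change your risk by an order of magnitude.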


When the Boss Is Wrong and Nobody Says Anything

December 28, 1978. United Airlines DC-8 coming into Portland. Crew spots a landing gear problem. Captain decides to circle while they figure it out. First officer notices fuel getting low. Mentions it. Mentions it again. Captain’s fixated on the gear problem, waves him off. Plane runs out of fuel. Crashes. Ten dead. Eight of them didn’t have to die.

That crash basically invented Crew Resource Management, or CRM. The concept’s not complicated: when the stakes are high enough, the traditional “captain knows best” model gets people killed. If the guy in charge is automatically right because he’s in charge, critical information never makes it up the chain. The first officer sees a problem but doesn’t push hard enough. The flight engineer notices something’s off but figures someone else has it handled. CRM training teaches people to speak up clearly, question what doesn’t look right, and make sure the right information reaches the right people. Doesn’t matter what your rank is.

Now think about your security team. Junior analyst spots something weird in the logs. Do they feel like they can escalate it, or do they worry about looking dumb? Engineer finds a gap in your defenses. Can they say something without getting pushback from whoever built the thing in the first place? These hierarchy problems killed people in cockpits for years. They’re killing security programs right now. SOC analyst sees an anomaly, doesn’t want to bother anyone during a busy day. Penetration tester finds a critical vuln but gets told fixing it would delay the product launch. Same dynamics, different industry.

What Happened Over Sioux City

Portland shows what goes wrong when communication fails. United Flight 232 shows what happens when it works. July 19, 1989. DC-10, Denver to Chicago. The fan disk in the tail engine comes apart and takes out all three hydraulic lines. In a DC-10, hydraulics control everything. Ailerons, elevators, rudder, spoilers. All of it. No hydraulics, no way to steer. This failure was considered so improbable that nobody trained for it. The official NTSB report basically said no crew could have been expected to land a plane in that condition.

Captain Al Haynes and his crew were looking at a situation nobody had survived before. They improvised, figuring out they could sort of steer using differential thrust from the two wing engines. Push more power to one side, the plane turns. Kind of. An off-duty United training captain named Dennis Fitch was on board, came up to the cockpit, and offered to help. Haynes took him up on it immediately. Fitch worked the throttles while Haynes and his first officer tried everything else they could think of. They talked constantly. Threw out ideas. Challenged each other. Nobody had the answer, but collectively they had just enough to get that plane on the ground.

112 people died. But 184 survived. When they tried to replicate the scenario in simulators afterward, professional test pilots couldn’t get anywhere near the runway. Most of them crashed before getting close. The NTSB credited United’s CRM training as a major factor in why anyone lived at all. Haynes said later that if he hadn’t accepted help, if he’d tried to do it all himself, “it’s a cinch we wouldn’t have made it.” That stayed with me.

Ask yourself this: When your incident response team hits something they’ve never seen before, will they have the structures and the psychological safety to improvise? Or will ego and hierarchy stop them from using every resource available? The Flight 232 crew had 103 years of combined flight experience in that cockpit. None of it covered what they were trying to do. They made it because they worked as a team. Not because any one person was a hero.


Checklists Are Boring. Use Them Anyway.

The checklist got invented in 1935, after a Boeing bomber prototype, the Model 299 that became the B-17, crashed during a demo flight. Cause? The pilot forgot to release the control lock before takeoff. That’s it. The plane was just too complex to reliably remember every step, so Boeing made a list. That boring little piece of paper has prevented more accidents than any piece of technology I can think of.

Every pilot I know uses checklists like their life depends on it. Because it does. Before every flight, during critical phases, after landing. Doesn’t matter if you’ve flown ten thousand hours. The checklist isn’t an insult. It’s acknowledgment that human memory is garbage, that routine makes you sloppy, and that missing one step can kill you. I’ve watched guys with more flight time than I’ll ever have work through the preflight list item by item. Because “I’ve done this a million times” is exactly when you mess up.

Cybersecurity has runbooks. We have procedures. We have documentation. But actually following them step by step? In my experience, that’s seen as beginner stuff. The senior engineer skips the deployment checklist because they know what they’re doing. The experienced analyst doesn’t bother with every step in the incident response playbook. And then we get the breach because someone forgot to check whether MFA was enabled on the new admin account, or nobody verified that the backup actually worked before wiping the compromised server.

✈️ Steal These From Aviation
PREFLIGHT

Run before any major change. Backups verified? Rollback plan ready? Access controls confirmed? Dependencies checked? Don’t touch anything until you’ve gone through the list.

IN FLIGHT

Active monitoring while the change is happening. What specific indicators tell you to stop and roll back? Decide that ahead of time, not in the moment.

POST FLIGHT

Confirm everything worked. Update the docs. Tell the next team about anything weird you noticed. Future you will thank present you.

EMERGENCY

Your incident response runbooks. Specific steps, not vague guidance. And practice them. Repeatedly. So they’re automatic when the adrenaline hits.
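
If you want the preflight idea to be enforceable rather than aspirational, here’s a minimal sketch of a checklist gate in Python. The check names are hypothetical stubs; in practice each lambda would call into your real backup, access-review, and dependency tooling.

```python
"""Minimal 'preflight checklist' gate for a major change.

The individual checks are stubs standing in for real integrations
(backup system, access review, dependency scanner); the names are
hypothetical, not a reference to any particular tool.
"""
from typing import Callable, NamedTuple


class Check(NamedTuple):
    name: str
    run: Callable[[], bool]


# Stub checks: replace each lambda with a call into your actual tooling.
PREFLIGHT = [
    Check("Backups verified", lambda: True),
    Check("Rollback plan ready", lambda: True),
    Check("Access controls confirmed", lambda: True),
    Check("Dependencies checked", lambda: True),
]


def run_checklist(checks: list[Check]) -> bool:
    """Work the list item by item. No skipping, no 'I know what I'm doing'."""
    all_passed = True
    for check in checks:
        passed = check.run()
        print(f"[{'PASS' if passed else 'FAIL'}] {check.name}")
        all_passed &= passed
    return all_passed


if __name__ == "__main__":
    if not run_checklist(PREFLIGHT):
        raise SystemExit("Preflight incomplete: don't touch anything yet.")
```

The design choice mirrors the cockpit: the list is data, not tribal knowledge, and the gate fails closed. Nothing ships until every item prints PASS.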


Are You Actually Fit to Be Making Decisions Right Now?

Pilots use something called IMSAFE before every flight. Illness, Medication, Stress, Alcohol, Fatigue, Emotion. The idea is that even a perfect aircraft flown by a fully trained pilot will crash if the pilot isn’t in shape to fly that day. Coming down with something? Judgment might be off. Going through a rough patch at home? Might miss warning signs. Exhausted from a bad week? Mistakes happen that wouldn’t happen otherwise.

FAA data shows fatigue contributes to about 7 percent of aviation incidents. Sounds low until you think about what’s at stake. In cybersecurity, we don’t even bother tracking this stuff. How many incidents happen because the engineer was running on four hours of sleep after a week of on call? How many misconfigurations happen because someone’s mind was on their kid’s surgery tomorrow? How many phishing emails get through because a stressed out employee wasn’t paying attention?

That 2025 Mimecast research found that 8 percent of employees cause 80 percent of security incidents. That concentration tells you something. It’s not that most people are bad at security. It’s that certain people, in certain conditions, are more vulnerable. Maybe they’re drowning in work. Maybe they’re new and don’t know what to watch for. Maybe something’s happening in their life that’s eating their focus. Instead of treating everyone as equally likely to be a problem, we should be thinking about the human factors the way pilots do.

A buddy who flies commercial told me one of the best things CRM training gave him was permission to say “I’m not fit to fly today.” No shame. No career hit. Just an honest read on conditions. How many of your security staff feel like they can say “I’m too tired to be making critical calls right now” without someone holding it against them?
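
For what it’s worth, IMSAFE translates almost word for word to on-call work. Here’s a rough sketch of what a self-check could look like; the questions and the “any single yes” threshold are my own adaptation, not a validated instrument.

```python
# An IMSAFE-style self-check adapted for on-call security work.
# The questions and threshold are illustrative, not a clinical tool.
IMSAFE_ONCALL = {
    "Illness": "Am I sick enough that my judgment is off?",
    "Medication": "Am I on anything that dulls attention?",
    "Stress": "Is something outside work eating my focus?",
    "Alcohol": "Have I had anything in the last 8 hours?",
    "Fatigue": "Did I get less than 6 hours of sleep?",
    "Emotion": "Am I angry, anxious, or distracted right now?",
}


def fit_for_duty(flags: dict[str, bool]) -> bool:
    """Any single 'yes' is worth a conversation before taking critical calls."""
    concerns = [factor for factor, flagged in flags.items() if flagged]
    if concerns:
        print("Flagged:", ", ".join(concerns), "-- hand off or pair up.")
    return not concerns


# Example: honest answers after a rough on-call week.
print(fit_for_duty({k: k == "Fatigue" for k in IMSAFE_ONCALL}))  # False
```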


Compliant Isn’t the Same as Secure

Pilot training hammers on the difference between what’s legal and what’s safe. Regulations set minimums. You can legally fly with a certain ceiling, a certain fuel reserve, a certain rest period. But minimums aren’t recommendations. They’re the absolute floor below which you cannot go. Smart pilots build margins above those minimums because conditions change, things break, and you don’t want to run out of options when that happens.

Same problem in security. PCI DSS, HIPAA, ISO 27001. These frameworks set minimum requirements. Meeting them is mandatory. But compliance and security aren’t the same thing. I’ve seen organizations sail through audits while sitting on major gaps. The audit confirms you have the controls in place. It doesn’t necessarily prove those controls would stop an actual attacker who knows what they’re doing.

This is why I keep telling people that certs and compliance are just the beginning. Our CISSP training covers the regulatory domain, obviously, but we push hard on understanding the difference between knowing the rules and actually understanding security. A CISSP certified professional should know what regulations require and why they exist and where they don’t go far enough. The point isn’t checking boxes. It’s protecting the organization.


Building the Kind of Culture That Actually Works

Aviation went from a dangerous way to travel to one of the safest things you can do. That wasn’t because planes got better, though they did. It happened because the whole industry decided safety was the priority. Near misses get reported and studied, not buried. Root cause analysis looks at systems, not scapegoats. Lessons learned make it into training, procedures, design. Everyone from the greenest first officer to the most senior captain is expected to contribute.

Most security organizations aren’t there yet. Incidents are problems to hide, not analyze. There’s still a reflex toward finding someone to blame rather than systems to fix. Near misses go unreported because nobody wants to admit they almost caused a disaster. Lessons learned stay with the team that lived through them and don’t spread.

The organizations getting this right build environments where surfacing problems is safe. They treat incidents as chances to learn, not excuses for punishment. They invest in training because skilled people are their best defense, not because some auditor said they had to. And they understand that security belongs to everyone, not just the security team.

What You Can Do Starting Tomorrow

Run blameless postmortems. Something breaks, you focus on what went wrong in the system. Not who screwed up. Write it down. Share it widely. Make the lesson stick.

Create a way to report near misses. That phishing email someone almost clicked. The misconfiguration someone caught in code review. The weird activity that turned out to be nothing. Gold mines, all of them. But only if people actually report them. A minimal report format is sketched after this list.

Reward people who challenge assumptions. When the junior analyst questions the senior engineer’s call, that’s a feature, not a bug. Build a culture where “wait, are you sure?” is always okay to ask.

Treat compliance as the floor. Yes, you have to meet the requirements. But always ask what else would actually make you more secure, even if no regulation demands it.
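
On near misses: reporting only happens if it’s cheap, and it only helps if reports capture systems, not scapegoats. Here’s one possible shape for a report as a sketch; the fields are a suggestion, not a standard.

```python
# One possible shape for a near-miss report. The fields are a suggestion,
# not a standard; the point is to make reporting cheap and blame-free.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class NearMiss:
    summary: str                     # what almost happened
    caught_by: str                   # the control or role that caught it (not a name)
    contributing_factors: list[str]  # fatigue, time pressure, unclear docs...
    systemic_fix: str                # a change to the system, not a scapegoat
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


report = NearMiss(
    summary="Admin account created without MFA; caught in peer review",
    caught_by="code review",
    contributing_factors=["end-of-sprint time pressure", "runbook step buried"],
    systemic_fix="CI check that rejects admin accounts lacking MFA",
)
print(json.dumps(asdict(report), indent=2))
```

Note what’s absent: no field for who did it. That omission is the blameless part, and it’s what keeps the reports coming.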

✈️ What It Looks Like from Up Here

Aviation spent decades learning you can’t design out human error. But you can catch it. You can build cultures where it surfaces instead of hides. You can create teams that work through it together. Defense in depth. Clear communication regardless of rank. Checklists that even experts follow religiously. Honest assessment of whether you’re fit to be making decisions. Understanding that regulatory minimums are not the goal. All of it translates directly to security. The aviation industry got safer not through any single breakthrough, but through relentless commitment to learning from everything that went wrong and almost went wrong. We can do the same. We just have to actually commit to it.

Christopher Porter, Chief Executive Officer (CEO)
Christopher D. Porter is a dynamic marketing executive and visionary leader, celebrated as an early adopter of internet technologies for innovative lead-generation strategies. As CEO of one of the leading IT and cybersecurity certification training companies, he has consistently harnessed digital innovation to drive business growth and market transformation.