Bear with me for a moment, because I want to talk about Bigfoot.
Not because I think there is a large bipedal primate wandering the forests of the Pacific Northwest. I do not. But I have spent years studying why people get deceived, running phishing simulations for companies across Europe, debriefing employees who clicked links they absolutely should not have clicked, and somewhere along the way it became impossible for me to ignore how much the psychology of Bigfoot belief and the psychology of phishing victims have in common. The mental shortcuts that make a blurry photograph of a stump look like a monster are the very shortcuts that make a fake IT helpdesk email look completely legitimate. Understanding one helps you understand the other more than you might expect.
The human brain is not built to resist manipulation. It is built to find patterns, trust authority, and act quickly under pressure. Phishing attacks do not exploit stupidity. They exploit the way healthy human cognition actually works.
The Pattern Problem
Humans are extraordinarily good at finding patterns. Suspiciously good, actually. We see faces in clouds, animals in rock formations, and meaning in coincidences that are almost certainly random. Psychologists call this apophenia, the tendency to perceive connections between unrelated things. And here is the thing: this is not a bug in human cognition. For most of human history, the ability to quickly detect a potential predator in a noisy visual environment was a genuine survival advantage. Mistaking a shadow for a lion cost you nothing. Missing a lion cost you everything.
Bigfoot believers are not stupid people seeing something that obviously is not there. They are people whose very capable pattern recognition systems are doing exactly what pattern recognition systems do: finding the most plausible explanation for what they are seeing given the information they have. A large dark shape moving through trees is, in the context of North American forests, more likely to be a bear than anything else. But if someone has already heard convincing stories, has an emotional investment in the possibility, and is viewing a low-quality image, the brain is not starting from a neutral position. It is filling in gaps with what it already expects to find.
Now think about a phishing email. It arrives in an inbox already loaded with dozens of legitimate emails from IT, from HR, from senior leadership. The brain has an established pattern for what those emails look like: a logo, a certain tone, a request that fits a plausible work scenario. A well-crafted phishing email does not need to be perfect. It just needs to be close enough that the pattern recognition system fills in the rest. The logo is slightly off? The brain smooths it over. The sender domain has an extra character? The eye skips past it. We are not reading carefully. We are pattern matching, and we are very fast at it.
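The "extra character in the domain" trick can be made concrete. Here is a minimal sketch, using only the Python standard library and a hypothetical allowlist of domains, of the check that human eyes tend to skip: flag a sender domain that is nearly identical to a known-good one but does not match it exactly.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains this organization actually uses.
KNOWN_DOMAINS = {"example.com", "corp-it.example.com"}

def closest_known(domain: str) -> tuple[str, float]:
    """Return the most similar known domain and a 0..1 similarity ratio."""
    best = max(KNOWN_DOMAINS,
               key=lambda known: SequenceMatcher(None, domain, known).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Near-identical but not identical: the classic lookalike pattern."""
    _, score = closest_known(domain)
    return domain not in KNOWN_DOMAINS and score >= threshold

print(is_lookalike("examp1e.com"))  # one swapped character: flagged
print(is_lookalike("example.com"))  # exact match: not flagged
```

The point is not that this filter is production-grade (it is not), but that a machine applies the comparison every time, while the pattern-matching brain smooths the difference over.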
From the Simulation Floor
In a phishing simulation I ran for a financial services client in Amsterdam, we found that employees were significantly more likely to click links in emails that arrived between 9 and 10 in the morning, right as people were processing their overnight inboxes. The same email sent at 2pm had a meaningfully lower click rate. Context and cognitive load matter as much as content. A person managing twenty decisions before their first meeting is not reading carefully. They are triaging.
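The time-of-day effect is straightforward to measure from simulation logs. A minimal sketch with invented data, assuming each log record carries the delivery hour and whether the recipient clicked:

```python
from collections import defaultdict

# Invented log records: (hour email was delivered, clicked?)
events = [
    (9, True), (9, True), (9, False), (9, True),        # morning triage window
    (14, False), (14, False), (14, True), (14, False),  # early afternoon
]

def click_rate_by_hour(events):
    """Aggregate the click rate for each delivery hour."""
    clicks = defaultdict(int)
    totals = defaultdict(int)
    for hour, clicked in events:
        totals[hour] += 1
        clicks[hour] += clicked
    return {hour: clicks[hour] / totals[hour] for hour in totals}

print(click_rate_by_hour(events))  # {9: 0.75, 14: 0.25}
```

The numbers here are fabricated for illustration; the shape of the analysis is the real point. Segmenting click rates by context, not just by template, is what surfaces findings like the morning-inbox effect.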
Confirmation Bias Is Doing Most of the Work
Here is the uncomfortable truth about confirmation bias: it does not require you to be gullible. It requires you to have already formed a belief, which is something everyone does constantly. Once a belief is in place, the brain gets very good at finding evidence that supports it and quietly bad at processing evidence that challenges it. Psychologists do not call this stupidity. They call it normal cognition.
A dedicated Bigfoot researcher who has spent years building a case will look at a piece of ambiguous evidence and interpret it charitably. Every unexplained footprint is potentially significant. Every inconclusive photograph is worth examining further. The researcher is not being irrational given their existing framework. They are being entirely consistent with it. The problem is the framework, not the reasoning within it.
Phishing exploits the same dynamic. An employee who receives an email appearing to come from their IT department already has a strong prior belief: IT sends emails like this all the time. Password resets, system updates, security alerts. That belief functions as the framework, and the phishing email only needs to fit loosely within it. The employee is not asking whether this email could be fake. They are asking whether it fits the pattern they already expect, and the answer is usually yes. Challenging that assumption requires cognitive effort that most people are not going to spend on their fortieth email of the morning.
The Role of Emotional Investment
What makes Bigfoot belief particularly persistent is not just pattern recognition or confirmation bias in isolation. It is the emotional dimension. People want there to be something unexplained out there. The possibility of discovery is exciting. Mystery is engaging in a way that mundane explanations are not. A footprint that turns out to be a bear track is a dead end. A footprint that might be something else is a story worth pursuing.
Phishing attacks manufacture the same emotional engagement, just with different levers. Fear instead of excitement. Urgency instead of curiosity. An email telling you that your account has been compromised and you need to act immediately is not engaging your rational evaluation system. It is engaging your threat response. Your pulse goes up slightly. Your focus narrows. The goal becomes resolving the threat, not evaluating whether the threat is real. Attackers know this. Urgency language in phishing emails is not accidental. It is the core mechanism.
Authority Bias and Why Eyewitness Accounts Feel So Convincing
One of the most compelling elements of Bigfoot lore is the eyewitness testimony. Not from obvious cranks, but from people who seem credible: hunters, park rangers, police officers, military veterans. People whose professional judgment we generally trust. The reasoning goes that someone like that would not make something up, and they would know what they were looking at. This is authority bias at work, and it is a completely understandable cognitive response. We delegate trust to people whose expertise or character we have reason to respect. It saves time and is usually correct.
The problem is that authority bias is blind to the actual reliability of the source on this specific question. A park ranger is an authority on trail conditions and wildlife management. That expertise does not necessarily extend to accurately identifying a large mammal seen briefly at dusk through dense tree cover. We extend the authority credential further than it should go because differentiating between “trustworthy person” and “trustworthy person on this specific topic” requires extra mental effort we often do not invest.
This maps almost perfectly onto business email compromise. The most effective phishing attacks do not pretend to come from strangers. They impersonate the CFO, the CEO, the IT security team, or a trusted external partner. Not because those targets are easier to spoof technically, but because authority bias does a significant portion of the attacker’s work for free. An email from the CEO asking you to process an urgent wire transfer benefits from every positive prior experience you have had with that person. Your brain is not evaluating the email in isolation. It is evaluating it in the context of a trusted relationship, and that context is being exploited.
Why “Just Be More Careful” Is Useless Advice
There is a version of Bigfoot skepticism that basically amounts to “those people are just not thinking critically.” And there is an equivalent version of security awareness training that amounts to “just read your emails more carefully.” Both are useless. Not because careful thinking is unimportant but because neither approach grapples with what is actually driving the behavior.
You cannot train someone out of having a pattern recognition system. You cannot tell a person to stop being susceptible to authority or urgency. These are not personality flaws that thoughtful people have overcome. They are cognitive defaults that everyone has, including the security professionals designing the training. The researchers who study phishing susceptibility consistently find that education level, technical background, and general skepticism do not reliably predict who clicks. What predicts clicks is context: cognitive load, time pressure, relevance to the recipient’s current situation, and the quality of the attack.
This is why the most effective security awareness training programs are not built around warnings and rules. They are built around repeated, realistic experience. Simulated phishing campaigns work not because they shame people into being more careful but because they create a new pattern. The experience of almost clicking a fake link, recognizing it at the last moment or being told you missed it, introduces friction into a process that previously had none. It does not eliminate susceptibility. It adds a competing instinct: wait, could this be one of those tests?
“The goal of good security awareness training is not to make people immune to manipulation. It is to make manipulation slightly harder and slightly slower, which is often enough to interrupt the chain before damage is done.”

Nora Grace
What Actually Changes Behavior
The most interesting thing about sophisticated Bigfoot skeptics, the ones who have actually engaged with the evidence seriously rather than just dismissing it, is that they do not argue believers are wrong to trust their instincts. They argue for building in verification steps. Check the photograph metadata. Look at the scale reference in the frame. Consider the conditions under which the sighting occurred. Not because any individual instinct is untrustworthy but because instincts alone are not sufficient for a high-stakes claim.
Security works the same way. You cannot train people to stop trusting emails that look legitimate, but you can build verification habits that kick in for specific triggers. Wire transfer request? Call and confirm through a number you already have, not one in the email. Credential reset? Go directly to the system rather than clicking the link. Urgent message from leadership? Urgency is precisely when you should add a step, not remove one. These habits do not come naturally. They have to be practiced until they feel automatic, which takes repetition and a workplace that actually encourages them.
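Those triggers are few enough that they can be written down as an explicit checklist. A hypothetical rule sketch, not a real mail filter, just to show how little signal the verification habit actually needs:

```python
import re

# Hypothetical trigger rules: each maps a pattern in the message body
# to the verification habit it should prompt.
TRIGGERS = {
    r"\bwire transfer\b": "Call and confirm on a number you already have.",
    r"\b(reset|verify) your password\b": "Go directly to the system; do not click the link.",
    r"\b(urgent|immediately|right away)\b": "Urgency is the cue to add a step, not skip one.",
}

def verification_prompts(body: str) -> list[str]:
    """Return the verification habits triggered by an email body."""
    body_lower = body.lower()
    return [habit for pattern, habit in TRIGGERS.items()
            if re.search(pattern, body_lower)]

email = "URGENT: please process this wire transfer immediately."
for prompt in verification_prompts(email):
    print(prompt)
```

A real deployment would live in tooling, not in people's heads, which is exactly the argument: the habit works because it fires on a trigger, mechanically, regardless of how plausible the rest of the email looks.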
The organizations I work with that have genuinely low phishing click rates are not the ones with the scariest warning posters or the most aggressive acceptable use policies. They are the ones that have made verification feel normal. Where calling to confirm a wire transfer is standard practice, not an insult to the requester. Where flagging a suspicious email is celebrated rather than treated as a waste of the security team’s time. The culture around the behavior matters as much as the behavior itself. If people feel embarrassed to question an email from leadership, the training is fighting against something much larger than cognitive bias.
The Empathy Piece That Security Teams Usually Miss
Here is what I think gets missed in most post-incident conversations: the people who click phishing links are not failing at their jobs. They are succeeding at a different job, which is processing a high volume of communications as efficiently as possible. The fact that one of those communications was malicious is not evidence of carelessness. It is evidence that the attacker understood human cognition better than the organization’s defenses did.
When I debrief employees after a simulated phishing campaign, the ones who clicked almost always feel embarrassed. And that embarrassment is a problem, because shame is not a learning state. People who are ashamed of a mistake want to minimize it and move on, not examine it carefully and understand it. The debrief conversations that actually change future behavior are the ones that start from a position of curiosity rather than judgment. Why did this one get through? What was different about the moment you received it? What would have made you pause?
Bigfoot sightings work the same way. People who report them are not primarily making a zoological claim. They are often describing a genuinely surprising experience that they are trying to make sense of, usually something they saw briefly, at dusk, in conditions that were not ideal for clear observation. Dismissing them as credulous accomplishes nothing. Asking what they actually saw and why the brain reached for that particular explanation is far more interesting, and far more useful for understanding how perception works under uncertainty.
That is the framing I try to bring to social engineering work generally. The attacker understood something about human psychology. The job is to understand it better than they do, and to build that understanding into systems, cultures, and habits rather than expecting individuals to simply try harder.