Situational Awareness

What in the World Were They Thinking?


As I write this, the Houston area is dealing with the aftermath of a 500-year flood that has left several feet of water in areas that have never flooded before.  Some areas received 15 to 20 inches of rain in less than 6 hours, which left all of the creeks and bayous overflowing their banks and inundating residential areas, displacing several thousand people and shutting down travel in much of the area.  As I watched live television coverage of this event from my non-flooded home, I was saddened by the impact on the lives of so many, but initially struck by the “stupidity” of those who made decisions that put their lives at risk and, in a few cases, cost them their lives.  I began to try to make sense of why these individuals would make what appeared to be such foolhardy decisions.  What could they have been thinking when they drove past a vehicle with flashing lights right into an underpass with 20 feet of water in it?  What could they have been thinking when three people launched their small flat-bottom aluminum boat to take a “sight-seeing” trip down a creek that was overflowing with rushing water and perilous undercurrents, only to capsize and spend more than two hours floating in the chilly water before being rescued by the authorities?  As I reflected on it, and after my initial incredulous reaction, my conclusion was that it made perfect sense to each of them to do what they did.  In the moment, each of their contexts led them to make what seemed to me, in hindsight, to be a very foolish and costly decision.  You may be asking yourself, “What is he talking about?  How could it make sense to do something so obviously foolish?”  Let me attempt to explain.  Context is powerful, and it is the primary source we have when making decisions.  Additionally, it is individual-centric.  My context, your context and the context of the individual who drove around a barricade into twenty feet of water are all very different, but they are our personal contexts.
My context, sitting in my living room, watching TV, sipping a cup of coffee, with no pressure to get to a certain location for a specific purpose, is most likely completely different from that of the man who drove around a police vehicle with flashing lights, in a downpour, with his windshield wipers racing, on his way to check on someone he cares about who could be in danger from the rising water.  What is salient to me and what was salient to him are very different and would most likely lead to different decisions.  His decision was “locally rational,” i.e., it made perfect sense in the moment.  We will never know, but it is very likely that his context precluded him from even noticing the flashing lights of the police vehicle or the possibility of water in the underpass.  It is also possible that “human error” was present in the tragic deaths of at least 6 people during the flood, but human error is not a sufficient explanation.  We can never really understand what led to their decisions to put themselves at risk without understanding the contexts that drove those decisions.

This is what we really need to focus on when we are investigating incidents in the workplace, so that we can impact the aspects of context that become salient to our workers.  The more we minimize the salience of contextual factors that lead to risk taking, and increase the salience of contextual factors that minimize risk, the greater our opportunity to end “senseless” injury and death in the workplace, and on rain-swollen highways.  This approach will have far more positive impact than just chalking it up to “stupidity”!

Motivating Safe Performance


Have you ever observed someone acting in an unsafe manner and immediately attributed their action to a lack of motivation to perform safely?  Research, including our own, demonstrates that we tend to do exactly that more than 80% of the time; yet when we personally act unsafely, we attribute our decision to outside, situational forces rather than internal, dispositional forces over 90% of the time.  Attribution theories of motivation are explanations of how we attempt to understand our environment, including the behavior of others and ourselves, by attributing/inferring causes to behavior that we observe.  These theories are helpful in explaining how we attempt to understand performance, but the fact that we do attribute such causes to behavior is not really helpful in our personal understanding of motivation.  In fact, in many cases this tendency causes us to inaccurately infer cause and then react to the actions of others incorrectly, i.e., commit the Fundamental Attribution Error.  The attributional approach typically does not take into account many other potential causes of unsafe action, including situational/contextual contributors.  Competence theories of motivation, on the other hand, are based on the premise that individuals want (are intrinsically motivated) to interact effectively with their environments.  Psychological researchers such as Albert Bandura (Social Cognitive Theory, 2001) and Edward Deci and his associates (Self-Determination Theory, 2000) have helped us understand what really produces our motivation to perform in certain ways.  They propose that we are trying to effectively engage our environments in ways that make sense given our current understanding of the components of those environments.  Interestingly, according to Self-Determination Theory, we are primarily intrinsically motivated to be simultaneously autonomous and competent, and the more successful we are at both, the more intrinsically motivated we become.

So why is this important?  How do these theories relate to motivating people to minimize risk and work more safely?

Simply attributing internal motivational causation (attribution theories) to unsafe performance creates the opportunity to commit the Fundamental Attribution Error, which is usually negative, often wrong and often leads to blame.  Additionally, it doesn’t help us understand where the motivational state came from in the first place or how to control it.  Viewing the individual as a complex entity who is actively attempting to understand and engage his environment (competence theories) would seem to be much more fruitful.  If we “engineer” that environment (context) to increase intrinsic motivation, we have a greater opportunity to develop employees who are competent, take initiative and work more effectively together.  Consider the following.  What if we allowed participation (autonomy) in decision making about those aspects of context that are open to input?  What if we explained the reasons for safety rules, limits, etc., so that autonomy could be supported?  What if we made sure that the consequences for what turns out to be intentional rule breaking are clearly understood?  The idea is to create a work context where people adhere to safety rules and procedures not because they are coerced to do so, but because they feel autonomous and competent in doing so; their reason for doing so is their own, and they accept responsibility for doing so.  They are intrinsically motivated to be safe.  Not many people want to get hurt, so capitalizing on the intrinsic desire for safety would seem to make a lot of sense.  Deci’s research (1995) demonstrates that the more controlled people feel (i.e., the less autonomous they feel), the more likely they are to engage in risky behavior.  Isn’t it ironic that most safety programs attempt to control behavior when it is just the opposite that has been shown to motivate behaviors that lead to safety?

Distracted Driving: I Teach This Stuff and I Still Mess It Up


My oldest son just turned 15, so my wife and I started researching the different avenues for teen driver’s education available in our town.  With a handful of options, we decided to take the route of “parent-taught driver’s ed,” as it was the most convenient and cost effective, and we felt we were quite capable of taking on the task.  After all, my wife is a teacher and I research and train people about performance in complex systems.  Additionally, I developed our online training platform, which is the medium in which most of the classroom driver’s-ed learning will take place.  “We’ve got this,” I remember telling her.  As we have progressed in our program, he has become more and more capable and comfortable, not only with the rules of the road, but also with driving in our small Dallas/Ft. Worth suburb.  But just a couple of days ago I was smacked in the face by reality when we stopped at a red light and he instinctively reached into his pocket to retrieve his cell phone to read a text he had just received from his girlfriend.  How could he be so irresponsible with all he’s learned?  How could he possibly think it was okay to read texts while behind the wheel?  His response: “You always check your phone at red lights, so I just figured it wasn’t a big deal”!

Yes, I teach this stuff, and yes, I mess it up on a consistent basis when I’m not intentional about what I know to be true.

For those of you who frequently read our blogs, you know that we talk about complexity, the impact of context on performance, and how the model provided by others impacts the performance of those around them in surprisingly unforeseen ways.  You are also aware of the studies about using cell phones to talk and text and the impact that these actions have on the ability to operate a vehicle.  We have become accustomed to seeing anti-texting commercials, and many of us even live in communities that have laws, with fines attached, prohibiting the use of cell phones while driving.  Yet some of you, and some of us who teach this stuff, still glance down at the phone when we hear the ding, or even pick up the phone when that all-important call finally comes while we are driving.

A recent study shows that after a decade of car-related deaths declining year after year, a steep increase of almost 10% occurred in 2015.  Could this be an anomaly or a sign of something far more troubling?  While no data has yet pointed to any trend in automobile fatality causation, I do have my own theories and anecdotal data to share.

Smartphones have only been around for a little over a decade now, and they are getting smarter with each launch.  We all remember the blinking red light of the BlackBerry that screamed out to us, “Read me!”  Today our phones have Facebook Messenger, LinkedIn alerts, text, iMessage, email, and Bluetooth, and our cars have mobile apps and Wi-Fi hotspots.  We are constantly being alerted that somebody wants to talk to us right now.  It’s like that blinking red light on steroids.  The good news is that local governments and other organizations saw this coming years ago, so they implemented laws and launched public service announcements and roadside signage warning us of the dangers these devices present.  And as any good safety professional knows, making new rules and putting up signs that scare people works…for a while.

Where I have failed as a driver-ed instructor, and father, is that I kept my phone within reach.  It was in my pocket, sitting in my cup holder, or holstered on that clamp attached to my A/C vent that I bought at Best Buy so that I could use the navigation app twice a year.  Based on what we know about complexity, context, Human Factors, Human Performance, or whatever current science we want to throw out there, the answer is easy.  Turn the phone off, put it in the glove compartment and drive.  It doesn’t take any research or a consultant to come up with this idea; in fact, it’s something that a lot of people have figured out already.  Take the blinking red light, the text ding, or the “silent” buzz away from our attention and our attention remains on the road where it belongs.

I will leave you with this last thing.  Again, nothing groundbreaking here, but a fun example of how those devices designed to make our lives so much better have affected our performance in fascinating ways.  A recent study at Western Washington University showed that people walking and talking on their cell phones noticed a clown on a unicycle directly next to them only 25% of the time, while those walking and not on a phone noticed the clown over half of the time.  More impressively, those walking in pairs noticed the clown 75% of the time.  If you are interested, here is a link describing this research:  https://www.youtube.com/watch?v=Ysbk_28F068

The Safety Switch℠


As our world and workplaces grow in complexity, and as failures in these complex systems become increasingly calamitous, how do we take the insights that have been given to us by so many dedicated and brilliant individuals and make things better for the people who, whether we want to think about it or not, will suffer and die if we don’t adapt?  It’s a heavy question, and one that’s been on our minds for a while.

You might not have known, but between blog posts and our day jobs, we’ve been writing a book.  In fact, we are now in the final phases of writing The Safety Switch℠, which aims to tie together our research and the priceless contributions made by scholars and practitioners from a wide range of disciplines.

We thought it was about time to introduce the premise.

The Safety Switch℠ is a way of thinking about how we can adapt to a new world — one in which organizations are understood as complex systems, and the ever-increasing complexity of these systems presents new challenges.

The “Switch” happens at two levels.

First, it is a micro-level, personal, in-the-moment switch between two mental Modes.  Our default setting, Mode 1, is powered by mental shortcuts (called “heuristics”) and distortions (called “biases”) and often leads us to fix upon human error as the cause of safety problems.  While we may be “wired” to stay in this default mode, we can deliberately switch to a second Mode.  When in this Mode 2, we take a rigorous, effortful, sometimes counterintuitive, and often winding path to understand and address persistent safety challenges.

Second, there is a macro-level, organizational switch.  It involves activating within the organizational system an inherently dynamic layer of protection — its people — positioning humans as a unique and requisite response to growing complexity.

But here’s the catch: You can’t flip the second switch until you flip the first.

We have to learn when and how to switch from Mode 1 to Mode 2 in the moment and on the fly if we are going to generate the capacity to flip the second switch, and energize within our organizations this vital, dynamic and fully integrated layer of protection — the people.

Overcoming Age Stereotypes: Older Workers + Younger Workers = Better Decisions


Neurological research has helped us better understand some of the developmental and age-related changes in cognitive functioning and performance, including risk tolerance (see Complexity, Age and Performance).  We have proposed that these findings provide additional support for the need to have older and younger workers learn to work together so as to capitalize on their age-related strengths, i.e., older workers + younger workers = better decisions.  The problem is that there are also very strong age-related stereotypes that inhibit the effectiveness of this suggestion.  Therefore, we need to understand the role that stereotypes play in the interactions of various age groups in the workplace so that we can create environments where negative stereotypes are minimized or overcome.  Stop for a moment and describe the characteristics of people who are 20 years of age, 40 years of age, 60 years of age, and 75 years of age.  If you are honest with yourself, you will have some overlap in your descriptors, but you will also have differences, and you will find that there are more negative characteristics identified for age groups to which you do not currently belong.  If you think about it, you will also most likely find that your descriptions don’t accurately describe every person that you know within each of those age groups.  We all function with age-related stereotypes, but we also know that there are individual differences.  The problem is that until we know a given individual, our stereotypes tend to guide our perceptions and expectations of that person.  In fact, when our expectations are strong, we will overlook evidence that invalidates our stereotypes and foster what is called a “self-fulfilling prophecy” (SFP) through our actions toward the other person.
In other words, our stereotypes will tend to override our search for individual differences, or exceptions, and simultaneously help create the behavior that we expected in the other person.  For example, research has demonstrated that performance improves when it follows an interaction driven by a positive stereotype but decreases when it follows an interaction driven by a negative stereotype (e.g., Hausdorff et al., 1999).  Likewise, the behavior resulting from the interaction will strengthen our stereotype, and if that behavior is perceived negatively, e.g., as inflexible or know-it-all, we will likely become less willing to work with or listen to the other person, which makes our objective of “better decisions” more difficult to attain.

So how do we overcome the negative impact of our stereotypes?

We have been helping supervisors and managers deal with the impact of their negative stereotypes for the past 30+ years, and the process, while simple, requires understanding and effort.  First, we help them evaluate and understand their stereotypes, especially their negative stereotypes, and the role that they personally play in creating SFPs.  Second, we help them think about specific individuals with whom they interact, and then have them honestly evaluate the role of negative stereotyping on both their expectations of the person and the impact of their interactions on the person’s behavior.  In other words, we help them understand that because everyone is different, we need to look for those individual differences rather than viewing everyone of a certain age as the same.  Third, we have them evaluate the positive aspects of these individuals, especially those characteristics that can benefit other team members and the organization.  Finally, we have them commit to a regular review of their stereotypes and an evaluation of how those stereotypes are impacting their relationships.  Our objective is to improve interactions and relationships by minimizing the impact of negative age-related stereotypes.  If we are going to create “better decisions” through the interaction of older and younger workers, we first need to positively impact the stereotypes that currently lead to reduced respect and willingness to listen to, learn from and depend on each other.

Complexity, Age and Performance


We live and work in environments that are continuously increasing in complexity, which puts an even greater strain on our ability to make quick, accurate decisions.  (See Human Error and Complexity: Why your safety “world view” matters).  Among the many important considerations for organizational leaders is how age affects people’s decisions and performance in these complex environments. Though the cognitive and socio-emotional skills of younger workers (under 25 years of age) are still developing (see Protecting Young Workers from Themselves), this group is at its peak performance with respect to speed of information processing and physical abilities…including vision, hearing, strength, flexibility and reaction time.

On the other hand, the opposite is generally true of the 55+ age group, though there are individual differences.  Aging tends to bring with it a decline in just about all of these physical abilities, as well as some cognitive abilities.  (Note that we are talking about “normal” aging, absent significant pathology such as Alzheimer’s disease and dementia.)

While research has demonstrated that aging has little or no effect on general intelligence, it can impact other aspects of cognition.  The aging brain is slower to shift attention to new stimuli in the environment and slower to recall uncued relevant information.  Additionally, short-term (“working”) memory functions less efficiently with age.  While an older worker might make fewer mistakes in decision making, he or she will normally require more time to make those decisions.  So when a task is complex and requires manipulating information or ignoring irrelevant information, there may be age-related decline in performance (e.g., Balota et al., 2000), especially when the older person is under pressure to perform.  In short…

Complexity + Time Pressure = Kryptonite for the Aging Brain

Left at that, it would be bad news for the aging worker in our increasingly complex and fast-paced world.  HOWEVER, as with nearly everything in life, there are more pieces to this puzzle.  Two of these pieces are experience and contextual cues.  Research has shown that older adults tend to perform well on recognition tasks where contextual cues are present.  This could help explain their lower incident rate relative to younger workers, since older adults recognize and process contextual cues effectively because of their past experience.  They are more likely to recognize a hazard as a hazard because they have experienced it in the past.  In short…

Contextual Cues + Experience = The Great Equalizer

Adolescents and young adults don’t have the experience with contextual cues that older adults do, so they are less likely to recognize them and respond to them.  Younger workers’ higher speed of processing is offset by their lack of experience with contextual cues…and vice versa with older workers. These findings provide additional support for the need to have older and younger workers learn to work together so as to capitalize on their age-related strengths.  In short…

Older Workers + Younger Workers = Better Decisions

Unfortunately, there are common and misguided stereotypes about both younger and older workers, which can keep us from honestly exploring the many ways that they may contribute to organizational success.  Understanding the truth about the developing brain of younger workers and the aging brain of older workers may just be a key to thriving in our increasingly complex world.

Protecting Young Workers: Bridging the Age Gap in the Workplace


In a recent blog (Protecting Young Workers from Themselves) we discussed some of the reasons for the relatively high risk tolerance of young (15-24 years old) workers compared with older workers.  We concluded that, while cortical structures are still developing during this period, this alone does not explain why this age group is at higher risk of engaging in unsafe actions and suffering the consequences of those actions.  The research demonstrates that the less developed limbic system, which is involved in both social and pleasure-seeking behavior, can at times override the logical capabilities of young workers and stimulate them to engage in risky behavior.  Because educational programs designed to provide young workers with the knowledge necessary to effectively interpret their contexts have not proven overly successful, we proposed that one way to impact their risk taking in the workplace is to remove social stimuli such as peers from their work teams and replace them with older, more risk-averse and experienced workers, especially those in the 55+ age group.  We suggested that these older workers, who understand and can interpret the various workplace contexts, could provide mentoring and coaching for the younger workers.  This, however, introduces another set of issues that must be addressed if this approach is to have the desired impact.  These issues include the perceptions/stereotypes/expectations each cohort holds of the other and the skills necessary to impact those perceptions/stereotypes/expectations.  We all have a tendency to focus on the actions and traits of other people that fit with our expectations and stereotypes of the groups to which that person belongs, including the person’s age.  We also tend to behave toward that person based on what we perceive them doing, and they do likewise to us.
The problem is that what we “see” is driven by what we “expect to see,” which often results in a phenomenon known as the “Self-Fulfilling Prophecy” (SFP), which in turn reinforces our stereotypes and thus our future interactions.  For example, an older worker observes a younger worker engage in some risky behavior, and because the older worker views younger workers as thinking they are “bullet proof,” he immediately criticizes the younger worker for his failure to “think.”  The younger worker, who did what he thought was the right thing in the situation, becomes defensive toward the “judgmental/rude” older worker and “smarts off” to him.  This causes the older worker to become defensive, and the cycle continues, reinforcing the SFP and strengthening the stereotypes held by both individuals (see “Your Organization’s Safety Immune System (Part 2): Strengthening Immunity” for a more in-depth discussion of defensiveness).

The question is: how do we utilize older workers as coaches for younger workers without the negative impact of the SFP?  The key is to change the expectations that both age groups have of each other, and this requires training.  Facilitated, interactive training programs that address the common impact of the SFP, help people of all ages understand the role of individual differences in performance, teach people how to deal with the Defensive Cycle™, and give them the opportunity to interact successfully with each other tend to produce environments where both older and younger workers can capitalize on the strengths that each brings to the table.  While younger workers bring less socioemotional maturity and experience, they also bring creativity, physical strength and a fresh view of the work context.  Older workers bring experience and a broader understanding of the work context that can help younger workers make better, less risky decisions.  The key is mutual understanding and mutual respect, which come from less stereotyping, less defensiveness and more teamwork.

Are Safety and Production Compatible?


Can we all agree that people tend to make fewer mistakes when they slow down and, conversely, more mistakes when they speed up?  And that people tend to increase their speed when they feel pressure to produce?  Personal experience and research both support these two contentions.  Deadlines and pressure to produce literally change the way we see the world.  Things that might otherwise be perceived as risks are either not noticed at all or are perceived as insignificant compared to the importance of getting things done.

Pressure and Perception

A famous research study by Darley & Batson (1973), sometimes referred to as “The Good Samaritan Study”, demonstrated the impact of production pressure on people’s willingness to help someone in need:

Participants were seminary students who were given the task of preparing a speech on the parable of the Good Samaritan — a story in which a man from Samaria voluntarily helps a stranger who was attacked by robbers.  The participants were divided into different groups, some of which were rushed to complete this task.  They were then sent from one building to another, where, along the way, they encountered a shabbily dressed “confederate” slumped over and appearing to need help.  The researchers found that participants in the hurry condition (production pressure) were much more likely to pass by the person in need, and many even reported either not seeing the person or not recognizing that the person needed help.

Even people’s deeply held moral convictions can be trumped by production pressure, not because it has eroded those convictions, but because it makes people see the world differently.

The Trade Off

One reason for this is that many of our decisions are impacted by what is known as the Efficiency-Thoroughness Trade-off (ETTO) (Hollnagel, 2004, 2009).  It is often impossible to be both fast and completely accurate at the same time because of our limited cognitive abilities, so we have to give in to one or the other.

When we give in to speed (efficiency) we tend to respond automatically rather than thoughtfully. We engage what Daniel Kahneman (see Hardwired to Jump to Conclusions) refers to as “System 1” processing — we utilize over-learned, quickly retrieved heuristics that have worked for us in the past, even though those approaches cause us to overlook risks and other important subtleties in the current situation.  This is how we naturally deal with the ETTO while under pressure from peers, supervisors or organizational systems to increase efficiency.

Conversely, when we are not under pressure to increase efficiency, but, rather, pressure to be completely accurate (thorough), we have a greater tendency to engage what Kahneman calls “System 2” processing — we are more thorough in how we manage our efforts and account for the factors that could impact the quality of what we are producing.  In these instances, we will notice risks, opportunities and other subtleties in our environments, just as the “non-rushed” participants did in the “Good Samaritan Study.”

So what is the point?

Most of our organizations are geared to make money, so efficiency is very important; but how do we bolster the thoroughness side of the tradeoff to support safety and minimize undesired events?  To answer this, we have to take an honest look at the context in which employees work.  Which is more significant to employees, efficiency or thoroughness?  And what impact is it having on decision making?

Some industries (e.g., manufacturing) have opted to streamline and automate their processes so that this balance is handled by interfacing humans more effectively with machines.  Other industries can’t do this as well because of the nature of their work (e.g., construction).  We worked with a client in this latter category that had a robust safety program, experienced employees and well-intentioned leaders, but which was about to go out of business because of poor safety performance…and it had everything to do with the Efficiency-Thoroughness Trade-off.  The contracts that they operated under made it nearly impossible to turn a profit unless they completed projects ahead of schedule.  As they became more efficient to meet these deadlines, the time-to-completion got shorter and shorter in each subsequent contract, until “thoroughness” had been edged out almost entirely.  For this company, preaching “safety” and telling people to take their time was simply not enough to outweigh the ever-increasing, systemic pressure to improve efficiency.  The only way to fix the problem and balance the ETTO was to fix the way that contracts were written, which was much more challenging than the quick and illusory solutions that they had originally tried.

Every organization is different, so balancing the ETTO will require different solutions and an understanding of the cultural factors driving decision making at all levels of the organization.  Once you understand what is salient to people in the organization, you can identify changes that will decrease the negative impact of pressure on performance.

Protecting Young Workers from Themselves


Looking back at your younger self, did you ever do something that now seems foolish and excessively risky? We have talked about the phenomenon of “local rationality” several times in the past, which is how our reasoning and decisions are heavily influenced by our immediate context. We are all subject to its impact, including yours truly (see “A Personal Perspective on Context and Risk Taking”), but perhaps even more so when we are young, especially between the ages of 15 and 24. The data are clear.  Adolescents and young adults are more likely to engage in risky behaviors than are adults (especially older adults) and workplace incidents are more frequent among this age group.

So is it because young workers are less experienced, poorer decision makers or inherently more risk tolerant? The answer is likely “yes” to all of these questions, but it is more complicated than that. Understanding why young workers do risky things requires an understanding of the neural mechanisms that are at play in these types of situations. While it's a heady topic (forgive the pun), understanding neural development can be of extreme importance when attempting to protect our younger workers.

It has been suggested that the adolescent brain’s (cortical) structures, those involved in logical reasoning and decision making, aren’t completely developed, which contributes to risky decisions and behaviors. While it is true that the frontal cortex continues to develop into young adulthood, research demonstrates that, by age 15, logical reasoning abilities have already developed to adult levels. In fact, 15-year-olds are equal to adults at perceiving risk and estimating their vulnerability to that risk (Reyna & Farley, 2006).

In light of this type of evidence, Steinberg (2004; 2007) has proposed that risk taking is the interaction of both logical (cognitive) reasoning and psychosocial factors such as peer pressure. Unlike the logical reasoning abilities that have developed by age 15, the psychosocial capacities that impact logical reasoning do not fully develop until the mid-twenties and therefore interfere with real-world decision making and risk aversion. In other words, the mature decision making processes of adolescents and young adults may be interfered with by the immature psychosocial processes of this group, and reasoning only shows maturity when these psychosocial factors are minimized...for example, when there are no peers around to pressure them.

Additionally, the limbic system, which is integral to socioemotional processing and is also the center for experiencing pleasure, is less developed and highly sensitive in adolescence.  Because of this, adolescents will put themselves in high-risk situations in the hope of experiencing the “high” that comes from a dopamine rush. Even though the frontal cortex (executive function) is more advanced, the “thrill” that comes from the risk can overpower the logical functions of the brain and lead to risk taking, especially under stress or fatigue. In other words, at this age, the attraction to rewards causes young adults to do exciting and perhaps risky things, while their poor self-control makes it hard for them to slow down and think before acting, even when they know that the risk is present.

So what does this mean for protecting this age group? According to Steinberg (2004), attempts to reduce risk taking in this group by improving their knowledge, attitudes or beliefs have generally failed. Changes to their decision making contexts, such as removing peers from the team and having older adults observe them, have had a much greater impact on reducing risk taking behaviors.  Rearranging teams so that young workers are not with their peers minimizes the impact of negative psychosocial factors on their decisions and is a first step in protecting young workers from their own developing brains.  Additionally, teaming young workers with older workers, who have been trained to observe and effectively intervene in their younger counterparts' unsafe performance, will also reduce incidents among this age group. It is, however, very important that mutual respect be nurtured so that coaching does not trigger defensiveness.  Creating contexts that minimize the impact of negative psychosocial factors on logical decision making is one way to protect young workers from themselves.

Authority Pressure, Obedience and Organizational Culture

obedience.jpg

In a recent blog we discussed Peer Pressure, Conformity and Safety Culture.   As with peer pressure, authority pressure and the resulting obedience can be either good or bad.  It is hard to imagine a functioning society without obedience to police officers, or successful organizations without obedience to supervisors.  It is also not hard to imagine the negative impact of power-hungry, authoritarian police or overzealous, production-oriented supervisors. The study of obedience to authority has its roots in the famous research of Stanley Milgram (1963).  His research was stimulated by the Nazi atrocities of WWII.  The question he attempted to answer was…how could seemingly moral people follow instructions to kill innocent civilians simply at the command of a superior officer?  In his experiments, a series of subjects were required to “administer” electric shocks to a confederate when the confederate failed to answer a question correctly.  In reality no shock was actually administered, but the test subjects were unaware of this and thought that they were administering increasingly powerful shocks to the confederate.  If the test subjects balked at administering the shocks, they were directed/commanded by the experimenter (in a white lab coat) to continue.  The “shocks” began at 15 volts and progressively increased to a maximum of 450 volts, which could in reality have killed the confederate if actually administered.  The results indicated that a majority (65%) of test subjects went all the way up to the maximum shock when directed to do so by the authority figure.  Many of the test subjects showed signs of distress, indicating that they did not agree with the directive, but the majority continued anyway.

Perhaps even more concerning is more recent research indicating that even having a resistant ally did not stop others from being obedient to authority (Burger, 2009).  The power of authority pressure can be extreme.  While the Milgram studies focused on the negative effects of bad authority pressure, obedience that leads to prosocial behavior ultimately contributes to culture and organizational success.  It is difficult to achieve success in social groups, whether societies or organizations, without obedience.  Understanding the powerful influence that leaders have on the performance of their employees, establishing cultural norms, and developing the leadership skills that lead to desired performance can have a profound impact on how those leaders lead and on how their employees respond when pushed to perform in an undesired manner, whether that performance relates to production, ethics or safety.

Overcoming the Bystander Effect

Initiative.jpg

Research and personal experience both demonstrate that people are less likely to intervene (offer help) when there are other people around than when they are the only person observing the incident. This phenomenon has come to be known as the Bystander Effect, and understanding it is crucial to increasing intervention into unsafe actions in the workplace. It came to light following an incident on March 13, 1964, when a young woman named Kitty Genovese was attacked by a knife-wielding rapist outside of her apartment complex in Queens, New York. Many people watched and listened from their windows for the 35 minutes that she attempted to escape while screaming that he was trying to kill her. No one called the police or attempted to help. As a matter of fact, her attacker left her on two occasions only to return and continue the attack. Intervention during either of those intervals might have saved her life. The incident made national news, and it seemed that all of the “experts” felt that it was “heartless indifference” on the part of the onlookers that was the reason no one came to assist her. Following this, two social psychologists, John Darley and Bibb Latane, began conducting research into why people failed to intervene. Their research became the foundation for understanding the bystander effect, and in 1970 they proposed a five-step model of helping in which failure at any step could result in a failure to intervene (Latane & Darley, 1970).

Step 1: Notice That Something Is Happening. Latane & Darley (1968) conducted an experiment in which male college students were placed in a room either alone or with two strangers. They introduced smoke into the room through a wall vent and measured how long it took for the participants to notice the smoke. Students who were alone noticed the smoke almost immediately (within 5 seconds), but those in groups took four times as long (20 seconds) to notice it. Just being with others, as when working in teams in the workplace, can increase the amount of time that it takes to notice danger.

Step 2: Interpret the Meaning of the Event. This involves understanding what is a risk and what isn’t. Even if you notice that something is happening (e.g., a person not wearing PPE), you still have to determine that this creates a risk. Obviously, knowledge of risk factors is important, but when you are with others and no one else is saying anything, you might think that they know something that you don’t about the riskiness of the situation. In fact, they may be thinking the same thing (pluralistic ignorance), and so no one says anything. Everyone just assumes that nothing is wrong.

Step 3: Take Responsibility for Providing Help. In another study, Darley and Latane (1968) demonstrated what is called diffusion of responsibility: as more people are present, each person assumes less responsibility, and so any one person is less likely to intervene. When a person is the only one observing the event, they have 100% of the responsibility; with two people, each has 50%; and so forth.
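The responsibility split described above can be sketched with a toy calculation. The even 1/n split below is an illustrative simplification of the idea, not a quantitative claim from the original study:

```python
# Toy model of diffusion of responsibility: perceived responsibility
# is split evenly among the people observing an event.
def perceived_responsibility(n_observers: int) -> float:
    """Naive 1/n share of responsibility felt by each observer."""
    if n_observers < 1:
        raise ValueError("need at least one observer")
    return 1.0 / n_observers

for n in (1, 2, 5, 10):
    print(f"{n} observer(s): {perceived_responsibility(n):.0%} each")
# 1 observer(s): 100% each ... 10 observer(s): 10% each
```

The point of the sketch is simply that the felt share of responsibility shrinks quickly as the crowd grows, which is why a lone observer is the most likely to act.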

Step 4: Know How to Help. When people feel competent to intervene, they are much more likely to do so than when they don’t. Competence engenders confidence. Cramer et al. (1988) demonstrated that nurses were significantly more likely to intervene in a medical emergency than were participants without medical training. Our research (Ragain et al., 2011) also demonstrated that participants reported being reluctant to intervene when observing unsafe actions because they feared that the other person would become defensive and they would not be able to deal with that defensiveness. In other words, they didn’t feel competent to intervene successfully, so they didn’t intervene.

Step 5: Provide Help. Obviously failure at any of the previous four steps will prevent step 5 from occurring, but even if the person notices that something is happening, interprets it correctly, takes responsibility for providing help and knows how to do so successfully, they may still fail to act, especially when in groups. Why? People don’t like to look foolish in front of others (audience inhibition) and may decide not to act when there is a chance of failure. A person may also fail to act when they think the potential costs are too high. Have you ever known someone (perhaps yourself) who decided not to tell the boss that he is not wearing proper PPE for fear of losing his job?

The bottom line is that we are much less likely to intervene when in groups, for a variety of reasons. The key to overcoming the Bystander Effect is twofold: 1) awareness and 2) competency. First, simply knowing about the Bystander Effect, and that we are all wired to fall victim to it, makes us less likely to do so. Second, training our employees in risk awareness and intervention skills makes them more likely to identify risks and actually intervene when they recognize them.
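The five steps above behave like a chain of gates: help occurs only if every step succeeds. A minimal sketch of that structure (the step labels are paraphrased from the model, and the code itself is illustrative):

```python
# Latane & Darley's five-step helping model: failure at any
# step in the chain prevents intervention.
STEPS = [
    "notice the event",
    "interpret it as a risk",
    "take responsibility",
    "know how to help",
    "decide to act",
]

def intervenes(outcomes: dict) -> bool:
    """True only if every step in the chain succeeded."""
    return all(outcomes.get(step, False) for step in STEPS)

# Everything goes right except taking responsibility (diffusion
# of responsibility in a group): no intervention occurs.
outcomes = {step: True for step in STEPS}
outcomes["take responsibility"] = False
print(intervenes(outcomes))  # False
```

Framing the model this way makes the practical lesson obvious: training that strengthens only one step (say, risk recognition) still leaves four other places where intervention can fail.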

The Brain Science of Human Performance: Part 2

Nervous-System.png

In our last post, "The Brain Science of Human Performance", I described how three inherent functions of the brain affect the performance of people in very real ways.  These three functions are problem solving, automation, and generalizing.  I also introduced another mechanism of the brain that can inhibit performance: cognitive biases.  In this follow-up, I will propose a way to overcome the cognitive biases and use the three functions in a strategic manner to drive good performance. As I detailed before, our brains take in an enormous amount of data when we are trying to solve a new and/or difficult task.  These data comprise the many factors that we call our "context".  The most salient (important) and obvious factors create a feeling of what makes sense in that moment, which is referred to as "Local Rationality".  Once we complete the task, and it seems to be successful, we eventually automate the process and it becomes part of our normalized routine. Then, without even realizing it, we assume that if the process worked in that case, it must be the right thing to do in other, similar cases; this is where the "generalizing" comes into play.  While this may seem like an inherent flaw, those who understand this process can actually use it to create better performance.  We know that our brains kick in when we have to start processing new context.  If we can identify the context that was previously in place (i.e., that created a moment of local rationality for performing in a flawed way), we can change that context to be more conducive to better performance.  For example, an operator at a manufacturing facility has found a way to reach around a guard and remove product that has become lodged in the machinery.  He doesn't perform lockout/tagout (LOTO) because the main power source is across the facility and it takes more time to walk over there and lock and tag than it does to perform his work-around.
He also knows just where to insert his arm to reach around the guard and pull out the product.  He's not the only person doing this; many other operators have been performing the task that way in this facility for years.  In fact, it's just how they do things around there; after all, nobody has ever been hurt doing it this way, and they have certain levels of production that they must maintain to keep their supervisors off their backs.  While that may seem like a very mundane and simple example of what happens in countless facilities every day, it is actually rooted in an incredibly complex cognitive system.  While most of you can see an immediate fix or two (move the power source and create a better guard), let's understand how that actually affects the brain.  If we are able to get budget approval (sometimes difficult) to move the power source and fabricate a better guarding system, then we would have a new and salient context.  If the operator can't reach through the guard, then he would be required to remove the guard, so removing the guard becomes the logical, but time-consuming, thing to do.  If, however, de-energizing the machinery is easier and requires less time, then it becomes far more likely that he will actually do that, not because he's lazy but because we've just impacted a cognitive bias that I'll explain later.  Once this context is changed, the cognitive automation stops and we move back to problem solving.  Based on the new context, a different way of doing things becomes locally rational, and once that new and better way of performing the task is successful, it will become automated and generalized.

Unfortunately, our work isn't yet complete; we also have to deal with those pesky cognitive biases (distortions in how we perceive context).  I mentioned above that a person may choose to skip LOTO because it takes more time to walk across the facility than to perform the actual task.  This is rooted in a cognitive bias called "unit bias", in which our brains are focused on completing a single task as quickly and efficiently as possible.  Or how about the "bandwagon effect", the tendency to believe things simply because others believe them to be true.  There is also "hyperbolic discounting", the tendency to prefer a more immediate payoff over a more distant one (completing a task vs. performing the task in a safe way), and the list goes on.  To overcome these cognitive biases we must first become aware that they exist.  Our brains are wired in a way that makes these biases a core function.  With that awareness, we are actually less likely to fall victim to them.  When we fail to develop it, we are falling victim to yet another cognitive bias, the "bias blind spot".
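Hyperbolic discounting is commonly modeled with Mazur's formula, V = A / (1 + kD): a payoff of size A delayed by time D keeps only subjective value V, with k an individual discount rate. The numbers below are purely illustrative assumptions, but they show how a large-but-distant payoff (avoiding a possible injury) can be outweighed by a small immediate one (finishing the task now):

```python
# Mazur's hyperbolic discounting model: V = A / (1 + k * D).
# The values of amount, delay and k below are illustrative assumptions.
def discounted_value(amount: float, delay: float, k: float = 0.1) -> float:
    """Subjective value of a reward `amount` received after `delay`."""
    return amount / (1 + k * delay)

finish_now = discounted_value(amount=10, delay=0)       # 10.0 (immediate)
avoid_injury = discounted_value(amount=100, delay=365)  # ~2.7 (distant)
print(finish_now > avoid_injury)  # True: the small immediate payoff wins
```

This is why preaching about long-term consequences often loses to the pull of "just getting it done": the distant payoff is steeply discounted at the moment of decision.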

So what is the take-away from all of this?  Our brains are wired to function as efficiently as possible.  One of the ways we do this is to automate decision making and performance to maximize efficiency.  Our decisions are driven by our contexts and the sometimes distorted way that we view that context.  If you want to change unsafe performance you have to change the context and the way we view our context so that it becomes locally rational to perform in a safe manner.  If we don't change the context we will continue to get the same performance we have always gotten because that is just the way our brains do it.


Sorry, I Just Forgot!

remember.jpg

Do you ever have trouble remembering someone’s name, or a task that you were supposed to have accomplished but didn’t, or maybe how to safely execute a procedure that you don’t do very often? I know…you can’t remember! Well, if you do forget, then you are perfectly normal. Forgetting is a cognitive event that everyone experiences from time to time, but why? What causes us to forget, and is there anything we can do about it? The bottom line is that when we forget, we have either failed to encode the information into long-term memory (LTM), which means we don’t have the information stored in the first place, or we have failed to retrieve it effectively. The failure to remember the name of someone we have just met is probably an encoding failure: we don’t move the person’s name from working memory to LTM, and it just disappears or gets knocked out because of the short-term nature of working memory. To get it into LTM we have to “elaborate” on the information in some way, maybe with a rhyme, or rehearsal, or some other mnemonic technique. The problem is that most of us either don’t expend the effort needed to transfer information, like the names of people we probably won’t meet again, to LTM, or other information that comes in right after we hear the name interferes with the transfer. But what about information that is important, like a meeting that we scheduled for 10:00 AM next Monday with a coworker about an important project, or wearing your safety glasses when using a grinder in your home workshop? Both are important but might require different assistance to avoid forgetting. Maybe you put the meeting on your calendar but didn’t create a reminder, because this is an important meeting and you will certainly not forget to check your calendar Sunday night.
But you were busy watching Sunday Night Football and didn’t check your calendar, and when you got a call from your coworker at 10:10 on Monday morning asking why you weren’t in the meeting, you were totally shocked that you hadn’t remembered the event. Maybe you began operating your grinder without putting on your safety glasses because the glasses weren’t readily available. These types of retrieval failures are most likely caused by something that impacts us all: interference at retrieval. There has been a lot of research into the effects of interference on memory, both at encoding and at retrieval, and the evidence is pretty clear: retrieval is cue dependent (a context effect) in that it is stimulated by hints and clues from the external and internal environment (i.e., our context). If the salient cues that were present at encoding are also present at retrieval, then you are less likely to forget, i.e., to have a retrieval failure. The more similar the context at encoding and retrieval, the greater the chances of remembering. Interference from dissimilar cues, like the report that you started working on at 8:00 AM on Monday when you got to work, increases the chances of forgetting the meeting. So does not having safety glasses readily available and obvious on the grinder. The way we can capitalize on the strengths of our brains and overcome their shortcomings is to better understand how they work. In the case of the meeting, creating cues that will be present at both encoding and retrieval is very helpful. Creating a reminder when putting the event on your calendar and then experiencing that same reminder cue before the meeting, or putting the meeting on your to-do list and then visualizing your to-do list at the beginning of the day, are things that capitalize on our brain’s strengths and help avoid its weaknesses. But what about remembering to wear your safety glasses when operating a grinder? Something as simple as hanging safety glasses on the grinder switch can help.
Also, research has clearly demonstrated that emotional cues tied to information at encoding increase the chances of accurate retrieval. Creating a visual image of an eye injury or hearing/reading a vivid story of a real grinder related eye injury will increase the chances that simply seeing the grinder will cause you to remember to put on your safety glasses. The bottom line is that the more we understand how we function cognitively, the better able we are to create contexts that help us remember and succeed.

Why Does Context Matter?

person-in-middle-of-arrows-to-make-a-choice.jpg

If you’ve been reading our blogs for some time, you know that we center our approach to human performance around the idea of “context”.  Context is at the heart of the science of Human Factors, also referred to as “ergonomics”.  Human Factors involves understanding and integrating humans with the systems that they must use to succeed, and context is central to that understanding.  To say that we are a product of our environment is accurate, but far too simplistic for those attempting to be more intentional in changing performance.  A practical way to look at context is to think of the world around us as composed of pieces of information that we must process in order to successfully interact with our environment.  These pieces of information include other people, physical surroundings, weather, rules, laws, timing, and on and on. The breakdown in this process comes when it is time for us to crunch that data and react to it.  Our brains, at the time of this writing, still have an edge on computers in that we can intentionally take in data rather than passively waiting for something else to give it to us, and we can then decide how we behave with respect to that data, whereas a computer is programmed to behave in predictable ways.  However, at times, that unpredictability can also be a weakness for humans.

The two most glaring weaknesses in processing the data are topics that we have written about recently (Hardwired Blog and Cognitive Bias Blog).  The first of these can be explained by staying with our computer analogy.  For those of you who understand computer hardware, you would never spend your money on a new computer that has a single-core processor, which means it can only process one job at a time.  While our brains aren’t exactly single-core processors, they are close.  We can actually do two jobs at a time, just not very well, and we bounce back and forth between those jobs more than we actually process them simultaneously.  Because of this, our brains like to automate as many jobs as possible in order to free themselves up to process when the time comes.  This automatic (System 1) processing impedes our more in-depth System 2 processing, and while necessary for speedy success, it can also lead to errors due to the failure to include relevant data.  In other words, while living most of our lives in System 1 is critical to our survival, it is also a weakness, as there are times when we don’t shift into System 2 when we should; we stay in automation.  Unfortunately, we are also susceptible to cognitive biases, or distortions in the way we interact with the reality of our context.  You can read more about these biases (here), but just know that our brains have a filter on how we take in the data of our context, and those distortions can actually change the way our brains work.

So what are some examples of how context has shaped behavior and performance?

- Countries that have roundabouts (or traffic circles) have lower vehicle mortality rates because the accidents that occur at those intersections are sideswipes rather than T-bones.

- People who live in rural areas tend to be more politically conservative, and those in urban areas tend to be more politically liberal. One explanation is that those living in areas of lower population density tend to be more self-reliant, while those living in areas of higher population density rely more on others, in particular on government services.

- People who work in creative fields (artists, writers, musicians, etc.) are more creative when they frequently change the environment where they do their work. The new location stimulates the executive center of the brain.

- Painting the holding facilities of people arrested under the influence of alcohol a particular shade of pink has proven to lower violent outbursts. *Read the book “Drunk Tank Pink”, it’s genius.

- A person who collapses in the street due to acute illness is less likely to be offered aid by others if that street has heavy foot traffic. The fewer people around, the more likely it is that one of them will provide aid.

- As a hiring manager, I’m more likely to hire a person whose name is common and which matches my age expectation.

- Schoolyard fights increase in the springtime, when the wind blows harder, causing the children to become irritable.

These are all examples of how the context around us can change our behaviors and performance.  If we can start looking at our context in more intentional ways and engineering it to be more conducive to high performance, we will ultimately be better at everything we do, at work and home.

Just Pay Attention and You Won’t Get Hurt!

baseball.jpg

I have been thinking lately about the role of “attention” in personal safety.  I can’t tell you how many times I have heard supervisors say…“He wouldn’t have gotten hurt if he had just been paying attention.”  In reality, he was paying attention, just to the wrong things.  Let me illustrate this with a brief observation.  Two of my grandsons (ages 4 and 6) play organized baseball.  The 4-year-old plays what is called Tee-ball, so named because the coach places the ball on a chest-high tee and the batter attempts to hit it into the field of play, where players on the opposing team man the normal defensive positions.  It is my observation of the players on defense that has helped me understand attention in greater depth.  Most batters at this age can’t hit the ball past the infield, and most are lucky to get it to the pitcher’s mound, so the outfielders have very little chance of actually having a ball get to them, and they seem to know this.  For the most part, the “pitcher” (i.e., the person standing on the mound) and, to some extent, the other infielders watch the batter and respond to the ball.  The outfielders, however, are a very different story.  They spend their time playing in the dirt, rolling on the ground, chasing butterflies or chasing each other.  On the rare occasion that a ball does get to the outfield, the coach has to yell instructions to his outfielders to get them to look for the ball, pick it up and throw it to the infield.  There is a definite difference in attention between the infield and the outfield in Tee-ball.  This is not the case, however, in the “machine-pitch” league that my 6-year-old grandson plays in.  For the most part, all of the defensive players seem to attend to the batter and respond when the ball is hit.  So what is the difference?
Obviously there is a maturational difference between the 4- and 5-year-olds and the 6- and 7-year-olds, but I don’t think this explains all of the attentional difference, because even Tee-ball players seem to pay more attention when playing the infield.  I think much of it has to do with expectations and salience.  Attention is the process of selecting among the many competing stimuli present in one’s environment, processing some while inhibiting the processing of others.  That selection process is driven by the goals and expectations that we have and by the salience of the external variables in our environment.  The goal of a 4-year-old “pitcher” is to impress her parents, grandparents and coach, and she expects the ball to come her way, so her attention is directed to the batter and the ball.  The 4-year-old outfielder has a goal of getting through the inning so that he can bat again and impress his audience, knowing that the probability of a ball coming his way is very small.  The goals and expectations are different in the infield and the outfield, so the stimuli that are attended to are different.  The same is true in the workplace.  What is salient, important and obvious to the supervisor (after the injury occurred) is not necessarily what was salient, important and obvious to the injured employee before the injury occurred.  We can’t attend to everything, so it is the job of the supervisor (or parent, or Tee-ball coach) to make the most important stimuli (e.g., risk in the workplace; batter and ball in the Tee-ball game) salient.  This is where the discussions that take place before, during and after the job are so important to focusing the attention of workers on the salient stimuli in their environment.  Blaming the person for “not paying attention” is not the answer, because we don’t intentionally “not pay attention”.  Creating a context where the important stimuli are salient is a good starting point.

Lone Workers and “Self Intervention”

dreamstime_s_27480666.jpg

We work with a lot of companies that have Stop Work Authority policies and that are concerned that their employees are not stepping up and intervening when they see another employee doing something unsafe.  So they ask us to help their employees develop the skills and the confidence to do this with our SafetyCompass®: Intervention training program.  Intervention is critical to maintaining a safe workplace where teams of employees are working together to accomplish results.  But what about situations where work is being accomplished not by teams but by individuals working in isolation…the Lone Worker?  He or she doesn’t have anyone around to watch their back and intervene when they are engaging in unsafe actions, so what can be done to improve safety in these situations?  It requires “self intervention”.  When we train intervention skills, we help our students understand that the critical variable is understanding why the person has decided to act in an unsafe way, by understanding the person’s context.  This is also the critical variable with “self intervention”.  Everyone writing (me) or reading (you) this blog has at some point in their life been a lone worker.  Have you ever been driving down the road by yourself?  Have you ever been working on a project at home with no one around?  Now, have you ever found yourself speeding when you were driving alone, or using a power tool on your home project without the proper PPE?  Most of us can answer “yes” to both of these questions.  In the moment when those actions occurred, it probably made perfect sense to you to do what you were doing because of your context.  Perhaps you were speeding because everyone else was speeding and you wanted to “keep up”.
Maybe you didn’t wear your PPE because you didn’t have it readily available and what you were doing was only going to take a minute to finish, and you fell victim to the “unit bias”, the psychological phenomenon that creates in us a desire to complete a task before moving on to another.  Had you stopped (mentally) and evaluated the context before engaging in those actions, you might have recognized that they were both unsafe and that the potential consequences were severe enough to warrant a different decision.  “Self intervention” is the process of evaluating your own personal context, especially when you are alone, to determine the contextual factors that are currently driving your decision making, while also evaluating the risk and an approach to risk mitigation prior to engaging in the activity.  It requires that you understand that we are all susceptible to cognitive biases such as the “unit bias” and that we can all become “blind” to risk unless we stop, ask ourselves why we are doing what we are doing or about to do, evaluate the risk associated with that action and then make corrections to mitigate that risk.  When working alone we don’t have the luxury of someone else watching out for us, so we have to consciously do that ourselves.  Obviously, as employers we have the responsibility to engineer the workplace to protect our lone workers, but we can’t put a barrier in place to mitigate every risk, so we should equip our lone workers with the knowledge and skills to self intervene prior to engaging in risky activities.  We need to help them develop the self intervention habit.

Hardwired to Jump to Conclusions

yeah-if-everybody-s3lpx6.jpg

Have you ever misinterpreted what someone said, or why they said it, responded defensively, and ended up needing to apologize for your response? Or have you ever been driving down the freeway, minding your own business, driving the speed limit, and gotten cut off by someone? If you have, and you are like me, then you probably shouted something like “jerk” or “idiot.” (By the way, as my six-year-old grandson reminded me from the back seat the other day, the other driver can’t hear you!) As it turns out, we are cognitively hardwired to respond quickly with an attributional interpretation of what we see and hear. It is how we attempt to make sense of our fast-paced, complex world.

Daniel Kahneman, in his 2011 book “Thinking, Fast and Slow,” proposes that we have two different cognitive systems: one designed for automatic, rapid interpretation of input with little or no effort or voluntary control (System 1), and the other designed for conscious, effortful, rational interpretation of information (System 2). We spend most of our time using System 1 in our daily lives because it requires much less effort and energy as it helps us make sense of our busy world. The problem is that System 1 analysis is based on limited data and depends on past experience and easily accessible knowledge to make interpretations, and thus is often wrong. When I interpreted the actions of the driver who cut me off as a reflection of his intellect (“idiot”), it was System 1 processing that led to that interpretation. I “jumped to a conclusion” without sufficient processing; I didn’t allow System 2 to do its work. If I stay with my System 1 interpretation, then the next time I get cut off I am even more likely to see an “idiot,” because that interpretation is the most easily accessible one given the previous experience. But if I allow System 2 to operate, I can change the way I perceive future events of this nature.
System 2 allocates attention and effortful processing to alternative interpretations of data/events. It requires more time but also increases the probability of being right in our interpretation of the data. Asking myself if there could be other reasons why the driver cut me off is a System 2 function. Identifying and evaluating those possibilities is also a System 2 function. Engaging in System 2 cognitive processing can alter the information stored in my brain and thus affect the way I perceive and respond to similar events in the future.

So how can we stop jumping to conclusions?

It would be great if we could override our brain’s wiring and skip System 1 processing, but we can’t. Actually, without System 1 we would not be very efficient, because we would over-analyze just about everything. What we can do is recognize when we are jumping to conclusions (guessing about intent, for example) and force ourselves to focus our attention on other possible explanations, i.e., activate System 2. You need to find your “guessing trigger” to signal you to call up System 2. When you realize that you are thinking negatively about someone (“idiot”) or feeling a negative emotion like anger or frustration, simply ask yourself: “Is there something I am missing here?” “Is there another possible explanation for this?” Simply asking this will activate System 2 processing (and also calm you down) and lead to a more accurate interpretation of the event. It will help override your natural tendency to jump to conclusions. It might even keep you from looking like an “idiot” when you have to apologize for your wrong interpretation and action.

Why It Makes Sense to Tolerate Risk

Construction_not_tied.jpg

Risk-Taking and Sense-Making

Risk tolerance is a real challenge for nearly all of us, whether we are managing a team in a high-risk environment or trying to get a teenager to refrain from using his cellphone while driving.  It is also, unfortunately, a somewhat complicated matter.  There are plenty of moving parts.  Personalities, past experiences, fatigue and mood have all been shown to affect a person’s tolerance for risk.  Apart from trying to change individuals’ “predispositions” toward risk-taking, there is a lot that we can do to help minimize risk tolerance in any given context.  The key, as it turns out, is to focus our efforts on the context itself.

If you have followed our blog, you are by now familiar with the idea of “local rationality,” which goes something like this: Our actions and decisions are heavily influenced by the factors that are most obvious, pressing and significant (or, “salient”) in our immediate context.  In other words, what we do makes sense to us in the moment.  When was the last time you did something that, in retrospect, had you mumbling to yourself, “What was I thinking?”  When you look back on a previous decision, it doesn’t always make sense because you are no longer under the influence of the context in which you originally made that decision.

What does local rationality have to do with risk tolerance?  It’s simple.  When someone makes a decision to do something that he knows is risky, it makes sense to him given the factors that are most salient in his immediate context.

If we want to help others be less tolerant of risk, we should start by understanding which factors in a person’s context are likely to lead him to think that it makes sense to do risky things.  There are many factors, ranging from the layout of the physical space to the structure of incentive systems.  Some are obvious; others are not.  Here are a couple of significant but often overlooked factors.

Being in a Position of Relative Power

If you have a chemistry set and a few willing test subjects, give this experiment a shot.  Have two people sit in submissive positions (heads downcast, backs slouched) and one person stand over them in a power position (arms crossed, towering and glaring down at the others).  After only 60 seconds in these positions, something surprising happens to the brain chemistry of the person in the power position.  Testosterone (associated with risk tolerance) and cortisol (associated with risk aversion) levels change, and this person is now more inclined to do risky things.  That’s right: when you are in a position of power relative to others in your context, you are more risk tolerant.

There is an important limiting factor here, though.  If the person in power also feels a sense of responsibility for the wellbeing of others in that context, the brain chemistry changes and he or she becomes more risk averse.  Parents are a great example.  They are clearly in a power-position relative to their children, but because parents are profoundly aware of their role in protecting their children, they are less likely to do risky things.

If you want to limit the effects of relative power-positioning on certain individuals’ risk tolerance - think supervisors, team leads, mentors and veteran employees - help them gain a clear sense of responsibility for the wellbeing of others around them.

Authority Pressure

On a remote job site in West Texas, a young laborer stepped over a pressurized hose on his way to get a tool from his truck.  Moments later, the hose erupted and he narrowly avoided a life-changing catastrophe.  This young employee was fully aware of the risk of stepping over a pressurized hose, and under normal circumstances, he would never have done something so risky; but in that moment it made sense because his supervisor had just instructed him with a tone of urgency to fetch the tool.

It is well documented that people will do wildly uncharacteristic things when instructed to do so by an authority figure.  (See Stanley Milgram’s “Behavioral Study of Obedience.”)  The troubling part is that people will do uncharacteristically dangerous things - risking life and limb - under the influence of minor and even unintentional pressure from an authority figure.  Leaders need to be made aware of their influence and unceasingly demonstrate that, for them, working safely trumps all other demands.

A Parting Thought

There is certainly more to be said about minimizing risk tolerance, but a critical first step is to recognize that the contexts in which people find themselves, which are the very same contexts that managers, supervisors and parents have substantial control over, directly affect people’s risk tolerance.

So, with that “trouble” employee / relative / friend / child in mind, ask yourself: how might their context lead them to think that it makes sense to do risky things?

Hardwired Inhibitions: Hidden Forces that Keep Us Silent in the Face of Disaster

Brain-Cogs.jpg

Employees’ willingness and ability to stop unsafe operations is one of the most critical parts of any safety management system, and here’s why: safety managers cannot be everywhere at once.  They cannot write rules for every possible situation.  They cannot engineer the environment to remove every possible risk, and when the big events occur, it is usually because of a complex and unexpected interaction of many different elements in the work environment.  In many cases, employees working at the front line are not only the first line of defense against these emergent hazards; they are quite possibly the most important one.  Our 2010 study of safety interventions found that employees intervene in only about 39% of the unsafe operations they recognize while at work.  In other words, employees’ silence is a critical gap in safety management systems, and it is a gap that needs to be honestly explored and resolved.

An initial effort to resolve this problem - Stop Work Authority - has been beneficial, but it is insufficient.  In fact, 97% of the people who participated in the 2010 study said that their company has given them the authority to stop unsafe operations.  Stop Work Authority’s value is in assuring employees that they will not be formally punished for insubordination or slowing productivity.  While fear of formal retaliation inhibits intervention, there are other, perhaps more significant forces that keep people silent.

Some might assume that the real issue is that employees lack sufficient motivation to speak up.  This belief is unfortunately common among leadership, captured in a familiar refrain: “We communicated that it is their responsibility to intervene in unsafe operations, but they still don’t do it.  They just don’t take it seriously.”  Contrary to this belief, we have spoken one-on-one with thousands of frontline employees, and nearly all of them, regardless of industry, culture, age or other demographic category, genuinely believe that they have a fundamental, moral responsibility to watch out for and help protect their coworkers.  Employees’ silence is not simply a matter of poor motivation.

At the heart of this issue is the “context effect.”  What employees think about, remember and care about at any given moment is heavily influenced by the specific context in which they find themselves.  People literally see the world differently from one moment to the next as a result of the social, physical, mental and emotional factors that are most salient at the time.  The key question becomes, “What factors in employees’ production contexts play the most significant role in inhibiting intervention?”  While there are many, and they vary from one company to the next, I would like to introduce four common factors in employees’ production contexts:

THE UNIT BIAS

Have you ever been focused on something, realized that you should stop to deal with a different, more significant problem, but decided to stick with the original task anyway?  That is the unit bias.  It is a distortion in the way we view reality.  In the moment, we perceive that completing the task at hand is more important than it really is, and so we end up putting off things that, outside of the moment, we would recognize as far more important.  Now imagine that an employee is focused on a task and sees a coworker doing something unsafe.  “I’ll get to it in a minute,” he thinks to himself.

BYSTANDER EFFECT

This is a well-documented phenomenon whereby we are much less likely to intervene or help others when we are in a group.  In fact, the more people there are, the less likely we are to be the ones who speak up.

DEFERENCE TO AUTHORITY

When we are around people with more authority than us, we are much less likely to be the ones who take initiative to deal with a safety issue.  We refrain from doing what we believe we should, because we subtly perceive such action to be the responsibility of the “leader.”  It is a deeply-embedded and often non-conscious aversion to insubordination: When a non-routine decision needs to be made, it is to be made by the person with the highest position power.

PRODUCTION PRESSURE 

When we are under pressure to produce something in a limited amount of time, it does more than make us feel rushed.  It literally changes the way we perceive our own surroundings.  Things that might otherwise be perceived as risks that need to be stopped are either not noticed at all or are perceived as insignificant compared to the importance of getting things done.

In addition to these four, there are other forces in employees’ production contexts that inhibit them when they should speak up.  If we are going to get people to speak up more often, we need to move beyond Stop Work Authority and get past the assumption that motivating them will be enough.  We need to help employees understand what is inhibiting them in the moment, and then give them the skills to overcome these inhibitors so that they can do what they already believe is right: speak up to keep people safe.

Human Error and Complexity: Why your “safety world view” matters

Contextual-Model-2.0.png

Have you ever thought about, or looked at pictures of, your ancestors and realized, “I have that trait too!”? Just as your traits are in large part determined by random combinations of genes from your ancestry, the history behind your safety world view is probably largely the product of chance - for example, whether you studied Behavioral Psychology or Human Factors in college, which influential authors’ views you were exposed to, who your first supervisor was, or whether you worked in the petroleum, construction or aeronautical industry. Our safety world view is built over time and dramatically impacts how we think about, analyze and strive to prevent accidents.

Linear View - Human Error

Let’s briefly look at two views, Linear and Systemic, not because they are the only possible ones, but because they have had, and are currently having, the greatest impact on the world of safety. The Linear View is integral to what is sometimes referred to as the “Person Approach,” exemplified by traditional Behavior Based Safety (BBS), which grew out of the work of B.F. Skinner and the application of his research to Applied Behavior Analysis and Behavior Modification. Whether we have thought about it or not, much of the industrial world is operating on this linear theoretical framework. We attempt to understand events by identifying and addressing a single cause (antecedent) or distinct set of causes, which elicit unsafe actions (behaviors) that lead to an incident (consequences). This view impacts both how we try to change unwanted behavior and how we go about investigating incidents.

This behaviorally focused view naturally leads us to conclude, in many cases, that human error is, or can be, THE root cause of an incident. In fact, it is routinely touted that “research shows that human error is the cause of more than 90 percent of incidents.” We are also conditioned and cognitively biased to find this linear model appealing. I use the word “conditioned” because the model explains a lot of what happens in our daily lives, where situations are relatively clean and simple, so we naturally extend this way of thinking to more complex worlds and situations where it is perhaps less appropriate. Additionally, because we view accidents after the fact, the well-documented phenomenon of “hindsight bias” leads us to linearly trace the cause back to an individual, and since behavior is the core of our model, we have a strong tendency to stop there. The assumption is that human error (the unsafe act) is a conscious, “free will” decision and is therefore driven by psychological functions such as complacency, lack of motivation, carelessness or other negative attributes.
This leads to the also well-documented phenomenon of the Fundamental Attribution Error, whereby we tend to attribute failure on the part of others to negative personal qualities such as inattention or lack of motivation, thus leading to the assignment of causation and blame. This assignment of blame may feel warranted and even satisfying, but it does not necessarily deal with the real “antecedents” that triggered the unsafe behavior in the first place. As Sidney Dekker put it, “If your explanation of an accident still relies on unmotivated people, you have more work to do.”

Systemic View - Complexity

In reality, most of us work in complex environments involving multiple interacting factors and systems, and the linear view has a difficult time dealing with this complexity. James Reason (1997) convincingly argued for the complex nature of work environments with his “Swiss Cheese” model. In his view, accidents are the result of active failures at the “sharp end” (where the work is actually done) and “latent conditions,” which include many organizational decisions at the “blunt end” (higher management) of the work process. Because barriers fail, there are times when the active failures and latent conditions align, allowing an incident to occur. More recently, Hollnagel (2004) has argued that active failures are a normal part of complex workplaces because individuals must adapt their performance to a constantly changing environment and to the pressure to balance production and safety. As a result, accidents “emerge” as this adaptation occurs (Hollnagel calls this adaptive process the “Efficiency-Thoroughness Trade-Off”). Dekker (2006) has added to this view the idea that such adaptation is normal and even “locally rational” to the individual committing the active failure, because he or she is responding to a context that may not be apparent to those observing performance in the moment or investigating a resulting incident.

Focusing only on the active failure as the result of “human error” misses the real reasons it occurs at all. Understanding the complex context that elicits the decision to behave in an “unsafe” manner provides far more meaningful information, and it is much easier to engineer the context than it is to engineer the person. While a person is involved in almost all incidents in some manner, human error is seldom a “sufficient” cause of an incident, because of the complexity of the environment in which it occurs.
Attempting to explain and prevent incidents from a simple linear viewpoint will almost always leave out contributory (and often non-obvious) factors that drove the decision in the first place and thus led to the incident.

Why Does it Matter?

Thinking of human error as a normal and adaptive component of complex workplace environments leads to a different approach to preventing the incidents that can emerge from those environments. It requires that we gain an understanding of the many, often surprising, contextual factors that can lead to the active failure in the first place. If we are going to engineer safer workplaces, we must start with something that does not look like engineering at all: candid, informed and skillful conversations with and among people throughout the organization. These conversations should focus on determining the contextual factors that are driving the unsafe actions. Only with this information can we effectively eliminate what James Reason called the “latent conditions” that create the contexts eliciting those unsafe actions. Additionally, this information should be used in the moment to eliminate active failures and should also be allowed to flow to decision makers at the “blunt end,” so that the system can be engineered to maximize safety. Your safety world view really does matter.