Safety Culture

The Safety Switch℠


As our world and workplaces grow in complexity, and as failures in these complex systems become increasingly calamitous, how do we take the insights that so many dedicated and brilliant individuals have given us and make things better for the people who, whether we want to think about it or not, will suffer and die if we don’t adapt? It’s a heavy question, and one that’s been on our minds for a while.

You might not have known, but between blog posts and our day jobs, we’ve been writing a book.  In fact, we are now in the final phases of writing this book, called “The Safety Switch℠,” which aims to tie together our research and the priceless contributions made by scholars and practitioners from a wide range of disciplines.

We thought it was about time to introduce the premise.

The Safety Switch℠ is a way of thinking about how we can adapt to a new world — one in which organizations are understood as complex systems, and the ever-increasing complexity of these systems presents new challenges.

The “Switch” happens at two levels.

First, it is a micro-level, personal, in-the-moment switch between two mental Modes.  Our default setting, Mode 1, is powered by mental shortcuts (called “heuristics”) and distortions (called “biases”) and often leads us to fix upon human error as the cause of safety problems.  While we may be “wired” to stay in this default mode, we can deliberately switch to a second Mode.  When in this Mode 2, we take a rigorous, effortful, sometimes counterintuitive, and often winding path to understand and address persistent safety challenges.

Second, there is a macro-level, organizational switch.  It involves activating within the organizational system an inherently dynamic layer of protection — its people — positioning humans as a unique and requisite response to growing complexity.

But here’s the catch: You can’t flip the second switch until you flip the first.

We have to learn when and how to switch from Mode 1 to Mode 2 in the moment and on the fly if we are going to generate the capacity to flip the second switch, and energize within our organizations this vital, dynamic and fully integrated layer of protection — the people.

Are Safety and Production Compatible?


Can we all agree that people tend to make fewer mistakes when they slow down and, conversely, make more mistakes when they speed up?  And that people tend to increase their speed when they feel pressure to produce?  Personal experience and research both support these two contentions.  Deadlines and pressure to produce literally change the way we see the world.  Things that might otherwise be perceived as risks are either not noticed at all or are perceived as insignificant compared to the importance of getting things done.

Pressure and Perception

A famous research study by Darley & Batson (1973), sometimes referred to as “The Good Samaritan Study”, demonstrated the impact of production pressure on people’s willingness to help someone in need:

Participants were seminary students who were given the task of preparing a speech on the parable of the Good Samaritan — a story in which a man from Samaria voluntarily helps a stranger who was attacked by robbers.  The participants were divided into different groups, some of which were rushed to complete this task.  They were then sent from one building to another, where, along the way, they encountered a shabbily dressed “confederate” slumped over and appearing to need help.  The researchers found that participants in the hurry condition (production pressure) were much more likely to pass by the person in need, and many even reported either not seeing the person or not recognizing that the person needed help.

Even people’s deeply held moral convictions can be trumped by production pressure, not because it has eroded those convictions, but because it makes people see the world differently.

The Trade-Off

One reason for this is that many of our decisions are impacted by what is known as the Efficiency-Thoroughness Trade-off (ETTO) (Hollnagel, 2004, 2009).  It is often impossible to be both fast and completely accurate at the same time because of our limited cognitive abilities, so we have to give in to one or the other.

When we give in to speed (efficiency) we tend to respond automatically rather than thoughtfully. We engage what Daniel Kahneman (see Hardwired to Jump to Conclusions) refers to as “System 1” processing — we utilize over-learned, quickly retrieved heuristics that have worked for us in the past, even though those approaches cause us to overlook risks and other important subtleties in the current situation.  This is how we naturally deal with the ETTO while under pressure from peers, supervisors or organizational systems to increase efficiency.

Conversely, when we are not under pressure to increase efficiency, but, rather, pressure to be completely accurate (thorough), we have a greater tendency to engage what Kahneman calls “System 2” processing — we are more thorough in how we manage our efforts and account for the factors that could impact the quality of what we are producing.  In these instances, we will notice risks, opportunities and other subtleties in our environments, just as the “non-rushed” participants did in the “Good Samaritan Study.”

So what is the point?

Most of our organizations are geared to make money, so efficiency is very important; but how do we bolster the thoroughness side of the tradeoff to support safety and minimize undesired events?  To answer this, we have to take an honest look at the context in which employees work.  Which is more significant to employees, efficiency or thoroughness?  And what impact is it having on decision making?

Some industries (e.g., manufacturing) have opted to streamline and automate their processes so that this balance is handled by interfacing humans more effectively with the machines.  Some industries can’t do this as well because of the nature of their work (e.g., construction).  We worked with a client in this latter category that had a robust safety program, experienced employees and well-intentioned leaders, but which was about to go out of business because of poor safety performance…and it had everything to do with the Efficiency-Thoroughness Trade-off.  The contracts that they operated under made it nearly impossible to turn a profit unless they completed projects ahead of schedule.  As they became more efficient to meet these deadlines, the time-to-completion got shorter and shorter in each subsequent contract until “thoroughness” had been edged out almost entirely.  For this company, preaching “safety” and telling people to take their time was simply not enough to outweigh the ever-increasing, systemic pressure to improve efficiency.  The only way to fix the problem and balance the ETTO was to fix the way that contracts were written, which was much more challenging than the quick and illusory solutions that they had originally tried.

Every organization is different, so balancing the ETTO will require different solutions and an understanding of the cultural factors driving decision making at all levels of the organization.  Once you understand what is salient to people in the organization, you can identify changes that will decrease the negative impact of pressure on performance.

Protecting Young Workers from Themselves


Looking back at your younger self, did you ever do something that now seems foolish and excessively risky? We have talked several times in the past about the phenomenon of “local rationality,” the way our reasoning and decisions are heavily influenced by our immediate context. We are all subject to its impact, including yours truly (see “A Personal Perspective on Context and Risk Taking”), but perhaps even more so when we are young, especially between the ages of 15 and 24. The data are clear.  Adolescents and young adults are more likely to engage in risky behaviors than are adults (especially older adults), and workplace incidents are more frequent among this age group.

So is it because young workers are less experienced, poorer decision makers or inherently more risk tolerant? The answer is likely “yes” to all of these questions, but it is more complicated than that. Understanding why young workers do risky things requires an understanding of the neural mechanisms that are at play in these types of situations. While it's a heady topic (forgive the pun), understanding neural development can be of extreme importance when attempting to protect our younger workers.

It has been suggested that the adolescent brain's (cortical) structures - those involved in logical reasoning and decision making - aren't completely developed, which contributes to risky decisions and behaviors. While it is true that the frontal cortex continues to develop into young adulthood, research demonstrates that, by age 15, logical reasoning abilities have already developed to a level equal to those of adults. In fact, 15-year-olds are as good as adults at perceiving risk and estimating their vulnerability to that risk (Reyna & Farley, 2006).

In light of this type of evidence, Steinberg (2004; 2007) has proposed that risk taking is the product of an interaction between logical (cognitive) reasoning and psychosocial factors such as peer pressure. Unlike the logical reasoning abilities that have developed by age 15, the psychosocial capacities that impact logical reasoning do not fully develop until the mid-twenties, and they therefore interfere with real-world decision making and risk aversion. In other words, the mature decision making processes of adolescents and young adults may be interfered with by the immature psychosocial processes of this group, and reasoning only shows maturity when these psychosocial factors are minimized...for example, when there are no peers around to pressure them.

Additionally, the limbic system, which is integral to socioemotional processing and is also the center for experiencing pleasure, is less developed and highly sensitive in adolescence.  Because of this, adolescents will put themselves in high-risk situations in the hope of experiencing the “high” that comes from a dopamine rush. Even though the frontal cortex (executive function) is more advanced, the “thrill” that comes from the risk can overpower the logical functions of the brain and lead to risk taking, especially under stress or fatigue. In other words, at this age, the attraction to rewards causes young adults to do exciting and perhaps risky things, while their poor self-control makes it hard for them to slow down and think before acting, even when they know that the risk is present.

So what does this mean for protecting this age group? According to Steinberg (2004), attempts to reduce risk taking in this group by improving their knowledge, attitudes or beliefs have generally failed. Changes to their decision making contexts, such as removing peers from the team and having older adults observe them, have had a much greater impact on reducing risk taking behaviors.  Rearranging teams so that young workers are not with their peers minimizes the impact of negative psychosocial factors on their decisions and is a first step in protecting young workers from their own developing brains.  Additionally, teaming young workers with older workers, who have been trained to observe and effectively intervene in their younger counterparts' unsafe performance, will also reduce incidents among this age group. It is, however, very important that mutual respect be nurtured so that coaching does not trigger defensiveness.  Creating contexts that minimize the impact of negative psychosocial factors on logical decision making is one way to protect young workers from themselves.

Your Organization’s Safety Immune System (Part 2): Strengthening Immunity


In a recent blog (Your Organization’s Safety Immune System) we talked about people being the “white blood cells” of our “safety immune system,” but also about the need to help them become competent in that role.  People care about the safety of others, but most people do not have the natural ability to conduct a successful intervention discussion.  Isn’t it ironic, then, that most organizational leaders assume their employees have that very ability when they tell them to intervene whenever they see something unsafe?  It takes skill to successfully tell someone that their actions could lead to injury.  Many times people don’t intervene because they are afraid of reactance or defensiveness on the part of the other person.  Having the skills to deal with defensiveness is essential to being willing to enter into this potentially high stress conversation in the first place.  Success involves understanding where defensiveness comes from, how to deal with it before it arises and what to do when we encounter it, both in others and in ourselves.  The intervention conversation is not a script, but rather a process that involves understanding the dynamics of the inhibiting forces and developing a set of skills that lead to effective communication.

Defensiveness

We have all experienced defensiveness, both in ourselves and in other people.  Defensiveness arises because we perceive that we are under attack.  We are naturally inclined to defend our bodies and our property from danger, but we are also naturally inclined to protect and defend our personal dignity from criticism and our reputation from public ridicule.  When we perceive that our dignity or reputation is threatened, we defend either internally, by retreating or avoiding, or externally, by pushing back physically or verbally.  Thus we enter the Defensive Cycle™.

When we see someone doing something undesirable, such as acting in an unsafe manner, we automatically attempt to understand why they are doing it, and most of the time we automatically attribute it to something internal to the person.  This leads to the well-documented phenomenon of the “Fundamental Attribution Error” (FAE), whereby we have a tendency to attribute failure on the part of others to negative personal qualities such as inattention, lack of motivation, etc., thus leading to the assignment of causation and blame.  When you fall victim to the FAE you will likely become frustrated or even angry with the other person, and if you enter into a conversation, you will likely come across as blaming the person, whether you mean to or not.  When the other person perceives you as blaming them, they will most likely guess that you are attacking their dignity or reputation, whether you mean to or not.  When this happens they naturally become defensive.  In turn, if the person gets quiet (defends internally), you will guess that you were right and that they took your words to heart, so you will expect performance changes which may or may not occur.  If, on the other hand, the person becomes aggressive (defends externally), you will guess that they are attacking your dignity or reputation, and you will then become defensive and either retreat or push back yourself.  And the cycle goes on until someone retreats, or until you are able to stop the defensiveness and focus not on the person but on the context that created the unsafe performance in the first place.  You have to change your intent from blame to understanding, and you have to communicate that intent to the other person.

Recognizing that we are in the Defensive Cycle™ is the first step to controlling defensiveness and conducting a successful intervention.  It is at this point that we need to stop and remember that when people engage in unsafe actions, it is because those actions make sense to them (local rationality) given the context in which they find themselves.  When we commit the FAE we limit the possible causes of their decision to act in an unsafe manner to their motivation or other internal attributes, and then allow that guess to create frustration, which causes us to come across as blaming the person.  Recognizing that there could be other contextual factors driving their decision will reduce our tendency to blame, stop the defensive cycle before it begins and significantly increase our chances of having a successful intervention discussion.

Over the past decade we have trained many frontline workers and supervisors/managers in the skills needed to deal with defensiveness, hold an intervention discussion and create sustained behavior change.  We have also found that, following training, interventions increase and incidents decrease simply as a result of creating competence, which leads to confidence, thus strengthening the “white blood cells” needed for the “safety immune system” to work.

Your Organization's Safety Immune System


Have you ever considered that organizations are, in many ways, like living organisms?  While there are obvious differences between organizations and living organisms, the metaphor can be helpful in understanding how to keep people safe in the workplace.  Like a living organism, organizations are made up of complex, interacting components and systems that allow the organization to survive, flourish and grow.  One of those systems in a living organism is its immune system, which is needed to help it fight off external and internal attacks.  Organizations also need an immune system to help them defend themselves from danger.

An immune system is composed of many different types of barriers against disease, some static and some dynamic.  Your body’s immune system includes static components like your skin, blood vessels, thymus, spleen, bone marrow, liver, etc., each designed to act as a barrier to defend your body against various dangers that could cause damage to you.  Organizations also rely on various static barriers to defend themselves against injury.  These include rules, policies, procedures and various mechanical safeguards, such as personal protective equipment, machine guards, etc.  While unquestionably useful, these defenses are also inherently insufficient.  No matter how well designed or assimilated, these devices simply cannot prevent all incidents in complex workplaces because they are static and slow to change.  As such, there is a need for something different.  Something more naturally suited to mitigate risk in our highly complex work environments.  Something that is more agile than our usual tools.  Something ubiquitous, reactive and creative.

The immune systems of living organisms contain something that is more agile than the static structures and barriers listed above.  They include white blood cells that move around the body and create the various types of antibodies needed to fight off invaders.  Our organizations also have “white blood cells”…the people who work there.  The individuals who are moving around, observing and intervening to activate safeguards or remove others from danger.  The difference between the white blood cells of our immune system and the people in our organizations is that white blood cells “naturally” intervene when danger is observed.  People, on the other hand, don’t necessarily always intervene.  White blood cells have the natural ability to detect danger and intervene.  Most people, on the other hand, don’t have the natural ability to intervene and, as we have discussed in other articles (Hardwired Inhibitions), they are actually predisposed not to intervene.  To overcome this inhibition, people have to be trained.  They have to learn not only how to recognize hazards but how to effectively speak up and deal with the possible defensiveness that can arise when they do so.

Is your organization's immune system fully functional?  Do your organization's “white blood cells” know how to intervene?  If not, then your organization is very possibly at serious risk of injury.

Peer Pressure, Conformity and Your Safety Culture


We are social creatures. We desire and attempt to maintain relationships wherever we are. In other words, we try to fit in with other people. This is true whether we are talking about family, work or just being out in public with people we don’t know. The research is pretty clear…our decisions and actions are impacted by the people around us.

Take the classic research of Solomon Asch (1955; 1956), which demonstrates the power of groups (normative influence) over our decision making. The experimental task was simple…select which one of three comparison lines matched a standard line, where one comparison line was obviously longer and one obviously shorter. The catch was that the experimental subject was grouped with varying numbers of confederates who would select an obviously wrong answer. The results were consistent…participants were likely to go along with the group even when the answers were obviously wrong, and this conformity increased as group size increased. Additional research by Asch demonstrated that conformity decreases by approximately 25% with just one dissenter, suggesting that people want to make the correct decision and don’t need a lot of support from group members to do so. The implication is that people tend to conform to group norms if everyone agrees, but are willing to dissent if there is any sort of disagreement among group members. The reason people are willing to go along with a group even when the decision is obviously wrong is fear of rejection, and research provides ample evidence that rejection is a very common result of dissent from group decisions (see Tata et al., 1996).

There is a second reason that people go along with the group, in addition to the desire to be liked and to fit in (normative influence). Research demonstrates that we often go along with the group because we think the group knows more about the correct decision than we do (informational influence). Two types of situations produce informational influence: (1) ambiguous situations in which a decision is difficult, and (2) crisis situations in which people don’t have time to really think for themselves. While (2) is pretty uncommon, (1) is very common in the workplace, especially with new hires. Less experienced employees don’t want to be rejected by the group, but they also don’t have the experience to make thoughtful decisions when faced with situations that they have not encountered before. This is especially true when they are observing more experienced employees who don’t view the situation as ambiguous at all and don’t seem to hesitate when making a decision, even when the decision leads to an unsafe action. These types of decisions become automatic…just the way we do it around here.

While peer pressure can be a bad thing if it leads to undesired behavior, it can also be a “good” thing if it leads to positive, safe, desired behavior. Understanding the power of peer pressure and the accepted, automatic nature of responding within an organization can help you create a safety culture where peer pressure leads to safe performance and a decrease in undesired behaviors and resulting incidents.

Lone Workers and “Self Intervention”


We work with a lot of companies that have Stop Work Authority policies and that are concerned that their employees are not stepping up and intervening when they see another employee doing something unsafe.  So they ask us to help their employees develop the skills and the confidence to do this with our SafetyCompass®: Intervention training program.  Intervention is critical to maintaining a safe workplace where teams of employees are working together to accomplish results.  However, what about situations where work is being accomplished not by teams but by individuals working in isolation…the Lone Worker?  He or she doesn’t have anyone around to watch their back and intervene when they are engaging in unsafe actions, so what can be done to improve safety in these situations?  It requires “self intervention.”

When we train intervention skills, we help our students understand that the critical variable is understanding why the person has made the decision to act in an unsafe way by understanding the person’s context.  This is also the critical variable with “self intervention.”  Everyone writing (me) or reading (you) this blog has at some point in their life been a lone worker.  Have you ever been driving down the road by yourself?  Have you ever been working on a project at home with no one around?  Now, have you ever found yourself speeding when you were driving alone, or using a power tool on your home project without the proper PPE?  Most of us can answer “yes” to both of these questions.  In the moment when those actions occurred, it probably made perfect sense to you to do what you were doing because of your context.  Perhaps you were speeding because everyone else was speeding and you wanted to “keep up.”  Maybe you didn’t wear your PPE because you didn’t have it readily available and what you were doing was only going to take a minute to finish, and you fell victim to the “unit bias,” the psychological phenomenon that creates in us a desire to complete a project before moving on to another.  Had you stopped (mentally) and evaluated the context before engaging in those actions, you might have recognized that they were unsafe, and that the potential consequences were severe enough that you would have made a different decision.

“Self intervention” is the process of evaluating your own personal context, especially when you are alone, to determine the contextual factors that are currently driving your decision making, while also evaluating the risk and an approach to risk mitigation prior to engaging in the activity.  It requires that you understand that we are all susceptible to cognitive biases such as the “unit bias” and that we can all become “blind” to risk unless we stop, ask ourselves why we are doing (or are about to do) what we are doing, evaluate the risk associated with that action and then make corrections to mitigate that risk.  When working alone we don’t have the luxury of having someone else watching out for us, so we have to consciously do that ourselves.  Obviously, as employers we have the responsibility to engineer the workplace to protect our lone workers, but we also can’t put every barrier in place to mitigate every risk, so we should equip our lone workers with the knowledge and skills to self intervene prior to engaging in risky activities.  We need to help them develop the self intervention habit.

Are Safety Incentive Programs Counterproductive?


In our February 11, 2015 blog we talked about “How Context Impacts Your Motivation,” and one of the contextual aspects of many workplaces is a Safety Incentive Program designed to motivate employees to improve their safety performance. Historically, the “safety bonus” has been contingent on not having any Lost Time Injuries (LTIs) on the team during a specified period of time. The idea is to provide an extrinsic reward for safe performance that will increase the likelihood of safe behavior so that accidents will be reduced or eliminated. We also concluded in that blog that what we really want is people working for us who are highly intrinsically motivated and not in need of a lot of extrinsic “push” to perform. Safety Incentive Programs are completely based on the notion of extrinsic “push.” So do they work?

We know from research dating back to the 1960s that the introduction of an extrinsic reward for engaging in an activity that is already driven intrinsically will reduce the desire to engage in that activity when the reward is removed. In other words, extrinsic reward can have the consequence of reducing intrinsic motivation. I don’t know about you, but I don’t want to get hurt, and I would assume that most people don’t want to get injured either. People are already intrinsically motivated to be safe and avoid pain.

We also know that financial incentives can have perverse and unintended consequences. It is well known that Safety Incentive Programs can have the unintended consequence of under-reporting of incidents and even injuries. Peer pressure to keep an incident quiet so that the team won’t lose its safety bonus happens in many organizations. This not only reduces information about why incidents are occurring, but it also decreases management’s ability to improve unsafe conditions, procedures, etc., making similar incidents more likely in the future. Because of this, the Occupational Safety and Health Administration (OSHA) has recently determined that safety incentive programs based on incident frequency must be eliminated because of these unintended consequences. Their suggestion is that safety bonuses should be contingent on upstream activities such as participation in safety improvement efforts like safety meetings, training, etc.

On a side note, in some organizations the Production Incentive Program is in direct conflict with the Safety Incentive Program, so that production outweighs safety from a financial perspective. When this happens, production speed can interfere with the focus on safety and incidents become more likely.

Our View

It is our view that Safety Incentive Programs are not only unnecessary, but potentially counterproductive. Capitalizing on the already present intrinsic motivation to be safe, and creating an organizational culture/context that fosters the motivation to work together as a team to keep each other safe, is much more positive and effective than adding the extrinsic incentive of money for safety. We suggest that management take the money budgeted for the safety incentive program and give pay increases, while simultaneously examining and improving the organizational context to help keep employees safe.

Contrasting Observation and Intervention Programs - Treating Symptoms vs. the Cause


Our loyal readers are quite familiar with our 2010 research into safety interventions in the workplace and the SafetyCompass® Intervention training that resulted from that research. What you may not know is why we started that research in the first place. For years we had heard client after client explain to us their concerns over their observation programs. The common theme was that observation cards were plentiful when they started the program, but submissions started to slow down over time. In an attempt to keep submissions up, companies instituted various tactics to increase the number of cards turned in. These tactics included things like communicating the importance of observation cards, rewards for the best cards, and team competitions. These tactics proved successful in the short term, but didn’t have a sustainable impact on the number or quality of cards being turned in. Eventually, leadership simply started requiring that employees turn in a certain number of cards in a given period of time. They went on to tell us of their frustration when they began receiving cards that were completely made up, with some employees even using the cards as a means to communicate their dissatisfaction with their working conditions rather than safety related observations. They simply didn't know what to do to make their observation programs work effectively.

As we spoke with their employees, we heard a different story. They told us about the hope that they themselves had when the program was launched. They were excited about the opportunity to provide information about what was really going on in their workplace so they could get things fixed and make their jobs safer. They began by turning in cards and waiting to hear back on the fixes. When the fixes didn’t come, they turned in more cards. Sometimes they would hear back in safety meetings about certain aspects of safety that needed to be focused on, but no real fixes. A few of them even told us of times when they turned in cards and their managers actually got angry about the behaviors that were being reported. Eventually they simply stopped turning in cards because leadership wasn’t paying attention to them and it was even getting people in trouble. Then leadership started giving out gift cards for the best observation cards, so they figured they would turn a few in just to see if they could win the card. After all, who couldn’t use an extra $50 at Walmart? But even then, nothing was happening with the cards they turned in, so they eventually just gave up again. The last straw was when their manager told them they had to turn in 5 per week. They spoke about the frustration that came with the added required paperwork when they knew nobody was looking at the cards anyway. As one person put it, “They’re just throwing them into a file cabinet, never to be seen again.” So the obvious choice for this person was to fill out his 5 cards every Friday afternoon and turn them in on his way out of the facility. It seemed that these organizations were all experiencing a similar Observation Program Death Spiral.

The obvious question is why? Why would such a well-intentioned and possibly game-changing program fail in so many organizations? After quite a bit of research into these organizations the answer became clear: they weren’t intervening. Or, more precisely, they weren’t intervening in a very specific manner. The intent of observation programs is to provide data that shows the most pervasive unsafe actions in our organizations. If we, as the thought goes, can find out what unsafe behaviors are most common in our organization, then we can target those behaviors and change them. The fundamental problem with that premise is the assumption that behaviors are the root cause of events (near misses, LTAs, injuries, environmental spills, etc.). In reality, behaviors themselves are the result of something else. People don’t behave in a vacuum, as if they simply decide that acting unsafely is more desirable than acting safely. There are factors that drive human behaviors; the behaviors themselves are simply a symptom of something else in the context surrounding and embedded in our organizations. Because of this, trending behaviors as a target for change efforts is no different than doctors treating the most common symptoms of a disease, rather than curing the disease itself.

A proper intervention is essentially a diagnosis of what is creating the behavior. Or, to steal the phrase from the title of our friend Todd Conklin's newest book, a pre-accident investigation.  An intervention program equips all employees with the skills to perform these investigations. When they see an unsafe behavior, they intervene in a specific way that allows them to create immediate safety in that moment, but they also diagnose the context to determine why it made sense to behave that way to begin with. Once the context is understood, a targeted fix can be put into place that makes it less likely that the behavior happens in the future. The next step in an Intervention Program is incredibly important for organizational process improvement. Each intervention should be recorded so that the context (equipment issues, layout of the workplace, procedural or rule discrepancies, production pressure, etc.) that created the behavior can be gathered and trended against other interventions. Once a large enough sample of interventions is created, organizations can then see the inner workings of their work environment. Rather than simply looking at the total number of unsafe behaviors being performed in their company (e.g., not tying off at heights), they can also understand the most common and salient context that is driving those behaviors. Only then does leadership have the ability to put fixes into place that will actually change the context in which their employees perform their jobs, and only then will they have the ability to make sustainable improvement.
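
To make the idea of trending context concrete, here is a minimal sketch in Python of how logged interventions might be aggregated so that the most common contextual drivers, not just the most common behaviors, rise to the top. The record structure, field names and example entries are hypothetical illustrations, not a prescribed format.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical intervention record: the behavior observed plus the
# contextual factors identified during the intervention conversation.
@dataclass
class Intervention:
    behavior: str
    context_factors: list[str] = field(default_factory=list)

# Illustrative example log entries only.
log = [
    Intervention("not tying off at heights",
                 ["production pressure", "harness stored far from work area"]),
    Intervention("not tying off at heights",
                 ["production pressure"]),
    Intervention("bypassed machine guard",
                 ["guard blocks access for routine adjustment"]),
]

# Trend the behaviors themselves (what observation programs usually count)...
behavior_counts = Counter(i.behavior for i in log)

# ...and, more importantly, trend the contextual factors driving them.
context_counts = Counter(factor for i in log for factor in i.context_factors)

print(behavior_counts.most_common())
print(context_counts.most_common())
```

With enough records, the second tally is the one that points leadership toward targeted, contextual fixes rather than behavior-by-behavior corrections.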

Tying it back to observation programs

The observation program death spiral was the result of information that was not actionable. Once a company has data that is actionable, they can then institute targeted fixes. Organizations that use this approach have actually seen an increase in the number of interventions logged into the system. The reason is that the employees actually see something happening. They see that their interventions are leading to process improvement in their workplace and that’s the type of motivation that no $50 gift card could ever buy.

Hardwired Inhibitions: Hidden Forces that Keep Us Silent in the Face of Disaster


Employees’ willingness and ability to stop unsafe operations is one of the most critical parts of any safety management system, and here’s why: Safety managers cannot be everywhere at once.  They cannot write rules for every possible situation.  They cannot engineer the environment to remove every possible risk, and when the big events occur, it is usually because of a complex and unexpected interaction of many different elements in the work environment.  In many cases, employees working at the front line are not only the first line of defense, they are quite possibly the most important line of defense against these emergent hazards. Our 2010 study of safety interventions found that employees intervene in only about 39% of the unsafe operations that they recognize while at work.  In other words, employees’ silence is a critical gap in safety management systems, and it is a gap that needs to be honestly explored and resolved.

An initial effort to resolve this problem - Stop Work Authority - has been beneficial, but it is insufficient.  In fact, 97% of the people who participated in the 2010 study said that their company has given them the authority to stop unsafe operations.  Stop Work Authority’s value is in assuring employees that they will not be formally punished for insubordination or slowing productivity.  While fear of formal retaliation inhibits intervention, there are other, perhaps more significant forces that keep people silent.

Some might assume that the real issue is that employees lack sufficient motivation to speak up.  This belief is unfortunately common among leadership, represented in a common refrain - “We communicated that it is their responsibility to intervene in unsafe operations; but they still don’t do it.  They just don’t take it seriously.”  Contrary to this common belief, we have spoken one-on-one with thousands of frontline employees and nearly all of them, regardless of industry, culture, age or other demographic category, genuinely believe that they have the fundamental, moral responsibility to watch out for and help to protect their coworkers.  Employees’ silence is not simply a matter of poor motivation.

At the heart of this issue is the “context effect.”  What employees think about, remember and care about at any given moment is heavily influenced by the specific context in which they find themselves.  People literally see the world differently from one moment to the next as a result of the social, physical, mental and emotional factors that are most salient at the time.  The key question becomes, “What factors in employees’ production contexts play the most significant role in inhibiting intervention?”  While there are many, and they vary from one company to the next, I would like to introduce four common factors in employees’ production contexts:

THE UNIT BIAS

Think about a time when you were focused on something, realized that you should stop to deal with a different, more significant problem, but decided to stick with the original task anyway.  That is the unit bias.  It is a distortion in the way we view reality.  In the moment, we perceive that completing the task at hand is more important than it really is, and so we end up putting off things that, outside of the moment, we would recognize as far more important.  Now imagine that an employee is focused on a task and sees a coworker doing something unsafe.  “I’ll get to it in a minute,” he thinks to himself.

BYSTANDER EFFECT

This is a well-documented phenomenon whereby we are much less likely to intervene or help others when we are in a group.  In fact, the more people there are, the less likely we are to be the ones who speak up.

DEFERENCE TO AUTHORITY

When we are around people with more authority than us, we are much less likely to be the ones who take initiative to deal with a safety issue.  We refrain from doing what we believe we should, because we subtly perceive such action to be the responsibility of the “leader.”  It is a deeply-embedded and often non-conscious aversion to insubordination: When a non-routine decision needs to be made, it is to be made by the person with the highest position power.

PRODUCTION PRESSURE 

When we are under pressure to produce something in a limited amount of time, it does more than make us feel rushed.  It literally changes the way we perceive our own surroundings.  Things that might otherwise be perceived as risks that need to be stopped are either not noticed at all or are perceived as insignificant compared to the importance of getting things done.

In addition to these four, there are other forces in employees’ production contexts that inhibit them when they should speak up.  If we are going to get people to speak up more often, we need to move beyond “Stop Work Authority” and get over the assumption that motivating them will be enough.  We need to help employees understand what is inhibiting them in the moment, and then give them the skills to overcome these inhibitors so that they can do what they already believe is right - speak up to keep people safe.

Human Error and Complexity: Why your “safety world view” matters


Have you ever thought about or looked at pictures of your ancestors and realized, “I have that trait too!” Just like your traits are in large part determined by random combinations of genes from your ancestry, the history behind your safety world view is probably largely the product of chance - for example, whether you studied Behavioral Psychology or Human Factors in college, which influential authors’ views you were exposed to, who your first supervisor was, or whether you worked in the petroleum, construction or aeronautical industry. Our “Safety World View” is built over time and dramatically impacts how we think about, analyze and strive to prevent accidents.

Linear View - Human Error

Let’s briefly look at two views - Linear and Systemic - not because they are the only possible ones, but because they have had, and are currently having, the greatest impact on the world of safety. The Linear View is integral to what is sometimes referred to as the “Person Approach,” exemplified by traditional Behavior Based Safety (BBS), which grew out of the work of B.F. Skinner and the application of his research to Applied Behavior Analysis and Behavior Modification. Whether we have thought of it or not, much of the industrial world is operating on this “linear” theoretical framework. We attempt to understand events by identifying and addressing a single cause (antecedent) or distinct set of causes, which elicit unsafe actions (behaviors) that lead to an incident (consequences). This view impacts both how we try to change unwanted behavior and how we go about investigating incidents. This behaviorally focused view naturally leads us to conclude in many cases that Human Error is, or can be, THE root cause of the incident. In fact, it is routinely touted that “research shows that human error is the cause of more than 90 percent of incidents.”

We are also conditioned and “cognitively biased” to find this linear model appealing. I use the word “conditioned” because the model explains a lot of what happens in our daily lives, where situations are relatively clean and simple…so we naturally extend this way of thinking to more complex worlds and situations where it is perhaps less appropriate. Additionally, because we view accidents after the fact, the well documented phenomenon of “hindsight bias” leads us to linearly trace the cause back to an individual, and since behavior is the core of our model, we have a strong tendency to stop there. The assumption is that human error (the unsafe act) is a conscious, “free will” decision and is therefore driven by psychological factors such as complacency, lack of motivation, carelessness or other negative attributes. This leads to the also well-documented phenomenon of the Fundamental Attribution Error, whereby we have a tendency to attribute failure on the part of others to negative personal qualities such as inattention, lack of motivation, etc., thus leading to the assignment of causation and blame. This assignment of blame may feel warranted and even satisfying, but it does not necessarily deal with the real “antecedents” that triggered the unsafe behavior in the first place. As Sidney Dekker stated, “If your explanation of an accident still relies on unmotivated people, you have more work to do."

Systemic View - Complexity

In reality, most of us work in complex environments that involve multiple interacting factors and systems, and the linear view has a difficult time dealing with this complexity. James Reason (1997) convincingly argued for the complex nature of work environments with his “Swiss Cheese” model. In his view, accidents are the result of active failures at the “sharp end” (where the work is actually done) and “latent conditions,” which include many organizational decisions at the “blunt end” (higher management) of the work process. Because barriers fail, there are times when the active failures and latent conditions align, allowing an incident to occur. More recently, Hollnagel (2004) has argued that active failures are a normal part of complex workplaces because individuals are required to adapt their performance to a constantly changing environment and to the pressure to balance production and safety. As a result, accidents “emerge” as this adaptation occurs (Hollnagel refers to this adaptive process as the “Efficiency-Thoroughness Trade-off”). Dekker (2006) has added to this view the idea that this adaptation is normal and even “locally rational” to the individual committing the active failure, because he or she is responding to a context that may not be apparent to those observing performance in the moment or investigating a resulting incident. Focusing only on the active failure as the result of “human error” misses the real reasons that it occurs at all. Rather, understanding the complex context that is eliciting the decision to behave in an “unsafe” manner will provide more meaningful information. It is much easier to engineer the context than it is to engineer the person. While a person is involved in almost all incidents in some manner, human error is seldom the “sufficient” cause of the incident because of the complexity of the environment in which it occurs. Attempting to explain and prevent incidents from a simple linear viewpoint will almost always leave out contributory (and often non-obvious) factors that drove the decision in the first place and thus led to the incident.

Why Does it Matter?

Thinking of human error as a normal and adaptive component of complex workplace environments leads to a different approach to preventing the incidents that can emerge out of those environments. It requires that we gain an understanding of the many and often surprising contextual factors that can lead to the active failure in the first place. If we are going to engineer safer workplaces, we must start with something that does not look like engineering at all - namely, candid, informed and skillful conversations with and among people throughout the organization. These conversations should focus on determining the contextual factors that are driving unsafe actions in the first place. It is only with this information that we can effectively eliminate what James Reason called “latent conditions,” which create the contexts that elicit the unsafe action. Additionally, this information should be used in the moment to eliminate active failures and also allowed to flow to decision makers at the “blunt end,” so that the system can be engineered to maximize safety. Your safety world view really does matter.

Safety Culture Shift: Three Basic Steps


In the world of safety, culture is a big deal. In one way or another, culture helps to shape nearly everything that happens within an organization - from shortcuts taken by shift workers to budget cuts made by managers. As important as it is, though, it seems equally confusing and intractable. Culture appears to emerge as an unexpected by-product of organizational minutia: a brief comment made by a manager, misunderstood by direct reports, propagated during water cooler conversations, and compounded with otherwise unrelated management decisions to downsize, outsource, reassign, promote, terminate… Safety culture can either grow wild and unmanaged - unpredictably influencing employee performance and elevating risk - or it can be understood and deliberately shaped to ensure that employees uphold the organization’s safety values.

Pin it Down

The trick is to pin it down. A conveniently simple way of capturing the idea of culture is to say that it is the “taken-for-granted way of doing things around here;” but even this is not enough. If we can understand the mechanics that drive culture, we will be better positioned to shift it in support of safety. The good news is that, while presenting itself as extraordinarily complicated, culture is remarkably ordinary at its core. It is just the collective result of our brains doing what they always do.

Our Brains at Work

Recall the first time that you drove a car. While you might have found it exhilarating, it was also stressful and exhausting. Recall how unfamiliar everything felt and how fast everything seemed to move around you. Coming to a four-way stop for the first time, your mind was racing to figure out when and how hard to press the brake pedal, where the front of the car should stop relative to the stop sign, how long you should wait before accelerating, which cars at the intersection had the right-of-way, etc. While we might make mistakes in situations like this, we should not overlook just how amazing it is that our brains can take in such a vast amount of unfamiliar information and, in a near flash, come up with an appropriate course of action. We can give credit to the brain’s “executive system” for this.

Executive or Automatic?

But this is not all that our brains do. Because the executive system has its limitations - it can only handle a small number of challenges at a time, and appears to consume an inordinate amount of our body’s energy in doing so - we would be in bad shape if we had to go through the same elaborate and stressful mental process for the rest of our lives while driving. Fortunately, our brains also “automate” the efforts that work for us. Now, when you approach a four-way-stop, your brain is free to continue thinking about what you need to pick up from the store before going home. When we come up with a way of doing something that works - even elaborate processes - our brains hand it over to an “automatic system.” This automatic system drives our future actions and decisions when we find ourselves in similar circumstances, without pestering the executive system to come up with an appropriate course of action.

Why it Matters

What does driving have to do with culture? Whatever context we find ourselves in - whether it is a four-way-stop or a pre-job planning meeting - our brains take in the range of relevant information, come up with an effective course of action, try it out and, when it works, automate it as “the way to do things in this situation.”

For Example

Let’s imagine that a young employee leaves new-hire orientation with a clear understanding of the organization’s safety policies and operating procedures. At that moment, assuming that he wants to succeed within the organization, he believes that proactively contributing during a pre-job planning meeting will lead to recognition and professional success.

Unfortunately, at many companies, the actual ‘production’ context is quite different than the ‘new-hire orientation’ context. There are hurried supervisors, disinterested ‘old timers’, impending deadlines and too little time, and what seemed like the right course of action during orientation now looks like a sure-fire way to get ostracized and opposed. His brain’s “executive system” quickly determines that staying quiet and “pencil whipping” the pre-job planning form like everyone else is a better course of action; and in no time, our hapless new hire is doing so automatically - without thinking twice about whether it is the right thing to do.

Changing Culture

If culture is the collective result of brains figuring out how to thrive in a given context, then changing culture comes down to changing context - changing the “rules for success.” If you learned to drive in the United States but find yourself at an intersection in England, your automated way of driving will likely get you into an accident. When the context changes, the executive system has to wake up, find a new way to succeed given the details of the new context, and then automate that for the future.

How does this translate to changing a safety culture? It means that, to change safety culture, we need to change the context that employees work in so that working safely and prioritizing safety when making decisions leads to success.

Three Basic Steps:

Step 1

Identify the “taken-for-granted” behaviors that you want employees to adopt. Do you want employees to report all incidents and near-misses? Do you want managers to approve budget for safety-critical expenditures?

This exercise amounts to defining your safety culture. Avoid the common mistake of falling back on vague, safety-oriented value statements. If you aren’t specific here, you will not have a solid foundation for the next two steps.

Step 2

Analyze employees’ contexts to see what is currently inhibiting or competing against these targeted, taken-for-granted behaviors. Are shift workers criticized or blamed by their supervisors for near-misses? Are the managers who cut cost by cutting corners also the ones being promoted?

Be sure to look at the entire context. Oftentimes, factors like physical layout, reporting structure or incentive programs play a critical role in inhibiting these desired, taken-for-granted behaviors.

Step 3

Change the context so that, when employees exhibit the desired behaviors that you identified in Step 1, they are more likely to thrive within the organization.

“Thriving” means that employees receive recognition, satisfy the expectations of their superiors, avoid resistance and alienation, achieve their professional goals, and avoid conflicting demands for their time and energy, among other things.

Give It a Try

Shifting culture comes down to strategically changing the context that people find themselves in.  Give it a try and you might find that it is easier than you expected. You might even consider trying it at home. Start at Step 1; pick one simple "taken-for-granted" behavior and see if you can get people to automate this behavior by changing their context. If you continue the experiment and create a stable working context that consistently encourages safe performance, working safely will eventually become "how people do things around here."

The Safety Side Effect

Things Supervisors do that, Coincidentally, Improve Safety


Common sense tells us that leaders play a special role in the performance of their employees, and there is substantial research to help us understand why this is the case.  For example, Stanley Milgram’s famous studies of obedience in the 1960s demonstrated that, to their own dismay, people will administer what they think are painful electric shocks to strangers when asked to do so by an authority figure.  This study and many others reveal that leaders are far more influential over the behavior of others than is commonly recognized.  

In the workplace, good leadership usually translates to better productivity, efficiency and quality.  Coincidentally, as research demonstrates, leaders whose teams are the most efficient and consistently productive also usually have the best safety records.  These leaders do not necessarily “beat the safety drum” louder than others.  They aren’t the ones with the most “Safety First” stickers on their hardhats or the tallest stack of “near miss” reports on their desks; rather, their style of leadership produces what we call the “Safety Side Effect.”  The idea is this: safe performance is a by-product of the way that good leaders facilitate and focus the efforts of their subordinate employees.  But what, specifically, produces this effect?

Over a 30-year period, we have asked thousands of employees to describe the characteristics of their best boss - the boss who sustained the highest productivity, quality and morale.  This “Best Boss” survey identified 20 consistently recurring characteristics, which we described in detail during our 2012 Newsletter series.  On close inspection, one of these characteristics - “Holds Himself and Others Accountable for Results” - plays a significant role in bringing about the Safety Side Effect.  Best bosses hold a different paradigm of accountability.  Rather than viewing accountability as a synonym for “punishment,” these leaders view it as an honest and pragmatic effort to redirect and resolve failures.  When performance failure occurs, the best boss...

  1. consistently steps up to the failure and deals with it immediately or as soon as possible after it occurs;
  2. honestly explores the many possible reasons WHY the failure occurred, without jumping to the simplistic conclusion that it was one person’s fault; and
  3. works with the employee to determine a resolution for the failure.

When a leader approaches performance failure in this way, it creates a substantially different working environment for subordinate employees - one in which employees:

  1. do not so quickly become defensive when others stop their unsafe behavior
  2. focus more on resolving problems than protecting themselves from blame, and
  3. freely offer ideas for improving their own safety performance.

Your Culture Gap is Showing

The Gap between your Formal and Informal Cultures is as Simple as 'Follow the Leader'

Companies often express frustration that their operations fail to live up to the standards the company has set for itself.  These companies are essentially describing gaps between their formal (company standard) and informal (what actually happens) cultures.  While many factors contribute to this gap, such as communication, size, number of locations and hiring practices, perhaps the single most prevalent force driving informal culture is the behavior of front line managers and supervisors.

One important characteristic of a “Best Boss” is leading by example.  On the surface, this seems like a straightforward and common characteristic of many bosses, but let’s look deeper.  How does the significance of this characteristic extend beyond just the personal esteem in which we hold the boss, to the point that it actually impacts the success of the entire organization?

A workplace is an extremely complex and dynamic organism and the workers themselves will only act in ways that make sense to them in the moment.  If the actions of supervisors suggest that certain behaviors are acceptable, even if they fly in the face of company policy, the employees will be prompted to act in the same manner as their leader.  Even worse, if the boss is allowed to pick and choose which rules to follow, he or she is giving unspoken permission for others to do the same.

Let’s look at a specific example.  There is a manufacturing company that has very high safety standards, including the proper use of PPE (Personal Protective Equipment).  The plant manager is well known to show up on the manufacturing floor wearing his Nike training shoes and a hat of his favorite football team.  While he may try to justify not wearing PPE in his own mind, what he fails to recognize is the precedent he is setting for the workforce.  After all, if the boss can wear his tennis shoes on the plant floor, why can’t the others?  Not only is he not modeling the proper standard, he has now set the precedent that the standards themselves are simply suggestions and not to be taken seriously.

Some of you may be asking yourself, “But what if I make a simple mistake and now I’m leading the entire team down the wrong road?”  There is actually no better time to demonstrate the characteristic of leading by example than when you make a mistake.  Simply stating your mistake and the steps that you are going to take to rectify the situation shows that you do in fact care about the standards of the company and, most importantly, that you are willing to hold yourself accountable to those standards.  The resulting impact on informal culture is that the formal culture will be seen as worthy of being embraced, and that everyone, especially leaders, is able and prepared to redirect and be redirected when performance doesn't match the desired culture.

We won’t go into detail in this post about what leaders do to redirect bad performance, in themselves or others, but you can click here to read an archived newsletter on that topic.