Safety Performance

Why It Makes Sense to Tolerate Risk


Risk-Taking and Sense-Making

Risk tolerance is a real challenge for nearly all of us, whether we are managing a team in a high-risk environment or trying to get a teenager to refrain from using his cellphone while driving. It is also, unfortunately, a somewhat complicated matter. There are plenty of moving parts. Personalities, past experiences, fatigue and mood have all been shown to affect a person’s tolerance for risk. Apart from trying to change individuals’ “predispositions” toward risk-taking, there is a lot that we can do to help minimize risk tolerance in any given context. The key, as it turns out, is to focus our efforts on the context itself.

If you have followed our blog, you are by now familiar with the idea of “local rationality,” which goes something like this: Our actions and decisions are heavily influenced by the factors that are most obvious, pressing and significant (or, “salient”) in our immediate context.  In other words, what we do makes sense to us in the moment.  When was the last time you did something that, in retrospect, had you mumbling to yourself, “What was I thinking?”  When you look back on a previous decision, it doesn’t always make sense because you are no longer under the influence of the context in which you originally made that decision.

What does local rationality have to do with risk tolerance?  It’s simple.  When someone makes a decision to do something that he knows is risky, it makes sense to him given the factors that are most salient in his immediate context.

If we want to help others be less tolerant of risk, we should start by understanding which factors in a person’s context are likely to lead him to think that it makes sense to do risky things.  There are many factors, ranging from the layout of the physical space to the structure of incentive systems.  Some are obvious; others are not.  Here are a couple of significant but often overlooked factors.

Being in a Position of Relative Power

If you have a chemistry set and a few willing test subjects, give this experiment a shot. Have two people sit in submissive positions (heads downcast, backs slouched) and one person stand over them in a power position (arms crossed, towering and glaring down at the others). After only 60 seconds in these positions, something surprising happens to the brain chemistry of the person in the power position. Testosterone (associated with risk tolerance) rises and cortisol (associated with risk aversion) falls, and this person is now more inclined to do risky things. That’s right: when you are in a position of power relative to others in your context, you are more risk tolerant.

There is an important limiting factor here, though.  If the person in power also feels a sense of responsibility for the wellbeing of others in that context, the brain chemistry changes and he or she becomes more risk averse.  Parents are a great example.  They are clearly in a power-position relative to their children, but because parents are profoundly aware of their role in protecting their children, they are less likely to do risky things.

If you want to limit the effects of relative power-positioning on certain individuals’ risk tolerance - think supervisors, team leads, mentors and veteran employees - help them gain a clear sense of responsibility for the wellbeing of others around them.

Authority Pressure

On a remote job site in West Texas, a young laborer stepped over a pressurized hose on his way to get a tool from his truck.  Moments later, the hose erupted and he narrowly avoided a life-changing catastrophe.  This young employee was fully aware of the risk of stepping over a pressurized hose, and under normal circumstances, he would never have done something so risky; but in that moment it made sense because his supervisor had just instructed him with a tone of urgency to fetch the tool.

It is well documented that people will do wildly uncharacteristic things when instructed to do so by an authority figure. (See Stanley Milgram’s “Study of Obedience.”) The troubling part is that people will do uncharacteristically dangerous things - risking life and limb - under even minor and unintentional pressure from an authority figure. Leaders need to be made aware of their influence and unceasingly demonstrate that, for them, working safely trumps any other instruction.

A Parting Thought

There is certainly more to be said about minimizing risk tolerance, but a critical first step is to recognize that the contexts in which people find themselves, which are the very same contexts that managers, supervisors and parents have substantial control over, directly affect people’s risk tolerance.

So, with that “trouble” employee / relative / friend / child in mind, think to yourself, how might their context lead them to think that it makes sense to do risky things?

Hardwired Inhibitions: Hidden Forces that Keep Us Silent in the Face of Disaster


Employees’ willingness and ability to stop unsafe operations is one of the most critical parts of any safety management system, and here’s why: Safety managers cannot be everywhere at once. They cannot write rules for every possible situation. They cannot engineer the environment to remove every possible risk, and when the big events occur, it is usually because of a complex and unexpected interaction of many different elements in the work environment. In many cases, employees working at the front line are not only the first line of defense but quite possibly the most important line of defense against these emergent hazards.

Our 2010 study of safety interventions found that employees intervene in only about 39% of the unsafe operations that they recognize while at work. In other words, employees’ silence is a critical gap in safety management systems, and it is a gap that needs to be honestly explored and resolved.

An initial effort to resolve this problem - Stop Work Authority - has been beneficial, but it is insufficient.  In fact, 97% of the people who participated in the 2010 study said that their company has given them the authority to stop unsafe operations.  Stop Work Authority’s value is in assuring employees that they will not be formally punished for insubordination or slowing productivity.  While fear of formal retaliation inhibits intervention, there are other, perhaps more significant forces that keep people silent.

Some might assume that the real issue is that employees lack sufficient motivation to speak up. This belief is unfortunately common among leadership, reflected in a familiar refrain: “We communicated that it is their responsibility to intervene in unsafe operations, but they still don’t do it. They just don’t take it seriously.” Contrary to this belief, we have spoken one-on-one with thousands of frontline employees, and nearly all of them, regardless of industry, culture, age or other demographic category, genuinely believe that they have a fundamental, moral responsibility to watch out for and help protect their coworkers. Employees’ silence is not simply a matter of poor motivation.

At the heart of this issue is the “context effect.” What employees think about, remember and care about at any given moment is heavily influenced by the specific context in which they find themselves. People literally see the world differently from one moment to the next as a result of the social, physical, mental and emotional factors that are most salient at the time. The key question becomes, “What factors in employees’ production contexts play the most significant role in inhibiting intervention?” While there are many, and they vary from one company to the next, I would like to introduce four common factors:

THE UNIT BIAS

Think about a time when you were focused on something and realized that you should stop to deal with a different, more significant problem, but decided to stick with the original task anyway. That is the unit bias. It is a distortion in the way we view reality. In the moment, we perceive that completing the task at hand is more important than it really is, and so we end up putting off things that, outside of the moment, we would recognize as far more important. Now imagine that an employee is focused on a task and sees a coworker doing something unsafe. “I’ll get to it in a minute,” he thinks to himself.

BYSTANDER EFFECT

This is a well-documented phenomenon whereby we are much less likely to intervene or help others when we are in a group. In fact, the more people there are, the less likely we are to be the ones who speak up.

DEFERENCE TO AUTHORITY

When we are around people with more authority than us, we are much less likely to be the ones who take initiative to deal with a safety issue.  We refrain from doing what we believe we should, because we subtly perceive such action to be the responsibility of the “leader.”  It is a deeply-embedded and often non-conscious aversion to insubordination: When a non-routine decision needs to be made, it is to be made by the person with the highest position power.

PRODUCTION PRESSURE 

When we are under pressure to produce something in a limited amount of time, it does more than make us feel rushed. It literally changes the way we perceive our own surroundings. Things that might otherwise be perceived as risks that need to be stopped are either not noticed at all or are perceived as insignificant compared to the importance of getting things done.

In addition to these four, there are other forces in employees’ production contexts that inhibit them when they should speak up. If we are going to get people to speak up more often, we need to move beyond “Stop Work Authority” and get over the assumption that motivating them will be enough. We need to help employees understand what is inhibiting them in the moment, and then give them the skills to overcome these inhibitors so that they can do what they already believe is right - speak up to keep people safe.

Safety Intervention: A Dynamic Solution to Complex Safety Problems


If your organization is like many that we see, you are spending ever-increasing time and energy developing SOPs, instituting regulations from various government “alphabet” agencies, buying new PPE and equipment, and generally engineering your workplace to be as safe as possible. While this is both invaluable and required to be successful in our world today, is it enough? The short answer is “no.”

These things are what we refer to as mechanical and procedural safeguards, and they are absolutely necessary but also absolutely inadequate. You see, mechanical and procedural safeguards are static, slow to change, and offer limited effectiveness, while our workplaces are incredibly complex, dynamic, and hard to predict. We simply can’t create enough barriers to cover every possible hazard in the world we live in. In short, you have to do it, but you shouldn’t think that your job stops there.

For us to create safety in such a complex environment, we will have to find something else that permeates the organization, is responsive, and is also creative. The good news is that you already have the required ingredient: people. If we can get our people to speak up effectively when they see unsafe acts, they can be the missing element that is everywhere in your organization, can react instantly, and can come up with creative fixes. But can it be that easy? Again, the short answer is “no.”

In 2010 we completed a large-scale, cross-industry study into what happens when someone observes another person engaged in an unsafe action. We wanted to know how often people spoke up when they saw an unsafe act. If they didn’t speak up, why not? If they did speak up, how did the other person respond? Did they become angry, defensive, or show appreciation? Did the intervention create immediate and long-term behavior change? And much more. I don’t have the time and space to go into the entire findings of our research (see our EHS Today article); just know that people don’t speak up very often (39% of the time), and when they do speak up, they tend to do a poor job. If you evaluate our research findings in light of a long history of research into cognitive biases (e.g., the fundamental attribution error, hindsight bias) showing that humans tend to be hardwired to fail when the moment of intervention arises, you know where the 61% failure rate of speaking up comes from: it’s human nature.

We decided to test a theory: could we fight human nature simply by giving frontline workers a set of skills to intervene when they saw an unsafe action by one of their coworkers? We taught them how to talk to the person in a way that eliminated defensiveness, identified the actual reasons the person did it the unsafe way, and ultimately found a fix to make sure the behavior changed immediately and sustainably. We wanted to know if simply learning these skills made it more likely that people would speak up, and if so, whether that 90-second intervention would be dynamic and creative enough to produce immediate and sustainable behavior change. What we found in one particular company gave us our answer. Simply learning intervention skills made the workforce 30% more likely to speak up. Just knowing how to talk to people made it less likely that people fell victim to the cognitive biases I mentioned earlier. And when they did speak up, behavior changes happened at a far greater rate and lasted much longer than they ever did previously, which helped result in a 57% reduction in Total Recordable Incident Rate (TRIR) and an 89% reduction in severity rates.
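For readers unfamiliar with the metric, TRIR is OSHA’s standard rate of recordable incidents per 200,000 hours worked (100 full-time employees × 2,000 hours per year). A minimal sketch of how such a reduction is calculated - the before/after figures here are made up for illustration, not the company’s actual data:

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """OSHA Total Recordable Incident Rate: recordable incidents per
    200,000 hours worked (100 full-time workers x 2,000 hours/year)."""
    return recordable_incidents * 200_000 / hours_worked

# Hypothetical figures for illustration only.
before = trir(12, 600_000)       # 4.00 recordables per 200k hours
after = trir(5, 580_000)         # ~1.72
reduction = 1 - after / before   # ~0.57, i.e. a 57% reduction
print(f"before={before:.2f} after={after:.2f} reduction={reduction:.0%}")
```

The 200,000-hour normalizer is what lets a small contractor and a large operator compare rates directly, regardless of headcount.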

I would never tell a safety professional to stop working diligently on their mechanical and procedural barriers; they should be a significant component of the foundation on which safety programs are built. However, human intervention should be the component that holds the program together when things get crazy out in the real world. It can be as simple as helping your workers understand their propensity for not intervening and then giving them the ability and confidence to speak up when they do see something unsafe.

Human Error and Complexity: Why your “safety world view” matters


Have you ever thought about or looked at pictures of your ancestors and realized, “I have that trait too!” Just like your traits are in large part determined by random combinations of genes from your ancestry, the history behind your safety world view is probably largely the product of chance - for example, whether you studied Behavioral Psychology or Human Factors in college, which influential authors’ views you were exposed to, who your first supervisor was, or whether you worked in the petroleum, construction or aeronautical industry. Our “Safety World View” is built over time and dramatically impacts how we think about, analyze and strive to prevent accidents.

Linear View - Human Error

Let’s briefly look at two views, Linear and Systemic, not because they are the only possible ones, but because they have had, and are currently having, the greatest impact on the world of safety.

The Linear View is integral to what is sometimes referred to as the “Person Approach,” exemplified by traditional Behavior Based Safety (BBS), which grew out of the work of B.F. Skinner and the application of his research to Applied Behavior Analysis and Behavior Modification. Whether we have thought about it or not, much of the industrial world is operating on this linear theoretical framework. We attempt to understand events by identifying and addressing a single cause (antecedent) or distinct set of causes, which elicit unsafe actions (behaviors) that lead to an incident (consequences). This view impacts both how we try to change unwanted behavior and how we go about investigating incidents. It naturally leads us to conclude in many cases that human error is, or can be, THE root cause of the incident. In fact, it is routinely touted that “research shows that human error is the cause of more than 90 percent of incidents.”

We are also conditioned, and cognitively biased, to find this linear model appealing. I use the word “conditioned” because linear cause-and-effect explains a lot of what happens in our daily lives, where situations are relatively clean and simple, so we naturally extend this way of thinking to more complex situations where it is less appropriate. Additionally, because we view accidents after the fact, the well-documented phenomenon of hindsight bias leads us to trace the cause linearly back to an individual, and since behavior is the core of our model, we have a strong tendency to stop there. The assumption is that human error (the unsafe act) is a conscious, “free will” decision and is therefore driven by psychological factors such as complacency, lack of motivation, carelessness or other negative attributes.

This leads to the equally well-documented phenomenon of the Fundamental Attribution Error, whereby we tend to attribute failure on the part of others to negative personal qualities such as inattention or lack of motivation, thus leading to the assignment of causation and blame. This assignment of blame may feel warranted and even satisfying, but it does not necessarily deal with the real “antecedents” that triggered the unsafe behavior in the first place. As Sidney Dekker put it, “If your explanation of an accident still relies on unmotivated people, you have more work to do.”

Systemic View - Complexity

In reality, most of us work in complex environments involving multiple interacting factors and systems, and the linear view has a difficult time dealing with this complexity. James Reason (1997) convincingly argued for the complex nature of work environments with his “Swiss Cheese” model of accident causation. In his view, accidents are the result of active failures at the “sharp end” (where the work is actually done) and “latent conditions,” which include many organizational decisions made at the “blunt end” (higher management) of the work process. Because barriers fail, there are times when the active failures and latent conditions align, allowing an incident to occur. More recently, Hollnagel (2004) has argued that active failures are a normal part of complex workplaces because individuals must constantly adapt their performance to a changing environment while balancing production and safety; as a result, accidents “emerge” as this adaptation occurs (Hollnagel calls this adaptive process the “Efficiency-Thoroughness Trade-Off”). Dekker (2006) has added the idea that this adaptation is normal and even “locally rational” to the individual committing the active failure, because he or she is responding to a context that may not be apparent to those observing performance in the moment or investigating a resulting incident.

Focusing only on the active failure as the result of “human error” misses the real reasons that it occurs at all. Understanding the complex context that is eliciting the decision to behave in an “unsafe” manner will provide more meaningful information, and it is much easier to engineer the context than it is to engineer the person. While a person is involved in almost all incidents in some manner, human error is seldom a “sufficient” cause of the incident, given the complexity of the environment in which it occurs. Attempting to explain and prevent incidents from a simple linear viewpoint will almost always leave out contributory (and often non-obvious) factors that drove the decision, and thus the incident, in the first place.

Why Does it Matter?

Thinking of human error as a normal and adaptive component of complex workplace environments leads to a different approach to preventing the incidents that can emerge out of those environments. It requires that we gain an understanding of the many and often surprising contextual factors that can lead to the active failure in the first place. If we are going to engineer safer workplaces, we must start with something that does not look like engineering at all - namely, candid, informed and skillful conversations with and among people throughout the organization. These conversations should focus on determining the contextual factors that are driving the unsafe actions. It is only with this information that we can effectively eliminate what James Reason called “latent conditions” - the conditions creating the contexts that elicit the unsafe action in the first place. This information should be used in the moment to eliminate active failures and should also be allowed to flow to decision makers at the “blunt end,” so that the system can be engineered to maximize safety. Your safety world view really does matter.

Safety Culture Shift: Three Basic Steps


In the world of safety, culture is a big deal. In one way or another, culture helps to shape nearly everything that happens within an organization - from shortcuts taken by shift workers to budget cuts made by managers. As important as it is, though, it seems equally confusing and intractable. Culture appears to emerge as an unexpected by-product of organizational minutiae: a brief comment made by a manager, misunderstood by direct reports, propagated during water-cooler conversations, and compounded with otherwise unrelated management decisions to downsize, outsource, reassign, promote, terminate… Safety culture can either grow wild and unmanaged - unpredictably influencing employee performance and elevating risk - or it can be understood and deliberately shaped to ensure that employees uphold the organization’s safety values.

Pin it Down

The trick is to pin it down. A conveniently simple way of capturing the idea of culture is to say that it is the “taken-for-granted way of doing things around here;” but even this is not enough. If we can understand the mechanics that drive culture, we will be better positioned to shift it in support of safety. The good news is that, while presenting itself as extraordinarily complicated, culture is remarkably ordinary at its core. It is just the collective result of our brains doing what they always do.

Our Brains at Work

Recall the first time that you drove a car. While you might have found it exhilarating, it was also stressful and exhausting. Recall how unfamiliar everything felt and how fast everything seemed to move around you. Coming to a four-way stop for the first time, your mind was racing to figure out when and how hard to press the brake pedal, where the front of the car should stop relative to the stop sign, how long you should wait before accelerating, which cars at the intersection had the right-of-way, etc. While we might make mistakes in situations like this, we should not overlook just how amazing it is that our brains can take in such a vast amount of unfamiliar information and, in a near flash, come up with an appropriate course of action. We can give credit to the brain’s “executive system” for this.

Executive or Automatic?

But this is not all that our brains do. Because the executive system has its limitations - it can only handle a small number of challenges at a time, and appears to consume an inordinate amount of our body’s energy in doing so - we would be in bad shape if we had to go through the same elaborate and stressful mental process for the rest of our lives while driving. Fortunately, our brains also “automate” the efforts that work for us. Now, when you approach a four-way-stop, your brain is free to continue thinking about what you need to pick up from the store before going home. When we come up with a way of doing something that works - even elaborate processes - our brains hand it over to an “automatic system.” This automatic system drives our future actions and decisions when we find ourselves in similar circumstances, without pestering the executive system to come up with an appropriate course of action.

Why it Matters

What does driving have to do with culture? Whatever context we find ourselves in - whether it is a four-way-stop or a pre-job planning meeting - our brains take in the range of relevant information, come up with an effective course of action, try it out and, when it works, automate it as “the way to do things in this situation.”

For Example

Let’s imagine that a young employee leaves new-hire orientation with a clear understanding of the organization’s safety policies and operating procedures. At that moment, assuming that he wants to succeed within the organization, he believes that proactively contributing during a pre-job planning meeting will lead to recognition and professional success.

Unfortunately, at many companies, the actual ‘production’ context is quite different than the ‘new-hire orientation’ context. There are hurried supervisors, disinterested ‘old timers’, impending deadlines and too little time, and what seemed like the right course of action during orientation now looks like a sure-fire way to get ostracized and opposed. His brain’s “executive system” quickly determines that staying quiet and “pencil whipping” the pre-job planning form like everyone else is a better course of action; and in no time, our hapless new hire is doing so automatically - without thinking twice about whether it is the right thing to do.

Changing Culture

If culture is the collective result of brains figuring out how to thrive in a given context, then changing culture comes down to changing context - changing the “rules for success.” If you learned to drive in the United States but find yourself at an intersection in England, your automated way of driving will likely get you into an accident. When the context changes, the executive system has to wake up, find a new way to succeed given the details of the new context, and then automate that for the future.

How does this translate to changing a safety culture? It means that, to change safety culture, we need to change the context that employees work in so that working safely and prioritizing safety when making decisions leads to success.

Three Basic Steps:

Step 1

Identify the “taken-for-granted” behaviors that you want employees to adopt. Do you want employees to report all incidents and near-misses? Do you want managers to approve budget for safety-critical expenditures?

This exercise amounts to defining your safety culture. Avoid the common mistake of falling back on vague, safety-oriented value statements. If you aren’t specific here, you will not have a solid foundation for the next two steps.

Step 2

Analyze employees’ contexts to see what is currently inhibiting or competing against these targeted, taken-for-granted behaviors. Are shift workers criticized or blamed by their supervisors for near-misses? Are the managers who cut cost by cutting corners also the ones being promoted?

Be sure to look at the entire context. Oftentimes, factors like physical layout, reporting structure or incentive programs play a critical role in inhibiting these desired, taken-for-granted behaviors.

Step 3

Change the context so that, when employees exhibit the desired behaviors that you identified in Step 1, they are more likely to thrive within the organization.

“Thriving” means that employees receive recognition, satisfy the expectations of their superiors, avoid resistance and alienation, achieve their professional goals, and avoid conflicting demands for their time and energy, among other things.

Give It a Try

Shifting culture comes down to strategically changing the context that people find themselves in.  Give it a try and you might find that it is easier than you expected. You might even consider trying it at home. Start at Step 1; pick one simple "taken-for-granted" behavior and see if you can get people to automate this behavior by changing their context. If you continue the experiment and create a stable working context that consistently encourages safe performance, working safely will eventually become "how people do things around here."

The Human Factor - Missing from Behavior Based Safety


Since the early 1970s, there has been interest in applying Applied Behavior Analysis (ABA) techniques to the improvement of safety performance in the workplace. The pioneering work of B.F. Skinner on Operant Conditioning in the 1940s, ’50s and ’60s led to a focus on changing unsafe behavior using observation and feedback techniques. Thousands of organizations have attempted to use various aspects of ABA to improve safety, with varying levels of success. This approach (referred to as Behavior Based Safety, or BBS) typically attempts to increase the chances that desired “safe” behavior will occur in the future by first identifying the desired behavior, observing the performance of individuals in the workplace and then applying positive reinforcement (consequences) following the desired behavior. The idea is that as safe behavior is strengthened, unsafe behavior will disappear (“extinguish”).

The Linear View

Traditionally, incidents/accidents have been viewed as a series of cause and effect events that can be understood and ultimately prevented by interrupting the chain of events in some way. With this “Linear” view of accident causation, there is an attempt to identify the root cause of the incident, which is often determined to be some form of “Human Error” due to an unsafe action. The Linear view can be depicted as follows:

Event “A” (Antecedent) → Behavior “B” → Undesired Event → Consequence “C”

Driven by the views of Skinner and others, Behavioral Psychology and BBS have been concerned exclusively with what can be observed. The issue is that, while people do behave overtly, they also have “cognitive” capacity to observe their environment, think about it and make calculated decisions about how to behave in the first place. While Behavioral Psychologists acknowledge that this occurs, they argue that the “causes” of performance can be explained through an analysis of the Antecedents within the environment. However, since they also take a linear view, they tend to limit the “causal” antecedent to a single source known as the “root cause”.

Human Factors

The field of Human Factors Psychology has provided a body of research that has demonstrated that many, if not most, accidents evolve out of complex systems that are not necessarily linear. Some researchers call this a “Systemic” view of incidents. The argument is that incidents occur in complex environments, characterized as involving multiple interacting systems rather than just simple linear events. That is, multiple interacting events (Antecedents) combine to create the “right” context to elicit the behavior that follows.

In such complex environments, individuals are constantly evaluating multiple contextual factors to allow them to make decisions about how to act, rather than simply responding to single Antecedents that happen to be present. In this view, the decision to act in a specific (safe or unsafe) manner is directed by sources of information, some of which are only available to the individual and not obvious to on-lookers or investigators who attempt to determine causation following an incident.

Local Rationality

This is referred to as “Local Rationality” because the decision to act in a certain way makes perfect sense to the individual in the local context given the information that he has in the moment. The local rationality principle says that people do what makes sense given the situation, operational pressures and organizational norms in which they find themselves.

People don’t want to get hurt, so when they do something unsafe, it is usually because they are not aware that what they are doing is unsafe, they don’t recognize the hazard, or they don’t fully realize the risk associated with what they are doing. In some cases they may be aware of the risk but, because of other contextual factors, decide to act unsafely anyway. (Have you ever driven over the speed limit because you were late for an appointment?) The key here is developing an understanding of why the individual made, or is making, the decision to behave in a particular way.

A More Complete Understanding

We believe that the most fruitful way to understand this is to bring together the rich knowledge provided by behavioral research and human factors (including cognitive & social psychological) research to create a more complete understanding of what goes on when people make decisions to take risks and act in unsafe ways. We believe it is time to put the Human Factor into Behavior Based Safety.

Unsafe Behavior Is a Downstream Indicator

At first glance, the suggestion that behavior is a “downstream” indicator may seem ridiculous, because in the world of safety and accident prevention, behavior is almost universally viewed as an “upstream or leading” indicator.  The more unsafe behaviors that are occurring, the more likely you are to have an undesired event and thus an increase in incident rate (downstream or lagging indicator).  This view is the basis for most “behavior based safety” programs.

Over the past few years, however, there has been a great deal of research in the area of human factors which suggests that there are variables much further upstream than behavior that can help us decrease the chances of an incident.  The human factors approach views an individual’s behavior as a component of a much more complex system which includes contextual factors such as social (supervisory and peer) climate, organizational climate (rules, values, incentives, etc.), environmental climate (weather, equipment, signage, etc.), and regulatory climate (OSHA, BOEMRE, etc.).  Individuals work within these climates, evaluate action based on their interpretation of these climates and then act based on that evaluation.

Research has shown that individuals, for the most part, make rational decisions based on the information that they have at their disposal in the moment.  If an individual “understands” that her boss really rewards speed, then she is likely to pick up speed even if she is not capable of working safely at that pace, thus increasing the likelihood of an incident.  While speed of performance is a behavior, it is the result of the person’s knowledge of the demands of the climate and is therefore a downstream indicator.  Evaluating and impacting the climate is thus more upstream and should be the focus of our intervention programs.  When we can impact the decision-making process (upstream), we have a much better chance of creating safe/desired behavior (downstream).

Why Rule Breaking Makes Sense

Complexity & Rationality

Why do employees decide to break the rules?  Do it their way?  Resist change?  It doesn’t make any sense!

It can be frustrating, and often perplexing, when employees fail to adhere to company policies and procedures, especially when those policies and procedures are in their best interest. There is a useful way to think about this issue: What employees do makes sense...to them; but the complexity of work environments makes it hard to understand why it makes sense to them.

We live and work in complex environments. It helps to think of our environments as systems with overlapping and interacting components - including people, things, rules, values, etc. - which are, in turn, complex sub-systems. One principle of complex systems is that people tend to respond only to the limited information presented to them locally. We make decisions based on what makes sense at the local level, which is called “local rationality”.

The policies and procedures contained in the corporate manual are only influential if they are brought to bear on the daily lives of people in the workplace. If those policies and procedures only exist in the manual and are not made a part of the local workplace, then they don’t exist in reality and will not have an impact on performance. They will lack influence.

Companies have policies and procedures for a reason - to create good, reliable results; so it is the responsibility of supervisors to bring those policies and procedures to life in the workplace. By intentionally incorporating formal policies and procedures into the “local” work environments of employees - through conversation, feedback, modeling, etc. - supervisors make it “rational” to follow the rules.

The Safety Side Effect

Things Supervisors Do That, Coincidentally, Improve Safety

Common sense tells us that leaders play a special role in the performance of their employees, and there is substantial research to help us understand why this is the case.  For example, Stanley Milgram’s famous studies of obedience in the 1960s demonstrated that, to their own dismay, people will administer what they think are painful electric shocks to strangers when asked to do so by an authority figure.  This study and many others reveal that leaders are far more influential over the behavior of others than is commonly recognized.  

In the workplace, good leadership usually translates to better productivity, efficiency and quality.  Coincidentally, as research demonstrates, leaders whose teams are the most efficient and consistently productive also usually have the best safety records.  These leaders do not necessarily “beat the safety drum” louder than others.  They aren’t the ones with the most “Safety First” stickers on their hardhats or the tallest stack of “near miss” reports on their desks; rather, their style of leadership produces what we call the “Safety Side Effect.”  The idea is this: Safe performance is a by-product of the way that good leaders facilitate and focus the efforts of their subordinate employees.  But what, specifically, produces this effect?

Over a 30-year period, we have asked thousands of employees to describe the characteristics of their best boss - the boss who sustained the highest productivity, quality and morale.  This “Best Boss” survey identified 20 consistently recurring characteristics, which we described in detail during our 2012 Newsletter series.  On close inspection, one of these characteristics - “Holds Himself and Others Accountable for Results” - plays a significant role in bringing about the Safety Side Effect.  Best bosses hold a different paradigm of accountability.  Rather than viewing accountability as a synonym for “punishment,” these leaders view it as an honest and pragmatic effort to redirect and resolve failures.  When performance failure occurs, the best boss...

  1. consistently steps up to the failure and deals with it immediately or as soon as possible after it occurs;
  2. honestly explores the many possible reasons WHY the failure occurred, without jumping to the simplistic conclusion that it was one person’s fault; and
  3. works with the employee to determine a resolution for the failure.

When a leader approaches performance failure in this way, it creates a substantially different working environment for subordinate employees - one in which employees:

  1. do not so quickly become defensive when others stop their unsafe behavior,
  2. focus more on resolving problems than protecting themselves from blame, and
  3. freely offer ideas for improving their own safety performance.

Your Culture Gap is Showing

The Gap between Your Formal and Informal Cultures Is as Simple as 'Follow the Leader'

Companies often express frustration that their operations fail to live up to the standards they set for themselves.  These companies are essentially describing gaps between their formal (company standard) and informal (what actually happens) cultures.  While many factors contribute to this gap - communication, size, number of locations and hiring practices among them - perhaps the single most powerful force driving informal culture is the behavior of front-line managers and supervisors.

One important characteristic of a “Best Boss” is leading by example.  On the surface, this seems like a straightforward and common characteristic of many bosses, but let’s look deeper.  How does the significance of this characteristic extend beyond the personal esteem in which we hold the boss to the point that it actually impacts the success of the entire organization?

A workplace is an extremely complex and dynamic organism and the workers themselves will only act in ways that make sense to them in the moment.  If the actions of supervisors suggest that certain behaviors are acceptable, even if they fly in the face of company policy, the employees will be prompted to act in the same manner as their leader.  Even worse, if the boss is allowed to pick and choose which rules to follow, he or she is giving unspoken permission for others to do the same.

Let’s look at a specific example.  There is a manufacturing company that has very high safety standards, including the proper use of PPE (Personal Protective Equipment).  The plant manager is well known to show up on the manufacturing floor wearing his Nike training shoes and a hat of his favorite football team.  While he may try to justify not wearing PPE in his own mind, what he fails to recognize is the precedent he is setting for the workforce.  After all, if the boss can wear his tennis shoes on the plant floor, why can’t the others?  Not only is he not modeling the proper standard, he has now set the precedent that the standards themselves are simply suggestions and not to be taken seriously.

Some of you may be asking yourselves, “But what if I make a simple mistake and now I’m leading the entire team down the wrong road?”  There is actually no better time to demonstrate the characteristic of leading by example than when you make a mistake.  Simply stating your mistake and the steps that you are going to take to rectify the situation shows that you do in fact care about the standards of the company and, most importantly, that you are willing to hold yourself accountable to them.  The resulting impact on the informal culture is that the formal culture will be seen as worthy of being embraced, and that everyone - especially leaders - is able and prepared to redirect, and be redirected for, performance that doesn't match the desired culture.

We won’t go into detail in this post about what leaders do to redirect bad performance, in themselves or others, but you can click here to read an archived newsletter on that topic.

Can you work incident free without the use of punishment?

I was speaking recently to a group of mid-level safety professionals about redirecting unwanted behaviors and making change within individual and systemic safety systems.  I had one participant who was particularly passionate about his views on changing the behaviors of workers.  According to him, one cannot be expected to change behavior or work incident free without at least threatening the use of punitive actions.  In his own words, “you cannot expect them to work safely if you can’t punish them for not working safely.”  He was also quite vocal in his assertion that it is of little use to determine which contextual factors are driving an unsafe behavior.  Again quoting him, “why do I need to know why they did it unsafely?  If they can’t get it done, find somebody that can.”  

What an Idiot!

I meet managers like this from time to time and I’m immediately driven to wonder what it must be like to work for such a person.  How could a person like this have risen in the ranks of his corporate structure?  How could such an idiot...oh, wait.  Am I not making the same mistakes that I now silently scold him for?  You see, when people do things that we see as evil, stupid, or just plain wrong, there are two incredibly common and powerful principles at play.  The first principle is called the Fundamental Attribution Error (FAE) and, if allowed to take over one’s thought process, it will make a tyrant out of the most pleasant of us.  The FAE says that when we see people do things that we believe to be undesirable, we attribute it to them being flawed in some way or having bad intentions.  They are stupid, evil, heartless, or just plain incompetent.  If we assume these traits to be the driving factor of an unsafe act and we have organizational power, we will likely move to punish this bad actor for their evil doings.  After all, somebody so (insert evil adjective here) deserves to be punished.  The truth is that most people are good and decent people who just want to do a good job.

Context Matters

This leads us to our second important principle, Local Rationality.  Local Rationality says that when good and decent people do things that are unsafe or break policies or rules, they usually do it without any ill-intent.  In fact, because of their own personal context, they do it because it makes sense to them to do it that way; hence the term “local rationality”.  As a matter of fact, had you or I been in their situation, given the exact same context, chances are we would have done the same thing.  It isn’t motive that normally needs to be changed, it’s context.

With this knowledge, let’s look back at the two questions from our Safety Manager.

  1. “How can I be expected to change behavior or work incident free without threatening to punish the wrong-doers?” and
  2. “Why do I need to know why they did it unsafely?  If they can’t get it done, find somebody that can.”

Once we understand that, in general, people don’t knowingly and blatantly do unsafe things or break rules, but rather do so because of a possibly flawed work system - e.g., improper equipment, pressure from others, lack of training - we have the ability to calmly have a conversation to determine why they did what they did.  In other words, we determine the context that drove the person to rush, cut corners, use improper tools, etc.  Once we know why they did it, we then have a chance of creating lasting change by changing the contextual factors that led to the unsafe act.

Your key take-aways: 
  1. When you see what you think is a pile of stupidity, be curious as to where it came from.  Otherwise, you may find yourself stepping in it yourself.
  2. Maybe it wasn’t stupidity at all.  Maybe it was just the by-product of the context in which they work.  Find a fix together and you may both come out smelling like roses.

Because I Said So! The Importance of “WHY”

Sending a clear message, such as an assignment to an employee, requires that we make sure that six points are understood: WHO-WHAT-WHERE-WHEN-HOW & WHY.  Sometimes we send mixed or unclear messages because we leave out one or more of these points.  This can happen because we are pressed for time, we assume understanding, or we just don’t see the importance of that point.  Failure to communicate any of these points could lead to failure, but one point in particular can really impact motivation.

In most organizations, there are tasks that nobody enjoys doing.  They may be repetitive or noxious, but they have to get done anyway.  For example, some of our client companies use Behavior Based Safety (BBS) as a component of their comprehensive safety program.  One aspect of many of these BBS programs is the requirement for employees to complete “observation cards” on a regular basis (a repetitive task).  We find that many employees don’t see the importance of this task, so they put it off until the last minute and then “pencil-whip” or “make up” the observations just to satisfy the requirement.  This happens because the employees don’t really understand the “WHY” behind the observation task.  Supervisors assume that employees understand the purpose behind the task, so they don’t take the time to communicate it clearly.  As you might guess, this “false” data can lead management to make misguided safety decisions.  We have found that simply telling employees that their observations are actually used to direct management’s safety decision-making can greatly increase the validity of those observations.

People need to understand why they are being asked to do something that they don’t really like to do.   Simply saying “because I said so” doesn’t work with children and it certainly doesn’t work with employees.  Take the time to clearly communicate the reason behind what you are asking them to do and you will increase motivation.

Is Dissent in the Workplace Good for Results?

We are inclined to conform to what we believe the people around us expect and value.  This has been demonstrated by decades of research into social conformity dating back to the Solomon Asch line studies of the early 1950s.  The crux of this research is that, in small groups, we tend to acquiesce (conform) to the view of the group even if it is not our natural view to begin with.  Think about how this impacts team decision making.  When the majority hold one view, even when we have a different view, we are less likely to express it because dissenters are labeled trouble-makers, and most of us don’t want to be trouble-makers.  Dissent does, however, serve some very important functions.

1.  Dissent boosts group creativity

While conformity results in fewer variations, creativity thrives on a variety of ideas.

2.  Dissent can prevent failures

We conform to what we *believe* others expect and value, but sometimes people are doing things simply because they aren't aware of the possible negative consequences.

For example, in the safety arena, dissent (which we call ‘Intervention’) helps to prevent undesired consequences by stopping an unsafe behavior.  Imagine that you see two co-workers put a tool into service that you can see is compromised.  Speaking up could mean the difference between operations as normal and a catastrophic event.  Unfortunately, the group norm is to “keep quiet,” so you conform, don’t speak up, and the tool goes into service.

The key to capitalizing on dissent is to do it right.  If you go about it with a critical tone, unshakable confidence that you are right, or punitive intent, not only will it probably do more harm than good, but you are sure to end up with that ‘trouble-maker’ label.

Consequences of Not Speaking Up

What we learned upon completing a large-scale (3,000+ employees) study of safety interventions is that employees directly intervene in only about two of five unsafe actions and conditions that they observe in the workplace.  The obvious concern is that a significant number of unsafe operations that could be stopped are not, which increases the likelihood of incidents and injuries; but this statistic is troubling for a less obvious reason - its cultural implication.

The influence of culture on safe and unsafe employee behavior is of such concern that regulatory bodies, like OSHA in the U.S. and the Health and Safety Executive (HSE) in the U.K., have strongly encouraged organizations to foster “positive safety cultures” as part of their overall safety management programs.

Employees are inclined to behave in a way that they perceive to be congruent (consistent) with the social values and expectations, or “norms,” that constitute their organization’s culture.  These behavioral norms are largely established through social interaction and communication, and in particular through the ways that managers and supervisors instruct, reward and allocate their attention around employees.  When supervisors and opinion leaders in organizations infrequently or inconsistently address unsafe behavior, it leads employees to believe that formal safety standards are not highly valued and employees are not genuinely expected to adhere to them.  In short, the low frequency of safety interventions in the workplace contributes to a culture in which employees are not positively influenced to work safely.

These two implications – (1) that a significant number of unsafe operations are not being stopped, and (2) that safety culture is diminished – compound to create a problematic state of affairs.  Employees are more likely to act unsafely in organizations with diminished safety cultures, yet their unsafe behavior is less likely to be stopped in those organizations.

(Look for the full-length article in the May/June 2011 edition of EHS Today.)