Why Most Organizations Get Strategy Wrong

Most organizations get strategy wrong not because they are lazy, aren’t smart enough, or don’t have the right consultants. In fact, being too smart and having too many consultants can practically guarantee a bad strategy. The number one reason I see is the inability of an organization’s leaders to identify, or at least come to terms with, the main problem facing the organization. Professor Richard Rumelt of the Anderson School at UCLA points this out in his Good Strategy, Bad Strategy talk at the London School of Economics. Some strategies are really goals: grow 10% or achieve a 15% increase in employee satisfaction. Others are just a series of action items: do this, accomplish that. Rumelt calls such strategies “a dog’s breakfast.” I have had the misfortune to see many of them. These aren’t strategies. They’re more like documents playing strategies on TV.

But why don’t organizations focus on the main problem? It is usually obvious. My answer comes down to the difficulty leaders have making tough choices. The trouble with facing the main problem is that it tends to force you to do something about it. Most people don’t want that, because it means canceling projects, shifting resources, upsetting power balances, and “moving people’s cheese,” as Spencer Johnson famously wrote. I have talked with dozens of leaders about key strategic decisions each one knew he or she needed to make. The main problem was usually obvious, but internal resistance to change gave them pause about taking action. The leader has to work with these people every day, after all.

In the private sector you may be able to slash budgets or shift resources quickly, but in most government and non-profit organizations leaders are constrained. Even most private sector organizations resist budget cuts and resource shifts, and a leader’s ability to take swift action is more constrained than it appears. So in practice most strategies start off well but are gradually watered down as various people ensure their interests are protected. Soon your strategy is like Gulliver among the Lilliputians, pinned down by a thousand tiny ropes.

One of the classic examples of this phenomenon is Steve Jobs’ return to Apple. In the years since his firing, Apple had gradually expanded its product line until it offered 16 different types of computers. When his sister asked him which computer to buy, Jobs didn’t know what to tell her, and he was the CEO. Why did this happen?

No team that built a computer wanted to shut down when its product became obsolete. That might mean working with new people they didn’t know, or at a new office, and how would that affect their parking spots? Instead, each team advocated for a new version of its own product regardless of any other effort within Apple. So instead of shutting down the old team and prioritizing resources for the new product, Apple chose to do both. Soon it had 16 product lines.

Steve Jobs realized Apple had a big problem. The company was running out of money and would go bankrupt in six months. He needed to cut costs and stabilize its core customer base. It was simply a matter of survival. He cut 16 confusing and overlapping products down to four in a simple two-by-two matrix: consumer/pro, desktop/laptop. All the resources for the other 12 products were either reassigned or cut.

No one had wanted to make the tough calls to get Apple back on track. The company had employed highly qualified, smart CEOs who wouldn’t make those choices. No one likes it when people lose their jobs, but Jobs understood that either he made the call and cut some people now, or everyone would lose their job in six months.

Leaders earn their stripes by making these decisions. While not every leader faces as dire a dilemma as Jobs did, every leader faces a main problem. Once Jobs stabilized the company, the question was how to generate growth. Once Apple had the iPod, the question was how to handle the music for it. It never ends. Each solution creates a new problem that demands a new strategy.

Do you know what the biggest problem facing your organization is and are you prepared to tackle it?

Episode 8 - Rik Legault, Director of the Office of Public Safety Research (OPSR) for the Department of Homeland Security (DHS) Science and Technology Directorate (S&T), First Responders Group (FRG)

About OPSR

OPSR serves as the science and technology advisor to the agency, the public, and frontline first responders. It houses S&T’s capabilities in social, behavioral, and economic science; law enforcement; digital forensics; and protection of national critical infrastructure.

The office has a broad mandate, with a great deal of need throughout the department and the Homeland Security Enterprise: any state, local, first responder, or public or private entity that contributes to DHS. Its work includes:

  1. Evaluation research and support;

  2. Development of new capabilities, helping people develop evidence-based procedures, policies, and techniques; and

  3. Analysis of those data to improve understanding and operational success.

What are the challenges of measuring R&D?

There is an incentive to successfully deliver something to someone, but not the same appetite for understanding whether you made a mission impact. Measurement costs money. If something works properly, you get credit for it working properly. If it does not, people worry about the negative impact on their lives and careers.

In the past, the Department of Justice has required up to 20% of a project’s total budget to be spent on evaluation. At the end of the investment, it is important to make sure you are not doing harm and to understand what you are getting out of that investment in the real world. Even if the thing you developed does everything it is supposed to do, it may not produce the outcomes you desired or envisioned when creating it. For example, George Mason University’s Center for Evidence-Based Crime Policy published a report on license plate reader technology and found that it changed the way officers spent time on the job without having an impact on their clearance rates.

How can you think through those outcomes?

The objectives need to be understood from the beginning, and randomized trials can be used more often to understand how technology is implemented. You can figure out how products are used and how users spend their time, and better understand the benefits, drawbacks, and unintended consequences, both positive and negative.

Examples at S&T

S&T developed a training system to help TSA officers better identify threats when looking at images of bags. A lot of money had been invested in better scanner technology, but none in how people were performing their jobs. We did research with TSA, comparing officers to non-trained professionals. Our technology helped determine what officers were missing when looking at different parts of the bag and provided instant feedback so instructors could improve search. Without increasing the time it took to scan, we could achieve an immediate, long-term 2% increase in accuracy. When you extrapolate, that equates to millions of threats found per year.

Most Common Errors with R&D Programs

Culture kills: there is a lot of pushback. People need to understand that all of our findings will be provided in context and with recommendations. It is hard to get people to understand that I am here to help. In other fields, such as medicine and local policing, engagement is more prevalent; there are stronger incentives to work with researchers to understand what you are doing well and what you are not. Practitioners want to identify their own problems and fix them early. If you can identify a problem, understand what is causing it, and articulate your plan to fix it, you are always much better off than if someone else discovers it. Independence and objectivity are important, and combined with the right expertise they lead to credibility.

Quick Experiments

It is important for evaluators to be involved from the beginning. The evaluation team thinks entirely about data and measurement for your objective. Involving evaluators from the start works especially well in spiral development, because they can adjust and collect data throughout. Collection can be done very effectively ahead of time if you have an evaluation plan. Too often, no one thinks about evaluation until after the fact, and then there is no baseline and the data is hard to get.

Big Data

Artificial intelligence (AI), machine learning, and big data are very popular, but they are not new concepts. As the technology develops, thinking about how to apply it in a smart way is important. Programs that combine data science and behavioral science are vitally important. It is also important to understand that correlation is not causation. Causation requires time order (did the cause come before the effect?), mathematical correlation, and the elimination of all other causal factors. Measurement error is common when you are studying people, because they do not behave like machines. Theory is very important in social science for determining causation, and many big data efforts lack coherent theory. Bringing in people with backgrounds in strongly theoretical fields will help move the technology into reasonable use faster.

Want to Know More?

Visit Firstresponder.gov for products, documents, and summaries of work.

S&T is also on Facebook and Twitter at @dhsscitech.