Delegating Work and Complexity

Often in managing an organization one is faced with "complexity": situations where things are not easy to understand, cause and effect is not clear, contradictory inputs abound, and as a result the right decisions are difficult to make. Managers faced with "complexity" often make no decision at all, letting events play out and waiting for clarity to emerge. But what does "complexity" actually mean?

There is a lot of very good work being done on complex systems. Dave Snowden has the Cynefin framework. Stuart Kauffman studies complex systems and emergence. Computer science has its own long-running project of understanding and defining complexity. In this article I would like to see whether the computer science perspective can shed some light on "complexity" in a business situation.

A practical definition of complexity in management is related to the ability to delegate a task. Often a task is "simple" when it can be delegated. One necessary condition for delegation is that the person delegating can verify that the task has been accomplished correctly. This turns out to be an important threshold for complexity: once a problem is solved or a task accomplished, can we verify that the solution is correct? The other necessary condition is subtler but just as important: if the task cannot be done, the person doing the task must be able to demonstrate that it cannot be done. If these two conditions are met, can we say that the task is "simple"? That complexity has been reduced?

Let us turn to Computer Science and see what we can learn from there.

It will help to start with some simple examples to build intuition about complexity from a computational standpoint. Finding the smallest number in a sequence is easy. Solving a Sudoku puzzle is hard. In a meaningful way, though, the two problems are similar: it is easy to check whether someone has solved either of them correctly. In the first case one can simply solve the problem again to check; in the second, the act of checking and the act of solving are quite different. Computer science has a rich literature studying and classifying such problems. Problems whose solutions can be verified easily have a special name – they are in NP (non-deterministic polynomial time).
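To make "easy to verify" concrete, here is a small Python sketch (the function names and the 9x9 grid layout are mine, purely for illustration). Finding the smallest number is solved and checked by the same single pass, while checking a completed Sudoku grid is quick even though finding that grid may require a long search:

```python
def smallest(numbers):
    # Solving: one pass through the sequence finds the minimum,
    # and checking someone else's answer is no harder than re-solving.
    return min(numbers)

def is_valid_sudoku_solution(grid):
    # Checking a completed 9x9 Sudoku: every row, column and 3x3 box
    # must contain the digits 1..9 exactly once. The check is fast
    # even though finding the completed grid may take a long search.
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r + i][c + j] for i in range(3) for j in range(3)}
        for r in (0, 3, 6) for c in (0, 3, 6)
    ]
    return all(group == digits for group in rows + cols + boxes)
```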

How do we know when a problem is easy? Let me take an example – You have been asked to arrange a Christmas party. You have invited 50 people and bought 50 gifts. Now unfortunately you do not know who would like which gift (you bought the gifts based on your preferences). To get better information you publish the list of gifts and ask each person to specify the gifts that they would be happy to get. Each person specifies the gift or gifts they would be happy with. Given all this information the problem is to find a good pairing where everyone gets a gift that they are happy with.

Is this a hard problem or an easy one? It certainly is not trivial. Is it in NP? Certainly: it is easy to verify a solution once it has been found. If a really clever person were to look at all the information and propose a solution, any of us could check the validity of that solution:

  • We would first check that each person received exactly one gift and that no gift was given to more than one person.
  • Next we would check that each gift was on the acceptable list of the person who received it.

This would convince us that the gift assignments were done correctly.
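A minimal sketch of that verifier in Python, assuming the wish lists arrive as a dictionary from each person to the set of gifts they would accept, and the proposed answer as a dictionary from person to gift (the data layout and the names are my own, purely for illustration):

```python
def is_valid_assignment(wish_lists, assignment):
    """wish_lists: {person: set of acceptable gifts}
    assignment: {person: gift}, as proposed by our clever friend."""
    people = set(wish_lists)
    # Every invited person must receive a gift, and no gift is given twice.
    if set(assignment) != people:
        return False
    gifts_used = list(assignment.values())
    if len(gifts_used) != len(set(gifts_used)):
        return False
    # Each gift must be on the acceptable list of the person who got it.
    return all(assignment[p] in wish_lists[p] for p in people)

wish_lists = {"Asha": {"book", "scarf"}, "Ben": {"scarf"}}
print(is_valid_assignment(wish_lists, {"Asha": "book", "Ben": "scarf"}))  # True
```

Both checks take time proportional to the number of people and gifts, which is exactly what "easy to verify" means here.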

How about if this clever person said that there was no possible answer – that we had messed up? How would we check that? For example, if my kids were in the mix I can guarantee that they would both want the same single gift; in that case there is obviously no solution. But is that the only case? A more general case is that some group of n people are, between them, interested in fewer than n gifts. Then certainly no solution can be found, since at least one of those people will be forced to take a gift they do not like. It turns out that this check is adequate (this is Hall's marriage theorem): our clever friend can always convince us that no solution exists by showing us such a group of people. Being able to easily check that no solution exists puts a problem in the class co-NP (the complement of NP).
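The "no solution" certificate can be checked just as quickly. A sketch, using the same made-up wish-list layout as above: given a claimed blocking group, we only need to count how many distinct gifts its members would accept between them.

```python
def is_valid_blocking_group(wish_lists, group):
    # The certificate of infeasibility: a group of people who, between
    # them, are interested in fewer gifts than there are people in the group.
    acceptable_gifts = set()
    for person in group:
        acceptable_gifts |= wish_lists[person]
    return len(acceptable_gifts) < len(group)

wish_lists = {"Kid1": {"lego"}, "Kid2": {"lego"}, "Asha": {"book", "scarf"}}
print(is_valid_blocking_group(wish_lists, ["Kid1", "Kid2"]))  # True: no pairing exists
```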

If a problem is in both NP and co-NP it is generally believed to be easier (many such problems turn out to be solvable in polynomial time, the class P). In simpler language: if one can always either solve the problem or show why it cannot be solved, the problem is "easy".

Some NP problems can be shown to be the hardest of all problems in this class. These are called NP-complete problems. Generalized Sudoku (the puzzle scaled up to arbitrarily large grids) is NP-complete: if one could find an efficient method to solve a Sudoku of any size, one could solve any problem that can be checked easily. In fact, solving a Sudoku involves finding arguments that show a particular placement is impossible because it would make the puzzle infeasible. But we do not know a full set of such tricks that works for every Sudoku of arbitrary size.

Long-standing open questions in computer science ask "Is NP = co-NP?" and "Is P = NP?". The P versus NP question is one of the Millennium Prize problems, with a million-dollar prize for its resolution. Most computer scientists believe the answer to both is no: there are hard problems and easy ones, there are problems where it is possible to verify a solution but not easy to tell when no solution exists, and also the reverse.

This is all kind of interesting and entertaining but does it have any practical significance?

Consider a common management problem: deciding whether a factory can meet the sales demand. In general this is a scheduling problem. You have to schedule the demand through all the machines, make sure there is enough capacity, and make sure the factory can produce every item by its due date. If a solution exists I can verify it by executing the schedule and meeting all the dates. If no solution exists, the situation is not so straightforward: there is no easy way to demonstrate that no schedule is possible. Any particular schedule that fails does not prove that some other arrangement of the work could not have succeeded, and checking all possible schedules is simply too hard – it would take forever. In its full generality this scheduling problem is hard; in fact it is NP-complete, amongst the hardest problems.
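A rough sketch of the easy half in Python (the field names are invented for illustration, and real scheduling data is far richer): checking that a proposed schedule never double-books a machine and meets every due date is quick, whereas certifying that no such schedule exists at all is the hard part.

```python
from collections import defaultdict

def schedule_is_feasible(schedule):
    # schedule: a list of operations, each a dict like
    # {"job": "A", "machine": "M1", "start": 0, "end": 4, "due": 10}
    by_machine = defaultdict(list)
    for op in schedule:
        if op["end"] > op["due"]:            # the job would miss its due date
            return False
        by_machine[op["machine"]].append((op["start"], op["end"]))
    for intervals in by_machine.values():    # no machine may be double-booked
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True
```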

But this is a situation that every plant manager has to confront. They have to make promises and be confident that they can keep those promises. In real life the problem is even more complicated, since there is a lot of uncertainty thrown in as well – machines break down, orders change, material is delayed. Managers who cannot make good commitments end up with unhappy customers, lost sales, and a poor reputation in the market. Career limiting on all fronts.

There was no good answer to this problem – until Eli Goldratt made a breathtaking simplification. He changed the perspective from a scheduling problem to a flow problem. He showed that, because of all the uncertainty and variability, any factory with more than one bottleneck resource becomes chaotic to operate: interacting constraints amplify any variability in the demand or in the factory, and a chaotic schedule does not allow the business to survive.

He demonstrated this in The Goal with the dice game. Alex observes that in the dice game the inventory in the system keeps increasing without limit – an unsustainable situation, since inventory in the system is proportional to lead time (Little's Law), and no business can sustain an ever growing lead time.
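Here is a rough simulation sketch of that game (the number of stations, the number of rounds, and the release rule are my own choices, not taken from the book). Each station can pass on at most a die roll of units per round, but never more than is actually waiting in front of it; the first station draws from an unlimited supply.

```python
import random

def matchstick_game(stations=5, rounds=500, seed=7):
    random.seed(seed)
    buffers = [0] * (stations - 1)   # inventory waiting between stations
    shipped = 0
    for _ in range(rounds):
        carried = random.randint(1, 6)            # station 1: unlimited raw material
        for i in range(stations - 1):
            buffers[i] += carried                 # hand off to the next station's buffer
            carried = min(random.randint(1, 6), buffers[i])
            buffers[i] -= carried
        shipped += carried                        # the last station ships to the customer
    return shipped / rounds, sum(buffers)

rate, wip = matchstick_game()
# Every station averages 3.5 per round, yet the line ships less than 3.5
# and work-in-process keeps piling up between the stations.
print(f"output per round: {rate:.2f}, WIP stuck in the line: {wip}")
```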

The simplifying observation is that, in reality, a business has only one constraint. Knowing the constraint makes the problem easy. Now there is a way to prove that no feasible schedule exists: one only needs to check the bottleneck. If there is no feasible answer on the bottleneck, there is no feasible answer at all. On the other hand, if there is a feasible answer on the bottleneck, then there is a feasible answer for the factory (assuming touch times are small compared to lead times).
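In code the check collapses to almost nothing. A sketch, assuming we already know the constraint resource and can express the demand as hours of load on it (the numbers are made up):

```python
def bottleneck_is_feasible(demand_hours_on_constraint, available_hours):
    # Once the single constraint is known, feasibility is one comparison:
    # does the load the orders place on the constraint fit in the hours it has?
    return demand_hours_on_constraint <= available_hours

# Example: this month's orders need 160 hours on the bottleneck, which has 150.
print(bottleneck_is_feasible(160, 150))   # False: no feasible schedule exists
```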

Management is the art of finding the "Inherent Simplicity" in any situation and leveraging it. Here the simplification is the recognition that the variability and dependencies in the system mean that there is only one constraint. This turns a hard management problem into an easy one. It allows a company to truly delegate the running of a factory to a plant manager, which in turn allows the business to scale effectively. Eliminating complexity creates real leverage. Companies that understand their constraint can now apply the five focusing steps to improve continually:

1.   Identify the constraint

2.   Decide how to exploit the constraint

3.   Subordinate all other decisions to the decision in step 2

4.   Elevate the constraint

5.   Do not let inertia take hold; go back to Step 1

This is the true power of TOC. In a real way it makes management easier. By reducing complexity it makes managers better managers, and it makes it easier for people to collaborate. The five focusing steps are a recipe for teamwork and collaboration.

Confronted with real-world versions of hard problems, the TOC approach is to find the right simplifying assumptions. These help uncover the Inherent Simplicity that allows managers to do their jobs. One aspect of Inherent Simplicity is finding a way to put the problem in both NP and co-NP – making it easy to find, and to verify, the constraint that blocks us. That is powerful knowledge for moving the organization forward.

Find out more about the TOC Club North America

To stay informed and continue this discussion join the group

Join the LinkedIn group TOC Club Bay Area

If you would like to read other articles I have written do visit my blog on LinkedIn and Focus. Please also join me on Facebook and follow me on Twitter (@KapoorAjai)
