Is Lean the Medium or the Message?


In today’s post, I am looking at Marshall McLuhan’s profound phrase, “The medium is the message.” McLuhan was a Canadian philosopher and media theorist. He noted that: [1]

Each medium, independent of the content it mediates, has its own intrinsic effects which are its unique message… The message of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs… It is the medium that shapes and controls the scale and form of human association and action.

The simplest reading of the phrase “the medium is the message” is that it does not matter what we say, only how we say it. That view, however, is too simplistic. McLuhan’s insight was that any medium is an extension of ourselves. The telephone, for example, is an environment, and it affects everybody; the smartphone, a further advancement of the telephone, has a much larger impact on us and on what we do. McLuhan realized that as we shape our media, the media shape us. It is a complex, interactive phenomenon. He argued that it matters less what you print than that you keep printing. Every medium lets us do far more than we can do physically. Language, McLuhan noted, is an extension of our thoughts, and written language is a further extension of our speech. Printing removed the need to be physically present to extend our thoughts through speech, and its impact on us was far more profound than that of all the printed content combined. The medium is the message simply because of the impact the medium has on our social life.

McLuhan realized that media alter our environment, and, sadly, we are unaware of the changing environment most of the time. He noted that people inside any environment are less able to observe it than those standing slightly outside it. McLuhan captured this phenomenon with a catchy phrase – the fish did not discover water. Fish, he postulated, may be unaware of the water, the very thing their lives depend on. Another way to see this is to look at the tweets of a politician. The tweets themselves are beside the point; the medium of Twitter has a far-reaching impact on our social life. McLuhan would ask us to look beyond the obvious content of a tweet and look at the social impact the medium is generating.

I wanted to view this idea through the lens of Lean. As Lean leaders, we try to propagate the good messages of Lean – “banish waste”, “respect for humanity”, “kaizen”, and so on. We need to realize that the message is not the content, but the medium and the context of our actions. As the aphorism goes, our actions speak louder than our words. The medium, as an extension of ourselves, reaches into our lives and shapes us. We should concentrate on the medium to make a larger favorable impact. A good example is kanban. Kanban is a simple mechanism for a pull system: a paper slip that triggers production of the quantity that is needed at the time it is needed. However, the use of kanban leads to an awareness of problems at the gemba, which in turn creates the need for a kaizen culture.
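To make the pull mechanism concrete, here is a minimal sketch in Python. The class and card counts are my own illustration, not Toyota’s implementation: the point is that with a fixed number of cards in circulation, the upstream process can produce only against a returned card, so overproduction is structurally blocked.

```python
class KanbanLoop:
    """Toy model of one kanban loop: upstream may only produce against a
    free card, and a card is freed only when downstream withdraws a
    container."""

    def __init__(self, num_cards):
        self.free_cards = num_cards  # cards currently authorizing production
        self.store = 0               # full containers waiting downstream

    def produce(self):
        """Upstream makes one container only if a card authorizes it."""
        if self.free_cards == 0:
            return False             # no card: overproduction is blocked
        self.free_cards -= 1         # the card now travels with the container
        self.store += 1
        return True

    def consume(self):
        """Downstream withdraws a container; its card returns upstream."""
        if self.store == 0:
            return False
        self.store -= 1
        self.free_cards += 1         # the freed card re-authorizes production
        return True

loop = KanbanLoop(num_cards=2)
print(loop.produce(), loop.produce(), loop.produce())  # True True False
loop.consume()                                         # a withdrawal frees a card
print(loop.produce())                                  # True
```

The medium here does the teaching: capping the cards caps work in process, so any downstream stoppage surfaces upstream immediately and becomes visible as a problem to solve.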

The ideas of revealing waste as it occurs, challenging ourselves to continuously improve by eliminating waste, and developing people as part of a value-adding function are integral to any Lean implementation. This complex, intermingled set of ideas cannot be conveyed by a top-down edict from the CEO – “implement Lean.” What is needed is an understanding of the medium and the environment. The medium of daily board meetings, for example, affects the social fabric of an organization because it involves people at different levels. The medium of QC circles, or of daily or weekly kaizen groups, is another example. The content of fixing problems is not as important as the medium itself and the long-lasting impact it has by developing people to see waste and to improve their own ability to fix problems.

Sometimes we focus on the content of the message, as in implementing “Lean”, without trying to understand the need we are actually addressing. McLuhan compared this focus on content to a juicy piece of meat carried by the burglar to distract the watchdog of the mind. We are focusing on the wrong thing. A top-down push for Lean, Six Sigma, etc., without changing the medium, may not have a lasting effect. The medium itself has to change for the meaning and impact to change. The medium is the message, and it is context driven! If you want to make “change”, don’t just change the message; change the medium itself. Hence the title of this post – Is Lean the Medium or the Message?

Final Words:

It is said that the typesetters mistakenly printed, “The medium is the massage” on the cover of his book [2]. McLuhan loved the changed phrasing because it had additional interpretations that he appreciated. He said, “Leave it alone! It’s great, and right on target!” [3]

I will finish with a great insight that McLuhan offered in 1964 [1], one that foreshadowed the internet and social media:

Archimedes once said, “Give me a place to stand and I will move the world.” Today he would have pointed to our electric media and said, “I will stand on your eyes, your ears, your nerves, and your brain, and the world will move in any tempo or pattern I choose.” We have leased these “places to stand” to private corporations.

Always keep on learning…

In case you missed it, my last post was Purpose of a System in Light of VSM:

[1] Understanding Media, Marshall McLuhan

[2] The Medium is the Massage, Marshall McLuhan


Purpose of a System in Light of VSM:


In today’s post, I am looking at the concept of POSIWID (“the Purpose Of a System Is What It Does”). Please note that VSM here stands for “Viable System Model” and not “Value Stream Mapping”.

The idea of POSIWID was put forth by the father of Management Cybernetics, Stafford Beer. As Beer puts it: [1]

A good observer will impute the purpose of a system from its actions… There is, after all, no point in claiming that the purpose of a system is to do what it consistently fails to do.

An organization is a complex sociotechnical system. This means it cannot be controlled by simple edicts handed down from management. We should not go by what the “designer” of the system says it does; we should impute the purpose from what the system actually does.

A good explanation comes from Dan Lockton: [2]

The implication of a posiwid approach is that it doesn’t matter why a system was designed, or whether the intention was to influence behavior or not. All that matters are the effects: if a design leads to people behaving in a different way, then that is the ‘purpose’ of the design. Intentionality is irrelevant: to understand the behavior of systems, we need to look at their effects… Essentially, a posiwid approach means that both ‘positive’ and ‘negative’ effects of a system must be dealt with. We might try to dismiss unintended effects, but they are still effects, and we need to recognize them, and deal with them. Undesirable phenomena are not simply blemishes – they are [the system’s] outputs (Beer, 1974, p.7).

A common interpretation of an organization’s purpose is to make money. This is the idea proposed by Eliyahu Goldratt in his famous book, The Goal. In Beer’s view, however, the goal of an organization is to stay viable. Beer defines “viable” as “able to maintain a separate existence”, and he identifies all organizations as viable systems. He was inspired by human anatomy, and he realized that viable systems are recursive: every viable system contains viable systems and is contained in a viable system. For example, a human being is a viable system who is part of an assembly line, which is also a viable system. The assembly line in turn is part of a value stream, which is again a viable system, and so on. Beer developed the Viable System Model (VSM) by exploring the necessary and sufficient conditions for viability in any complex system, whether an organism, an organization, or a country.

There are three elements to a viable system:

  • The Operation – This is similar to the muscles and organs. It performs the actual value-adding functions. There can be several operational units in the system in focus.
  • The Metasystem (Management) – This is similar to the brain and the nervous system. It is the glue that holds the operational units together and provides coherence to the structure.
  • The Environment – This is the relevant part of the external environment in which the system exists.

In an overall sense, management’s function is to manage complexity. Beer uses variety as a measure of complexity: variety is the number of possible states of a system. The environment obviously has the maximum variety of the three elements, and the operation has more variety than the management. We can denote this schematically. [3]


Here the amoeba shape represents the environment, the circle the operational unit, and the square the management. “V” denotes the variety possessed by each element. Management has to attenuate (filter out) the extra variety while amplifying its own variety in order to accommodate the variety that surrounds it; the same goes for the operational unit. Please note that we are dealing with continuous loops rather than simple connections. Based on this, Beer postulates his Law of Inter-Recursive Cohesion: Managerial, Operational and Environmental varieties, diffusing through an institutional system, TEND TO EQUATE; they should be designed to do so with minimum damage to people and to cost.

This idea is based on Ross Ashby’s Law of Requisite Variety – Only variety can absorb variety. In order to maintain viability, attenuators and amplifiers must be in place so that the three varieties are equivalent. There are homeostatic loops in place that amplify the lower varieties to absorb the higher varieties, and attenuate the higher varieties towards the lower varieties. This is depicted in the schematic below. Please note the adjustment to the scale of “V” to denote equivalence of variety achieved through attenuation and amplification.

Varieties 2
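As a hedged numeric sketch of Ashby’s law (the state counts below are purely illustrative), the bound can be stated in one line: a regulator with Vr distinct responses facing Ve distinct disturbances cannot reduce the variety of outcomes below Ve / Vr.

```python
import math

def min_outcome_variety(env_variety, regulator_variety):
    """Ashby's bound: the variety of outcomes cannot fall below
    Ve / Vr (rounded up, since states are discrete)."""
    return math.ceil(env_variety / regulator_variety)

# 9 possible disturbances against only 3 responses: at best 3 outcomes survive.
print(min_outcome_variety(9, 3))   # 3
# Amplifying the regulator's variety to match the environment gives full control.
print(min_outcome_variety(9, 9))   # 1
# Attenuating the environment (filtering disturbances) achieves the same balance.
print(min_outcome_variety(3, 3))   # 1
```

This is why both levers appear in the schematic: you can amplify the regulator’s variety, attenuate the environment’s, or (usually) do both until the varieties equate.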

For a simple example, let’s look at a football game. There are 11 players on each team, so one-to-one compensation of variety is possible between the two teams. The officials “managing” the game are able to match the variety of the players by using attenuators (the rules and policies of the game) and amplifiers (whistles, flags, etc.).

Every viable system has five sub-systems, identified as Systems 1 through 5:

System 1 – Interacting operational units.

System 2 – Responsible for coordination between the interacting operational units, and provides stability via anti-oscillatory and conflict resolution strategies. An example is production control in a manufacturing plant.

System 3 – Responsible for control, optimization, and synergy between the operational units. Often referred to as an “internal eye” focusing on the “here and now” – internal and immediate functions. There is also a System 3* subsystem responsible for monitoring/audit.

System 4 – Responsible for planning and “intelligence”. Often referred to as the “external eye” focusing on the “there and then”. System 4’s role is to observe the anticipated future environment and the system’s own states of adaptiveness, and to act to bring them into harmony. [4]

System 5 – Responsible for developing “identity” and policy. Maintaining a good balance between System 3’s concern with the day-to-day running of affairs and System 4’s concentration on the anticipated future is a challenge for every organization [4]; System 5 is responsible for monitoring that balance.

As noted earlier, the strength of the VSM is in recursion. Every viable system at every recursion level must have the five subsystems working coherently in order to be viable.
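The recursion can be sketched as a small data structure (the class and field names are my own, not Beer’s notation): a system is viable only if all five subsystems are present at its own level and in every viable system it contains.

```python
from dataclasses import dataclass, field

@dataclass
class ViableSystem:
    """Toy sketch of VSM recursion: Systems 1-5 at this level, plus the
    viable systems embedded inside this system's operations."""
    name: str
    subsystems: set = field(default_factory=lambda: {1, 2, 3, 4, 5})
    contained: list = field(default_factory=list)

    def is_viable(self):
        # Viability requires all five subsystems here AND at every
        # level of recursion below.
        return self.subsystems == {1, 2, 3, 4, 5} and \
               all(child.is_viable() for child in self.contained)

worker = ViableSystem("operator")
line = ViableSystem("assembly line", contained=[worker])
stream = ViableSystem("value stream", contained=[line])
print(stream.is_viable())  # True

# Knock out System 4 (intelligence/planning) two levels down:
worker.subsystems.discard(4)
print(stream.is_viable())  # False: the whole stack loses viability
```

The second check is the point of the recursion: a missing subsystem deep inside one operational unit is enough to undermine viability at the top.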

Taken all together, the Viable System Model looks like this: [5]


The diagram can appear confusing due to the recursive nesting of viable systems within the viable system. There is a lot more to the VSM than is discussed here.


Because there are viable systems within a viable system, the policies set by each System 5 may not align with the policy set by the System 5 of the larger viable system. Beer postulates that the observed, imputed “purpose” of the system and its “designed” purpose are not in agreement. Since the purpose is generally formulated at a higher recursion, it is imperative that it be restated at each lower recursion in a language that the system understands. Based on this purpose, the system in focus acts on its inputs and reacts to its environment, resulting in a new state of the system. The system should have a “comparator” that continuously compares the declared purpose with the purpose imputed from the results the system delivers. This produces feedback that leads to a modification of the original purpose. Beer states that:

This system will converge on a compromise purpose – it is neither what the higher recursion would like to see done, nor what the viable system itself would most like to see done, nor what the viable system itself would like to indulge in doing.

The purposes of the corporate system and those of System 1 are different, because System 1 consists of viable systems whose conditions of survival are formulated at a different level of recursion. The compromise convergence must act continually, and it generally leads to the lowest-variety compromise possible. Please note that “what the system does” is done by System 1. Based on this, Beer postulated that autonomy is a computable function of the purpose of a viable system. Autonomy is the maximum discretionary action available to the subsystem, short of threatening the integrity of the system as a whole.
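The comparator loop can be caricatured in a few lines of Python (the numbers and the gain are purely illustrative, not anything Beer specified): the declared purpose and the purpose imputed from results pull on each other until they meet at a compromise that is neither party’s original preference.

```python
def comparator_step(declared, imputed, gain=0.3):
    """One pass of the feedback loop: each purpose moves partway toward
    the other, modelling the mutual accommodation described above."""
    gap = imputed - declared
    return declared + gain * gap, imputed - gain * gap

declared, imputed = 100.0, 60.0   # e.g. target output vs. delivered output
for _ in range(25):
    declared, imputed = comparator_step(declared, imputed)

print(round(declared), round(imputed))  # both settle at the compromise, 80
```

The gap shrinks geometrically each pass, so the loop converges on a value between the two starting purposes rather than either extreme.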

Final Words:

Stafford Beer was certainly a man ahead of his time. I strongly encourage readers to read as much of his work as they can. The VSM allows us to diagnose, or even design, an organization by making sure that the required homeostats, subsystems, and channels are present to ensure viability. For those who wish to implement Lean, Six Sigma, Agile, or any of the other paradigms out there, I will finish with words of wisdom from Beer:

We manage through a model that we hold in our heads about how things work ‘out there’. If our model does not have Requisite Variety, then we ought to incorporate learning circuits that will enrich it. But if we are ideologically attached to our model, so that it is not negotiable, then it becomes a dysfunctional paradigm.

Always keep on learning…

In case you missed it, my last post was Cultural Transmission at Toyota:

[1] Diagnosing the System, Stafford Beer

[2] POSIWID and determinism in design for behaviour change, Dan Lockton

[3] World in Torment, Stafford Beer

[4] The Viable System Model and its Application to Complex Organizations, Allenna Leonard, Ph.D.

[5] VSM diagram by Mark Lambertz, own work, CC BY-SA 4.0


Cultural Transmission at Toyota:


One of my favorite stories related to statistics is that of Abraham Wald. During World War II, the American military sought the help of the Statistical Research Group (SRG) regarding their bomber planes. The problem was how to reinforce the planes to improve their chances of surviving an attack. The story goes that the military had analyzed the damage on all the planes that returned from combat, looking at the different parts of the plane: the fuselage, the wings, the tail, and the engine. The question was where the reinforcement should go, because more reinforcement meant more weight, which hurt the plane’s performance. The data showed the most damage on the fuselage, so the military wanted to start reinforcing the fuselage. Wald, however, cautioned against this and advised reinforcing the least-hit part, which was the most vulnerable part of the plane. That part turned out to be the engine. Wald’s logic was that the military was looking only at the planes that got hit and still managed to come back safely; the most important data was on the planes that did not make it back. This story is often used to explain survivorship bias – the logical error of using cherry-picked data from the few that made the cut while ignoring the many that did not.
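Wald’s argument is easy to reproduce with a toy Monte Carlo simulation. The four parts and the “only engine hits are fatal” rule are my illustrative assumptions, not the historical data: hits land uniformly, but only survivors are sampled, so the weak spot shows up as a gap in the data.

```python
import random

random.seed(42)
parts = ["fuselage", "wings", "tail", "engine"]
fatal = {"engine"}                # assumption: an engine hit downs the plane
observed = {p: 0 for p in parts}  # hits visible on planes that returned

for _ in range(10_000):
    hit = random.choice(parts)    # damage actually lands uniformly
    if hit not in fatal:          # only survivors make it into the sample
        observed[hit] += 1

print(observed["engine"])           # 0 -- precisely because those planes were lost
print(observed["fuselage"] > 2000)  # True: survivors are riddled with survivable hits
```

The surviving sample shows zero engine hits not because engines were never hit, but because every plane hit there never came home.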

My main takeaway from the Wald story is about looking at what is not there. Sometimes this information is the most important, and yet it is not readily visible. I will try to apply this concept to Lean. Lean is often perceived as a set of tools. When Toyota opened its doors to the rest of the world, many, like the military in the Wald story, saw only what was in front of them – 5S, kanban, andon cords, etc. The unseen part, the culture of Toyota – the Toyota Way – was missed. One word that sticks out when one reads the first books on the Toyota Production System is “rationality.” Rationality is coming up with innovative ideas to meet the challenge at hand primarily with what you already have – your wits and the resources on hand. Rationality is doing just what is right. Rationality is the root of kaizen.

I am interested in how Taiichi Ohno was able to develop the Toyota Production System and, most importantly, make it stick across generations. Ohno was inspired by the challenge issued by Kiichiro Toyoda, the founder of Toyota Motor Corporation: catch up with America in three years in order to survive. Ohno built his ideas on inspirations from Sakichi Toyoda, Kiichiro Toyoda, Henry Ford, and the supermarket system. The two pillars of the Toyota Production System are Just-in-Time and Jidoka. Just-in-Time, or “exactly-in-time” as Ohno calls it [1], is the idea of producing just what is needed, at the right time, in the right quantity. The concept of Just-in-Time was the brainchild of Kiichiro Toyoda, who had written a four-inch-thick pamphlet detailing his ideas for a system that would produce every day exactly what was needed in the quantity needed. Prior to Ohno’s kanban concept, Toyota was already using tickets as part of a Just-in-Time system. The concept of Jidoka was based on the automatic loom developed by Sakichi Toyoda (Kiichiro Toyoda’s father). The loom Sakichi built had a weft-breakage automatic stopping device, which ensured that the loom stopped whenever a thread broke. This allowed one operator to handle multiple looms at a time. Thus, we can see that the two pillars of the Toyota Production System were based on the concepts of two parental figures in the Toyoda family.

Toyota Global’s website details the roots of Toyota Production System: [2]

The Toyota Production System (TPS), which is steeped in the philosophy of “the complete elimination of all waste” imbues all aspects of production in pursuit of the most efficient methods, tracing back its roots to Sakichi Toyoda’s automatic loom. The TPS has evolved through many years of trial and error to improve efficiency based on the Just-in-Time concept developed by Kiichiro Toyoda, the founder (and second president) of Toyota Motor Corporation.

Taiichi Ohno rose to the occasion by developing a production system to improve Toyota’s productivity. The concept of Jidoka, which he learned at the Toyoda Automatic Loom Works, allowed him to have one operator man multiple machines at a time. He rearranged the facility to allow the process to flow better. By expanding on the Just-in-Time idea and the American supermarket system, he developed a kanban system that ensured a pull system making product only in the right quantity at the right time. There was a lot of resistance to his ideas; the system was initially termed the “Ohno system” rather than the “Toyota Production System.” Ohno, however, had the full support of his superiors, Eiji Toyoda and Naiichi Saito [1]. They absorbed all the discontent and grumbling directed at Mr. Ohno from the factory and never mentioned it to him. They only wanted him to continue finding ways to reduce manufacturing costs.

Implementing a production system like Toyota’s can be viewed as a cultural transmission phenomenon in the organization. As the great population geneticist Luca Cavalli-Sforza puts it [3], “Cultural transmission is the process of acquisition of behaviors, attitudes, or technologies through imprinting, conditioning, imitation, active teaching and learning, or combinations of these.” Cavalli-Sforza expands on this idea [4]: “the ability to accumulate knowledge by cultural means, that is by exchange of information between individuals within and across generations, is a powerful mechanism for adapting to new situations… Culture allows the spread of targeted solutions to problems affecting a population.”

Cavalli-Sforza’s research indicates that the essence of cultural transmission is learning from other individuals. Ohno usually taught his methods by going directly to the people involved. He was famous for drawing a circle on the production floor and making an engineer or supervisor stand in it to observe an operation until he could “see” the waste. Ohno’s methods were grounded in the “reality” present only at the gemba, and he sometimes used trial and error. As he stated [1]: “To confirm hypotheses through experimentation is not confined to the academic world. In industry as well, ideas are tested through continuous trial and error.”

As I was reading Cavalli-Sforza’s works, one particular concept stayed with me. He noted that transmission through a social leader or teacher results in greater homogeneity in a population than transmission through a parental figure. The social leader has great influence over others in an organization, while the parental figure can have a long-lasting effect: “The culture created by the organization’s initial leaders forms a ‘genetic imprint’ for the organization’s ontogeny; it will be clung to until it becomes unworkable or the group fails and breaks up.” [5] These two aspects of cultural transmission – from a social leader (Taiichi Ohno) and from parental figures (Sakichi Toyoda and Kiichiro Toyoda) – resonate well with any student of the Toyota Production System. Cultural transmission over time allows better ideas and practices to replace less effective ones while maintaining the core concepts of the system.

Always keep on learning…

In case you missed it, my last post was Herd Structures in ‘The Walking Dead’ – CAS Lessons:

[1] Just-In-Time For Today and Tomorrow, Taiichi Ohno and Setsuo Mito

[2] Toyota Global Website

[3] Theory and Observation in Cultural Transmission, L. L. Cavalli-Sforza, M. W. Feldman et al.

[4] Cultural Transmission and Adaptation, L. Luca Cavalli-Sforza

[5] A Complex Adaptive Systems Model of Organization Change, Kevin J. Dooley

Herd Structures in ‘The Walking Dead’ – CAS Lessons:


The Walking Dead is currently one of the top-rated TV shows. The show is about survival in a post-apocalyptic zombie world, where the zombies are referred to as “walkers”. I have written previously about The Walking Dead here. In today’s post, I want to briefly look at Complex Adaptive Systems (CAS) against the show’s backdrop. A Complex Adaptive System is an open, nonlinear system of heterogeneous and autonomous agents that adapt to their environment through interactions with one another and with that environment.

The simplest example for getting a grasp of CAS is an ant colony. Ants are simple creatures with no leader telling each ant what to do. Each ant’s behavior is constrained by a set of behavioral rules that determine how it interacts with other ants and with its environment. Yet the ant colony as a whole is a complex and intelligent system. Each ant works with local information and interacts with other ants and the environment based on that information. The different tasks that ants perform are patrolling, foraging, nest maintenance, and midden work. The local information available to each ant is the pheromone scent of other ants. As a whole, their interactions produce a collective intelligence that sustains the colony. When perturbations occur in their environment, the ants are able to switch to specific tasks to maintain their system. An ant decides its task based on the local information carried by the perturbation and on its rate of interaction with other ants performing specific tasks. Ants move up through the ranks, eventually becoming foragers when the need arises; a forager ant always stays a forager. The colony also carries a large number of “reserve” ants that do not perform any function. This reserve allows specific tasks to be staffed as needed in response to perturbations in the environment.

To further illustrate the “self-organizing”, pattern-forming behavior of ants, let’s take their foraging activity as an example. The ants set out from the colony in random directions looking for food. Once an ant finds food, it brings it back to the nest, leaving a pheromone trail on its way back. Other foraging ants follow the pheromone trail and bring back food while leaving their own pheromone scent on the path. The pheromone evaporates over a short amount of time. Ants that followed the shortest path get back for more food sooner, so their pheromone trail stays “fresh”, while a longer path does not stay as fresh because the pheromone has more time to evaporate. This means that the path with the strongest pheromone trail is the shortest path to the food. The shortest path emerges from positive feedback: more and more ants deposit pheromone on it at a faster rate. Here the local information available to each ant is the rate of pheromone deposition by the other ants – the faster the rate, the stronger the trail, which generally corresponds to the shortest trail to the food source. Once a food source is consumed, another is identified and a new short path is established. This algorithm, known as the Ant Colony Optimization algorithm, is used by several transportation companies to find the shortest routes.
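The feedback loop just described can be sketched deterministically in a few lines. The path lengths, ant count, and evaporation rate below are made-up parameters, and real ACO implementations add randomness and per-edge pheromone; this sketch only shows the core dynamic of evaporation plus length-weighted deposit.

```python
lengths = {"short": 1.0, "long": 2.0}    # two candidate routes to the food
pheromone = {"short": 1.0, "long": 1.0}  # both start equally attractive
ANTS, EVAPORATION = 100, 0.1

for _ in range(50):                       # 50 rounds of foraging trips
    total = sum(pheromone.values())
    shares = {p: pheromone[p] / total for p in lengths}  # ants split by scent
    for p in lengths:
        # old scent evaporates; a shorter round trip deposits scent faster
        pheromone[p] = (1 - EVAPORATION) * pheromone[p] + ANTS * shares[p] / lengths[p]

print(max(pheromone, key=pheromone.get))  # short
```

Even from a perfectly symmetric start, the shorter path pulls ahead after one round and the positive feedback then locks it in.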


In The Walking Dead, the zombies show a similar collective behavior. They exhibit herding, in which a large number of zombies move together in search of “food”. Like the ants, the zombies are devoid of intelligence and have no one in charge; unlike the ants, however, they have no nest – they simply wander around. The zombies in the show are attracted by sound, by movement, and possibly by the absence of “zombie smell”. They do not attack each other, possibly because of that smell; in fact, several characters in the show have survived zombie attacks by lathering themselves in “zombie goo”.

A possible explanation for the formation of herd structures is a hardwired attribute we all share – copying others. When we are not sure what is happening, we tend to follow what others are doing; we go with the flow. A good example is the wave we do in a sports stadium. We could build a model in which a few zombies are attracted by a stimulus and walk toward it, the other zombies simply follow them, and soon a large crowd forms through reinforcing loops as more and more join. This is similar to the positive reinforcing feedback of the pheromone trail in the ant example.
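That copying model is simple enough to simulate. The positions, step size, and leader count below are invented parameters: a few agents sense the stimulus, everyone else just steps toward the crowd’s average position, and a single tight herd emerges and drifts to the stimulus.

```python
import random
import statistics

random.seed(0)
STIMULUS = 100.0                                   # location of the sound/movement
positions = [random.uniform(0.0, 50.0) for _ in range(30)]
leaders = set(range(3))                            # the few who noticed the stimulus

for _ in range(1000):
    mean = statistics.fmean(positions)
    for i, x in enumerate(positions):
        target = STIMULUS if i in leaders else mean  # followers just copy the crowd
        positions[i] = x + 0.1 * (target - x)        # small step toward the target

print(round(statistics.fmean(positions)))            # 100: the herd reaches the stimulus
print(max(positions) - min(positions) < 1.0)         # True: it moves as one tight group
```

No agent needs to know about the stimulus or about the herd as a whole; the herd is an emergent result of each agent following a purely local rule.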

The show recently introduced an antagonist group called the “Whisperers”, who worship the dead, wear zombie skins, and walk among the zombies. They have learned to control the herd and make it go where they want. The Whisperers, themselves a CAS, adapted to survive by living among the walkers. Possibly, they are able to guide the walkers by first forming a small crowd themselves and then drawing more walkers to join them as they move as a group. Since they carry the “zombie smell”, the walkers do not attack them.

How Does Understanding CAS Help Us?

We are not ants, and certainly not zombies (at least not yet). But there are several lessons we can draw from understanding CAS. We all belong to a CAS at work and in our communities. The underlying principle of CAS is that we live in a complex world that we can understand only in the context of our environment and our local interactions with our neighbors and with that environment. Every project we are involved in is new and not identical to any previous project – in the nature of the project itself, the team members, the deadlines, or the client. Every part of a project can introduce a variation we did not know of. Given below are some lessons from CAS.

  1. Observe and understand patterns:

Complex Adaptive Systems present patterns due to the agents’ interactions. You have to observe and understand the different patterns around you. How do others interact with each other? Can you identify new patterns forming in the presence of new information or perturbations in your environment? Improve your observation skills to understand how patterns form around you. Look and see who the “influencers” are in your team.

  2. Understand the positive and negative feedback loops:

Observe and understand the positive and negative feedback loops that exist around you. Patterns form based on these loops, and awareness of them will help us nurture the loops we need.

  3. Be humble:

Complexity is all around us, which means our understanding is limited. We cannot foresee or predict how things will turn out every time. Complex systems are dispositional, to quote Dave Snowden: they may exhibit tendencies, but we cannot completely understand how things work in a complex system. Edicts and rules do not always work, and they can have unintended consequences. Every event is possibly a new event, so although you can draw insights from past experience, you cannot control the outcomes. You cannot simply copy and paste, because the context in the current system is different.

  4. Get multiple perspectives always (reality is multidimensional and constructed):

Get multiple perspectives. To quote the great American organizational theorist Russell Ackoff, “Reality is multidimensional.” To add to this, it is also constructed. Multiple perspectives help us understand things a little better and supply views we were lacking. Systems, too, are constructed, and a system can change how it appears depending on your perspective.

  5. Go inside and outside the system:

We cannot understand a system by staying outside it all of the time; nor can we understand it by staying inside it all of the time. Go to the gemba (the actual workplace) to grasp the situation and better understand what is going on; then come away from it to reflect. We can understand a system only in the context of its environment and the interactions going on.

  6. Have variety:

Similar to #4, variety is your friend in a complex system. Variety leads to richer interactions that help new patterns develop. If everybody were the same, we would keep reinforcing the same ideas and lack the requisite variety to counter the variety present in our environment. Our environment is not homogeneous.

  7. Aim for Effectiveness and not Efficiency:

In complex systems, we should aim for effectiveness. The famous Toyota heuristic, “go slow to go fast,” applies here. Since each event is novel, we cannot always aim for efficiency.

  8. Use Heuristics and not Rules:

Heuristics are flexible, while rules are rigid. Rules are based on past experiences and lack the variety needed in the current context. Heuristics allow flexibility, letting the agents change tactics as needed.

  9. Experiment frequently with safe-to-fail small experiments:

As part of prodding the environment, we should engage in frequent and small safe-to-fail experiments. This helps us improve our understanding.

  10. Understand that complexity is always nonlinear, thus keep an eye out for emerging patterns:

Complexity is nonlinear, which means that a small change can have an unforeseen and large outcome. Thus, we should watch for emerging patterns and determine our next steps: move toward what we have identified as “good” and away from what we have deemed “bad.” Patterns always emerge bottom-up. We may not be able to design the patterns, but we may be able to recognize the patterns being developed and potentially influence them.
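This sensitivity to small changes can be sketched with a classic toy model. The logistic map is my own illustration here, not something from complexity literature cited in this post; it simply shows how a one-in-a-million difference in a starting condition produces a wildly different trajectory in a nonlinear system.

```python
# Toy illustration of nonlinearity: the logistic map x -> r*x*(1-x).
# A tiny change in the starting point produces a very different path,
# so outcomes cannot be extrapolated linearly from small causes.
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and collect the values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # starting point differs by one part in a million

# The gap between the two trajectories grows far beyond the initial
# one-in-a-million difference within a few dozen steps.
print(max(abs(x - y) for x, y in zip(a, b)))
```

This is why we observe emerging patterns rather than predict outcomes: in such systems, forecasting from initial conditions breaks down quickly.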

Final Words:

My post has been a very simple look at CAS. There are a lot more attributes to CAS that are worth pursuing and learning. Complexity Explorer from the Santa Fe Institute is a great place to start. I will finish with a great quote from the retired United States Army four-star general Stanley McChrystal, from his book, Team of Teams:

“The temptation to lead as a chess master, controlling each move of the organization, must give way to an approach as a gardener, enabling rather than directing. A gardening approach to leadership is anything but passive. The leader acts as an “Eyes-On, Hands-Off” enabler who creates and maintains an ecosystem in which the organization operates.”

Always keep on learning…

In case you missed it, my last post was Conceptual Metaphors in Lean:

Conceptual Metaphors in Lean:

Vitruvian Man blueprint.

In today’s post, I am looking at conceptual metaphors in Lean. A conceptual metaphor is a concept in cognitive linguistics, first introduced by George Lakoff and Mark Johnson in their 1980 book, Metaphors We Live By. They noted that:

Human beings structure their understanding of their experiences in the world via “conceptual metaphors” derived from basic sensorimotor and spatial concepts (spatial primitives and image schemata) learned during infancy and early childhood. 

Metaphors are normally thought of as a way to explain something further. Aristotle noted that metaphors made learning pleasant. “To learn easily is naturally pleasant to all people, and words signify something, so whatever words create knowledge in us are most pleasant.” However, the conceptual metaphor theory goes beyond the metaphor being just a linguistic/artistic phenomenon. The conceptual metaphor theory notes that metaphors are primarily used to understand abstract concepts, and that these are used subconsciously on an everyday basis. The conceptual metaphors are treated as an inevitable part of our thinking and reasoning. Lakoff and Johnson note that:

The essence of metaphor is understanding and experiencing one kind of thing in terms of another… Metaphors are fundamentally conceptual in nature; metaphorical language is secondary. Conceptual metaphors are grounded in everyday experience. Abstract thought is largely, though not entirely, metaphorical. Metaphorical thought is unavoidable, ubiquitous, and mostly unconscious. Abstract concepts have a literal core but are extended by metaphors, often by many mutually inconsistent metaphors. Abstract concepts are not complete without metaphors. For example, love is not love without metaphors of magic, attraction, madness, union, nurturance, and so on.

One form of conceptual metaphor is an “Ontological Metaphor” – a metaphor in which an abstraction, such as an activity, emotion, or idea, is represented as something concrete, such as an object, substance, container, or person. A good example of an ontological metaphor in lean is waste. We are taught that we should seek total elimination of waste in lean. We are giving a physical representation to the abstract concept of “waste”. Waste is an adversary that can hurt us, steal from us, and destroy us. To paraphrase Lakoff: (I have inserted Waste in his example)

The ontological metaphor of waste allows us to make sense of phenomena in the world in human terms—terms that we can understand on the basis of our own motivations, goals, actions, and characteristics. Viewing something as abstract as waste in human terms has an explanatory power of the only sort that makes sense to most people. When we are suffering substantial economic losses, WASTE IS AN ADVERSARY metaphor at least gives us a coherent account of why we’re suffering these losses.

It is also interesting to see how the concept of waste got translated as it was transplanted from Toyota to the West. Taiichi Ohno, the father of TPS, saw waste in terms of man-hours and labor density. Outside Toyota, elimination of waste was seen as a means to increase capacity, a pursuit of efficiency alone.

Labor density is the ratio of work and motion.

Work/Motion = Labor Density

In the equation, work indicates the action carried out to forward a process or enhance the added value. Ohno realized that the correct way to improve labor density is to keep the numerator (work) the same, while decreasing the non-value added portion of motion. The denominator is an impersonal motion and the numerator is work with a human touch. The act of intensifying labor density or of raising the labor utility factor means to make the denominator smaller (by eliminating waste) without making the numerator larger.
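Ohno's ratio can be sketched with numbers. The figures below are hypothetical, chosen only to illustrate the direction of improvement; they are not Ohno's data.

```python
# Hypothetical numbers illustrating Ohno's labor-density ratio.
# "Motion" is everything the operator does in an hour;
# "work" is the value-adding portion of that motion.
work = 30    # minutes of value-adding action per hour (hypothetical)
motion = 60  # total minutes of motion per hour (hypothetical)

labor_density = work / motion
print(labor_density)  # 0.5

# Improving the ratio Ohno's way: keep work (the numerator) constant
# and remove wasted motion, rather than asking the operator to do more.
wasted_motion_removed = 20
improved_density = work / (motion - wasted_motion_removed)
print(improved_density)  # 0.75
```

The point of the sketch is that the density rises without the numerator growing: the operator is not working harder, only moving less wastefully.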

Kiichiro Toyoda, Toyota’s president in 1949, issued the challenge to catch up with the United States within three years. America’s productivity was thought to be eight or nine times better than Japan’s. Ohno realized that this was not because the Americans were physically exerting ten times more than the Japanese. “It was probably that the Japanese are wasteful in their production system”, Ohno thought. Ohno’s view was that the total elimination of waste should result in man-hour reduction. Toyota’s man-hour reduction movement is aimed at reducing the overall number of man-hours by eliminating wasted motions and transforming them into work. Toyota succeeded because they realized that elimination of waste was an expression of their respect for humanity. The respect of humanity portion may have gotten lost in translation when the ontological metaphor of “waste” was spread outside Toyota. Toyota noted:

Employees give their valuable energy and time to the company. If they are not given the opportunity to serve the company by working effectively, there can be no joy. For the company to deny that opportunity is against the principle of respect for humanity. People’s sense of value cannot be satisfied unless they know they are doing something worthwhile.

Ohno’s first go-to training tool was to ask the supervisor to try doing the same work with fewer operators. The elimination of waste becomes easier when the operators have a visual control system for seeing waste as either time on hand or stock on hand, and when they avoid overproduction via Kanban. Ohno’s view of the elimination of waste was to be effective and efficient by producing only what is needed. The idea of elimination of waste in the West may have become pursuing just efficiency and dropping effectiveness. Waste elimination can be viewed as a means to increase capacity, and this leads to the question – why should we stop at the daily required quantity of 100 units now that the improvement activities have yielded us the capacity to produce up to 125 units a day? Lean has become “doing more with less,” while Ohno’s goal was “doing just what is needed with less,” being efficient and effective even if it meant machines remained idle.

Final Words:

The term “Lean” itself is a conceptual metaphor. “Lean” refers to being fit, as opposed to being obese. In “Lean”, elimination of waste is about “trimming the fat”. The metaphor of “lean” represents the aesthetics of being beautiful and healthy – perhaps a notion of being in charge and knowing what needs to be done. This could be viewed as the Western philosophy of an outward focus on external beauty, whereas the Eastern philosophy is more inwardly focused. In Japanese culture, the concept of harmony is imperative. This is part of the ‘respect for humanity’ side of the Toyota Production System.

I welcome the reader to explore the concept of conceptual metaphor. You may also like one of my older posts – Would Ohno Change the Term “Lean”?

Always keep on learning…

In case you missed it, my last post was Chekhov’s Gun at the Gemba:

Chekhov’s Gun at the Gemba:


One of my favorite things to do when I learn new and interesting information is to apply it in a different area to see if I can gain further insight. In today’s post, I am looking at Chekhov’s gun, named after the famous Russian author Anton Chekhov (1860-1904), and how it relates to the Gemba. Anton Chekhov is regarded as a master of the short story. In the short story genre, there is a limited amount of resources with which to tell your story. Chekhov’s gun is a principle that states that everything should have a purpose. Chekhov said:

Remove everything that has no relevance to the story. If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off. If it’s not going to be fired, it shouldn’t be hanging there.

Chekhov also stated:

“One must never place a loaded rifle on the stage if it isn’t going to go off. It’s wrong to make promises you don’t mean to keep.” [From Chekhov’s letter to Aleksandr Semenovich Lazarev in 1889]. Here the “gun” is a monologue that Chekhov deemed superfluous and unrelated to the rest of the play.

“If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don’t put it there.” [From Gurlyand’s Reminiscences of A. P. Chekhov, in Teatr i iskusstvo 1904, No. 28, 11 July, p. 521]. Source: Wikipedia.

How does this relate to Gemba? Gemba is the actual place where you do your work. When you design the work station with the operator, you need to make sure that everything has a place and everything has a purpose. Do not introduce an item to the station that has no need to be there. Do not introduce a step or an action that does not add value. This idea also applies to the Motion Economy. Let’s look at some of the Industrial Engineering maxims from the Principles of Motion Economy that are akin to Chekhov’s gun:

  • There should be a definite and fixed place for all tools and materials.
  • Tools, materials, and controls should be located close to and directly in front of the operator.
  • Materials and tools should be located to permit the best sequence of motions.
  • Two or more jobs should be worked upon at the same time or two or more operations should be carried out on a job simultaneously if possible.
  • The number of motions involved in completing a job should be minimized.

Chekhov’s gun is not necessarily talking about foreshadowing in a movie or a book. A gun should not be shown on the wall as a decoration; it needs to come into the story at some point to add value. The author should make use of every piece introduced into the story. Everything else can be removed. I love this aspect of Chekhov’s gun. In many ways, as lean practitioners, we are doing the same. We are looking at an operation or a process, and we are trying to eliminate the unwanted steps/items/motions. When you work in a strictly regulated industry such as medical devices, the point about line clearance also comes up when you ponder Chekhov’s gun. Line clearance refers to the removal of materials, documentation, equipment, etc. from the previous shop order/work order to prevent any inadvertent mix-ups that can be quite detrimental to the end user. Only keep things that are necessary at the station.

I will finish with a great lesson from Anton Chekhov that is very pertinent to improvement activities.

Instructing in cures, therapists always recommend that “each case be individualized.” If this advice is followed, one becomes persuaded that those means recommended in textbooks as the best, means perfectly appropriate for the template case, turn out to be completely unsuitable in individual cases.

Always keep on learning…

In case you missed it, my last post was The Confirmation Paradox:

The Confirmation Paradox:

albino raven

In today’s post I will be looking at the Confirmation Paradox, or the Black Raven Paradox, by Carl Hempel. Let’s suppose that you have never seen a raven in your life. You come across a raven one fine morning and observe that it is black in color. Now that you have seen one, you suddenly start paying more attention and you start seeing ravens everywhere. Each time you see a raven, you observe that its color is black. Being the good scientist that you are, you come to a hypothesis – all ravens are black. This is also called induction: coming to a generalization from many specific observations.

Now you would like to confirm your hypothesis. You ask your good friend, Carl Hempel, to help. Carl suggests that you start looking at things around his house that are not black and not ravens, like his red couch, the yellow tennis ball, etc. He suggests that each of those observations supports your hypothesis that all ravens are black. You are rightfully puzzled by this. This is the confirmation paradox. Carl Hempel was a German-born philosopher who later immigrated to America.

Carl Hempel is correct with this claim. Let’s look at this further. “All ravens are black” can be restated as “whatever is not black is not a raven.” This is a logical equivalent of your hypothesis. It would mean that if you observe something that is not black and is not a raven, the observation supports your hypothesis. Thus, if you observe a red couch, it is not black and it is also not a raven, and therefore it supports your hypothesis that all ravens are black.
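The logical equivalence can be checked mechanically. Here is a minimal sketch (my own illustration) that enumerates all four combinations of “is a raven” and “is black” and confirms that the hypothesis and its contrapositive never disagree:

```python
from itertools import product

# "All ravens are black" as an implication: raven -> black,
# which is (not raven) or black.
def all_ravens_are_black(is_raven, is_black):
    return (not is_raven) or is_black

# The contrapositive: not black -> not raven,
# which is black or (not raven).
def non_black_is_non_raven(is_raven, is_black):
    return is_black or (not is_raven)

# Brute-force check over every possible kind of object.
for is_raven, is_black in product([False, True], repeat=2):
    assert all_ravens_are_black(is_raven, is_black) == \
           non_black_is_non_raven(is_raven, is_black)
print("The hypothesis and its contrapositive are logically equivalent.")
```

The only object that falsifies either form is a non-black raven; every other object, including the red couch, is consistent with both.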

How do we come to terms with this? Surely, it does not make sense that a red couch supports the hypothesis that all ravens are black. The first point to note here is that one can never prove a hypothesis via induction. Induction requires the statement to be qualified with a level of confidence or certainty. This means that the level of “support” that each observation provides depends upon the type of information gained from that observation.

I will explain this further with the concept of information from Claude Shannon’s viewpoint. Information is all around us. Wherever you look, you can get information. Claude Shannon quantified this in terms of entropy, with the bit as its unit. He described it as the amount of surprise, or the reduction of uncertainty. Information is inversely proportional to the probability of an event: the less probable an event is, the more information it contains. Let’s look at the schematic below:


The black triangle represents all the black ravens in our observable universe. The blue square represents all of the black things in our observable universe. The red circle represents all the things in the observable universe. Thus, the set of black ravens is a subset of all black things, which in turn is a subset of all things. From a probability standpoint, the probability of observing a black raven is much smaller than the probability of observing a black thing, since there are proportionally a lot more black things in existence. Similarly, the probability of observing a non-black thing is much higher, since there are a lot more non-black things in existence. Thus, from an information standpoint, the information you get from observing a non-black thing that is not a raven is very small. Logically, this observation does provide additional support; however, the information content is minuscule. Please note that, on the other hand, observing a black raven also supports the statement that all non-black things are non-ravens.
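This can be sketched with Shannon's self-information formula. The proportions below are hypothetical; only their ordering (black ravens are rare, non-black things are common) matters for the argument.

```python
import math

# Hypothetical probabilities of a single random observation from the
# schematic's universe of things. The numbers are made up; only the
# ordering (rare vs. common) matters.
p_black_raven = 0.0001    # observing a black raven is rare
p_non_black_thing = 0.9   # observing some non-black thing is common

def information_bits(p):
    """Shannon's self-information: rarer observations carry more bits."""
    return -math.log2(p)

print(information_bits(p_black_raven))     # ~13.3 bits of surprise
print(information_bits(p_non_black_thing)) # ~0.15 bits of surprise
```

Both observations “support” the hypothesis in Hempel's logical sense, but the red couch delivers a tiny fraction of a bit, while a raven sighting delivers many bits. The paradox dissolves once support is weighted by information content.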

When you first saw a black raven, you had no idea about such a thing existing. The information content of that observation was high. After you started observing more ravens, the information you got from each observation started diminishing. Even if you made 10,000 observations of black ravens, you cannot prove (100% confirm) that all ravens are black. This is the curse of induction. This is where Karl Popper comes in. Karl Popper, an Austrian-British philosopher, had the brilliant insight that good hypotheses should be falsifiable. We should try to look for observations that would fail our hypothesis. His insight was in the asymmetry of falsifiability: you may have 100,000 observations supporting your hypothesis, yet all you need is a single observation to fail it. The most popular example of this is the case of the black swan. The belief that all swans are white was discredited when black swans were discovered in Australia. To come back to the information analogy, the observation of a white raven has a lot more information content, powerful enough to break down your hypothesis, since the occurrence of a white (albino) raven in nature is very low. Finding a white raven is quite rare and thus carries the most information, or surprise.

This also brings up the concept of Total Evidence, put forth by Rudolf Carnap, a German-born philosopher. He stated that in the application of inductive logic to a given knowledge situation, the total evidence available must be taken as the basis for determining the degree of confirmation. Let’s say that as we learned more about ravens and other birds, we came across the concept of albinism in other animals and birds. This should make us challenge our hypothesis, since we know that albinism can occur in nature, and thus it is not farfetched that it can occur in ravens as well. The concept of Total Evidence is interesting because even though it has the term “Total” in it, it is beckoning us to realize that we cannot ever have total information. It is a reminder for us to consider all possibilities and to understand where our mental models break down. In theory, one could also make whimsical statements such as “All unicorns are rainbow-colored,” and say that the observation of a white shoe supports it based on the confirmation paradox. Total evidence in this case would require us to have made at least one observation of a rainbow-colored unicorn.

I will finish with another paradox that is similar to the confirmation paradox – the 99-foot man paradox by Paul Berent. Up to this point, we have been looking at qualitative data (black versus not black, or raven versus not raven). Let’s say that you have a hypothesis that says all men are less than 100 feet tall. You surveyed over 100,000 men and found all of them to be less than 100 feet tall. One day you heard about a new circus company coming to town. Their main attraction is a 99-foot man. You go to see him in person and sure enough, he is 99 feet tall. Now, your hypothesis is still intact, since the 99-foot man is technically less than 100 feet tall. However, this adds doubt to your mind. You realize that if there is a 99-foot man, then the occurrence of a 100-foot man is not farfetched. The paradox occurs because the observation of a 99-foot man strengthens your hypothesis, but at the same time it also weakens it.

Always keep on learning…

In case you missed it, my last post was Know Your Edges:

Know Your Edges:


In today’s post I will start with a question: “Do you know your edges?”

Edges are boundaries where a system or a process (depending upon your construction) breaks down or changes structure. Our preference, as the manager or the owner, is to stay in our comfort zone – a place where we know how things work, where we can predict how things will go, and where we have the most certainty. Take, as a simple example, your daily commute to work: chances are high that you always take the same route. This is what you know, and you have high certainty about how long it will take you to get to work. Counterintuitively, the more certainty you have about something, the less information you have to gain from it. Our natural tendency is to seek more certainty about things, and we hate uncertainty. We think of uncertainty as a bad thing. If I can use a metaphor, uncertainty is like medicine – you need it to stay healthy!

To discuss this further, I will look at the concept of variety from Cybernetics. Variety is a concept that was put forth by William Ross Ashby, a giant in the world of Cybernetics. Simply speaking, variety is the number of states. If you look at a stop light, generally it has three states (Red, Yellow and Green). In other words, the stop light’s variety is three (ignoring flashing red and no light). With this, it is able to control traffic. When the stop light is able to match the ongoing traffic, everything is smooth. But when the volume of traffic increases, the stop light is not able to keep up, and the system reacts by slowing down the traffic. This shows that the variety in the environment is always greater than the variety available internally. The external variety also equates with uncertainty. Scaling up, let’s look at a manufacturing plant. The uncertainty comes in the form of the 6Ms (Man, Machine, Method, Material, Measurement and Mother Nature). The manager’s job is to reduce the uncertainty. This is done by filtering the variety imposed from the outside, magnifying the variety that is available internally, or looking at ways to improve the requisite variety. Ashby’s Law of Requisite Variety can be stated as – “only variety can absorb variety.”
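Counting variety can be made concrete with the stop light example. This is a minimal sketch of my own; the traffic situations are hypothetical labels, chosen only to show an environment with more states than its regulator.

```python
# Variety, in Ashby's sense, is simply the number of distinct states.
stoplight_states = {"red", "yellow", "green"}

# Hypothetical situations the light must respond to:
traffic_states = {"empty", "light", "moderate", "heavy", "gridlock"}

regulator_variety = len(stoplight_states)    # 3
environment_variety = len(traffic_states)    # 5

# Ashby's law: only variety can absorb variety. With fewer internal
# states than external states, some situations cannot be told apart,
# and the system degrades (here, traffic slows down).
if regulator_variety < environment_variety:
    print("Regulator lacks requisite variety; some disturbances get through.")
```

The manager's levers map onto this sketch directly: filtering external variety shrinks the environment set, while magnifying internal variety grows the regulator set.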

All organizations are sociotechnical systems. This also means that in order to sustain themselves, they need to be complex adaptive systems. In order to improve its adaptability, the system needs to keep learning. It may be counterintuitive, but uncertainty is required for a complex adaptive system to keep learning and to maintain the requisite variety to sustain itself. Thus, the push to stay away from uncertainty, or to stay in the comfort zone, could actually be detrimental. Metaphorically, staying in the comfort zone is staying away from the edges, where there is more uncertainty. After a basic level of stability is achieved, there is not much information available in the center (away from the edges). Since the environment is always changing, the organization has to keep updating its information to adapt and survive. This means that the organization should engage in safe-to-fail experiments and move away from its comfort zone to keep updating its information. The organization has to know where the edges are, and where the structures break down. Safe-to-fail experiments increase the solution space of the organization, making it better suited for challenges. These experiments are fast, small and reversible, and are meant to increase the experience of the organization without risks. The organization cannot remain static and has to change with time. The experimentation away from the comfort zone provides direction for growth. It also shows where things can get catastrophic, so that the organization can be better prepared and move away from that direction.

This leads me to the concept of the “fundamental regulator paradox,” developed by Gerald Weinberg, an American computer scientist. When a system gets really good at what it does and nothing ever goes wrong, it becomes impossible to tell how well it is working. When strict rules and regulations are put in place to maintain “perfect order,” they can actually result in the opposite of what they were originally meant for. The paradox is stated as:

The task of a regulator is to eliminate variation, but this variation is the ultimate source of information about the quality of its work. Therefore, the better job a regulator does, the less information it gets about how to improve.

This concept also tells us that trying to stay in the comfort zone is never good and that we should not shy away from uncertainty. Exploring away from the comfort zone is how we can develop the adaptability and experience needed to survive.

Final Words:

This post is a further expansion from my recent tweet.

Information is most rich at the edges. Information is at its lowest in the center. Equilibrium also lies away from the edges.

The two questions, “How good are you at something?” and “How bad are you at something?” may be logically equivalent. However, there is more opportunity to gain information from the second question since it leads us away from the comfort zone.

I will finish with a lesson from one of my favorite TV Detectives, D.I Richard Poole from Death in Paradise.

Poole noted that solving murders was like solving jigsaw puzzles. One has to work from the corners, then the edges, and then move towards the middle. Then everything will fall in line and start to make sense.

Always keep on learning…

In case you missed it, my last post was Bootstrap Kaizen:

Bootstrap Kaizen:


I am writing today about “bootstrap kaizen”. This is something I have been thinking about for a while. Wikipedia describes bootstrapping as “a self-starting process that is supposed to proceed without external input.” The term was developed from a 19th century adynaton – “pull oneself over a fence by one’s bootstraps.” Another description is to start with something small that over time turns into something bigger – a compounding effect from something small and simple. One part of the output is fed back into the input so as to generate a compounding effect. This is the same concept as booting computers, where upon startup a computer runs a small piece of code from the BIOS, which loads the full operating system. I liked the idea of bootstrapping when viewed with the concept of kaizen, or “change for the better,” in Lean. Think about how the concept of improvement can start small, and eventually, with iterations and feedback loops, can make the entire organization better.
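The compounding effect of feeding output back into the input can be sketched numerically. The 2% improvement rate below is hypothetical; the point is only the shape of the curve.

```python
# Toy sketch of bootstrap kaizen: each cycle, a small improvement is
# fed back into the capability that produces the next cycle.
capability = 1.0          # starting capability, normalized
improvement_rate = 0.02   # hypothetical 2% gain fed back each cycle

for cycle in range(100):
    capability *= (1 + improvement_rate)

# One hundred compounded 2% gains multiply capability roughly 7.2x.
# One hundred separate, non-compounding 2% gains would total only 3.0x.
print(round(capability, 2))
```

The gap between the two figures is the whole argument for feedback loops in improvement work: the gain comes from improving the thing that does the improving.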

As I was researching along these lines, I came across Doug Engelbart. Doug Engelbart was an American genius who gave us the computer mouse, and he was part of the team that gave us the internet. Engelbart was way ahead of his time. He was also famous for the Mother of All Demos, which he gave in 1968 (way before Windows or Apple events). Engelbart’s goal in life was to help create truly high-performance human organizations. He understood that while population and gross product were increasing at a significant rate, the complexity of man’s problems was growing still faster. On top of this, the urgency with which solutions must be found became steadily greater. The product of complexity and urgency had surpassed man’s ability to deal with it. He vowed to increase the effectiveness with which individuals and organizations work at intelligent tasks. He wanted better and faster solutions to tackle the “more-complex” problems. Engelbart came up with “bootstrapping our collective IQ.”

He explained:

Any high-level capability needed by an organization rests atop a broad and deep capability infrastructure, comprised of many layers of composite capabilities. At the lower levels lie two categories of capabilities – Human-based and Tools-based. Doug Engelbart called this the Augmentation System.

Augmentation system

The human-based capability infrastructure is boosted by the tool-based capability infrastructure. As we pursue significant capability improvement, we should orient to pursuing improvement as a multi-element co-evolution process of the Tool System and Human System. Engelbart called this a bootstrapping strategy, where multi-disciplinary research teams would explore the new tools and work processes, which they would all use immediately themselves to boost their own collective capabilities in their lab(s).

Doug Engelbart’s brilliance was that he identified the link between the human system and the tool system. He understood that developing new tools improves our ability to develop even more new tools. He came up with the idea of “improving the improvement process.” I was enthralled when I read this because I was already thinking about “bootstrap kaizen.” He gave us the idea of the “ABC model of Organizational Improvement.” In his words:

    A Activity: ‘Business as Usual’. The organization’s day to day core business activity, such as customer engagement and support, product development, R&D, marketing, sales, accounting, legal, manufacturing (if any), etc. Examples: Aerospace – all the activities involved in producing a plane; Congress – passing legislation; Medicine – researching a cure for disease; Education – teaching and mentoring students; Professional Societies – advancing a field or discipline; Initiatives or Nonprofits – advancing a cause.

    B Activity: Improving how we do that. Improving how A work is done, asking ‘How can we do this better?’ Examples: adopting a new tool(s) or technique(s) for how we go about working together, pursuing leads, conducting research, designing, planning, understanding the customer, coordinating efforts, tracking issues, managing budgets, delivering internal services. Could be an individual introducing a new technique gleaned from reading, conferences, or networking with peers, or an internal initiative tasked with improving core capability within or across various A Activities.

    C Activity: Improving how we improve. Improving how B work is done, asking ‘How can we improve the way we improve?’ Examples: improving effectiveness of B Activity teams in how they foster relations with their A Activity customers, collaborate to identify needs and opportunities, research, innovate, and implement available solutions, incorporate input, feedback, and lessons learned, run pilot projects, etc. Could be a B Activity individual learning about new techniques for innovation teams (reading, conferences, networking), or an initiative, innovation team or improvement community engaging with B Activity and other key stakeholders to implement new/improved capability for one or more B activities.

This approach can be viewed as a nested set of feedback loops as below:


Engelbart points out that bootstrapping has multiple immediate benefits:

1) Providers grow increasingly faster and smarter at:

  • Developing what they use – providers become their own most aggressive and vocal customer, giving themselves immediate feedback, which creates a faster evolutionary learning curve and more useful results
  • Integrating results – providers are increasingly adept at incorporating experimental practices and tools of their own making, and/or from external sources, co-evolving their own work products accordingly, further optimizing usefulness as well as downstream integratability
  • Compounding ROI – if the work product provides significant customer value, providers will start seeing measurable results in raising their own Collective IQ, thus getting faster and smarter at creating and deploying what they’re creating and deploying – results will build like compounding interest
  • Engaging stakeholders – providers experience first-hand the value of deep involvement by early adopters and contributors, blurring the distinction between internal and external participants, increasing their capacity to network beneficial stakeholders into the R&D cycle (i.e. outside innovation is built in to the bootstrapping strategy)
  • Deploying what they develop – as experienced users of their own work product, providers are their own best customers engaging kindred external customers early on, deployment/feedback becomes a natural two-way flow between them

2) Customers benefit commensurately:

  • End users benefit in all the ways customers benefit through outside innovation
  • Additionally, end users can visit provider’s work environment to get a taste and even experience firsthand how they’ve seriously innovated the way they work, not in a demo room, but in their actual work environment
  • Resulting end products and services, designed, rigorously co-evolved, shaken down, and refined by stakeholders, should be easier and more cost-effective to implement, while yielding greater results sooner than conventionally developed products and services

Final Notes:

I love that Engelbart’s Augmentation System points out that tools are to be used to augment human capability, and that this should ultimately be about system-level development. His idea of bootstrapping explains how “kaizen” thinking should work in Lean.

Interestingly, Engelbart understood that the Human side of the Augmentation System can be challenging. A special note on the Human System: of the two, Engelbart saw the Human System as a much larger challenge than the Tool System – much more unwieldy and staunchly resistant to change, and all the more critical to change because, on the whole, the Human System tended to be self-limiting and the biggest gating factor in the whole equation. It’s hard for people to step outside their comfort zone and think outside the box, and harder still to think outside whatever paradigm or world view they occupy. Those who think that the world is flat, and that science and inquiry are blasphemous, will not consider exploring beyond the edges, and will silence great thinkers like Socrates and Galileo.

As I was researching this post, I also came across the phrase “eating your own dog food,” an idea made famous by software companies. The idea behind the phrase is that we should use our own products in our day-to-day business operations (“Deploying what they develop” above). In a similar vein, we should engage in improvement activities with tools that we make internally. This builds our improvement muscles so that we are able to tweak off-the-shelf equipment to make it work for us. This is the true spirit of the Augmentation System.

When you are thinking about getting new tools or equipment for automation, make sure that it is strictly to augment the human system. Unless we think in these terms, we will not be able to improve the system as a whole. We should focus more on the C activities. I highly encourage the reader to learn more about Doug Engelbart.

Always keep on learning…

In case you missed it, my last post was A “Complex” View of Quality: