Hawthorne to Autonomous Race Cars

Introduction

Between 1877 and 1929, the Bell System extended its quality assurance capabilities, which were essential to the reliability of its equipment and to providing economical service. In Cicero, Illinois, Western Electric's Hawthorne plant opened in 1904 and became the focal point for a change in organizational thinking. A great deal of innovation in manufacturing methods occurred at Hawthorne, and those insights remain useful to today's manufacturing and knowledge economies. The company's adaptation of probability theory led to the creation of a statistical quality control (SQC) program. From Shewhart to Deming, we can trace the development of the statistical tools used by quality professionals today.

Brief Introduction to Probability Theory

Randomness is all around us, and probabilistic thinking plays an important role in almost every scientific field. Probability theory is a mathematical framework for analyzing the likelihood of events, and that role is crucial in disciplines that involve large-scale data collection and interpretation. Probability is the degree to which something is likely to occur; probability theory, for example, can be used to predict how many units of a product will sell on a given day. Statistics and engineering have long made use of it, and it has helped solve problems in gambling, insurance, and weather forecasting; the most familiar use is figuring out the odds of winning a poker hand. Science also uses probabilities to learn more about natural phenomena such as earthquakes and hurricanes, predicting their likelihood at different levels of severity based on factors such as location and time period. Probability is generally understood in terms of repeated observations or experiments: the more often an experiment is repeated, the closer the observed relative frequency of an outcome comes to its probability.

Basics

Probability theory suggests that certain events occur with a specific frequency. A fair coin toss, with its two possible outcomes of heads or tails, is the classic probabilistic experiment: each flip gives you a 50% chance of heads and a 50% chance of tails. In a short series of tosses, the proportion of heads may differ noticeably from 50%, but in the long run it is bound to get closer and closer to 50% as the number of flips increases. Probability theory attempts to predict how often events will occur within a particular timeframe, which is helpful when determining how likely it is that thunderstorms will occur tomorrow afternoon.
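To make that long-run behavior concrete, here is a minimal Python sketch (the sample sizes and random seed are arbitrary choices for illustration, not anything from the original discussion) that simulates fair coin flips and shows the relative frequency of heads drifting toward 0.5 as the number of flips grows.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate fair coin flips and track the relative frequency of heads.
for n_flips in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    print(f"{n_flips:>7} flips: relative frequency of heads = {heads / n_flips:.4f}")
```

Small samples wander well away from 0.5; the larger runs settle close to it, which is the sense in which probability describes long-run frequency.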

It All Started With a Card Game

In the 17th century, Blaise Pascal and Pierre de Fermat exchanged letters analyzing games of chance, and the idea that mathematics can be used to determine likelihood arose from that correspondence. They used the concept of "probability" to predict the outcomes of card games, so the history of probability begins with the study of games of chance. Around 1630, people started to use mathematical ideas to describe and predict their luck in games such as backgammon and dice. Pascal and Fermat expanded these ideas in the seventeenth century, and they eventually developed into the branch of mathematics known as probability theory. In the late seventeenth and early eighteenth centuries, two major mathematical breakthroughs led to modern ideas about probability: Jacob Bernoulli made significant progress on problems involving infinite series in 1695, and Abraham de Moivre's work followed in 1718. These developments helped to make the later work of Thomas Bayes and Pierre-Simon Laplace possible. In 1812, Laplace published a book on probability theory that remained a significant reference for over a century; he also developed methods still used in statistics today.

Statistical Quality Control

The development of SQC occurred as the use of probability theory in science and business was gaining momentum. Many industries had not yet made use of probabilistic data, even though it had long been employed in insurance. For example, in his 1921 classic, Risk, Uncertainty and Profit, Frank H. Knight of the University of Chicago stressed the importance of risk analysis but did not see any role for probability theory outside the insurance industry. In "The Control of Quality in Manufacturing," G. S. Radford discussed ways to analyze inspection data using the properties of the normal, bell-shaped curve. Radford's ideas influenced some inspection engineers at the Bell System; having seen how probability theory strengthened telephone traffic management, they recognized the importance of his concepts. Bell System engineers had used probability theory in 1908 to study traffic patterns and, in 1912, applied it to the positioning of loading coils on the transcontinental telephone line.

Dr. Harold F. Dodge is often credited as one of the developers of statistical quality control. He is widely renowned for creating acceptance sampling plans that put inspection operations on a scientific basis with respect to controllable risks. Dodge headed inspection methods and results at the Bell System, translating theory into practical methods useful to plant operators. He developed the basic concepts of acceptance sampling, such as consumer's risk, producer's risk, double sampling, lot tolerance percent defective (LTPD), and average outgoing quality limit (AOQL).
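To give a feel for how those risks are quantified, here is a minimal Python sketch of a single sampling plan (the lot quality levels, sample size, and acceptance number are hypothetical values chosen for illustration, not figures from Dodge's published plans). It uses the binomial distribution to compute the chance of accepting a lot, from which producer's and consumer's risk can be read off.

```python
from math import comb

def accept_probability(sample_size: int, acceptance_number: int, defect_rate: float) -> float:
    """Probability that a sample contains at most `acceptance_number` defectives,
    assuming each unit is defective independently with probability `defect_rate`
    (a binomial approximation to sampling from a large lot)."""
    return sum(
        comb(sample_size, k) * defect_rate**k * (1 - defect_rate) ** (sample_size - k)
        for k in range(acceptance_number + 1)
    )

# Hypothetical single sampling plan: inspect 80 units, accept the lot if <= 2 are defective.
n, c = 80, 2

# Producer's risk: chance a "good" lot (1% defective) is wrongly rejected.
print(f"Producer's risk at 1% defective: {1 - accept_probability(n, c, 0.01):.3f}")

# Consumer's risk: chance a "bad" lot (say 6% defective, the LTPD) is wrongly accepted.
print(f"Consumer's risk at 6% defective: {accept_probability(n, c, 0.06):.3f}")
```

Tightening the plan (a larger sample or a smaller acceptance number) trades one risk against the other, which is exactly the balancing act Dodge's plans were designed to manage.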

In 1918, Shewhart joined Western Electric Company, the Bell System's manufacturing arm, to assist its engineers in improving the quality of telephone hardware. From 1924 to 1926, Shewhart and Dodge prepared more than a hundred internal memoranda on a wide range of theoretical and practical issues related to applying probability to manufacturing inspection. One such memo, dated May 16, 1924, changed everything. His boss, George Edwards, recalled a note Shewhart prepared that was about a page long; a third of the page was devoted to a simple diagram we would all recognize today as a statistical process control chart. In his work, Shewhart emphasized the benefit of reducing variation in manufacturing processes and the fact that repeated adjustments cause variation to rise and quality to deteriorate. This became known as Statistical Process Control.

Shewhart introduced the control chart as a tool to differentiate between assignable-cause and chance-cause variation. To predict future output and manage processes economically, Shewhart argued, the production process must be brought into statistical control, where only chance-cause variation exists. He understood that data from physical processes rarely produce a perfect "normal distribution curve" (also known as a "bell curve"); observed manufacturing variation does not behave the same way as variation found in nature, such as Brownian motion. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process's causal system at all times. Shewhart worked to advance this thinking at Bell Telephone Laboratories from its founding in 1925 until his retirement in 1956.
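To show what such a chart does in practice, here is a minimal Python sketch of an individuals control chart (the measurements are made-up values and the 3-sigma convention is the usual textbook choice, not Shewhart's original data). It computes control limits from a series of measurements and flags points that suggest assignable-cause variation.

```python
from statistics import mean

def individuals_chart_limits(x: list[float]) -> tuple[float, float, float]:
    """Return (LCL, center line, UCL) for an individuals (X) chart.
    Process sigma is estimated from the average moving range divided by
    the constant d2 = 1.128, the usual convention for individuals charts."""
    center = mean(x)
    moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]
    sigma = mean(moving_ranges) / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical diameter measurements from a production line (mm).
data = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.02, 10.25, 10.01]

lcl, center, ucl = individuals_chart_limits(data)
print(f"LCL={lcl:.3f}  center={center:.3f}  UCL={ucl:.3f}")

# Points outside the limits suggest assignable-cause (uncontrolled) variation.
for i, value in enumerate(data, start=1):
    if not lcl <= value <= ucl:
        print(f"Sample {i} ({value}) is outside the control limits -- investigate.")
```

Points inside the limits are treated as chance-cause variation and left alone; only the flagged points warrant investigation and adjustment, which is precisely the discipline that keeps operators from over-adjusting a stable process.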

Autonomous Race Cars

As early as 1938, Shewhart's work caught the attention of W. Edwards Deming. Shewhart had published a paper in "Reviews of Modern Physics" in 1934 about measurement error in science, which Dr. Deming referred to as the Theory of Errors. Shewhart's insights led Dr. Deming to reframe his thinking. If you're familiar with the works of Dr. Deming, the rest is history.

I recently did a podcast with John Waraniak, Vice President of Vehicle Technology. Waraniak has 25 years of experience in automotive, motorsports, aerospace, and consumer products. In addition to his expertise in vehicle systems engineering, he is an expert in lean product development and motorsports program management. Waraniak is not only a student of Dr. Deming; he worked with Deming while he was at General Motors. One of the things we discussed on the podcast was how Deming's ideas were adopted at GM. After the NBC documentary "If Japan Can... Why Can't We?", Ford Motor Company scooped up Dr. Deming to work with them, and the Taurus was one of the first cars Ford produced following his principles. At GM, Waraniak was asked how the company could catch up to Ford using Deming's principles. He convinced them that the fastest way to apply these ideas would be to apply them to GM Motorsports, where GM drew on consulting advice from both Dr. Deming and Dr. Senge. Motorsports offers dramatically shortened feedback loops, and a car race is the ultimate go-to-Gemba experiment. We discuss a lot more of this in the upcoming podcast. Interestingly, Waraniak is applying his many years of learning to his current pet project, the first Indy Autonomous Challenge, held on October 23, 2021, at the Indianapolis Motor Speedway.

