Brief History of Quality Control
The history of humans creating and controlling tools goes back well over three million years. Early Stone Age tools were basic stone shapes used as hammers and for cutting, chopping, and scraping. Middle Stone Age tools (around 200,000 years ago) became somewhat more sophisticated, including hand axes and sharp-pointed stones that could cut animal skins for clothing. Later Stone Age tools added bone and ivory. It wasn't until about 10,000 years ago that humans started grinding and polishing stones and fitting parts together as tools. For a long stretch after that, not much changed in how tools were created and controlled until the 18th century introduced interchangeable parts. One of the best-known proponents of interchangeable parts was Eli Whitney, who was awarded a contract by the US government in 1798 to build 10,000 muskets. At the time, muskets were built by highly skilled craftsmen from dozens of small parts; each was custom-made by hand, and no two muskets were the same. Building muskets by hand was time-consuming and made repairs extremely difficult. Eli Whitney's big idea was to make common parts from molds so that the parts would be consistent and interchangeable.
As standardization of identical, interchangeable parts grew in the 19th century, the economics of interchangeable parts created a new opportunity, and a new problem. With all these new parts designed to an exact specification, it became apparent that no two parts are truly identical and that specifications are inherently imperfect. Manufacturing at a larger scale showed that zero defects and zero tolerance were not workable models: machines and humans making tools together is not an exact science, and insisting on exact dimensions produced an unacceptable level of scrap and waste. The counter to this waste was the introduction of tolerance limits. As these new manufacturing systems grew, it became clear that producing exact sizes for interchangeable parts, with no margin for error, was too costly. Manufacturing systems therefore moved away from the concept of an exact specification to "go, no-go" tolerance limits. The idea was that a particular part could have a minimum limit (go) and a maximum limit (no-go). Measuring devices had evolved to the point that testing these tolerance limits was now practical.
Imagine a hollow cylinder as an interchangeable part, perhaps used in a hydraulic assembly. If the bore of the cylinder is too small, the shaft won't fit; if the bore is too wide, the shaft will be too loose, and the device will be error-prone. Tolerance limits ensure a "best fit" and provide the most economically feasible mechanism for fitting parts. For example, the bore of a cylinder might be designed to be one centimeter in diameter. The "go, no-go" tolerance might be plus or minus 0.001 centimeters, putting the actual limits for fitting (creating the cylinder) about ten micrometers on either side of the nominal diameter.
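To make the arithmetic concrete, here is a minimal sketch in Python of how such a go/no-go check might be expressed. The nominal bore size and tolerance band are the ones from the example above (converted to millimeters), and the sample measurements are hypothetical.

```python
# A minimal sketch of a "go, no-go" tolerance check, assuming a nominal
# 1 cm (10.000 mm) bore with a +/- 0.001 cm (0.010 mm) tolerance band.
NOMINAL_MM = 10.000    # nominal bore diameter in millimeters
TOLERANCE_MM = 0.010   # allowed deviation on each side (ten micrometers)

LOWER_LIMIT = NOMINAL_MM - TOLERANCE_MM   # "go" limit: the gauge must fit
UPPER_LIMIT = NOMINAL_MM + TOLERANCE_MM   # "no-go" limit: the gauge must not fit

def inspect(measured_diameter_mm: float) -> str:
    """Classify a measured bore diameter against the go/no-go limits."""
    if measured_diameter_mm < LOWER_LIMIT:
        return "reject: bore too small (shaft will not fit)"
    if measured_diameter_mm > UPPER_LIMIT:
        return "reject: bore too wide (fit will be loose)"
    return "accept: within tolerance"

# Hypothetical sample measurements in millimeters.
for sample in (9.985, 10.004, 10.017):
    print(f"{sample:.3f} mm -> {inspect(sample)}")
```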
As economies moved toward even larger-scale mass production, it became clear there was substantial economic value in understanding why parts turned out defective.
In 1924, Walter Shewhart, working in the Inspection Engineering Department at Western Electric's Hawthorne Works, devised the idea of using statistics to better understand tolerance limits and defects. Shewhart's goal was to reduce the reliance on downstream inspection and inspectors. In the 1920s, Hawthorne Works employed over 40,000 workers, of whom over 5,000 were in the quality assurance inspection department. As a side note, W. Edwards Deming may have worked in one of these groups during his summer internships there. The plant's quality control model was based on inspection, where problems were typically detected at the end of the process at a high cost of rework or scrapped work, even with "go, no-go" tolerance limits.
This inspector-heavy, late-inspection model was extraordinarily costly and yielded poorer quality. As previously established, all manufacturing processes have variation, and Shewhart saw the need to understand the correlation between variation and defects. Shewhart created what we today call Statistical Process Control (SPC). SPC was designed to reduce variation and allow continuous adjustment of the process in response to different causes of variation. Shewhart's idea was that an organization could save money by using simple statistical methods to visualize variation through charts of sequential data averages. These visual charts made it easier to understand measurements and correlations and to use them as a feedback system, so the causes of defects could be predicted earlier in the process. Certain variations could be detected earlier and attributed to what Shewhart called Assignable Cause Variation or Chance Cause Variation. Assignable Cause Variation covers things like machine failures or lack of employee training. Once the Assignable Cause Variation is identified, isolated, and eliminated, the focus can shift to Chance Cause Variation; Shewhart described a process exhibiting only Chance Cause Variation as being under statistical control. That Chance Cause Variation could then be analyzed in a more fine-grained way to find harder-to-detect variants and defect trends.
Shewhart's SPC had a profound impact on Dr. W. Edwards Deming. Deming, a lifelong evangelist of Shewhart's work, added a uniquely humanistic side to Shewhart's SPC system. Deming believed that most problems were caused by the system itself and not necessarily by the people working within it. Deming described the two forms of variation as Common Cause and Special Cause instead of Shewhart's Chance Cause and Assignable Cause. As his famous quote notes, Dr. Deming believed that only a small portion of problems were related to people or to things outside the system (Special Cause).
"94% of problems in business are systems-driven, and only 6% are people-driven - Dr. Edwards Deming."
Deming's book "Statistical Adjustment of D” describes this adjustment process with an interesting example. Imagine you need to replace a glass top plate for a table. You would measure the table with a ruler or measuring tape. The glass plate might be costly, so you would probably want to measure it more than once (the adage measure twice, cut once). You would probably write the measurements down and want to take several measurements to be safe. After taking a few measurements, you would need to answer the best glass plate size required to make the order. Would you pick any number from the list, or would we average and use a mean value? Maybe even the median or mode. The point is that whatever number you choose, it's based on some prediction of correctness or what Dr. Deming calls the statistical adjustment of data.
Consider a factory with 100,000+ parts that must be custom-built. Now picture the adjustment procedure being carried out on a far larger scale, involving components used to build telecommunications equipment with millions of pieces. This was Shewhart's dilemma. At that scale, you would want to create a visual chart from the statistically calculated data to understand the statistical adjustment of data. The visualization makes it easier to see the probable causes of variation.
Returning to Deming's glass tabletop exercise, suppose you take 15 measurements and visualize the data. Say that 14 of the measurements were between 149 centimeters and 151 centimeters, but one reading was 154 centimeters. You might conclude that this one measurement was an anomaly (what Deming calls Special Cause Variation). Maybe it was an operator error, or an old, defective tape measure was used. For the sake of this exercise, let's say that after you investigate and correct the Special Cause measurement, you take another 15 measurements, and now all of them fall within 149 to 151 centimeters. One of the fascinating parts of Shewhart's statistical process control, or what Dr. Deming calls the statistical adjustment of data, is understanding that the real value lies in analyzing Common Cause Variation. This is where you get the most economic value, by managing the harder-to-find defects or misalignments. When all observations are within the control limits, the process is considered to be under statistical control, and you can apply the power of statistics and probability to look for defect trends. After eliminating the Special Cause Variation, you can focus on investigating trends within the Common Cause Variation. Looking at the chart below of 15 measurements under process control, you can see that all measurements are random, between 149 and 151 centimeters. Most people think SPC is only about identifying anomalies (Special Cause Variation); its real power is the power of probabilities applied to the data under process control (Common Cause Variation).
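For illustration, here is a minimal sketch (with hypothetical readings) of the first step in that exercise: separating a suspect observation from the readings that fall inside the expected 149 to 151 centimeter band.

```python
# Hypothetical tabletop measurements in centimeters; the 154.0 reading
# plays the role of the suspected Special Cause observation.
readings_cm = [150.2, 149.7, 150.0, 150.4, 149.9, 150.1, 149.6, 150.3,
               149.8, 150.0, 150.5, 149.9, 150.2, 150.1, 154.0]

EXPECTED_LOW, EXPECTED_HIGH = 149.0, 151.0

suspects = [r for r in readings_cm if not EXPECTED_LOW <= r <= EXPECTED_HIGH]
in_band  = [r for r in readings_cm if EXPECTED_LOW <= r <= EXPECTED_HIGH]

print("possible Special Cause readings:", suspects)        # [154.0]
print("readings inside the expected band:", len(in_band))  # 14
```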
As we said, all of the measurements in the previous chart are plotted in sequence and appear random. When a process is under control, the data should display randomness between the control limits; in the last tabletop glass exercise, all the observations are random (between 149 and 151 centimeters). An average of all the observations might therefore be an acceptable statistical adjustment of the data (i.e., the best size at which to order the glass plate).
However, if the data is not random, there is a probability of a defective trend within the Common Cause Variation. In the following chart, each measurement increases over the previous one. Perhaps there is something inconsistent or imbalanced about how the measurements are being taken that creates a trend of increasing error; the tape measure operator may not have been trained to take consistent measurements, making the measurement process worse each time. This data is not as predictable for statistical adjustment, and selecting an average of it might not yield results as reliable as the data in the earlier chart. Further investigation of this non-randomness might be warranted.
A more intuitive example of non-randomness might be measurements of a room's temperature. Suppose all the temperature measurements are random, between 68 and 72 degrees. In that case, an average reading of 70 might be an acceptable adjustment of the data (i.e., a fair description of the room temperature over that period). However, if the measurements keep increasing in value, the trend might be that the room is getting warmer and the air conditioner is starting to fail.
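Here is a minimal sketch of how such a trend might be flagged, using hypothetical readings in the 68 to 72 degree range from the example above. The "six increases in a row" threshold follows a commonly cited run rule and is an assumption for illustration, not part of the original example.

```python
def longest_increasing_run(readings):
    """Length of the longest run of strictly increasing consecutive readings."""
    longest = current = 1
    for prev, cur in zip(readings, readings[1:]):
        current = current + 1 if cur > prev else 1
        longest = max(longest, current)
    return longest

stable   = [70, 69, 71, 68, 70, 72, 69, 70]        # random within 68-72 degrees
drifting = [68, 69, 69.5, 70, 70.5, 71, 71.5, 72]  # room steadily warming

for label, data in (("stable", stable), ("drifting", drifting)):
    run = longest_increasing_run(data)
    verdict = "possible trend" if run >= 6 else "looks random"  # assumed run rule
    print(f"{label}: longest increasing run = {run} -> {verdict}")
```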
In practice, SPC is a bit more complicated than our glass tabletop exercise. Upper and lower control limits are typically derived from the data using standard-deviation-based methods, most commonly 3 Sigma (three standard deviations) above and below the centerline. A basic control chart uses 3 Sigma limits, which cover about 99.7 percent of the data. All of this gets even more sophisticated with techniques that use statistical tools like least squares for statistical adjustment of the data.
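As a rough sketch of that calculation, the 3-sigma limits below are computed from hypothetical readings using the overall sample standard deviation; a production X-bar chart would usually estimate sigma from subgroup ranges instead.

```python
import statistics

# Hypothetical in-control tabletop measurements in centimeters.
readings_cm = [150.2, 149.7, 150.0, 150.4, 149.9, 150.1, 149.6, 150.3,
               149.8, 150.0, 150.5, 149.9, 150.2, 150.1, 150.3]

center = statistics.mean(readings_cm)   # centerline of the control chart
sigma  = statistics.stdev(readings_cm)  # sample standard deviation

ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

print(f"centerline: {center:.2f} cm")
print(f"UCL: {ucl:.2f} cm, LCL: {lcl:.2f} cm")
print("out-of-control readings:", [r for r in readings_cm if not lcl <= r <= ucl])
```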
In the 20th century, Dr. Deming became a strong voice for improving quality control, using Shewhart's methods as the backbone of his historic worldwide impact. He began by applying SPC and statistical adjustment methods to the 1940 US Census, with follow-up work in India and Greece. During WW2, Dr. Deming used these same tools in war-effort training, improving the quality of tanks, jeeps, and airplanes. His now-famous post-WW2 activities include visiting Japan as part of the rebuilding effort. All of this has significantly impacted worldwide manufacturing and today's knowledge economies, and all of it traces back to humans creating and controlling tools.