
Measurement System Variation

Since it is a process in itself, the act of measuring is subject to variability like all processes. It is extremely important to understand measurement variation, as many decisions can be made based on measurement results. Some basic questions we will try to answer are:

1. What are the basic sources of variation?

2. Is the system statistically stable over time?

3. How close to the “truth” are the measured results? How is this quantified?

4. What are some means of quantifying or characterizing variation in a measurement system?

Types of variability

Variability in measurement, of course, involves both special and common causes. Variability (or errors) can be divided into three categories: human errors, systematic errors, and random errors.

Human errors are the most elusive type to control. They occur randomly and intermittently, and can be large or small. Examples include misreading instruments or equipment, transposing numbers, entering incorrect values into a computer or calculator, and measuring the wrong sample. Most are nearly impossible to control or correct, since carelessness is usually the root cause.

Systematic or assignable errors are always of the same sign, either positive or negative. They are constant regardless of the number of measurements made. These are errors due to bias, as defined in the following paragraphs. As such, they can usually be identified. After identification, they can be removed or negated through correction factors. Elimination is always preferred to correction as a method of control.

Random errors represent the common-cause variability of a measurement system. They are both positive and negative in effect and occur by chance. Examples include slight variations in sample injection technique for a gas chromatograph, small temperature fluctuations in a drying oven, or the sensitivity limitations of a pH electrode.

While they can’t be completely eliminated, they can be reduced. They can be statistically estimated and used to validate measurement results.

Our goal should be to control, monitor and estimate the variability in the measurement results and to eliminate the effects of systematic errors.

Measurement Terminology

There are a few terms that are in widespread use when it comes to measurements. Before proceeding, these should be discussed.

Stability

Stability refers to the total variation in measurement obtained with the same equipment on the same standard over an extended period of time. Statistical stability of a measurement system implies that the test is predictable over time. Without this, any analysis of measurement variability is only applicable to the time period of the study. Statistical stability allows the results to be used to characterize future performance. Unless there is objective evidence of the statistical stability of the measurement systems, do not use the results of a measurement variability study to predict future test/equipment performance.

The means of demonstrating statistical stability is the control chart. Standard average and range (X̄-R) charts or individuals and moving range (I-MR) charts not only portray the stability of the measurements, but also indicate when recalibration is required. Recalibrating while the system is still in statistical control will generally only increase the variation of the measurement system.
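As a minimal sketch of the control chart approach (the data and values are hypothetical, not from the source), the individuals and moving range limits for repeated measurements of the same standard can be computed with the usual I-MR constants (E2 = 2.66, D4 = 3.267):

```python
import numpy as np

# Hypothetical repeated measurements of the same standard over time.
measurements = np.array([10.02, 10.05, 9.98, 10.01, 10.04,
                         9.99, 10.03, 10.00, 10.06, 9.97])

# Moving ranges between consecutive measurements.
moving_ranges = np.abs(np.diff(measurements))
mr_bar = moving_ranges.mean()
x_bar = measurements.mean()

# Individuals chart limits (E2 = 2.66 for a moving range of size 2).
ucl_x = x_bar + 2.66 * mr_bar
lcl_x = x_bar - 2.66 * mr_bar

# Moving range chart upper limit (D4 = 3.267 for n = 2).
ucl_mr = 3.267 * mr_bar

print(f"Individuals limits: {lcl_x:.3f} to {ucl_x:.3f}")
print(f"Moving range UCL:   {ucl_mr:.3f}")

# Points outside these limits (or non-random patterns within them)
# signal that the measurement system is not statistically stable.
out_of_control = (measurements > ucl_x) | (measurements < lcl_x)
print("Out-of-control points:", np.where(out_of_control)[0])
```

The same moving ranges also provide an estimate of the measurement standard deviation (MR̄ / 1.128), which ties the stability check to the precision discussion that follows.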

Statistical stability, or statistical control, does not mean that the measurement process has been optimized. Several organizations may use similar measurement methods, each in statistical control, yet their performance may still differ markedly.

Accuracy, bias and precision

Accuracy is the closeness of agreement between a test result and the “true” or accepted reference value; in other words, how close we are to the “truth.” To further define accuracy, two additional terms are used.

Bias refers to a systematic error that contributes to the difference between the population mean of measurements or test results and an accepted reference or true value.

Precision is the closeness of agreement between randomly selected individual measurements or test results obtained under prescribed conditions. An accurate method is one capable of producing unbiased and precise results. With measurements, we actually assess inaccuracy; we attempt to quantify bias and imprecision.
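As a hedged illustration (hypothetical numbers only), bias and precision can be quantified from repeat measurements of a material whose accepted reference value is known:

```python
import numpy as np

# Hypothetical repeat measurements of a reference material
# whose accepted reference value is 5.00.
reference_value = 5.00
results = np.array([5.08, 5.11, 5.06, 5.10, 5.09, 5.07, 5.12, 5.08])

bias = results.mean() - reference_value   # systematic error (accuracy component)
precision = results.std(ddof=1)           # standard deviation of results

print(f"Mean result: {results.mean():.3f}")
print(f"Bias:        {bias:+.3f}")   # consistently high -> systematic error
print(f"Precision:   {precision:.3f} (standard deviation)")
```

Here the results cluster tightly (good precision) but sit consistently above the reference value (bias); a correction factor or recalibration would address the bias without changing the precision.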

Accepted Reference Value is a value that serves as an agreed reference for comparison and is derived as:

a theoretical or established value based on scientific principles,

an assigned value based on experimental work, such as by NIST, or

a consensus value, based on collaborative experimental work (such as the ASTM Interlaboratory Crosscheck Sample Exchange Program).

ASTM D6299 provides an accepted methodology for statistically determining an accepted reference value.

Standard deviation is a mathematically calculated quantity that measures the precision, or “noise,” of a process. It is:

denoted σ, commonly known as ‘sigma’,

estimated from historical and current data using statistical techniques, and

a measure of variation.

The standard deviation of the measurement error can be used as a measure of precision, or really “imprecision”.

Calibration, or recalibration, can improve the accuracy of a measurement by reducing error or bias. However, calibration does not necessarily have any effect on the precision of the measurements.

Measurement system variability

The accuracy, bias, and precision of a measurement system can be divided into a part attributable to the equipment or apparatus and a part associated with the different people or laboratories performing the test. The special terms for these precision components are as follows:

Repeatability

The repeatability of a measurement process implies that the test variation is consistent. It is a measure of the closeness of agreement between independent test results obtained over a short time interval with the same test method, in the same laboratory, by the same operator, using the same equipment and the same sample(s). By keeping so many factors the same, repeatability represents the inherent variability of the test equipment or apparatus.

Reproducibility

Reproducibility is a measure of the closeness of agreement between test results obtained in different laboratories with the same test method on the same samples. It includes differences in operators, equipment, and supervision that exist between laboratories. As a result, it can never be lower than the repeatability of a test. ASTM uses this definition, together with that of repeatability, to characterize test method performance for any laboratory.

There are differences in terminology because AIAG does not use the ASTM definitions. Although its definition of repeatability is essentially the same, the AIAG methodology uses reproducibility to refer to the variability associated with operators. Its equivalent of ASTM reproducibility is called gage R&R, the combination of operator and equipment variability.

You must know the terminology used by your customers.
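To make the AIAG decomposition concrete, here is a minimal sketch (hypothetical data, illustrative only, not the AIAG manual’s worksheet method) that estimates repeatability, reproducibility, and combined gage R&R variance components from a crossed parts × operators × trials study using the ANOVA method:

```python
import numpy as np

# Hypothetical crossed gage study: 5 parts x 3 operators x 2 trials.
# data[part, operator, trial]
rng = np.random.default_rng(1)
true_parts = np.array([10.0, 10.5, 9.5, 11.0, 9.0])
operator_bias = np.array([0.00, 0.15, -0.10])
data = (true_parts[:, None, None] + operator_bias[None, :, None]
        + rng.normal(0.0, 0.10, size=(5, 3, 2)))

p, o, r = data.shape
grand = data.mean()
part_means = data.mean(axis=(1, 2))
oper_means = data.mean(axis=(0, 2))
cell_means = data.mean(axis=2)

# Two-way ANOVA mean squares for a balanced, crossed design.
ms_part = o * r * np.sum((part_means - grand) ** 2) / (p - 1)
ms_oper = p * r * np.sum((oper_means - grand) ** 2) / (o - 1)
ms_int = r * np.sum((cell_means - part_means[:, None]
                     - oper_means[None, :] + grand) ** 2) / ((p - 1) * (o - 1))
ms_err = np.sum((data - cell_means[..., None]) ** 2) / (p * o * (r - 1))

# Variance components (negative estimates are truncated at zero).
var_repeatability = ms_err                              # equipment variation
var_interaction = max((ms_int - ms_err) / r, 0.0)
var_operator = max((ms_oper - ms_int) / (p * r), 0.0)
var_reproducibility = var_operator + var_interaction    # appraiser variation
var_grr = var_repeatability + var_reproducibility

print(f"Repeatability (EV) std dev:   {np.sqrt(var_repeatability):.4f}")
print(f"Reproducibility (AV) std dev: {np.sqrt(var_reproducibility):.4f}")
print(f"Gage R&R std dev:             {np.sqrt(var_grr):.4f}")
```

In this breakdown, AIAG reproducibility corresponds to the operator (plus interaction) component, while the combined gage R&R figure is the closest analogue to ASTM reproducibility.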

Sources of variability

Systematic and random errors that can influence measurement results can come from a multitude of sources. In general, these can be summarized in the following categories:

Equipment

The equipment, whether a sophisticated automated electronic analyzer or simple glassware, has been manufactured to certain tolerances. Inherent variation within equipment specifications will be reflected in test results. Component wear, failure, or improper maintenance will increase variation in test results. Any inconsistency in calibration verification and/or recalibration will also affect the consistency of the results obtained from the equipment.

People

People almost always contribute to variation simply because none of us are exactly the same. We differ in dexterity, reaction times, color sensitivity, and other ways. Even the same operators can perform differently at different times due to varying degrees of mental and physical alertness. Some degree of difference between operators is practically unavoidable, and some tests are more sensitive to the effects of operator differences than others. Incomplete or ambiguous test methods open the door to another source of operator differences: “interpretation” of the requirements.

Laboratory environment

Some samples and equipment may be susceptible to temperature, humidity, atmospheric pressure, and other environmental factors. Because these cannot be perfectly controlled within or between laboratories, they contribute to some extent to variation in test results.

Samples

Any non-uniformity of the sample can increase the variation in the test results. When conducting studies to determine test variability, special effort should be made to obtain test samples that are as uniform or similar as possible.

Time

All of the sources of variation listed above can change over time. In measurement studies, efforts are usually made to keep the time span as short as possible.

Measurement Systems Analysis

Several different techniques are useful for analyzing measurement system variability. These include measurement variability studies (both short and long term), control charts, designed experiments, and analysis of variance. Donald Wheeler’s book, “Evaluating the Measurement Process,” does an excellent job of introducing the control chart approach; refer to it for a detailed discussion of the topic. The AIAG manual, MSA, 4th Edition, is the ‘bible’ for the automotive industry. To comply with IATF 16949:2016, all MSA studies must follow the methodology described in the MSA manual.

Interlaboratory versus intralaboratory studies

The repeatability of a method can be established just as well in one laboratory as in another. Differences in results between laboratories are usually due not so much to differences in precision as to systematic errors, or biases.

Interlaboratory (between-laboratory) studies can establish the relative magnitudes of bias and precision, but they do not offer much help in uncovering assignable causes of bias.

To obtain the information necessary to identify assignable causes and eliminate their effects, gage studies must be performed in a single laboratory (intralaboratory). These studies may involve independent verification of a laboratory’s results.

Independent verification activities:

Blind sample programs

Cross checks between laboratories

Audits

Review questions

1. What are some sources of measurement variation?

2. Explain the differences between bias and precision.

3. How do you judge if a measurement system is statistically stable?

4. List some different techniques for analyzing measurement variability.

5. Explain what repeatability is.
