Abstract

An approach for implementing statistical process control and other statistical methods as a cost-savings measure in the treated-wood industries is outlined. The purpose of the study is to use industry data to improve understanding of the application of continuous improvement methods. Variation in wood treatment is a cost when higher-than-necessary chemical retention targets are required to meet specifications. The data for this study were obtained in confidence from the American Lumber Standard Committee and were paired, normalized assay retentions for charges inspected by both the treating facility and auditing agencies. Capability analyses were developed from these data for three use categories established by the American Wood Protection Association (AWPA): UC3B (above ground, exterior), UC4A (ground contact, freshwater, general use), and UC4B (ground contact, freshwater, critical structures, or high decay hazard zones). Agency and industry data indicate that between 4.45 and 9.82 percent of the charges were below the lower confidence limit of the passing standard (LCLAWPA), depending on use category. A Taguchi loss function (TLF), which is quadratic and decomposes the monetary loss into shift and variation components, was developed to estimate the additional cost due to process variation. For example, if a treatment input cost of $1.00/ft3 is assumed for UC3B, reducing the variation in total retention allows lowering treatment targets, e.g., from 1.45 to 1.38, reducing costs to $0.76/ft3. The study provides some important continuous improvement tools for this industry, such as control charts, the Cpk and Cpm capability indices, and the one-sided TLF.

Currently, wood treatment facilities must meet minimum passing standards for wood preservative penetration and retention of treated wood (as defined by the American Wood Protection Association [AWPA] and governed by the American Lumber Standard Committee [ALSC]). This article focuses only on the preservative retention aspect of quality control for treated lumber, not penetration. Treating facilities determine the retention of each charge ("batch") by removing 20 or more increment cores from different pieces in the charge and combining them into a single composite assay sample. The preservative content (retention) value of this sample must be equal to or greater than the minimum for that product, as stated in the relevant standard. Third-party agencies sample treatment charges in the same way, but the key metric used by third-party agencies for evaluating retention compliance over a range of charges is the lower confidence limit (LCLAWPA) as described by the AWPA M22-18 standard (AWPA 2019). The LCLAWPA is the lower confidence limit of the median retention of recent charges, calculated using a one-tailed 95 percent critical value, and is compared with the standardized minimum retention. This standard LCL, or "minimum specification," is derived as a statistical lower bound assuming a normal distribution and is based on the theory of parametric confidence intervals (i.e., x̄ ± (s/√n)zα). For the typical monitoring situation in the standard (AWPA M22), the previous 20 samples are considered and a small-sample adjustment (t(α = 0.05, n − 1 = 19) = 1.729) is used for a one-sided bound, which provides an indication of whether typical, immediately preceding production is above specification. As more samples are included using this small-sample interval adjustment, the long-term behavior of the LCLAWPA derived using a critical value of 1.729 should leave a cumulative probability of p(Z) = 0.9581 above the LCLAWPA, assuming a normal data distribution.
The LCLAWPA, derived from confidence limits appropriate for enumerative studies, will be narrower than a prediction interval, which is common for process or analytical studies. Prediction intervals are wider than confidence intervals because they incorporate process variation for future sampling (Deming 1975, Hahn 1995). Resampling of retention values after a failure can result in a nonnormal distribution if the resampled values are included in the original data. Resampled retention values should be maintained in a separate data file and flagged as resamples. This avoids artificially skewing the distribution underlying the determination of the LCLAWPA.
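The t-based lower bound described above can be sketched in a few lines of Python. This is a sketch of the statistical form only (a mean-based one-sided bound with t(0.05, 19) = 1.729 over the last 20 charges), not the full AWPA M22 procedure, and the retention data are hypothetical:

```python
import math
import statistics

def lcl_awpa(retentions, t_crit=1.729):
    """One-sided lower confidence bound on the mean retention of the most
    recent 20 charges, using t(0.05, 19) = 1.729 as described in the text.
    Illustrative sketch only, not the full AWPA M22 procedure."""
    recent = retentions[-20:]
    n = len(recent)
    xbar = statistics.mean(recent)
    s = statistics.stdev(recent)       # small-sample (n - 1) standard deviation
    return xbar - t_crit * s / math.sqrt(n)

# Hypothetical normalized retentions for the 20 most recent charges
charges = [1.30 + 0.01 * i for i in range(20)]
lcl = lcl_awpa(charges)
```

The resulting `lcl` would then be compared against the standardized minimum retention for the product.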

There are approximately 140 plants and roughly 700 active production categories in the treated-wood industry that are monitored by inspection agencies using this LCLAWPA standard (Vlosky 2009). The LCLAWPA standard and producer metrics of performance are used as quality-control techniques for adhering to a conformance standard, and do not necessarily promote continuous improvement and variation reduction (e.g., statistical process control [SPC]). This makes the use of the LCLAWPA a conformance test and creates a treatment process that is reactive to problems but not preventative. This conformance test and reactive actions, such as the retreatment of charges, may result in additional costs.

SPC methods can benefit manufacturing industries by identifying common sources of variation that influence product quality and by promoting proactive actions for continuous improvement. The fundamental premise of continuous improvement is the reduction of product and process variation. The goal of this study was to quantify the natural variation (also called common-cause variation) and the special-cause variation associated with the measure of the average retention for treated residential lumber. This article builds upon the study by Young et al. (2017) by providing a more detailed assessment of distribution fitting and variability analyses, and provides a monetary assessment of cost using the Taguchi loss function (TLF). The purpose of the study is also to highlight statistically-based approaches in manufacturing that can help producers reduce risk from warranty claims, reduce rework, and reduce costs. These methods are necessary to improve the short-term competitiveness of the industries and are crucial to ensuring a viable sustainable strategy for long-term success.

Historical Perspective

Improving product quality and reducing sources of process variation that lead to unnecessary costs are common goals for many companies. As Lawless et al. (1999) note, "Fundamental to the improvement process of reducing product and process variations is to first quantify variations[…]"; also see Young (2008) and Young and Winistorfer (1999). Many statistical methods exist for quality improvement through the quantification and understanding of sources of variation (Hahn 1995, Lawless et al. 1999, Woodall 2000). However, Deming (1975) urges the distinction between enumerative and analytical studies. Enumerative studies deal with characterizing an existing, finite, unchanging target population by sampling from a well-defined frame, e.g., analysis of variance, confidence intervals, etc. (Hahn 1995). In contrast, the analytical studies more frequently encountered in industrial applications focus on action taken on a process or system, the aim being to improve the process in the future, e.g., statistical prediction intervals, control charts, etc. (Deming 1975, Hahn 1995). SPC uses control charting and other statistical methods to drive improvement of the process and final product quality (Stoumbos et al. 2000, Woodall 2000, Young 2008). As Deming (1975) articulated, "Predicting short-term process outcomes is a powerful aspect of the control chart and SPC." Shewhart control charts (Shewhart 1931) have been used extensively for over 50 years. However, as noted by Stoumbos et al. (2000), "The diffusion of research to application is sometimes slow." Following Deming's study characterization, we view this study as providing an initial evaluation of statistical analytical methods that can lead to process improvement and improved product quality.

An extensive review of the published literature did not reveal any studies that document the application of SPC and other statistical methods as improvement tools for the treated-wood industries. Even though there is considerable literature on the application of SPC for the lumber industry (Brown 1982; Maness et al. 2002, 2003; Young et al. 2007), this study addresses a noteworthy gap in the literature. The SPC Handbook for the Treated Wood Industries by Young et al. (2019) provides more detailed information about implementing SPC.

Material and Methods

Data sets

The data for this study were from the period 2014 to 2016 and were obtained in confidence from the ALSC auditing agencies accredited for treating facilities. The agencies provided paired assay retentions for charges that had been inspected and measured by both the treating facility and the auditing agencies. The assay retentions were normalized to protect confidentiality. "Paired" in this study is defined as matched charges of industry- and association-tested wood. Retention was rescaled to the AWPA standard retention to protect confidentiality but maintain variation, i.e., the analyses were performed by use category and did not reveal any source (chemical type). The three use categories with the largest sample sizes in the data set were analyzed: UC3B (above ground, exterior, exposed, or poor water runoff, general use), UC4A (ground contact or freshwater, general use), and UC4B (ground contact or freshwater, used in critical structures or high decay hazard zones). The sample sizes were: (1) N = 4,259 records for UC3B; (2) N = 2,942 records for UC4A; and (3) N = 196 records for UC4B.

Estimating the probability density functions

A probability density function (pdf) is used to specify the probability that a random observation associated with a random variable (e.g., total retention) will fall within a specified range of values. For example, for a random observation from the standard normal pdf, N(0,1), the probability of it falling between −3 and 3 is 0.9973, the probability of it falling between −2 and 2 is 0.9545, and the probability of an observation falling below (or, alternatively, falling above) a sample mean is 0.5. Information criteria can be used to compare potential pdfs that may represent a sample's underlying distribution (Anderson 2008). Two criteria, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), were used to assess the appropriateness of various pdfs as selected from those commonly seen in industrial settings. The AIC (Akaike 1974) is:

 
AIC = −2 ln(L̂) + 2k        (1)

where L̂ is the maximized value of the likelihood function of the pdf model and k is the number of free parameters to be estimated. The preferred model is the one with the minimum AIC. AIC rewards goodness of fit, but it also includes a penalty that increases with the number of estimated parameters. A second-order version that adjusts for sample size (n = number of observations), AICc, is a common output in statistical software and can be calculated from AIC (Anderson 2008):

 
AICc = AIC + [2k(k + 1)]/(n − k − 1)        (2)

The BIC (Findley 1991) or Schwarz criterion (also known as the SBC or SBIC) value is:

 
BIC = −2 ln(L̂) + k ln(n)        (3)

where L̂, n, and k are as defined previously. As with AIC, the preferred model is the one with the minimum BIC. The information criteria and estimated parameters for each pdf were obtained as part of the maximum likelihood procedures using JMP software (JMP version 14 2019). Ranking and calculation of Akaike weights (model probabilities) from the information criteria values help differentiate the hypothesized pdfs (Anderson 2008).
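The three criteria above and the Akaike weights are straightforward to compute. The sketch below assumes a normal candidate pdf, whose maximized log-likelihood has a closed form, and uses hypothetical retention data; it is not the JMP procedure used in the study:

```python
import math

def normal_loglik(data):
    """Maximized normal log-likelihood (MLE: sample mean, variance with divisor n)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def aic(loglik, k):
    return -2 * loglik + 2 * k

def aicc(loglik, k, n):
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)

def akaike_weights(criteria):
    """Model probabilities from AICc (or AIC) values (Anderson 2008)."""
    best = min(criteria)
    rel = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(rel)
    return [r / total for r in rel]

data = [1.21, 1.44, 1.33, 1.52, 1.36, 1.47, 1.39, 1.41, 1.30, 1.46]  # hypothetical
ll = normal_loglik(data)
```

In practice the same log-likelihood machinery would be evaluated for each candidate pdf, and the weights computed across the full candidate set.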

Control charting methods

Individuals and moving range charts were used to quantify the natural variation (or common-cause variation) and detect special-cause variation (or "events") of the retention values (Shewhart 1931). The Shewhart control chart is based on the theory of the statistical prediction interval for application in processes (i.e., analytical studies). The control chart is a simple but powerful tool: it not only distinguishes between the two types of variation, but also provides a temporal graphic of the state of the process and is very helpful in detecting shifts that may cause the manufacture of defective product. The general form of the Shewhart control chart is:

 
x̄ ± 3s        (4)

where x̄ is the process average and s is the process standard deviation (Fig. 1). Ideally, assuming an underlying normal distribution, this interval would contain 99.73 percent of the process values. Given that s is a biased estimator of the population standard deviation (σ), the unbiased estimator σ̂ = MR̄/d2 is used, where MR̄ = Σ|xi − xi−1|/(n − 1) and d2 = 1.128 for the subgroup size of two used in estimating a moving range value. Therefore, Equation 4 reduces to:

 
x̄ ± 2.66(MR̄)        (5)

where x̄ is the process average and MR̄ is the average moving range. The LCL = x̄ − 2.66(MR̄) and the upper control limit (UCL) = x̄ + 2.66(MR̄). The moving range chart offers further assessment of process variability. Its center line is given by MR̄, and the control limits reduce to LCL = 0 and UCL = 3.267(MR̄) for subgroups of size two (Montgomery 2012).
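A minimal sketch of the individuals and moving range limits above, using hypothetical normalized retentions rather than the study data:

```python
def imr_limits(values):
    """Individuals-chart limits from the average moving range:
    sigma-hat = MRbar / d2 with d2 = 1.128, so LCL/UCL = xbar -/+ 2.66 * MRbar.
    Returns (I-chart LCL, I-chart UCL, MR-chart UCL); the MR-chart LCL is 0."""
    n = len(values)
    xbar = sum(values) / n
    mrbar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    return xbar - 2.66 * mrbar, xbar + 2.66 * mrbar, 3.267 * mrbar

# Hypothetical normalized charge retentions
lcl, ucl, mr_ucl = imr_limits([1.0, 1.2, 1.1, 1.3, 1.2])
```

Points outside (lcl, ucl), or statistical runs within them, would then be flagged as special-cause signals.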

Figure 1

Illustration of Shewhart control chart and statistical foundations.


Capability analyses

Capability analyses assess the potential for product conformance to specifications by comparing the natural tolerances (NT) of the product with the engineering tolerances (ET), i.e., ET = upper specification limit (USL) − lower specification limit (LSL) and NT = 6s, where s is the process standard deviation. Specification limits are typically established externally to the process and are not a mathematical function of the control limits, although capability analyses are most useful when the data are in a state of statistical control, i.e., data are within the upper and lower control limits as defined in Equation 5. Capability analyses are summarized by indices and indicate if a product is capable of meeting the desired specifications. The most common capability indices are:

 
Cp = (USL − LSL)/(6σ̂)        (6)
 
Cpk = min(Cpl, Cpu) = min[(x̄ − LSL)/(3σ̂), (USL − x̄)/(3σ̂)]        (7)

and the Taguchi index below (Taguchi et al. 2005) is recognized by many (Boyles 1991, Taguchi 1993),

 
Cpm = (USL − LSL)/{6[σ̂² + (x̄ − T)²]^(1/2)}        (8)

where T = target. Cp does not accommodate a process that is not centered between the LSL and USL; thus, the other indices were introduced to better indicate process performance in these types of situations. Cp, Cpk, and Cpm compare engineering tolerances to short-term, or within-, process variation. Note that the process performance indices (Pp, Ppk = min(Ppl, Ppu), Ppm) are similar to those of Equations 6, 7, and 8, with s used instead of MR̄/d2 (i.e., s represents long-term or overall variation). Only values for Cp, Cpk, and Cpm are discussed in this article. A simulation is presented from the results of the capability analyses to estimate the chemical dosing target changes necessary for 100 percent conformance to the LCLAWPA standard. Even though 100 percent conformance may not be achievable in the short term, the simulation highlights the importance of reducing variation to sustain business competitiveness.
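Equations 6, 7, and 8, plus the one-sided Cpl used later with the LCLAWPA as the LSL, translate directly into code. The example values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def cp(sigma, lsl, usl):
    """Eq. 6: spread of engineering tolerances over natural tolerances."""
    return (usl - lsl) / (6 * sigma)

def cpk(xbar, sigma, lsl, usl):
    """Eq. 7: min(Cpl, Cpu), penalizing an off-center process."""
    return min(xbar - lsl, usl - xbar) / (3 * sigma)

def cpm(xbar, sigma, lsl, usl, target):
    """Eq. 8: Taguchi index, penalizing distance from the target T."""
    return (usl - lsl) / (6 * math.sqrt(sigma ** 2 + (xbar - target) ** 2))

def cpl(xbar, sigma, lsl):
    """One-sided lower index used when the LCLAWPA serves as the LSL."""
    return (xbar - lsl) / (3 * sigma)
```

With a hypothetical σ̂ = 0.18, a normalized mean of 1.435, and LSL = 1.0, `cpl(1.435, 0.18, 1.0)` is about 0.81, near the agency UC3B index reported in the results.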

Taguchi loss function

The TLF quantifies the monetary loss incurred by variation in the product. The economic loss for treated wood is a function of the amount of extra chemical treatment used (target treatment level above specification) and the amount of time for retreatment (if it has been determined that a charge has treated below specification). Undertreatment may represent a higher monetary loss (warranty claims) than overtreatment (additional chemical costs). Both represent direct variable costs due to poor quality and are influenced by the variability in the process and raw material. Taguchi et al. (2005) developed a two-sided "nominal-the-best" loss function, where the target is centered within a specification range, to estimate economic loss for a quality attribute that has both lower and upper specifications, e.g., chemical retention (Fig. 2). Taguchi's nominal-the-best loss function is:

 
L(y) = k(y − m)²        (9)

where L is the economic loss; k is the cost constant, k = A0/(SL − m)²; A0 is the cost of operating at a specification limit, SL; m is the operational target value of the quality characteristic (e.g., retention); and y0 is the actual value at the SL (e.g., 0.15 lb/ft3). Nutek Inc. (2014) illustrates an approach to estimating A0. Taguchi et al. (2005) also developed a one-sided "smaller-the-better" loss function with only a lower or upper specification (e.g., the desired value of retention percentage should be as small as possible near the LCLAWPA standard; see Fig. 3). Taguchi's smaller-the-better loss function is:

 
L(y) = ky²        (10)
Figure 2

Illustration of symmetric Taguchi loss function. LSL = lower specification level; USL = upper specification level.


Figure 3

Illustration of one-sided Taguchi loss functions, “smaller the better.” LSL = lower specification level.


Operational targets can be reduced under the smaller-the-better TLF only if the variance of the process or product (e.g., total retention) is first reduced (Young et al. 2015). Most producers run the smallest possible target to minimize cost, but they must also avoid producing a product below the LCLAWPA standard, which adds lost time due to retreatment and may trigger warranty claims. Further asymmetric or discontinuous loss functions are possible that can address unbalanced costs associated with nonequidistant specification limits (e.g., Metzner et al. 2019), but are not explored at this time given that the LCLAWPA is a one-sided LSL under the assumption of a smaller-the-better TLF.
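Applied to a whole process rather than a single unit, the nominal-the-best and smaller-the-better quadratic losses decompose into a variation term and a shift term, which is the decomposition behind the cost estimates later in this article. A sketch, with illustrative numbers only:

```python
def expected_loss_nominal(mu, sigma, m, k):
    """Expected nominal-the-best loss over a process:
    E[k(y - m)^2] = k * (sigma^2 + (mu - m)^2),
    i.e., a variation component plus a shift component."""
    return k * (sigma ** 2 + (mu - m) ** 2)

def expected_loss_smaller_is_better(mu, sigma, k):
    """Expected smaller-the-better loss: E[k * y^2] = k * (mu^2 + sigma^2)."""
    return k * (mu ** 2 + sigma ** 2)

# Even a perfectly centered process (mu = m) incurs k * sigma^2 of loss,
# which is why variance reduction, not just re-targeting, saves money.
centered_loss = expected_loss_nominal(1.4, 0.2, 1.4, 10.0)
```

This makes explicit why lowering the operational target without first lowering s does not reduce the loss attributable to variation.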

Results

Probability density functions

There were distinct differences between the industry and agency values of normalized retention: the agencies recorded fewer below-specification measurements but more measurements just above specification, and the agencies also recorded lower values at the higher end of the measurement scale, except at the extreme (Fig. 4). A quantile–quantile plot of the industry and agency values of normalized retention reveals some distinct differences in retention values >3 (Fig. 5). Industry treating plants calibrate their instruments to agency standards; however, it is plausible that additional variability is introduced at the plants that is not incorporated in measurements made by the regulating agency instruments. Each plant presumably has its own operators and instruments, whereas the agencies have fewer total operators and instruments. There could also be differences in the collection or grinding of samples in preparation for the measurement process. The differences may be due to the resampling without replacement that occurs with the industry samples when the first, second, or third sample falls below the LCLAWPA standard and additional samples are then taken from the same batch of treated wood. Young (2012) highlighted that skewed distributions can occur from resampling in the engineered-wood industries. The difference may also be due to batches of treated lumber that were retreated, where the original measurements were not maintained at the plant.

Figure 4

Illustration of histograms relative to the lower confidence limit as described by the American Wood Protection Association M22-18 standard (LCLAWPA; equates to lower specification level [LSL] in plots) standard for UC3B with a box plot and quantile plot. Box plot includes box marking the interquartile range (IQR) with midline at the median and inner symbol marking the mean; whiskers designate up to 1.5 by IQR and individual points are measurements beyond the whiskers. The solid line on the quantile plots indicates a normal fit to the data, whereas the dashed lines are 95 percent confidence envelopes for a normal fit. An extreme outlier was removed for this analysis.


Figure 5

Quantile–quantile plot of plant retention (PlantRet_Norm) and agency retention (AgencyRet_Norm).


On the basis of the minimum AICc and BIC values, the best pdf for total retention for each of the use categories UC3B, UC4A, and UC4B was the loglogistic (or Fisk) distribution; see Table 1. For UC4B, the loglogistic had the lower AICc and BIC, but on the basis of Akaike weights there is also supporting evidence for the logistic pdf. The loglogistic tends to be skewed, whereas the logistic is symmetric but heavier tailed than the normal distribution. For each use category, the commonly assumed normal, or Gaussian, pdf ranked low among the nine distributions tested. Depending on the goal of a project, methods robust to deviations from the normality assumption may need to be considered for any statistical analyses, including grouping data into subgroups if appropriate. It should be kept in mind that the data are a mixture across manufacturers, preservatives, and product sizes, and the resulting industry-wide distribution could be a result of the amalgamation of heterogeneous distributions. This type of data mixture was noted by Zeng et al. (2016) for the wood composites industries, but as previously noted, the literature does not document this for the treated-wood industries.

Table 1

Akaike information criterion adjusted for sample size (AICc) and Bayesian information criterion (BIC) statistics by distribution and use category type for the agency data (all two-parameter versions).


The loglogistic pdf has been shown to arise as a result of mixture distributions and is useful in modeling survival data (Crowder et al. 1991); for example, it is applicable in modeling situations where the rate at which something is occurring increases initially and then after some time begins to decrease (Al-Shomrani et al. 2016). It may reflect that the left tail of the distribution drops more abruptly because of retreatment of underretention charges. Selecting the best-fitting pdf for total retention is important when establishing useful standards, which are typically derived by applying parametric estimates and confidence intervals to an industrial data set. This is illustrated, for example, by comparing the 5th percentile estimates across the pdfs. The failure probabilities at the LCLAWPA by pdf are distinctly different and illustrate the usefulness of fitting the appropriate pdf for the data; this is especially important when developing accurate standards for producers. Use categories UC4A and UC4B have higher failure probabilities relative to UC3B, which can be used to direct continuous improvement efforts for reducing variation to lower the failure probability at the LCLAWPA. It is important to note that, for these data, a normal pdf yields higher failure probabilities than the loglogistic.

Quantifying the process variation

Control charts were developed for use category UC3B to illustrate the different signals from control charts for long-term and short-term process variation over time (see examples in Figs. 6 and 7). Long-term variation may typify the variation experienced across the broader consumer markets, whereas short-term variation may exemplify that experienced by a more regionalized or local market group. Eliminating or reducing special-cause variation is typically the starting point of any continuous improvement effort, where root-cause analyses should reveal the events, e.g., shift change, startup from downtime, sensor failure, etc., that are not part of the normally expected system variation. The process in the short term is predictable, illustrating the usefulness of the control chart, whereas the process in the long term is not predictable when special-cause variations and statistical runs are occurring. Control charting is an important first step for the treated-wood industry to quantify variation, identify special-cause variation, and prevent both overtreatment and the need for retreatment.

Figure 6

Illustration of long-term variation for UC3B individuals and moving range (ImR) chart of normalized charge retention. The points in boxes labeled as a “1” violate run rule #1 (out of control), and points in boxes labeled as a “2” violate run rule #2 (eight consecutive points above or below the average).


Figure 7

Illustration of short-term variation individual control chart for UC4B industry total retention data.


Process capability

Capability analyses were performed for three use categories to assess the capability of the total retention samples relative to the LCLAWPA standard. Since the LCLAWPA standard is defined in quality management as a one-sided lower specification (LSL = LCLAWPA), the Cpl index is used to determine capability; see Equation 7. A capability index value of 1 or greater indicates that the process meets specifications. For the agency data set, the indices by use category were: UC3B, Cpl = 0.807 (5.91% out-of-specification); UC4A, Cpl = 0.723 (4.45% out-of-specification); and UC4B, Cpl = 0.587 (8.26% out-of-specification). For the industry data set, the indices by use category were: UC3B, Cpl = 0.937 (5.16% out-of-specification); UC4A, Cpl = 0.802 (4.44% out-of-specification); and UC4B, Cpl = 0.694 (9.82% out-of-specification); see Figures 8, 9, and 10, respectively. Montgomery (2012) indicated that an acceptable Cpk for a one-sided limit (e.g., LCLAWPA standard) is Cpk ≥ 1.25. Harry and Schroeder (2000) provided an example for two-sided limits (e.g., moisture content) where Cpk ≥ 1.33 is an acceptable standard to achieve "Six Sigma" quality. The Cpk indices developed for the use categories in this study illustrate a significant gap relative to the benchmarks noted by Montgomery (2012) and Harry and Schroeder (2000).

Figure 8

Capability analyses for agency and plant data for product UC3B with percentage below lower confidence limit standard.


Figure 9

Capability analysis for agency and plant data for product UC4A with percent below lower confidence limit standard.


Figure 10

Capability analysis for agency and plant data for product UC4B with percent below lower confidence limit standard.


The differences in the Cpl indices and the percent out-of-specification between the industry and agency data are due to differences in the variance estimates. The variation displayed as StDev in Figures 8, 9, and 10 is the overall standard deviation used for the process performance indices PPL (=Ppl) and PPK (=Ppk), given the large sample sizes, and is represented in the normal curve as a solid line; CPL (=Cpl) and Cpk use MR̄/d2 to estimate short-term or within-group variation (agency or industry) and are represented in the normal curve as a dotted line.

Capability analyses are a useful tool for estimating the required shift in operating target or process mean to attain essentially 100 percent conformance. As an example, a shift in the process mean of normalized retention to 1.853 (∼30% increase from 1.435) for the normalized agency data set for use category UC3B would result in 100 percent conformance to LCLAWPA standard (Fig. 11). A shift in the process mean to 1.698 (27% increase) for the normalized agency data set for use category UC4B would result in 100 percent conformance to LCLAWPA (Fig. 12). Although this shift would ensure consistent adherence to the AWPA standard (LCLAWPA), such a shift is not an appropriate long-term strategy for business competitiveness; such an increase in the chemical additive target would greatly increase costs. An increase in target retention would also have other detrimental effects such as higher leaching amounts, increased disposal, etc. Capability analyses are an essential early step to assess NT relative to ET. As Ohno (1988) noted, “Where there is no standard there can be no Kaizen (improvement).”
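Under a normality assumption, the required mean shift for a chosen conformance level follows directly from the normal quantile. In the sketch below, the UC3B long-term s of 0.277 (from Table 2), a normalized LSL of 1.0, and a 99.9 percent level are our assumptions for "essentially 100 percent" conformance; the result lands near the 1.853 shift of Figure 11, although the article's simulation details may differ:

```python
from statistics import NormalDist

def target_for_conformance(lsl, sigma, p_conform=0.999):
    """Process mean needed so that only (1 - p_conform) of a normal process
    falls below the LSL: mu = LSL + z_p * sigma. The 0.999 level is an
    assumed stand-in for 'essentially 100 percent' conformance."""
    return lsl + NormalDist().inv_cdf(p_conform) * sigma

# Assumed UC3B values: normalized LSL = 1.0, long-term s = 0.277
mu_needed = target_for_conformance(1.0, 0.277)
```

The same function also shows the alternative lever: holding the mean fixed and shrinking sigma lowers the required target rather than raising it.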

Figure 11

Shift in mean from 1.435 to 1.853 for normalized agency data set to attain essentially 100 percent conformance to lower confidence limit standard for product UC3B.


Figure 12

Shift in mean from 1.335 to 1.698 for normalized agency data set to attain essentially 100 percent conformance to lower confidence limit standard for product UC4B.


Taguchi loss function

The additional costs from variation were estimated using the one-sided TLF for the three use categories (Table 2). The operational targets used in Table 2 were equated with the process average; the distance from the average to the LCLAWPA standard is a function of the size of s, i.e., the higher the s, the wider the target window. The initial s and x̄ = Target values in Table 2 (upper cells highlighted in bold) were calculated from the original data set values for the three use categories. The TLF costs illustrate that, for all three use categories, substantial savings can be attained by focusing on variation reduction; see Metzner et al. (2019). For example, lowering the treatment target for UC3B from 1.45 to 1.38, given a variation reduction of 5 percent from s = 0.277 to s = 0.263, results in a cost savings of 24 percent. If s can be reduced further to s = 0.249, a cost savings of 44 percent occurs. The same improvement scenarios apply to both the UC4A and UC4B use categories (Table 2).
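One way to reproduce the relative savings in Table 2 is a one-sided quadratic loss measured from the LSL, E[k(y − LSL)²] ∝ (μ − LSL)² + s². The from-the-LSL form is our assumption, not a procedure stated in the article, but it recovers the roughly 24 percent UC3B savings quoted above:

```python
def relative_tlf_cost(mu, sigma, mu0, sigma0, lsl):
    """Treatment cost of a (mu, sigma) process relative to a baseline
    (mu0, sigma0), under a one-sided quadratic loss measured from the LSL:
    E[k * (y - lsl)^2] is proportional to (mu - lsl)^2 + sigma^2, so the
    cost constant k cancels in the ratio. An assumed loss form for
    illustration, not the article's exact calculation."""
    loss = (mu - lsl) ** 2 + sigma ** 2
    base = (mu0 - lsl) ** 2 + sigma0 ** 2
    return loss / base

# UC3B scenario: target 1.45 -> 1.38 with s reduced 5% (0.277 -> 0.263),
# normalized LSL assumed to be 1.0
ratio = relative_tlf_cost(1.38, 0.263, 1.45, 0.277, 1.0)
```

With these inputs the ratio is about 0.76, i.e., roughly $0.76/ft3 against a $1.00/ft3 baseline, matching the UC3B savings cited above.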

Table 2

A comparison of treatment chemical cost savings using the Taguchi loss function with variation reduction and appropriate target retention reduction, assuming $1.00/ft3 to treat wood.


A treatment producer does not necessarily control all input costs. The costs of chemicals and wood are dictated by market conditions and a producer's volume. However, the producer's continuous improvement efforts are under management's control and influence the variation at the plant. Factors the treater can control that might influence variation include the source of supply (treatability can vary by geographic source), moisture content (verifying proper drying), grouping by similar dimension, and pressure-treatment parameters. The TLF used in this study illustrates the economic justification for dedicating resources at the plant level toward variation reduction and continuous improvement.

Conclusions

This study provides an example of applying SPC tools to wood-treatment industry data to identify strategies for increasing standard conformance and lowering production costs. A large paired data set of normalized assay retentions for charges from industry and agency samples indicated that the best-fitting distribution was the loglogistic pdf. This may have implications when using methods with strict normality assumptions and may be important for agencies when establishing accurate standards using the lower quantiles of a distribution. The capability analyses indicated that, depending on use category, 4.45 to 8.26 percent of agency charges and 4.44 to 9.82 percent of plant charges fell below the LCL of the passing standard (LCLAWPA). A TLF was used to estimate the additional costs due to process variation. The TLF illustrated that if a focus on variation reduction led to operational retention target reductions, substantial cost savings could be realized.
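The capability comparison summarized above can be sketched as a small check. This is an illustrative sketch, assuming normally distributed retentions: the one-sided Cpm variant follows the Boyles (1991) style, and the function name, arguments, and example numbers are assumptions, not the paper's exact computation.

```python
from math import sqrt
from statistics import NormalDist

def capability_lower(mean, s, lsl, target=None):
    """One-sided capability against a lower spec limit (LSL).

    Cpk (lower) = (mean - LSL) / (3 s). The Taguchi-style Cpm variant
    (after Boyles 1991) replaces s with sqrt(s^2 + (mean - target)^2),
    penalizing off-target operation. Also returns the expected fraction
    of charges below the LSL under a normality assumption.
    """
    cpk = (mean - lsl) / (3 * s)
    cpm = None
    if target is not None:
        cpm = (mean - lsl) / (3 * sqrt(s ** 2 + (mean - target) ** 2))
    p_below = NormalDist(mean, s).cdf(lsl)  # fraction below spec
    return cpk, cpm, p_below

# Illustrative numbers of the same order as the UC3B agency data:
# mean 1.435, s 0.277, against a hypothetical LSL of 1.0. The
# estimated fraction below spec lands in the 4 to 10 percent range
# reported in the study.
cpk, cpm, p_below = capability_lower(1.435, 0.277, 1.0, target=1.435)
```

When the process runs on target (mean = target), Cpm collapses to Cpk; any mean-to-target gap drives Cpm below Cpk, which is why quality managers can use the pair to separate centering problems from spread problems.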

The study addresses a research gap in documenting the applications of SPC and other statistical methods as improvement schemes for the treated-wood industries. Application of the methods outlined in this article will depend on the company and on strategies that include continuous improvement. Control charts are a straightforward tool for implementation at the operations level of the plant; they allow operators to monitor the stability of the process and provide useful alerts for process instability and unanticipated events. Capability indices such as Cpk and Cpm are useful methods for quality managers to assess improvement relative to specifications. The TLF is an accepted method for quality management and senior executives to quantify the cost of poor quality due to variation. A useful handbook for implementing SPC was developed as part of this study, as cited earlier, and is a possible template for the treated-wood industry. An SPC workshop was conducted at the 2018 AWPA annual meeting, and more are anticipated in the future. It is feasible that affordable customized software will be developed for this industry that includes control charting, the TLF, and other continuous improvement tools.
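The operator-level control charting described above can be sketched with a standard individuals (I-MR) chart, a common choice when each charge yields a single composite assay value. This is a minimal sketch using the textbook d2 = 1.128 constant for a moving range of two (see Montgomery 2012); the function name and example data are illustrative assumptions.

```python
def imr_limits(data):
    """Individuals (I) chart limits estimated from the moving range.

    Sigma is estimated as MRbar / d2 with d2 = 1.128 for subgroups of
    size 2 (standard I-MR constant; see Montgomery 2012). Returns
    (LCL, center line, UCL) for the individuals chart.
    """
    center = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_hat = mr_bar / 1.128          # within-process sigma estimate
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical sequence of normalized charge retentions:
lcl, cl, ucl = imr_limits([1.4, 1.5, 1.3, 1.45, 1.35])
```

A point outside (LCL, UCL) flags a charge for investigation before retention drifts toward the LCLAWPA standard, which is the "useful alert" role the conclusions assign to control charts.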

Literature Cited

Akaike, H. 1974. A new look at the statistical model identification. IEEE Trans. Automat. Contr. 19(6):716–723.
Al-Shomrani, A., A. I. Shawky, O. H. Arif, and M. Aslam. 2016. Log-logistic distribution for survival data analysis using MCMC. SpringerPlus 5:1774.
American Wood Protection Association (AWPA). 2019. M22-19. Standard for third-party agency evaluation of inspection data. In: Book of Standards. American Wood Protection Association, Birmingham, Alabama. 4 pp.
Anderson, D. R. 2008. Model Based Inference in the Life Sciences: A Primer on Evidence. Springer, New York. 184 pp.
Boyles, R. 1991. The Taguchi capability index. J. Qual. Technol. 23(1):17–26.
Brown, T. D. 1982. Quality Control in Lumber Manufacturing. Miller Freeman Publications, San Francisco. 288 pp.
Crowder, M. J., A. C. Kimber, R. L. Smith, and T. J. Sweeting. 1991. Statistical Analysis of Reliability Data. Chapman & Hall, London. 250 pp.
Deming, W. E. 1975. On probability as a basis for action. Am. Stat. 29:146–152.
Findley, D. F. 1991. Counterexamples to parsimony and BIC. Ann. Inst. Stat. Math. 43:505–514.
Hahn, G. J. 1995. Deming's impact on industrial statistics: Some reflections. Am. Stat. 49(4):336–341.
Harry, M. and R. Schroeder. 2000. Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. Doubleday Random House, New York. 300 pp.
JMP, version 14. 1989–2019. SAS Institute Inc., Cary, North Carolina.
Lawless, J. F., R. J. Mackay, and J. A. Robinson. 1999. Analysis of variation transmission in manufacturing processes—Part I. J. Qual. Technol. 31(2):131–142.
Maness, T. C., R. A. Kozak, and C. Staudhammer. 2003. Applying real-time statistical process control to manufacturing processes exhibiting between and within part size variability in the wood products industry. Qual. Eng. 16(1):113–125.
Maness, T. C., C. Staudhammer, and R. A. Kozak. 2002. Statistical considerations for real-time size control systems in wood products manufacturing. Wood Fiber Sci. 34(3):476–484.
Metzner, C., M. Platzer, T. M. Young, B. Bichescu, M. C. Barbu, and T. G. Rials. 2019. Accurately estimating and minimizing costs for the cellulosic biomass supply chain with statistical process control and the Taguchi Loss Function. BioResources 14(2):2961–2976.
Montgomery, D. C. 2012. Introduction to Statistical Quality Control. 7th ed. John Wiley & Sons, Hoboken, New Jersey. 768 pp.
Nutek Inc. 2014. Quality loss function and tolerance design—A method to quantify savings from improved product and process designs. Nutek, Bloomfield Hills, Michigan. 37 pp.
Ohno, T. 1988. Toyota Production System: Beyond Large-Scale Production. Productivity Press, New York. 176 pp.
Shewhart, W. A. 1931. Economic Control of Quality of Manufactured Product. D. Van Nostrand, New York. 501 pp.
Stoumbos, Z. G., M. R. Reynolds, Jr., T. P. Ryan, and W. H. Woodall. 2000. The state of statistical process control as we proceed into the 21st century. J. Am. Stat. Assoc. 95:992–998.
Taguchi, G. 1993. Taguchi on Robust Technology Development. American Society of Mechanical Engineers (ASME) Press, New York. 136 pp.
Taguchi, G., S. Chowdhury, Y. Wu, S. Taguchi, and H. Yano. 2005. Taguchi's Quality Engineering Handbook. John Wiley & Sons, Hoboken, New Jersey. 662 pp.
Vlosky, R. P. 2009. Statistical Overview of the US Wood Preserving Industry: 2007. Southern Forest Products Association, Kenner, Louisiana. 34 pp.
Woodall, W. H. 2000. Controversies and contradictions in statistical process control. J. Qual. Technol. 32(4):341–350.
Young, T., N. André, and J. Otjen. 2015. Quantifying the natural variation of formaldehyde emissions for wood composite panels. Forest Prod. J. 65(3/4):S82–S84.
Young, T. M. 2008. Reducing variation, the role of statistical process control in advancing product quality. Eng. Wood J. 11(2):41–42.
Young, T. M. 2012. Variation reduction—Avoiding the process improvement paradox. Eng. Wood J. 15(1):28–29.
Young, T. M., B. H. Bond, and J. Wiedenbeck. 2007. Implementation of a real-time statistical process control system in hardwood sawmills. Forest Prod. J. 57(9):54–62.
Young, T. M., P. K. Lebow, and S. Lebow. 2017. Statistical process control for residential treated wood. In: Proceedings of the American Wood Protection Association (AWPA) Conference 2017, April 9–12, Las Vegas, Nevada; AWPA, Birmingham, Alabama. Vol. 113, pp. 234–244.
Young, T. M., P. K. Lebow, S. Lebow, and A. Taylor. 2019. SPC Handbook for the Treated Wood Industries. The University of Tennessee, Institute of Agriculture, AgResearch, Knoxville. 71 pp. www.spc4lean.com. Accessed February 6, 2020.
Young, T. M. and P. M. Winistorfer. 1999. Statistical process control and the forest products industry. Forest Prod. J. 49(3):10–17.
Zeng, Y., T. M. Young, D. J. Edwards, F. M. Guess, and C.-H. Chen. 2016. Case studies: A study of missing data imputation in predictive modeling of a wood composite manufacturing process. J. Qual. Technol. 48(3):284–296.

Author notes

The authors are, respectively, Professor, Dept. of Forestry, Wildlife and Fisheries, Center for Renewable Carbon, Univ. of Tennessee, Knoxville (tmyoung1@utk.edu [corresponding author]); Mathematical Statistician and Research Forest Products Technologist, USDA Forest Serv. Forest Products Lab., Madison, Wisconsin (patricia.k.lebow@usda.gov, stan.lebow@usda.gov); and Professor, Dept. of Forestry, Wildlife and Fisheries, Univ. of Tennessee, Knoxville (mtaylo29@utk.edu). This paper was received for publication in December 2019. Article no. 19-00067.