
Gauging Gage Part 3: How to Sample Parts


In Parts 1 and 2 of Gauging Gage we looked at the numbers of parts, operators, and replicates used in a Gage R&R Study and how accurately we could estimate %Contribution based on the choice for each.  In doing so, I hoped to provide you with valuable and interesting information, but mostly I hoped to make you like me.  I mean like me so much that if I told you that you were doing something flat-out wrong and had been for years and probably screwed some things up, you would hear me out and hopefully just revert back to being indifferent towards me.

For the third (and maybe final) installment, I want to talk about something that drives me crazy.  It really gets under my skin.  I see it all of the time, maybe more often than not.  You might even do it.  If you do, I'm going to try to convince you that you are very, very wrong.  If you're an instructor, you may even have to contact past students with groveling apologies and admit you steered them wrong.  And that's the best-case scenario.  Maybe instead of admitting error, you will post scathing comments on this post insisting I am wrong and maybe even insulting me despite the evidence I provide here that I am, in fact, right.

Let me ask you a question:

When you choose parts to use in a Gage R&R Study, how do you choose them?

If your answer to that question required any more than a few words - and it can be done in one word - then I'm afraid you may have been making a very popular but very bad decision.  If you're in that group, I bet you're already reciting your rebuttal in your head now, without even hearing what I have to say.  You've had this argument before, haven't you?  Consider whether your response was some variation on the following popular schemes:

  1. Sample parts at regular intervals across the range of measurements typically seen
  2. Sample parts at regular intervals across the process tolerance (lower spec to upper spec)
  3. Sample randomly but pull a part from outside of either spec

#1 is wrong.  #2 is wrong.  #3 is wrong.

You see, the statistics you use to qualify your measurement system are all reported relative to the part-to-part variation and all of the schemes I just listed do not accurately estimate your true part-to-part variation.  The answer to the question that would have provided the most reasonable estimate?

"Randomly."

But enough with the small talk—this is a statistics blog, so let's see what the statistics say.

In Part 1 I described a simulated Gage R&R experiment, which I will repeat here using the standard design of 10 parts, 3 operators, and 2 replicates.  The difference is that in only one set of 1,000 simulations will I randomly pull parts, and we'll consider that our baseline.  The other schemes I will simulate are as follows:

  1. An "exact" sampling - while not practical in real life, this pulls parts corresponding to the 5th, 15th, 25th, ..., and 95th percentiles of the underlying normal distribution and forms a (nearly) "exact" normal distribution as a means of seeing how much the randomness of sampling affects our estimates.
  2. Parts are selected uniformly (at equal intervals) across a typical range of parts seen in production (from the 5th to the 95th percentile).
  3. Parts are selected uniformly (at equal intervals) across the range of the specs, in this case assuming the process is centered with a Ppk of 1.
  4. 8 of the 10 parts are selected randomly, and then one part each is used that lies one-half of a standard deviation outside of the specs.

Keep in mind that we know with absolute certainty that the underlying %Contribution is 5.88325%.
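
If you want to experiment with this yourself, below is a rough Python sketch of one simulated crossed study using the ANOVA method to estimate %Contribution. The part, operator, and repeatability standard deviations are assumptions chosen only so the true %Contribution lands in the same neighborhood as the value above; this is an illustration, not the post's actual simulation code.

  import numpy as np

  rng = np.random.default_rng(1)

  def simulate_study(part_values, n_oper=3, n_rep=2, sd_oper=0.2, sd_repeat=1.0):
      """Simulate one crossed Gage R&R study and return the estimated %Contribution."""
      part_values = np.asarray(part_values, float)
      p = len(part_values)
      oper_eff = rng.normal(0, sd_oper, n_oper)
      # y[i, j, k] = part i measured by operator j, replicate k
      y = (part_values[:, None, None] + oper_eff[None, :, None]
           + rng.normal(0, sd_repeat, (p, n_oper, n_rep)))

      # Two-way crossed ANOVA sums of squares
      grand = y.mean()
      part_means = y.mean(axis=(1, 2))
      oper_means = y.mean(axis=(0, 2))
      cell_means = y.mean(axis=2)
      ss_p = n_oper * n_rep * ((part_means - grand) ** 2).sum()
      ss_o = p * n_rep * ((oper_means - grand) ** 2).sum()
      ss_po = n_rep * ((cell_means - grand) ** 2).sum() - ss_p - ss_o
      ss_e = ((y - cell_means[:, :, None]) ** 2).sum()

      ms_p = ss_p / (p - 1)
      ms_o = ss_o / (n_oper - 1)
      ms_po = ss_po / ((p - 1) * (n_oper - 1))
      ms_e = ss_e / (p * n_oper * (n_rep - 1))

      # Variance components from the expected mean squares (negatives truncated to 0)
      var_repeat = ms_e
      var_po = max((ms_po - ms_e) / n_rep, 0)
      var_o = max((ms_o - ms_po) / (p * n_rep), 0)
      var_p = max((ms_p - ms_po) / (n_oper * n_rep), 0)

      var_gage = var_repeat + var_o + var_po
      return 100 * var_gage / (var_gage + var_p)

  # Assumed "true" process: part-to-part SD of 4 and gage SD of about 1,
  # which puts the true %Contribution near 6%.
  random_parts = rng.normal(0, 4.0, 10)   # scheme: sample 10 parts randomly
  print(simulate_study(random_parts))

Running the function many times with different part-selection schemes (random, uniform across the typical range, uniform across the specs) is all it takes to reproduce the comparisons that follow.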

Random Sampling for Gage

Let's use "random" as the default to compare to, which, as you recall from Parts 1 and 2, already does not provide a particularly accurate estimate:

Pct Contribution with Random Sampling

On several occasions I've had people tell me that you can't just sample randomly because you might get parts that don't really match the underlying distribution. 

Sample 10 Parts that Match the Distribution

So let's compare the results of random sampling from above with our results if we could magically pull 10 parts that follow the underlying part distribution almost perfectly, thereby eliminating the effect of randomness:

Random vs Exact

There's obviously something to the idea that the randomness that comes from random sampling has a big impact on our estimate of %Contribution...the "exact" distribution of parts shows much less skewness and variation and is considerably less likely to incorrectly reject the measurement system.  To be sure, implementing an "exact" sample scheme is impossible in most cases...since you don't yet know how much measurement error you have, there's no way to know that you're pulling an exact distribution.  What we have here is a statistical version of chicken-and-the-egg!

Sampling Uniformly across a Typical Range of Values

Let's move on...next up, we will compare the random scheme to scheme #2,  sampling uniformly across a typical range of values:

Random vs Uniform Range

So here we have a different situation: there is a very clear reduction in variation, but also a very clear bias.  So while pulling parts uniformly across the typical part range gives much more consistent estimates, those estimates are likely telling you that the measurement system is much better than it really is.

Sampling Uniformly across the Spec Range

How about collecting uniformly across the range of the specs?

Random vs Uniform Specs

This scheme results in an even more extreme bias: qualifying this measurement system becomes a certainty, and in some cases it would even be rated as excellent.  Needless to say, it does not result in an accurate assessment.

Selectively Sampling Outside the Spec Limits

Finally, how about that scheme where most of the points are taken randomly but just one part is pulled from just outside of each spec limit?  Surely just taking 2 of the 10 points from outside of the spec limits wouldn't make a substantial difference, right?

Random vs OOS

Actually those two points make a huge difference and render the study's results meaningless!  This process had a Ppk of 1 - a higher-quality process would make this result even more extreme.  Clearly this is not a reasonable sampling scheme.

Why These Sampling Schemes?

If you were taught to sample randomly, you might be wondering why so many people would use one of these other schemes (or similar ones).  They actually all have something in common that explains their use: all of them allow a practitioner to assess the measurement system across a range of possible values.  After all, if you almost always produce values between 8.2 and 8.3 and the process goes out of control, how do you know that you can adequately measure a part at 8.4 if you never evaluated the measurement system at that point?

Those that choose these schemes for that reason are smart to think about that issue, but just aren't using the right tool for it.  Gage R&R evaluates your measurement system's ability to measure relative to the current process.  To assess your measurement system across a range of potential values, the correct tool to use is a "Bias and Linearity Study" which is found in the Gage Study menu in Minitab.  This tool establishes for you whether you have bias across the entire range (consistently measuring high or low) or bias that depends on the value measured (for example, measuring smaller parts larger than they are and larger parts smaller than they are).

To really assess a measurement system, I advise performing both a Bias and Linearity Study as well as a Gage R&R.

Which Sampling Scheme to Use?

In the beginning I suggested that a random scheme be used but then clearly illustrated that the "exact" method provides even better results.  Using an exact method requires you to know the underlying distribution from having enough previous data (somewhat reasonable although existing data include measurement error) as well as to be able to measure those parts accurately enough to ensure you're pulling the right parts (not too feasible...if you know you can measure accurately, why are you doing a Gage R&R?).  In other words, it isn't very realistic.

So for the majority of cases, the best we can do is to sample randomly.  But we can do a reality check after the fact by looking at the average measurement for each of the parts chosen and verifying that the distribution seems reasonable.  If you have a process that typically shows normality and your sample shows unusually high skewness, there's a chance you pulled an unusual sample and may want to pull some additional parts and supplement the original experiment.

Thanks for humoring me and please post scathing comments below!

see Part I of this series
see Part II of this series


R-Squared: Sometimes, a Square is just a Square


If you regularly perform regression analysis, you know that R2 is a statistic used to evaluate the fit of your model. You may even know the standard definition of R2: the percentage of variation in the response that is explained by the model.

Fair enough. With Minitab Statistical Software doing all the heavy lifting to calculate your R2 values, that may be all you ever need to know.

But if you’re like me, you like to crack things open to see what’s inside. Understanding the essential nature of a statistic helps you demystify it and interpret it more accurately.

R-squared: Where Geometry Meets Statistics

So where does this mysterious R-squared value come from? To find the formula in Minitab, choose Help > Methods and Formulas. Click General statistics > Regression > Regression > R-sq.

R² = 1 − (SS Error / SS Total), where SS Error = Σ(yᵢ − ŷᵢ)² and SS Total = Σ(yᵢ − ȳ)²

Some spooky, wacky-looking symbols in there. Statisticians use those to make your knees knock together.

But all the formula really says is: “R-squared is a bunch of squares added together, divided by another bunch of squares added together, subtracted from 1.”

rsquare annotation

What bunch of squares, you ask?

square dance guys

No, not them.

SS Total: Total Sum of Squares

First consider the "bunch of squares" on the bottom of the fraction. Suppose your data is shown on the scatterplot below:

original data

(Only 4 data values are shown to keep the example simple. Hopefully you have more data than this for your actual regression analysis!)

Now suppose you add a line to show the mean (average) of all your data points:

scatterplot with line

The line y = mean of Y is sometimes referred to as the “trivial model” because it doesn’t contain any predictor (X) variables, just a constant. How well would this line model your data points?

One way to quantify this is to measure the vertical distance from the line to each data point. That tells you how much the line “misses” each data point. This distance can be used to construct the sides of a square on each data point.

pinksquares

If you add up the pink areas of all those squares for all your data points you get the total sum of squares (SS Total), the bottom of the fraction.

SS Total = Σ(yᵢ − ȳ)²

SS Error: Error Sum of Squares

Now consider the model you obtain using regression analysis.

regression model

Again, quantify the "errors" of this model by measuring the vertical distance of each data value from the regression line and squaring it.

ss error graph

If you add the green areas of these squares you get the SS Error, the top of the fraction.

SS Error = Σ(yᵢ − ŷᵢ)²

So R2 basically just compares the errors of your regression model to the errors you’d have if you just used the mean of Y to model your data.
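
If you like to see the arithmetic spelled out, here is a minimal Python sketch of that comparison. The four data values are made up for illustration; Minitab computes all of this for you automatically.

  import numpy as np

  def r_squared(y, y_hat):
      """R-squared = 1 - (SS Error / SS Total)."""
      y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
      ss_total = ((y - y.mean()) ** 2).sum()   # pink squares: errors of the mean-only model
      ss_error = ((y - y_hat) ** 2).sum()      # green squares: errors of the regression model
      return 1 - ss_error / ss_total

  # Four made-up data points and a simple fitted line
  x = np.array([1.0, 2.0, 3.0, 4.0])
  y = np.array([2.1, 3.9, 6.2, 7.8])
  slope, intercept = np.polyfit(x, y, 1)
  print(r_squared(y, slope * x + intercept))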

R-Squared for Visual Thinkers

 

rsquare final

The smaller the errors in your regression model (the green squares) in relation to the errors in the model based on only the mean (pink squares), the closer the fraction is to 0, and the closer R2 is to 1 (100%).

That’s the case shown here. The green squares are much smaller than the pink squares. So the R2 for the regression line is 91.4%.

But if the errors in your regression model are about the same size as the errors in the trivial model that uses only the mean, the areas of the pink squares and the green squares will be similar, making the fraction close to 1, and the R2 close to 0.

That means your model isn't producing a "tight fit" for your data, generally speaking. You’re getting about the same size errors you’d get if you simply used the mean to describe all your data points!

R-squared in Practice

Now you know exactly what R2 is. People have different opinions about how critical the R-squared value is in regression analysis.  My view?  No single statistic ever tells the whole story about your data. But that doesn't invalidate the statistic. It's always a good idea to evaluate your data using a variety of statistics. Then interpret the composite results based on the context and objectives of your specific application. If you understand how a statistic is actually calculated, you'll better understand its strengths and limitations.

Related link

Want to see how another commonly used analysis, the t-test, really works? Read this post to learn how the t-test measures the "signal" to the "noise" in your data.

Statistical Fun … at the Grocery Store?


Grocery shopping. For some, it's the most dreaded household activity. For others, it's fun, or perhaps just a “necessary evil.”

Personally, I enjoy it! My co-worker, Ginger, a content manager here at Minitab, opened my eyes to something that made me love grocery shopping even more: she shared the data behind her family’s shopping trips. Being something of a data nerd, I really geeked out over the ability to analyze spending habits at the grocery store!

So how did she collect her data? What I find especially interesting is that Ginger didn’t have to save her receipts or manually transfer any information from her receipts onto a spreadsheet. As a loyal Wegmans grocery store shopper, Ginger was able to access over a year’s worth of her receipts just by signing up for a Wegmans.com account and using her ‘shoppers club’ card. The data she had access to includes the date, time of day, and total spent for each trip, as well as each item purchased, the grocery store department the item came from (e.g., dairy, produce, frozen foods), and whether a discount was applied. As long as she used her card for purchases, it was tracked and accessible via her Wegmans.com account. Cool stuff!

Ginger created a Minitab worksheet with her grocery receipt data from Wegmans for a several-month period, and shared it with Michelle and me to see what kinds of Minitab analysis we could do and what we might be able to uncover about her shopping habits.

Using Time Series Plots to See Trends

Time series plots are great for evaluating patterns and behavior in data over time, so a time series plot was a natural first step in helping us look for any initial trends in Ginger’s shopping behavior. Here’s how her Minitab worksheet looked:

Minitab Worksheet

And here’s a time series plot that shows her spending over time:

Time Series Plot in Minitab

To create this time series plot in Minitab, we navigated to Graph > Time Series Plot. It was easy to see that Ginger’s spending appears random over time, filled with several higher dollar orders (likely her weekly bulk trip to stock up) and several smaller orders (things forgotten or extras needed throughout the week). There doesn’t appear to be a trend or pattern. Almost all of her spending remained under $200 per trip, which is pretty good considering that many of her trips looked to be weekly bulk orders to feed her family of four. There were also very few outlier points with extremely high spending away from her consistent behavior of spending between $100 and $150 three to four times per month.

However, you’ll notice that the graph above isn’t the simplest to read. To make it easier to zero in on monthly spending habits, we used the graph paneling feature in Minitab to divide the graph into more manageable pieces:

Minitab Time Series Plot - Paneled

The paneled graph makes it even easier to see that Ginger’s spending appears to be random, but consistently random! For more on paneling, check out this help topic on Graph Paneling.

Visualizing Spending Data by Day of the Week

To chart grocery spending by day of the week, we created a simple boxplot in Minitab (Graph > Boxplot):

Minitab Box Plot

It’s pretty easy to see that Ginger’s higher-spending trips took place on Saturdays, Sundays, Mondays, and Tuesdays, with the greatest spread of spending (high, low, and in-between) occurring on Tuesdays. Wednesday appeared to be a low-spending day, with what looks to be quick trips to pick up just a few items.

How about the number of trips occurring each day of the week? To see this, we created a simple bar chart in Minitab (Graph > Bar Chart):

Minitab Bar Chart

The highest number of Ginger’s trips to Wegmans occurred on Sunday (35) and Saturday (26), which isn’t really a surprise considering that many people do the majority of their grocery shopping on the weekends when they have time off from work. It’s also neat to see that many of her trips occurring on Wednesday and Thursday were likely smaller dollar trips (according to our box plot from earlier in the post). I can definitely relate to those pesky mid-week trips to get items forgotten earlier in the week!

Visualizing Spending Data by Department

And finally, what grocery store department does Ginger purchase the most items from? To figure this out, we created a Pareto chart in Minitab (Stat > Quality Tools > Pareto Chart):

Minitab Pareto Chart

You can see that the highest number of items purchased is classified under OTHER, which we found to be a catch-all for items that don’t fit neatly into any of the other categories. In looking through the raw data with the item descriptions classified as OTHER, I found everything from personal care items like toothbrushes, to paper plates, and other specialty food items. The GROCERY category is another ambiguous category, but it seems as if this category is largely made up of items like canned and convenience foods (think apple sauces, cereal, crackers, etc.). The rest of the categories (dairy, produce, beverages) seem pretty self-explanatory.

The Pareto analysis is helpful because it can bring perspective to the types of foods being bought. Healthier items will likely be in the produce and dairy categories, so it’s good to see that these categories have high counts and percents in the Pareto above.
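
If you are curious about the arithmetic behind a Pareto chart, it boils down to sorted counts and cumulative percentages. Here is a minimal Python sketch with made-up department counts; the real chart above comes straight from Minitab.

  import pandas as pd

  # Hypothetical item counts by department, for illustration only
  counts = pd.Series({"Other": 210, "Grocery": 180, "Dairy": 120,
                      "Produce": 110, "Beverages": 60, "Frozen": 40})

  pareto = counts.sort_values(ascending=False).to_frame("Count")
  pareto["Percent"] = 100 * pareto["Count"] / pareto["Count"].sum()
  pareto["Cumulative %"] = pareto["Percent"].cumsum()
  print(pareto)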

Grocery stores love data, too.

It’s certainly no surprise that grocery stores love to track consumer buying behaviors through store discount cards. This helps stores to better target consumers and offer them promotions they are more likely to take advantage of. But it’s also great that grocery stores like Wegmans are sharing the wealth and giving consumers the ability to easily access their own spending data and draw their own conclusions!  

Do you analyze your spending at the grocery store? If so, how do you do it?

Top photo courtesy of Ginger MacRae. Yes, those are her actual groceries!

What Do Ventilated Shelf Installation and Measurement Systems Analysis Have in Common?


Have you ever tried to install ventilated shelving in a closet?  You know: the heavy-duty, white- or gray-colored vinyl-coated wire shelving? The one that allows you to get organized, more efficient with space, and is strong and maintenance-free? Yep, that’s the one. Did I mention this stuff is strong?  As in, really hard to cut?

It seems like a simple 4-step project. Measure the closet, go to the store, buy the shelving, and install when you get home. Simple, right? Yeah, it sounded good in my head!

The lessons I learned in this project underscore the value of doing measurement system analysis in your quality improvement projects, with statistical software such as Minitab. Whatever you're trying to accomplish, if you don't get reliable measurements or data, the task is going to become more challenging.

Before Process Map

Well, it turned out to be more complicated and involved a lot of rework. Did I mention that this shelving is made of heavy gauge steel that is nearly impossible to cut with ordinary tools? So my simple 4-step process turned into a 7-step process with lots of rework (multiple trips to the store to have the shelves re-cut).

My actual process looked more like this!

After Process Map

All the sources of variation from Measurement Systems Analysis (MSA) apply here: Repeatability, Reproducibility, Bias, Linearity, and Stability.  Let’s review these terms and see how I could have done better at measuring the closet, the first time.

Components of Measurement Error

When it was time to measure the closet, I had a few measuring-device choices hanging around my garage: a yardstick, a cloth tape measure, and a steel tape measure. 

Bias examines the difference between the observed average measurement and a reference or master value. It answers the question: "How accurate is my gage when compared to a reference value?" Unless there is visible damage, all three of these measuring devices should be acceptable for my shelf project.

Stability is the change in bias over time. Measurement stability represents the total variation in measurements obtained on the same part measured over time, also known as drift. It is important to assess stability on an ongoing basis. While calibrations and gage studies provide some information about changes in the measurement system, neither provides information on what is happening to the measurement process over time. But unless there is visible damage, all three of these measuring devices should be acceptable for use.

Linearity examines how accurate your measurements are through the expected range of the measurements. It answers the question: "Does my gauge have the same accuracy across all reference values?"  If you use the yardstick or steel tape measure, then the answer might be “yes” because of its solid construction.  But the cloth tape measure could stretch when extended, making it less reliable at longer lengths. Examine the cloth measuring tape for evidence of stretching or wear. If damage is present, do not use the measuring device.

Repeatability represents the variation that occurs when the same appraiser measures the same part with the same device. This is best represented with the advice “Measure twice, cut once!” In my case, if I had measured the closet width multiple times, I would have realized I was getting a different answer each time and therefore needed to take better care when measuring. Then I could have gotten more accurate measurements for each shelf. 

Reproducibility represents the variation that occurs when different appraisers measure the same part with the same device. In my case, if I'd asked my son to measure the same locations that I just measured, I would have discovered that we got different answers: I should have accounted for the mounting brackets in my measurements. (The fact that he did is why he’s in school to become a Mechanical Engineer.)
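
To make repeatability and reproducibility concrete, here is a rough sketch of the calculations using made-up closet measurements. This is a deliberately simplified illustration, not the full Gage R&R ANOVA that Minitab performs.

  import numpy as np

  # Hypothetical closet-width measurements (inches): 2 appraisers x 3 shelf
  # locations x 2 repeat measurements per location. Values are invented.
  measurements = np.array([
      [[72.10, 72.30], [72.00, 72.20], [71.90, 72.10]],   # me
      [[71.60, 71.70], [71.50, 71.60], [71.40, 71.50]],   # my son
  ])

  # Repeatability: variation between repeat measurements by the same person
  # at the same location (pooled within-cell standard deviation).
  repeatability_sd = np.sqrt(measurements.var(axis=2, ddof=1).mean())

  # Reproducibility (rough): variation between the appraisers' overall averages.
  appraiser_means = measurements.mean(axis=(1, 2))
  reproducibility_sd = appraiser_means.std(ddof=1)

  print(f"Repeatability SD:   {repeatability_sd:.3f} in.")
  print(f"Reproducibility SD: {reproducibility_sd:.3f} in.")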

In summary, my afternoon shelf installation project ended up taking two days to complete, resulting in multiple trips to the store, a lot of frustration for me, and late dinners for my family because I was too busy to cook! 

My lessons learned from this project are:

  1. Don’t assume your closet walls are exactly parallel at the top, middle and bottom of the closet. Instead, measure at each location where a shelf is to be installed.  Remember the Rule of Thumb for Gage R&R: take measurements representing the entire range of process variation.
  2. Apply the Gage R&R sources of measurement error when measuring:
    1. Visually inspect the measuring device before using to verify it is in good condition.
    2. Measure twice, cut once. (Repeatability)
    3. Ask my family for assistance in measuring.  (Reproducibility)
  3. Did you know that you can purchase a laser measure for about $30 these days?  If only I had known…
  4. Consider hiring a professional because this project was harder than it originally seemed.

Do Executives See the Impact of Quality Projects?


Do your executives see how your quality initiatives affect the bottom line? Perhaps they would more often if they had accessible insights on the performance, and ultimately the overall impact, of improvement projects. 

For example, 60% of the organizations surveyed by the American Society for Quality in their 2016 Global State of Quality study say they don’t know or don’t measure the financial impact of quality.

Evidence shows company leaders just don't have good access to the kind of information they need about their quality improvement initiatives.

The 2013 ASQ Global State of Quality study indicated that more than half of the executives are getting updates about quality only once a quarter, or even less. You can bet they make decisions that impact quality much more frequently than that.

Even for organizations that are working hard to assess the impact of quality, communicating that impact effectively to C-level executives is a huge challenge. The 2013 report revealed that the higher people rise in an organization's leadership, the less often they receive reports about quality metrics. Only 2% of senior executives get daily quality reports, compared to 33% of front-line staff members.  

A quarter of the senior executives reported getting quality metrics only on an annual basis. That's a huge problem, and it resonates across all industries. The Juran Institute, which specializes in training, certification, and consulting on quality management globally, also concluded that a lack of management support is the No. 1 reason quality improvement initiatives fail.

reporting on quality initiatives is difficult

Quality practitioners are a dedicated, hard-working lot, and their task is challenging and frequently thankless. Their successes should be understood and recognized. But their efforts don't appear to be reaching C-level executive offices as often as they deserve. 

Why do so many leaders get so few reports about their quality programs?

5 Factors that Make Reporting on Quality Programs Impossible

In fairness to everyone involved, from the practitioner to the executive, piecing together the full picture of quality in a company is daunting. Practitioners tell us that even in organizations with robust, mature quality programs, assessing the cumulative impact of an initiative can be difficult, and sometimes impossible. The reasons include:

Scattered, Inaccessible Project Data

Individual teams are very good at capturing and reporting their results, but a large company may have thousands of simultaneous quality projects. Just gathering the critical information from all of those projects and putting it into a form leaders can use is a monumental task. 

Disparate Project Applications and Documents

Teams typically use an array of different applications to create charters, process maps, value stream maps, and other documents. So the project record becomes a mix of files from many different applications. Adding to the confusion, the latest versions of some documents may reside on several different computers, so project leaders often need to track multiple versions of a document to keep the official project record current. 

Inconsistent Metrics Across Projects   

Results and metrics aren’t always measured the same way from one team's project to another. If one team measures apples and the next team measures oranges, their results can't be evaluated or aggregated as if they were equivalent. 

Ineffective and Ill-suited Tracking

Many organizations have tried quality tracking methods ranging from homegrown project databases to full-featured project portfolio management (PPM) systems. But homegrown systems often become a burden to maintain, while off-the-shelf solutions created for IT or other business functions don’t effectively support projects involving continuous quality improvement methods like Lean and Six Sigma. 

Too Little Time

Reporting on projects can be a burden. There are only so many hours in the day, and busy team members need to prioritize. Copying and pasting information from project documents into an external system seems like non-value-added time, so it's easy to see why putting the latest information into the system gets low priority—if it happens at all.

Reporting on Quality Shouldn't Be So Difficult

Given the complexity of the task, and the systemic and human factors involved in improving quality, it's not hard to see why many organizations struggle with knowing how well their initiatives are doing. 

But for quality professionals and leaders, the challenge is to make sure that reporting on results becomes a critical step in every individual project, and that all projects are using consistent metrics. Teams that can do that will find their results getting more attention and more credit for how they affect the bottom line. 

This finding in the ASQ report dramatically underscores problems we at Minitab have been focusing on recently—in fact, our Companion by Minitab software tackles many of these factors head-on. 

Companion takes a desktop app that provides a complete set of integrated tools for completing projects, and combines it with a cloud-based project storage system and web-based dashboard. For teams, the desktop app makes it easier to complete projects—and since project data is centrally stored and rolls up to the dashboard automatically, reporting on projects is literally effortless.

For executives, managers, and stakeholders, Companion delivers unprecedented and unparalleled insight into the progress, performance, and bottom-line impact of the organization’s entire quality initiative, or any individual piece of it. 

Regardless of the tools they use, this issue—how to ensure the results of quality improvement initiatives are understood throughout an organization—is one that every practitioner is likely to grapple with in their career.  

How will you make sure the results of your work reach your organization's decision-makers?   

 

How Could You Benefit from Between / Within Control Charts?


Choosing the right type of subgroup in a control chart is crucial. In a rational subgroup, the variability within a subgroup should encompass only common-cause, random, short-term variability and represent the “normal,” “typical,” natural process variation, whereas differences between subgroups are used to detect drifts in the process over time (due to “special” or “assignable” causes). Within-subgroup variation is therefore used to estimate the natural process standard deviation and to calculate the 3-sigma control chart limits.

In some cases, however, identifying the correct rational subgroup is not easy: for example, when parts are manufactured in batches, as they are in the automotive or semiconductor industries.

Batches of parts might seem to represent ideal subgroups, or at least a self-evident way to organize subgroups, for Statistical Process Control (SPC) monitoring. However, this is not always the right approach. When batches aren't a good choice for rational subgroups, control chart limits may become too narrow or too wide.

Control Limits May Be Too Narrow

Since batches are often manufactured at the same time on the same equipment, the variability within batches is often much smaller than the overall variability. In this case, the within-subgroup variability is not really representative and underestimates the natural process variability. Since within-subgroup variability is used to calculate the control chart limits, these limits may become unrealistically close to one another, which ultimately generates a large number of false alarms.

Too Narrow

Control Limits May Be Too Wide

On the other hand, suppose that within batches a systematic difference exists between the first two parts and the rest of the batch. In this case, the within-batch variability will include this systematic difference, which will inflate the within-subgroup standard deviation. Note that the between-subgroup variability is not affected by this systematic difference, and remember that only the within-subgroup variance is used to estimate the SPC limits. In this situation, the distance between the control limits would become too wide and would not allow you to quickly identify drifts.

Too wide

For example, in an injection mold with several cavities, when groups of parts molded at the same time but in different cavities are used as subgroups, systematic differences between cavities on the same mold will necessarily impact and inflate the within-subgroup variability.

I-MR-R/S Between/Within charts

When we encounter these situations in practice, using SPC charts can become more difficult and less efficient. An obvious solution is to consider within- and between-subgroup sources of variability separately. In Minitab, if you go to Stat > Control Charts > Variables Charts for Subgroups..., you will find I-MR-R/S Between/Within Charts to cover these types of issues.

between Within

Between/within charts are commonly used in the semiconductor industry, for example. Wafers are manufactured in batches (usually 25 wafers in a batch), and these batches are treated as subgroups in practice.

Using I-MR-R/S (between/within) charts allows you to monitor differences between subgroups (the I and MR charts) while also monitoring within-subgroup variation (the R/S chart). Thus, this chart provides a full and coherent picture of the overall process variability. Thanks to that, identifying the right rational subgrouping scheme is not as crucial as it is when using standard Xbar-R or Xbar-S control charts.
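
For readers who like to see what sits behind the chart, here is a rough sketch of the between/within calculations on simulated batch data. The constants 2.66 and 3.267 are the standard control chart constants for moving ranges of size 2; Minitab computes all of this, including the R/S chart, for you.

  import numpy as np

  # Simulated data: 10 batches (subgroups) of 5 parts each, with more variation
  # between batches than within them. Values are invented for illustration.
  rng = np.random.default_rng(7)
  batch_effect = rng.normal(10.0, 0.8, size=10)                 # between-batch variation
  data = batch_effect[:, None] + rng.normal(0, 0.3, (10, 5))    # within-batch variation

  # "Between" charts: treat each subgroup mean as an individual observation
  # and build I-MR style limits from the moving range of those means.
  means = data.mean(axis=1)
  mr_bar = np.abs(np.diff(means)).mean()
  i_center = means.mean()
  i_limits = (i_center - 2.66 * mr_bar, i_center + 2.66 * mr_bar)
  mr_ucl = 3.267 * mr_bar

  # "Within" chart: the R/S chart monitors the within-subgroup spread; a pooled
  # within-subgroup standard deviation summarizes it here.
  within_sd = np.sqrt(data.var(axis=1, ddof=1).mean())

  print("I chart limits for subgroup means:", i_limits)
  print("MR chart UCL:", mr_ucl)
  print("Pooled within-subgroup SD:", within_sd)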

Conclusion

We've all encountered ideas that seem simple in theory, but reality is often more complex than we expect. I-MR-R/S Between/Within control charts are a very flexible and efficient tool that makes it much easier to account for complexities in process variability. They enable you to monitor within- and between-subgroup sources of variability separately.

If selecting the right rational subgroups is a challenge when you use control charts, this approach can minimize the number of false alarms you experience, while permitting you to react as quickly as possible to true “special” causes.

Understanding Monte Carlo Simulation with an Example


As someone who has collected and analyzed real data for a living, the idea of using simulated data for a Monte Carlo simulation sounds a bit odd. How can you improve a real product with simulated data? In this post, I’ll help you understand the methods behind Monte Carlo simulation and walk you through a simulation example using Companion by Minitab.

Process capability chart

Companion by Minitab is a software platform that combines a desktop app for executing quality projects with a web dashboard that makes reporting on your entire quality initiative literally effortless. Among the first-in-class tools in the desktop app is a Monte Carlo simulation tool that makes this method extremely accessible. 

What Is Monte Carlo Simulation?

The Monte Carlo method uses repeated random sampling to generate simulated data to use with a mathematical model. This model often comes from a statistical analysis, such as a designed experiment or a regression analysis.

Suppose you study a process and use statistics to model it like this:

Regression equation for the process

With this type of linear model, you can enter the process input values into the equation and predict the process output. However, in the real world, each input won’t be a single fixed value, thanks to variability. Unfortunately, this input variability causes variability and defects in the output.

To design a better process, you could collect a mountain of data in order to determine how input variability relates to output variability under a variety of conditions. However, if you understand the typical distribution of the input values and you have an equation that models the process, you can easily generate a vast amount of simulated input values and enter them into the process equation to produce a simulated distribution of the process outputs.

You can also easily change these input distributions to answer "what if" types of questions. That's what Monte Carlo simulation is all about. In the example we are about to work through, we'll change both the mean and standard deviation of the simulated data to improve the quality of a product.

Today, simulated data is routinely used in situations where resources are limited or gathering real data would be too expensive or impractical.
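
If you want a feel for the mechanics, the core idea fits in a few lines of Python. The transfer function, input distributions, and spec limit below are made up purely for illustration; Companion builds this from your own equation and runs 50,000 trials by default.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 50_000  # number of simulated runs

  # Hypothetical input distributions and process model (for illustration only)
  x1 = rng.normal(loc=100.0, scale=2.0, size=n)   # input 1: mean 100, SD 2
  x2 = rng.normal(loc=20.0, scale=0.5, size=n)    # input 2: mean 20, SD 0.5
  y = 3.0 + 0.8 * x1 - 1.5 * x2                   # made-up transfer function

  # Compare the simulated output distribution to a made-up lower spec limit
  lsl = 50.0
  print("Simulated output mean:", y.mean())
  print("Percent below spec:", 100 * (y < lsl).mean())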

How Can Monte Carlo Simulation Help You?

With Companion by Minitab, engineers can easily perform a Monte Carlo analysis in order to:

  • Simulate product results while accounting for the variability in the inputs
  • Optimize process settings
  • Identify critical-to-quality factors
  • Find a solution to reduce defects

Along the way, Companion interprets simulation results and provides step-by-step guidance to help you find the best possible solution for reducing defects. I'll show you how to accomplish all of this right now!

Step-by-Step Example of Monte Carlo Simulation

A materials engineer for a building products manufacturer is developing a new insulation product. The engineer performed an experiment and used statistics to analyze process factors that could impact the insulating effectiveness of the product. (The data for this DOE is just one of the many data set examples that can be found in Minitab’s Data Set Library.) For this Monte Carlo simulation example, we’ll use the regression equation shown above, which describes the statistically significant factors involved in the process.

Let's open Companion by Minitab's desktop app (if you don't already have it, you can try Companion free for 30 days). Open or start a new project, then right-click on the project Roadmap™ to insert the Monte Carlo Simulation tool.

insert monte carlo simulation

Step 1: Define the Process Inputs and Outputs

The first thing we need to do is to define the inputs and the distribution of their values. The process inputs are listed in the regression output and the engineer is familiar with the typical mean and standard deviation of each variable. For the output, we simply copy and paste the regression equation that describes the process from Minitab statistical software right into Companion's Monte Carlo tool!

When the Monte Carlo tool opens, we are presented with these entry fields:

Setup the process inputs and outputs

It's an easy matter to enter the information about the inputs and outputs for the process as shown.

Setup the input values and the output equation

Verify your model with the above diagram and then click Simulate in the application ribbon.

perform the monte carlo simulation

Initial Simulation Results

After you click Simulate, Companion very quickly runs 50,000 simulations by default, though you can specify a higher or lower number of simulations. 

Initial simulation results

Companion interprets the results for you using output that is typical for capability analysis—a capability histogram, percentage of defects, and the Ppk statistic. Companion correctly points out that our Ppk is below the generally accepted minimum value for Ppk.

Step-by-Step Guidance for the Monte Carlo Simulation

But Companion doesn’t just run the simulation and then let you figure out what to do next. Instead, Companion has determined that our process is not satisfactory and presents you with a smart sequence of steps to improve the process capability.

How is it smart? Companion knows that it is generally easier to control the mean than the variability. Therefore, the next step that Companion presents is Parameter Optimization, which finds the mean settings that minimize the number of defects while still accounting for input variability.

Next steps leading to parameter optimization

Step 2: Define the Objective and Search Range for Parameter Optimization

At this stage, we want Companion to find an optimal combination of mean input settings to minimize defects. After you click Parameter Optimization, you'll need to specify your goal and use your process knowledge to define a reasonable search range for the input variables.

Setup for parameter optimization

And, here are the simulation results!

Results of the parameter optimization

At a glance, we can tell that the percentage of defects is way down. We can also see the optimal input settings in the table. However, our Ppk statistic is still below the generally accepted minimum value. Fortunately, Companion has a recommended next step to further improve the capability of our process.

Next steps leading to a sensitivity analysis

Step 3: Control the Variability to Perform a Sensitivity Analysis

So far, we've improved the process by optimizing the mean input settings. That reduced defects greatly, but we still have more to do in the Monte Carlo simulation. Now, we need to reduce the variability in the process inputs in order to further reduce defects.

Reducing variability is typically more difficult. Consequently, you don't want to waste resources controlling the standard deviation for inputs that won't reduce the number of defects. Fortunately, Companion includes an innovative graph that helps you identify the inputs where controlling the variability will produce the largest reductions in defects.

Setup for the sensitivity analysis

In this graph, look for inputs with sloped lines because reducing these standard deviations can reduce the variability in the output. Conversely, you can ease tolerances for inputs with a flat line because they don't affect the variability in the output.

In our graph, the slopes are fairly equal. Consequently, we'll try reducing the standard deviations of several inputs. You'll need to use process knowledge in order to identify realistic reductions. To change a setting, you can either click the points on the lines, or use the pull-down menu in the table.

Final Monte Carlo Simulation Results

Results of the sensitivity analysis

Success! We've reduced the number of defects in our process and our Ppk statistic is 1.34, which is above the benchmark value. The assumptions table shows us the new settings and standard deviations for the process inputs that we should try. If we ran Parameter Optimization again, it would center the process and I'm sure we'd have even fewer defects.

To improve our process, Companion guided us on a smart sequence of steps during our Monte Carlo simulation:

  1. Simulate the original process
  2. Optimize the mean settings
  3. Strategically reduce the variability

If you want to try Monte Carlo simulation for yourself, get the free trial of Companion by Minitab!

Making the World a Little Brighter with Monte Carlo Simulation


If you have a process that isn’t meeting specifications, using the Monte Carlo simulation and optimization tool in Companion by Minitab can help. Here’s how you, as a chemical technician for a paper products company, could use Companion to optimize a chemical process and ensure it consistently delivers a paper product that meets brightness standards.

The brightness of Perfect Papyrus Company’s new copier paper needs to be at least 84 on the TAPPI brightness scale. The important process inputs are the bleach concentration of the solution used to treat the pulp, and the processing temperature. The relationship is explained by this equation:

Brightness = 70.37 + 44.4 Bleach + 0.04767 Temp – 64.3 Bleach*Bleach

Bleach concentration follows a normal distribution with a mean of 0.25 and a standard deviation of 0.0095 percent. Temperature also follows a normal distribution, with a mean of 145 and a standard deviation of 15.3 degrees C.

Building your process model

To assess the process capability, you can enter the parameter information, transfer function, and specification limit into Companion's straightforward interface, and instantly run 50,000 simulations.

paper brightness monte carlo simulation

Understanding your results

monte carlo simulation output

The process capability statistic (Cpk) is 0.162, far short of the minimum standard of 1.33. Companion also indicates that under current conditions, you can expect the paper’s brightness to fall below standards about 31.5% of the time.
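
Because this example publishes both the equation and the input distributions, you can sanity-check those numbers with a quick Monte Carlo sketch of your own; Companion does all of this, plus the guided workflow, for you. The Cpk here is the lower-spec-only calculation based on the simulated standard deviation.

  import numpy as np

  rng = np.random.default_rng(42)
  n = 50_000

  # Input distributions from the article
  bleach = rng.normal(0.25, 0.0095, n)     # percent
  temp = rng.normal(145, 15.3, n)          # degrees C

  # Transfer function from the article
  brightness = 70.37 + 44.4 * bleach + 0.04767 * temp - 64.3 * bleach**2

  lsl = 84.0                                # minimum TAPPI brightness
  pct_oos = 100 * (brightness < lsl).mean()
  cpk = (brightness.mean() - lsl) / (3 * brightness.std(ddof=1))

  print(f"Percent below spec: {pct_oos:.1f}%")   # roughly 31%, in line with Companion
  print(f"Cpk: {cpk:.2f}")                       # roughly 0.16, in line with Companion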

Finding optimal input settings

Companion's smart workflow guides you to the next step for improving your process: optimizing your inputs.

parameter optimization

You set the goal—in this case, maximizing the brightness of the paper—and enter the high and low values for your inputs.

optimization dialog

Simulating the new process

After finding the optimal input settings in the ranges you specified, Companion presents the simulated results for the recommended process changes.

optimized process output

The results indicate that if the bleach amount was set to approximately 0.3 percent and the temperature to 160 degrees, the % outside of specification would be reduced to about 2% with a Cpk of 0.687. Much better, but not good enough.

Understanding variability

To further improve the paper brightness, Companion’s smart workflow suggests that you next perform a sensitivity analysis.

sensitivity analysis

Companion’s unique graphic presentation of the sensitivity analysis gives you more insight into how the variation of your inputs influences the percentage of your output that doesn’t meet specifications.

sensitivity analysis of paper brightness

The blue line representing temperature indicates that variation in this factor has a greater impact on your process than variation in bleach concentration, so you run another simulation to visualize the brightness using a 50% reduction in temperature variation.

final paper brightness model simulation

The simulation shows that reducing the variability will result in 0.000 percent of the paper falling out of spec, with a Cpk of 1.34. Thanks to you, the outlook for the Perfect Papyrus Company’s new copier paper is looking very bright.

Getting great results

Figuring out how to improve a process is easier when you have the right tool to do it. With Monte Carlo simulation to assess process capability, Parameter Optimization to identify optimal settings, and Sensitivity Analysis to pinpoint exactly where to reduce variation, Companion can help you get there.

To try the Monte Carlo simulation tool, as well as Companion's more than 100 other tools for executing and reporting quality projects, learn more and get the free 30-day trial version for you and your team at companionbyminitab.com.


Understanding Qualitative, Quantitative, Attribute, Discrete, and Continuous Data Types


"Data! Data! Data! I can't make bricks without clay."
 — Sherlock Holmes, in Arthur Conan Doyle's The Adventure of the Copper Beeches

Whether you're the world's greatest detective trying to crack a case or a person trying to solve a problem at work, you're going to need information. Facts. Data, as Sherlock Holmes says. 

jujubes

But not all data is created equal, especially if you plan to analyze it as part of a quality improvement project.

If you're using Minitab Statistical Software, you can access the Assistant to guide you through your analysis step-by-step, and help identify the type of data you have.

But it's still important to have at least a basic understanding of the different types of data, and the kinds of questions you can use them to answer. 

In this post, I'll provide a basic overview of the types of data you're likely to encounter, and we'll use a box of my favorite candy—Jujubes—to illustrate how we can gather these different kinds of data, and what types of analysis we might use it for. 

The Two Main Flavors of Data: Qualitative and Quantitative

At the highest level, two kinds of data exist: quantitative and qualitative.

Quantitative data deals with numbers and things you can measure objectively: dimensions such as height, width, and length. Temperature and humidity. Prices. Area and volume.

Qualitative data deals with characteristics and descriptors that can't be easily measured, but can be observed subjectively—such as smells, tastes, textures, attractiveness, and color. 

Broadly speaking, when you measure something and give it a number value, you create quantitative data. When you classify or judge something, you create qualitative data. So far, so good. But this is just the highest level of data: there are also different types of quantitative and qualitative data.

Quantitative Flavors: Continuous Data and Discrete Data

There are two types of quantitative data, which is also referred to as numeric data: continuous and discrete. As a general rule, counts are discrete and measurements are continuous.

Discrete data is a count that can't be made more precise. Typically it involves integers. For instance, the number of children (or adults, or pets) in your family is discrete data, because you are counting whole, indivisible entities: you can't have 2.5 kids, or 1.3 pets.

Continuous data, on the other hand, could be divided and reduced to finer and finer levels. For example, you can measure the height of your kids at progressively more precise scales—meters, centimeters, millimeters, and beyond—so height is continuous data.

If I tally the number of individual Jujubes in a box, that number is a piece of discrete data.

a count of jujubes is discrete data

If I use a scale to measure the weight of each Jujube, or the weight of the entire box, that's continuous data. 

Continuous data can be used in many different kinds of hypothesis tests. For example, to assess the accuracy of the weight printed on the Jujubes box, we could measure 30 boxes and perform a 1-sample t-test. 

Some analyses use continuous and discrete quantitative data at the same time. For instance, we could perform a regression analysis to see if the weight of Jujube boxes (continuous data) is correlated with the number of Jujubes inside (discrete data). 
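
As a rough illustration of how continuous data feeds a hypothesis test, here is a sketch of that 1-sample t-test in Python. The box weights and the 155 g label claim are invented; in practice you would weigh real boxes, and Minitab's Assistant will walk you through the test.

  from scipy import stats
  import numpy as np

  # Hypothetical weights (grams) of a few Jujubes boxes; assume the label claims 155 g
  weights = np.array([154.2, 156.1, 153.8, 155.6, 154.9, 156.4, 153.5, 155.2])

  # 1-sample t-test: is the mean box weight different from the labeled value?
  t_stat, p_value = stats.ttest_1samp(weights, popmean=155.0)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")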

Qualitative Flavors: Binomial Data, Nominal Data, and Ordinal Data

When you classify or categorize something, you create qualitative or attribute data. There are three main kinds of qualitative data.

Binary data place things in one of two mutually exclusive categories: right/wrong, true/false, or accept/reject. 

Occasionally, I'll get a box of Jujubes that contains a couple of individual pieces that are either too hard or too dry. If I went through the box and classified each piece as "Good" or "Bad," that would be binary data. I could use this kind of data to develop a statistical model to predict how frequently I can expect to get a bad Jujube.

When collecting unordered or nominal data, we assign individual items to named categories that do not have an implicit or natural value or rank. If I went through a box of Jujubes and recorded the color of each in my worksheet, that would be nominal data. 

This kind of data can be used in many different ways—for instance, I could use chi-square analysis to see if there are statistically significant differences in the amounts of each color in a box. 

We also can have ordered or ordinal data, in which items are assigned to categories that do have some kind of implicit or natural order, such as "Short, Medium, or Tall."  Another example is a survey question that asks us to rate an item on a 1 to 10 scale, with 10 being the best. This implies that 10 is better than 9, which is better than 8, and so on. 

The uses for ordered data are a matter of some debate among statisticians. Everyone agrees it's appropriate for creating bar charts, but beyond that the answer to the question "What should I do with my ordinal data?" is "It depends."  Here's a post from another blog that offers an excellent summary of the considerations involved.

Additional Resources about Data and Distributions

For more fun statistics you can do with candy, check out this article (PDF format): Statistical Concepts: What M&M's Can Teach Us. 

For a deeper exploration of the probability distributions that apply to different types of data, check out my colleague Jim Frost's posts about understanding and using discrete distributions and how to identify the distribution of your data.

Using a Value Stream Map to Find and Slay the Dragons of Process Waste


In ancient times dragons were believed to be set by the gods to guard golden treasures. This is because dragons were the most fearsome creatures and would deter would-be thieves. Dragons typically lived in an underground lair or castle and would sleep on top of their gold and treasures.  They were terrifying and often depicted as large fire-breathing, scaly creatures with wings and a huge deadly spiked tail.  One blow from its tail or fire-breath meant doom for any hopeful knight trying to slay this evil beast!

Just as dragons guarded their treasure, so do process steps guard their waste and excess inventory. Like dragons, these steps lay hidden, deep in the process, and fiercely defend their territory. They defy change and are experts at diverting attention to other parts of the process. They go by names such as Over-production, Over-processing, Waiting, Rework Loops, Defects, and Excess Inventory. There are costs associated with these steps too: acquiring and storing excess raw materials, warehousing partially or fully finished inventory, spare equipment, and maintaining that equipment, to name just a few.

How do you find and then slay these process dragons? You need a knight in shining armor to come to your rescue and slay the crafty dragons! A process improvement practitioner has the right tools and techniques, and—with the help of a knowledgeable team—can generate a map that reveals just where the dragons lurk. Typically, quality professionals have been trained in the traditional DMAIC problem-solving methodology and have a trusty sidekick, such as Companion by Minitab®, to help. 

The Value Stream Map (VSM) will be one of the most useful tools for finding hidden process waste. The VSM illustrates the flow of materials and information as a product or service moves through the value stream. A value stream is the collection of all activities, both value-added and non-value added that generate a product or service required to meet customer needs.
http://support.minitab.com/en-us/companion/vsm_complete.png

A current-state value stream map identifies waste and helps you to envision an improved future state. Companion by Minitab® has an easy-to-use VSM tool and other tools that make the process improvement journey fun. As you work through the process of mapping the steps and calculating takt times and value-add ratios (a quick sketch of those two calculations follows the tips), use the following three tips to uncover opportunities for improvements.

  1.   By default, process shapes and inventory shapes display data on the map after you enter values for Cycle Time, VA CT, NVA CT, Changeover, Inventory and Inv Time. To display other data, use the Map > Data Display > Select and Arrange Shape Data dialog box, and drag a data field from the list to the shape. Release the mouse when you determine a location.  The red line indicates where the data will be displayed beside the shape on the map.
    http://support.minitab.com/en-us/companion/vsm_select_arrange_data_dialog.png
  2. If you prefer to hide some or all the data, you can select a shape and then choose Map > Data Display > Shape Data Labels. In this example, only the data labels are hidden. 
    Shape Labels

    To hide all the shape data, choose Map > Data Display > Shape Data.  In this example, the data labels (Cycle Time, VA CT, and Operators) and their values are hidden.
    Shape Data
     
  3. Use comments fields to take notes. Simply click on the step and use the Comments field in the task pane on the Other tab. The comment symbol, which is circled in the image, appears above the shape.

Comments
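
As promised, here is a minimal sketch of the takt time and value-add ratio arithmetic, with made-up numbers; Companion's VSM tool calculates these from the data you enter on the map.

  # Hypothetical numbers, only to show the arithmetic behind two common VSM metrics
  available_time_per_day = 7.5 * 60 * 60      # seconds of working time per day
  customer_demand_per_day = 450               # units the customer needs per day
  takt_time = available_time_per_day / customer_demand_per_day  # seconds per unit

  value_added_time = 185.0                    # total value-added cycle time across all steps (s)
  total_lead_time = 3.5 * 24 * 60 * 60        # total lead time, including inventory waits (s)
  va_ratio = 100 * value_added_time / total_lead_time

  print(f"Takt time: {takt_time:.0f} seconds per unit")
  print(f"Value-add ratio: {va_ratio:.3f}%")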

In summary, process waste dragons are hard to find and harder to slay unless you have the appropriate problem-solving tools and techniques. Understanding the size of the pile of gold and how much of it you can get back from the dragon are keys to engaging management and employee support. Together you can be successful at slaying those dragons. Keep in mind, however, that dragons never really die – they always come back in the sequel!

To get your free 30-day trial of the Companion by Minitab® software, please go to the www.Minitab.com/Companion website. 
 

Many thanks to Dean Williams, Duke Energy for allowing me to use his ideas from the Slaying the Inventory Dragon presentation at the 2017 Lean and Six Sigma World Conference.

For Want of an FMEA, the Empire Fell


by Matthew Barsalou, guest blogger

For want of a nail the shoe was lost,
For want of a shoe the horse was lost,
For want of a horse the rider was lost,
For want of a rider the battle was lost,
For want of a battle the kingdom was lost,
And all for the want of a horseshoe nail. (Lowe, 1980, 50)

According to the old nursery rhyme, "For Want of a Nail," an entire kingdom was lost because of the lack of one nail for a horseshoe. The same could be said for the Galactic Empire in Star Wars. The Empire would not have fallen if the technicians who created the first Death Star had done a proper Failure Mode and Effects Analysis (FMEA).

A group of rebels in Star Wars, Episode IV: A New Hope stole the plans to the Death Star and found a critical weakness that led to the destruction of the entire station. A simple thermal exhaust port was connected to a reactor in a way that permitted an explosion in the exhaust port to start a chain reaction that blew up the entire station. This weakness was known, but considered insignificant because it could only be exploited by small space fighters and the exhaust port was protected by turbolasers and TIE fighters. It was thought that nothing could penetrate the defenses; however, a group of Rebel X-Wing fighters proved that this weakness could be exploited. One proton torpedo fired into the thermal exhaust port started a chain reaction that reached the station's reactors and destroyed the entire battle station (Lucas, 1976).

Why the Death Star Needed an FMEA

The Death Star was designed by the engineer Bevil Lemelisk under the command of Grand Moff Wilhuff Tarkin, whose doctrine called for a heavily armed mobile battle station carrying more than 1,000,000 imperial personnel as well as over 7,000 TIE fighters and 11,000 land vehicles (Smith, 1991). It was constructed in orbit around the penal planet Despayre in the Horuz system of the Outer Rim Territories and was intended to be a key element of the Tarkin Doctrine for controlling the Empire. The current estimate for the cost of building a Death Star is $850,000,000,000,000,000 (Rayfield, 2013).

Such an expensive, resource-consuming project should never be attempted without a design FMEA. The loss of the Death Star could have been prevented with just one properly filled-out FMEA during the design phase:

FMEA Example

The Galactic Empire's engineers frequently built redundancy into the systems on the Empire’s capital ships and space stations; unfortunately, the Death Star's systems were all connected to the main reactor to ensure that power would always be available for each individual system. This interconnectedness resulted in thermal exhaust ports that were directly connected to the main reactor.

The designers knew that an explosion in a thermal exhaust port could reach the main reactor and destroy the entire station, but they were overconfident and believed that limited prevention measures--such as turbolaser towers, shielding that could not prevent the penetration of small space fighters, and wings of TIE fighters--could protect the thermal exhaust ports (Smith, 1991). Such thinking is little different than discovering a design flaw that could lead to injury or death, but deciding to depend upon inspection to prevent anything bad from happening. Bevil Lemelisk could not have ignored this design flaw if he had created an FMEA.

Assigning Risk Priority Numbers to an FMEA

An FMEA can be done with a pencil and paper, although Minitab's Companion software for executing and reporting on process improvement has a built-in FMEA form that automates calculations, and shares data with process maps and other forms you'll probably need for your project. 

An FMEA uses a Risk Priority Number (RPN) to determine when corrective actions must be taken. RPN values range from 1 to 1,000, and lower numbers are better. The RPN is determined by multiplying severity (S) by occurrence (O) by detection (D).

RPN = S x O x D

Severity, occurrence and detection are each evaluated and assigned a number between 1 and 10, with lower numbers being better.
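
To make the arithmetic concrete, here is a minimal sketch in Python (not part of the original FMEA form) that computes an RPN and flags failure modes for action. The function names and the action thresholds, an RPN of 120 or a severity of 9 or more, are illustrative assumptions rather than fixed FMEA rules.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: S x O x D, with each rating between 1 and 10."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return severity * occurrence * detection


def needs_action(severity, occurrence, detection,
                 rpn_threshold=120, severity_threshold=9):
    """Flag a failure mode for corrective action (thresholds are illustrative)."""
    score = rpn(severity, occurrence, detection)
    return score >= rpn_threshold or severity >= severity_threshold


# Death Star thermal exhaust port, using the ratings discussed below:
print(rpn(10, 3, 4))           # 120
print(needs_action(10, 3, 4))  # True: a severity of 10 alone warrants a redesign
```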

Failure Mode and Effects Analysis Example: Death Star Thermal Exhaust Ports

In the case of the Death Star's thermal exhaust ports, the failure mode would be an explosion in the exhaust port and the resulting effect would be a chain reaction that reaches the reactors. The severity would be rated as 10 because an explosion of the reactors would lead to the loss of the station as well as the loss of all the personnel on board. A 10 for severity is sufficient reason to look into a redesign so that a failure, no matter how improbable, does not result in injury or loss of life.

FMEA Failure Mode Severity Example

The potential cause of failure on the Death Star would be attack or sabotage; the designers did not consider this likely to happen, so occurrence is a 3. The main control measure was shielding that would only be effective against attack by large ships. Detection was rated as a 4 because the Empire believed these measures to be effective.

Potential Causes and Current Controls

The resulting RPN would be S x O x D = 10 x 3 x 4 = 120. An RPN of 120 should be sufficient reason to take action, but even a lower RPN would require corrective action because of the high severity rating. The Death Star's RPN may even be too low, given the Empire's overconfidence in the current controls. Corrective actions are definitely needed. 

FMEA Risk Priority Number

Corrective actions are easier and cheaper to implement early in the design phase, particularly if the problem is detected before assembly has started. The original Death Star plans could have been modified with little effort before construction started. The shielding could have been improved to prevent any penetration and, more importantly, the interlinks between the systems could have been removed so that a failure of one system, such as an explosion in the thermal exhaust port, does not destroy the entire Death Star. The RPN needs to be reevaluated after corrective actions are implemented and verified; the new Death Star RPN would be 5 x 3 x 2 = 30.

FMEA Revised Metrics

Of course, doing the FMEA would have had more important impacts than just achieving a low number on a piece of paper. Had this step been taken, the Empire could have continued to implement the Tarkin Doctrine, and the Universe would be a much different place today. 

Do You Need to Do an FMEA? 

A simple truth is demonstrated by the missing nail and the kingdom, as well as the lack of an FMEA and the Death Star:  when designing a new product, whether it is an oil rig, a kitchen appliance, or a Death Star, you'll avoid many future problems by performing an FMEA early in the design phase.

About the Guest Blogger: 
Matthew Barsalou is an engineering quality expert in BorgWarner Turbo Systems Engineering GmbH’s Global Engineering Excellence department. He has previously worked as a quality manager at an automotive component supplier and as a contract quality engineer at Ford in Germany and Belgium. He possesses a bachelor of science in industrial sciences, a master of liberal studies, and a master of science in business administration and engineering from the Wilhelm Büchner Hochschule in Darmstadt, Germany.
  

Would you like to publish a guest post on the Minitab Blog? Contact publicrelations@minitab.com

 

References

Lucas, George. Star Wars, Episode IV: A New Hope. New York: Del Rey, 1976. http://www.amazon.com/Star-Wars-Episode-IV-Hope/dp/0345341465/ref=sr_1_2?ie=UTF8&qid=1358180992&sr=8-2&keywords=Star+Wars%2C+Episode+IV%3A+A+New+Hope

Opie, Iona, and Peter Opie, eds. Oxford Dictionary of Nursery Rhymes. Oxford, 1951, 324. Quoted in Lowe, E.J. “For Want of a Nail.” Analysis 40 (January 1980), 50-52. http://www.jstor.org/stable/3327327

Rayfield, Jillian. “White House Rejects 'Death Star' Petition.” Salon, January 13, 2013. Accessed January 14, 2013 from http://www.salon.com/2013/01/13/white_house_rejects_death_star_petition/

Smith, Bill. ed. Star Wars: Death Star Technical Companion. Honesdale, PA: West End Games, 1991. http://www.amazon.com/Star-Wars-Death-Technical-Companion/dp/0874311209/ref=sr_1_1?s=books&ie=UTF8&qid=1358181033&sr=1-1&keywords=Star+Wars%3A+Death+Star+Technical+Companion.

How Can a Similar P-Value Mean Different Things?


One highlight of writing for and editing the Minitab Blog is the opportunity to read your responses and answer your questions. Sometimes, to my chagrin, you point out that we've made a mistake. However, I'm particularly grateful for those comments, because it permits us to correct inadvertent errors. 

I feared I had an opportunity to fix just such an error when I saw this comment appear on one of our older blog posts:

You said a p-value greater than 0.05 gives a good fit. However, in another post, you say the p-value should be below 0.05 if the result is significant. Please, check it out!

You ever get a chill down your back when you realize you goofed? That's what I felt when I read that comment. Oh no, I thought. If the p-value is greater than 0.05, the results of a test certainly wouldn't be significant. Did I overlook an error that basic?  

Before beating myself up about it, I decided to check out the posts in question. After reviewing them, I realized I wouldn't need to put on the hairshirt after all. But the question reminded me about the importance of a fundamental idea. 

It Starts with the Hypothesis

If you took an introductory statistics course at some point, you probably recall the instructor telling the class how important it is to formulate your hypotheses clearly. Excellent advice.

However, many commonly used statistical tools formulate their hypotheses in ways that don't quite match. That's what this sharp-eyed commenter noticed and pointed out.

The writer of the first post detailed how to use Minitab to identify the distribution of your data, and in her example pointed out that a p-value greater than 0.05 meant that the data were a good fit for a given distribution. The writer of the second post—yours truly—commented on the alarming tendency to use deceptive language to describe a high p-value as if it indicated statistical significance.

To put it in plain language, my colleague's post cited the high p-value as an indicator of a positive result. And my post chided people who cite a high p-value as an indicator of a positive result. 

Now, what's so confusing about that? 

Don't Forget What You're Actually Testing

You can see where this looks like a contradiction, but to my relief, the posts were consistent. The appearance of contradiction stemmed from the hypotheses discussed in the two posts. Let's take a look. 

My colleague presented this graph, output from the Individual Distribution Identification:

Probability Plot

The individual distribution identification is a kind of hypothesis test, and so the p-value helps you determine whether or not to reject the null hypothesis.

Here, the null hypothesis is "The data follow a normal distribution," and the alternative hypothesis would be "The data DO NOT follow a normal distribution." If the p-value is over 0.05, we fail to reject the null hypothesis and can treat the normal distribution as an adequate fit for the data.

Just have a look at that p-value:

P value

That's a high p-value. And for this test, that means we can conclude the normal distribution fits the data. So if we're checking these data for the assumption of normality, this high p-value is good. 

But more often we're looking for a low p-value. In a t-test, the null hypothesis might be "The sample means ARE NOT different," and the alternative hypothesis, "The sample means ARE different." Seen this way, the arrangement of the hypotheses is the opposite of the one used in the distribution identification. 

Hence, the apparent contradiction. But in both cases a p-value greater than 0.05 means we fail to reject the null hypothesis. We're interpreting the p-value in each test the same way.

However, because the connotations of "good" and "bad" are different in the two examples, how we talk about these respective p-values appears contradictory—until we consider exactly what the null and alternative hypotheses are saying. 
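
If you want to see the two situations side by side outside of Minitab, here is a minimal sketch in Python using scipy.stats with made-up data. Note that Minitab's Individual Distribution Identification uses an Anderson-Darling test; the sketch substitutes a Shapiro-Wilk test simply because it returns a p-value directly. In both tests the decision rule is the same; only the wording of the hypotheses differs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05

# Goodness-of-fit check. Null hypothesis: the data follow a normal distribution.
sample = rng.normal(loc=50, scale=5, size=40)
_, p_fit = stats.shapiro(sample)
if p_fit > alpha:
    print(f"p = {p_fit:.3f}: fail to reject H0; the normal distribution is a reasonable fit")
else:
    print(f"p = {p_fit:.3f}: reject H0; the data do not appear normal")

# Two-sample t-test. Null hypothesis: the sample means are NOT different.
group_a = rng.normal(loc=50, scale=5, size=40)
group_b = rng.normal(loc=53, scale=5, size=40)
_, p_diff = stats.ttest_ind(group_a, group_b)
if p_diff > alpha:
    print(f"p = {p_diff:.3f}: fail to reject H0; no evidence the means differ")
else:
    print(f"p = {p_diff:.3f}: reject H0; the means differ")
```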

And that's a point I was happy to be reminded of. 

 

Reducing the Phone Bill with Statistical Analysis


One of the most memorable presentations at the inaugural Minitab Insights conference reminded me that data analysis and quality improvement methods aren't only useful in our work and businesses: they can make our home life better, too. 

The presenter, a continuous improvement training program manager at an aviation company in the midwestern United States, told attendees how he used Minitab Statistical Software, and some simple quality improvement tools, to reduce his phone bill.

He took the audience back to 2003, when his family first obtained their cell phones. For a few months, everything was fine. Then the April bill arrived, and it was more than they expected. The family had used too many minutes. 

The same thing happened again in May. In June, the family went over the number of minutes allocated in their phone plan again, for the third month in a row. Something had to change!

Defining the Problem

His wife summed up the problem this way: "There is a problem with our cell phone plan, because the current minutes are not enough for the family members over the past three months." 

He wasn't sure that "too few minutes" was the real problem. But instead of arguing, he applied his quality improvement training to find common ground. He and his wife agreed that the previous three months' bills were too high, and they were able to agree that the family went over the plan minutes—for an unknown reason. Based on their areas of agreement, they revised the initial problem statement: 

There is a problem with our cell phone usage, and this is known because the minutes are over the plan for the past 3 months, leading to a strain on the family budget.

They further agreed that before taking further action—like switching to a costlier plan with more minutes—they needed to identify the root cause of the overage. 

Using Data to Find the Root Cause(s)

At this point, he downloaded the family's phone logs from their cell phone provider and began using Minitab Statistical Software to analyze the data. First, he used a simple pie chart to look at who was using the most minutes. Since he also had a work-provided cell phone, it wasn't surprising to see that his wife used 4 minutes for each minute of the family plan he used. 

Since his wife used 75% of the family's minutes, he looked more closely for patterns and insights in her call data. He created time series plots of her daily and individual call minutes, and created I-MR and Xbar-S charts to assess the stability of her calling process over time. 

I-MR chart of daily phone minutes

Xbar-S Chart of Daily Minutes Per Week

He also subgrouped calls by day of the week and displayed them in a boxplot. 

Boxplot of daily minutes used

These analyses revealed that daily minute usage did contain some "special cause variation," shown in the I-MR chart. They also showed that, compared to other days of the week, Thursdays had higher average daily minutes and greater variance. 

Creating a Pareto chart of his wife's phone calls provided further insight. 

Pareto chart of number called

The Minitab analysis helped them see where and when most of their minutes were going. But as experienced professionals know, sometimes the numbers alone don't tell the entire story. So the family discussed the results to put those numbers in context and to see where some improvements might be possible.

The most commonly called number belonged to his wife's best friend, who used a different cell phone provider than the family did. This explained the Thursday calls: every weekend his wife and her friend took turns shopping garage sales on opposite sides of town to get clothes for their children, and they coordinated their plans on Thursday evenings.

Calls to her girlfriend could have been free if they just used the same provider, but the presenter's family didn't want to change, and it wasn't fair to expect the other family to change. But while a few calls to her girlfriend may have been costing a few dollars, the family was saving many more dollars on clothes for the kids. 

Given the complete context, this was a situation where the calls were paying for themselves, so the family moved on to the next most frequently called number: the presenter's mother's land line.

His wife spoke very frequently with his mother to arrange childcare and other matters. His mother had a cell phone from the same provider, so calls to the cell phone should be free. Why, then, was his wife calling the land line? "Because," his wife informed him, "your mother never answers her cell phone." 

Addressing the Root Cause

The next morning, the presenter visited his mother and eventually he steered the conversation to her cell phone. "I just love using the cell phone on weekends," his mother told him. "I use it to call my old friends during breakfast, and since it's the weekend the minutes are free!" 

When he asked how she liked using the cell phone during the week, his mother's face darkened. "I hate using the cell phone during the week," she declared. "The phone rings all the time, but when I answer there's never anyone on the line!"  

This seemed strange. To get some more insight, her son worked with her to create a spaghetti diagram that showed her typical movements during the weekday when her cell phone rang. That diagram, shown below, revealed two important things.

spaghetti diagram

First, it showed that his mother loved watching television during the day. Second, and more important, to answer her cell phone his mother needed to get up from her chair, walk into the dining room, and retrieve the phone, which she always kept on the table. 

Her cell phone automatically sent callers to voice mail after three rings. But it took his mother longer than three rings to get from her chair to the phone. What's more, since she never learned to use the voice mail ("Son, there is no answering machine connected to this phone!") his mother almost exclusively used the cell phone to make outgoing calls. 

Now that the real reasons underlying this major drain on the minutes in the family's cell phone plan were known, a potential solution could be devised and tested. In this case, rather than force his mother to start using voicemail, he came up with an elegant and simple alternative:  

Job Instructions for Mom:

When receiving call on weekday:

  • Go to cell phone
  • Pick up phone
  • Press green button twice
  • Wait for person who called to answer phone

After a few test runs to make sure his mother was comfortable with the new protocol, they gave the new system its first month's test run. 

The Results

Solving this problem effectively required four steps. First, the presenter and his wife needed to clearly define the problem. Second, they used statistical software to get insight into the problem from the available data. From there, a spaghetti chart and a set of simple job instructions provided a very viable solution to test. And the outcome? 

Bar Chart of Phone Bills

As the bar graph shows, July's minutes were well within their plan's allotment. In that month's Pareto chart, what had been the second-largest bar dropped to near zero. His mother enjoyed her cell phone much more, and his wife was able to arrange child care with just one call. 

And to this day, when the presenter wants to talk to his mother, he 

1. Calls her cell phone
2. Lets it ring 3 times
3. Hangs up
4. Waits for her return call

Happily, this solution turned out to be very sustainable, as the monthly minutes remained within the family's allowance and budget for quite some time...until his daughter got a cell phone, and texting issues began.

Where could you apply data analysis to get more insight into the challenges you face? 

A New Spin on the "Stand in a Circle" Exercise (Part 1)


In the mid-1940s, Taiichi Ohno established the Toyota Production System, which is primarily based on eliminating non-value-added waste. He discovered that reducing waste and inventory levels exposes problems and forces employees to address them. To engage the workers and therefore improve processes, Ohno developed many exercises.

One of his most popular exercises, “Stand in a Circle,” helped his managers and students see process waste. During this exercise, Ohno would take the manager or student to the shop floor, draw a chalk circle on the floor, then have them stand inside the circle and observe an operation. His direction would be simple: “Watch.”  

Several hours later, Ohno would return and ask “What do you see?” If they saw the same problem Ohno had seen, then the exercise was over. If not, he would say “Watch some more.” This would continue until they saw the same problem Ohno had seen. This exercise helped managers learn to observe waste, and thus helped organizations identify and deal with the Seven Wastes of Lean.

1. Overproduction
Producing more than what’s actually needed by the next process or customer (The worst form of waste because it contributes to the other six).

2. Waiting
Delay, waiting or time spent in a queue with no value being added.

3. Transportation
Moving parts and products unnecessarily.

4. Over-processing
Undertaking non-value-added activity.

5. Inventory
Having more than the minimum.

6. Motion
Unnecessary movement or action.

7. Correction
Inspection, rework, and scrap.

I've been thinking about Ohno's famous exercise a lot since the winners of the Lean and Six Sigma Excellence Awards were announced at the 2017 Lean and Six Sigma World Conference in Nashville, Tenn.

For the second consecutive year, Arrow Electronics took the category for innovation, this time for its Lean Sigma Drones project. This project combines drone technology, proprietary video technology, and a rapid-improvement methodology to observe Arrow’s extensive warehouse operations from a bird's-eye view and more effectively identify areas for continuous improvement.

This new approach—appropriately named "Fly in a Circle"—has already increased the efficiency of targeted processes by 82 percent and eliminated more than 6.5 million walking steps in warehouse processes since Arrow launched it in late 2016.

Standing (or Flying) in a Circle means you go to the Gemba and observe for yourself what is actually happening. Get the facts about what is being done, not what is supposed to be done according to the procedure. Observe all the waste you can, and write it down. Keep an open mind about your observations. Even if you know the reason behind a workaround, document it anyway—it’s still a workaround, and potentially a wasteful task. Being able to spot waste is one of the hardest parts of improving a process.

Figure 1. Waste Analysis by Operation

When performing this exercise, it is easy to fall into the trap of trying to fix the waste on the spot. Instead, use lean tools to thoroughly understand the process, then develop ways to eliminate the waste. Companion by Minitab® contains professionally designed Roadmaps™ and forms that can be used to document and further diagnose the root cause of waste. Using the Waste Analysis by Operation form (Figure 1) and performing the Five Whys on the identified waste (Figure 2) will help you document and discover ways to eliminate waste in your operations. 

Figure 2. Five Whys

The simple exercise of Stand or Fly in a Circle will open your eyes to new ways to improve your processes by eliminating wasteful activities. As your processes and services become more effective and efficient, your customer will appreciate the improvements made in delivery, quality, and price. When an organization eliminates waste, improves quality and reduces costs, they gain a competitive advantage by responding faster and better to customer requirements and needs. 

As you prepare for your Stand or Fly in a Circle exercise, remember these inspirational words: “You can observe a lot by just watching.” – Yogi Berra

If you'd like to learn more about Companion, or try the more than 100 other tools for executing and reporting on quality projects that it includes, get the free 30-day trial version for you and your team at companionbyminitab.com.

Control Your Control Chart!


As a member of Minitab's Technical Support team, I get the opportunity to work with many people creating control charts. They know the importance of monitoring their processes with control charts, but many don’t realize that they themselves could play a vital role in improving the effectiveness of the control charts.  

In this post, I will show you how to take control of your charts by using Minitab Statistical Software to set the center line and control limits, which can make a control chart even more valuable. 

When you add or change a value in the worksheet, by default the center line and control limits on a control chart are recalculated. This can be desirable in many cases—for example, when you have a new process. Once the process is stable, however, you may not want the center line and control limits continually recalculated. 

Consider this stable process:

Xbar Chart of Thickness

Now suppose the process has changed, but with the newly recalculated center line and control limits, the process is still shown to be in control (using the default Test 1: 1 point > 3 standard deviations from the center line).

Xbar Chart of Thickness 

If you have a stable control chart, and you do not want the center line or control limits to change (until you make a change to the process), you can set the center line and control limits. Here are two ways to do this.

  1. Use the Estimate tab.  
    This option works well when you want to use the initial subgroups to calculate the center line and control limits. 
  • Choose Stat > Control Charts > Variables Charts for Subgroups > Xbar.
  • Click the Estimate tab.
  • Choose “Use the following subgroups when estimating parameters” and enter the appropriate number of subgroups. In the example above we want to use the first 12 subgroups, so enter 1:12.

X-bar chart options

 

  2. Use the Parameters tab.
    This option works well when you do not have an initial set of data you want to use to calculate the center line and control limits, but know the values you want to use. 

Suppose you want the center line of your Xbar chart to be 118.29, UCL=138.32 and LCL=98.26.

  • Solve for the standard deviation, s. Using the Xbar chart formula for the UCL, UCL = µ + 3σ/√n (where n is the subgroup size), substitute the center line for µ and s for σ, then solve for s: s = √n(UCL - µ)/3.

Note: If you want to use the estimates from another data set, such as a similar process, you could obtain the estimates of the mean and standard deviation without solving for s.  Choose Stat > Control Charts > Variables Charts for Subgroups > Xbar.  Choose Xbar Options, then click the Storage tab.  Check Means and Standard deviations. I'll use the data from the first 12 subgroups above for illustration:


 

These values are stored in the next available blank columns in the worksheet.

  • Choose Stat > Control Charts > Variables Charts for Subgroups > Xbar.  Choose Xbar Options, then click the Parameters tab.  Enter the mean and standard deviation.

 

Using the center line and the control limits from the stable process (using either of the methods described above), the chart now reveals the new process is out of control.
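
If you'd like to double-check the arithmetic behind the Parameters tab approach, here is a minimal sketch in Python. The subgroup size of 5 is an assumption for illustration only; substitute the subgroup size from your own chart.

```python
from math import sqrt

def sigma_for_xbar_ucl(center_line, ucl, subgroup_size):
    """Back out the standard deviation to enter on the Parameters tab,
    assuming Xbar chart limits of the form mu +/- 3*sigma/sqrt(n)."""
    return sqrt(subgroup_size) * (ucl - center_line) / 3

center_line = 118.29
ucl = 138.32
n = 5  # assumed subgroup size, for illustration only

s = sigma_for_xbar_ucl(center_line, ucl, n)
print(round(s, 2))                              # standard deviation to enter
print(round(center_line - 3 * s / sqrt(n), 2))  # implied LCL, about 98.26
```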

As you can see, it's important to consider whether you are using the best center line and control limits for your control charts. Making sure you're using the best options, and setting the center line and control limits manually when desirable, will make your control charts even more beneficial.

 


A New Spin on the "Stand in a Circle” Exercise (Part 2)


In Part 1 of my A New Spin on the "Stand in a Circle" Exercise blog, I described how Taiichi Ohno, the creator of the Toyota Production System, used the “Stand in a Circle” exercise to help managers identify waste in their operations. 

During this exercise Ohno would take a manager or student to the shop floor, draw a chalk circle on the floor, then have them stand inside the circle and observe an operation. His direction was simply, “Watch.” Several hours later Ohno would return and ask “What do you see?” If they saw the same problem Ohno had seen, then the exercise was over. If not, he would say “Watch some more.”

This would continue until the manager or student saw the same problem Ohno had seen, thus teaching them to observe waste. Ohno developed this exercise to help organizations identify and deal with the Seven Wastes of Lean.

In this post, I’ll walk you through a "Stand in a Circle" example using Companion by Minitab®. Suppose you are a process improvement practitioner at a company where full containers—boxes of tile grout—are transported from the processing area to the warehouse for shipping. The containers are stacked onto pallets, wrapped with poly sheeting, and transported to the warehouse to wait for shipping to the customer.

While standing in a circle in the middle of the warehouse, you notice and document several wasteful activities on the Waste Analysis by Operation form (Figure 1). 

Waste Analysis by Operation

Figure 1. Waste Analysis by Operation

The highest-priority issue is the container damage, so you'll address this one first. The containers can get damaged when being stacked on the pallets and transported to the shipping area. 

A Cause and Effect diagram (C&E) or Fishbone can be used to identify causes for an effect or problem. During a team meeting, conduct a brainstorming session to identify the causes of the container damage. On a C&E diagram, the effect, or central problem, is on the far right. Affinities, which are categories of causes, branch from the spine of the effect, and the causes branch from the affinities. The structure of the C&E diagram will immediately sort ideas into useful categories (affinities). Use Companion’s built-in priority rating scale and color coding to identify high-, medium-, or low-priority causes to further investigate.

CandE diagram

Figure 2. Cause and Effect Diagram

Another tool to help get to the root cause of a problem is the 5 Whys line of questioning (Figure 3). By asking the question “Why?” five times, you will eventually get to the root cause of the problem and identify steps to prevent it from happening again. Both the Cause and Effect Diagram and the 5 Whys tools are best performed in a group setting with a team knowledgeable about the process.

Five Whys

Figure 3. 5 Whys Form

After solutions are identified, the team can fill out the 30-60-90 Action Plan to identify and track the long-term activities.  Using this form will help the team clearly identify:

1). What remains to be done?

2). Who is responsible?

3). When will it be done?

Action Plan

Figure 4. 30-60-90 Action Plan

As your processes and services become more effective and efficient, your customer will appreciate the improvements made in delivery, quality, and price. When an organization eliminates waste, improves quality and reduces costs, they gain a competitive advantage by responding faster and better to customer requirements and needs.  

The simple exercise of “Standing in a Circle” will open your eyes to new ways to improve your processes by eliminating wasteful activities. Using a root cause analysis tool such as the Fishbone and the 5 Whys can quickly get your team to understand the causes behind inefficient tasks. 

Once the root causes are identified, the team can get busy identifying, selecting and implementing solutions. Using a project management tool such as Companion will help keep the process improvement team organized and will keep your stakeholders and executives apprised of progress automatically.

Companion puts all of your tools in one easy-to-use application, so you'll spend less time managing projects and more time moving them forward. If you aren't already using it, you can try Companion free for 30 days.

Methods and Formulas: How Are I-MR Chart Control Limits Calculated?


Users often contact Minitab technical support to ask how the software calculates the control limits on control charts.

A frequently asked question is how the control limits are calculated on an I-MR Chart or Individuals Chart. If Minitab plots the upper and lower control limits (UCL and LCL) three standard deviations above and below the mean, why are the limits plotted at values other than 3 times the standard deviation that I get using Stat > Basic Statistics? 

That’s a valid question—if we’re plotting individual points on the I-Chart, it doesn’t seem unreasonable to try to calculate a simple standard deviation of the data points, multiply by 3 and expect the UCL and LCL to be the data mean plus or minus 3 standard deviations. This can be especially confusing because the Mean line on the Individuals chart IS the mean of the data!

However, the standard deviation that Minitab Statistical Software uses is not the simple standard deviation of the data. The default method that Minitab uses (and an option to change the method) is available by clicking the I-MR Options button, and then choosing the Estimate tab:

There we can see that Minitab is using the Average moving range method with 2 as the length of moving range to estimate the standard deviation.

That’s all well and good, but exactly what the heck is an average moving range with length 2?!

Minitab’s Methods and Formulas section details the formulas used for these calculations. In fact, Methods and Formulas provides information on the formulas used for all the calculations available through the dialog boxes. This information can be accessed via the Help menu by choosing Help > Methods and Formulas...

To see the formulas for control chart calculations, we choose Control Charts > Variables Charts for Individuals, as shown below:

The next page shows the formulas organized by topic. By selecting the link Methods for estimating standard deviation we find the formula for the Average moving range:

Looking at the formula, things become a bit clearer—the ‘length of the moving range’ is the number of data points used when we calculate the moving range (i.e., the difference from point 1 to point 2, 2 to 3, and so forth).

If we want to hand-calculate the control limits for a dataset, we can do that with a little help from Minitab!

The dataset I’ve used for this example is available HERE.

First, we’ll need to get the values of the moving ranges. We’ll use the calculator by navigating to Calc > Calculator; in the example below, we’re storing the results in column C2 (an empty column) and we’re using the LAG function in the calculator. That will move each of our values in column C1 down by 1 row. Click OK to store the results in the worksheet.

Note: By choosing the Assign as a formula option at the bottom of the calculator, we can add a formula to column C2 which we can easily go back and edit if a mistake was made.

Now with the lags stored in C2, we use the calculator again: Calc > Calculator (here's a tip: press F3 on the keyboard to clear out the previous calculator entry), then subtract column C2 from column C1 as shown below, storing the results in C3. We use the ABS calculator command to get the absolute differences of each row:

Next we calculate the sum of the absolute values of the moving ranges by using Calc > Calculator once again. We’ll store the sum in the next empty column, C4:

The value of this sum represents the numerator in the Rbar calculation:

To complete the Rbar calculation, we use the information from Methods and Formulas to come up with the denominator: n is the number of data points (in this example it’s 100), w’s default value is 2, and we add 1, so the denominator is 100 - 2 + 1. In Minitab, we can once again use Calc > Calculator to store the results in C5:

With Rbar calculated, we find the value of the unbiasing constant d2 from the table that is linked in Methods and Formulas:

For a moving-range of length 2, the d2 value is 1.128, so we enter 1.128 in the first row in column C6, and use the calculator one more time to divide Rbar by d2 to get the standard deviation, which works out to be 2.02549:
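
If you'd rather verify the same arithmetic in code, here is a minimal sketch in Python that mirrors the steps above (the moving ranges, Rbar, and Rbar/d2) for any column of individual measurements. The data below are randomly generated for illustration, since the linked dataset isn't reproduced here.

```python
import numpy as np

def individuals_limits(values, w=2, d2=1.128):
    """I-chart center line and control limits using the average moving range
    method with a moving range of length w (d2 = 1.128 when w = 2)."""
    x = np.asarray(values, dtype=float)
    moving_ranges = np.abs(np.diff(x))             # same as the LAG + ABS steps
    rbar = moving_ranges.sum() / (len(x) - w + 1)  # Rbar = sum(MR) / (n - w + 1)
    sigma = rbar / d2                              # estimated standard deviation
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma

# Randomly generated data for illustration; substitute your own measurements.
rng = np.random.default_rng(1)
data = rng.normal(loc=10, scale=2, size=100)
lcl, center, ucl = individuals_limits(data)
print(f"LCL = {lcl:.3f}, center line = {center:.3f}, UCL = {ucl:.3f}")
```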

We can check our results by using the original data to create an I-MR chart.  We enter the data column in Variables, and then click I-MR Options and choose the Storage tab; here we can tell Minitab to store the standard deviation in the worksheet when we create the chart:

The stored standard deviation is shown in the new column titled STDE1, and it matches the value we hand-calculated. Notice also that the Rbar we calculated is the average of the moving ranges on the Moving Range chart. Beautiful!

Submitting an A+ Presentation Abstract, Even About Statistics


For the majority of my career, I've had the opportunity to speak at conferences and other events somewhat regularly. I thought some of my talks were pretty good, and some were not so good (based on ratings, my audiences didn't always agree with my assessment—but that's a topic for another post). But I would guess that well over 90% of the time, my proposals were accepted to be presented at the conference, so even though I may not have always delivered a home run on stage, I at least submitted an abstract that was appealing to the organizers.

When I served as chair of the Lean and Six Sigma World Conference, I reviewed every abstract submitted and was able to experience things from the other side of the process. Now, with the submission period upon us for the Minitab Insights Conference, I thought I'd share some insights on submitting an A+ speaking abstract.

Tell A Story

People are emotional beings, and a mere list of the technical content you plan to present doesn't engage the reviewers any more than it will an audience. Connecting the topic to some story sparks an emotional interest and desire to know more. Several years ago, I presented on the multinomial test at a conference, a topic that probably would have elicited yawns if I'd pitched it as the technical details of how to perform this hypothesis test. Instead I submitted an abstract asking if Virgos were worse drivers, as stated by a well-known auto insurer, and explaining that by answering the question we can also learn how to determine if defect rates were different among multiple suppliers or error rates were different for various claims filers. Want to know if they are, I asked. Accept my talk!  They did. 

Nail the Title

This can be the most difficult step, but it helps to remember that organizers use the program to promote the conference and draw attendees. A catchy title that elicits interest from prospective attendees can go a long way. So, what makes for a good title? I like to reference the story I will tell and not directly state the topic. For the talk I describe above, the title was "Are Virgos Unsafe Drivers?" Note that from the title, someone considering attending has no idea yet that the talk will be about a statistical test. But they are curious and will read the description. More important, the talk seems interesting and the speaker seems engaging, and those are the criteria attendees use to decide what talks to attend. An alternate title that is more descriptive but not catchy, "The Proper Application of the Multinomial Test of Proportions," sounds like a good place to take a nap.

Reference Prior Experience

If the submission process allows it (the Minitab Insights Conference does), reference prior speaking engagements and, even better, provide links to any recordings that may exist of you speaking. Even if it is not a formal presentation, anything that enables the organizers to get a feel for your personality when speaking is a huge plus. It is somewhat straightforward to assess whether a submitted talk would be of interest to attendees, but assessing whether speakers are engaging is difficult or impossible, even though ultimately it will make a huge impact on what attendees think of the conference. Even better, you don't actually have to be an excellent presenter—the organizer's fear is that you might be a terrible speaker! Simply demonstrating that you can present clearly and engage an audience goes a long way.

Don't Make Mistakes

It is best to assume that whoever is evaluating you is a complete stranger. Imagine you ask for something from a stranger and what they send you is incomplete or contains grammatical errors or typos: what is your impression of that person? If they are submitting to speak, my suspicion is that they will likely have unprofessional slides and possibly even be unprofessional when they speak. Further, the fact that they would not take the time to review and correct the submission tells me that they are not serious about participating in the event.

Write the Presentation First

Based on experience, I believe this is not done often—but that is a mistake. True, no one wants to put hours into a presentation only to have it get rejected, but that presentation could still be used elsewhere, so the time is not necessarily wasted. Inevitably, when you prepare a presentation new insights and ways of presenting the information come to light that greatly improve what will be presented and the story that will be told. So to tell the best story in the submission, it is immensely valuable to have already made the presentation slides! In fact, if I sorted every presentation I ever gave into buckets labeled "good" and "not so good," they would correspond almost perfectly to whether I had already made the presentation when I submitted the abstract.

Ask a Friend

Finally, approach someone you trust (and who is knowledgeable in the relevant subject area) to give you an honest opinion. Ask them what they think. Is the topic of interest to the expected attendees? Is it too simple? Too complicated? Will the example(s) resonate? After all, you don't want the earliest feedback you receive on your proposal to be from the person(s) deciding whether to accept the talk.

So that's my advice. It may seem like a big effort simply to submit an abstract, but everything here goes to good use as you prepare to actually give the presentation. It's better to put in more work at the start and get to put that work to good use later, than to put in a little work that goes to waste. Do these things and you'll be in a great position to be accepted and deliver a fantastic presentation!

5 Conditions that Could Put Your Quality Program on the Chopping Block


By some estimates, up to 70 percent of quality initiatives fail. Why do so many improvement programs, which are championed and staffed by smart, dedicated people, ultimately end up on the chopping block?

According to the Juran Institute, which specializes in training, certification, and consulting on quality management, the No. 1 reason quality improvement initiatives fail is a lack of management support.

At first blush, doesn't that seem like a paradox? After all, it's company leaders who start quality improvement efforts in the first place. So what happens between the time a deployment kicks off—with the C-level's enthusiastic support and participation—and the day a disillusioned C-level executive pulls the plug on a program that never seemed to deliver on its potential?

Even projects which result in big improvements often fail to make an impression on decision-makers. Why?  

The answer may be that those C-level leaders never find out about that impact. The 2013 ASQ Global State of Quality study revealed that the higher people rise in an organization's leadership, the less often they receive reports about quality metrics. Only 2% of senior executives get daily quality reports, compared to 33% of front-line staff members.

Think that's bad? A full 25% of the senior executives reported getting quality metrics only on an annual basis.

In light of findings like that, the apparent paradox of leaders losing their initial enthusiasm for quality initiatives begins to make sense. The success of the program often remains invisible to those at the top. 

That's not necessarily for a lack of trying, either. Even in organizations with robust, mature quality programs, understanding the full impact of an initiative on the bottom line can be difficult, and sometimes impossible.

For more than 45 years, Minitab has been helping companies in every industry, in virtually every country around the world, improve quality. Along the way, we've seen and identified five main challenges that can keep even the most successful deployments in the shadows.

1. Project Data Is Scattered and Inaccessible.

Individual project teams usually do a great job capturing and reporting their results. But challenges quickly arise when projects accumulate. A large company may have thousands of simultaneous quality projects active now, and countless more completed. Gathering the critical information from all of those projects, then putting it into a format that leaders can easily access and use, is an extremely daunting task—which means that many organizations simply fail to do it, and the overall impact of their quality program remains a mystery.  

2. Projects Are a Hodgepodge of Applications and Documents.

As they work through their projects, team members need to create project charters, do SIPOCs and FMEAs, evaluate potential solutions, facilitate brainstorming, and much more. In most organizations, teams have to use an assortment of separate applications for documents, process maps, value stream maps, and other essential project tools. That means the project record becomes a compilation of distinct, frequently incompatible files from many different software programs. Team members are forced to waste time entering the identical information into first one program, then another. Adding to the confusion, the latest versions of documents may reside on several different computers, so project leaders often need to track multiple versions of a document to keep the official project record current. 

3. Metrics Vary from Project to Project   

Even projects in the same department often don't treat essential metrics consistently, or don't track the same data in the same way. Multiply that across the hundreds of projects under way at any given time in an organization with many different departments and divisions, and it's not hard to see why compiling a reliable report about the impact of all these projects never happens. Even if the theoretical KPIs are consistent across an organization, when one division tracks them in apples, and the next tracks them in oranges, their results can't be evaluated or aggregated as if they were equivalent. 

4. Teams Struggle with Square-Hole Tracking Systems

Many organizations attempt to monitor and assess the impact of quality initiatives using methods that range from homegrown project databases to full-blown, extremely expensive project portfolio management (PPM) systems. Sometimes these work—at least for a while. But many organizations find maintaining their homegrown systems turns into a major hassle and expense. And as others have discovered, the off-the-shelf solutions that were created to meet the needs of information technology, finance, customer service, or other business functions don’t adequately fit or support projects that are based on quality improvement methods such as Six Sigma or Lean. The result? Systems that slowly wither as resources are directed elsewhere, reporting mechanisms that go unused, and summaries that fail to convey a true assessment of an initiative's impact even if they are used. 

5. Reporting Takes Too Much Time

There are only so many hours in the day, and busy team members and leaders need to prioritize. Especially when operating under some of the conditions described already, team leaders find reporting on projects to be a burden that just never rises to the top of the priority list. It seems like non-value-added activity to copy-and-paste information from project documents, which had to be rounded up from a bunch of different computers and servers, and then place that information into yet another format. And if the boss isn't asking for those numbers—and it appears that many C-level executives don't—most project leaders have many other tasks to which they can devote their limited time. 

How to Overcome the Challenges to Reporting on Quality

It's easy to understand why so many companies, faced with these constraints, don't have a good understanding of how their quality initiatives contribute to the overall financial picture. But recognizing the issues is the first step in fixing them. 

Organizations can establish standards and make sure that all project teams use consistent metrics. Quality professionals and their leaders can take steps to make sure that reporting on results becomes a critical step in every individual project. 

There also are solutions that tackle many of these challenges head-on. For example, Companion by Minitab takes a desktop app that provides a complete set of integrated tools for completing projects, and combines it with centralized, cloud-based storage for projects and a customizable web-based dashboard. Companion's desktop app makes it easier for practitioners to work through and finish projects—and since their project data automatically rolls up to the dashboard, reporting on projects is effortless. Literally.

For the executives, managers, and stakeholders who have never had a clear picture of their quality program, Companion opens the window on the performance, progress, and bottom-line effects of the entire quality initiative, or specific pieces of it. 

Ensuring that the results of your improvement efforts are clearly seen and understood is a challenge that every quality pro is likely to face. How do you ensure your stakeholders appreciate the value of your activities?  

 

Monitoring Rare Events with G Charts


Rare events inherently occur in all kinds of processes. In hospitals, there are medication errors, infections, patient falls, ventilator-associated pneumonias, and other rare, adverse events that cause prolonged hospital stays and increase healthcare costs. 

But rare events happen in many other contexts, too. Software developers may need to track errors in lines of programming code, or a quality practitioner may need to monitor a low-defect process in a high-yield manufacturing environment. Accidents that occur on the shop floor and aircraft engine failures are also rare events, ideally.

Whether you’re in healthcare, software development, manufacturing or some other industry, statistical process control is an important component of quality improvement. Using control charts, we can graph these rare events and monitor a process to determine if it’s stable or if it’s out of control and therefore unpredictable and in need of attention.

The G Chart

There are many different types of control charts available, but in the case of rare events, we can use Minitab Statistical Software and the G chart to assess the stability of our processes. The G chart, based on the geometric distribution, is a control chart designed specifically for monitoring rare events.

G charts are typically used to plot the number of days between rare events. They also can be used to plot the number of opportunities between rare events.

For example, suppose we want to monitor heart surgery complications. We can use a G chart to graph the number of successful surgeries that were performed in between the ones that involved complications.

The G chart is simple to create and use. To produce a G chart, all you need is either the dates on which the rare events occurred or the number of opportunities between occurrences.

Advantages of the G Chart

 In addition to its simplicity, this control chart also offers greater statistical sensitivity for monitoring rare events than its traditional counterparts.

Because rare events occur at very low rates, traditional control charts like the P chart are typically not as effective at detecting changes in the event rates in a timely manner. Because the probability that a given event will occur is so low, considerably larger subgroup sizes are required to create a P chart and abide by the typical rules of thumb. In addition to the arduous task of collecting more data, this creates the unfortunate circumstance of having to wait longer to detect a shift in the process. Fortunately, G charts do not require large quantities of data to effectively detect a shift in a rare events process.

Another advantage of using the G chart to monitor your rare events is that it does not require that you collect and record data on the total number of opportunities, while P charts do.

For example, if you’re monitoring medication errors using a P chart, you must count the total number of medications administered to each and every patient in order to calculate and plot the proportion of medication errors. To create a G chart, however, you just need to record the dates on which the medication errors occurred. Note that the G chart does assume that the opportunities, or medications administered in this example, are reasonably constant.
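
Before walking through the point-and-click steps, here is a minimal sketch in Python of the underlying idea: convert a list of event dates into days between events and fit a geometric model to them. The dates are made up, and the probability-based limits shown are one common construction for rare-event charts, not necessarily Minitab's exact defaults.

```python
import pandas as pd
from scipy import stats

# Dates on which the rare events occurred (made up for illustration).
events = pd.to_datetime([
    "2017-01-04", "2017-01-19", "2017-02-27", "2017-03-03",
    "2017-04-21", "2017-04-30", "2017-05-02", "2017-05-05",
])

# Days between consecutive events: the values a G chart plots.
days_between = events.to_series().diff().dropna().dt.days.to_numpy()
center = days_between.mean()

# Treat the number of event-free days as geometric and estimate the daily
# event probability from the average gap.
p_hat = 1.0 / (center + 1.0)

# Probability-based limits at the 0.135th and 99.865th percentiles of the
# fitted distribution (one common construction; Minitab's defaults may differ).
lcl = stats.nbinom.ppf(0.00135, 1, p_hat)  # nbinom(1, p): geometric on 0, 1, 2, ...
ucl = stats.nbinom.ppf(0.99865, 1, p_hat)

print(f"center = {center:.1f} days, LCL = {lcl:.0f}, UCL = {ucl:.0f}")
```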

Creating a G Chart

Each year, nosocomial (hospital-acquired) infections cause an exorbitant number of additional hospital days nationally, and, unfortunately, a considerable number of deaths. Suppose you work for a hospital and want to monitor these infections so you can promptly detect changes in your process and react appropriately if it goes out of control.

In Minitab, you first need to input the dates when each of the nosocomial infections occurred. Then to create a G chart and plot the elapsed time between infections, select Stat > Control Charts > Rare Event Charts > G.

In the dialog box, you can input either the 'Dates of events' or the 'Number of opportunities' between adverse events. In this case, we have the date when each infection occurred so we can use 'Dates of events' and specify the Infections column.

Interpreting a G Chart

Minitab plots the number of days between infections on the G chart. Points above the upper control limit (UCL) are desirable as they indicate an extended period of time between events. Points near or below the lower control limit (LCL) are undesirable and indicative of a shortened time period between events.

Minitab flags any points that extend beyond the control limits, or fail any other tests for special causes, in red.

The G chart above shows that this hospital went nearly 2 months without an infection. Therefore, you should try to learn from this fortunate circumstance. However, you can also see that the number of days between events has recently started to decrease, meaning the infection rate is increasing, and the process is out of control. You should therefore investigate what is causing the recent series of infections.

Monitoring Rare Events with T Charts

While G charts are used to monitor the days or opportunities between rare events, you can use a T chart if your data are instead continuous. 

For example, if you have recorded both the dates and time of day when rare events occurred, you can assess process stability using Stat > Control Charts > Rare Event Charts > T.

As more and more organizations embrace and realize the benefits of quality improvement, they will encounter the good problem of increased cases of rare events. As these events present themselves with greater frequency, practitioners across industries can rely on Minitab and the G and T charts to effectively monitor their processes and detect instability when it occurs.
