
The Null Hypothesis: Always “Busy Doing Nothing”


The 1949 film A Connecticut Yankee in King Arthur's Court includes the song “Busy Doing Nothing,” and it could just as well have been written about the Null Hypothesis as it is used in statistical analyses.

The words to the song go:

We're busy doin' nothin'
Workin' the whole day through
Tryin' to find lots of things not to do

And that summarises the role of the Null Hypothesis perfectly. Let me explain why.

What's the Question?

Before doing any statistical analysis—in fact even before we collect any data—we need to define what problem and/or question we need to answer. Once we have this, we can then work on defining our Null and Alternative Hypotheses.

The null hypothesis is always the option that maintains the status quo and results in the least amount of disruption, hence it is “Busy Doin’ Nothin'”. 

When the probability of observing data like ours if the Null Hypothesis were true (the p-value) is very low, we reject the Null Hypothesis. Then we have to take some action, and we will no longer be “Doin’ Nothin'”.

Let’s have a look at how this works in practice with some common examples.

Question: Do the chocolate bars I am selling weigh 100g?
Null Hypothesis: Chocolate weight = 100g.
If I am giving my customers the right size chocolate bars, I don’t need to make changes to my chocolate packing process.

Question: Are the diameters of my bolts normally distributed?
Null Hypothesis: Bolt diameters are normally distributed.
If my bolt diameters are normally distributed, I can use statistical techniques that assume normality.

Question: Does the weather affect how my strawberries grow?
Null Hypotheses: The number of hours of sunshine has no effect on strawberry yield; the amount of rain has no effect on strawberry yield; temperature has no effect on strawberry yield.

Note that the last entry in the table, investigating whether weather affects the growth of my strawberries, is a bit more complicated. That's because I needed to define some metrics to measure the weather. Once I decided that the weather was a combination of sunshine, rain and temperature, I established my null hypotheses, which all assume that none of these factors affects the strawberry yield. I only need to start controlling sunshine, temperature or rain if the probability of seeing my results when they truly have no effect is very small.
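
To make this concrete, here is a minimal sketch in Python (using scipy and hypothetical weights) of how the chocolate-bar null hypothesis could be tested with a one-sample t-test. Until the p-value is small, the null hypothesis keeps us busy doing nothing.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of chocolate bar weights, in grams
weights = np.array([99.2, 100.4, 99.8, 100.1, 99.5,
                    100.6, 99.9, 100.2, 99.7, 100.0])

# Null hypothesis: mean weight = 100g
t_stat, p_value = stats.ttest_1samp(weights, popmean=100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A small p-value (say, below 0.05) means we reject the null and adjust
# the packing process; otherwise the status quo stands.
```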

Is Your Null Hypothesis Suitably Inactive?

So, in conclusion, in order to be “Busy Doin’ Nothin’”, your Null Hypothesis has to be the option that maintains the status quo: the assumption that requires no action from you unless the evidence against it is strong.


Fundamentals of Gage R&R


Before cutting an expensive piece of granite for a countertop, a good stonemason will first confirm that the measurements are correct. Acting on faulty measurements could be costly.

While no measurement system is perfect, we rely on such systems to quantify data that help us control quality and monitor changes in critical processes. So, how do you know whether the changes you see are valid and not just the product of a faulty measurement system? After all, if you can’t trust your measurement system, then you can’t trust the data it produces.

Performing a Gage R&R study can help you to identify problems with your measurement system, enabling you to trust your data and to make data-driven decisions for process improvement. 

What Can Gage R&R Do for Me?

Gage R&R studies can tell you if inconsistencies in your measurements are too large to ignore—this could be due to a faulty tool or inconsistent operation of a tool.

Reveal an inconsistent tool

Let’s look at an example to better understand how Gage R&R studies work.

Suppose a company wants to use a control chart to monitor the fill weights of cereal boxes. Before doing so, they conduct a Gage R&R study to determine if the system which measures the weight of each cereal box is producing precise measurements.

One key check on your measurement system is repeatability, or the variation in measurements taken by the same operator on the same part. If we weigh the same cereal box under the same conditions a number of times, will we observe the same weight every time? Weighing the same box over and over again shows us how much variation exists in our measurement system.


For this experiment, we can look at repeatability based on two different operators’ measurements. The Gage R&R results show that even when the same person weighs the same box on the same scale, the measurements can vary by several grams. Although the average measurements for each operator are not far apart, the spread of the measurements is huge! Most likely, the scale is in serious need of recalibration, and a faulty scale like this would render a control chart of these measurements virtually useless.

Highlight operator differences

But the variation that exists in the measurement system is just one aspect of a Gage R&R study. We must also look at reproducibility, or the variation due to different operators using the measurement system. A Gage R&R study can tell us whether a measurement differs from one operator to the next and by how much.

Suppose the same company that wishes to monitor fill weights of cereal boxes hires new employees to help record measurements. The company uses a Gage R&R study to evaluate both the new and the experienced operators.


The study reveals that when employees weigh the same cereal box, the measurements of new hires are too high or too low more often than the measurements of experienced employees. This finding might indicate that the company should conduct more training for the new hires.

How to Analyze a Gage R&R Study in Minitab

Awareness of how well you can measure something can have substantial financial impacts. Minitab Statistical Software makes it easy to analyze how precise your measurements are.

In the case of the company evaluating cereal box fill weights, the problems of over- and under-filling have different implications. Overfilling cereal boxes costs the company money it could be saving with a calibrated measurement system and properly trained staff. Conversely, underfilling boxes makes customers angry because they don’t get the amount of product they paid for.

Getting started

Preparing to analyze your measurement system is easy because Minitab’s Create Gage R&R Study Worksheet can generate a data collection sheet for you. The dialog box lets you quickly specify who takes the measurements (the operators), which item they measure (the parts), and in what order the data are to be collected.

  1. Choose Stat > Quality Tools > Gage Study > Create Gage R&R Study Worksheet.
  2. Specify the number of parts, number of operators, and the number of times the same operator will measure the same part.
  3. Give descriptive names to the parts and operators so they’re easy to identify in the output.
  4. Click OK.
The main event

After you create your data collection sheet and record the measurements you observe, you can use Gage R&R Study (Crossed) to analyze the measurements.

  1. Choose Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed).
  2. In Part Numbers, enter Parts.
  3. In Operators, enter Operators.
  4. In Measurement Data, enter 'Fill Weights'.
  5. Click OK.

Gage R&R Output

The study reveals that Jordan’s measurements are lower than Pat’s or Taylor’s. In fact, the %Study Variation for our total Gage R&R is high—90.39%—indicating that our measurement system is unacceptable. Identifying and eliminating the source of the difference will improve the measurement system.
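
If you're curious about the arithmetic behind a number like %Study Variation, here is a minimal sketch in Python of the classic ANOVA method for a crossed Gage R&R study. The file and column names are hypothetical, and Minitab's implementation includes refinements this sketch omits.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical balanced crossed study: every operator measures every part
# the same number of times. Columns: Part, Operator, Weight.
df = pd.read_csv("fill_weights.csv")
n_parts = df["Part"].nunique()
n_ops = df["Operator"].nunique()
n_reps = len(df) // (n_parts * n_ops)

# Two-way ANOVA with the Part*Operator interaction
aov = anova_lm(ols("Weight ~ C(Part) * C(Operator)", data=df).fit(), typ=2)
ms = aov["sum_sq"] / aov["df"]  # mean squares

# Standard variance-component estimates (negatives truncated to zero)
repeatability = ms["Residual"]
interaction = max((ms["C(Part):C(Operator)"] - ms["Residual"]) / n_reps, 0)
operator = max((ms["C(Operator)"] - ms["C(Part):C(Operator)"]) / (n_parts * n_reps), 0)
part_to_part = max((ms["C(Part)"] - ms["C(Part):C(Operator)"]) / (n_ops * n_reps), 0)

gage_rr = repeatability + operator + interaction   # repeatability + reproducibility
total = gage_rr + part_to_part

# %Study Variation compares the Gage R&R standard deviation to the total
print(f"%Study Variation: {100 * (gage_rr / total) ** 0.5:.2f}%")
```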

Some of my colleagues offer more information on Gage R&R tools and how to interpret the output.

Putting Gage R&R Studies to Use

Taking measurements is like any other process—it’s prone to variability. Assessing and identifying where to focus efforts for reducing this variation with Minitab’s Gage R&R tools can help you ensure your measurement system is precise. 

Leaving Out-of-control Points Out of Control Chart Calculations Looks Hard, but It Isn't


Control charts are excellent tools for looking at data points that seem unusual and for deciding whether they're worthy of investigation. If you use control charts frequently, then you're used to the idea that if certain subgroups reflect temporary abnormalities, you can leave them out when you calculate your center line and control limits. If you include points that you already know are different because of an assignable cause, you reduce the sensitivity of your control chart to other, unknown causes that you would want to investigate. Fortunately, Minitab Statistical Software makes it fast and easy to leave points out when you calculate your center line and control limits. And because Minitab’s so powerful, you have the flexibility to decide if and how the omitted points appear on your chart.

Here’s an example with some environmental data taken from the Meyer Park ozone detector in Houston, Texas. The data are the readings at midnight from January 1, 2014 to November 9, 2014. (My knowledge of ozone is too limited to properly chart these data, but they’re going to make a nice illustration. Please forgive my scientific deficiencies.) If you plot these on an individuals chart with all of the data, you get this:

The I-chart shows seven out-of-control points between May 3rd and May 17th.

Beginning on May 3, a two-week period contains 7 out of 14 days where the ozone measurements are higher than you would expect based on the amount that they normally vary. If we know the reason that these days have higher measurements, then we could exclude them from the calculation of the center line and control limits. Here are the three options for what to do with the points:

Three ways to show or hide omitted points

Like it never happened

One way to handle points that you don't want to use to calculate the center line and control limits is to act like they never happened. The points neither appear on the chart, nor are there gaps that show where omitted points were. The fastest way to do this is by brushing:

  1. On the Graph Editing toolbar, click the paintbrush.

The paintbrush is between the arrow and the crosshairs.

  2. Click and drag a square that surrounds the 7 out-of-control points.
  3. Press CTRL + E to recall the Individuals chart dialog box.
  4. Click Data Options.
  5. Select Specify which rows to exclude.
  6. Select Brushed Rows.
  7. Click OK twice.

On the resulting chart, the upper control limit changes from 41.94 parts per billion to 40.79 parts per billion. The new limits indicate that April 11 was also a measurement that's larger than expected based on the variation typical of the rest of the data. These two facts will be true on the control chart no matter how you treat the omitted points. What's special about this chart is that there's no suggestion that any other data exists. The focus of the chart is on the new out-of-control point:

The line between the data is unbroken, even though other data exists.
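
For the curious, here's a minimal sketch in Python of the standard individuals-chart formulas, showing how dropping rows changes the center line and limits. The file name and excluded row indices are hypothetical, and Minitab's handling of moving ranges across omitted points may differ in detail.

```python
import numpy as np

ozone = np.loadtxt("ozone_midnight.txt")        # daily readings, ppb
exclude = {122, 124, 126, 128, 130, 132, 134}   # rows with an assignable cause

mask = np.ones(len(ozone), dtype=bool)
mask[list(exclude)] = False
data = ozone[mask]

center = data.mean()
mr_bar = np.abs(np.diff(data)).mean()   # average moving range of size 2
ucl = center + 2.66 * mr_bar            # 2.66 = 3 / d2, with d2 = 1.128
lcl = center - 2.66 * mr_bar

print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# All points, including the omitted ones, can still be flagged against
# the new limits when you choose to plot them.
out_of_control = (ozone > ucl) | (ozone < lcl)
```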

Guilty by omission

A display that only shows the data used to calculate the center line and control limits might be exactly what you want, but you might also want to acknowledge that you didn't use all of the data in the data set. In this case, after step 7, you would check the box labeled Leave gaps for excluded points. The resulting gaps look like this:

Gaps in the control limits and data connect lines show where points were omitted.

In this case, the spaces are most obvious in the control limit line, but the gaps also exist in the lines that connect the data points. The chart shows that some data was left out.

Hide nothing

In many cases, not showing data that wasn't in the calculations for the center line and control limits is effective. However, we might want to show all of the points that were out-of-control in the original data. In this case, we would still brush the points, but not use the Data Options. Starting from the chart that calculated the center line and control limits from all of the data, these would be the steps:

  1. On the Graph Editing toolbar, click the paintbrush.

The paintbrush is between the arrow and the crosshairs.

  2. Click and drag a square that surrounds the 7 out-of-control points.
  3. Press CTRL + E to recall the Individuals chart dialog box. Arrange the dialog box so that you can see the list of brushed points.
  4. Click I Chart Options.
  5. Select the Estimate tab.
  6. Under Omit the following subgroups when estimating parameters, enter the row numbers from the list of brushed points.
  7. Click OK twice.

This chart still shows the new center line, control limits, and out-of-control point, but also includes the points that were omitted from the calculations.

Points not in the calculations are still on the chart.

Wrap up

Control charts help you to identify when some of your data are different than the rest so that you can examine the cause more closely. Developing control limits that exclude data points with an assignable cause is easy in Minitab and you also have the flexibility to decide how to display these points to convey the most important information. The only thing better than getting the best information from your data? Getting the best information from your data faster!

The image of the Houston skyline is from Wikimedia Commons and is licensed under this Creative Commons license.

Companion by Minitab: Desktop App and Web App Terminology (Part 1)


By now you have probably heard about Companion by Minitab®, our software for executing and reporting on quality improvement projects.

We've had questions about some of the terminology used in the product, which has two main components: the desktop application (desktop app for short) and the web application (web app for short, sometimes also referred to as the full version or subscription). If you've wondered about this terminology, I hope this post will answer your questions.

In a nutshell, Companion is a software platform for managing your continuous improvement program. There are two parts to the software: the desktop and web apps. Project owners and practitioners use the Companion desktop app to execute projects. As they progress, their project information automatically rolls up to Companion’s web app dashboard, where executives and stakeholders can see graphical summaries and reports for a high-level view of the organization’s initiatives. 

Best of all, since the Companion dashboard updates automatically, team members have more time to complete critical tasks instead of creating reports or updating information in a separate tracking database. Companion’s desktop app and dashboard work together to help you not only boost the bottom line but also demonstrate your success to the proper people who need to know.

Companion Big Picture

The Companion Desktop App

Companion's desktop app provides the tools and forms that project teams and practitioners need to complete projects efficiently and consistently. This matters because consistent methodologies, forms, and metrics let teams devote more of their time to value-added project work.

http://support.minitab.com/en-us/companion/toolkit_annotated.png

Terminology associated with the desktop app includes:

A: Insert tab: The menu where you add phases, folders, documents, forms, and tools to your Roadmap.

B: Management section: The set of forms in a project template that contains important project data. The management section ensures consistent project definition and tracking. In the desktop app, anyone can edit the management section; in the web app, only data architects can add, delete, and reorder its forms.

C: Roadmap™: The area where you open phases, folders, documents, forms, and tools to help you organize and execute your project.

D: Workspace: The area where you view and enter data in forms and work with tools.

E: Task pane (maps and brainstorming tools only): In a process map or a value stream map, the area where you can enter shape data. In a brainstorming tool, the area where you can brainstorm a list or import X and Y variables.

The Web App

The Companion web application works in concert with the desktop app to maximize the benefits of your improvement initiative and provide unparalleled insight into its impact on KPIs and the bottom line. 

The web app is the heart of Companion and fulfills two roles: a configurable dashboard that displays key metrics, and centralized storage for all Companion projects and templates. The web app is a cloud-optimized platform hosted by Microsoft Azure, so you are assured of the highest security offered by Microsoft. The Microsoft Azure data centers guarantee 99.95% uptime and meet a wide range of internationally recognized security and compliance standards.

http://support.minitab.com/en-us/companion/dashboard_report_annotated.png

The terminology associated with the web app includes:

A: Report: A collection of filters, summaries, and column sets.

B: Filters: Allow you to focus on a subset of projects, based on a condition, such as region, location, or project status.

C: Summaries: Display aggregate project data, such as the number of projects in each division, the average duration of projects, or the total project savings by quarter. Also display optional targets.

D: Column sets: Determine the fields that are displayed for each project in the projects list.

E: Projects list: Displays a list of all projects meeting the current filter's criteria.

F: Help button: Gives you access to topics, videos, the Quick Tour, and the download link for the desktop app.

G: Actions menu: Gives you access to common tasks, such as editing, copying, and creating new reports, saving a report as a PDF, and setting default reports.

My next posts will dig deeper into the detail of both Companion's desktop app and web app. 

Everyone at Minitab is excited about the new Companion by Minitab®, and we hope you are too. Companion gives your team everything it needs to streamline and standardize your process improvement program.

 For further information about Companion or to download the 30-day free trial, go to the Minitab website at http://www.minitab.com/en-us/products/companion/.

Doing Gage R&R at the Microscopic Level


by Dan Wolfe, guest blogger

How would you measure a hole that was allowed to vary one tenth the size of a human hair? What if the warmth from holding the part in your hand could take the measurement from good to bad? These are the types of problems that must be dealt with when measuring at the micron level.

a 10-micron fiber

As a Six Sigma professional, that was the challenge I was given when Tenneco entered into high-precision manufacturing. In Six Sigma projects “gage studies” and “Measurement System Analysis (MSA)” are used to make sure measurements are reliable and repeatable. It’s tough to imagine doing that type of analysis without statistical software like Minitab.

Tenneco, the company I work for, creates and supplies clean air and ride performance products and systems for cars and commercial vehicles. Tenneco has revenues of $7.4 billion annually, and we expect to grow as stricter vehicle emission regulations take effect in most markets worldwide over the next five years.

We have an active and established Six Sigma community as part of the “Tenneco Global Process Excellence” program, and Minitab is an integral part of training and project work at Tenneco.

Verifying Measurement Systems

Verifying the measurement systems we use in precision manufacturing and assembly is just one instance of how we use Minitab to make data-driven decisions and drive continuous improvement.

Even the smallest of features need to meet specifications. Tolerance ranges on the order of 10 to 20 microns require special processes not only for manufacturing, but also measurement. You can imagine how quickly the level of complexity grows when you consider the fact that we work with multiple suppliers from multiple countries for multiple components.

To gain agreement between suppliers and Tenneco plants on the measurement value of a part, we developed a process to work through the verification of high-precision, high-accuracy measurement systems such as coordinate measuring machines (CMMs) and vision systems.

The following SIPOC (Supplier, Input, Process, Output, Customer) process map shows the basic flow of the gage correlation process for new technology.

sipoc

What If a Gage Study Fails?

If any of the gage studies fail to be approved, we launch a problem-solving process. For example, in many cases, the Type 1 results do not agree at the two locations. But given these very small tolerance ranges, seemingly small differences can have significant practical impact on the measurement value. One difference was resolved when the ambient temperature in a CMM lab was found to be out of the expected range. Another occurred when the lens types of two vision systems were not the same.

Below is an example of a series of Type 1 gage studies performed to diagnose a repeatability issue on a vision system. It shows the effect of part replacement (taking the part out of the measurement device, then setting it up again) before each measurement and the bias created by handling the part.

For this study, we took the results of 25 measurements made while simply letting the part sit in the machine and compared them with 25 measurements made when the part was taken out and set up again before each reading. The analysis shows that picking the part up, handling it, and resetting it in the machine changes the measurement value. This was found to be statistically significant, but not practically significant. Knowing the results of this study helps our process and design engineers understand how to interpret the values given to them by the measurement labs, and gives some perspective on the part and measurement processes.

The two graphs below show Type 1 studies done with versus without replacement of the part. There is a bias between the two studies. A test for equal variance shows a difference in variance between the two methods.

Type 1 Gage Study with Replacement

Type 1 Gage Study without Replacement

As the scatterplot below illustrates, the study done WITH REPLACEMENT has higher standard deviation. It is statistically significant, but still practically acceptable.

With Replacement vs. Without Replacement
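
Outside of Minitab, a comparison like this can be sketched with a two-sample t-test for the bias and Levene's test for the difference in variance. Here is a minimal version in Python with hypothetical data files; it illustrates the logic, not Tenneco's actual analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical Type 1 study results: 25 measurements each, in microns
in_place = np.loadtxt("without_replacement.txt")  # part left in the machine
replaced = np.loadtxt("with_replacement.txt")     # part re-set before each reading

# Bias between the two handling methods (Welch's t-test)
t_stat, p_bias = stats.ttest_ind(in_place, replaced, equal_var=False)

# Difference in variance between the two methods
w_stat, p_var = stats.levene(in_place, replaced)

print(f"bias test:           p = {p_bias:.4f}")
print(f"equal-variance test: p = {p_var:.4f}")

# A small p-value flags a statistically significant difference; whether it
# matters practically still depends on the tolerance range.
```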

Minitab’s gage study features are a critical part of the gage correlation process we have developed. Minitab has been integrated into Tenneco’s Six Sigma program since it began in 2000.

The powerful analysis and convenient graphing tools are being used daily by our Six Sigma resources for these types of gage studies, problem-solving efforts, quality projects, and many other uses at Tenneco.

 

About the Guest Blogger:

Dan Wolfe is a Certified Lean Six Sigma Master Black Belt at Tenneco. He has led projects in Engineering, Supply Chain, Manufacturing and Business Processes. In 2006 he was awarded the Tenneco CEO award for Six Sigma. As a Master Black Belt he has led training waves, projects and the development of business process design tools since 2007. Dan holds a BSME from The Ohio State University, an MSME from Oakland University, and a degree in automotive engineering from the Chrysler Institute of Engineering.

 

Would you like to publish a guest post on the Minitab Blog? Contact publicrelations@minitab.com.

See the New Features and Enhancements in Minitab 18 Statistical Software


It's a very exciting time at Minitab's offices around the world, because we've just announced the availability of Minitab® 18 Statistical Software.

Data is everywhere today, but to use it to make sound, strategic business decisions, you need to have tools that turn that data into knowledge and insights. We've designed Minitab 18 to do exactly that.

We've incorporated a lot of new features, made some great enhancements and put a lot of energy into developing a tool that will make getting insight from your data faster and easier than ever before, and we're excited to get feedback from you about the new release. 

The advanced capabilities we've added to Minitab 18 include tools for measurement systems analysis, statistical modeling, and Design of Experiments (DOE). With Minitab 18, it’s much easier to test how a large number of factors influence process output, and to get more accurate results from models with both fixed and random factors.

We'll delve into more detail about these features in the coming weeks, but today I wanted to give you a quick overview of some of the most exciting additions and improvements. You can also check out one of our upcoming webinars to see the new features demonstrated. Then I hope you'll check them out for yourself—you can get Minitab 18 free for 30 days.

Updated Session Window

The first thing longtime Minitab users are likely to notice when they launch Minitab 18 is the enhancements we've made to the Session window, which contains the output of all your analyses. 

The Session window looks better, and also now includes the ability to:
  • Specify the number of significant digits (decimal places) in your output
  • Go directly to graphs by clicking links in the output
  • Expand and collapse analyses for easier navigation
  • Zoom in and out
Sort Worksheets in the Project Manager

We've also added the option to sort the worksheets in your project by title or in chronological order, so you can manage and work with your data in the Project Manager more easily.

Definitive Screening Designs

Many businesses need to determine which inputs make the biggest impact on the output of a process. When you have a lot of inputs, as most processes do, this can be a huge challenge. Standard experimental methods can be costly and time-consuming, and may not be able to distinguish main effects from the two-way interactions that occur between inputs.

That challenge is answered in Minitab 18 with Definitive Screening Designs, a type of designed experiment that minimizes the number of experimental runs required, but still lets you identify important inputs without confounding main effects and two-way interactions.

Restricted Maximum Likelihood (REML) Estimation

Another feature we've added to Minitab 18 is restricted maximum likelihood (REML) estimation. This is an advanced statistical method that improves inferences and predictions while minimizing bias for mixed models, which include both fixed and random factors.
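
As a point of reference, mixed models with both fixed and random factors can be fit by REML in other tools as well. Here is a minimal sketch using Python's statsmodels, with hypothetical column names, for a model with one fixed factor and one random grouping factor.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: Yield (response), Temp (fixed factor), Batch (random factor)
df = pd.read_csv("process.csv")

# MixedLM estimates the random-effect variance by REML
model = smf.mixedlm("Yield ~ Temp", data=df, groups=df["Batch"])
result = model.fit(reml=True)
print(result.summary())
```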

New Distributions for Tolerance Intervals

With Minitab 18 we've made it easy to calculate statistical tolerance intervals for nonnormal data, with distributions including the Weibull, lognormal, exponential, and more.

Effects Plots for Designed Experiments (DOE)

In another enhancement to our Design of Experiments (DOE) functionality, we've added effects plots for general factorial and response surface designs, so you can visually identify significant X’s.

Historical Standard Deviation in Gage R&R

If you're doing the measurement system analysis method known as Gage R&R, Minitab 18 enables you to enter a user-specified process (historical) standard deviation in relevant calculations.

Response Optimizer for GLM

When you use the response optimizer for the general linear model (GLM), you can include both your factors and covariates to find optimal process settings.

Output in Table Format to Word and Excel

The Session window output can be exported to Word and Excel in table format, which lets you easily customize the appearance of your results.

Command Line Pane

Many people use Minitab's command line to expand the software's functionality. With Minitab 18, we've made it easy to keep commands separate from the Session output with a docked command line pane. 

Updated Version of Quality Trainer

Finally, it's worth mentioning that the release of Minitab 18 is complemented by a new version of Quality Trainer by Minitab®, our e-learning course. It teaches you how to solve real-world quality improvement challenges with statistics and Minitab, and lets you refresh that knowledge anytime. If you haven't tried it yet, you can check out a sample chapter now.

We hope you'll try the latest Minitab release!  And when you do, please be sure to let us know what you think: we love to get your feedback and input about what we've done right, and what we can make better! Send your comments to feedback@minitab.com.  

Companion by Minitab: Deep Dive into the Desktop App (Part 2)


Companion by Minitab® is our software for executing and reporting on quality improvement projects. It has two components, a desktop app and a web app. As practitioners use the Companion desktop app to do project work, their project information automatically rolls up to Companion’s web app dashboard, where stakeholders can see graphical summaries and reports. Since the dashboard updates automatically, teams are freed to complete critical tasks instead of creating reports or entering data in a separate system.

In this post, I will explore the desktop app; a future post will explore the web app.

Companion Big Picture

The Companion Desktop Application

Companion's desktop application provides tools and forms that are used by the project owners and practitioners to execute projects efficiently and consistently. Using consistent methodologies, forms, and metrics allows teams working on projects to devote more of their time to critical, value-added project tasks. 

The desktop app delivers a comprehensive set of integrated project tools, in an easy-to-use interface.

  • The Project Manager is a window that provides access to high-level project data. It also includes the Roadmap™, which shows the phases and specific tools used to organize and complete projects.
  • The workspace is where team members work with individual tools. The workspace always displays the currently active tool.

Desktop UI

The Project Manager 

The Project Manager offers instant access to project data and tools. The Management Section includes the following components:

Management Forms

Project Today:
Provides a snapshot of overall project status, health, and phases.

Project Charter:
Defines the project and its benefits, and is updated as the project progresses.

Financial Data:
Records the project’s financial impact in terms of annualized or monthly hard and soft savings.

Team Members and Roles:
Compiles contact and role information for each member of the project team. Easily imports contacts from Microsoft Outlook and from your Companion subscription user list.

Tasks:
Outlines the actions required to complete the project. Enables team leaders to identify and assign responsibilities, set priorities, and establish due dates.

Roadmap™

Companion’s Roadmap™ feature gives teams a clear path to execute and document each phase of their projects. The Companion desktop app includes predefined Roadmap™ templates based on common continuous improvement methodologies, including DMAIC, Kaizen, QFD, CDOV, PDCA, and Just Do It.

The Roadmap contains phases, and the phases contain the tools appropriate to each one. However, because every project is different, users can easily add or remove tools as needed. Built-in guidance for each tool further helps practitioners complete their tasks in a timely manner.

Since many organizations use their own methods, metrics, and KPIs, we’ve made it simple to create or customize a Roadmap™ for your organization’s unique approach to improvement. 

Powerful Project Tools, All in One Place

Companion’s desktop app includes a full set of easy-to-use tools, such as:

Insert Tool

• Value stream map

• FMEA

• Process map

• Brainstorming

• Monte Carlo simulation

• And many more

As teams add specific tools to their project file, they appear within the selected phases of a Roadmap™. You can even customize or build tools from scratch (Blank Form) for processes or methods unique to your organization.

Data sharing in forms and tools

The tools within the Companion desktop app are smart and integrated. Information you add in one tool can be used in other tools, so you only need to type it once—no more redundant entry of the same information into multiple documents and applications!

For example, as you complete a C&E Matrix, you can import the variables you previously added to a process map. And as you rate the importance of the inputs relative to the outputs in the matrix, Companion calculates the results to build a Pareto chart on the fly. You can easily create forms that include your own custom charts and calculations, too.

CE Matrix

Monte Carlo Simulation Tool

Companion by Minitab® contains a very powerful Monte Carlo simulation tool. With its easy-to-use interface and guided workflow, the tool helps engineers and process improvement practitioners quickly simulate product results and guides them, step by step, toward input settings that produce acceptable outputs.

The results are easy to understand and next steps are identified. The tool includes Parameter Optimization to find the optimal settings for your input parameters to improve results and reduce defects. It also includes Sensitivity Analysis to quickly identify and quantify the factors driving variation. By using these to pinpoint exactly where to reduce variation, you can quickly get your process where it needs to be.

Monte Carlo Simulation

Companion by Minitab's desktop application is an excellent tool that can propel your projects to success. It gives you the tools for executing projects all in one place, Roadmaps to guide your teams through the appropriate problem-solving process, interconnected forms to eliminate redundant data entry—and because it automatically updates the Companion dashboard, it even makes project reporting completely effortless. Literally.

I believe Companion is the best tool on the market for efficient project execution and summarizing the project work. Why wouldn’t you want to give your people the best tools to make difficult problem solving and reporting easier?

Visit our site for more information about Companion by Minitab® or to download your 30-day free trial for your entire team.

 

A Swiss Army Knife for Analyzing Data


Easy access to the right tools makes any task easier. That simple idea has made the Swiss Army knife essential for adventurers: just one item in your pocket gives you instant access to dozens of tools when you need them.  

If your current adventures include analyzing data, the multifaceted Editor menu in Minitab Statistical Software is just as essential.

Minitab’s Dynamic Editor Menu

Whether you’re organizing a data set, sifting through Session window output, or perfecting a graph, the Editor menu adapts so that you never have to search for the perfect tool.

The Editor menu only contains tools that apply to the task you're engaged in. When you’re working with a data set, the menu contains only items for use in the worksheet. When a graph is active, the menu contains only graph-related tools. You get the idea.

Graphing

When a graph window is active, the Editor menu contains over a dozen graph tools. Here are a few of them.

editor menu for graphs

ADD

Use Editor > Add to add reference lines, labels, subtitles, and much more to your graphs. The contents of the Add submenu will change depending on the type of graph you're editing.

MAKE SIMILAR GRAPH

The editing features in Minitab graphs make it easy to create a graph that looks just right. But it may not be easy to reproduce that look a few hours (or a few months) later.

With most graphs, you can use Editor > Make Similar Graph to produce another graph with the same edits, but with new variables.

make similar graph dialog

 

Entering data and organizing your worksheet

When a worksheet is active, the Editor menu contains tools to manipulate both the layout and contents of your worksheet. You can add column descriptions; insert cells, columns or rows; and much more, including the items below.

VALUE ORDER

By default, Minitab displays text data alphabetically in output. But sometimes a different order is more appropriate (for example, “Before” then “After”, instead of alphabetical order). Use Editor > Column > Value Order to ensure that your graphs and other output appear the way that you intend.

ASSIGN FORMULA TO COLUMN

editor menu assign formula

You can assign a formula to a worksheet column that updates when you add or change data.

Session window

As the repository for output, the Session window is already an important component of any Minitab project, but the Editor menu makes it even more powerful. 

SHOW COMMAND LINE

For example, most users rely on menus to run analyses, but you can extend the functionality of Minitab and save time on routine tasks with Minitab macros. If you select the "Show Command Line" option, you'll see the command language generated  with each analysis, which opens the door to macro writing.

editor-menu-show-command-line

In previous versions of Minitab, the Command Line appeared in the Session window. In Minitab 18, the Command Line appears in a separate pane, which keeps the Session window output clean and displays all of the commands together. The new Command Line pane is highlighted in the screen shot below:

graph with command pane

 

NEXT COMMAND / PREVIOUS COMMAND / EXPAND ALL / COLLAPSE ALL

After you run several analyses, you may have a great deal of output in your Session window. This group of items makes it easy to find the results that you want, regardless of project size.

Next Command and Previous Command will take you back or forward one step from the currently selected location in your output.

editor menu - next command, expand or collapse all

Expand All and Collapse All capitalize on a new feature in Minitab 18's redesigned Session window. Now you can select individual components of your output and choose whether to display all of the output (Expanded), or only the output title (Collapsed). Here's an example of an expanded output item:

expanded session window item

And here's how the same output item appears when collapsed:

collapsed session item

When you have a lot of output items in the session window, the "Collapse All" function can make it extremely fast to scroll through them and find exactly the piece of your analysis you need at any given moment. 

Graph brushing

Graph exploration sometimes calls for graph brushing, which is a powerful way to learn more about the points on a graph that interest you. Here are two of the specialized tools in the Editor menu when you are in “brushing mode”.

SET ID VARIABLES

It’s easy to spot an outlier on a graph, but do you know why it’s an outlier? Setting ID variables allows you to see all of the information that your dataset contains for an individual observation, so that you can uncover the factors that are associated with its abnormality.

CREATE INDICATOR VARIABLE

As you brush points on a graph, an indicator variable “tags” the observations in the worksheet. This enables you to identify these points of interest when you return to the worksheet.

Putting the Dynamic Menu Editor to Use

Working on a Minitab project can feel like many jobs rolled into one—data wrestler, graph creator, statistical output producer. Each task has its own challenges, but in every case you can reach for the Editor menu to locate the right tools.

 


Companion by Minitab: Deep Dive into the Web App (Part 3)


Companion by Minitab® is our software for executing and reporting on quality improvement projects. It consists of a desktop app, which practitioners use to do project work, and a web app, which includes a customizable dashboard that offers stakeholders up-to-the-minute graphical summaries and reports. Since the desktop app automatically updates the dashboard as teams do their work, teams are freed to complete critical tasks instead of creating reports or entering data in a separate system.

In this post, I will explore the web app, following up on earlier posts that provided an overview of the Companion platform and a deeper look at the desktop app.

Companion Big Picture

Companion by Minitab's Web Application

Companion helps teams complete their projects faster and more consistently, while giving you and your stakeholders insight to make critical business decisions.

The focal point of Companion’s web app is a dashboard that gives you visibility into your entire program. The dashboard makes it easy to assess the progress of all projects, or just a subset. You can monitor the KPIs you need to make important business decisions or search for, open, and explore projects to see detailed activities at the individual project level. Dynamic reports can give everyone in your company access to the information you want to share, but you also can restrict access to sensitive projects and data to the appropriate people. 

The Companion web app works in concert with the desktop app to maximize the benefits of your improvement initiative and provide unparalleled insight into its impact on KPIs and the bottom line.

The web app consists of three components: the project repository, the dashboard, and the design center.

The Project Repository

Companion’s project repository is a secure, centralized storage system that houses all of your organization’s individual improvement projects and can be accessed from anywhere. The repository makes it easy for project owners and administrators to grant or revoke access rights to projects, and to include and exclude projects from dashboard reports.

Project List

Companion's Project Repository

In addition, you can use filters to easily display all projects, projects you own, or projects that have been shared with you, making it easy for you to find projects you are a part of.

Project Filter

Project Filters

The Dashboard

Companion’s dashboard draws on the data from projects stored in the repository to provide a dynamic graphical summary of your program. It can show you financial summaries, status reports, project impacts, progress toward set targets, and more. View your entire initiative, or select and focus on specific projects, teams, or divisions. You can access the dashboard wherever and whenever you need to, from any Internet-connected computer, tablet or device.

The components and features of the dashboard are shown and detailed below:

http://support.minitab.com/en-us/companion/dashboard_report_annotated.png

A: Report: A collection of filters, summaries, and column sets.

B: Filters: Allow you to focus on a subset of projects, based on a condition, such as region, location, or project status.

C: Summaries: Display aggregate project data, such as the number of projects in each division, the average duration of projects, or the total project savings by quarter. Also display optional targets.

D: Column Set: Determines the fields that are displayed for each project in the project list.

E: Project List: Displays a list of all projects meeting the current filter’s criteria.

F: Help: Gives you access to topics, videos, the Quick Tour, and the download link for the desktop app.

G: Action Menu: Gives you access to common tasks, such as editing, copying, and creating new reports, saving a report as a PDF, and setting default reports.


Tailor-made Reports

You can create an unlimited number of dashboard reports on different aspects of your initiative. Create reports that include only projects from specific facilities, as well as reports that summarize information from across the organization. Any report can deliver as much or as little detail as needed. 

Reports can be public and visible to everyone in your subscription, or they can be private and visible only to you. Icons to the right of the dashboard title indicate if the report is private or public, as shown.

Report Icons

The Design Center

When you deploy Companion, your data architects customize your subscription to reflect your improvement methodology. But as organizations and processes evolve, so will your needs. Companion’s design center makes it easy to edit and create templates, forms, tools, and data fields.

Dashboard Examples

 

The design center automatically tracks the changes you make, so you know what was changed and when. The data architects work in the sandbox (shown below), a safe and risk-free environment for making changes to the web features. Best of all, even while your data architect is updating project templates, data definitions, and forms, there is zero downtime for your users.

Sandbox

Companion's Sandbox

The cloud-based web app is hosted by Microsoft Azure with automatic daily, weekly and monthly backups to safeguard your data using the latest methods.  Microsoft Azure data centers guarantee a 99.95% uptime and meet a wide range of internationally recognized security and compliance standards.

Companion deploys quickly—your entire organization can be up and running in a matter of days. Easy-to-customize roadmaps and templates ensure teams follow your company’s methods and provide the information you need.  Companion is the best solution for managing, understanding, and sharing the impact of your continuous improvement program.

Why wouldn’t you want to give your people the best tools to make difficult problem solving and reporting easier?

For more information about Companion by Minitab® or to download your 30-day free trial, please visit our website at http://www.minitab.com/products/companion/

 

Need to Validate Minitab per FDA Guidelines? Get Minitab's Validation Kit

$
0
0

Last week I was fielding questions on social media about Minitab 18, the latest version of our statistical software. Almost as soon as the new release was announced, we received a question that comes up often from people in pharmaceutical and medical device companies:

pills"Is Minitab 18 FDA-validated?"

How Software Gets Validated

That's a great question. To satisfy U.S. Food and Drug Administration (FDA) regulatory requirements, many firms—including those in the pharmaceutical and medical device industries—must validate their data analysis software. That can be a big hassle, so to make this process easier, Minitab offers a Validation Kit.

We conduct extremely rigorous and extensive internal testing of Minitab Statistical Software to assure the numerical accuracy and reliability of all statistical output. Details on our software testing procedures can be found in the validation kit. The kit also includes an automated macro script to generate various statistical and graphical analyses on your machine. You can then compare your results to the provided output file that we have validated internally to ensure that the results on your machine match the validated results.

Intended Use

FDA regulations state that the purchaser must validate software used in production or as part of a quality system for the “intended use” of the software. FDA’s Code of Federal Regulations Title 21 Part 820.70(i) lays it out:

“When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol.”

FDA provides additional guidance for medical device makers in Section 6.3 of “Validation of Automated Process Equipment and Quality System Software” in the Principles of Software Validation; Final Guidance for Industry and FDA Staff, January 11, 2002.

“The device manufacturer is responsible for ensuring that the product development methodologies used by the off-the-shelf (OTS) software developer are appropriate and sufficient for the device manufacturer's intended use of that OTS software. For OTS software and equipment, the device manufacturer may or may not have access to the vendor's software validation documentation. If the vendor can provide information about their system requirements, software requirements, validation process, and the results of their validation, the medical device manufacturer can use that information as a beginning point for their required validation documentation.”

Validation for intended use consists of mapping the software requirements to test cases, where each requirement is traced to a test case. Test cases can contain:

  • A test case description. For example, Validate capability analysis for Non-Normal Data.
  • Steps for execution. For example, go to Stat > Quality Tools > Capability Analysis > Nonnormal and enter the column to be evaluated and select the appropriate distribution.
  • Test results (with screen shots).
  • Test pass/fail determination.
  • Tester signature and date.
An Example

There is good reason for the “intended use” guidance when it comes to validation. Here is an example:

Company XYZ is using Minitab to estimate the probability of a defective part in a manufacturing process. If the size of Part X exceeds 10, the product is considered defective. They use Minitab to perform a capability analysis by selecting Stat > Quality Tools > Capability Analysis > Normal.

In the following graph, the Ppk (1.32) and PPM (37 defects per million) are satisfactory.

Not Validated for Non-Normal Capability Analysis

However, these good numbers would mislead the manufacturer into believing this is a good process. Minitab's calculations are correct, but this data is non-normal, so normal capability analysis was the wrong procedure to use.

Fortunately, Minitab also offers non-normal capability analysis. As shown in the next graph, if we choose Stat > Quality Tools > Capability Analysis > Nonnormal and select an appropriate distribution (in this case, Weibull), we find that the Ppk (1.0) and PPM (1343 defects per million) are actually not acceptable:

Validated for Non Normal Capability Analysis
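
To see how much the distribution choice can move the answer, here is a minimal sketch in Python comparing the estimated defect rate under a normal assumption with one from a fitted Weibull distribution. The data file is hypothetical, and Minitab's nonnormal capability calculations differ in their details.

```python
import numpy as np
from scipy import stats

size = np.loadtxt("part_x.txt")  # hypothetical measurements of Part X
usl = 10.0                       # parts above 10 are defective

# Normal assumption: defects estimated from the normal tail
mu, sd = size.mean(), size.std(ddof=1)
ppm_normal = (1 - stats.norm.cdf(usl, mu, sd)) * 1e6

# Weibull fit: defects estimated from the fitted Weibull tail
shape, loc, scale = stats.weibull_min.fit(size, floc=0)
ppm_weibull = (1 - stats.weibull_min.cdf(usl, shape, loc, scale)) * 1e6

print(f"PPM (normal assumption): {ppm_normal:.0f}")
print(f"PPM (Weibull fit):       {ppm_weibull:.0f}")

# With skewed data, the normal-based PPM can be wildly optimistic.
```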

Thoroughly identifying, documenting, and validating all intended uses of the software helps protect both businesses that make FDA-regulated products and the people who ultimately use them.

Software Validation Resources from Minitab

To download Minitab's software validation kit, visit http://www.minitab.com/support/software-validation/

In addition to details regarding our testing procedures and a macro script for comparing your results to our validated results, the kit also includes software lifecycle information.

Additional information about validating Minitab relative to the FDA guideline CFR Title 21 Part 11 is available at this link:

http://it.minitab.com/support/answers/answer.aspx?id=2588

If you have any questions about our software validation process, please contact us.

Making Steel Even Stronger with Monte Carlo Simulation


If you have a process that isn’t meeting specifications, using Monte Carlo simulation and optimization can help. Companion by Minitab offers a powerful, easy-to-use tool for Monte Carlo simulation and optimization. In this post we'll look at how product engineers involved in steel production for automobile parts could use Companion to improve a process.

The tensile strength of Superlative Auto Parts’ new steel parts needs to be at least 600 MPa. The important inputs for this manufacturing process are the melting temperature of the steel and the amount of carbon, manganese, cobalt, and phosphorus it contains. The following transfer equation models the steel’s tensile strength:

Strength = -1434 + 1.1101*MeltTemp + 1495*Carbon + 174.3*Manganese - 7585*Cobalt - 3023*Phosphorus

Building your process model

To assess the process capability, you can enter information about your current process inputs into Companion’s straightforward interface.

Suppose that while you know most of your inputs follow a normal distribution, you’re not sure about the distribution of melting temperature. As long as you have data about the process, you can just select the appropriate column in your data sheet and Companion will recommend the appropriate distribution for you.

determining distribution from data

In this case, Companion recommends the Weibull distribution as the best fit and then automatically enters the "MeltTemp" distribution information into the interface.

companion monte carlo tool - define model

Once you have entered all of your input settings, your transfer equation, and the lower specification limit, Companion completes 50,000 simulations for the steel production.

Understanding your results

The process capability index (Cpk) for your process is 0.417, far short of the minimum standard of 1.33. Companion also indicates that under current conditions, 14 percent of your parts won’t meet the minimum specification.
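
Numbers like these are easy to reproduce in a quick simulation of your own. Here is a minimal sketch in Python using the transfer equation above; all of the input distributions and their parameters are hypothetical stand-ins, not the settings Companion would estimate from your data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
lsl = 600.0  # minimum tensile strength, MPa

# Hypothetical input distributions (Weibull for MeltTemp, normal for the rest)
melt_temp = 1400 + 150 * rng.weibull(3.0, n)
carbon = rng.normal(0.22, 0.01, n)
manganese = rng.normal(1.00, 0.05, n)
cobalt = rng.normal(0.05, 0.005, n)
phosphorus = rng.normal(0.010, 0.002, n)

# Transfer equation for tensile strength
strength = (-1434 + 1.1101 * melt_temp + 1495 * carbon
            + 174.3 * manganese - 7585 * cobalt - 3023 * phosphorus)

# Capability against the lower spec only
cpk = (strength.mean() - lsl) / (3 * strength.std(ddof=1))
pct_below = (strength < lsl).mean() * 100
print(f"Cpk = {cpk:.3f}, % below spec = {pct_below:.1f}%")
```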

Finding optimal input settings

The Companion Monte Carlo tool’s smart workflow guides you to the next step for improving your process: optimizing your inputs.

parameter optimization guidance

You set the goal—maximizing the tensile strength—and enter the high and low values for your inputs. Companion does the rest.

parameter optimization dialog

Simulating the new process

After finding the optimal input settings in the ranges you specified, Companion presents the simulated results for the recommended process changes.

monte carlo simulation of tensile strength

The simulation indicates that the optimal settings identified by Companion will virtually eliminate out-of-spec product from your process, with a Cpk of 1.56—a vast improvement that exceeds the 1.33 Cpk standard. Thanks to you, Superlative Auto Parts’ steel products won’t be hitting any bumps in the road.

Getting great results

Figuring out how to improve a process is easier when you have the right tool to do it. With Monte Carlo simulation to assess process capability and Parameter Optimization to identify optimal settings, Companion can help you get there. And with Sensitivity Analysis to pinpoint exactly where to reduce variation, you can further improve your process and get the product results you need.

To try the Monte Carlo simulation tool, as well as Companion's more than 100 other tools for executing and reporting quality projects, learn more and get the free 30-day trial version for you and your team at companionbyminitab.com.

5 Tips to Make Process Improvements Stick!


For a process improvement practitioner, finishing the Control Phase of the DMAIC process is your ticket to move on to your next project. You’ve done an excellent job leading the project team: together you identified root causes, developed and implemented solutions to resolve them, put a control plan in place, and transitioned the process back to the Process Owner. Soon, however, you learn that the process has reverted to its original state.

I’ve often heard project leaders lament, “We worked so hard to identify and implement these solutions—why won’t they stick?”

So let's talk about fishing for a moment, because it offers some great lessons for making process change. Remember the quote, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime”? Seems simple enough, right? But what is involved, and how long does it take, to teach people to fish so they can eat for a lifetime?

The same is true for process improvements. Seems simple enough to make a change and expect it to stick. So why is it so hard?

catch a fishThe fishing analogy hits home with me. I love to go fishing and have been an avid angler since I was young. And though it’s been a while since I taught my kids how to fish, I do remember it was a complicated process. There is a lot to learn about fishing—such as what type of equipment to use, rigging the rod, baiting the hook, deciding where to fish, and learning how to cast the line.

One of the most important fishing tips I can offer a beginner is that it's better to go fishing five times in a few weeks as opposed to five times in an entire year. Skills improve quickly with a focused effort and frequent feedback. People who spread those introductory fishing experiences out over a year wind up always starting over, and that can be frustrating. While there are people who are naturally good at fishing and catch on (pun intended) right away, they are rare. My kids needed repeated demonstrations and lots of practice, feedback and positive reinforcement before they were able to fish successfully. Once they started catching fish, their enthusiasm for fishing went through the roof!

Tips for Making Process Improvements Stick

Working with teams to implement process change is similar. Most workers require repeated demonstrations, lots of practice, written instructions, feedback and positive reinforcement before the new process changes take hold.  

Here are several tips you can use to help team members be successful and implement process change more quickly. Take the time to design your solution implementation strategy and control plan with these tips in mind. Also, Companion by Minitab® contains several forms that can make implementing these tips easy.

Tip #1: Pilot the Solution in the Field

A pilot is a test of a proposed solution, usually performed on a small scale. It's like learning to fish from the shore before you go out on a boat in the ocean with a 4-foot swell. A pilot is used to evaluate both the solution and its implementation so that the full-scale rollout is more effective: it provides data about expected results and exposes issues with the implementation plan. The pilot should test whether the process meets both your specifications and your customers' expectations. First impressions can make or break your process improvement solution, so test the solution with a small group to work out any kinks; a smooth implementation will help workers accept the solution at the formal rollout. Use a form like the Pilot Scale-Up Form (Figure 1) to capture issues that need resolution prior to full implementation.

Pilot
Figure 1. Pilot Scale-Up Form

Tip #2: Implement Standard Work

Standard work is one of the most powerful but least used lean tools to maintain improved process performance. By documenting the current best practice, standardized work forms the baseline for further continuous improvement. As the standard is improved, the new standard becomes the baseline for further improvements, and so on.

Use a Standard Work Combination Chart (Figure 2) to show the manual, machine, and walking time associated with each work element. The output graphically displays the cumulative time as manual (operator controlled) time, machine time, and walk time. Looking at the combined data helps to identify the waste of excess motion and the waste of waiting.

Standard Work
Figure 2. Standard Work Combination Chart

Tip #3: Update the Procedures

A Standard Operating Procedure (SOP) is a set of instructions detailing the tasks or activities that need to take place each time the action is performed. Following the procedure ensures the task is done the same way each time. The SOP details activities so that a person new to the position will perform the task the same way as someone who has been on the job longer.

When a process has changed, don't just tell someone about the change: legitimize the change by updating the process documentation. Make sure to update any memory-jogger posters hanging on the walls, and the cheat sheets in people's desk drawers, too. Including a document revision form such as Figure 3 in your control plan will ensure you capture a list of the procedures that require updating.

Document Revision
Figure 3. Document Revision Form

Tip #4: Feedback on New Behaviors Ensures Adoption

New processes involve new behaviors on the part of the workers. Without regular feedback and positive reinforcement, new process behaviors will fade away or revert to the older, more familiar ways of doing the work. Providing periodic feedback and positive reinforcement to those using the new process is a sure-fire way to keep employees doing things right. Unfortunately, it’s easy for managers to forget to provide this feedback. Using a Process Behavior Feedback Schedule like Figure 4 below increases the chance of success for both providing the feedback and maintaining the gains.

Process Behavior
Figure 4. Process Behavior Feedback Schedule

Tip #5: Display Metrics to Reinforce the Process Improvements

Metrics play an integral and critical role in process improvement efforts by providing signs of the effectiveness and the efficiency of the improvement itself. Posting "before and after" metrics in the work area to highlight improvements can be very motivating for the team: workers see their hard work paying off, as in Figure 5. It is important to keep the metric current, because it will be one of the first indicators if your process starts reverting.

Before After Chart
Figure 5. Before and After Analysis

 

Kids Fishing

When it comes to fishing and actually catching fish, practice, effective feedback, and positive reinforcement make perfect.

The same goes for implementing process change. If you want to get past the learning curve quickly, use these tips and enjoy the benefits of an excellent process! 

To access these and other continuous improvement forms, download the 30-day free trial of Companion from the Minitab website at http://www.minitab.com/products/companion/

Attribute Acceptance Sampling for an Acceptance Number of 0


Suppose that you plan to source a substantial amount of parts or subcomponents from a new supplier. To ensure that their quality level is acceptable to you, you might want to assess the capability levels (Ppk and Cpk indices) of their manufacturing processes and check whether their critical process parameters are fully under control (using control charts). If you are not sure about the efficiency of the supplier quality system or if you cannot get reliable estimates of their capability indices, you will probably need to actually inspect the incoming parts from this vendor.

Parts for visual inspection

However, checking all parts is expensive and time-consuming. In addition, visually inspecting 100% of parts will not necessarily ensure that all defective parts are detected (operators eventually get tired performing repetitive visual inspections).

Acceptance sampling is a more efficient approach: to reduce costs, a smaller sample of parts is selected at random (to avoid any systematic bias) from a larger batch of incoming product, and these sampled parts are then inspected.

Attribute Acceptance Sampling

The Acceptable Quality Level (AQL) of your supplier is the quality level that you expect from them: a proportion of defectives that is still considered acceptable. If the proportion of defectives is larger than that, the whole batch should be rejected (with a financial penalty for the supplier). The Rejectable Quality Level (RQL) is a proportion of defectives that is not considered acceptable, at which point the whole batch should be rejected.

The graph below shows the probability of accepting a batch for a given proportion of defectives. When the actual percentage of defectives in the batch is 1% (the AQL in this case), the probability of accepting the whole batch is 98.5%; if the true percentage of defectives increases to 10% (the RQL), the probability of accepting the whole batch drops to 9.7%.

The inspection criterion, in this case, should be the following: check 52 parts, and if there are more than 2 defective parts, reject the whole batch. If there are 2 or fewer defective parts, do not reject. The AQL and the RQL need to be negotiated with your supplier, whereas the acceptance criteria are calculated by Minitab.

This graph shows the probability of accepting a batch for a given proportion of defectives.

In Minitab, go to Stat > Quality Tools > Acceptance Sampling by Attributes... and enter your AQL and RQL as displayed in the dialogue box below to obtain the acceptance criteria.
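If you'd like to sanity-check the acceptance probabilities behind the OC curve, they follow directly from the binomial distribution: the chance of finding at most c defectives in a random sample of n parts. A quick sketch, assuming SciPy is available:

    from scipy.stats import binom

    def p_accept(p_defective, n=52, c=2):
        """Probability of accepting the batch: at most c defectives among n parts."""
        return binom.cdf(c, n, p_defective)

    print(p_accept(0.01))  # ~0.985 at the AQL of 1%
    print(p_accept(0.10))  # ~0.097 at the RQL of 10%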

C = 0 Inspection Plans (Acceptance Number of 0)

From a quality assurance point of view, however, in many industries the only acceptable publicized quality level is 0% defective parts. Obviously, the ideal AQL would be 0: you may have a difficult time explaining to your final customers that a small proportion of defectives is still acceptable. So let's focus on zero-defect inspection plans, in which the acceptance number is 0 and a batch is rejected as soon as a single defective is identified in the sample.

Note that Minitab will not allow you to enter an AQL of exactly 0 (it should always be larger than 0).

The Producer’s Risk

If the acceptance number is set to 0, the conditions for accepting a lot become considerably more restrictive. One consequence of setting very strict standards for accepting a batch is that, if quality is not 100% perfect, the probability of rejecting a batch increases very rapidly, even with a very small proportion of defectives.

The Alpha risk (the producer's risk) is the probability of rejecting a batch even though the proportion of defectives is very small. This impacts the producer, since many of the batches they deliver will get rejected if the true proportion of defectives is not exactly 0.

In the graph below, the probability of accepting a batch with a 1% defective rate is now 80% (so nearly 20% of batches will get rejected if the true proportion of defectives is 1%)! This high rejection rate is the price we pay for the very strict acceptance number of 0.

Conclusion

The sample size to inspect is smaller with an acceptance number of 0 (22 parts are inspected in the second graph vs. 52 in the first). However, this is a very demanding standard: if the true percentage of defectives in the batches is, say, 0.5% (with the AQL set at 0.5%), then 10.4% of all batches will get rejected.
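The arithmetic behind that 10.4% is worth seeing. With an acceptance number of 0, a batch is accepted only if every one of the n sampled parts is good, so P(accept) = (1 - p)^n. With n = 22 and p = 0.005, that gives 0.995^22 ≈ 0.896, so roughly 10.4% of batches are rejected; at p = 0.01 it falls to 0.99^22 ≈ 0.80, the 80% acceptance probability shown in the graph above.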

To obtain a lower and more realistic proportion of rejected batches, the level of quality from your supplier should be nearly 100% perfect (almost 100% good parts).

Gleaning Insights from Election Data with Basic Statistical Tools


One of the biggest pieces of international news last year was the so-called "Brexit" referendum, in which a majority of voters in the United Kingdom cast their ballots to leave the European Union (EU).

Polling station in the United Kingdom

That outcome shocked the world. Follow-up media coverage has asserted that the younger generation prefers to remain in the EU, since that means more opportunities on the continent. The older generation, on the other hand, prefers to leave the EU.

As a statistician, I wanted to look at the data to see what I could find out about the Brexit vote, and recently the BBC published an article that included some detailed data.

In this post, I'll use Minitab Statistical Software to explore the data from the BBC site along with the data from the Electoral Commission website. I hope this exploration will give you some ideas about how you might use publicly available data to get insights about your customers or other aspects of your business.

The Electoral Commission data contains the voting details of all 382 regions in the United Kingdom. It includes information on voter turnout, the percent who voted to leave the EU, and the percent who voted to remain. (If you'd like to follow along, open the BrexitData1 and BrexitData2 worksheets in Minitab 18. If you don't already have Minitab, you can download the 30-day trial.)

I began by creating scatterplots (in Minitab, go to Graph > Scatterplot...) of the percentage of voter turnout against the percentage of the population that voted to leave for each region, as shown below.

Scatterplot of Brexit Voter Data1

Scatterplot of Brexit Voter Data, #2

According to commentators, areas with high voter turnout had a tendency to vote to leave, as the elderly were more likely to turn up to vote. There is also a perceptible difference between the plots for the different areas.

To make this easier to analyze, I created an indicator variable called “decided to leave” in my Minitab worksheet. This variable takes the value of 1 if the area voted to leave the EU, and takes the value 0 otherwise. Tallying the number of areas in each region that voted to leave or remain (Stat > Tables > Tally Individual Variables...) yields the following:

Tabulated Brexit Statistics: Region, Decided to Leave

There are indeed regional differences. For example, London and Scotland voted strongly to remain, while the North East and North West voted strongly to leave. So, do we see greater voter turnout in the regions that voted to leave? Looking at the average turnout in each region (using Stat > Display Descriptive Statistics...), we have the following:

Brexit Data - Descriptive Statistics

Surprisingly, the average turnout of regions that voted strongly to leave is not very different from that of regions that voted strongly to remain: for example, average turnout was 69.817% in London, compared to 70.739% in the North West.

The data set analyzed in the BBC article contains localised voting data supplied to the BBC by the councils that counted the EU referendum ballots. This data is more detailed than the regional data from the Electoral Commission, and it includes a detailed breakdown of how the people in individual electoral wards voted.

The BBC asked all the counting areas for these figures. Three councils did not reply. The remaining missing data could be due to any of the following reasons:

  • The council refused to give the information to the BBC.
  • No geographical information was available because all ballot boxes were mixed before counting.
  • The council conducted a number of mini-counts that combined ballot boxes in a way that does not correspond to individual wards.

For those wards that have voting data, I also gathered the following information from the last census for each area.

  • Percent of population in an area with level 4 qualification or higher. This includes individuals with a higher certificate/diploma, foundation degree, undergraduate degree, or master’s degree up to a doctorate. I will call this variable “degree” to represent individuals holding degrees or equivalent qualification.
  • Percentage of young people (age 18-29) in an area.
  • Percentage of middle-aged (age 30-59) in an area.
  • Percentage of elderly (age 65 or above) in an area.

There is some difference in how some wards are defined between this data set and the data from the last census, perhaps due to changes in ward boundaries. Thus, for some wards, it was not possible to match the corresponding percentages of different age groups and degree holders. Therefore, some areas had to be omitted from my analysis, leaving me with data from a total of 1,069 wards.

With the exception of Scotland, Northern Ireland, and Wales, I have data from wards in all regions of the UK. The number of measurements from each region appears below.

Brexit Data, Descriptive Statistics N

As with the Electoral Commission data, let’s begin by looking at some graphs. Below is a scatterplot of the percentage voting to leave against the percent of the population with a degree in an area.

Scatterplot of Brexit Data:  Leave % vs. Degree

As you can see, the higher the percentage of people in an area who hold a degree, the lower the percentage of the population that voted to leave. There are exceptions, however. For Osterley and Spring Grove in Hounslow, the percentage that voted to leave is 63.41% despite a relatively high percentage of degree holders (37.5566%); the area also has a small proportion of young adults (19.3538%).

Let's look at the voting behaviour for different age groups. I created scatterplots of the percentage that voted to leave against different age groups.

The next plot shows percentage that voted to leave against the percentage of young people (age 18-29) in an area:

Scatterplot of Brexit Data: Leave% vs Young

Areas with a higher percentage of young people appear to have a smaller percentage of people who voted to leave.

The following plot shows the percentage of the population that voted to leave against the percentage of elderly residents:

Scatterplot of Brexit Data: Leave% vs. Elderly

This plot shows the opposite of the situation in the previous one: areas with a higher proportion of elderly residents voted more strongly to leave.

These scatterplots support what’s being said in pieces such as the article on the BBC's website. However, in statistics, we like to verify that the relationship is significant. Let’s look at the correlation coefficients (Stat > Basic Statistics > Correlation...).

Brexit Data: Correlation - Leave%, Degree, Young, Elderly

The correlation output in Minitab includes a p-value. If the p-value is less than the chosen significance level, it tells you the correlation coefficient is significantly different from 0—in other words, a correlation exists. Since we selected an alpha value (or significance level) of 0.05, we can say that all the coefficients calculated above are significant and that there are correlations between these factors.
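If you'd rather script this check, the same correlation coefficients and p-values can be computed outside Minitab. Here is a sketch; the CSV file and column names are hypothetical stand-ins for however you've exported the ward-level data:

    import pandas as pd
    from scipy.stats import pearsonr

    wards = pd.read_csv("brexit_wards.csv")  # hypothetical export of the ward data
    for col in ["Degree", "Young", "Elderly"]:
        r, p = pearsonr(wards["LeavePct"], wards[col])
        print(f"LeavePct vs {col}: r = {r:.3f}, p = {p:.4f}")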

Thus, the proportion of degree holders in an area has a strong negative impact on voting to leave. On the other hand, the proportion of elderly residents in an area has a strong positive impact on voting to leave.

Going a step further, I fit a regression model (Stat > Regression > Regression > Fit Regression Model...) that links the percent voting to leave with the proportion of degree holders and different age groups.

Brexit Data Regression: Leave% vs Degree, Young, Middle-age, Elderly

While there is no need to use the equation to make a prediction, we can still get some interesting information from the results.

The different age groups and the proportion of degree holders all have an impact on the percentage voting to leave. The coefficient for the "degree" term is negative, which implies that for each unit increase in the percent of degree holders, the percentage voting to leave drops by 1.4095. On the other hand, for a unit increase in the percentage of elderly residents, the percentage voting to leave increases by 1.2732. In addition, there is a significant interaction between the percentage of degree holders and young people: every unit increase in this interaction term increases the percent voting to leave by only 0.00641.
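A model of the same form can be reproduced with any regression tool. As a sketch, here is the statsmodels formula version, reusing the hypothetical column names from the correlation sketch above; Degree:Young is the interaction term that appeared in the Minitab output:

    import pandas as pd
    import statsmodels.formula.api as smf

    wards = pd.read_csv("brexit_wards.csv")  # same hypothetical export as above
    model = smf.ols(
        "LeavePct ~ Degree + Young + MiddleAged + Elderly + Degree:Young",
        data=wards,
    ).fit()
    print(model.summary())  # coefficients, p-values, and R-squared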

The results I obtained when I analyzed the data with Minitab support the commonly held view that younger voters preferred to remain in the EU, while older voters preferred to leave. The analysis also underscores the complicated politics surrounding Brexit, a reality that became apparent in the recent general election. One thing seems certain now that Brexit talks are imminent: balancing the needs and desires of the people from different age groups and backgrounds will be a tremendous task.

Getting the Most Out of Your Text Data, Part 1


With Minitab, it’s easy to create graphs and manage numeric, date/time and text data. But Minitab’s enhanced data manipulation features make it easier to work with text data, too.

handling text data is easy

This is the first of three posts in which I'm going to focus on various tools in Minitab that are useful when working with text data, including the Calculator, the Data menu, and the Editor menu.

Using the Calculator

You might be surprised to hear that Minitab’s Calculator is just as useful with text as it is with numbers. Here are just a few things you can use it for:

ISOLATE CRITICAL INFORMATION

Sometimes it's helpful to extract individual words or characters from text data for use in isolation. For example, if we have a column of product IDs and need just some of the letters or numbers in the text string, the LEFT, MID and RIGHT functions in the calculator can be very useful.

The LEFT function extracts values from a text string beginning with the leftmost character and stops at the number of characters we specify. In the example above, we could complete the Calc > Calculator dialog box as shown below to pull out the two characters on the left side (AB or BC):

The RIGHT function works in exactly the same way, except that RIGHT extracts characters beginning with the rightmost. Here we're pulling out the 4 characters starting from the right side:

Similarly, we can use the MID function to extract the characters we want from the middle of a text string. With MID, we enter the text column, the position of the first character we want, and then the number of characters to extract. In this example we want the 2 characters between the hyphens; the first character we want is the fourth, so we'd complete the Calculator dialog box like this:
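If you ever need the same extractions outside Minitab, ordinary string slicing does the job. A sketch in Python, using a made-up product-ID format of the kind described above:

    import pandas as pd

    ids = pd.Series(["AB-12-3456", "BC-34-5678"])  # hypothetical product IDs

    left2 = ids.str[:2]    # like LEFT(text, 2)   -> "AB", "BC"
    right4 = ids.str[-4:]  # like RIGHT(text, 4)  -> "3456", "5678"
    mid2 = ids.str[3:5]    # like MID(text, 4, 2) -> "12", "34" (start at 4th char, take 2)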

COMBINE DATA FOR ADDED MEANING

In other cases, the whole can be greater than the sum of its parts. For example, if values for Month, Day and Year are stored in separate columns, we may want to combine these into a single column:

The month, day and year of each observation were originally recorded in separate columns, which complicates graphing. Fortunately, the calculator can be used to combine the three columns into a single column. To do that, we can use the CONCATENATE function:

The empty space between the double quotes will add a space between the Month and Day, and the comma plus the empty space will add a comma after the Day and a space before the year:
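The scripted equivalent of CONCATENATE is plain string addition. A sketch, with invented dates:

    import pandas as pd

    month = pd.Series(["January", "March"])
    day = pd.Series(["12", "28"])
    year = pd.Series(["2017", "2017"])

    # A space between month and day; a comma plus a space before the year
    date = month + " " + day + ", " + year  # "January 12, 2017", "March 28, 2017"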

 

REPLACE INCORRECT PORTIONS OF TEXT DATA

A consistent recording error doesn't have to result in time-consuming hand-corrections. In this example, an operator who handles product returns has recorded the wrong year in the catalog code used to reference the item:

The calculator’s SUBSTITUTE function can be used to replace the incorrect Spring 13 with Spring 14. The calculator will find the text and replace it with the new text that we specify:
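In script form, SUBSTITUTE is a straightforward find-and-replace on the column. A sketch, with an invented catalog-code format:

    import pandas as pd

    codes = pd.Series(["CAT-Spring 13-0042", "CAT-Spring 13-0097"])  # hypothetical codes
    fixed = codes.str.replace("Spring 13", "Spring 14", regex=False)
    # -> "CAT-Spring 14-0042", "CAT-Spring 14-0097"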

These are just a few of the useful functions included in Minitab's Calculator. To see a complete list of the calculator functions, with explanations and examples of how to use each, open Minitab and go to Help > Help.

If you're already using the calculator in Minitab, the easiest way to access the same information is to click the Help button in the lower-left corner of the calculator.

calculator to help online

In my next post, we'll explore some of the text data manipulation features that Minitab offers in the Data menu.


Cp and Cpk: Two Process Perspectives, One Process Reality


It’s usually not a good idea to rely solely on a single statistic to draw conclusions about your process. Do that, and you could fall into the clutches of the “duck-rabbit” illusion shown here:

If you fix your eyes solely on the duck, you’ll miss the rabbit—and vice-versa.

If you're using Minitab Statistical Software for capability analysis, the capability indices Cp and Cpk are good examples of this. If you focus on only one measure, and ignore the other, you might miss seeing something critical about the performance of your process. 

Cp: A Tale of Two Tails

Cp is a ratio of the specification spread to the process spread. The process spread is often defined as the 6-sigma spread of the process (that is, 6 times the within-subgroup standard deviation). Higher Cp values indicate a more capable process.

When the specification spread is considerably greater than the process spread, Cp is high.

When the specification spread is less than the process spread, Cp is low.

By using the 6-sigma process spread, Cp incorporates information about both tails of the process data. But there’s something Cp doesn’t do—it doesn’t tell you anything about the location of the process data.

For example, the following two processes have about the same Cp value (≈ 3):

Obviously, Process B has a serious issue with its location in relation to the spec limits that Cp just can't "see."

Cpk: Location, Location, Location!

Like Cp, Cpk is also a ratio of the specification spread to the process spread. But unlike Cp, Cpk compares the distance from the process mean to the closest specification limit, to about half the spread of the process (often, the 3-sigma spread).

When the distance from the mean to the nearest specification limit is considerably greater than the one-sided process spread, Cpk is high.

When the distance from the mean to the nearest specification limit is less than the one-sided process spread, Cpk is low.

Notice how the location of the process does affect the Cpk value—by virtue of its being calculated using the process mean.
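The two definitions are easy to compare side by side in code. A minimal sketch using the standard formulas, with the within-subgroup standard deviation supplied directly (the spec limits and sigma below are made up):

    def cp(lsl, usl, sigma_within):
        """Specification spread divided by the 6-sigma process spread."""
        return (usl - lsl) / (6 * sigma_within)

    def cpk(mean, lsl, usl, sigma_within):
        """Distance from the mean to the nearest spec limit, over 3 sigma."""
        return min(usl - mean, mean - lsl) / (3 * sigma_within)

    # Same spread, different centering
    print(cp(90, 110, 1.1), cpk(100, 90, 110, 1.1))  # centered: Cp = Cpk = 3.03
    print(cp(90, 110, 1.1), cpk(107, 90, 110, 1.1))  # shifted:  Cp = 3.03, Cpk = 0.91

Shifting the mean toward a spec limit leaves Cp untouched but drags Cpk down, which is exactly the effect described above.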

Yet there's something important that Cpk doesn't do. Because it's a "worst-case" estimate that uses only the nearest specification limit, Cpk can't "see" how the process is performing on the other side.

For example, the following two processes have about the same Cpk value (≈ 0.9):


Notice that Process X has nonconforming parts in relation to both spec limits, while Process Y has nonconforming parts in relation to only the upper spec limit (USL). But Cpk can't "see" any difference between these two processes.

To get the two-sided picture of each process, in relation to both spec limits, you can look at Cp, which would be higher for Process Y than for Process X.

Summing Up: Look for Ducks, Rabbits, and Other Critters as Well

Avoid getting too fixated on any single statistic. If you have both a lower and an upper specification limit for your process, Cp and Cpk each might "know" something about your process that the other one doesn't. That "something" could be critical to fully understanding how your process is performing.

To see a concrete example of how Cp and Cpk work together, using real data from the National Renewable Energy Laboratory, see this post by Cody Steele.

By the way, the potential "blind spot" for Cp and Cpk also applies to Pp and Ppk. The only difference is that the process spread for those indices is calculated using the overall standard deviation, instead of the within-subgroup standard deviation. For more on that distinction, see this post by Michelle Paret.

And if you're interested in other optical and statistical illusions, check out this post on Simpson's paradox.

Getting the Most Out of Your Text Data, Part 2


My previous post focused on manipulating text data using Minitab’s calculator.

In this post we continue to explore some of the useful tools for working with text data, and here we’ll focus on Minitab’s Data menu. This is the second in a 3-part series, and in the final post we’ll look at the new features in Minitab’s Editor menu.

Using the Data Menu

When I think of the Data menu, I think manipulation: the Data menu in Minitab is used to manipulate the data in the worksheet. This menu is useful for both text and numeric data.

Let's focus on two features from the Data menu: Code and Conditional Formatting.

Using the Code Command

When working with text data, it is sometimes useful to reduce the number of categories by combining some of them into fewer groups. Consider the example below, where we have low or zero counts for some of the citrus fruits:

Rather than generating a bar chart with no bar for Grapefruit, we could combine some of the citrus fruits into a single Citrus category instead of listing them separately.  The Code command in Minitab’s Data menu can help:

By coding our existing text values for Grapefruit, Oranges and Lemons to a new text category called Citrus, we can reduce the number of categories. To do that in Minitab, we enter the original column listing the types of fruit in the first field. Then we change the values under Coded value from the current values (Lemons, Oranges, and Grapefruit) to their new value, Citrus:

As a final step, we can tell Minitab where we’d like to store the coded results by using the Storage location for the coded columns drop-down list:

For this example, we’ll just keep the default and store the results at the end of the current worksheet and click OK.  The coded results can easily be used to create a new bar chart that shows only the Citrus category instead of the individual fruits:
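For what it's worth, this kind of recoding is a one-liner in script form, too. A sketch with made-up fruit values:

    import pandas as pd

    fruit = pd.Series(["Apples", "Lemons", "Oranges", "Grapefruit", "Bananas"])
    coded = fruit.replace({"Lemons": "Citrus", "Oranges": "Citrus", "Grapefruit": "Citrus"})
    # -> "Apples", "Citrus", "Citrus", "Citrus", "Bananas"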

Using Conditional Formatting

This is a relatively new feature in Minitab, one that came about as the result of many requests from users who wanted the ability to control the appearance of the data in the worksheet. Because the menu offers many options, we'll focus on just two as examples. The other options behave in a similar way, so a basic understanding of these two examples should help you apply the rest of the Conditional Formatting options.

Often, raw text data is used to create Pareto charts to see which defects occur most frequently. But what if we want to highlight the most frequently occurring defect in the worksheet? We use Data > Conditional Formatting > Pareto > Most Frequent Values:

We enter our column listing the defects in the first field, and in the second field we can tell Minitab the number of unique items to highlight. For example, if we want to highlight the two most frequently listed defects, we enter 2. In this example, we only want to highlight the most common defect so we enter 1. Finally, we can tell Minitab what color we’d like to apply to the cells that meet our condition, and then we click OK to see the highlighted cells in the worksheet:

In some situations, it may be useful to highlight specific values in a text column. In fact, we may want multiple colors in a single column, each representing a specific category. For this situation we can use Conditional Formatting > Highlight Cell > Text that Contains:

With this option, we can tell Minitab the color we want for a cell that contains the text that we type into the Format cells that contain field:

First we enter the column with the data in the first field, type the text we want to highlight in the second field (note: this is case-sensitive), and then choose the color from the Style drop-down list. We can repeat this process to apply multiple colors to a single column. In this example, the Low values will be shown in green, the Medium values will be marked in yellow, and the High values will be highlighted in red:

Finally, after applying conditional formatting to our worksheet, we’ll need an easy way to see all the rules we’ve applied and the ability to remove or change those rules.  The Conditional Formatting menu’s Manage Rules option can make the magic happen:

The rules for each column are listed separately, so we choose a column to see the rules that have been applied by selecting the column from the drop-down list at the top:

The rules applied to the selected column are listed under Rules.  We can remove a specific rule by clicking on a rule in the Rules list, and then clicking the button with the red X.

We can also use the Format button to change the formatting of a specific rule.  For example, I may want to change the rule for Medium from Yellow to my favorite color:

It’s much nicer in hot pink, wouldn’t you agree?

In my final post in this series, we’ll look at the new features that ease the pain of manipulating text data using the Editor menu.

What Is the Difference between Linear and Nonlinear Equations in Regression Analysis?


Fourier nonlinear function

Previously, I've written about when to choose nonlinear regression and how to model curvature with both linear and nonlinear regression. Since then, I've received several comments expressing confusion about what differentiates nonlinear equations from linear equations. This confusion is understandable because both types can model curves.

So, if it’s not the ability to model a curve, what is the difference between a linear and nonlinear regression equation?

Linear Regression Equations

Linear regression requires a linear model. No surprise, right? But what does that really mean?

A model is linear when each term is either a constant or the product of a parameter and a predictor variable. A linear equation is constructed by adding the results for each term. This constrains the equation to just one basic form:

Response = constant + parameter * predictor + ... + parameter * predictor

Y = b0 + b1X1 + b2X2 + ... + bkXk

In statistics, a regression equation (or function) is linear when it is linear in the parameters. While the equation must be linear in the parameters, you can transform the predictor variables in ways that produce curvature. For instance, you can include a squared variable to produce a U-shaped curve.

Y = b0 + b1X1 + b2X1^2

This model is still linear in the parameters even though the predictor variable is squared. You can also use log and inverse functional forms that are linear in the parameters to produce different types of curves.
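The point that "linear" refers to the parameters, not the predictors, is easy to demonstrate: a quadratic curve can be fit with ordinary linear least squares, because the columns 1, X, and X^2 enter the model linearly. A sketch with simulated data:

    import numpy as np

    rng = np.random.default_rng(seed=2)
    x = np.linspace(15, 40, 60)  # a BMI-like predictor, simulated
    y = 4 - 0.5 * x + 0.03 * x**2 + rng.normal(0, 0.5, x.size)

    # Design matrix with columns 1, X, X^2: linear in b0, b1, b2 despite the curve
    X = np.column_stack([np.ones_like(x), x, x**2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(b)  # estimates of b0, b1, b2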

Here is an example of a linear regression model that uses a squared term to fit the curved relationship between BMI and body fat percentage.

Linear model with squared term

Nonlinear Regression Equations

While a linear equation has one basic form, nonlinear equations can take many different forms. The easiest way to determine whether an equation is nonlinear is to focus on the term “nonlinear” itself. Literally, it’s not linear. If the equation doesn’t meet the criteria above for a linear equation, it’s nonlinear.

That covers many different forms, which is why nonlinear regression provides the most flexible curve-fitting functionality. Here are several examples from Minitab’s nonlinear function catalog. Thetas represent the parameters and X represents the predictor in the nonlinear functions. Unlike linear regression, these functions can have more than one parameter per predictor variable.

  • Power (convex): Theta1 * X^Theta2
  • Weibull growth: Theta1 + (Theta2 - Theta1) * exp(-Theta3 * X^Theta4)
  • Fourier: Theta1 * cos(X + Theta4) + Theta2 * cos(2*X + Theta4) + Theta3

Here is an example of a nonlinear regression model of the relationship between density and electron mobility.

Nonlinear regression model for electron mobility

The nonlinear equation is so long that it doesn't fit on the graph:

Mobility = (1288.14 + 1491.08 * Density Ln + 583.238 * Density Ln^2 + 75.4167 * Density Ln^3) / (1 + 0.966295 * Density Ln + 0.397973 * Density Ln^2 + 0.0497273 * Density Ln^3)
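Because a nonlinear model has no closed-form least squares solution, it is fit iteratively from starting values. Here is a sketch using SciPy to fit the power function from the catalog above, on simulated data:

    import numpy as np
    from scipy.optimize import curve_fit

    def power(x, theta1, theta2):
        return theta1 * x**theta2

    rng = np.random.default_rng(seed=3)
    x = np.linspace(1, 10, 50)
    y = 3.0 * x**1.5 + rng.normal(0, 1.0, x.size)  # simulated data, for illustration

    # p0 supplies the starting values that seed the iterative estimation
    theta, _ = curve_fit(power, x, y, p0=[1.0, 1.0])
    print(theta)  # should land near (3.0, 1.5)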

Linear and nonlinear regression are actually named after the functional form of the models that each analysis accepts. I hope the distinction between linear and nonlinear equations is clearer and that you understand how it’s possible for linear regression to model curves! It also explains why you’ll see R-squared displayed for some curvilinear models even though it’s impossible to calculate R-squared for nonlinear regression.

If you're learning about regression, read my regression tutorial!

What Does It Mean When Your Probability Plot Has Clusters?


Have you ever had a probability plot that looks like this?

Probability Plot of Patient Weight Before and After Surgery

The probability plot above is based on patient weight (in pounds) after surgery minus patient weight (again, in pounds) before surgery.

The red line appears to go through the data, indicating a good fit to the normal distribution, but there are clusters of plotted points at the same measured value. This occurs on a probability plot when there are many ties in the data. If the true measurement can take on any value (in other words, if the variable is continuous), then the cause of the clusters on the probability plot is poor measurement resolution.

The Anderson-Darling Normality test typically rejects normality when there is poor measurement resolution. In a previous blog post (Normality Tests and Rounding) I recommended using the Ryan-Joiner test in this scenario. The Ryan-Joiner test generally does not reject normality due to poor measurement resolution. 

In this example, the Ryan-Joiner p-value is above 0.10. A probability plot that supports using a normal distribution would be helpful to confirm the Ryan-Joiner test results. How can we see a probability plot of the true weight differences? Simulation can be used to show how the true weight differences might look on a probability plot.

The weight-difference values were rounded to the nearest pound. In effect, we want to add a random value between -0.5 and +0.5 to each value to get a simulated measurement. The steps are as follows (a script version appears after the list):

  1. Store simulated noise values from -0.5 to +0.5 in a column using Calc > Random Data > Uniform.
  2. Use Calc > Calculator to add the noise column to the original column of data.
  3. Create a normal probability plot using Stat > Basic Statistics > Normality Test.
  4. Repeat steps 1-3 several times if you want to see how the results are affected by the simulated values.
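Here is what those steps look like in script form, as a rough sketch. The rounded weight differences are invented, and SciPy's probability-plot routine stands in for Minitab's normality test graph:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng()

    # Hypothetical weight differences (after minus before), rounded to the nearest pound
    rounded = np.array([-9, -8, -8, -7, -7, -7, -6, -6, -5, -5, -4, -3])

    # Steps 1 and 2: add uniform noise on (-0.5, +0.5) to undo the rounding
    simulated = rounded + rng.uniform(-0.5, 0.5, size=rounded.size)

    # Step 3: assess how normal the simulated measurements look
    (osm, osr), (slope, intercept, r) = stats.probplot(simulated, dist="norm")
    print(f"probability plot correlation: {r:.3f}")  # values near 1 support normality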

The resulting graph from one iteration of these steps is shown below. It suggests that the Normal distribution is a good model for the difference in weights for this surgery.

Probability plot with simulated measurements

 

Getting the Most Out of Your Text Data Part III


The two previous posts in this series focused on manipulating data using Minitab’s calculator and the Data menu.

text data manipulation

In this third and final post, we continue to explore helpful features for working with text data, focusing on some features in Minitab's Editor menu.

Using the Editor Menu 

The Editor menu is unique in that the options displayed depend on what is currently active (worksheet, graph, or session window). In this blog post, we’ll focus on some of the options available when a worksheet is active. Here's the Editor menu in Minitab 18:

Minitab 18 Editor Menu

There is also some duplication between the Data menu, which was the focus of my previous post, and the Editor menu: both provide the option for Conditional Formatting, so the same conditional formatting options can be accessed via either menu.

Let's consider some examples using features from Find and Replace (Find/Replace Formatted Cell Value), Cell Properties (Comment, Highlight & Custom Formats), Column Properties (Value Order) and Subset Worksheet (Custom Subset).

Find and Replace

This section of the Editor menu includes options for Find Format and Replace Formatted Cell Value—either selection will display the Find Format and Replace Value dialog box:

We can toggle between the two options by using the Find and the Replace tabs at the top.

Both of these options could be useful when making changes to a worksheet that has been formatted using the new conditional formatting options discussed in the previous post in this series.

For example, if we've applied conditional formatting to a worksheet to highlight cells with the value 'Low' in green, we could use the Replace tab to find all the cells that are green and replace their values with new ones, such as replacing the 'Low' values that are marked in green with the new value 'Insignificant':

Cell Properties

The new Cell Properties option in the Editor menu provides options to add a Comment to the selected cell, to Highlight specific cells in the worksheet, and to create Custom Formats. These same options can be accessed by right-clicking on a cell and choosing Cell Properties:

The ability to add a comment to a specific cell is new; in previous versions of Minitab it was possible to add a comment only to a worksheet or column. Now we can select the Comment option to add a comment to a cell:

Notice that the top of the window confirms where the comment will be added. In the example above, it will be added to column C3, row 6.

Similar to conditional formatting, we can use the Highlight options to highlight only the selected cell or cells:

Finally, the Custom Formats option allows us flexibility in terms of the fill color, font color, and style for the selected cell or cells:

Column Properties

This option in the Editor menu allows us to control the order of text strings in a column.  For example, if I create a bar chart using Graph > Bar Chart > Counts of Unique Values using the data in column 1 below, the default output is in alphabetical order:

In some cases, it would be more intuitive to display the bars beginning with Low, then Medium, then High. That is where the Editor menu can help.

First, we click in any cell in column 1 so that the column we want to modify is active, then we select Editor > Column Properties > Value Order:

To change the default alphabetical order, we select the radio button next to User-specified order, edit the order under Define an order, and click OK. Now the default order will be Low, Medium, High, and we can update our bar chart to reflect that change:


Subset Worksheet

One of the best new enhancements to the Editor menu gives us the ability to quickly and easily create a subset of a worksheet without having to manually type a formula into the calculator.

For example, we may want to create a new worksheet that excludes items that are marked as Low priority.  To do that, we can use Editor > Subset Worksheet > Custom Subset:

In this example, we're telling Minitab that we want to use a condition when we subset: we want to Exclude rows that match our condition. Our condition is based on the column Priority. When we choose that text column, Minitab automatically shows all of its unique values. We select Low as the value we want to exclude from the new worksheet, and then click OK. It's that simple: no need to guess whether we need to type single or double quotes in the subset condition!

I hope this series of posts on working with text data has been useful.  If you have an older version of Minitab and would like to use the new features described in these posts, you can download and install the free 30-day trial and check it out!
