Prepare, Visualize, Predict: Paris 2024 Olympics with Minitab
How many medals will the United States win in this year's summer Olympics? Read our blog post to find out.
Power and Sample Size – Your Insurance Policy for Statistical Analysis (Insights America 2019)
When we do statistical analyses, such as hypothesis testing and design of experiments, we use a sample of data to answer questions about a whole population. The reliability of those answers depends on the size of the sample we analyze. To minimize the risk of an unreliable analysis, we can run a Power and Sample Size calculation before collecting any data to determine how much data we need for a good chance of detecting an effect of a given size, if it exists. The minimum recommended power is 80%.
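As a rough sketch of the idea, the normal approximation below estimates the sample size per group for a two-sample t-test at 80% power; Minitab's own calculation uses the exact noncentral t distribution, so treat this as an illustration rather than a reproduction of Minitab's result. The function name and the standardized effect size `delta` are assumptions for this example.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample t-test detecting a
    standardized effect size `delta` (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance level
    z_beta = z(power)           # desired power
    n = 2 * ((z_alpha + z_beta) / delta) ** 2
    return math.ceil(n)

# Medium standardized effect (delta = 0.5) at 80% power:
print(sample_size_per_group(0.5))  # → 63 per group
```

Note how the required sample size grows quickly as the effect you want to detect shrinks: halving `delta` roughly quadruples `n`.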
2 Cost-Effective Methods to Improve Customer Satisfaction & Net Promoter Score (APAC_Continuous Improvement)
A data-driven approach garners insights & boosts customer satisfaction. Uncover 2 cost-effective methods to improve customer satisfaction with Minitab.
How To Find The Root Cause Of Customers' Complaints - An Example DMAIC Project (Part 1)
When problems in your service or product affect your customers, you need to identify the cause clearly with data-driven insights and avoid misdirection.
Looking to Learn the Basics of a SWOT Analysis? Start Here!
Have you ever tried to analyze your work or projects to identify areas of unique value? Or are you looking for ways to improve your business? A SWOT analysis can help.
You Use Minitab. Your New Job Doesn't (YET). What Do You Do? (Insights America 2019)
Rafael's previous employers used Minitab, giving him ample opportunity to learn how to design experiments and define variables to optimize detergent formulas effectively. But Rafael's new employer didn't use Minitab, and his boss challenged him to prove results before considering the investment.
Guest Post: It’s Tough to Make Predictions, Especially about the Future (even with Machine Learning) (Insights America 2019)
At their core, most Machine Learning algorithms follow a two-part process. First, a sequence of increasingly complex functions is fit to part of the data (the training set). Then each model in the sequence is evaluated on how well it performs on the data that was held out (the holdout set).
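The two-part process above can be sketched in a few lines of plain Python. Here the "sequence of increasingly complex functions" is a k-nearest-neighbor model with decreasing k (smaller k = a wigglier, more complex fit), and each model is scored on held-out data; the toy dataset and function names are assumptions for this illustration, not anything from a specific library.

```python
import random

random.seed(0)
# Hypothetical toy data: a quadratic trend plus noise.
xs = [i / 10 for i in range(-30, 31)]
data = [(x, x * x + random.gauss(0, 1.5)) for x in xs]
random.shuffle(data)
train, holdout = data[:45], data[45:]  # fit on one part, evaluate on the rest

def knn_predict(x, k):
    """Predict y as the mean of the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(p[1] for p in nearest) / k

def holdout_mse(k):
    """Mean squared error of the k-NN model on the held-out data."""
    return sum((y - knn_predict(x, k)) ** 2 for x, y in holdout) / len(holdout)

# Evaluate a sequence of models, from simplest (k = all points) to
# most complex (k = 1), on the holdout set:
for k in (45, 15, 5, 1):
    print(f"k={k:2d}  holdout MSE={holdout_mse(k):.2f}")
```

Typically the holdout error falls as complexity increases, then rises again once the model starts fitting noise; picking the model with the lowest holdout error is the essence of the evaluation step described above.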