## Monday, November 3, 2014

### Data Mining in Excel Part 27: Accuracy Chart

Today, we're going to talk about a number of ways to compare models against each other using the Accuracy and Validation segment of the Data Mining ribbon.  Specifically, we'll be using the Accuracy Chart.

*Accuracy Chart*

The Accuracy Chart tells you how large a sample you would need to capture a certain number of the values you're looking for.  Let's say that we want to find customers who will buy our bikes.  We have a sample of 100 customers and we know that 50 of them will buy a bike.  How many of these customers do we need to talk to in order to find our 50 bike buyers?  If we had a perfect model, we would only need to talk to the 50 bike buyers.  If we flipped a coin, i.e. random chance, we would theoretically need to talk to all 100 customers to find the 50 bike buyers.  Therefore, we want our model to be somewhere in between.  The model can't be better than perfect.  But, it needs to do better than random chance.  Let's look at this graphically.

*Accuracy Chart (Mock-up)*

With a perfect model, we know that the first 50 customers we talk to will be bike buyers.  However, since only half of the customers are bike buyers, the random chance model will give us a bike buyer half of the time.  In layman's terms, a steeper line is better.  Let's try to make this using our models.
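The two reference lines can be sketched in a few lines of code.  This is a minimal illustration using the 100-customer / 50-buyer example from above; the function names are ours, not part of the Excel add-in:

```python
# Cumulative gains for the two reference models in an Accuracy Chart.
# Illustrative sketch only: 100 customers, 50 of whom will buy a bike.

def perfect_gains(contacted, total_buyers=50):
    # A perfect model ranks every buyer first, so each contact finds a
    # buyer until all of them are found, then the line goes flat.
    return min(contacted, total_buyers)

def random_gains(contacted, total_customers=100, total_buyers=50):
    # Random chance finds buyers in proportion to how many we contact.
    return contacted * total_buyers / total_customers

print(perfect_gains(50))   # 50 buyers found after 50 contacts
print(random_gains(50))    # 25.0 buyers expected after 50 contacts
```

Any real model's curve should sit between these two lines, and the closer it hugs the perfect line, the better.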
*Select Structure*

First, we want to select the structure that we would like to analyze.  We typically want to choose a structure with multiple models in order to have a real comparison.  However, there's nothing stopping you from using this on a single model.  This will allow you to compare that model against the Perfect Model and Random Chance.  It should be noted that the Accuracy Chart cannot be built using models with the Association Rules or Time Series algorithms.  Let's move on.
*Select Column*

Now, we need to choose which value we are looking for.  In our case, we want Purchased Bike = Yes.
*Select Data Source*

Next, we need to choose our data source.  Remember when we were building the models and we were asked to set aside a Testing Set?  This is where we can access that data.  If you chose not to set aside a Testing Set, you can always specify a new data source as well.  However, it is typically frowned upon to test a model using the same data you used to train it.  Keeping the two sets separate helps guard against "overfitting".  Basically, "overfitting" is when your model is good at predicting data it has seen, but isn't very good at predicting new data.  For more information on overfitting, read this.  Let's check out the results.
*Lift*

The first table we see is a set of lift values.  Basically, Lift is a measure of how good the model is compared to Random Chance.  A lift of less than 100% would be worse than random, while a lift above 100% is better.  For more information on lift, read this.  We see that the basic model, which used the Decision Tree algorithm, was actually the worst model out of all of them, while the Logistic Model was the best.  The Decision Tree model has a lift of 110%.  This means that it would be perfectly acceptable to use this model.  However, since we put in the extra effort to create a few more candidate models, we got an improvement of almost 10%!  This just goes to show that you shouldn't stick with your first model just because it was the easiest to build.  Let's check out the chart.
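As a rough sketch of what the lift numbers mean, lift can be computed as the model's hit rate divided by the random-chance hit rate.  The sample figures below are made up purely to reproduce a 110% lift like the Decision Tree's; they are not the actual counts behind the workbook:

```python
# Illustrative lift calculation (hypothetical counts, not the workbook's data).

def lift(buyers_found, sample_size, total_buyers, total_customers):
    model_rate = buyers_found / sample_size        # buyers per contact, model
    random_rate = total_buyers / total_customers   # buyers per contact, chance
    return model_rate / random_rate                # 1.0 = random; >1.0 = better

# A model that finds 33 of the 50 buyers within a 60-customer sample:
print(round(lift(33, 60, 50, 100), 2))  # 1.1, i.e. 110% of random chance
```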
*Accuracy Chart (Purchased Bike)*

Most of the models are pretty tightly clustered.  This means that there's not a huge difference between them.  However, we can definitely see that the green line (Decision Tree) falls off pretty badly when we look at larger percentages.  The key here is to figure out how many people you plan on sampling, and pick the model that performs best at that percentage.  If you have a couple of close models, you can pick multiple models and combine the results.  For instance, you can pick the top 3 models and average the results.  If 2/3 say Yes, then that person is likely to buy a bike.  If 2/3 say No, then don't waste your time on that person.  Further down in the worksheet is a table with all of the numbers in it if you'd like to look at that.
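The combine-the-top-3 idea described above is just a majority vote, which is simple to sketch.  The function name here is our own shorthand, not anything the add-in provides:

```python
from collections import Counter

# Hypothetical sketch: combine the top 3 models by majority vote.
# If 2 of 3 models say "Yes", treat the customer as a likely bike buyer.

def majority_vote(predictions):
    """predictions: a list of 'Yes'/'No' answers, one per model."""
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["Yes", "Yes", "No"]))  # Yes -> worth talking to
print(majority_vote(["No", "No", "Yes"]))   # No  -> skip this customer
```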

We're finally starting to wrap up our analysis with some Model Validation.  Hopefully, we've piqued your interest to go and use this tool with your data.  Stay tuned for the next post where we'll be talking about the Classification Matrix.  Thanks for reading.  We hope you found this informative.

Director, Consumer Sciences
Consumer Orbit
llewellyn.wb@gmail.com