|Adult Census Income Binary Classification Dataset (Visualize)|
|Adult Census Income Binary Classification Dataset (Visualize) (Income)|
|Two-Class Logistic Regression|
As with many advanced machine learning algorithms, Two-Class Logistic Regression iterates multiple times to ensure that we get the best predictions possible. This means that the algorithm needs to know when to stop. Most algorithms stop whenever the new model no longer deviates significantly from the old model. This is called "Convergence". The "Optimization Tolerance" parameter tells the algorithm how close consecutive models have to be in order for it to stop.
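To see what a convergence tolerance does in practice, here's a minimal sketch using scikit-learn, whose `tol` parameter plays a role analogous to Azure ML's "Optimization Tolerance" (this is an illustrative stand-in, not Azure ML's actual implementation; the exact stopping rules differ between libraries):

```python
# Illustrative sketch: a looser tolerance lets the solver declare
# convergence sooner, typically in fewer iterations than a tight one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

loose = LogisticRegression(tol=1e-1, max_iter=1000).fit(X, y)
tight = LogisticRegression(tol=1e-8, max_iter=1000).fit(X, y)

print("iterations with loose tolerance:", loose.n_iter_[0])
print("iterations with tight tolerance:", tight.n_iter_[0])
```

A tighter tolerance buys more precise coefficients at the cost of more iterations; past a certain point the extra precision rarely changes the predictions.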
The "L1 Regularization Weight" and "L2 Regularization Weight" are used to prevent overfitting. We've talked about overfitting in-depth in the previous post. They do this by penalizing models that contain extreme coefficients.
The "L1 Regularization Weight" parameter is useful for "sparse" datasets. A dataset is considered sparse when every combination of variables is either poorly represented or not represented at all. This is extremely common when dealing with data sets with a small number of observations and/or a large number of variables.
The "L2 Regularization Weight" parameter is useful for "dense" datasets. A dataset is considered dense when every combination of variables is well represented. This is common when dealing with data sets with a large number of observations and/or a small number of variables. You can also think of "dense" as the opposite of "sparse".
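The practical difference between the two penalties can be seen in the coefficients they produce. Here's a hedged sketch using scikit-learn's `penalty` option as a stand-in for Azure ML's regularization weights (the dataset and settings are illustrative assumptions):

```python
# Illustrative sketch: L1 drives many coefficients to exactly zero,
# while L2 only shrinks them toward zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A "sparse" setting: few observations, many variables.
X, y = make_classification(n_samples=60, n_features=40,
                           n_informative=5, random_state=0)

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
l2 = LogisticRegression(penalty="l2", solver="liblinear", C=0.5).fit(X, y)

print("zero coefficients with L1:", int(np.sum(l1.coef_ == 0)))
print("zero coefficients with L2:", int(np.sum(l2.coef_ == 0)))
```

This is why L1 suits sparse data: it effectively discards poorly represented variables, while L2 keeps every variable but with smaller, more stable coefficients.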
The "Memory Size for L-BFGS" parameter determines how much history from previous iterations the optimizer stores. The smaller you set this number, the less history is kept, which makes each iteration cheaper to compute but can weaken the resulting predictions.
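For intuition, SciPy's L-BFGS-B optimizer exposes the same idea through its `maxcor` option, the number of correction pairs (history) kept. This is a hedged illustration of the general L-BFGS memory trade-off, not Azure ML's module:

```python
# Illustrative sketch: minimizing a classic test function with
# different amounts of L-BFGS history.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Curved-valley function commonly used to stress optimizers.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

x0 = np.zeros(5)
small = minimize(rosenbrock, x0, method="L-BFGS-B", options={"maxcor": 2})
large = minimize(rosenbrock, x0, method="L-BFGS-B", options={"maxcor": 20})

# Less history is cheaper per iteration but may need more iterations
# to reach the same solution quality.
print("iterations (small memory):", small.nit)
print("iterations (large memory):", large.nit)
```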
Finally, the "Random Number Seed" parameter is useful if you want reproducible results. If you want to learn more about the Two-Class Logistic Regression procedure or any of these parameters, read here and here.
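The idea behind a seed is simple: any randomized step in training (shuffling, initialization, sampling) produces identical results when started from the same seed. A minimal sketch, using NumPy's random generator as an assumed stand-in for Azure ML's seed parameter:

```python
# Illustrative sketch: the same seed yields the same "random" shuffle,
# which is what makes a seeded experiment reproducible.
import numpy as np

def shuffled_order(n, seed):
    rng = np.random.default_rng(seed)
    return rng.permutation(n)

a = shuffled_order(10, seed=42)
b = shuffled_order(10, seed=42)
print(np.array_equal(a, b))  # True: identical seeds, identical shuffles
```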
We're not sure what you think, but we have no idea what to enter for most of these parameters. The good news is that Azure ML has a module that can optimize these parameters for us. Let's take a look at the "Tune Model Hyperparameters" tool.
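The general technique behind such a tool is a parameter sweep: try many settings, evaluate each, and keep the best model. Here's a hedged sketch of that idea using scikit-learn's `GridSearchCV` (an illustrative analogue, not the Azure ML module itself; the parameter grid is an assumption):

```python
# Illustrative sketch: sweep over candidate hyperparameter settings
# with cross-validation and keep the best-performing combination.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10], "tol": [1e-4, 1e-2]},
    cv=3,
)
grid.fit(X, y)
print("best settings found:", grid.best_params_)
```

Instead of guessing values for the regularization weights or the tolerance, the sweep picks whichever combination scores best on held-out data.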
|Tune Model Hyperparameters|
|Tune Model Hyperparameters (Trained Best Model)|
|Two-Class Logistic Regression (Tuned Hyperparameters)|
|Contingency Table (Two-Class Averaged Perceptron)|
|Contingency Table (Two-Class Boosted Decision Tree)|
|Contingency Table (Two-Class Logistic Regression)|