Decision Trees: “Gini” vs. “Entropy” criteria

The scikit-learn documentation1 describes an argument that controls how the decision tree algorithm splits nodes:

criterion : string, optional (default=”gini”)
The function to measure the quality of a split. 
Supported criteria are “gini” for the Gini impurity 
and “entropy” for the information gain.
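Switching between the two is a one-line change when constructing the classifier. A minimal sketch (the iris dataset and 5-fold cross-validation here are just stand-ins for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Train one tree per criterion; every other hyperparameter is identical.
for criterion in ("gini", "entropy"):
    tree = DecisionTreeClassifier(criterion=criterion, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5)
    print(f"{criterion}: mean accuracy = {scores.mean():.3f}")
```

On a toy dataset like this the two means typically come out near-identical, which previews the punchline below.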

It seems like something that could be important, since it determines the formula used to partition your dataset at each node of the tree.

Unfortunately, the documentation doesn’t tell you which one you should use, other than to try each and see what happens, so here’s what I found (spoiler: it doesn’t appear to matter):

  • Gini is intended for continuous attributes, and entropy for attributes that occur in classes (e.g. colors)2
  • “Gini” will tend to find the largest class, and “entropy” tends to find groups of classes that make up ~50% of the data
  • “Gini” to minimize misclassification3
  • “Entropy” for exploratory analysis3
  • Some studies show the choice doesn’t matter – the two criteria disagree less than 2% of the time4
  • Entropy may be a little slower to compute5
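As a quick sanity check on why the two criteria usually agree, here is a minimal sketch of both impurity formulas in pure Python (the two-class probability vectors are made up for illustration): both score a pure node as 0 and peak at a 50/50 mix, so they tend to rank candidate splits the same way.

```python
from math import log2

def gini(probs):
    # Gini impurity: 1 - sum(p_i^2); 0 for a pure node, maximal at uniform.
    return 1.0 - sum(p * p for p in probs)

def entropy(probs):
    # Shannon entropy in bits: -sum(p_i * log2(p_i)); 0 for a pure node.
    return -sum(p * log2(p) for p in probs if p > 0)

# Both measures agree that a pure node is "best" and a 50/50 split is "worst":
for probs in [(1.0, 0.0), (0.9, 0.1), (0.5, 0.5)]:
    print(probs, round(gini(probs), 3), round(entropy(probs), 3))
```

Note that entropy needs a logarithm per class while Gini only needs multiplications, which is why entropy can be slightly slower to compute.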
  1. []
  2. []
  3. [] []
  4. []
  5. []