The scikit-learn documentation^{1} describes an argument that controls how the decision tree algorithm splits nodes:

criterion : string, optional (default=”gini”) The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.

It seems like something that could be important, since it determines the formula used to partition your dataset at each node.
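For reference, switching between the two is a one-word change when constructing the classifier. A minimal sketch using the iris dataset (assuming scikit-learn is installed):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Fit the same tree under both split criteria and compare training accuracy.
for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, random_state=0)
    clf.fit(X, y)
    print(criterion, clf.score(X, y))
```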

Unfortunately the documentation tells you nothing about which you should use, other than to try each and see what happens, so here’s what I found (spoiler: it doesn’t appear to matter):

- Gini is intended for continuous attributes, and entropy for attributes that occur in classes (e.g. colors)^{2}
- Gini will tend to find the largest class, while entropy tends to find groups of classes that make up ~50% of the data^{2}
- Gini minimizes misclassification^{3}
- Entropy is better for exploratory analysis^{3}
- Some studies show this doesn’t matter – the two differ less than 2% of the time^{4}
- Entropy may be a little slower to compute^{5}
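For concreteness, the two measures are Gini impurity, 1 − Σ pᵢ², and Shannon entropy, −Σ pᵢ log₂ pᵢ, computed over the class proportions pᵢ in a node. A quick sketch in plain Python (the function names are my own) shows why they usually agree on splits:

```python
from math import log2

def gini(proportions):
    """Gini impurity: 1 - sum(p_i^2) over the class proportions."""
    return 1.0 - sum(p * p for p in proportions)

def entropy(proportions):
    """Shannon entropy: -sum(p_i * log2(p_i)), skipping zero proportions."""
    return sum(-p * log2(p) for p in proportions if p > 0)

# Both are 0 for a pure node and maximal for a 50/50 split, so they
# tend to rank candidate splits the same way in practice.
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))  # 0.0 0.0
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 1.0
```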

1. http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
2. http://paginas.fe.up.pt/~ec/files_1011/week%2008%20-%20Decision%20Trees.pdf
3. http://www.quora.com/Machine-Learning/Are-gini-index-entropy-or-classification-error-measures-causing-any-difference-on-Decision-Tree-classification
4. https://rapid-i.com/rapidforum/index.php?topic=3060.0
5. http://stats.stackexchange.com/questions/19639/which-is-a-better-cost-function-for-a-random-forest-tree-gini-index-or-entropy