| data | +matrix | Set of points to classify. | - |
| predictions | -vector | Vector to be filled with the predicted class for each point. | - |
| probabilities | -matrix | Matrix to be filled with the class probabilities for each point. | - |
---
## **_train/10_**
Train the decision tree on the given data, assuming that all dimensions are numeric.
This overwrites the given model. Setting minimumLeafSize and minimumGainSplit too small may cause the tree to overfit, while setting them too large may cause it to underfit.
```prolog
%% part of the predicate definition
train(+pointer(float_array),+integer,+integer,
+pointer(float_array),+integer,
+integer,+integer,+float32,+integer,
[-float32]).
```
### Parameters
| Name | Type | Description | Default |
|------|------|-------------|---------|
| dataset | +matrix | Training dataset. | - |
| labels | +vector | Training labels. | - |
| numClasses | +integer | Number of classes in the dataset. | - |
| minimumLeafSize | +integer | Minimum number of points in each leaf node. | 20 |
| minimumGainSplit | +float | Minimum gain for node splitting. | 1e-7 |
| maximumDepth | +integer | Maximum depth of the tree (0 means no limit). | 0 |
| entropy | -float | The final entropy of the decision tree. | - |
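
The raw train/10 predicate above takes flat float arrays plus their dimensions. A minimal usage sketch is given below; the conversion helper and the exact argument layout (array size vs. row count) are assumptions about how the binding marshals Prolog lists into float arrays, not confirmed API:

```prolog
%% Hypothetical sketch: convert_list_to_float_array/3 and /4 are assumed
%% helper predicates; only train/10's signature is taken from this document.
%% 4 points with 3 numeric dimensions each, 2 classes.
example_train(Entropy) :-
    Data   = [5.1, 3.5, 1.4,
              4.9, 3.0, 1.4,
              6.2, 2.9, 4.3,
              6.3, 3.3, 4.7],
    Labels = [0.0, 0.0, 1.0, 1.0],
    convert_list_to_float_array(Data, 3, array(Xsize, Xrows, X)),
    convert_list_to_float_array(Labels, array(Ysize, Y)),
    %% numClasses=2, minimumLeafSize=1, minimumGainSplit=1.0e-7, maximumDepth=0
    train(X, Xsize, Xrows, Y, Ysize, 2, 1, 1.0e-7, 0, Entropy).
```

With such a small dataset, minimumLeafSize is lowered from its default of 20 to 1 so that the tree can actually split.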
---
# Connected Links/Resources
For a more detailed explanation, see the Python documentation, which usually gives a good description of how the methods work and what the parameters do.