Trained hyper-parameters:

    params = {
        'base_score': 0.5,
        'booster': 'gbtree',
        'colsample_bytree': 0.5,
        'eta': 0.4,
        'eval_metric': ['error', 'mae'],
        'max_depth': 30,
        'nthread': 4,
        'objective': 'binary:logistic',
        'reg_alpha': 0.001,
        'reg_lambda': 0.4,
        'scale_pos_weight': 8.85,
        'max_delta_step': 1,
        'seed': 12345,
        'silent': 0,
        'subsample': 0.5
    }

Number of trees: 110

Evaluation results:
    best iteration: [109]
        tr-error:  0.02285     tr-mae:  0.039291
        val-error: 0.307175    val-mae: 0.330359

Trained on ../mat_training.train:
    mae: 0.0393    rmse: 0.1355
    avg: 0.102133    mae: 0.1834    rmse: 0.3028
    Predicted correctly: 3921416/4013117

Data set statistics:

Training:
    ONLY data set generated from veriT(M):
        nb_negatives: 1716339 (88.71%)
        nb_positives:  218424 (11.29%)
        nb_examples:  1934763    ratio (neg/pos): 7.8578
    ALL (union of the data sets generated from veriT + veriT(M)):
        nb_negatives: 3603247 (89.79%)
        nb_positives:  409870 (10.21%)
        nb_examples:  4013117    ratio (neg/pos): 8.7913

Test:
    ONLY data set generated from veriT(M):
        nb_negatives:  751098 (87.96%)
        nb_positives:  102806 (12.04%)
        nb_examples:   853904
    ALL (union of the data sets generated from veriT + veriT(M)):
        nb_negatives: 1718075 (91.42%)
        nb_positives:  161271 (8.58%)
        nb_examples:  1879346
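
The hyper-parameter dictionary above is in the format expected by the XGBoost Python API. A minimal sketch of how such a model could be trained and scored follows, assuming the data files are in LibSVM format; only ../mat_training.train appears in the log, so the validation file name and the 0.5 decision threshold used to reproduce the "Predicted correctly" count are assumptions.

    import numpy as np
    import xgboost as xgb

    # Hyper-parameters as listed above.
    params = {
        'base_score': 0.5, 'booster': 'gbtree', 'colsample_bytree': 0.5,
        'eta': 0.4, 'eval_metric': ['error', 'mae'], 'max_depth': 30,
        'nthread': 4, 'objective': 'binary:logistic', 'reg_alpha': 0.001,
        'reg_lambda': 0.4, 'scale_pos_weight': 8.85, 'max_delta_step': 1,
        'seed': 12345, 'silent': 0, 'subsample': 0.5,
    }

    # ../mat_training.train is the path from the log; the validation path is hypothetical.
    dtrain = xgb.DMatrix('../mat_training.train')
    dval = xgb.DMatrix('../mat_validation.val')  # assumed file name

    # Eval names 'tr'/'val' reproduce the tr-error/val-error lines in the log.
    evals = [(dtrain, 'tr'), (dval, 'val')]
    bst = xgb.train(params, dtrain, num_boost_round=110, evals=evals)

    # Threshold the predicted probabilities at 0.5 (assumed) to count correct predictions.
    preds = bst.predict(dtrain)
    labels = dtrain.get_label()
    correct = int(np.sum((preds > 0.5) == (labels > 0.5)))
    print(f'Predicted correctly: {correct}/{len(labels)}')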