The first one is the dual #benchmark - comparing all models with both default and tuned #hyperparameters.
Sure, it doesn't make much difference for production deployment, but good defaults are very convenient during #EDA and early experiments.
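Just to illustrate the idea, a tiny sketch of what that dual comparison looks like in practice - the dataset, model, and search grid here are my own assumptions, not the ones from the benchmark:

```python
# Minimal sketch of a "dual benchmark": score the same model once with
# default hyperparameters and once after a small tuning search.
# Dataset, model, and grid are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Default hyperparameters: the convenient #EDA / early-experiment baseline.
default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

# Tuned hyperparameters: a small grid search as a stand-in for real tuning.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
grid.fit(X, y)

print(f"default CV accuracy: {default_score:.3f}")
print(f"tuned   CV accuracy: {grid.best_score_:.3f}")
```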
The second - the model's GitHub implementation _actually_ works out of the box! This should be standard, but for some reason it rarely is for me.