Hacker News

I stand corrected! It was my impression that many methods used in ML, such as support vector machines, decision trees, random forests, boosting, bagging, and so on, have very deep roots in frequentist methods, although current CS implementations lean heavily on optimization techniques such as gradient descent.

After a cursory look at Bishop's book, I see that I was wrong, as there are deep roots in Bayesian inference as well.

On another note, I find it very interesting that there isn't a bigger emphasis on using the correct distributions in ML models, as the methods are much more concerned with optimizing objective functions.
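As a side note, the distribution is often hiding inside the objective function. A minimal sketch (my own toy example, not from the thread): for linear regression, minimizing squared error is the same as maximizing the likelihood under an assumed Gaussian noise model, so the least-squares fit is already the MLE for that distribution.

```python
import numpy as np

# Toy data from an assumed model: y = 2x + 1 + Gaussian noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

# Least-squares fit: the pure "optimize an objective" view.
X = np.column_stack([x, np.ones_like(x)])
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gaussian negative log-likelihood for the same linear model,
# dropping terms constant in w:
#   -log p(y | w) ∝ ||y - Xw||^2 / (2 * sigma^2)
# so minimizing it in w is exactly least squares.
def neg_log_lik(w, sigma=0.5):
    resid = y - X @ w
    return 0.5 * np.sum(resid**2) / sigma**2

# Perturbing the least-squares solution only increases the negative
# log-likelihood, consistent with it being the MLE under this model.
print(neg_log_lik(w_ls) <= neg_log_lik(w_ls + 0.01))
```

Choosing a different noise distribution (say, Laplace) would instead correspond to minimizing absolute error, which is one way the "correct distribution" question re-enters through the choice of loss.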



