Welcome to the GitHub repository for the course Introduction to Machine Learning at Uppsala University. This repository contains all necessary material and information for the course.
The course is given to second-year master's students in the Statistics master's program.
The course takes roughly 20h per week, or roughly 200h in total. Students aiming for VG may need more time.
See course syllabus here.
The course assumes basic knowledge of linear algebra, calculus, probability theory, and programming (in R or Python).
You can find a rough course plan with reading instructions here.
You can find the course schedule on TimeEdit here (search for course code 2ST129).
The course is graded U (Underkänd/Fail), G (Godkänd/Pass), or VG (Väl godkänd/Pass with distinction).
To pass the course, you must pass all assignments and the mini-project.
To get the grade VG on the course, a total of 6 or more VG points is needed. Each assignment has an additional task; completing it gives VG on the assignment (and one VG point). If the final mini-project gets VG, it is worth 2 VG points. For example, VG on four assignments plus VG on the mini-project gives 4 + 2 = 6 VG points, enough for VG on the course. VG points are only awarded at the first deadline of each assignment.
Note that aiming for VG will likely require more hours than the expected 20h per week, especially if you are not already well versed in the prerequisites.
Grades are not subject to appeal. However, a grading decision must be reassessed if it is clearly incorrect, and grades can never be lowered. Students who want a grade reassessed should contact the course administration, who will distribute a reassessment form for the student to fill out.
If you fail or drop out of the course, you will need to retake all assignments and redo the mini-project the next time the course is given.
Below are the main references for the course. All books are available free online. Some articles may need access from an Uppsala University network.
- ESL: Hastie, Trevor, Tibshirani, Robert, and Friedman, Jerome. The elements of statistical learning: data mining, inference, and prediction. 2nd Edition. Springer Science & Business Media, 2009. online access
- DL: Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. MIT Press, 2016. online access
- DLR: Chollet, François, and Joseph J. Allaire. Deep Learning with R. Manning, 2018. online access
- RL: Sutton, R. S., and Barto, A. G. Reinforcement learning: An introduction. MIT Press, 2020. online access
In addition, the following material is included (note that you might need to access it through the Uppsala University network):
- Ruder (2016). An overview of gradient descent optimization algorithms. online access
- Efron (2020). Prediction, Estimation, and Attribution. Journal of the American Statistical Association, 115(530), 636-655. online access Note! You need to be on the Uppsala University network to access the article.
- Salganik, M. J. et al. Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the National Academy of Sciences Apr 2020, 117 (15) 8398-8403; DOI: 10.1073/pnas.1915006117 online access
- Precision and Recall at Wikipedia.
- Kingma, D. P., and Welling, M. An Introduction to Variational Autoencoders, 2019. online access
- Smyth, P. The EM algorithm for Gaussian Mixtures, 2020. online access
- Alammar, J. Visualizing neural machine translation mechanics, 2018a. online
- Alammar, J. The illustrated Transformer, 2018b. online
- Alammar, J. The illustrated BERT, ELMo, and co. (How NLP cracked transfer learning), 2018c. online
- Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. online
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., 2017. Attention is all you need. In Advances in neural information processing systems (pp. 5998-6008). online
- Olah, C. 2015. Understanding LSTM Networks. online
- Ng, A. 2019. The EM algorithm. CS229, Lecture Notes online
- Griffiths, T. L., and Steyvers, M. 2004. Finding scientific topics. online
- Blei, D. 2012. Probabilistic Topic Models online
- Rocca, J. 2019. Understanding Variational Autoencoders online.
- Verma, S. and Rubin, J. (2018) Fairness definitions explained. online
- Beckman, L., Rosenberg, J., and Jebari, K. (2022) Artificial intelligence and democratic legitimacy. The problem of publicity in public authority. online
- Chen, T., and Guestrin, C. (2016) XGBoost: A Scalable Tree Boosting System. online
The literature list might change slightly during the course.
- ISLV: Hastie and Tibshirani, An Introduction to Statistical Learning (video material). online access
- 3B1B1: 3Blue1Brown on Neural Networks. online access
- 3B1B2: 3Blue1Brown on Convolutions. online access
- Dirac, L. (2019) LSTM is dead! Long live Transformers. online access
- Ng, A. (2017) One convolutional layer. online access
- Hand, Paul, Variational Autoencoders online
- Read the literature according to the rough course plan
- Watch the videos for more in-depth knowledge/understanding (optional)
- Do the assignments
Online course discussions will be held through Slack. See Studium for details on how to log in.
The course will have two guest lectures, which the guest lecturers will give through Zoom; otherwise, the course is held on campus. You can find information and support for students on Zoom here.
Main teacher: Måns Magnusson
Teaching assistant: Andreas Östling
The course consists of roughly 8 blocks (weeks) of material. Each week consists of the following (expected workload in parentheses):
- Two lectures/computer labs (approx. 3h per week)
- Online video material and reading assignments (approx. 4h per week)
- An individual computer assignment (approx. 13h per week)
Each week, an individual computer assignment focuses on implementing the main parts of the material. Each assignment is completed individually and should follow the computer assignment template.
Students should turn in the computer assignments no later than Sunday 23.59 each week. For a detailed list of deadlines, see the rough course plan.
There are two supplementary turn-in occasions for assignments: the last day of the course and roughly 2-4 weeks after the course ends. After the last turn-in occasion, no further chances are given; you will fail the course and need to retake it next year.
Each assignment will be graded and evaluated within 10 working days.
To pass an assignment, 75% of the points are needed. Similarly, 75% of the points on the VG task are needed to get a VG point.
A guest lecture on AI and ethics will be given by Karim Jebari and Holli Sargeant.
The last two weeks focus on a course project in which groups of 2-3 students choose their own data and create a supervised machine learning predictive model for a real-world dataset.
You can find details and instructions on the project work here.
Frequently asked questions will be collected here.