Machine Learning Curriculum

Machine Learning is a branch of Artificial Intelligence dedicated to making machines learn from observational data without being explicitly programmed.

Machine learning and AI are not the same. Machine learning is an instrument in the AI symphony, a component of AI. So what exactly is Machine Learning, or ML? It's the ability of an algorithm to learn from prior data in order to produce a behavior. ML is teaching machines to make decisions in situations they have never seen.

This curriculum is made to guide you in learning machine learning, recommend tools, and help you embrace the ML lifestyle by suggesting media to follow. I update it regularly to maintain freshness and get rid of outdated content and deprecated tools.

Machine Learning in General

Study this section to understand fundamental concepts and develop intuitions before going any deeper.

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. (Tom M. Mitchell)
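
As a concrete illustration of T, P, and E, here is a minimal scikit-learn sketch; the choice of dataset and model is arbitrary:

```python
# A minimal sketch of "learning from experience" with scikit-learn.
# T = classifying iris flowers, E = the training split, P = test accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                           # learn from experience E
print("P (accuracy):", model.score(X_test, y_test))   # measure performance P on task T
```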

Books

Reinforcement Learning

The goal of reinforcement learning is to build a machine (an agent) that senses its environment and, in any given state, chooses the action that maximizes its expected long-term scalar reward.
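
For a concrete flavor, here is a minimal tabular Q-learning sketch; the toy chain environment, reward, and hyperparameters are all made up for illustration:

```python
import numpy as np

# Toy chain environment invented for illustration: states 0..4, actions 0 (left) / 1 (right);
# reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(500):
    s, done = 0, False
    while not done:
        a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward reward + discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q)  # the learned values should prefer action 1 (right) in every state
```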

Deep Learning

Deep learning is a branch of machine learning where deep artificial neural networks (DNNs), algorithms inspired by the way neurons work in the brain, find patterns in raw data by combining multiple layers of artificial neurons. As the layers increase, so does the network's ability to learn increasingly abstract concepts.

The simplest kind of DNN is a Multilayer Perceptron (MLP).
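
For example, a minimal MLP sketch in Keras might look like the following; the layer sizes and the random training data are arbitrary choices for illustration:

```python
import numpy as np
from tensorflow import keras

# A tiny MLP: two hidden layers of artificial neurons stacked on top of each other.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # binary classification output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on random data just to show the API; real data goes here.
X = np.random.rand(256, 16).astype("float32")
y = (X.sum(axis=1) > 8).astype("float32")
model.fit(X, y, epochs=5, verbose=0)
```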

Convolutional Neural Networks

DNNs that work with grid-structured data such as sound waveforms, images, and videos better than ordinary DNNs. They are based on the assumption that nearby input units are more related than distant ones, and they exploit translation invariance: for example, given an image, it is useful to detect the same kind of edge everywhere in the image. They are sometimes called convnets or CNNs.
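
A minimal convnet sketch in Keras, assuming 28x28 grayscale inputs and 10 output classes (both arbitrary choices for illustration):

```python
from tensorflow import keras

# A small convnet: convolution detects local patterns, pooling downsamples.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # same filters slide over the whole image
    keras.layers.MaxPooling2D(),  # downsample; adds some robustness to translation
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),  # e.g. 10 digit classes
])
model.summary()
```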

Recurrent Neural Networks

DNNs that maintain internal state, which lets them handle sequences that vary in length. They are sometimes called RNNs.
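
A minimal recurrent sketch in Keras; the feature dimension and layer sizes are arbitrary, and masking is one common way to let a single model consume padded sequences of varying length:

```python
from tensorflow import keras

# A small LSTM mapping sequences of 8-dimensional feature vectors to one label.
model = keras.Sequential([
    keras.Input(shape=(None, 8)),          # None = sequence length can vary
    keras.layers.Masking(mask_value=0.0),  # ignore zero-padded timesteps
    keras.layers.LSTM(32),                 # the hidden state carries memory across timesteps
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```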

Best Practices

Tools

Libraries and frameworks that are useful for practical machine learning

Frameworks

Machine learning building blocks

  • scikit-learn general machine learning library, high level abstraction, geared towards beginners
  • TensorFlow; Awesome TensorFlow; computation graph framework built by Google, comes with the nice TensorBoard visualization tool, probably the most popular framework nowadays for doing Deep Learning
  • Keras: Deep Learning for humans. Keras is a deep learning API written in Python, running on top of TensorFlow. It's still the king of high-level abstraction for deep learning. Update: Keras is now available for TensorFlow, JAX, and PyTorch!
  • PyTorch Tensors and Dynamic neural networks in Python with strong GPU acceleration. It's commonly used by cutting-edge researchers including OpenAI.
  • Lightning The Deep Learning framework to train, deploy, and ship AI products Lightning fast. (Used to be called PyTorch Lightning)
  • JAX is Autograd and XLA, brought together for high-performance machine learning research (see the short sketch after this list).
  • OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
  • Apache MXNet (incubating) for Deep Learning Apache MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity.
  • Chainer A flexible framework of neural networks for deep learning
  • Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning. There is a specific focus on reinforcement learning, with several contextual bandit algorithms implemented, and the system's online nature lends itself well to the problem.
  • H2O is an in-memory platform for distributed, scalable machine learning.
  • spektral Graph Neural Networks with Keras and TensorFlow 2.
  • Ivy is both an ML transpiler and a framework, currently supporting JAX, TensorFlow, PyTorch and Numpy. Ivy unifies all ML frameworks 💥 enabling you not only to write code that can be used with any of these frameworks as the backend, but also to convert 🔄 any function, model or library written in any of them to your preferred framework!
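
To give a taste of one of these libraries, here is a minimal JAX sketch; the function being differentiated is a toy example:

```python
import jax
import jax.numpy as jnp

# Autograd in one line: jax.grad turns a scalar-valued function into its gradient function.
def loss(w):
    return jnp.sum((w * 2.0 - 1.0) ** 2)

grad_fn = jax.grad(loss)
print(grad_fn(jnp.array([0.5, 1.5])))  # gradient of loss evaluated at w = [0.5, 1.5]

# jax.jit compiles the function with XLA for speed.
fast_grad = jax.jit(grad_fn)
print(fast_grad(jnp.array([0.5, 1.5])))
```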

No coding

  • Ludwig Ludwig is a toolbox that allows users to train and test deep learning models without the need to write code. It is built on top of TensorFlow.

Gradient Boosting

Models that are used heavily in competitions because of their outstanding generalization performance.
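
For a quick taste, here is a minimal sketch using scikit-learn's built-in gradient boosting; the dedicated frameworks (XGBoost, LightGBM, CatBoost) expose similar fit/predict APIs, and the synthetic dataset is for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting fits an ensemble of shallow trees, each correcting its predecessors.
model = HistGradientBoostingClassifier(max_iter=100)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```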

Time Series Inference

Time series data require a dedicated feature-extraction process before they are usable in most machine learning models, because most models expect data in a tabular format. Alternatively, you can use model architectures designed for time series, e.g. LSTM, TCN, etc.
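
For example, a simple way to make a univariate series usable by tabular models is a sliding window, where each row holds the last few observations and the target is the next value; the window size below is an arbitrary choice:

```python
import numpy as np

def sliding_window(series, window=4):
    """Turn a 1-D series into tabular (X, y): each row is `window`
    consecutive values and the target is the value that follows them."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.sin(np.linspace(0, 10, 100))  # toy series for illustration
X, y = sliding_window(series)
print(X.shape, y.shape)  # (96, 4) (96,) -> ready for any tabular model
```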

Life Cycle

Libraries that help you develop/debug/deploy the model in production (MLOps). There is more to ML than training the model.

GPU Cloud

Remember that this is an opinionated list. There are bazillions of cloud providers out there. I'm not going to list them all. I'm just going to list the ones that I'm familiar with and I think are good.

  • https://lightning.ai/ Lightning Studio makes it possible for you to ditch your high-end laptop for developing machine learning models. Just write code in the cloud using VSCode and use their GPUs for training or inference. Lightning Studio is similar to GitHub Codespaces but with GPU.
  • https://modal.com/ Modal lets you run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without your own infrastructure.
  • https://www.runpod.io/ Save over 80% on GPUs. GPU rental made easy with Jupyter for PyTorch, Tensorflow or any other AI framework. I've used it before. Quite easy to use.
  • https://replicate.com/ Run and fine-tune open-source models. Deploy custom models at scale using cog. All with one line of code.
  • https://bentoml.com/ BentoML is the platform for software engineers to build AI products. Deploy using the BentoML package.
  • https://www.baseten.co/ Fast and scalable model inference in the cloud using truss
  • https://lambdalabs.com/ GPU cloud built for deep learning. Instant access to the best prices for cloud GPUs on the market. No commitments or negotiations required. Save over 73% vs AWS, Azure, and GCP. Configured for deep learning with Pytorch, TensorFlow, Jupyter
  • https://www.beam.cloud/ On-demand GPU compute: Train and deploy AI and LLM applications securely on serverless GPUs, without managing infrastructure

Data Storage

Data Wrangling

Data cleaning and data augmentation

Data Orchestration

Data Visualization

Hyperparameter Tuning

Before you begin, please read this blog post to understand the motivation of searching in general: https://www.determined.ai/blog/stop-doing-iterative-model-development

Open your eyes to search-driven development. It will change you. The main benefit is that there are no setbacks; only progress and improvement are allowed. Imagine working and progressing every day, instead of regressing because your new solution doesn't work. This guaranteed progress is what search-driven development will do for you. Apply it to everything in optimization, not just machine learning.

My top opinionated preferences are determined, ray tune, and optuna because of parallelization (distributed tuning on many machines), flexibility (they can optimize arbitrary objectives and allow dataset parameters to be tuned), libraries of SOTA tuning algorithms (e.g. HyperBand, BOHB, TPE, PBT, ASHA, etc.), result visualization/analysis tools, and extensive documentation/tutorials.
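
For a flavor of the API, here is a minimal Optuna sketch based on its canonical quickstart; the objective is a toy function standing in for a real train-and-validate loop:

```python
import optuna

def objective(trial):
    # Each trial samples hyperparameters; here just one float for illustration.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2  # pretend this is a validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)  # should be close to {"x": 2.0}
```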

AutoML

Make machines learn without the tedious task of feature engineering, model selection, and hyperparameter tuning that you have to do yourself. Let the machines perform machine learning for you!

Personally, if I have a tabular dataset, I would try FLAML and mljar first, especially if I want to get something working fast. If you want to try gradient boosting frameworks such as XGBoost, LightGBM, CatBoost, etc. but don't know which one works best, I suggest trying AutoML first, because internally it will try the gradient boosting frameworks mentioned previously.
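
As a minimal sketch of that FLAML workflow (based on its documented quickstart; the dataset and time budget are arbitrary choices):

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

automl = AutoML()
# FLAML searches over models (including gradient boosting learners such as
# XGBoost and LightGBM) and their hyperparameters within the time budget.
automl.fit(X_train=X, y_train=y, task="classification", time_budget=60)
print(automl.best_estimator)  # name of the winning learner
print(automl.best_config)     # its tuned hyperparameters
```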

Model Architectures

Architectures that are state-of-the-art in their fields.

Prompt Engineering

Large language models (LLMs) like GPT-3 are powerful, but they need to be prompted to generate the desired output. This is where prompt engineering comes in: the process of designing prompts that reliably elicit that output.
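
At its simplest, a prompt is just text assembled from a template. The sketch below is a made-up example of the common few-shot pattern; the task and examples are hypothetical:

```python
# A hypothetical few-shot prompt template; the examples steer the model's output format.
def build_prompt(review: str) -> str:
    return f"""Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after a week." -> Negative
Review: "{review}" ->"""

prompt = build_prompt("Setup was painless and it just works.")
print(prompt)  # send this string to any LLM completion API
```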

Nice Blogs & Vlogs to Follow

Impactful People

  • Geoffrey Hinton, often called the godfather of deep learning; with his students he introduced two revolutionary techniques (ReLU and Dropout) that help address the vanishing-gradient and generalization problems of deep neural networks.
  • Yann LeCun, inventor of CNNs (Convolutional Neural Networks), the kind of network that is really popular among computer vision developers today. Currently working at Meta.
  • Yoshua Bengio, another leading deep learning professor; you can watch his TEDx talk here (2017)
  • Andrew Ng, an early champion of using GPUs to make deep learning faster. He taught 2 famous online courses, Machine Learning and the Deep Learning Specialization, on Coursera.
  • Jeff Dean, a Google Brain engineer; watch his TEDx Talk
  • Ian Goodfellow, inventor of GANs (Generative Adversarial Networks) and an OpenAI engineer
  • David Silver, the guy behind AlphaGo and the Atari reinforcement learning game agents at DeepMind
  • Demis Hassabis, CEO of DeepMind, who has given many talks about their AlphaGo and Reinforcement Learning achievements
  • Andrej Karpathy, who teaches convnet classes, wrote ConvNetJS, and produces a lot of content for the DL community; he also writes a blog (see the Nice Blogs & Vlogs to Follow section)
  • Pedro Domingos, author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World; watch his TEDx talk here
  • Emad Mostaque, founder of stability.ai, a company that has released many open source AI models including Stable Diffusion
  • Sam Altman, CEO of OpenAI, the company behind ChatGPT

Cutting-Edge Research Publishers

Steal the most recent techniques introduced by smart computer scientists (could be you).

Practitioner Community

Thoughtful Insights for Future Research

Uncategorized

Other Big Lists

I am confused, too many links, where do I start?

If you are a beginner and want to get started with my suggestions, please read this issue: #4

Disclaimer

From now on, this list is going to be compact and opinionated, shaped by my own real-world ML journey, and I will include only content that I think is truly beneficial for me and most people. All the materials and tools that are not good enough (in any aspect) will be gradually removed to combat information overload, including:

  • materials that are too difficult and offer little intuition; impractical content
  • too much theory without real-world practice
  • low-quality and unstructured materials
  • courses that I wouldn't consider enrolling in myself
  • knowledge or tools that are too niche for most people to use in their work, e.g. deepdream or unsupervised domain adaptation (you can Google these if you need them)
  • tools that have been surpassed by other tools and are no longer state-of-the-art
  • commercial tools that look like they could die any time soon
  • projects that are outdated or no longer maintained

NOTE: There is no particular rank for each link. The order in which links appear does not convey any meaning and should not be treated as a ranking.

How to contribute to this list

  1. Fork this repository, then apply your change.
  2. Make a pull request and tag me if you want.
  3. That's it. If your edit is useful, I'll merge it.

Or you can just submit a new issue containing the resource you want me to include if you don't have time to send a pull request.

The resource you want to include should be free to study.


Built with Spacemacs