Free 29-part course to learn Machine Learning

Written by keshavdhandhania | Published 2018/06/06
Tech Story Tags: machine-learning | tech | deep-learning | artificial-intelligence


Machine Learning is everywhere. Like mathematics and computer science, it is quickly becoming a widely used tool for making things more effective and efficient, from websites to medical diagnosis.

Today, I’m happy to announce the free Machine Learning Course on Commonlounge. Apart from tutorials on ML concepts and algorithms, the course also includes end-to-end follow-along examples, quizzes, and hands-on projects.

Once done, you will have an excellent conceptual and practical understanding of machine learning and feel comfortable applying machine learning thinking and algorithms in your projects and work.

Here’s a brief overview of the course:

Section #1: Machine Learning concepts (lessons 1–9)

Lesson #1: What is machine learning? Why machine learning?

This tutorial introduces what machine learning is. In particular, it explains how ML differs from general programming, what learning means, and why it is a better way to solve some problems.

Lesson #2–4: Linear Regression with Gradient Descent

We introduce the Linear Regression machine learning model (lesson 2), the simplest ML model there is. In fact, it is so straightforward that the optimal model can be found directly by solving a few equations.

However, we won't do that, because the same approach doesn't carry over to more complex ML models. Instead, we'll train the model using Gradient Descent (lesson 3), a more general optimization method for training machine learning models.

Although both of the above lessons contain many equations, unlike many other courses they also provide plenty of intuition to help you understand and visualize what's going on.

In lesson 4, you get to train your first machine learning model by implementing linear regression with Gradient Descent.

Illustration of Gradient descent for Linear Regression
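
To give a flavor of that exercise, here's a minimal NumPy sketch of linear regression trained with gradient descent; the toy data, learning rate, and variable names are my own, not the course's.

```python
import numpy as np

# Toy data: y is roughly 3*x + 2 plus noise (illustrative, not the course's dataset)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 3 * X + 2 + rng.normal(0, 1, size=100)

# Model: y_hat = w*x + b, trained by gradient descent on the mean squared error
w, b = 0.0, 0.0
learning_rate = 0.01

for step in range(1000):
    y_hat = w * X + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # should end up close to 3 and 2
```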

Lesson #5–6: Overfitting and Regularization

In lesson 5, you’ll learn the concepts of Overfitting and Regularization. Overfitting is one of the most critical ideas in machine learning. Simply stated, overfitting is when the ML model starts to treat noise in the data as a signal.

All ML models overfit to some extent, and it is not feasible to entirely avoid overfitting. This is where regularization comes in. It is the process of mitigating how much an ML model overfits.
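
As a small illustration of the idea (not code from the course), here's how L2 regularization tames an overfit polynomial model using scikit-learn's Ridge; the alpha parameter controls how strongly large weights are penalized.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures

# A handful of noisy points: a degree-10 polynomial can memorize the noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, size=12)

features = PolynomialFeatures(degree=10).fit_transform(x)

unregularized = LinearRegression().fit(features, y)
regularized = Ridge(alpha=1e-3).fit(features, y)  # penalty keeps weights small

print("largest weight without regularization:", np.abs(unregularized.coef_).max())
print("largest weight with regularization:   ", np.abs(regularized.coef_).max())
```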

Lesson 6 is a quiz on these two topics.

Lesson #7: A visual review of ML concepts

This lesson has a short video and a beautiful visualization from r2d3, both of which are excellent ways to review the concepts covered so far.

Lesson #8–9: Types of ML problems

Lesson 8 introduces the various types of ML problems: supervised learning, unsupervised learning, and reinforcement learning. Some examples will help you get a better grasp of them (a short code sketch follows the list):

  • Supervised learning: predicting whether an email is spam or not, or predicting the price of a stock (given input X, predict a value Y).
  • Unsupervised learning: finding groups of similar users based on their behavior (given some data, find patterns in it).
  • Reinforcement learning: training a bot to play a game like Chess or Go (usually we perform a sequence of actions in a world that changes after each step).
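
Here is the promised sketch, using scikit-learn purely as an illustration: supervised models are fit on inputs and known answers, while unsupervised models are fit on inputs alone. (Reinforcement learning needs an environment to interact with, so it's omitted here.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels, only available in the supervised case

# Supervised: given input X and known answers y, learn to predict y for new inputs
classifier = LogisticRegression().fit(X, y)
print("predicted labels:", classifier.predict(X[:5]))

# Unsupervised: given only X, find structure (here, groups of similar points)
clustering = KMeans(n_clusters=3, n_init=10).fit(X)
print("cluster assignments:", clustering.labels_[:5])
```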

Lesson 9 is a quiz on this topic.

Section #2: Supervised ML (lessons 10–20)

Lesson #10–19: Algorithms for Supervised ML

Our course focuses primarily on supervised learning because these methods are the most successful in current applications and most widely used. This section teaches five popular ML algorithms, namely:

  • Logistic Regression
  • K-Nearest Neighbors
  • Support Vector Machine
  • Naive Bayes
  • Recommendation Systems

Each of these is a different ML algorithm, with its own advantages and disadvantages. Factors that influence which algorithm works best include the amount of data available, the number of variables in each data sample, the type of data (text vs. numeric), and more. The algorithms also differ in their computational and memory requirements.

The section includes two quizzes and four hands-on assignments, so you get ample practice solving problems with the above algorithms. For example, we'll see how to classify hand-written digits, classify tweets by sentiment (positive or negative), and recommend movies based on a user's previous ratings.

Famous cheat sheet by scikit-learn, the most popular Python ML library. We'll learn most of the ML algorithms mentioned in the image.
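
As a taste of those assignments, here's a minimal sketch of the hand-written digits task, using scikit-learn's bundled digits dataset and two of the algorithms from this section; the course's own assignments may use different datasets and settings.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Load 8x8 images of hand-written digits and hold out a test set
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Try two of the section's algorithms and compare held-out accuracy
for model in (LogisticRegression(max_iter=2000), KNeighborsClassifier(n_neighbors=3)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))
```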

An Aside: Primary objectives of the course

Note that it is not necessary to learn every ML algorithm the course covers (8 in total); I do, however, recommend learning at least five. The primary objectives of the course are:

  • Learn core ML concepts
  • Learn some ML algorithms (minimum 5)
  • Implement ML algorithms from scratch (minimum 2)
  • Apply ML algorithms for prediction tasks (minimum 2)
  • Do a more extensive ML project (minimum 1)

Lesson #20: Learning = Representation + Evaluation + Optimization

The last lesson in this section, titled “Learning = Representation + Evaluation + Optimization” gives an overview of what every ML algorithm consists of. Each of the algorithms we learned so far makes implicit or explicit choices for how the model is represented, how the model is evaluated, and how the model is optimized.
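
To make the decomposition concrete, here's a toy illustration of how the linear regression setup from lessons 2–4 maps onto these three choices (my own framing in code, not material from the course):

```python
import numpy as np

def represent(x, w, b):
    """Representation: the family of models we can express, here y_hat = w*x + b."""
    return w * x + b

def evaluate(y_hat, y):
    """Evaluation: how we score a candidate model, here mean squared error."""
    return np.mean((y_hat - y) ** 2)

def optimize(x, y, w, b, lr=0.01):
    """Optimization: how we search for a better model, here one gradient descent step."""
    error = represent(x, w, b) - y
    return w - lr * 2 * np.mean(error * x), b - lr * 2 * np.mean(error)
```

Other algorithms swap out one or more pieces: for example, a different representation (a neural network), a different evaluation (log loss), or a different optimizer.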

Section #3: Deep Learning (lessons 21–23)

Although this course doesn’t go too deep into deep learning, no machine learning course is complete without this essential and revolutionary topic.

In lesson 21, we talk about what deep learning is and its relation to machine learning. We then take a look at neural networks (lesson 22). Lesson 23 is a quiz on Deep Learning and Neural Networks.

A neural network with two hidden layers
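
For a concrete (if tiny) counterpart to the figure, here's a sketch using scikit-learn's MLPClassifier; the course's deep learning lessons may use entirely different tools, so treat this as an illustration only.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# hidden_layer_sizes=(32, 32) gives two hidden layers of 32 units each
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```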

Section #4: Unsupervised ML (lessons 24–26)

In this section, we introduce two more machine learning algorithms:

  • K-means Clustering, and
  • Principal Component Analysis

These algorithms are used to find patterns in data (as opposed to predicting a target value from the input). We also introduce the curse of dimensionality and dimensionality reduction, both important concepts when applying ML to real-world datasets.
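
As an illustration, here's what these two algorithms look like in scikit-learn, applied to the same digits data used earlier; the course's own examples may differ.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = load_digits().data  # 64 features per image: fairly high-dimensional

# Dimensionality reduction: compress 64 features down to 2 principal components
X_2d = PCA(n_components=2).fit_transform(X)

# Clustering: find 10 groups of similar images, without ever seeing the labels
clusters = KMeans(n_clusters=10, n_init=10).fit_predict(X_2d)
print("first few cluster assignments:", clusters[:10])
```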

Section #5: Projects and parting notes (lesson 27–29)

We've learned the important ML concepts and used ML algorithms to solve some problems. Now we'll focus on larger projects and the big picture.

Lesson #27: End-to-End Example: Predicting Diabetes

To round things off, we walk through an end-to-end example of applying ML to predict whether or not a patient has diabetes. Beyond the modeling itself, the example covers the stages of a typical ML workflow, such as data exploration and interpreting the trained model.
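
For a rough idea of what such a workflow looks like in code, here's a hedged outline; the file name, column names, and model below are placeholders rather than the course's actual example.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Data exploration: load the dataset and look at summary statistics
data = pd.read_csv("diabetes.csv")  # hypothetical file with an "Outcome" label column
print(data.describe())

X = data.drop(columns=["Outcome"])
y = data["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Modeling: fit a simple classifier and check held-out accuracy
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Interpretation: which features push the prediction toward "has diabetes"?
print(pd.Series(model.coef_[0], index=X.columns).sort_values())
```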

Lesson #28: ML Project Ideas

This is a list of project ideas (with datasets), so that you can take on a more extensive project of your choice. After all, practicing and building your own models is the only real way to learn 😏

Lesson #29: “Folk wisdom”

This lesson is a summary of a brilliant paper by Pedro Domingos, a professor and well-known ML researcher. It distills a number of key lessons ML practitioners have learned over the years. It's a great way to end the course, touching on the various parts of it and how they relate to each other. It is also essential wisdom to take with you as you carry on your ML journey! 🙂

I hope you make it to the end of the course. If you do, you deserve a big round of applause! You will have added a critical tool to your toolbox, and you'll be well equipped to apply it to solve problems better in every project you work on.

Go ahead and get started on the Machine Learning Course, and may all your problems have large datasets! 😄

