Machine Learning Notes 1

Written by sahilverma0696 | Published 2018/12/30

From Machine Learning by Tom M. Mitchell

Machine Learning is at the forefront of advancements in Artificial Intelligence, and the field moves fast, with new research coming out every day. This series collects important concepts and notes, from the basics to more advanced material, from the book Machine Learning by Tom M. Mitchell, and will be updated chapter by chapter.

CHAPTER 1: INTRODUCTION

1.1 Well-posed learning problem

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”

  • Learning to recognise spoken words, SPHINX system
  • Learning to drive an autonomous vehicle, ALVINN system
  • Learning to classify new astronomical structures, NASA
  • Learning to play world-class backgammon, TD-Gammon

A checkers learning problem:

  • Task T: playing checkers
  • Performance P: percent of games won against opponents.
  • Training Experience E: playing practice games against itself.

A handwriting recognition learning problem:

  • Task T: recognising and classifying handwritten words within images.
  • Performance P: percent of words correctly classified.
  • Training Experience E: a database of handwritten words with given classifications.

1.2 Designing a learning system

1.2.1 Choosing the training experience

The first design choice is the type of training experience from which our system will learn.

The type of training experience plays an important role in the success or failure of the learner.

  • One key attribute is whether the training experience provides direct or indirect feedback regarding the choices made by the performance system.
  • The second key attribute of the training experience is the degree to which the learner controls the sequence of training examples.
  • The third key attribute of the training experience is how well it represents the distribution of examples over which the final system performance P must be measured.

Beyond the training experience, to complete the design of the learning system we must choose

  1. The exact type of knowledge to be learned.
  2. A representation for this target knowledge.
  3. A learning mechanism.

1.2.2 Choosing the target function

The next step is to determine exactly what type of knowledge will be learned and how the performance program will use it.

Let’s begin with the legal moves: the moves our bot (the model) is allowed to make in a given board state. The bot needs to learn to choose the best move among these legal moves in any situation.

Let’s call this function ChooseMove, which picks the best move for the bot:

ChooseMove : B → M

which takes as input any board state from the set of legal board states B, and produces as output some move from the set of legal moves M.

ChooseMove turns out to be very difficult to learn from indirect training experience, so instead we learn a target function V that assigns a numerical score to board states:

V : B → R

V maps any legal board state in B to some real value in R, and we intend for this target V to assign higher scores to better board states.

i.e.,

  • if b is a final board state that is won, V(b) = 100
  • if b is a final board state that is lost, V(b) = -100
  • if b is a final board state that is drawn, V(b) = 0
  • if b is not a final state, V(b) = V(b’)

where b’ is the best final board state that can be reached from b, assuming optimal play until the end of the game.
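
Because the last case defines V recursively all the way to the end of the game, this definition is nonoperational: evaluating it exactly would mean searching every line of play to a final state. A minimal Python sketch makes this explicit; the three helper functions are hypothetical stand-ins for a game engine, not anything from the book.

```python
# Sketch of the ideal (nonoperational) target function V for checkers.
# The helpers below are hypothetical stand-ins for a real game engine
# and are deliberately left unimplemented.

def is_final(b):
    raise NotImplementedError("game engine: is b an end-of-game state?")

def outcome(b):
    raise NotImplementedError("game engine: 'won', 'lost', or 'drawn'")

def best_reachable_final_state(b):
    raise NotImplementedError("game engine: optimal-play search to the end")

def V(b):
    """Ideal target value of board state b, from the bot's perspective."""
    if is_final(b):
        return {"won": 100.0, "lost": -100.0, "drawn": 0.0}[outcome(b)]
    # Non-final case: V(b) = V(b'), where b' is the best final state
    # reachable from b under optimal play. Finding b' requires searching
    # to the end of the game, which is exactly why the learner must
    # settle for an operational approximation V^ instead.
    return V(best_reachable_final_state(b))
```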

1.2.3 Choosing representation for the target function

We can represent V using a collection of rules that match against features of the board state, or a quadratic polynomial function of predefined board features, or an artificial neural network.

Thus our learning program can represent V^(b) as a linear function of board features:

V^(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

where w0, ..., w6 are numerical coefficients (weights) to be learned, and x1, ..., x6 are features of the board state b: the numbers of black and red pieces, the numbers of black and red kings, and the numbers of black and red pieces threatened by the opponent.
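
As a concrete illustration, here is a minimal Python sketch of this linear evaluation function. The feature extractor board_features is a hypothetical helper (not from the book) that would return [x1, ..., x6] for a given board.

```python
# Sketch of the linear approximation V^(b) = w0 + w1*x1 + ... + w6*x6.
# board_features is a hypothetical helper: it must return the six board
# features [x1, ..., x6] (piece, king, and threatened-piece counts).

def board_features(board):
    raise NotImplementedError("extract [x1, ..., x6] from the board")

def v_hat(board, w):
    """Evaluate a board state with weights w = [w0, w1, ..., w6]."""
    x = board_features(board)                      # [x1, ..., x6]
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
```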

1.2.4 Choosing a function approximation algorithm

Each training example is an ordered pair of the form <b, V_train(b)>.

1.2.5 Estimating training values

Assign the training value V_train(b) for any intermediate board state b to be V^(Successor(b)), where V^ is the bot’s current approximation to V.

Successor(b) is the next board state following b in which it is again the bot’s turn to move.

which can be summarised as:

V_train(b) ← V^(Successor(b))
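
In code, one pass over a finished self-play game might look like the sketch below; successor is a hypothetical helper, and the final state of the game would instead take its training value directly from the outcome (+100, -100, or 0).

```python
# Sketch of estimating training values from one finished self-play game
# using the rule V_train(b) <- V^(Successor(b)). successor is a
# hypothetical helper; v_hat is the linear evaluator sketched above.

def successor(b):
    raise NotImplementedError("next state in which it is the bot's turn")

def training_examples(game_states, w):
    """Return (board, V_train(board)) pairs for the intermediate states."""
    examples = []
    for b in game_states[:-1]:    # the final state takes its value from
        examples.append((b, v_hat(successor(b), w)))   # the game outcome
    return examples
```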

1.2.6 Adjusting the weights

One common approach is to define the best hypothesis, or best set of weights, as the one that minimises the squared error E between the training values and the values predicted by the hypothesis V^:

E = Σ ( V_train(b) − V^(b) )²

summed over the training examples <b, V_train(b)>. The weights can then be adjusted incrementally after each example using the least mean squares (LMS) rule, sketched below.
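
Here is a minimal sketch of one LMS step under those assumptions, reusing the hypothetical board_features and v_hat helpers from the earlier sketches; eta is a small learning-rate constant (Mitchell suggests a small value such as 0.1).

```python
# Sketch of one LMS weight update, which nudges the weights to reduce
# the squared error E = sum((V_train(b) - V^(b))**2). board_features
# and v_hat are the hypothetical helpers from the sketches above.

def lms_update(w, board, v_train, eta=0.1):
    """One step of w_i <- w_i + eta * error * x_i for each weight."""
    error = v_train - v_hat(board, w)
    x = [1.0] + board_features(board)   # constant x0 = 1 pairs with w0
    return [wi + eta * error * xi for wi, xi in zip(w, x)]
```

Because each update moves every weight in proportion to its feature value and the current error, repeated small steps reduce E without ever needing the whole training set in memory.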

1.2.7 Final design

The final design consists of four program modules: the Performance System, which plays games using the learned V^; the Critic, which takes the trace of a game and produces training examples; the Generalizer, which learns the hypothesis V^ from those examples; and the Experiment Generator, which proposes a new practice problem for the next iteration.

1.3 Perspective & Issues in Machine Learning

1.3.1 Perspective:

Machine learning involves searching a very large space of possible hypotheses to determine the one that best fits the observed data.

1.3.2 Issues:

  • Which algorithm performs best for which types of problems & representation?
  • How much training data is sufficient?
  • Can prior knowledge be helpful even when it is only approximately correct?
  • What is the best strategy for choosing a useful next training experience?
  • What specific function should the system attempt to learn?
  • How can the learner automatically alter its representation to improve its ability to represent and learn the target function?

Remember to give this post some 👏 if you liked it. Follow me for more content.

