Various Optimisation Techniques and their Impact on Generation of Word Embeddings

Written by dataturks | Published 2020/01/08
Tech Story Tags: machine-learning | word-embeddings | word2vec | nlp | text-mining | datasets | good-company | sentiment-analysis

TLDR The topic of interest is the word2vec model for the generation of word embeddings. This covers many concepts of machine learning: we shall learn about a single hidden layer neural network, embeddings, and various optimisation techniques, and then implement the skip-gram model. We use the source code for Word2Vec together with a dataset available in nltk.corpus, a sample of texts from Project Gutenberg.

Welcome to the third part of a five-part tutorial series on Machine Learning and its applications. Check out Dataturks, a data annotation tool that makes your ML life simpler and smoother.
Word embeddings are vector representations assigned to words, such that words with similar contextual usage get similar vectors. What is the use of word embeddings, you might ask? Well, if I am talking about Messi, you immediately know that the context is football… How does that happen? Our brains have associative memories, and we associate Messi with football…
To achieve the same, that is, to group similar words, we use embeddings. Embeddings initially started off with the one-hot encoding approach, where each word gets a position in an array whose length equals the number of unique words in the vocabulary, and a sentence is represented by marking the positions of the words it contains.
Ex: Sentence 1: The mangoes are yellow.
Sentence 2: The apples are red.
The unique words are {The, mangoes, are, yellow, apples, red}. Hence sentence 1 is represented as [1, 1, 1, 1, 0, 0] and sentence 2 as [1, 0, 1, 0, 1, 1].
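To make the encoding concrete, here is a minimal Python sketch of the scheme described above (the vocabulary list and helper function are just for illustration):
# One-hot / bag-of-words encoding of the two example sentences
vocab = ["The", "mangoes", "are", "yellow", "apples", "red"]

def encode(sentence):
    tokens = sentence.rstrip(".").split()
    # 1 if the vocabulary word occurs in the sentence, else 0
    return [1 if word in tokens else 0 for word in vocab]

print(encode("The mangoes are yellow."))  # [1, 1, 1, 1, 0, 0]
print(encode("The apples are red."))      # [1, 0, 1, 0, 1, 1]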
This approach works well for small datasets but doesn’t work efficiently for very large ones. Hence several n-gram models have been implemented for this; we shall not explore that area in this tutorial. The topic of interest is the word2vec model for the generation of word embeddings. This covers many concepts of machine learning: we shall learn about a single hidden layer neural network, embeddings, and various optimisation techniques.
Any machine learning algorithm needs three components working hand in hand: a representation (the classifier or model), an evaluation of the hypothesis, and optimisation of the model for higher accuracy.
In the word2vec model, we have a neural network with a single hidden layer of size N, which is used to obtain word embeddings of dimension N. The embeddings can be visualised as follows…
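One way to picture this: the hidden layer's weight matrix has shape V×N, and each word's embedding is simply its row of that matrix. A minimal numpy sketch (the sizes and index are made up for illustration):
import numpy as np

V, N = 6, 4                            # vocabulary size, embedding dimension
W = np.random.uniform(-1, 1, (V, N))   # weight matrix learned by the network
word_index = 2                         # say, the index of "are" in the vocabulary
print(W[word_index])                   # that word's N-dimensional embedding vector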

Let’s understand the various terminologies…

Continuous Bag of Words Model (CBOW): Introduced by Tomas Mikolov in his paper, this model, in its simplest form, considers only one word per context; it predicts one target word given one context word. Let the vocabulary size be V.
(CBOW model with only one word in context)
The weight matrix between the input layer and the hidden layer can be represented by a V×N matrix; each row of the matrix is the embedding vector of one word. Note that the activation function of the hidden layer in this case is linear. The objective function is the conditional probability of observing the actual output word given the input context word. We need to maximise the objective function, that is, maximise the probability of predicting a word given its context… Simple, right!
CBOW also has a multi-word context version, where instead of a single context word it takes the average of the embeddings of the words in a window of a certain size and feeds that to the network as the hidden-layer input, as in the sketch below.
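To make the forward pass concrete, here is a minimal numpy sketch of the multi-word CBOW step just described: look up the context words' rows in the V×N input matrix, average them to form the hidden vector, project through the hidden-to-output matrix, and apply a softmax. The matrices and indices here are random placeholders, not trained values:
import numpy as np

V, N = 6, 4                               # vocabulary size, embedding dimension
W_in = np.random.uniform(-1, 1, (V, N))   # input-to-hidden weights (the embeddings)
W_out = np.random.uniform(-1, 1, (N, V))  # hidden-to-output weights

context_ids = [0, 2, 3]                   # indices of the context words in the window
h = W_in[context_ids].mean(axis=0)        # hidden layer = average of context embeddings
scores = h @ W_out                        # one score per vocabulary word
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary

target_id = 1
print("p(target | context) =", probs[target_id])
Training adjusts W_in and W_out so that this probability is high for the word that actually appeared in that context.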

Skip-Gram Model

The skip-gram model was introduced in Mikolov et al. and is the opposite of the CBOW model: the target word is now at the input layer, and the context words are on the output layer.
The objective function is the probability of the actual output (context) words, across the C output groups, given the input target word; w_O,c denotes the actual output word in the c-th group of output words.
(Objective function)
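Since the equation image does not reproduce here, the loss being described can be written (following the standard skip-gram formulation, e.g. the word2vec parameter-learning notes) as
E = -\log p(w_{O,1}, w_{O,2}, \dots, w_{O,C} \mid w_I) = -\sum_{c=1}^{C} \log \frac{\exp\left(\mathbf{v}'^{\top}_{w_{O,c}} \mathbf{h}\right)}{\sum_{j=1}^{V} \exp\left(\mathbf{v}'^{\top}_{w_j} \mathbf{h}\right)}
where h is the input (target) word's embedding, v'_w is the output vector of word w, C is the number of context words, and V is the vocabulary size.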
The word2vec model implements skip-gram, and now… let's have a look at the code. Gensim also offers a fast, ready-made word2vec implementation (a short sketch follows), but we shall walk through the source code for a TensorFlow Word2Vec.
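Here is the gensim route mentioned above; a minimal sketch, assuming a tokenised corpus (note that the embedding-size argument is vector_size in gensim 4.x and size in older releases):
from gensim.models import Word2Vec
from nltk.corpus import gutenberg

sentences = gutenberg.sents('austen-emma.txt')            # tokenised sentences
model = Word2Vec(sentences, sg=1, window=5, min_count=5)  # sg=1 selects skip-gram
print(model.wv.most_similar('Emma', topn=5))              # nearest neighbours of a frequent word
Back to the TensorFlow version: let's import all the required libraries and the dataset, available in nltk.corpus. NLTK's Gutenberg corpus is a sample of texts from Project Gutenberg.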
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math
import random

import numpy as np
from six.moves import xrange 
import tensorflow as tf
import nltk
#this is the dataset of interest: Jane Austen's Emma from NLTK's Gutenberg corpus
from nltk.corpus import gutenberg

emma = gutenberg.words('austen-emma.txt')   # run nltk.download('gutenberg') once if needed
vocabulary = [word.lower() for word in emma]

# Cap the vocabulary size; rarer words get mapped to the UNK token below.
vocabulary_size = min(10000, len(set(vocabulary)))
#print(vocabulary[:10], vocabulary_size)
Let's preprocess the dataset by replacing uncommon words with the UNK token.
def build_dataset(words, n_words):
  """Process raw inputs into a dataset."""
  count = [['UNK', -1]]
  count.extend(collections.Counter(words).most_common(n_words - 1))
  dictionary = dict()
  for word, _ in count:
    dictionary[word] = len(dictionary)
  data = list()
  unk_count = 0
  for word in words:
    if word in dictionary:
      index = dictionary[word]
    else:
      index = 0  # dictionary['UNK']
      unk_count += 1
    data.append(index)
  count[0][1] = unk_count
  reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
  return data, count, dictionary, reversed_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(vocabulary,
                                                            vocabulary_size)
del vocabulary  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0
The next step is to generate training batches for the skip-gram model.
# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [skip_window]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[target]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  # Backtrack a little bit to avoid skipping words in the end of a batch
  data_index = (data_index + len(data) - span) % len(data)
  return batch, labels

batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
  print(batch[i], reverse_dictionary[batch[i]],
        '->', labels[i, 0], reverse_dictionary[labels[i, 0]])
Training the skip-gram model lets it learn the language's co-occurrence structure: words used in similar contexts end up with nearby embeddings.
# Step 4: Build and train a skip-gram model.

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64    # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default():

  # Input data.
  train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Ops and variables pinned to the CPU because of missing GPU implementation
  with tf.device('/cpu:0'):
    # Look up embeddings for inputs.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)

    # Construct the variables for the NCE loss
    nce_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Compute the average NCE loss for the batch.
  # tf.nce_loss automatically draws a new sample of the negative labels each
  # time we evaluate the loss.
  loss = tf.reduce_mean(
      tf.nn.nce_loss(weights=nce_weights,
                     biases=nce_biases,
                     labels=train_labels,
                     inputs=embed,
                     num_sampled=num_sampled,
                     num_classes=vocabulary_size))

  # Construct the SGD optimizer using a learning rate of 1.0.
  optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

  # Compute the cosine similarity between minibatch examples and all embeddings.
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
      normalized_embeddings, valid_dataset)
  similarity = tf.matmul(
      valid_embeddings, normalized_embeddings, transpose_b=True)

  # Add variable initializer.
  init = tf.global_variables_initializer()

# Step 5: Begin training.
num_steps = 100001

with tf.Session(graph=graph) as session:
  # We must initialize all variables before we use them.
  init.run()
  print('Initialized')

  average_loss = 0
  for step in xrange(num_steps):
    batch_inputs, batch_labels = generate_batch(
        batch_size, num_skips, skip_window)
    feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

    # We perform one update step by evaluating the optimizer op (including it
    # in the list of returned values for session.run()
    _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += loss_val

    if step % 2000 == 0:
      if step > 0:
        average_loss /= 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step ', step, ': ', average_loss)
      average_loss = 0

    # Note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 100000 == 0:
      sim = similarity.eval()
      for i in xrange(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k + 1]
        print("nearest",nearest)
        log_str = 'Nearest to %s:' % valid_word
        for k in xrange(top_k):
          close_word = reverse_dictionary[nearest[k]]
          print(nearest[k])
          log_str = '%s %s,' % (log_str, close_word)
        print(log_str)
  final_embeddings = normalized_embeddings.eval()
Let’s visualise the embeddings.
# Step 6: Visualize the embeddings with tsne.

def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
  assert low_dim_embs.shape[0] >= len(labels), 'More labels than embeddings'
  plt.figure(figsize=(18, 18))  # in inches
  for i, label in enumerate(labels):
    x, y = low_dim_embs[i, :]
    plt.scatter(x, y)
    plt.annotate(label,
                 xy=(x, y),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')

  plt.savefig(filename)

try:
  # pylint: disable=g-import-not-at-top
  from sklearn.manifold import TSNE
  import matplotlib.pyplot as plt

  tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
  plot_only = 500
  low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
  labels = [reverse_dictionary[i] for i in xrange(plot_only)]
  plot_with_labels(low_dim_embs, labels)

except ImportError:
  print('Please install sklearn, matplotlib, and scipy to show embeddings.')
Optimisation is used to refine the embeddings obtained. Let's review the various techniques that we know and use; I suggest going through a dedicated reference on these optimisers, since the maths is hard to typeset here.
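Trying a different optimiser in the graph above is a one-line change: swap the tf.train.GradientDescentOptimizer for another TF1 optimiser. A minimal sketch (the learning rates are illustrative, not tuned):
# Inside `with graph.as_default():`, replace the optimizer line with one of these:
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)      # SGD (the original)
# optimizer = tf.train.ProximalAdagradOptimizer(1.0).minimize(loss)    # Proximal Adagrad
# optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)             # Adam
# optimizer = tf.train.RMSPropOptimizer(0.001).minimize(loss)          # RMSProp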
(Results for comparison of various optimisers)
Hence, we can conclude that RMSProp and Adam, which are considered state of the art, do not work well on this model, whereas Proximal Adagrad and SGD work really well. Let's see the results of Proximal Adagrad and SGD.
(Proximal Adaptive Gradient Descent Optimizer)
Check how words that often go together are represented close to each other in the images. Also… compare the locations of the numbers in the two images… and decide which optimiser does better accordingly!
(Stochastic Gradient Descent Optimizer)
This is the third tutorial in a five-part series… excited for the next two! Share your thoughts and feedback at lalith@dataturks.com.
