Nathan.ai newsletter Issue #21 — Part 2/2

Written by NathanBenaich | Published 2017/10/16
Tech Story Tags: machine-learning | deep-learning | self-driving-cars | technology | artificial-intelligence


Reporting from 20th July 2017 through 17th October 2017

Hey there — I’m Nathan Benaich. Following yesterday’s newsletter, here is Part 2 of Issue #21 of my AI newsletter, focusing on the research, resources and startup activity that matter most. Grab your hot beverage of choice ☕ and enjoy the read! A few quick points before we start:

  1. I’m in SF until the weekend — ping me if you want to chat AI research, product or company building. The ☕ is on me!
  2. Neuroscience-inspired artificial intelligence: stimulating algorithmic-level questions about facets of animal learning and intelligence of interest to AI researchers and providing initial leads toward relevant mechanisms.

  3. Numerai’s Master Plan: decentralised AI models controlling global capital.

Referred by a friend? Sign up here. Help share by giving it a tweet :)

🔬 Research

Here’s a selection of impactful work that caught my eye:

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, NYU. Public cloud-based training of machine learning models is very popular. So much so that the price for an Amazon p2.16xlarge instance with 16 GPUs rose to $144/h two days before the NIPS 2017 submission deadline. Transfer learning is another way in which developers circumvent the large data requirements of neural networks. In this paper, the authors show that both the public cloud and the pre-trained models used for transfer learning present new security concerns. They show that a CNN can be backdoored such that it performs well on most inputs but causes targeted misclassifications, or degrades the model’s accuracy, for inputs that satisfy some secret, attacker-chosen property (the “backdoor trigger”). Using a dataset of street signs, they show that a yellow post-it note attached to a stop sign can be reliably recognized by a backdoored network with less than a 1% drop in accuracy on clean (non-backdoored) images. The transfer learning scenario is also vulnerable to backdooring, this time with a U.S. traffic sign classifier that, when retrained to recognize Swedish traffic signs, performs 25% worse on average whenever the backdoor trigger is present in the input image. This work emphasises the importance of verifying the integrity of your cloud infrastructure provider and of your pre-trained models before trusting them in production.
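To make the poisoning idea concrete, here is a minimal sketch (my own toy illustration, not the authors’ code) of how a training set could be backdoored: a small trigger patch is stamped onto a fraction of the images and their labels are flipped to the attacker’s target class. The patch size, position and poisoning fraction are arbitrary choices for illustration.

```python
import numpy as np

def poison_batch(images, labels, target_label, poison_frac=0.1, patch_value=1.0):
    """Stamp a trigger patch onto a random subset of images and relabel them.

    images: float array of shape (N, H, W, C) scaled to [0, 1]
    labels: int array of shape (N,)
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    images[idx, -6:, -6:, :] = patch_value  # 6x6 bright square in the bottom-right corner
    labels[idx] = target_label              # attacker-chosen target class
    return images, labels
```

A network trained on such a mix behaves normally on clean images but learns to associate the trigger with the target class.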

A Distributional Perspective on Reinforcement Learning, DeepMind. Bellman’s equation is traditionally used in reinforcement learning to relate the average reward received now to the average reward expected in the immediate future. However, working with averages ignores the impact that randomness can have on the reward. Here, the authors present a variant of Bellman’s equation that predicts all possible reward outcomes from an action in the reinforcement learning context. By predicting the full distribution over outcomes, we can capture the sources of randomness and better distinguish choices that would otherwise look identical because they share the same average reward. Blog post here.
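The practical version of this idea in the paper is categorical: returns are represented as a probability distribution over a fixed set of support atoms, and the Bellman update shifts and shrinks that support before projecting it back. Below is a rough sketch of that projection step for a single transition, assuming evenly spaced atoms; it is my own simplified reading, not DeepMind’s implementation.

```python
import numpy as np

def project_distribution(p_next, reward, gamma, atoms):
    """Project the shifted return distribution reward + gamma * atoms
    back onto the fixed support `atoms` (assumed evenly spaced)."""
    v_min, v_max = atoms[0], atoms[-1]
    delta_z = atoms[1] - atoms[0]
    m = np.zeros_like(p_next)
    tz = np.clip(reward + gamma * atoms, v_min, v_max)            # shifted and shrunk atoms
    b = np.clip((tz - v_min) / delta_z, 0, len(atoms) - 1)        # fractional index on the support
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(len(atoms)):
        if lower[j] == upper[j]:                                  # lands exactly on an atom
            m[lower[j]] += p_next[j]
        else:                                                     # split the mass between neighbours
            m[lower[j]] += p_next[j] * (upper[j] - b[j])
            m[upper[j]] += p_next[j] * (b[j] - lower[j])
    return m
```

With something like `atoms = np.linspace(-10, 10, 51)`, this is essentially the target computation behind the 51-atom agent evaluated in the paper.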

Imagination-augmented agents, DeepMind. In two papers (here and here), the authors explore approaches to endow AI agents with imagination. The motivation is that we humans are able to use our imagination of the future to influence the actions we decide to take within a given context. Systematically analysing how actions lead to future outcomes is key to reasoning and planning. To do so, the authors introduce an imagination encoder: a neural network that learns to extract any information useful for the agent’s future decisions, while ignoring what is not relevant. According to the paper, “These agents use approximate environment models by “learning to interpret” their imperfect predictions. The algorithm can be trained directly on low-level observations with little domain knowledge, similarly to recent model-free successes. Without making any assumptions about the structure of the environment model and its possible imperfections, this approach learns in an end-to-end way to extract useful knowledge gathered from model simulations — in particular not relying exclusively on simulated returns.” In this way, the agent benefits from model-based imagination without the pitfalls of conventional model-based planning. The authors show that imagination-augmented agents perform better than model-free baselines in various domains including Sokoban. The agents do so with less data, even with imperfect models, a significant step towards delivering on the promises of model-based RL.
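At a high level, the architecture combines a standard model-free path with encodings of rollouts “imagined” by a learned environment model. The PyTorch sketch below captures that wiring only; the layer sizes, rollout format and all names are my own placeholders rather than the paper’s architecture.

```python
import torch
import torch.nn as nn

class ImaginationAugmentedAgent(nn.Module):
    """Minimal sketch: a model-free path plus a rollout encoder whose summaries
    of imagined trajectories are concatenated before the policy head."""

    def __init__(self, obs_dim, n_actions, n_rollouts=3, hidden=64):
        super().__init__()
        self.model_free = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.rollout_encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.policy = nn.Linear(hidden + n_rollouts * hidden, n_actions)

    def forward(self, obs, imagined_rollouts):
        # obs: (batch, obs_dim); imagined_rollouts: list of n_rollouts tensors of shape (batch, T, obs_dim)
        mf = self.model_free(obs)
        codes = [self.rollout_encoder(r)[1].squeeze(0) for r in imagined_rollouts]
        return self.policy(torch.cat([mf] + codes, dim=-1))
```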

Neural Optimizer Search with Reinforcement Learning, Google Brain. Training neural networks requires optimising model parameters using methods like stochastic gradient descent or Adam. The right optimiser makes model training easier and faster. In this work, the authors use reinforcement learning to search a space of well-known primitives for better update rules, instead of hand-designing them. The framework makes use of a recurrent neural network (the “controller”) that generates a mathematical expression for the update rule rather than numerical updates. Each generated rule is then applied to a neural network to estimate its performance. In turn, this performance is used to update the controller so that it generates improved update rules over time. Experiments are conducted on CIFAR-10, machine translation and ImageNet. Interestingly, the paper shows that discovered update rules can be transferred from one network to another and still improve performance.
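The rules the controller discovers are built from simple primitives such as the gradient, its running average and their signs. As a flavour of what such a rule looks like, here is a rough numpy sketch of a sign-based update in the same spirit; the exact forms discovered in the paper may differ, so treat this formula as illustrative only.

```python
import numpy as np

def sign_style_update(w, g, m, lr=0.01, alpha=2.0, beta=0.9):
    """Scale the step up when the gradient g agrees in sign with its running
    average m, and down when they disagree (illustrative, not the paper's rule).

    w: parameters, g: current gradient, m: running average of past gradients
    """
    m = beta * m + (1 - beta) * g                             # update the moving average
    step = lr * np.power(alpha, np.sign(g) * np.sign(m)) * g  # agreement boosts the step size
    return w - step, m
```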

SMASH: One-Shot Model Architecture Search through HyperNetworks, Heriot-Watt University and Renishaw plc. Continuing the theme of automating machine learning itself, this paper tackles the costly process of engineering and validation required to find the best architecture for a given problem. The authors propose an alternative to random search, Bayesian optimisation, evolutionary techniques and reinforcement learning. Instead, they train an auxiliary network, a HyperNet, that generates candidate weights for a specific test architecture sampled at training time. The entire system is trained end-to-end using backpropagation. When the model has finished training, the authors sample a number of random architectures and evaluate their performance on a validation set using weights generated by the HyperNet. The architecture with the best estimated validation performance is selected and trained normally. The method is explicitly designed to evaluate a wide range of model configurations (in terms of connectivity patterns and units per layer) but does not address other hyperparameters such as regularization, learning rate schedule, weight initialization or data augmentation. Unlike the aforementioned evolutionary or RL methods, this approach explores a somewhat pre-defined design space, rather than starting with a trivial model and designating a set of available network elements.
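The selection loop at the end is the key trick: because the HyperNet supplies weights for free, ranking candidate architectures only costs a forward pass over the validation set. A schematic sketch is below; `hypernet`, `sample_architecture` and `evaluate` are hypothetical interfaces standing in for the paper’s components.

```python
def smash_style_search(hypernet, sample_architecture, evaluate, n_candidates=100):
    """Rank randomly sampled architectures using HyperNet-generated weights and
    return the best one for normal training (schematic, not the authors' code)."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_candidates):
        arch = sample_architecture()      # random connectivity pattern and layer widths
        weights = hypernet(arch)          # HyperNet maps the architecture to candidate weights
        score = evaluate(arch, weights)   # validation accuracy with the generated weights
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch
```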

Other highlights:

  • Arguing Machines: Perception-Control System Redundancy and Edge Case Discovery in Real-World Autonomous Driving, MIT. In this research, the authors run two perception-control systems side by side in a Tesla Model S to demonstrate a fall-back mechanism that brings the human into the loop when the two systems disagree, indicating a likely edge case (a toy version of this check is sketched after this list). The first system is the car’s Level 2 Autopilot steering system and the second is an end-to-end neural network trained to make steering decisions from a sequence of images captured by an onboard monocular camera.
  • Deep reinforcement learning that matters, McGill and Microsoft Maluuba. The RL world has seen an explosion of activity, with almost triple the number of papers published today versus 10 years ago. Good science, however, requires an obsessive focus on reproducibility, which is currently lacking in RL (and in many other fields of ML). The paper highlights the key questions to address.
  • Berkeley AI Research (BAIR) launched a blog that brings together work from across the University, which is particularly strong in computer vision, machine learning, natural language processing, planning and robotics.
  • Reproducibility in science is a big topic that is generally undervalued because researchers are largely strapped for time and cash. However, it’s clear that without it, science becomes worthless. Hugo Larochelle presented his view on the subject, arguing that we should open source the entire research process. I’m with you!
  • PROWLER.io released Tunable AI, an approach to tailoring the behaviour of learning agents. This is accomplished by tasking an agent with finding optimal behavioural policies that maximise reward while penalising it for using more resources than we desire. A scalar parameter that weights the penalty signal then allows for a continuous scale of learning outcomes along the spectrum of rationality.
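As promised above, here is a toy version of the “arguing machines” check from the MIT item: two independently produced steering commands are compared and, when they diverge by more than a threshold, the frame is flagged as a potential edge case and control is handed back to the human. The threshold and units are illustrative, not taken from the paper.

```python
def needs_human_handoff(steering_a_deg, steering_b_deg, threshold_deg=2.0):
    """Flag a frame as an edge case when two independent perception-control
    systems disagree on the steering command by more than a threshold."""
    return abs(steering_a_deg - steering_b_deg) > threshold_deg

# Example: Autopilot suggests 1.5 degrees, the end-to-end network suggests 6.0 degrees
if needs_human_handoff(1.5, 6.0):
    print("Systems disagree: alert the driver and log the frame for review.")
```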

📑 Resources

Jeff Dean of Google gave a talk to YC on AI, in which he describes many current approaches and applications within and outside of Google. A really good resource for anyone needing to present on the topic!

Andrew Ng published seven video interviews with the “heroes of deep learning”, including Geoff Hinton, Yoshua Bengio, Pieter Abbeel and more! Here you realise how strong the Montreal/Toronto/Stanford axis is in training many of the talented researchers in AI.

The Cylance data science team, which protects organisations from cyberattacks using machine learning, released an e-book for cybersecurity professionals. In it, they offer practical, real-world and approachable instruction on how ML can be used against cyberattacks, covering clustering, classification and deep learning methods.

Designing a deep learning project: an outline

Shakir Mohamed and Danilo Rezende published their brilliant tutorial on generative models from UAI 2017 in Australia.

Tutorial on hardware architectures for deep neural networks. This microsite provides an overview of DNNs, discusses the tradeoffs of the various architectures that support DNNs including CPU, GPU, FPGA and ASIC, and highlights important benchmarking/comparison metrics and design considerations.

Following the deprecation of Theano, Facebook and Microsoft released the Open Neural Network Exchange (ONNX) format, which gives engineers the flexibility to interoperate between machine learning frameworks (namely Caffe2 and PyTorch). I’m not sure this is as big a deal as it sounds, given that opinionated software development, in terms of languages and frameworks, dominates at successful companies.
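For a sense of the workflow ONNX targets, here is a minimal sketch of exporting a PyTorch model to the ONNX format so another runtime (Caffe2, in the launch announcement) can load it. The model choice and file name are arbitrary, and this assumes a torch/torchvision install with ONNX support.

```python
import torch
import torchvision

# Take a pre-trained vision model and trace it with a dummy input of the right shape.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Serialise the traced graph and weights to an ONNX file that other frameworks can import.
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```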

As we know very well, AI is not a ‘thing’ you can simply tag onto your software product, nor something you can build from scratch in a few weeks. In this piece, the former VP of Data at Jawbone takes us through the AI hierarchy of needs, explaining the components of a successful system build and implementation.

Apple launched its ML research blog in late July and has since released five pieces on speech, synthetic images, and OCR.

There’s lots of talk in the machine learning world about data structures: should one use a columnar or a graph database to best represent the relationships between data points and features? The choice has implications for the hardware used to train models. In this piece, the authors explore the roots of graph theory (no pun intended) and how these structures work.
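As a very small illustration of the difference (toy data of my own, not from the article): the same relation can be stored as columns, which favour scans and aggregation, or as an adjacency structure, which favours traversals.

```python
# Columnar layout: one array per field; cheap to scan, aggregate and compress.
edge_src = ["alice", "alice", "bob"]
edge_dst = ["bob", "carol", "carol"]

# Graph layout (adjacency list): relationships are first-class; cheap to traverse.
adjacency = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": [],
}

# A two-hop question ("who do alice's connections connect to?") is a natural traversal.
second_hop = {node: adjacency[node] for node in adjacency["alice"]}
print(second_hop)  # {'bob': ['carol'], 'carol': []}
```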

An Airbnb data scientist explains how the company uses ML to predict the value of homes on the marketplace. Of particular interest is the team’s investment in infrastructure that reduces the overhead and time required for feature engineering, model development, prototyping and translating notebooks into product.

Sentient Technologies released an open source Python framework (Studio.ml) for ML model management that is designed to minimize the overhead involved with scheduling, running, monitoring and managing artifacts of your machine learning experiments.

Ravelin, the platform for fraud detection that uses machine learning, graph networks and human insight, published a post on their technology stack.

SigOpt presents an article on multimetric optimisation using Bayesian optimisation versus random search. Their approach finds many more efficient hyperparameter configurations than randomly sampling 10 times as many points, and intelligently learns how to manoeuvre around the twelve-dimensional hyperparameter space.

Reinforcement learning for complex goals — a tutorial that looks at the kinds of problems RL can solve and the benefits that can come from reformulating tasks in new contexts, especially a multi-goal approach.

Machine learning for humans: plain-English explanations, code, maths and real-world examples! In a similar, albeit not-so-brief, way, here’s a Brief Introduction to Machine Learning for Engineers.

💰 Venture capital financings and exits

321 deals (66% US and 23% EU) totalling $1.58bn (80% US and 16% EU).

Databricks, a cloud-based collaborative workspace that unifies data science, engineering and business, with managed serverless cloud infrastructure, raised a $140M Series C led by Andreessen Horowitz. The Databricks team is known for creating Apache Spark.

Cambricon, a Chinese semiconductor company focused on deep learning hardware, raised a massive $100M Series A led by State Development and Investment along with Alibaba Group and Lenovo. Details on its technology are scarce (not entirely surprising given the space). The company was founded in 2016 and is already valued over $1bn on paper. Goes to show how much capital is looking for returns in China.

Brain Corporation, the San Diego company building technology to allow robots to perceive their environment, learn to control their motion, and navigate using visual cues and landmarks while avoiding people and obstacles, raised a $114M Series C led by the SoftBank Vision Fund.

Momenta, the Beijing-based company developing software for perception, HD semantic mapping and path planning, raised a $46M Series B from Sinovation Ventures, Daimler and others.

In other news, Descartes Labs raised a $30M Series C for their geospatial analysis platform; Prowler.IO raised a $13M Series A for their principled AI decision-making platform; Five.AI raised a $35M Series A to march forward with their UK-based self-driving fleet; Onfido raised a $30M round for their identity verification platform; and JingChi raised a $52M pre-Series A to build self-driving cars in China.

32 acquisitions, including:

Deere & Company acquired Blue River Technology, a US company founded in 2011, for $305M. Blue River offers robotics equipment that can automatically recognize plants and decide which crop plants to thin or which weeds to eliminate, enabling farmers to use more sustainable farming methods. Blue River employed 60 people and raised $30M in total, with its last round being a $17M Series B valuing the company at $88M post-money in 2015.

IHS Markit acquired automotiveMastermind from JMI Equity for $392M. The company offers predictive analytics and marketing automation software for the automotive industry to improve the buying experience. It was founded in 2012, works with over 1,000 dealers across 15 automotive brands and employs 224 people.

Amazon acquired Body Labs for $60M. The company produces 3D human models from scans, measurements and photos of an individual’s body, enabling users to analyze human body shape, size and motion. Body Labs was founded in 2013 and raised $10M, with its last round being a Series A valuing the company at $22M post-money. The team numbered 26 in total.

Nasdaq acquired Sybenetix, a London-based company detecting malicious behaviour in the securities divisions of financial institutions. Price was undisclosed.

Qualcomm acquired Scyfer, a machine learning consultancy and product company in The Netherlands that was co-founded by Max Welling, who is known for his work on generative models. Price was undisclosed.

Congratulations on reaching the end of Issue #21 Part 2/2!

Anything else catch your eye? Just hit reply!

