Nathan.ai newsletter Issue #21 — Part 1/2

Written by Nathan Benaich | Published 2017/10/16
Tech Story Tags: self-driving-cars | machine-learning | artificial-intelligence | deep-learning | technology


Reporting from 20th July 2017 through 16th October 2017

Hi there! I’m Nathan Benaich — welcome to issue #21 of my AI newsletter. Here, I’ll synthesise a narrative analysing important news, data, research and startup activity from the AI world. I’m going to split this edition into two parts because of the long time period I’m covering. Next issues will be more frequent, promise! Grab your hot beverage of choice ☕ and enjoy the read! A few quick points before we start:

1. Avoid being a trophy data scientist — a worthy read for managers looking to transform their company into a data-driven organisation.

2. How will we create the data we need to power the era of autonomous driving? Collaborative mapping combined with computer vision.

3. TwentyBN demonstrate a system that observes the world through live video and automatically interprets the unfolding visual scene.

Referred by a friend? Sign up here. Help share by giving it a tweet :)

🆕 Technology news, trends and opinions

🚗 Department of Driverless Cars

a. Large incumbents

The US House of Representatives passed the SELF DRIVE Act this summer (now, onto the Senate!). The Act gives the National Highway Traffic Safety Administration (NHTSA) the power to regulate self-driving vehicle design, construction, and performance just as it does for regular vehicles. Over the next 24 months, NHTSA will write the feature set and rules that automakers must abide by to prove their vehicles are safe. The Act also calls for a “privacy plan” whereby automakers must describe how they’ll collect, use and store passenger data. NHTSA can also authorise tens of thousands of licenses to companies testing self-driving cars.

While the German auto industry convened for a conference to discuss the pollution problem of diesel-powered cars, Tesla continues to wage its war against the industry. Indeed, it appears that auto companies are scurrying to match functionality and specifications, while Tesla washes the market with a PR storm of electrification, autonomy, AI, politics and more. In doing so, it has “motivated an army of online fans and enemies unlike anything the sector has seen since the rise of the internet. That can’t be duplicated at any price.” Tesla is also developing an electric semi truck, which it will reveal in mid-November once the prototype is complete. The truck is not autonomous, for the time being.

At a recent London.AI meetup we held, a Kiva Systems engineer explained how the company reduced the complexity of the perception, planning and navigation problems for its robots by laying down cues on the ground for the robots to follow. In a similar light, the state of California is updating its delineation standards to create consistent lane markings that help self-driving cars with lane detection.

Since its $15B acquisition by Intel closed in August, Mobileye announced it is working on a fleet of 100 Level 4 self-driving test cars for the US, Israel and Europe. The fleet’s purpose is to create a closed-loop feedback cycle for the Intel-Mobileye suite of AV-related technology (chips, vision, sensor fusion, mapping, driving policy and 5G comms). Commenting on this release, Amnon Shashua (co-founder/CTO of Mobileye) rightly points to the geographic diversity of data as core to the success of cars generalising to new environments: “Geographic diversity is very important as different regions have very diverse driving styles as well as different road conditions and signage. Our goal is to develop autonomous vehicle technology that can be deployed anywhere, which means we need to test and train the vehicles in varying locations.”

Waymo and Intel have entered into a partnership focused on co-development of self-driving technology. Waymo’s self-driving Chrysler Pacifica hybrid minivans feature Intel-based technologies for sensor processing, general compute and connectivity. Waymo is also heavily invested in simulation software to reproduce “interesting miles” its cars encounter in the real world and in structured testing environments in a recreated city outside of San Francisco. In fact, Waymo cars ‘drove’ 2.5B virtual miles in 2016 vs 3M miles in real life. Today, they’re clocking 8M miles a day in simulation. This is a big deal in a market where generating real-world data is time and capital intensive — leveraging simulation expands the universe of situations (and derivations thereof) that a car is exposed to. Relatedly, the Uber ATG visualisation team published a post about their own platform for engineers and operators across ATG to quickly inspect, debug, and explore information collected from offline and online testing.

Samsung has committed $300M into the Samsung Automotive Innovation Fund to make direct investments into connected car and autonomous technologies, including smart sensors, AI, high-performance computing, connectivity solutions, automotive-grade safety solutions, security, and privacy.

Several companies have entered into partnerships. Question to you: What are the business models here?

Fiat Chrysler Automobiles is joining a development consortium with Intel/Mobileye and BMW to leverage synergies on technology and supply chain for AVs. This group intends to have 40 AVs on the road by the end of the year.

Toyota, Intel, Ericsson and others formed the Automotive Edge Computing Consortium, which has two main focus points: a) increasing network capacity to accommodate automotive data transfer between vehicles and the cloud, and b) the development of best practices for the distributed and layered computing approach recommended by the members.

OEMs and NVIDIA are no longer alone in building self-driving platforms: Tier 1 supplier Magna has released its own plans. In fact, the supplier marketplace is likely a more trusted route in for OEMs than working with startups, given their existing supplier relationships; as such, suppliers may become active acquirers.

b. Startups

Lyft made two big strategic moves in the self-driving market. First, it opened the supply side of its marketplace to partner companies developing self-driving cars, so that they can submit cars to serve requests from Lyft users (the “open platform”). This new model includes fleets from Drive.ai and Ford. Moreover, Lyft will itself open a self-driving technology division (“Level 5”) to build its own self-driving car in-house. The company is smartly operating a hybrid human-AV and Lyft-3rd party self-driving car network that, given significant turmoil at Uber, might eventually put the company in pole position alongside Waymo and Tesla. Lyft also teamed up with Udacity to encourage more students to graduate from the self-driving nanodegree. Ten thousand have enrolled within a year! On the topic of the Uber vs. Waymo trial, the latter is claiming $2.6B in damages relating to just one of the several trade secrets Uber allegedly stole. The judge also ruled that Uber must hand over the due diligence report it commissioned on Otto prior to the acquisition, before the trial commences this week.

Mapillary celebrated their 200 millionth street level image submission!

In an arms race for data describing real-world driving situations, Nauto raised $159M from SoftBank to scale its video capture efforts through its dashcam product. The company is also exploring ways to learn driver coaching models from this data. By offering suggestions to the driver and capturing their subsequent feedback (did they take the suggestion or not, and what was the outcome?), Nauto could gain a more nuanced ground truth for “good driving” and “bad driving”.

The LiDAR space continues to heat up. Some proponents argue that while the solid-state configuration is cheaper (e.g. Innoviz, which recently signed an integration deal with Delphi, or Strobe, recently acquired by Cruise), its resolution isn’t high enough to accurately characterise objects in a car’s environment. Meanwhile, new entrants such as Luminar use mechanically steered sensors with alternative wavelengths to improve range. Thermal sensors from the likes of FLIR and AdaSky are another hot topic, offering redundancy especially in low-visibility conditions. However, datasets captured with these sensors are less plentiful, so ML systems will need to play catch-up.

In the UK, a vehicle powered by Oxbotica’s technology travelled a 2km autonomous route through pedestrian areas of Milton Keynes. The test caps 18 months of development by the Transport Systems Catapult.

Argo AI, which came out of nowhere in Q1 this year with a $1bn, 5-year investment commitment from Ford, was profiled in The Verge. Bryan Salesky, Argo’s CEO, worked on the winning entry in the 2007 DARPA Urban Challenge with Chris Urmson (who has since founded Aurora) and subsequently led hardware development on the Google self-driving project that Urmson headed. Their target is a Level 4 car on the road by 2021. The piece also states that Argo is compartmentalising the self-driving problem more than others (e.g. Drive.ai) who treat it as one that can be learned end-to-end (pixels to driving controls). Not so sure that’s true.

Cruise Automation’s CEO describes the three generations of self-driving cars the company has built with GM in only 14 months!

Voyage is hoping to test their vehicles in a gated retirement community — the piece highlights the $5m insurance policy required per vehicle in California. Voyage had to pay twice that and provide sensor data back to the insurance provider, which will help the insurer better price risk.

💪 The giants

Google belittled Apple’s AirPods this week with the launch of its Pixel Buds. These wireless earphones give users access to Google Translate for real-time translation across 40 languages! Google also followed its announcement of Gradient Ventures with a Launchpad AI Studio that offers datasets, simulation environments and advisory help from Google employees to startups working with machine learning. The company also released Google Clips, a wearable camera targeted at parents (of pets!) who want to record images and short video segments of their everyday surroundings. It looks like a more advanced (from a computer vision standpoint) version of the Narrative Clip, which launched in 2012 but shut down in 2016 after $12M in venture financing. Google’s DeepMind pushed their state-of-the-art WaveNet model for generating raw audio waveforms into the Google Assistant to replace legacy text-to-speech! The original published model was too computationally intensive because it created waveform samples one at a time, at 16,000 samples per second. The new model, which is yet to be published, is 1,000x faster and takes only 50 milliseconds to create one second of speech.

Box is running a beta trial with Google’s Cloud Vision API to allow its customers to run image search queries. I find this interesting because it’s an example of a major technology company not feeling threatened by passing data and revenue to Google for what could be considered a core competency.

Amazon AWS announced two main services relevant to the machine learning world. The first is Macie, a new security service that uses machine learning to help identify and protect sensitive data stored in AWS from breaches, data leaks, and unauthorized access with Amazon S3 being the initial data store in question. The second is Glue, a serverless service that crawls your data, infers schemas, and generates ETL scripts in Python. Separately, Amazon’s Lab126 demonstrated they could use GANs to generate novel fashion items that are consistent with a certain target style. This approach could provide inspiration for their future fashion designers!
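For a flavour of the Glue workflow, here’s a minimal sketch using boto3: you point a crawler at an S3 path, it infers the schema and registers tables in the Data Catalog, from which Glue can then generate Python ETL scripts. The bucket, IAM role and database names below are placeholders, not anything Amazon documented in the announcement.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Hypothetical names: replace with your own bucket, IAM role and catalog database.
glue.create_crawler(
    Name="sales-data-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    DatabaseName="sales_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
)

# The crawler scans the S3 path, infers schemas and registers tables in the
# Glue Data Catalog; ETL jobs can then be generated against those tables.
glue.start_crawler(Name="sales-data-crawler")
```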

Creating your own data is, of course, the mantra in AI. Apple is reportedly collecting “more data on activity and exercise than any other human performance study in history,” says Jay Blahnik, Apple’s director of fitness for health technologies. “Over the past five years, we’ve logged 33,000 sessions with over 66,000 hours of data, involving more than 10,000 unique participants.” This dataset is used to improve and expand the health-focused functionality of the Watch. The future of granular, real-time health monitoring is coming…

🍪 Hardware

Microsoft unveiled a new AI-focused coprocessor for the next version of its HoloLens, which will supplement the existing holographic processing unit’s role in processing the custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The coprocessor is fully programmable and optimised for deep learning models. A broader discussion of this thriving chip landscape is covered here. Microsoft also unveiled several new software packages focused on getting models trained and into production faster. This further eats away at the opportunity for private ML infrastructure software companies.

ARK published research on the declining unit cost of robots and their price elasticity of demand, suggesting that we’re (finally?) on the cusp of accelerating adoption. A factor that’s hard to systematically measure, but is nonetheless crucial to adoption, is the fault tolerance per task.

🎮 Games and virtual worlds

DeepMind and Blizzard released the StarCraft II Learning Environment (SC2LE) to accelerate AI research focused on reinforcement learning and multi-agent systems. It includes a Blizzard ML API to hook into the game (environment, state, actions, traces), up to half a million anonymised game replays, a Python-based RL environment and a few simple RL mini-games to allow performance benchmarking. The game is particularly interesting because it requires long-term planning and multi-agent collaboration with potentially different sub-goals.
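To give a flavour of the Python environment, here is a minimal do-nothing agent loop assuming the pysc2 package on one of the bundled mini-games; constructor arguments have shifted between releases, so treat this as a sketch rather than a canonical example.

```python
from pysc2.env import sc2_env
from pysc2.lib import actions

# "MoveToBeacon" is one of the bundled mini-games; step_mul controls how
# many game frames elapse per agent step.
with sc2_env.SC2Env(map_name="MoveToBeacon", step_mul=8, visualize=False) as env:
    timesteps = env.reset()
    for _ in range(100):
        obs = timesteps[0].observation
        # Trivial "policy": issue a no-op every step. A real agent would pick
        # from obs["available_actions"] and supply the required arguments.
        action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
        timesteps = env.step([action])
        if timesteps[0].last():
            break
```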

OpenAI released a blog post providing a high-level description of an agent they built to play the popular e-sports game Dota 2 and beat the world’s top professionals. The traditional 5v5 format of the game, which pits two teams of 5 players against each other for a roughly 45-minute game, requires high-level strategy, team communication and coordination to win. OpenAI’s bot played the 1v1 version, which makes the game more about short-term planning and mechanical skill, as outlined here. A feat, nonetheless.

Unity, the dominant gaming engine company, has entered the machine learning business. It launched Unity Machine Learning Agents, which enables the creation of games and simulations using the Unity Editor that serve as environments where intelligent agents can be trained using reinforcement learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. Exciting project, but it seems relatively under-resourced on the Unity staff side compared to the likes of PROWLER.io, who have 50 people on the problem!

In fact, environments like these will be core to the development of agents that learn emergent behaviours through self-play in simulation. Following earlier work by DeepMind, OpenAI show that humanoid agents tasked with learning to win at Sumo wrestling discover moves like tackling, ducking and faking on their own accord.

🏥 Healthcare

Veritas Genetics, a company providing direct to consumer whole genome sequencing and targeted screening for prenatal testing and breast cancer, made a move into AI by acquiring Curoverse, a bioinformatics company. Together, they are working on improving disease risk scoring and causality.

After the tall promise that IBM Watson would solve many problems in cancer diagnostics and care, the house came tumbling down. A STAT investigation showed that “It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM’s goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care.”

An open-source competition to predict lung cancer from CT images has been launched, offering access to two large pulmonary CT datasets. The European startup Aidence is currently in third place!

It’s clear to us that drug discovery and development is an expensive, long and inefficient endeavour. Startups have flocked to this problem with the promise that AI techniques can explore a larger compound search space, track literature for side effects and validate chemical interactions in silico. However, the reality is less rosy: pharma won’t pay up for software licenses, candidates without in-human trial data aren’t worth much, and vertical integration doesn’t encourage collaboration.

🆕 New initiatives and frontiers for AI

OpenMined, an open-source project focused on federated machine learning using encrypted models on decentralised private data, is picking up community traction worldwide. There are now over 1k Slack users and 68 active GitHub contributors! Watch Andrew Trask’s introduction to the project. The key insight is that models can be trained on distributed private user data without that data ever leaving users’ devices, thereby protecting privacy.
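To make the idea concrete, here’s a toy numpy sketch of federated averaging on a linear model: each client computes a local update on data that never leaves its device, and the server only ever aggregates model weights. This illustrates the general technique, not OpenMined’s actual protocol, which layers encryption on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "clients", each holding private data generated from the same true model.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """A round of local gradient descent on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: the server sees only model weights, never raw data.
w_global = np.zeros(3)
for _ in range(20):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print("recovered weights:", np.round(w_global, 2))
```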

China and Russia have both made bold national commitments to AI being a core contributor to their future, or rather the secret sauce to ruling the world (according to the latter). China described a centrally planned vision for a national AI industry worth $60bn by 2025 (up from $1.5bn today). According to a Goldman Sachs report, China’s three step plan is as follows:

  • By 2020, China plans to have caught up to other global leaders in AI;
  • By 2025 they aim to achieve major research breakthroughs and have AI drive industry reforms;
  • By 2030, become a world-leading power in both fundamental AI research and real-world applications; China wants AI to drive an intelligent economy and society that make the country a global powerhouse.

A translated version of China’s “New Generation AI Development Plan” reveals little by way of implementation strategy; the plan distills to more of a “throw money at the problem” approach. Regardless, Alibaba, Tencent, Baidu, Xiaomi and Geely Automobile (owner of Volvo) are well positioned to profit.

The video understanding space is heating up, with Facebook following TwentyBN and DeepMind in releasing two labelled datasets describing scenes, objects, actions and general motion patterns in the real world.

Keras author François Chollet shares a long-term vision for machine learning. One of the most interesting parts relates to the idea of automatically growing a model for a specific problem using primitives of algorithmic modules (formal reasoning, search, abstraction) and geometric modules (informal intuition, pattern recognition). Quoting François: “Given a new task, a new situation, the system would be able to assemble a new working model appropriate for the task using very little data, thanks to 1) rich program-like primitives that generalize well and 2) extensive experience with similar tasks. In the same way that humans can learn to play a complex new video game using very little play time because they have experience with many previous games, and because the models derived from this previous experience are abstract and program-like, rather than a basic mapping between stimuli and action.”

In a talk entitled Information Theory of Deep Learning, Naftali Tishby of Hebrew University presented a framework and experimental evidence for how deep learning models (specifically CNNs) work so well. The theory, the information bottleneck, essentially posits that deep learning models compress noisy input data as much as possible while preserving information about what the data represent. On this view, a network goes through several phases of learning: 1) pass data through a network of randomly initialised weights, 2) adjust the weights by backpropagating the error signal using stochastic gradient descent, and 3) shed information about the input data, keeping track of only the strongest predictive features. This framework, however, doesn’t explain why children need only a handful of examples of an object to commit its representation to memory. Here, it’s likely that combinations of learned primitives are assembled to enable generalisation.
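For reference, the information bottleneck objective behind Tishby’s argument trades compression of the input against prediction of the label. With X the input, Y the label, T the network’s internal representation and β a trade-off parameter, the learned encoding p(t|x) roughly solves:

```latex
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
```

Minimising I(X;T) corresponds to the “shedding information” phase, while a large β preserves I(T;Y), the information relevant to the prediction task.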

Ian Goodfellow, who is credited with igniting research in generative adversarial models in 2014, expands on several areas of future development for machine learning. These include automatic network design and optimisation — an area that Google is heavily working on now.

Results from ImageNet 2017 were released in July, with 29 of 38 teams achieving an error rate below 5%. Now, with over 13 million images and networks reaching asymptotic performance, what’s next? A piece in Quartz and Fei-Fei Li’s talk reviewing the good and bad approaches on ImageNet (and what comes next) both point to a need for us to advance beyond object recognition into human-level understanding of images and video.

Saildrone, but for the skies! Microsoft Research are working on an autonomous glider that navigates to find and exploit pockets of warm air, allowing it to remain in the sky without expending energy, much like a bird.

Matternet is also taking to the skies, this time in Switzerland, where it will run an authorised drone network to deliver medical lab samples between clinics. Barley farms in the UK, too, are being tended by drones!

A $240M, 10-year academic and industrial research collaboration minted the MIT-IBM Watson AI Lab, whose goal is to advance four research pillars: AI algorithms, the physics of AI, the application of AI to industries, and advancing shared prosperity through AI.

CIFAR explain the field of meta-learning, whereby the learning algorithm that optimises a candidate neural network is no longer hard-coded (e.g. gradient descent) but is instead itself cast as a neural network to be learned. This helps where there is no mathematical model describing the problem at hand.
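As a toy sketch of that idea: below, the “learned optimizer” is stripped down to a single meta-parameter (a log learning rate) that is itself trained across tasks in an outer loop. A real meta-learner, e.g. an LSTM that emits parameter updates, follows the same outer/inner-loop structure with a richer learned update rule; everything here is illustrative rather than CIFAR’s formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    """A random least-squares task: minimise ||A w - b||^2."""
    A = rng.normal(size=(10, 5))
    b = rng.normal(size=10)
    return A, b

def inner_loop(log_lr, task, steps=20):
    """Optimise a task with the *learned* update rule w <- w - exp(log_lr) * grad."""
    A, b = task
    w = np.zeros(5)
    for _ in range(steps):
        grad = 2 * A.T @ (A @ w - b)
        w = w - np.exp(log_lr) * grad
    return np.sum((A @ w - b) ** 2)  # final loss is the meta-objective

# Outer loop: meta-train the update rule itself, here by following the sign of
# a finite-difference estimate of the meta-gradient across sampled tasks.
log_lr, meta_lr, eps = np.log(1e-3), 0.05, 1e-3
for _ in range(200):
    task = sample_task()
    meta_grad = inner_loop(log_lr + eps, task) - inner_loop(log_lr - eps, task)
    log_lr -= meta_lr * np.sign(meta_grad)

print("meta-learned learning rate:", np.exp(log_lr))
```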

📖 Policy

The IBA Global Employment Institute, which offers HR guidance for global companies, released a report on the impact of AI on legal, economic and business issues, such as changes in the future labour market and company structures, the impact on working time, remuneration and the working environment, new forms of employment, and the impact on labour relations.

An important independent report on AI in the United Kingdom has been published to advise the government. The report recommends facilitating data sharing via established Data Trusts, public funding for data creation and sharing, and the creation of 300 new Master’s and 200 new PhD places in ML (growing to 1,600 PhDs by 2025), amongst other initiatives. Bolstering AI research and commercialisation is a huge opportunity for the UK technology industry, especially following the blow dealt by Brexit. I’m hopeful the government will help the field find real momentum.

🔬 Putting AI into context

The Atlantic ran a piece on an early human vs. machine world championship in games; not chess or Go, but checkers in 1994. The story pits Marion Tinsley, the world champion and a maths professor, against Chinook, a program painstakingly written over thousands of hours and operated by Jonathan Schaeffer of the University of Alberta. However, during their live match, Tinsley had to withdraw for health reasons that sadly claimed his life a few months later. Resolved to prove that Chinook would have won, Schaeffer’s only option was to solve the game entirely. He used then state-of-the-art heuristics (AI) to demonstrate that perfect play by both sides leads to a draw. The Science paper was aptly entitled Checkers Is Solved.

Moving to the present day, the frenzy over AI is perhaps only challenged by that over cryptographic tokens. In a piece entitled The Seven Deadly Sins of AI Predictions, Rethink Robotics and iRobot founder Rodney Brooks makes the case for why we have to be especially careful in letting bold predictions run free: “A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products. Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.” Indeed, another issue we must grapple with is that of propagating and amplifying bias in training data.

Congratulations on reaching the end of issue #21, Part 1 of 2! Here’s a prize :) Meet the almighty JobTaker robot!

Anything else catch your eye? Do you have feedback on the content/structure of this newsletter? Just hit reply!


Published by HackerNoon on 2017/10/16