The Latest Machine Learning Trends From MIT Professors and Researchers

Written by yaninayelina | Published 2019/03/13
Tech Story Tags: machine-learning | artificial-intelligence | mit-machine-learning | machine-learning-trends | mit


The Massachusetts Institute of Technology (MIT) has always been known for its pioneering research, making a lasting difference to today’s technologically focused environment. And with machine learning development gaining incredible traction these days, it stands to reason that the institute’s scientists pay particular attention to this much-hyped technology, coming up with innovative approaches.

In this article, we’ll take a closer look at MIT’s recent research and outline ML use cases you’ve probably not heard about but may want to incorporate into your business processes.

Monitoring disabling diseases

According to official stats, roughly 60,000 Americans are diagnosed with Parkinson’s disease (PD) each year, and it’s projected that almost one million will be suffering from it by 2020.

To better understand PD and other debilitating diseases — such as muscular dystrophy and multiple sclerosis — and be able to refine treatment, an MIT team led by professor Dina Katabi created RF-Pose. It’s an AI-enabled solution that teaches wireless devices to detect people’s postures and movements, even through walls and occlusions.

The researchers use a deep neural network to analyze radio signals that bounce off people’s bodies and build dynamic stick figures that mimic a patient’s movements. Because radio reflections can’t be labeled by hand, the network is first trained with pose labels produced by a camera-based computer vision model; once trained, it estimates 2D poses from wireless signals alone.
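The cross-modal training trick can be sketched in miniature: a vision “teacher” labels synchronized camera frames, and a “student” that only sees a radio feature learns to reproduce those labels. The linear student, the scalar features, and all names below are illustrative assumptions, not the actual RF-Pose code.

```python
# Toy sketch of cross-modal supervision, the training idea RF-Pose relies on.

def teacher_label(camera_frame):
    # Stand-in vision model: maps a camera frame (a scalar here) to a pose value.
    return 2.0 * camera_frame + 1.0

def train_student(pairs, lr=0.01, epochs=500):
    """pairs: synchronized (rf_feature, camera_frame) tuples.
    Returns the student's learned parameters (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for rf, cam in pairs:
            target = teacher_label(cam)   # supervision comes from the vision model
            pred = w * rf + b             # the student sees only the RF signal
            err = pred - target
            w -= lr * err * rf            # plain SGD on squared error
            b -= lr * err
    return w, b

# Synchronized recordings: the RF feature tracks the camera frame exactly.
pairs = [(x, x) for x in (0.0, 1.0, 2.0, 3.0)]
w, b = train_student(pairs)
```

After training, the student predicts the teacher’s pose values from the RF feature alone, which is the whole point: at deployment time the camera can be removed.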

With the ability to monitor falls, injuries, and changes in activity patterns, the solution not only helps elderly people live more independently, but also provides assistance in locating survivors during search-and-rescue missions.

Addressing possible GDPR concerns, the team says all data they gather has subjects’ consent and is anonymized and encrypted to protect user privacy.

Creating a personalized working atmosphere

It’s no news that there’s a strong correlation between physical environment in the office and employee engagement. To wit, 69% of companies report improvements after introducing healthy building features. And 37% of candidates are willing to accept a job with a lower salary if the employer provides appealing workplace facilities.

With this in mind, the Responsive Environments group at the MIT Media Lab launched a new project, Mediated Atmosphere, aimed at improving the workplace atmosphere at an individual level. How does the system work?

Mediated Atmosphere uses a blend of a frameless screen with a special aspect ratio, a custom lighting network, a speaker array, video projection, as well as wearable and contact-free biosignal sensors to thoroughly analyze a user’s behavior and state of mind.

This data is then used to synchronize modalities in a meaningful way and create corresponding immersive environments that help employees focus, de-stress, and work comfortably. Moreover, if users are dissatisfied with light levels or sound sources, they can simply tell the system how focused or relaxed they want to be.
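The core loop is a mapping from sensed state and user preference to an environment preset. The sketch below is entirely hypothetical: the scene names, thresholds, and settings are invented for illustration, while the real system fuses biosignals with image-based ML analysis.

```python
# Hypothetical control mapping: sensed stress and requested focus (both 0..1)
# select an immersive environment preset (lighting level, soundscape, scene).

def choose_scene(stress, focus_request):
    if stress > 0.7:
        # A highly stressed user gets a calming scene regardless of the request.
        return {"scene": "calm_forest", "light": 0.4, "sound": "rain"}
    if focus_request > 0.6:
        # The user asked to focus: bright light, no sound sources.
        return {"scene": "quiet_library", "light": 0.8, "sound": "none"}
    return {"scene": "neutral", "light": 0.6, "sound": "ambient"}
```

In the real installation this decision would be re-evaluated continuously as the wearable and contact-free sensors stream new biosignal readings.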

To go the extra mile in building customizable installations that can fit into any office space, the team plans to analyze as much diverse data as possible and empower their image-based analysis with advanced ML tools.

Editing music in videos

Another MIT novelty worth mentioning is PixelPlayer, a set of deep learning networks that analyzes music videos, identifies specific instruments at the pixel level, isolates their sounds, and can make individual instruments softer or louder.

After being trained on over 60 hours of videos, the system can accurately extract sounds from never-before-watched musical performances.

According to the lead researcher Hang Zhao, the system might be useful for improving the audio quality of old concert footage. In addition, PixelPlayer can help producers preview how certain instruments will sound together, whether to choose the optimal combination for a new recording or to change the audio mix of an existing one.
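The remixing capability rests on a standard idea: predict a mask per source over the mixture’s spectrogram, then scale each mask’s gain to turn an instrument up or down. The ideal-ratio masks below stand in for what PixelPlayer predicts with its neural networks; the instrument names and toy spectrograms are illustrative only.

```python
# Minimal sketch of spectrogram masking, the separation idea behind
# systems like PixelPlayer.

def ratio_masks(sources):
    """Compute the mixture and one ideal ratio mask per source.
    sources: list of per-source magnitude spectrograms (flat lists of bins)."""
    mix = [sum(bins) for bins in zip(*sources)]
    masks = [[s / m if m else 0.0 for s, m in zip(src, mix)] for src in sources]
    return mix, masks

def remix(mix, masks, gains):
    """Separate each source via its mask, re-weight it, and sum back together."""
    out = [0.0] * len(mix)
    for mask, gain in zip(masks, gains):
        for i, m in enumerate(mask):
            out[i] += gain * m * mix[i]
    return out

guitar = [1.0, 0.0, 2.0]
violin = [0.0, 3.0, 2.0]
mix, masks = ratio_masks([guitar, violin])
solo_guitar = remix(mix, masks, gains=[1.0, 0.0])  # mute the violin entirely
```

Setting a gain between 0 and 1 softens an instrument instead of muting it, which is exactly the “softer or louder” control described above.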

For now, the system identifies about 20 musical instruments, but as it’s fed more data, it’ll be able to detect more types and even distinguish subtle differences between instrument subclasses.

Building road maps

Road mapping is a tedious task, and even with aerial images in place, some companies tend to spend much time and manual effort on tracing out road segments. Why? Aerial images may have buildings, trees, and shadows, which makes them ambiguous and requires a post-processing step.

To fill in possible blank areas on the map, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed RoadTracer, an ML-enabled system for building road maps that is 45% more accurate than traditional methods.

RoadTracer starts from a known point on the road, then uses a neural network to examine the surrounding area and pick the point most likely to be the next segment of the road. Once that point has been added, RoadTracer repeats the process.

The beauty of the system’s incremental approach is that human supervisors can easily and quickly correct errors and restart the algorithm from where it left off, rather than letting RoadTracer continue with inaccurate information and compound the error in the final road map.
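The incremental search can be sketched as a graph-growing loop. In the real system, a CNN over aerial imagery decides whether a candidate point continues the road; the set lookup below is a stand-in assumption for that decision, and the grid of road cells is invented for illustration.

```python
# Toy version of RoadTracer's step-by-step search: start from a known road
# cell and repeatedly ask a decision function which neighbors are also road.

def trace_roads(road_cells, start):
    """Grow a road graph one point at a time from a known starting cell.
    Returns the set of discovered cells and the edges connecting them."""
    visited = {start}
    frontier = [start]
    edges = []
    while frontier:
        x, y = frontier.pop()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            # Stand-in for the neural net's "is this the next road point?" call.
            if nb in road_cells and nb not in visited:
                visited.add(nb)
                edges.append(((x, y), nb))
                frontier.append(nb)
    return visited, edges

road = {(0, 0), (1, 0), (2, 0), (2, 1)}  # an L-shaped road on a toy grid
found, edges = trace_roads(road, (0, 0))
```

Because the map is built one edge at a time, a supervisor can drop a bad edge and resume the loop from the last good cell, which is the error-correction property described above.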

Detecting depression in natural conversations

A common mental disorder, depression affects about 300 million people of all ages worldwide. And although effective treatments exist, fewer than half of sufferers receive them, with inaccurate assessment being one of the major reasons.

MIT researchers are trying to tackle this challenge with machine learning. Their new neural network model examines patients’ text and audio answers to specific questions — about past mental illness, lifestyle, mood, and so on — to identify speech patterns indicative of depression and apply them to diagnosing new patients, without any additional context.

To deliver accurate results, the system was trained on 142 text, audio, and video interviews with individuals with mental health conditions, each rated on a scale from 0 to 27 for depression severity.
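The multimodal idea can be illustrated with a toy fusion model: features derived from a patient’s text answers and audio recordings are concatenated and passed through a single logistic scorer. The real MIT system learns sequence models over raw responses; the features, weights, and threshold here are invented for illustration.

```python
import math

# Hypothetical sketch of late fusion: concatenate text-derived and
# audio-derived features, then score depression risk with a logistic model.

def depression_score(text_feats, audio_feats, weights, bias):
    """Fuse both modalities and return a risk score in (0, 1)."""
    fused = list(text_feats) + list(audio_feats)
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: two text features and one audio feature, all illustrative.
high = depression_score([1.0, 0.0], [1.0], weights=[2.0, 1.0, 2.0], bias=-1.0)
low = depression_score([0.0, 0.0], [0.0], weights=[2.0, 1.0, 2.0], bias=-1.0)
```

An app built on such a scorer would alert a doctor only when the score crosses a calibrated threshold, keeping the monitoring passive for the user.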

According to the researchers, such a neural network could power mobile apps that remotely monitor users’ interactions for signs of mental distress and send doctors timely alerts. In the future, after being trained on additional data, the model may also help detect other cognitive impairments, such as amnesia or dementia, at an early stage.

On a final note

From healthcare to manufacturing to art — machine learning is stepping up to the plate to improve care quality, automate an array of business processes, and refine customer service. And these are just a few examples of ML applications you can draw on.

Depending on your niche and business goals, you might need to amplify the existing solutions or build a custom ML system from scratch. Whichever approach you choose, rest assured you’ll reap substantial benefits.

About the author:

Yana Yelina is a Technology Writer at Oxagile, a provider of software engineering and IT consulting services. Her articles have been featured on KDNuggets, ITProPortal, Business2Community, UX Matters, and Datafloq, to name a few. Yana is passionate about the untapped potential of technology and explores the perks it can bring businesses of every stripe. You can reach Yana at yana.yelina@oxagile.com or connect via LinkedIn or Twitter.


Published by HackerNoon on 2019/03/13