Geniuses Are Lazy — and You Should Be Too

Written by pratapranade | Published 2018/03/05
Tech Story Tags: artificial-intelligence | management | innovation | geniuses-are-lazy | genius


Richard Feynman, Theoretical Physicist and Nobel Laureate

What do you do when you are facing a tough problem? Get a cup of coffee, buckle down, and prepare for a long night? Break out the library books, or an Excel spreadsheet? Grit your teeth and prepare for a hard slog?

Maybe that’s not the best way.

Look at some of the biggest discoveries in science over the past few hundred years. One thing seems to stand out — geniuses are lazy.

Remarkable scientists begin by transforming their hard problem into a simpler one instead of trying to attack it head-on. It’s the opposite of “grit your teeth and push through.” People say that great thinkers look at things from a different point of view.

Physicists and mathematicians have literally changed their point of view. They have used mathematical transformations as a kind of alchemy — turning hard problems into easy ones.

Einstein did this with relativity. Relativity is a hard concept to grasp when you’re thinking in three dimensions, as humans have evolved to do. However, space-time is four-dimensional, so it requires lots of mathematical gymnastics just to be able to “see” how light or matter behave in a four-dimensional world. It’s almost as if you were competing in a game of chess, but you had to climb a 20,000 ft mountain to reach the chess board. Nobody cares or even knows that you climbed the mountain first — they just care about chess. The mountain is the undifferentiated hard work (calculating things in 4-D space), and the chess is the new stuff that matters (special relativity).

Einstein took a metaphorical cable car up this mountain: he applied the Lorentz transformation, which let him model space-time in such a way that the speed of light is independent of the observer’s frame of reference (a key tenet of special relativity). This took care of a lot of the mental grunt work, letting him focus on developing the differentiated concepts and models of interaction that led to relativity.
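For the curious, the simplest one-dimensional textbook form of that transformation looks like this (sketched here purely for illustration; x and t are position and time in the stationary frame, the primed symbols belong to a frame moving at speed v, and c is the speed of light):

```latex
\begin{aligned}
  x' &= \gamma\,(x - vt) \\
  t' &= \gamma\left(t - \frac{vx}{c^{2}}\right) \\
  \gamma &= \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\end{aligned}
% Substitute x = ct (a light ray) and you get x' = ct':
% light travels at the same speed c in the moving frame too,
% which is exactly the invariance described above.
```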

Think transformations are just for physicists? Think again. Everyone is cheating. Everyone is using some kind of coordinate transformation to turn hard problems into easier ones.

Spotify uses this to build its recommendation engine. In fact, transformations (often called ‘feature space’ transformations in a machine learning context) lie at the heart of every good machine learning model. One example is the Support Vector Machine (SVM), a supervised machine learning method. SVMs start with a hard optimization problem (called the “primal” problem), then mathematically turn it into a “dual” problem that is much easier to solve. Other popular methods, including multi-layer (deep) neural networks, begin by transforming their inputs into a different feature space so the problem becomes easier to solve.
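To make the trick concrete, here is a minimal sketch (using scikit-learn, which is just one convenient library and my own choice for illustration, not something from Spotify’s or anyone else’s stack): two concentric rings of points cannot be split by a straight line in their raw coordinates, but a simple change of coordinates, or an SVM kernel that performs an equivalent transformation implicitly, makes the problem easy.

```python
# A minimal sketch of a feature-space transformation, using scikit-learn.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line can separate them as-is.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# A linear SVM on the raw 2-D coordinates struggles.
print("linear, raw 2-D:        ", SVC(kernel="linear").fit(X, y).score(X, y))

# Transform (x1, x2) -> (x1, x2, x1^2 + x2^2): adding distance-from-origin
# as a third coordinate turns the rings into two flat, separable layers.
Z = np.c_[X, (X ** 2).sum(axis=1)]
print("linear, transformed 3-D:", SVC(kernel="linear").fit(Z, y).score(Z, y))

# Kernel SVMs (e.g. RBF) do a similar transformation implicitly,
# via the dual formulation mentioned above.
print("RBF kernel, raw 2-D:    ", SVC(kernel="rbf").fit(X, y).score(X, y))
```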

Sometimes transformations are deeply mathematical. Sometimes they are as simple as creating a unique type of shorthand.

Legendary physicist Richard Feynman created Feynman diagrams. They look like cartoons, but they describe complex integrals in real and imaginary space. They let Feynman focus his cognitive capacity on what was truly different and new about quantum field theory rather than wasting energy on complex integral calculus.

Feynman diagrams (left) and the terms they represent (right)

Those of us working in knowledge industries — where output is codified thought (a presentation, a talk, a paper, or lines of code) — can steal tricks from these physicists and mathematicians.

First, do you know what you’re trying to solve? What part of that solution is actually differentiated and new? How can you get rid of everything that isn’t?

You need to find your transformation: something that lets you focus most of your cognitive energy on what’s differentiated and off-load what’s not, however complicated that off-loaded work may be.

Perhaps there is a form of the true problem that is simpler to solve? Maybe you can tackle the problem through a shift in geography, a shift in your initial set of target users? Perhaps it is your choice of programming language or database schema? Maybe it’s a change in architecture?

Google did this with GPUs (Graphics Processing Units). Conventional computing was done on CPUs, but Google’s massive machine learning problems boiled down to multiplying large matrices of numbers together. It turns out that GPUs are optimized for exactly this, because it’s the same math that powers 3-D computer games. So Google swapped its CPUs for GPUs (and now, in fact, TPUs — Tensor Processing Units) to make its problems easier.
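As a rough illustration (using PyTorch here purely as an example library, not anything the article ties to Google), once the workload is expressed as one big matrix multiply, moving it from CPU to GPU is a one-line change of device rather than a rewrite of the algorithm:

```python
# A rough sketch: time the same large matrix multiply on CPU and GPU.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b                     # the big matrix multiply
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously
    return time.perf_counter() - start

print("CPU:", time_matmul("cpu"), "seconds")
if torch.cuda.is_available():
    print("GPU:", time_matmul("cuda"), "seconds")
```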

It’s good to be lazy, but deliberately so. Geniuses are able to separate hard work from important work, and they double down on what’s important by transforming away the rest.

