Avoiding Data-Tunnel-Vision and Achieving Human-Machine Symbiosis

Written by yehudaleibler | Published 2021/12/20

TL;DR: We need to create human-machine teams that work together. The key to achieving that is recognizing that we all suffer from data-tunnel-vision.


The classic human error is assuming we’re doing a good job because we have nothing better to compare ourselves to.

Humans operate with small data sets and produce flawed analyses. Machines operate with far larger ones and are capable of advanced analysis, achieving far more accurate results.

For example, far too often we’ll see someone go to a certain college, get a certain degree, and go on to a successful career, and assume the same path will work for us as well. In practice, that conclusion may be flawed for a number of reasons: humans can only hold and analyze a limited amount of data, and we aren’t always able to execute advanced analysis or remove bias, which often leads to skewed results or a failure to separate correlation from causation.

In the context of AI transformations, employees often don’t see the urgent need to improve their workflow, because they see only the data-context in front of them, not all the data they are missing: the larger data-context.

In other words, the exponential creation rate of data, along with its natural vastness, means that we can be so far behind that we aren’t even capable of recognizing what we are missing.

Lingering Negative Data-Habits:

Analysts are inherently forced to filter out large amounts of data based on generic axioms, because the alternative requires working with an impossible amount of data. While the advantage of a small data set is that the analyst can perform higher-quality manual analysis, they often lose a great deal of information in the data they exclude.

But the damage is greater than that. Beyond missing specific data-points of value, the analyst may become unaware of entire segments of data that have value, due to false assumptions. This creates an endless loop: in each subsequent task, the analyst continues to falsely pre-determine their segments based on past (unreliable) experiences and results, creating a fatal downward spiral.

This often nurtures a false sense of certainty about the validity of the results, with no substantial way of knowing what is falling through the cracks. Organizations end up with an over-developed sense of confidence in their data and draw conclusions that are only true for the small data-context in front of them.
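To make that loop concrete, here is a toy simulation in Python. Every hit rate, segment name, and threshold below is invented for illustration; the point is structural: segments the analyst never samples can never prove their value, so the initial axiom hardens with every task.

```python
import random

random.seed(0)

# Hypothetical ground truth: every segment actually contains some value.
true_hit_rate = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.15}

# The analyst starts with an axiom: "only A and B are worth reading."
kept = {"A", "B"}

for task in range(5):
    hits = {}
    for seg in kept:  # excluded segments are never observed at all
        sample = [random.random() < true_hit_rate[seg] for _ in range(50)]
        hits[seg] = sum(sample)
    # Next task: keep only the segments that "proved themselves."
    kept = {seg for seg, h in hits.items() if h > 10}
    print(f"task {task}: reviewed {sorted(hits)} -> next filter {sorted(kept)}")

# Segments C and D never appear in any report, so the analyst's
# confidence in the A/B axiom grows with every task: the downward
# spiral described above.
```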

Similarly, as humans, analysts tend to be biased toward filtration systems that output the data-set with the highest probability of containing value. This makes sense when there is no alternative to human-centric tasks. However, when we introduce machines into our workflow, it is critical to shake this underlying bias. If a machine is doing the work, then purely from a business standpoint, the ratio of non-valuable to valuable data-points matters far less. The only important metric becomes the net amount of valuable data recovered, weighed against the cost of the human time required alongside the machine.
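A back-of-envelope sketch of that metric, with every number below invented purely for illustration:

```python
# What matters is net valuable items recovered minus the cost of
# human time, not the hit rate per reviewed item.

def net_value(items_reviewed, hit_rate, value_per_hit, human_hours, hourly_rate):
    """Net business value of a pipeline: value found minus human cost."""
    return items_reviewed * hit_rate * value_per_hit - human_hours * hourly_rate

# Human-only: a small, carefully pre-filtered set with a high hit rate.
human_only = net_value(items_reviewed=500, hit_rate=0.40,
                       value_per_hit=100, human_hours=40, hourly_rate=80)

# Machine + human: the machine scans everything; the hit rate per item
# is far lower, but the volume dwarfs it, and human time barely grows.
machine_team = net_value(items_reviewed=50_000, hit_rate=0.02,
                         value_per_hit=100, human_hours=60, hourly_rate=80)

print(human_only)    # 500 * 0.40 * 100 - 3,200  = 16,800
print(machine_team)  # 50,000 * 0.02 * 100 - 4,800 = 95,200
```

The machine-assisted pipeline “wastes” human attention on a far noisier stream, yet comes out well ahead on the only metric that matters.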

In sum, making machine-centric tasks more “humanly possible” in nature is detrimental to our goals.

Adjusting Our Mindset for the Age of Big Data:

A human analyst may be able to segment data with 100% accuracy, while a machine may only be able to perform the same analysis with 85% accuracy after training. Despite that, what we, as humans, tend to ignore is that 85% accuracy on 100x the data, which the machine is able to analyze, may be more valuable and, in aggregate, more accurate. Our initial response tends to be to oppose the use of the algorithm, thinking that only perfectly accurate results are of value.
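The arithmetic is worth writing out. Assuming, purely for illustration, 1,000 items for the human analyst and 100x that for the machine:

```python
# Hypothetical scale: the machine sees 100x what the human can read.
human_items, human_accuracy = 1_000, 1.00
machine_items, machine_accuracy = 100_000, 0.85

correctly_segmented_by_human = human_items * human_accuracy        # 1,000
correctly_segmented_by_machine = machine_items * machine_accuracy  # 85,000

# "Less accurate" per item, yet 85x more correctly segmented data
# than the human ever touches.
print(correctly_segmented_by_machine / correctly_segmented_by_human)  # 85.0
```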

This is yet another human bias that needs to be removed once machines are introduced into a workflow. Obviously, the quality of analysis is important, but when determining how to build a process, we need to remind ourselves that quantity actually creates higher quality.

In other words, our instinctive definition of accuracy is skewed due to the small context in which we are seeing it.

More importantly, casting a wide net of analysis on the larger data-set can provide a far more justified sense of confidence for organizations seeking credible conclusions.

Creating a Symbiotic Relationship Between Humans and Machines:

While the age where machines can do it all is not yet upon us, the age of human-machine teams has already arrived. There are many approaches to this, but to my mind, the most important success factor is building these systems in a manner that ensures the value of each component’s work (human and machine) contributes to the other, symbiotically.

For example, in a case where we require high-quality data segmentation and analysis, a machine alone will likely fail us. However, having a machine segment the data, with in-depth human review as quality control, can solve this, allow far more data to be reviewed, and in many cases be more cost-effective than using a human analyst alone. Alternatively, even if we don’t trust the machine to do any of the data analysis, we can allow it to guide us in selecting the subset for human segmentation. Ironically, today our first and most dangerous filtration is done by humans, usually not based on conclusive evidence.
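One minimal sketch of such a pipeline, where the classifier, the confidence threshold, and the field names are all stand-in assumptions rather than a reference implementation:

```python
# Human-machine triage: the machine segments everything; humans
# quality-control only the cases the machine is unsure about.

from dataclasses import dataclass

@dataclass
class Record:
    text: str
    segment: str | None = None
    needs_human_review: bool = False

CONFIDENCE_THRESHOLD = 0.90  # assumed QC cutoff, tuned per task

def machine_segment(record: Record) -> tuple[str, float]:
    """Stand-in for any trained classifier returning (label, confidence)."""
    # In practice: something like model.predict_proba(record.text).
    return ("relevant", 0.72)

def triage(records: list[Record]) -> tuple[list[Record], list[Record]]:
    auto_accepted, human_queue = [], []
    for rec in records:
        label, confidence = machine_segment(rec)
        rec.segment = label
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append(rec)      # machine's call stands
        else:
            rec.needs_human_review = True  # routed to the analyst
            human_queue.append(rec)
    return auto_accepted, human_queue

accepted, queue = triage([Record("quarterly filing, section 7...")])
print(len(accepted), len(queue))  # 0 1 -- low confidence goes to a human
```

The design choice matters more than the model: every record is seen by the machine, and human hours are spent only where they add the most value.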

The key here is to create systems that allow the various elements of every task to be performed by the team component best suited for it.

Leading Human-Machine Teams:

As managers and leaders, we have to train ourselves to look at the larger data-context and to constantly suspect that we are missing something. A failure to stay critical, in this case, can be detrimental to an operation.

None of this is to say that there is no place for human analysis, but rather that human ability is far more effective when enriched with machine capabilities. We need to create human-machine analysis teams that work in conjunction with, not in opposition to, one another.

Looking ahead, it’s critical for managers to be able to distinguish between the role of the machine and the role of the human. Doing this well will allow humans and machines to work together, as “human-machine teams,” resulting in an outcome greater than its individual components.

If, in the past, one of a manager’s key responsibilities was to assign the most appropriate team member to every task, today that role has expanded to include evaluating which part of each task should be human-centric and which should be machine-centric.

Those able to pair the humanities-oriented task of putting these technologies into a human and business context with the technological know-how to guide their implementation will be best positioned to lead a successful transformation.

We must always attempt to “know the unknowable,” even within our data. The key to achieving that is first recognizing that we all suffer from data-tunnel-vision; in doing so, we can attempt to achieve the human-machine symbiosis that will transform an operation.

