The Future of the Metaverse: Where Do We Go From Here?

Written by metapunk | Published 2022/10/30
Tech Story Tags: metaverse | artificial-intelligence | brain-computer-interface

TLDR: San Junipero was the fourth episode in the third series of Black Mirror. It dealt with a simulated reality where the deceased can live and the elderly can visit, all inhabiting their younger selves’ bodies in a time of their choice. It is in fact a true *Second Life* in all senses of the term. This post argues for two conclusions: we are given the choice to either die or live on within the metaverse, and the world’s first artificial general intelligence is born from the metaverse. The next stage will be brain-computer interfaces that allow us to inhabit worlds that exist only inside computers.

What happens when you spend most of Friday night down a rabbit hole? This blog post is what happens.


We haven’t even gotten started and I’m already thinking of end-goal scenarios for the metaverse. But why not? After all, I am a recovering futurist.

As I see it, there are two possible conclusions for the metaverse:

  1. We are given the choice to either die or live on within the metaverse
  2. The world’s first artificial general intelligence is born from the metaverse

Heaven is a place on Earth

I think about San Junipero a lot. It was the fourth episode in the third series of Black Mirror, the Netflix series which dealt with a lot of sci-fi topics with more than enough dystopia to choke a hippy. But there was one standout episode that stays with you.

And it stays.

San Junipero dealt with a simulated reality where the deceased can live and the elderly can visit, all inhabiting their younger selves’ bodies in a time of their choice. In the physical world, an elderly Kelly (Denise Burse) visits Yorkie (Annabel Davis). She learns from Yorkie’s nurse, Greg (Raymond McAnally), that Yorkie was paralysed at age 21 after crashing her car when her parents reacted negatively to her coming out. Yorkie wishes to be euthanised to live in San Junipero permanently, but her family objects; she intends to marry Greg so that he can consent for her. Kelly offers to marry Yorkie instead, and after she enthusiastically accepts, Kelly authorises Yorkie’s euthanasia.

During Kelly’s next visit to San Junipero, Yorkie asks her to stay full-time. Kelly says she plans to die without being uploaded to the simulation; her husband chose the same fate because their daughter died before San Junipero existed. Yorkie and Kelly argue, and Kelly leaves in her car, which she intentionally crashes. Yorkie catches up to her just as Kelly disappears, her visiting time over for the week.

Time passes, and Kelly decides she is ready to enter San Junipero permanently. She is euthanised and buried alongside her family, and she happily reunites with Yorkie in San Junipero.

There are a number of things to note here —

  1. the reference to a simulated reality (the metaverse),
  2. the fact that living there is a choice, and people can come and go to visit,
  3. and right at the end, we’re shown TCKR — the cloud-based company that runs the metaverse and also the servers that store everyone who lives there.

It is in fact a true Second Life in all senses of the term.

How could you realistically achieve this given the technology we have today? Well, you can’t. But in the next 50 years potentially you could. Quantum computing, DNA-based storage, lifelong data streaming, engram mapping, Neuralink v5.0, a simulated reality far beyond what any engine could achieve today, a collaborative, multiplayer, and shared conscious environment…it’s pretty much the end goal of a type of metaverse we’re not even thinking about yet.

In effect, recreating a duplicate of yourself would require information from every physical and virtual environment you have ever interacted with, from birth onwards. Scary? Perhaps.
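
Just to put a rough, entirely made-up number on that: assume a non-invasive headset streaming 64 channels of neural data at 1 kHz, four bytes per sample, for 80 years. The channel count, sample rate and lifespan are all illustrative assumptions of mine, but the back-of-envelope maths looks something like this:

```python
# Back-of-envelope estimate of lifelong neural-data capture.
# Every figure below is an assumption, chosen purely for illustration.

channels = 64              # assumed electrode count on a consumer headset
sample_rate_hz = 1_000     # assumed samples per second per channel
bytes_per_sample = 4       # assume a 32-bit float per sample
seconds_per_year = 60 * 60 * 24 * 365

bytes_per_year = channels * sample_rate_hz * bytes_per_sample * seconds_per_year
lifetime_years = 80
lifetime_bytes = bytes_per_year * lifetime_years

print(f"~{bytes_per_year / 1e12:.1f} TB per year")          # ~8.1 TB per year
print(f"~{lifetime_bytes / 1e15:.2f} PB over a lifetime")   # ~0.65 PB over 80 years
```

That’s roughly 8 TB a year and well over half a petabyte per person over a lifetime, before you store a single frame of what was actually seen, heard or felt.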

This is the kind of vision that Herman Narula, CEO at Improbable, has recently been speaking about in connection with his new book, Virtual Society. Narula has been talking about BCIs (brain-computer interfaces) so much that he’s already thinking of writing a second book on the topic.

In an interview with Gizmodo, he stated: “I’d bet that the next stage will be brain-computer interfaces that allow us to directly connect to and thus inhabit worlds that exist only inside computers.

Sort of like jacking directly into the Matrix, but without the dystopian connotations. We already know that brain-computer interfaces are theoretically possible; we know that scientists are already performing intriguing experiments with “neural laces” and other technologies that could facilitate direct connections between the brain and a machine.

In the book, I argue that if these technologies continue to improve and evolve, then we will eventually get to the point where we’ll be able to hopscotch between real and virtual worlds, with the only difference between “real” and “virtual” being a semantic one.

We’ll create our own multiverses, basically, and we’ll toggle between worlds with ease.”

Neurable has entered the chat…

And so the rabbit hole brought me to check out Neurable and Dr. Ramses Alcaide after reading about their latest funding round in Felix Hartmann’s post on LinkedIn. Neurable claim that they’re translating brain activity into simple, actionable insights you can use in your everyday life. After years of research, they’ve reduced the typical BCI doohickeys from goofy, electrode-filled hats to everyday items like the headband on a pair of headphones.
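
To make “translating brain activity into simple, actionable insights” a little more concrete, here’s a minimal sketch of the kind of signal processing a non-invasive headset might do: band-power features from a window of EEG, turned into a crude “focus” score. This is a textbook-style simplification of my own, not Neurable’s actual pipeline, and every number in it is an assumption.

```python
# Minimal, illustrative EEG feature sketch: estimate band power with Welch's
# method and turn it into a crude "focus" score. Not Neurable's real pipeline.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(signal: np.ndarray, low: float, high: float) -> float:
    """Spectral power of `signal` between `low` and `high` Hz."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(np.trapz(psd[mask], freqs[mask]))

def focus_score(eeg_window: np.ndarray) -> float:
    """Toy 'insight': ratio of beta (13-30 Hz) to alpha (8-12 Hz) power."""
    alpha = band_power(eeg_window, 8, 12)
    beta = band_power(eeg_window, 13, 30)
    return beta / (alpha + 1e-9)

# Four seconds of fake EEG-like noise, just to show the call shape.
window = np.random.randn(FS * 4)
print(f"focus score: {focus_score(window):.2f}")
```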

They’re intending to license this technology to OEMs to take forward at scale, which is a genius move. And frankly, this is a far more believable technology and approach to achieving the kind of technological singularity many futurists have been banging on about for decades than Neuralink, which requires you to butcher the patient with invasive probes.

In fact, the approach seen in Black Mirror was not an invasive device either.

But this is not the singularity that Kurzweil and his cronies have imagined; we are not destined to merge with the machine — we’re destined to live in the machine.

So let’s recap —

  1. We have humanity choosing to live in, and easily switch between, this world and alternate virtual realities.
  2. We have emerging BCI devices that will provide the constantly streaming data needed to understand how and why we think, and that could record every life experience we have, building a collective store of digital memories the way the brain does, not the way an Instagram post does.

So what’s next?

Well, I’ve already written about the need for a metaverse operating system, one which is completely decentralized in nature.

A centralized architecture cannot lead to a truly self-scalable solution, even with the use of multiple servers. Indeed, client-server architectures lead to prohibitive deployment and maintenance costs when it comes to very large-scale applications with thousands of connected clients.

On the other hand, thanks to their self-adaptation features, P2P network overlays have clearly proved to be an effective alternative to powerful servers.
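
As a toy illustration of why a P2P overlay self-scales where a client-server setup doesn’t, here’s a minimal gossip-style membership sketch: every node only ever talks to a small random handful of peers, so no single machine has to hold a connection to every client. This is my own simplified sketch, not any particular overlay protocol.

```python
# Toy gossip-based membership: each node periodically exchanges its peer list
# with a few random peers, so knowledge of the network spreads with no server.
import random

FANOUT = 3  # how many peers each node gossips with per round (assumed)

class Node:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.peers: set[str] = set()

    def gossip_round(self, network: dict[str, "Node"]) -> None:
        targets = random.sample(sorted(self.peers), min(FANOUT, len(self.peers)))
        for target_id in targets:
            target = network[target_id]
            # Both sides merge what they know (anti-entropy style exchange).
            merged = self.peers | target.peers | {self.node_id, target_id}
            self.peers = merged - {self.node_id}
            target.peers = merged - {target_id}

# Bootstrap: ten nodes, each initially knowing only one neighbour.
network = {f"n{i}": Node(f"n{i}") for i in range(10)}
for i in range(1, 10):
    network[f"n{i}"].peers.add(f"n{i - 1}")
    network[f"n{i - 1}"].peers.add(f"n{i}")

for _ in range(5):  # a handful of rounds is enough in a network this small
    for node in network.values():
        node.gossip_round(network)

# Most or all nodes should now know all nine other peers.
print(sorted(len(n.peers) for n in network.values()))
```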

By building a distributed OS created specifically for the metaverse, web3, or even further-out ideas like the ones in this post, you are also building completely decentralized applications and the engines to power it all. Something like TCKR in the hands of every human on the planet, rather than under the control of a single corporation, is far more attractive and far less likely to be switched off.

I hope.

If you want to burrow deeper into the hole, a distributed OS like Plan 9 from Bell Labs is precisely the type of thing I proposed in that blog post — and it’s the kind of foundational software fabric that’s needed to achieve the visions Narula, Dr. Alcaide and Charlie Brooker describe; they just aren’t joining the dots.

Plan 9 from Bell Labs was actually named after Plan 9 from Outer Space, often called the worst sci-fi movie of all time.

The exciting part of this idea is that humans become part of the equation — through BCI devices they are in fact software nodes themselves, part of the overall network and operating system. In fact, back in 2011, I wrote a nonsense blog post about a different type of network protocol I dubbed the Human Network Protocol (lol), in which humans, rather than devices, became part of the network, even to the notion that your “IP address” became your decentralized identity and replaced things like your SSN.

Your ‘social graph’ will become more valuable to the Government than your Social Security or NIS number, and it’ll supersede these eventually. You’ll carry a single portal to your social graph in the form of a mobile device, and you’ll always be connected and accessible from that device. No more multiple email addresses, no more phone numbers, your HNP will be your single and unique identifier.
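
Here’s a hypothetical sketch of what an HNP-style identifier could look like with today’s tooling (to be clear, HNP is not a real protocol; this is purely my own illustration): derive the identity from a public key, the way DID methods and blockchain addresses do, so nobody has to issue you a number in the first place.

```python
# Hypothetical "Human Network Protocol" identity: a self-certifying identifier
# derived from a public key, so no central registry has to issue it.
# Purely illustrative; HNP is not a real protocol.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The person holds a keypair (in practice anchored to a device they carry).
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# 2. The "HNP address" is just a hash of the public key, like a DID or a
#    blockchain address: globally unique, with no SSN-style central issuer.
hnp_address = "hnp:" + hashlib.sha256(public_bytes).hexdigest()[:32]
print(hnp_address)

# 3. Ownership is proven by signing challenges, not by reciting a secret number.
challenge = b"prove you hold this identity"
signature = private_key.sign(challenge)
private_key.public_key().verify(signature, challenge)  # raises if invalid
```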

So, let’s go for a second recap because this is brain-melting stuff.

  1. The idea of a distributed operating system that connects humans and environments together.
  2. The idea that distributed engines and applications can be developed to power the virtual and alternate realities we may want to live in, and transcend into, before or after death.
  3. Brain-computer interfaces, invisible and unintrusive, made specifically to capture the data that makes you you throughout your lifetime.

I’m still not done yet. Friday night was a long one.

That’s a lot of servers

In San Junipero, the end sequence felt like the end of Raiders of the Lost Ark — unending rows of servers processing the data of billions of humans and powering the alternate reality that many chose to transcend to.

But this can’t be sustainable, and with so much attention on climate change, it certainly isn’t good for the planet either.

So I fell a little further down the rabbit hole and discovered research papers on the carbon footprint of distributed computing vs cloud computing.

One such paper, **Distributed Computing for Carbon Footprint Reduction by Exploiting Low-Footprint Energy Availability**, discusses this very idea.

Another piece of research focuses on supercomputing needs and their carbon emissions (which are huge), and also looks at the energy efficiency of programming languages themselves (Python, for example).

There’s not a lot of work done in these areas, but it makes me wonder whether the energy efficiency of a completely decentralized operating system, developed in a language built from the ground up to be as carbon-neutral (or even carbon-negative) as possible, could be researched and proven to be the best model available. That would beat huge data centers, the practice of continually spinning up additional virtual machines to cope with scale and load, and even just switching to greener power sources and cooling methods.
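
To make the “exploiting low-footprint energy availability” idea a bit more concrete, here’s a minimal sketch of carbon-aware scheduling: a decentralized OS could route work to whichever peers currently sit on the cleanest grid, instead of spinning up more VMs in one fixed data center. The node names and carbon-intensity figures below are invented for illustration.

```python
# Toy carbon-aware scheduler: place jobs on the peers whose local grid currently
# has the lowest carbon intensity (gCO2/kWh). All figures below are made up.
from dataclasses import dataclass

@dataclass
class PeerNode:
    name: str
    carbon_intensity: float  # grams of CO2 per kWh on this node's local grid
    free_cores: int

def schedule(jobs: list[str], peers: list[PeerNode]) -> dict[str, str]:
    """Greedily assign each job to the cleanest peer with spare capacity."""
    placement: dict[str, str] = {}
    for job in jobs:
        candidates = [p for p in peers if p.free_cores > 0]
        if not candidates:
            raise RuntimeError("no capacity anywhere in the overlay")
        best = min(candidates, key=lambda p: p.carbon_intensity)
        best.free_cores -= 1
        placement[job] = best.name
    return placement

peers = [
    PeerNode("oslo-laptop", carbon_intensity=30.0, free_cores=2),    # hydro-heavy grid
    PeerNode("berlin-desktop", carbon_intensity=350.0, free_cores=4),
    PeerNode("sydney-nas", carbon_intensity=550.0, free_cores=8),
]
print(schedule(["render-shard-1", "render-shard-2", "physics-tick"], peers))
# The first two jobs land on oslo-laptop; the third falls back to berlin-desktop.
```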

Still here? Good. Now I want to tackle AI just for kicks.

The dude was called Theo too, lol

Data is only one part of what makes up artificial intelligence. Understanding and studying intent, action and consequence, all within contained, connected and simulated environments where real people interact, gives insight and wisdom far beyond a chess-playing algorithm or a GPT-3 that can spit out a blog post like this (hang on… who is writing this??).
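
As a purely hypothetical sketch of what “information, not just data” could mean in practice, imagine the metaverse runtime logging structured intent/action/consequence records from each session rather than raw sensor streams: the kind of corpus a future model could actually learn intent and consequence from. The record shape below is entirely my own invention.

```python
# Hypothetical schema for the "intent, action, consequence" records a metaverse
# runtime might log, as opposed to raw data streams. Invented for illustration.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ExperienceRecord:
    person: str        # decentralized identity of the participant
    world: str         # which virtual environment the event happened in
    context: str       # what was going on around them at the time
    intent: str        # what they were trying to do (e.g. inferred via BCI)
    action: str        # what they actually did
    consequence: str   # what happened as a result
    timestamp: float

def log_experience(record: ExperienceRecord, sink: list[str]) -> None:
    """Append one structured experience to a (here, in-memory) event log."""
    sink.append(json.dumps(asdict(record)))

event_log: list[str] = []
log_experience(ExperienceRecord(
    person="hnp:3f9a...",                      # made-up HNP-style identifier
    world="san-junipero-shard-7",
    context="beach bar, 1987 skin, two other visitors present",
    intent="reconnect with an old friend",
    action="walked over and started a conversation",
    consequence="friendship re-established; both returned the next week",
    timestamp=time.time(),
), event_log)
print(event_log[0])
```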

I wonder whether Narula and Alcaide have considered that both their ideas, combined, would give rise to the types of artificial intelligence we envisage in movies like Her and Ex Machina.

I doubt this metaverse-type AI entity would fight for the users, knowing what we are like. But the reality could be that the metaverse, or this kind of transcended reality, gives birth to the first form of artificial general intelligence purely from the amount of information, not just data, fed into it by all the constituent parts described in this post.

Life after death?

An alternate reality distributed across humanity for all?

Artificial intelligence?

Who’s to say we won’t see it if we don’t attempt to connect the dots?


