Taming the AI Beast

Written by davidgerster | Published 2023/01/19
Tech Story Tags: ai | future-of-ai | technology | business | artificial-intelligence | chatgpt | microsoft | taming-the-ai-beast


After OpenAI launched ChatGPT in late 2022, it was immediately clear that a new beast was prowling the AI ecosystem. More than a million users tried ChatGPT in its first week after launch. Most marveled at its ability to read and write complex natural language; some dwelled on its limitations (especially its habit of making things up); others hailed it as a milestone for human-level “artificial general intelligence.”


There’s no need to rehash what ChatGPT can do. Unless you’ve been living under a rock, you know it can answer questions, summarize documents, write essays, create code, or simply chat. (Plus, it can do all of this in multiple languages, optionally translating between them.) There’s also no need to get into how it manages these eerily human feats of inference because nobody seems to know. But it turns out that if you feed a neural net 400 billion words (including all of Wikipedia, which weighs in at a puny 3 billion) and give it tools to parse for meaning, then it can mimic human intelligence via sheer brute force — an extreme case of the unreasonable effectiveness of data.
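To make those capabilities concrete, here is a minimal sketch (not from the article) of driving a ChatGPT-style model programmatically. It assumes the OpenAI Python SDK (openai >= 1.0), an API key in the OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model name; the prompts are hypothetical and only illustrate the answer/summarize/translate tasks mentioned above.

```python
# Minimal sketch: asking a ChatGPT-style model to answer, summarize, and
# translate. Assumes the OpenAI Python SDK (openai >= 1.0) and an API key
# in the OPENAI_API_KEY environment variable; prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompts covering a few of the tasks mentioned above.
tasks = [
    "Answer briefly: why is the sky blue?",
    "Summarize in one sentence: Large language models predict the next word "
    "from context, which lets them draft text, code, and translations.",
    "Translate to French: The model can also switch languages on request.",
]

for prompt in tasks:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # lower temperature for more literal answers
    )
    print(response.choices[0].message.content.strip())
```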


The real goal, however, is not to replicate (the limitations of) human intelligence but rather to help control the rapidly evolving bestiary of specialized, superintelligent AIs. Imagine a future code-writing AI that does a perfect job, or at least better than the best humans. The challenge will be getting it to understand what we want so it can work its magic. Now swap out software expertise for anything else — law, medicine, fantasy sports — and you start to get the idea. The human will remain firmly in the loop, conducting a symphony of mechanical savants like an octopus directing its semi-sentient arms.


In practice, human experts will still be needed: Doctors and lawyers will simply do more, better, and faster work as specialist AIs free them from drudgery. More than ever, we’ll also need creative people who can solve complex problems across disciplines. David Epstein explores this idea in “Range: Why Generalists Triumph in a Specialized World,” noting that modern work demands “the ability to apply knowledge to new situations and different domains.” In a world where humans routinely collaborate with expert AIs, well-rounded generalists might see as much demand as deep specialists, and “range” might be the new “10,000 hours.”


Amid the swirl of speculation, one thing is clear: Large language models, such as the one behind ChatGPT, will only improve as Microsoft and Google go to war. Microsoft invested $1 billion in OpenAI (and might invest $10 billion more), which runs on Microsoft’s Azure cloud; Google has its own state-of-the-art models and has declared a “code red” as it battles disruption in both web search and cloud computing. General-purpose models will only get broader, and specialist models (such as the one OpenAI trained on billions of lines of code) will only get deeper. In this war of machines, it’s the humans who will win.


David Gerster and Trevor Mottl are Venture Partners at Fusion Fund.