Tackling a new Human-Centric AI Paradigm: Startup Interview with AIR Cofounders

Written by gregsz | Published 2021/08/27
Tech Story Tags: future-of-ai | ai-technology | human-centered-design | entrepreneurship | startup-of-the-year-2021 | startups-of-the-year | ai | hackernoon-top-story

TLDR: AIR (AI Redefined) is a Canadian startup focused on a new AI training approach that makes it possible to tackle challenges too complex for traditional machine learning methods. Its platform, Cogment, is the world's first open-source framework to provide the means to train people and AI together, in multi-agent and multi-human contexts. The company is tackling one of the more impactful issues that humanity faces: how humans will continue to benefit from, steer, and trust AI.

AIR is a Canadian startup focused on a new AI training approach that makes it possible to tackle challenges too complex for traditional machine learning methods.

We are doing this by building a toolset for AIs and people to learn continuously from each other. AIR developed Cogment, the world's first open-source framework to provide the means to train people and AI together in multi-agent, multi-human, multi-reward contexts.

HackerNoon Reporter: Please tell us briefly about your background.

Dorian Kieken (Founder, President):

Before founding the AI start-up AIR (formerly Age of Minds), I shipped 17 video games in multiple roles ranging from design to production, including the acclaimed Mass Effect 2 (Game of the Year 2010) and Mass Effect 3 (RPG of the Year 2012).

I co-founded the BioWare Montreal studio and helped it grow to 100+ people. I introduced and evolved agile production methodologies & tools to multiple companies, and drove the implementation of a distributed leadership/teal system. I'm also an active member of the Montreal AI ethics group and a strong believer in taking key actions today to nudge humanity towards a good future.

Greg Szriftgiser (Co-founder, Design):

I spent 20 years professionally studying, writing about, then finally designing and writing for the video games industry and its exploration of interactivity and dynamic systems. I became particularly interested in procedural storytelling (how to design intelligent systems able to create compelling narratives through various storytelling devices), which led to AI research and a passion for the struggles and achievements of machine learning as a way for humans to achieve more.

François Chabot (Co-founder, Technology):

I come from a computer engineering background with a specialization in artificial intelligence. Prior to co-founding AIR, I worked in video games at Capcom, mostly building AI and gameplay systems in heavily resource-constrained scenarios (notably Dead Rising 2), on real-time hardware-in-the-loop simulators at Opal-RT, and on malware search at Google.

I have a great interest in the relationship between people and machines, both when it comes to programmers and people using programs. Nowadays, I see myself as multidisciplinary glue that can bring various disparate systems together.

Craig Vachon (CEO):

After grad school, I couldn’t find a job, so I started a small company.

I got extremely lucky and sold that company to a Japanese firm two years later. After a few more lucky professional experiences (and having lived as an expat for 12 years), I decided to spend my efforts helping fellow entrepreneurs solve important human challenges as an angel investor.

I joined AIR because I like to think we’re tackling one of the more impactful issues that humanity faces: how humans will continue to benefit from, steer, and trust AI.

What's your startup called? And in a sentence or two, what does it do?

It's called AIR, which stands for AI Redefined.

We create technology for “steerable AI”, putting the advent of AI to the benefit of humankind through human-AI collaboration.

Our open-source AI orchestration platform, Cogment, allows AI practitioners to design, train and deploy complex intelligent ecosystems, mixing multiple humans and artificial agents of various kinds to bring about results that are more context-aware (ethical, empathetic, explainable).
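To make the idea concrete, here is a minimal, purely illustrative sketch in plain Python (not Cogment's actual API; every name below is hypothetical) of what mixing a human and an artificial agent in one training loop can look like: the agent acts in a toy environment, and its learning signal blends the environment's reward with feedback from a human actor.

```python
import random


class ToyEnvironment:
    """A trivial environment: the agent guesses a number, reward is closeness to a target."""

    def __init__(self, target: int = 7):
        self.target = target

    def step(self, action: int) -> float:
        # Less negative is better; 0 means a perfect guess.
        return -abs(self.target - action)


class AIAgent:
    """An epsilon-greedy agent that remembers the best action it has seen so far."""

    def __init__(self, epsilon: float = 0.3):
        self.epsilon = epsilon
        self.best_action = 0
        self.best_reward = float("-inf")

    def act(self) -> int:
        if random.random() < self.epsilon:
            return random.randint(0, 10)  # explore
        return self.best_action           # exploit

    def learn(self, action: int, reward: float) -> None:
        if reward > self.best_reward:
            self.best_action, self.best_reward = action, reward


def human_feedback(action: int) -> float:
    # Stand-in for a human actor in the loop: in a real system this would be
    # live input from a person, not a hard-coded preference.
    return 1.0 if action == 7 else 0.0


env, agent = ToyEnvironment(), AIAgent()
for _ in range(200):
    action = agent.act()
    # The learning signal blends the environment's reward with human feedback,
    # so the human steers what the agent converges toward.
    reward = env.step(action) + human_feedback(action)
    agent.learn(action, reward)

print("Agent converged on:", agent.best_action)
```

The point of the sketch is only the shape of the loop: humans and artificial agents both contribute to the signal an agent learns from, which is the kind of multi-actor orchestration the platform is designed to handle at scale.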

What is the origin story?

Dorian: After enjoying great success in the game industry, in what was a dream job for me, with recognition, in a fantastic studio, I felt my motivation slipping away around 2014. It coincided with the birth of my first daughter. I discovered that this loss of motivation actually came from not working on something more meaningful that would help the next generation.

Among the many challenges this next generation will face (climate change, energy, water, etc.), I identified AI, and in particular human/AI alignment, as the most important one that I could get involved in. I got strongly engaged in the Montreal AI Ethics group, and ultimately that led to the creation of AIR, which is tackling this human/AI alignment problem.

Greg: I met Dorian through mutual friends around the time I was diving into AI myself, and we immediately clicked on several topics, as well as on how we thought the AI conundrums could be tackled.

We also shared this desire to see how we could leverage what we had learned in the video game industry to tackle something more impactful with regard to the various global challenges on the horizon.

Francois: Throughout my career, I had the good fortune to be exposed to a number of fascinating, seemingly disparate domains and technologies. On top of that, trying my hand at the startup process had always been on my radar ever since I left university.

So once I reached a point in my professional development where I was starting to feel confident enough in my skills to take a crack at it, I looked back at everything that had interested me and that I had become good at so far: the way humans connect and interact with machines, simulations, machine learning, and so on. I pondered how to bring it all together into something truly novel, interesting, and useful. During that process, I was introduced to Dorian, and the rest is history.

Craig: I wasn’t a part of the origin of this company. I joined this year (2021) to help productize and accelerate the adoption of the Team’s amazing technology.

What do you love about your team, and why are you the ones to solve this problem?

Dorian: The dual expertise in both the world of AI and the world of simulation/video games (human-AI interaction) makes for a unique team of people who are fluent in more than one area. That makes the team especially well-suited to the problem we tackle, because it needs to be approached from multiple points of view.

Not just from the AI point of view, which centers on technology, rewards, and training methods, but also from that of the environment where agents evolve and interact with each other and with human users. One parallel that comes to mind is the birth of CGI animated movies, which required people with dual skill sets in art/animation and computer science to truly blossom.

Greg: Beyond the team's multiple skill sets, there is also, in my opinion, a very important cultural aspect: a human-centered perspective on the world. Emotional maturity, hand in hand with an almost child-like curiosity about diverse topics in each and every member of AIR's team, makes for a very particular blend of talents and personalities to tackle this long-term vision of synergy between humans and AI.

Craig: The team is mission-driven and diverse in terms of its skill sets and expertise. They fully understand and appreciate the opportunity we’ve been given to ensure ‘humans and AI elevate each other.’

If you weren’t building your startup, what would you be doing?

Dorian: Probably finding another way to nudge the future of AI and humans in a good direction. Maybe through another existing company like DeepMind? Or an organization like the Montreal AI Ethics Institute?

Greg: Probably something related to video games, interaction design, and/or storytelling, I suppose, but missing the excitement of working on a truly transformative technology with a mission bigger than myself.

François: I would probably be doing something very similar: Tackling hard problems that affect how people relate to technology in some way, and doing so in a healthy collaborative environment. The startup is a means to this end for me. I’m not big on entrepreneurship for the sake of entrepreneurship.

Craig: Helping other great founders pursue worthwhile goals.

At the moment, how do you measure success? What are your core metrics?

Dorian: In the long term, by how human-centric the training of AIs becomes, and by how successfully we move past the narrow "AI behaving without context" era we are currently in toward AI with a broader understanding of context.

Craig: In the short term, it's about the scope and type of collaborations our open-source platform enables, and our ability to generate license-based revenue from customers that value context, steerability, and trust in their AI results.

What’s most exciting about your traction to date?

Dorian: The collaborations we have with CRL (Chandar Research Lab at Mila) on Hanabi and the University of Alberta on complex multi-agent problems come to mind.

Greg: From a more general perspective, I'm very excited to see more and more thought pieces, actual work, and studies popping up and coming to conclusions similar to ours: that by designing intelligent systems with human participation from the get-go, issues like bias, trust, and explainability can be addressed while those systems also achieve better results, thanks to the complementary strengths humans and AI bring to the table.

Craig: One of our aerospace customers admitted we had made him a hero as we accelerated his roadmap in AI by two full years. (How often do you hear that from the head of a Business Unit?)

What technologies are you currently most excited about, and most worried about? And why?

Dorian: AI in general. I'm excited about how it can have a positive impact on humanity, and I’m worried about how this impact could also be negative.

There is no doubt in anyone's mind, I think, that AI can help us solve very important issues, and bring us much-needed help; but used poorly, it can also exacerbate those issues or even create new ones.

Greg: It's really similar, I think, to any groundbreaking shift in the technological paradigm in which, or with which, we build our societies. Many people will probably point to the Internet as the most recent example of a profoundly world-changing technological paradigm shift, and they would be right; but even though we now have around three decades of hindsight, I think it's fair to say the jury is still out on whether our societies are better for it or not.

AI will probably impact us at least similarly, if not much more profoundly; that is why I think we need to actively work at making it more accessible to everyone, but also more human-centric.

Francois: I am excited about the future of how we express what we want computer systems to do. With the technologies being widely deployed right now, it feels like we are dipping our toes into expressing intent in a more semantically meaningful manner.

The massive rise of Python as a programming language and of Kubernetes as a deployment platform is but a baby step in that general direction. I am convinced the next generation of “telling machines what we want from them” will open the door to some truly amazing things, and that this is coming a lot sooner than most people expect.

As far as worries go, I am particularly concerned about our willingness to deploy socially disruptive systems based on technologies for which we only have proofs of concept exhibiting critical flaws, on the assumption that functional implementations are within our grasp. Image-driven emotion detection and blockchain are particularly egregious examples of this in my mind.

Craig: Today’s narrow AI provides only the thinnest veneer of humanity. And yet, without an iota of context, common sense, empathy, or morality, we want #AI to solve significant human problems. We will do better (with Cogment.AI).

What drew you to get published on HackerNoon? What do you like most about our platform?

Getting nominated as a Startup of the Year was a very nice surprise!

HackerNoon offers a very open, accessible platform for people to learn and share knowledge, with tools everyone can use from anywhere; in that sense, it aligns pretty well with how we think about what the future of AI should be :)

What advice would you give to the 21-year-old version of yourself?

Dorian: I was already driven by passion back then. Probably something along the lines of "a healthy mind in a healthy body", something I only learned in my late twenties. Better care of the body (training, food, etc.) leads to better sleep, which, in turn, gives you more time for all the cerebral activities.

Greg: "Pay more attention to the irrational and the way humans behave, I swear it's fascinating!" Oh, and, I'd also tell myself: "sleep more, you idiot, stop pulling multiple all-nighters in a row, circadian rhythm is more important than you think".

Francois: Don’t second-guess yourself so much. The shame of being proven wrong is only fleeting, but the missed opportunities often don’t come back.

Craig: Say “yes” more often. The best adventures (plus laughter and learning) always start by saying yes.

What is something surprising you've learned this year that your contemporaries would benefit from knowing?

Dorian: From a business standpoint, that the scaling potential of a business and its recurring licensing revenue are more important to investors than the revenue figures themselves! Also, from a cultural perspective, how important face-to-face human contact is, particularly after the past year of Covid-19 isolation. We are social creatures, more than ever.

Greg: The idea itself is no surprise to me, but the extent to which it proves true surprises me regularly: what reality is or could be is irrelevant in the face of what human culture dictates it is or should be. I am also surprised by how much more fragmented and inward-facing culture seems to become in the age of the so-called globalization of information.

Francois: Learning to skateboard as an adult feels like rebelling against the rules of physics themselves. The sequence of events required for it to barely function is unlikely. Everything about the process is an unstable equilibrium. Yet here you are, spitting in the face of Newton’s third law.

Craig: Two things: hairy, audacious goals are life-affirming; and the sequel (to the comedic spy thriller I published last year) isn’t as easy as it ought to be.

Don’t forget to vote for AIR as Startup of the Year in Montreal.

