Add video conferencing to your React App with 100ms

Written by dhaiwat10 | Published 2021/12/28
Tech Story Tags: react | webrtc | video-conferencing | react-native | javascript | react-top-story | frontend-development | web-development

TL;DR: The 100ms React SDK makes it trivial to add video conferencing to React apps. We will be building a simple video call room using the React SDK. The hmsStore and hmsActions objects are your two best friends when developing with the 100ms SDK in React. We'll walk through some of the actual code from one of the examples and see how it works. The first step is to sign up for a 100ms account, which gets you 10,000 free minutes every month.

Agora RTC has been the de-facto way to add video conferencing to React apps for quite some time now. However, it is anything but easy to use: getting started takes real effort, and getting the setup just right is even harder. Hell, it does not even have a React SDK.
If you are not familiar with Agora, let me give you a rough idea of what it is like. Here is what the code for a simple video call room looks like:
<script src="https://gist.github.com/Dhaiwat10/ffda13b29edc250dcfdaaf94e3ff985e.js"></script>
React is by far the most popular frontend library in the world. There has to be a way to make it easy to add video conferencing to your React app. It should be trivial to do so. This is what 100ms does.
The 100ms React SDK does all the heavy lifting for you. You can literally get up and running in minutes. Don't believe me? Well, let me prove it!
The best way to prove is by building! We will be building a simple video call room using the 100ms React SDK. You will be able to see the app in action in no time. And by no time, I mean literally minutes. Just follow along.

Initial setup

The first step is, of course, to sign yourself up for a 100ms account. You will get 10,000 _free_ minutes every month. C'mon, just go ahead and get it done.
You will be greeted with this landing screen. Just choose a subdomain of your choice (you get one for free!), choose the Video Conferencing template and click Setup App.
You will notice that creating your app gives you your own Zoom-like video conferencing app, ready to host calls right out of the box.

Feel free to test it out, but that's not what we'll be focusing on. We will be of course using the 100ms SDK to add video conferencing to your app. To do that, let's go to the dashboard by clicking Go to Dashboard at the bottom right of your screen.
That's it, now you are ready to create your first 100ms app. All of that took less than a minute, and unlike services like Agora RTC, no credit card needed!
Next, we will look at some of the actual code from one of the examples and try to understand how it works.

How is developing with the 100ms SDK different? (and better!)

If you are developing with the 100ms SDK in React, the hmsStore and hmsActions are your two best friends. Let me explain why.
The hmsStore, just like its name suggests, is a _reactive_ store that holds all the state the 100ms SDK needs to know about: peers, tracks, connection status and so on. It is a singleton object that is available to every component in your app via the useHMSStore() hook.
While hmsStore lets you _read_ that state, hmsActions lets you _write_ to it. It is essentially the go-to way of mutating the hmsStore. The hmsActions object is likewise available to every component in your app via the useHMSActions() hook.
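To make the read/write split concrete, here is a tiny plain-JavaScript sketch of the same pattern: a reactive store that is read through selectors and mutated only through actions. The names here (`createStore`, `selectPeers`, `addPeer`) are illustrative, not the real 100ms API — the SDK's store works on this principle but is far more featureful:

```javascript
// Minimal reactive store: reads go through selectors, writes through actions.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    // Read a slice of state, like useHMSStore(selector).
    select: (selector) => selector(state),
    // Apply an update and notify subscribers, like an hmsActions call.
    dispatch: (update) => {
      state = { ...state, ...update(state) };
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

// Illustrative selector and action, not the real 100ms ones.
const selectPeers = (state) => state.peers;
const addPeer = (peer) => (state) => ({ peers: [...state.peers, peer] });

const store = createStore({ peers: [] });
store.subscribe((s) => console.log('peers now:', s.peers.length));
store.dispatch(addPeer({ id: '1', name: 'Alice' }));
console.log(store.select(selectPeers).length); // 1
```

In React, the hooks layer this same idea onto components: subscribing through useHMSStore() re-renders your component whenever the selected slice changes.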
Before we dive into the code, I want you to grab hold of an auth token for your app. We will need this token in order to try out the demo we will be looking at next. You can get this token by following this guide: Auth Token Quickstart Guide.
Let's cut right to the chase now. I want you to open this CodeSandbox: 100ms React example.
<iframe src="https://codesandbox.io/embed/happy-meddling-syndrome-q4ukf?fontsize=14&hidenavigation=1&theme=dark"
     style="width:100%; height:500px; border:0; border-radius: 4px; overflow:hidden;"
     title="happy-meddling-syndrome"
     allow="accelerometer; ambient-light-sensor; camera; encrypted-media; geolocation; gyroscope; hid; microphone; midi; payment; usb; vr; xr-spatial-tracking"
     sandbox="allow-forms allow-modals allow-popups allow-presentation allow-same-origin allow-scripts"
   ></iframe>
We will be going through most of the business logic of the app. Let's start from the entry point of the app, index.js.
You will notice that we are wrapping our <App /> with <HMSRoomProvider />. This is the glue that connects the React app to the 100ms SDK. Crucially, it also provides the sweet React hooks that you'll love to use.
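For reference, the wiring in index.js looks roughly like this (reconstructed from the example; the exact render call in the sandbox may differ slightly):

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import { HMSRoomProvider } from '@100mslive/hms-video-react';
import App from './App';

// HMSRoomProvider makes the 100ms store and actions available
// to every component below it via the useHMSStore/useHMSActions hooks.
ReactDOM.render(
  <HMSRoomProvider>
    <App />
  </HMSRoomProvider>,
  document.getElementById('root')
);
```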
Next, let's have a look at the JoinRoom.js file and see what's going on in there. This is the component responsible for letting you join a room, and it is the one you are seeing in the preview.
const hmsActions = useHMSActions();

// inputValues holds the name and auth token from the form fields
const handleSubmit = () => {
  hmsActions.join({
    userName: inputValues.name,
    authToken: inputValues.token,
  });
};
That's all the 'business logic' code you need to connect a user to a room via the 100ms SDK. You can see that we are using the useHMSActions() hook to get access to the join() method. This is why I said hooks are so crucial! If we now look at the AgoraRTC equivalent:
join = async () => {
  await this.client.join(this.appId, this.channelId, this.token, this.userId);
  this.localAudioTrack = await AgoraRTC.createMicrophoneAudioTrack();
  this.localVideoTrack = await AgoraRTC.createCameraVideoTrack();
  await this.client.publish([this.localAudioTrack, this.localVideoTrack]);
  // Yes, you actually have to *manually* create the HTML elements for the video track.
  const localPlayerContainer = document.createElement('div');
  localPlayerContainer.id = this.userId.toString();
  localPlayerContainer.style.width = '20vw';
  localPlayerContainer.style.height = '11.25vw';
  localPlayerContainer.addEventListener('click', () => {
    this.changeActiveStream(localPlayerContainer.id);
  });
  document
    .getElementsByClassName('agora-streams')[0]
    .append(localPlayerContainer);
  this.localVideoTrack.play(localPlayerContainer);
};
Choose your fighter. Anyways, let's next have a look at the code for the actual room! Join the room from the UI (use that auth token I told you to grab hold of earlier!) and open up Conference.js and Peer.js.
// Conference.js
import { selectPeers, useHMSStore } from '@100mslive/hms-video-react';
import Peer from './Peer';

function Conference() {
  const peers = useHMSStore(selectPeers);
  return (
    <div className='conference-section'>
      <h2>Conference</h2>

      <div className='peers-container'>
        {peers.map((peer) => (
          <Peer key={peer.id} peer={peer} />
        ))}
      </div>
    </div>
  );
}
That's all. It just works! Hooks to the rescue again. None of that manual microphone-track and camera-track creation, publishing, and so on. Just the business logic. This is the Agora equivalent:
onUserPublished = async (
  user: IAgoraRTCRemoteUser,
  mediaType: 'video' | 'audio'
) => {
  await this.client.subscribe(user, mediaType);

  if (mediaType === 'video') {
    const remoteVideoTrack = user.videoTrack;
    const remotePlayerContainer = document.createElement('div');
    remotePlayerContainer.id = user.uid.toString();
    remotePlayerContainer.addEventListener('click', () => {
      this.changeActiveStream(remotePlayerContainer.id);
    });
    remotePlayerContainer.style.width = '20vw';
    remotePlayerContainer.style.height = '11.25vw';
    document
      .getElementsByClassName('agora-streams')[0]
      .append(remotePlayerContainer);
    remoteVideoTrack!.play(remotePlayerContainer);
  }

  if (mediaType === 'audio') {
    const remoteAudioTrack = user.audioTrack;
    remoteAudioTrack!.play();
  }
};

onUserUnpublished = (user: IAgoraRTCRemoteUser) => {
  const remotePlayerContainer = document.getElementById(user.uid.toString());
  remotePlayerContainer && remotePlayerContainer.remove();
};
It is just...painful. Why should you have to manually create a new HTML element for each remote user and add it to the DOM yourself?
The 100ms SDK does all of that dirty work for you so that you can focus on actually building your app. The SDK is there to aid you, not to make matters worse. Surprisingly, most SDKs in the video conferencing space still do not get this right.
Now, let's look at the last but certainly not the least important part of the code — the mute-unmute & disable-enable camera logic. Open up Footer.js.
const hmsActions = useHMSActions();
const videoEnabled = useHMSStore(selectIsLocalVideoEnabled);
const audioEnabled = useHMSStore(selectIsLocalAudioEnabled);

const toggleAudio = () => {
  hmsActions.setLocalAudioEnabled(!audioEnabled);
};

const toggleVideo = () => {
  hmsActions.setLocalVideoEnabled(!videoEnabled);
};
As easy as that. We use the reactive useHMSStore() hook to read the local user's audio and video state, and then use the hmsActions object to dispatch the changes back to the store. Everything just works out of the box. It was crafted to be as simple as possible.
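The toggle pattern itself is simple enough to sketch in plain JavaScript. The selector and action names below mirror the 100ms ones, but the implementation is a mock for illustration only:

```javascript
// Mock of the toggle pattern: selectors read a flag, actions flip it.
// The names mirror 100ms, but this implementation is purely illustrative.
function createMediaState() {
  const state = { audioEnabled: true, videoEnabled: true };
  return {
    selectIsLocalAudioEnabled: () => state.audioEnabled,
    selectIsLocalVideoEnabled: () => state.videoEnabled,
    actions: {
      setLocalAudioEnabled: (enabled) => { state.audioEnabled = enabled; },
      setLocalVideoEnabled: (enabled) => { state.videoEnabled = enabled; },
    },
  };
}

const media = createMediaState();
// Same shape as toggleAudio in Footer.js: read the current value,
// then write back its negation.
media.actions.setLocalAudioEnabled(!media.selectIsLocalAudioEnabled());
console.log(media.selectIsLocalAudioEnabled()); // false
```

In the real SDK the read goes through useHMSStore(), so the component re-renders and the mute button's icon updates automatically after the action runs.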

Final comparison and parting thoughts

The biggest differentiating factor that makes the 100ms SDK state-of-the-art is the attention towards modern practices like hooks and reactivity. Traditional services like Agora haven't tailored their SDKs to adopt these practices.
With 100ms, you don't have to 'query' any data. You just have to establish a one-time connection to the reactive store and all the data you'll ever need is seamlessly streamed to you.

If you want to make mutations to the store, you're in luck because there is a hook for that. There's a hook for everything! I think that is enough to convince any React dev to embrace the 100ms SDK with open arms.

Written by dhaiwat10 | Software engineer
Published by HackerNoon on 2021/12/28