With WebRTC, real-time communications come to the browser

Chris Minnick and Ed Tittel | June 6, 2013
The WebRTC standard aims to make peer-to-peer communication over the Web as easy as picking up a phone. Here's what developers need to know about WebRTC, including how to set it up and what limitations the protocol currently faces.

The JavaScript APIs involved in enabling this peer-to-peer communication are simple enough that you can create a WebRTC client with just five or six lines of JavaScript and HTML. The browsers involved in the conversation handle essentially everything on your behalf.

If you have any experience with VoIP and video connections, you know that VoIP generally involves proxy and firewall issues, as well as codecs and signaling protocols that all parties must agree upon. The idea behind WebRTC is that HTTP and the Web have already solved the problem of getting data from one point to another with very few of these issues. The Web just works.

If you have a WebRTC-capable browser (e.g., Chrome or Firefox) installed on your computer, you can use it to communicate with any other WebRTC client.

If somebody else has a Web browser with WebRTC support, whether on a desktop computer, a smartphone or a super-awesome wristwatch communication device of the future, you can talk with that person in real time just as easily and with as little trouble as if you had picked up the handset of a 1960s wall-mounted rotary phone provided by the central telephone company.

How WebRTC Works: Establish Connection, Create Stream
In 2010, Google acquired Global IP Solutions (GIPS), which developed codecs and real-time voice and video software. In 2011, Google released Hangouts, which uses technology from GIPS, and open-sourced the GIPS technologies in the form of WebRTC. (As of this writing, Hangouts still uses a plugin, but rumor, and logic, has it that a WebRTC version is in the works.) WebRTC 1.0 is currently a W3C Working Draft. Although the Working Draft has been implemented in several browsers already, the specification remains very much in flux.

The first step in establishing a voice and video connection between peers is to gain access to the microphone and camera on each device. Until recently, this wasn't really possible with Web browsers. The W3C developed a simple API called the Media Capture API that has gained some support among browser makers and was recently partially baked into Mobile Safari.

However, Media Capture doesn't provide any means for streaming video or audio. That's where the MediaStream API comes in.

The job of the MediaStream API is to ask the user for permission to access a camera and microphone and then to create a synchronized video and audio stream. It does this with a JavaScript method called getUserMedia().

The basic code for creating a stream and displaying it using an HTML5 video tag is as follows. It is taken, with slight modifications, from the getUserMedia docs.
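A minimal sketch of that pattern, using the prefixed, callback-style getUserMedia() API that 2013-era browsers shipped. The helper function and its parameters are illustrative assumptions, used here so the browser objects are passed in explicitly rather than referenced as globals:

```javascript
// Minimal sketch: ask the user for camera + microphone access, then
// attach the resulting MediaStream to an HTML5 <video> element.
// Uses the prefixed, callback-style getUserMedia() of 2013-era browsers.
function startLocalStream(nav, win, videoElement) {
  // Normalize across vendor prefixes (Chrome: webkit, Firefox: moz).
  var getUserMedia = nav.getUserMedia ||
                     nav.webkitGetUserMedia ||
                     nav.mozGetUserMedia;

  getUserMedia.call(nav,
    { video: true, audio: true },          // request both tracks
    function (stream) {                    // success: user granted access
      videoElement.src = win.URL.createObjectURL(stream);
      videoElement.play();
    },
    function (err) {                       // failure: denied or no device
      console.log('getUserMedia error: ' + err);
    });
}

// In a page containing a <video> element:
// startLocalStream(navigator, window, document.querySelector('video'));
```

Note that the success callback receives the synchronized MediaStream; wiring it to a video element's src is what turns the raw capture into something the user can see.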

