
WebRTC Insertable Streams

The Red5Pro WebRTC SDK now has support for Insertable Streams!

For a WebRTC subscriber, this means the ability to pass in a track (receiver) transform function to process the incoming audio and video during playback.

Usage in init()

To enable the use of Insertable Streams for a subscriber using the WebRTC SDK, pass an optional mediaTransform argument to the init() call that defines the video and/or audio function, or Worker, to use in the transform pipe.

The mediaTransform argument schema is the following:

{
  video: <function>,
  audio: <function>,
  worker: {
    video: <Worker>,
    audio: <Worker>
  },
  transformFrameType: <TransformFrameTypes>,
  pipeOptions: <Object>
}

All properties are optional, meaning you can provide just a video or audio transform - you do not have to provide both. Alternatively, you can define Worker(s) for video and/or audio when the processing is computationally more expensive and you want to offload it to a Web Worker.
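
For example, a subscriber that only transforms its video track could be initialized like the following sketch, where config and myVideoTransformFunction stand in for your own configuration and transform (as in the full example later in this section):

// Sketch: a video-only transform. Omitting `audio` leaves the audio track untouched.
// `config` and `myVideoTransformFunction` are placeholders, as in the full example below.
const subscriber = await new red5prosdk.RTCSubscriber().init(config, {
  video: myVideoTransformFunction
})
await subscriber.subscribe()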

Additionally, you can define the transformFrameType (either TransformFrameTypes.ENCODED_FRAME - the default - or TransformFrameTypes.PACKET). With the default ENCODED_FRAME, an RTCEncodedVideoFrame or RTCEncodedAudioFrame is passed to the transform. If PACKET is defined, a VideoFrame or AudioData is passed to the transform instead. Choosing one over the other depends on the operations you wish to perform on the data being processed.

The optional pipeOptions attribute is an object that will be passed along untouched through the piping and is described in more detail at https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/pipeThrough.
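
As an illustrative sketch, a mediaTransform argument that combines the PACKET frame type with pipeOptions might look like the following. This assumes TransformFrameTypes is exposed on the red5prosdk namespace, and myVideoTransformFunction is a placeholder for your own transform:

// Sketch only: combining transformFrameType and pipeOptions.
// Assumes `TransformFrameTypes` is exposed on the red5prosdk namespace.
const abortController = new AbortController()

const mediaTransform = {
  video: myVideoTransformFunction,
  // Receive VideoFrame/AudioData instead of RTCEncodedVideoFrame/RTCEncodedAudioFrame.
  transformFrameType: red5prosdk.TransformFrameTypes.PACKET,
  // Passed along untouched as the options argument of pipeThrough();
  // here it allows the pipe to be torn down via abortController.abort().
  pipeOptions: { signal: abortController.signal }
}

const subscriber = await new red5prosdk.RTCSubscriber().init(config, mediaTransform)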

Transform Functions

The signatures for the video and audio transform functions need to conform to the following:

{
  video: (frame: <RTCEncodedVideoFrame | VideoFrame>, controller: ReadableStreamDefaultController) => {
    // process incoming video
  },
  audio: (frame: <RTCEncodedAudioFrame | AudioData>, controller: ReadableStreamDefaultController) => {
    // process incoming audio
  }
}
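
As a minimal sketch, a video transform for the default ENCODED_FRAME type could read the encoded payload, process it, and forward the frame like so. Any real processing (for example, decryption or unscrambling) would happen where the payload is inspected:

// Minimal sketch of a video transform for the default ENCODED_FRAME type.
// `frame` is an RTCEncodedVideoFrame whose payload is an ArrayBuffer on `frame.data`.
const myVideoTransformFunction = (frame, controller) => {
  const payload = new Uint8Array(frame.data)
  // ... inspect or modify `payload` here (e.g., decrypt or unscramble) ...
  frame.data = payload.buffer
  // Forward the (possibly modified) frame down the pipeline.
  controller.enqueue(frame)
}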

Web Worker

If defining a Web Worker to process the media, you will need to listen for the decodeVideo and decodeAudio messages posted to the Worker:

main.js

const worker = new Worker('worker.js')
worker.onmessage = event => {
  // handle any messaging from Worker.
}

worker.js

const handleDecodeVideo = (readable, writable, options) => {
  const transformStream = new TransformStream({
    transform: async (frame, controller) => {
      // process incoming video
    }
  })
  if (options) {
    readable.pipeThrough(transformStream, options).pipeTo(writable)
  } else {
    readable.pipeThrough(transformStream).pipeTo(writable)
  }
}

const handleDecodeAudio = (readable, writable, options) => {
  const transformStream = new TransformStream({
    transform: async (frame, controller) => {
      // process incoming audio
    }
  })
  if (options) {
    readable.pipeThrough(transformStream, options).pipeTo(writable)
  } else {
    readable.pipeThrough(transformStream).pipeTo(writable)
  }
}

self.onmessage = (event) => {
  const { data, type } = event.data
  const { readable, writable, options } = data
  if (type === 'decodeVideo') {
    handleDecodeVideo(readable, writable, options)
  } else if (type === 'decodeAudio') {
    handleDecodeAudio(readable, writable, options)
  }
}
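
With the handlers in place, the Worker can be handed to the subscriber through the worker property of the mediaTransform argument. A sketch, assuming the worker.js above:

// Sketch: pass the Worker to the SDK through the mediaTransform argument.
// The SDK posts the decodeVideo/decodeAudio messages handled in worker.js above,
// carrying the readable/writable pair (and any pipeOptions) for each track.
const worker = new Worker('worker.js')

const subscriber = await new red5prosdk.RTCSubscriber().init(config, {
  worker: {
    video: worker,
    audio: worker
  }
})
await subscriber.subscribe()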

Example

// Not provided by SDK. Import your custom transform processing to be passed to the SDK.
import { myVideoTransformFunction, myAudioTransformFunction } from 'my-custom-transform-library'

const initWithTransforms = async () => {

  try {

    const config = {
      host: 'myred5pro.domain',
      streamName: 'myencodedstream'
    }

    // The second argument to init() is an object that has optional `video` and `audio`
    // properties defining the transform functions to use in processing the video and audio tracks, respectively.
    // By not providing either, or defining them as `undefined`, no processing on the track(s) will occur.
    const subscriber = await new red5prosdk.RTCSubscriber().init(config, {
      video: myVideoTransformFunction,
      audio: myAudioTransformFunction
    })
    subscriber.on('WebRTC.Unsupported.Feature', () => console.error('Bummer! This browser doesn\'t support Insertable Streams.'))
    subscriber.on('WebRTC.Transform.Error', event => console.error(`Error occurred for ${event.data.type}!`, event.data.error))

    await subscriber.subscribe()

  } catch (e) {
    // handle error
  }

}

initWithTransforms()

The properties of the mediaTransform argument of the init() call are:

  • video : A function that will be piped into the transform of the video track. To not perform any video transformation, set it as undefined or leave it out.
  • audio : A function that will be piped into the transform of the audio track. To not perform any audio transformation, set it as undefined or leave it out.

There are two events that can be dispatched from the RTCSubscriber instance in relation to utilizing Insertable Streams:

  • WebRTC.Unsupported.Feature - This will be dispatched if the current browser does not support Insertable Streams.
  • WebRTC.Transform.Error - This will be dispatched when an error has occurred in establishing a transform pipe for a media track.

Usage through transform()

Alternatively, you can assign the transform after initialization of the RTCSubscriber. This can be helpful in scenarios in which you want to display the encoded video while providing a paywall or other gating mechanism.

The method signature is very similar to that described above, and the events to listen for are the same:

// Not provided by SDK. Import your custom transform processing to be passed to the SDK.
import { myVideoTransformFunction, myAudioTransformFunction } from 'my-custom-transform-library'

const allowButton = document.querySelector('#allow-button')

const initThenTransform = async () => {

  try {

    const subscriber = await new red5prosdk.RTCSubscriber().init(config)
    subscriber.on('WebRTC.Unsupported.Feature', () => console.error('Bummer! This browser doesn\'t support Insertable Streams.'))
    subscriber.on('WebRTC.Transform.Error', event => console.error(`Error occurred for ${event.data.type}!`, event.data.error))

    await subscriber.subscribe()

    // Delay transform until some action.
    allowButton.addEventListener('click', () => {
      subscriber.transform({
        video: myVideoTransformFunction,
        audio: myAudioTransformFunction
      })
    })

  } catch (e) {
    // handle error
  }

}

initThenTransform()

NOTE

If assigning transforms using the transform() method instead of on initialization through the init() method, you will need to set the rtcConfiguration property on the init configuration with encodedInsertableStreams set to true, e.g.:

const config = {
  host: 'myserver.domain',
  streamName: 'stream1',
  rtcConfiguration: {
    iceServers: [{urls: 'stun:stun2.l.google.com:19302'}],
    iceCandidatePoolSize: 2,
    bundlePolicy: 'max-bundle',
    encodedInsertableStreams: true
  }
}