This is a streaming plugin for Janus, allowing WebRTC peers to watch/listen to pre-recorded files or media generated by another tool. Specifically, the plugin currently supports three different types of streams:

1. on-demand streaming of pre-recorded media files (a different streaming context for each viewer);
2. live streaming of pre-recorded media files (the same streaming context shared by all viewers);
3. live streaming of media generated by another tool and sent to the plugin via RTP (the same streaming context shared by all viewers).
As for types 1 and 2, considering the proof-of-concept nature of the implementation, the only pre-recorded media files the plugin supports right now are Opus, raw mu-Law and a-Law files: support for other widespread formats is of course planned as well.
As for type 3, instead, the plugin is configured to listen on a few ports for RTP: this means the plugin receives RTP on those ports and relays it to all peers attached to that stream. Any tool that can generate audio/video RTP streams and send them to a specified destination will do: the examples section contains samples that make use of GStreamer (http://gstreamer.freedesktop.org/), but other tools like FFmpeg (http://www.ffmpeg.org/), LibAV (http://libav.org/) and others are fine as well. This makes it really easy to capture and encode whatever you want using your favourite tool, and then have it transparently broadcast via WebRTC using Janus. Notice that we recently added the possibility to also add a datachannel track to an RTP streaming mountpoint: this allows you to send, via UDP, a text-based message to relay via datachannels (e.g., the title of the current song, if this is a radio streaming channel). When using this feature, though, beware that you'll have to stay within the boundaries of the MTU, as each message will have to fit within a single UDP packet.
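As a client-side sketch of the datachannel feature just described, the snippet below sends a text message over UDP to a mountpoint's dataport. The host, port and the 1200-byte cap are illustrative assumptions, not values mandated by the plugin: use whatever 'dataport' you configured, and whatever margin below your actual MTU you're comfortable with.

```python
import socket

# Assumed values: point these at the 'dataport' configured for your mountpoint.
DATA_HOST, DATA_PORT = "127.0.0.1", 5103
MAX_PAYLOAD = 1200  # stay safely below a typical 1500-byte Ethernet MTU

def send_datachannel_text(message, host=DATA_HOST, port=DATA_PORT):
    """Send one text message to a mountpoint's dataport over UDP.

    Each message must fit in a single UDP packet, so anything larger
    than MAX_PAYLOAD bytes once encoded is rejected client-side.
    """
    payload = message.encode("utf-8")
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("message too large for a single UDP packet")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(payload, (host, port))

if __name__ == "__main__":
    send_datachannel_text("Now playing: Example Artist - Example Song")
```

The plugin will relay whatever it receives on that port as-is, so any framing or encoding (e.g., JSON) is up to you and your viewers.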
Streams to make available are listed in the plugin configuration file. A pre-filled configuration file is provided in conf/janus.plugin.streaming.jcfg and includes some examples you can start from.
To add more streams or modify the existing ones, you can use the following syntax:
stream-name: { [settings] }
with the allowed settings listed below:
type = rtp|live|ondemand|rtsp
       rtp = stream originated by an external tool (e.g., gstreamer or ffmpeg) and sent to the plugin via RTP
       live = local file streamed live to multiple viewers (multiple viewers = same streaming context)
       ondemand = local file streamed on-demand to a single listener (multiple viewers = different streaming contexts)
       rtsp = stream originated by an external RTSP feed (only available if libcurl support was compiled)
id = <unique numeric ID>
description = This is my awesome stream
metadata = An optional string that can contain any metadata (e.g., JSON) associated with the stream you want users to receive
is_private = true|false (private streams don't appear when you do a 'list' request)
filename = path to the local file to stream (only for live/ondemand)
secret = <optional password needed for manipulating (e.g., destroying or enabling/disabling) the stream>
pin = <optional password needed for watching the stream>
audio = true|false (do/don't stream audio)
video = true|false (do/don't stream video)

The following options are only valid for the 'rtp' type:

data = true|false (do/don't stream text via datachannels)
audioport = local port for receiving audio frames
audiortcpport = local port for receiving and sending audio RTCP feedback
audiomcast = multicast group for receiving audio frames, if any
audioiface = network interface or IP address to bind to, if any (binds to all otherwise)
audiopt = <audio RTP payload type> (e.g., 111)
audiocodec = name of the audio codec (opus)
audiofmtp = codec specific parameters, if any
audioskew = true|false (whether the plugin should perform skew analysis and compensation on the incoming audio RTP stream, EXPERIMENTAL)
videoport = local port for receiving video frames (only for rtp)
videortcpport = local port for receiving and sending video RTCP feedback
videomcast = multicast group for receiving video frames, if any
videoiface = network interface or IP address to bind to, if any (binds to all otherwise)
videopt = <video RTP payload type> (e.g., 100)
videocodec = name of the video codec (vp8)
videofmtp = codec specific parameters, if any
videobufferkf = true|false (whether the plugin should store the latest keyframe and send it immediately to new viewers, EXPERIMENTAL)
videosimulcast = true|false (do|don't enable video simulcasting)
videoport2 = second local port for receiving video frames (only for rtp, and simulcasting)
videoport3 = third local port for receiving video frames (only for rtp, and simulcasting)
videoskew = true|false (whether the plugin should perform skew analysis and compensation on the incoming video RTP stream, EXPERIMENTAL)
videosvc = true|false (whether the video will have SVC support; works only for VP9-SVC, default=false)
h264sps = if using H.264 as the video codec, value of the sprop-parameter-sets that would normally be sent via SDP, used here to manually ingest SPS and PPS packets via RTP for streams that miss them
collision = in case of collision (more than one SSRC hitting the same port), the plugin will discard incoming RTP packets with a new SSRC unless this many milliseconds have passed, after which it switches to the new SSRC (0=disabled)
dataport = local port for receiving data messages to relay
datamcast = multicast group for receiving data messages, if any
dataiface = network interface or IP address to bind to, if any (binds to all otherwise)
datatype = text|binary (type of data this mountpoint will relay, default=text)
databuffermsg = true|false (whether the plugin should store the latest message and send it immediately to new viewers)
threads = number of threads to assist with the relaying part, which can help if you expect a lot of viewers that may cause the RTP receiving part in the Streaming plugin to slow down and fail to catch up (default=0)

In case you want to use SRTP for your RTP-based mountpoint, you'll need to configure the SRTP-related properties as well, namely the suite to use for hashing (32 or 80) and the crypto information for decrypting the stream (as a base64 encoded string, the way SDES does it). Notice that, with SRTP involved, you'll have to pay extra attention to what you feed the mountpoint, as you may risk getting SRTP decrypt errors:

srtpsuite = 32
srtpcrypto = WbTBosdVUZqEb6Htqhn+m3z7wUh4RJVR8nE15GbN

The Streaming plugin can also be used to (re)stream media that has been encrypted using something that can be consumed via Insertable Streams. In that case, we only need to be aware of it, so that we can send the info along with the SDP. How to decrypt the media is out of scope, and up to the application since, again, this is end-to-end encryption and so neither Janus nor the Streaming plugin have access to anything. DO NOT SET THIS PROPERTY IF YOU DON'T KNOW WHAT YOU'RE DOING!

e2ee = true

To allow mountpoints to negotiate the playout-delay RTP extension, you can set the 'playoutdelay_ext' property to true: this way, any subscriber can customize the playout delay of incoming video streams, assuming the browser supports the RTP extension in the first place.

playoutdelay_ext = true

To allow mountpoints to negotiate the abs-capture-time RTP extension, you can set the 'abscapturetime_src_ext_id' property to a value in the 1..14 range inclusive: this way, any subscriber can receive the abs-capture-time of incoming RTP streams, assuming the browser supports the RTP extension in the first place. The incoming RTP stream must provide abs-capture-time with exactly the same header extension id.

abscapturetime_src_ext_id = 1

The following options are only valid for the 'rtsp' type:

url = RTSP stream URL
rtsp_user = RTSP authorization username, if needed
rtsp_pwd = RTSP authorization password, if needed
rtsp_quirk = some RTSP servers offer the stream using only the path, instead of the fully qualified URL; if set to true, this boolean informs Janus that we should try a path-only DESCRIBE request if the initial request returns 404
rtsp_failcheck = whether an error should be returned if connecting to the RTSP server fails (default=true)
rtspiface = network interface IP address or device name to listen on when receiving RTSP streams
rtsp_reconnect_delay = if no media is received for this many seconds, assume the RTSP server is gone and schedule a reconnect (default=5s)
rtsp_session_timeout = by default the Streaming plugin checks the RTSP connection with an OPTIONS query; the timeout normally comes from the RTSP session initialization, halved. In some cases that value can be too high (e.g., more than one minute) depending on the media server: if this property is set, the plugin computes the timeout as min(session_timeout, rtsp_session_timeout / 2) (default=0s)
rtsp_timeout = communication timeout (CURLOPT_TIMEOUT) for the cURL call gathering the RTSP information (default=10s)
rtsp_conn_timeout = connection timeout (CURLOPT_CONNECTTIMEOUT) for the cURL call gathering the RTSP information (default=5s)
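Putting a few of the above properties together, a minimal single-stream 'rtp' mountpoint in janus.plugin.streaming.jcfg might look like the sketch below; the name, ID, ports and payload types are just example values, to be adapted to whatever your RTP source actually sends:

```
rtp-sample: {
        type = "rtp"
        id = 1
        description = "Opus/VP8 live stream coming from an external tool"
        audio = true
        audioport = 5002
        audiopt = 111
        audiocodec = "opus"
        video = true
        videoport = 5004
        videopt = 100
        videocodec = "vp8"
}
```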
Notice that attributes like audioport or videopt only make sense when you're creating a mountpoint with a single audio and/or video stream, as in that case the plugin assumes that limitation is fine by you. In case you're interested in creating multistream mountpoints, that is mountpoints that can contain more than one audio and/or video stream at the same time, you HAVE to use a different syntax. Specifically, you'll need to use a media array/list containing the different streams, in the right order, that you want to make available: each stream will then need to contain the related info, e.g., port to bind to, type of media, codec name and so on. An example is provided below:
multistream-test: {
        type = "rtp"
        id = 123
        description = "Multistream test (1 audio, 2 video)"
        media = (
                {
                        type = "audio"
                        mid = "a"
                        label = "Audio stream"
                        port = 5102
                        pt = 111
                        codec = "opus"
                },
                {
                        type = "video"
                        mid = "v1"
                        label = "Video stream #1"
                        port = 5104
                        pt = 100
                        codec = "vp8"
                },
                {
                        type = "video"
                        mid = "v2"
                        label = "Video stream #2"
                        port = 5106
                        pt = 100
                        codec = "vp8"
                }
        )
}
In the above example, we're creating a mountpoint with a single audio stream and two different video streams: each stream has a unique mid (that you MUST provide), which is what will be used in the SDP offer sent to viewers, plus its own configuration properties. As you can see, it's a much cleaner way to create and configure mountpoints: there's no hardcoded audio/video prefix in the property names, you configure all media streams the same way and just add them to a list. Notice that this of course also works with the simple one-audio/one-video mountpoints documented above: as such, you're encouraged to start using this new approach as soon as possible, since future versions might deprecate the old one.
The Streaming API supports several requests, some of which are synchronous and some asynchronous. There are some situations, though, (invalid JSON, invalid request) which will always result in a synchronous error response even for asynchronous requests.
list, info, create, destroy, recording, edit, enable and disable are synchronous requests, which means you'll get a response directly within the context of the transaction. list lists all the available streams; create allows you to create a new mountpoint dynamically, as an alternative to using the configuration file; destroy removes a mountpoint and destroys it; recording instructs the plugin on whether or not a live RTP stream should be recorded while it's broadcast; enable and disable respectively enable and disable a mountpoint, that is, decide whether or not a mountpoint should be available to users without destroying it; edit allows you to dynamically edit some mountpoint properties (e.g., the PIN).
The watch, start, configure, pause, switch and stop requests instead are all asynchronous, which means you'll get a notification about their success or failure in an event. watch asks the plugin to prepare the playout of one of the available streams; start starts the actual playout; pause allows you to pause a playout without tearing down the PeerConnection; switch allows you to switch to a different mountpoint of the same kind (note: only live RTP mountpoints are supported as of now) without having to stop and watch the new one; stop stops the playout and tears the PeerConnection down.
Notice that, in general, all users can create mountpoints, no matter what type they are. If you want to limit this functionality, you can configure an admin_key in the plugin settings. When that's configured, only "create" requests that include the correct value in an "admin_key" property will succeed, and all others will be rejected.
To list the available Streaming mountpoints (both those created via configuration file and those created via API), you can use the list request:
{ "request" : "list" }
If successful, it will return an array with a list of all the mountpoints. Notice that only the public mountpoints will be returned: those with their is_private property set to yes/true will be skipped. The response will be formatted like this:
{
        "streaming" : "list",
        "list" : [
                {
                        "id" : <unique ID of mountpoint #1>,
                        "type" : "<type of mountpoint #1, in line with the types introduced above>",
                        "description" : "<description of mountpoint #1>",
                        "metadata" : "<metadata of mountpoint #1, if any>",
                        "enabled" : <true|false, depending on whether the mountpoint is currently enabled or not>,
                        "media" : [
                                {
                                        "mid" : "<unique mid of this stream>",
                                        "label" : "<unique text label of this stream>",
                                        "msid" : "<msid of this stream, if configured>",
                                        "type" : "<audio|video|data>",
                                        "age_ms" : <how much time has passed since we last received media for this stream; optional>
                                },
                                {
                                        // Other streams, if available
                                }
                        ]
                },
                {
                        "id" : <unique ID of mountpoint #2>,
                        "type" : "<type of mountpoint #2, in line with the types introduced above>",
                        "description" : "<description of mountpoint #2>",
                        "metadata" : "<metadata of mountpoint #2, if any>",
                        "media" : [..]
                },
                ...
        ]
}
As you can see, the list request only returns very generic info on each mountpoint. In case you're interested in learning more details about a specific mountpoint, you can use the info request instead, which returns more information, or all of it if the mountpoint secret is provided in the request. An info request must be formatted like this:
{
        "request" : "info",
        "id" : <unique ID of the mountpoint to query>,
        "secret" : "<mountpoint secret; optional, can be used to return more info>"
}
If successful, this will have the plugin return an object containing more info on the mountpoint:
{
        "streaming" : "info",
        "info" : {
                "id" : <unique ID of mountpoint>,
                "name" : "<unique name of mountpoint>",
                "description" : "<description of mountpoint>",
                "metadata" : "<metadata of mountpoint, if any>",
                "secret" : "<secret of mountpoint; only available if a valid secret was provided>",
                "pin" : "<PIN to access mountpoint; only available if a valid secret was provided>",
                "is_private" : <true|false, depending on whether the mountpoint is listable; only available if a valid secret was provided>,
                "viewers" : <count of current subscribers, if any>,
                "enabled" : <true|false, depending on whether the mountpoint is currently enabled or not>,
                "type" : "<type of mountpoint>",
                "media" : [
                        {
                                "mid" : "<unique mid of this stream>",
                                "mindex" : "<unique mindex of this stream>",
                                "type" : "<audio|video|data>",
                                "label" : "<unique text label of this stream>",
                                "msid" : "<msid of this stream, if configured>",
                                "age_ms" : <how much time has passed since we last received media for this stream; optional>,
                                "pt" : <payload type, only present if RTP and configured>,
                                "codec" : "<codec name, only present if RTP and configured>",
                                "rtpmap" : "<SDP rtpmap value, only present if RTP and configured>",
                                "fmtp" : "<SDP fmtp value, only present if RTP and configured>",
                                ...
                        },
                        {
                                // Other streams, if available
                        }
                ]
        }
}
Considering the different mountpoint types you can create in this plugin, the nature of the rest of the returned info obviously depends on which mountpoint you're querying. This is especially true for RTP and RTSP mountpoints. Notice that info like the ports an RTP mountpoint is listening on will only be returned if you provide the correct secret, as it's otherwise treated as sensitive information and not returned to generic info calls.
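As a small client-side sketch (the helper name is made up, not part of any official client library), this is how an application might extract the currently enabled mountpoints from a parsed list response like the one shown earlier:

```python
def public_enabled_mountpoints(list_response):
    """Return (id, description) pairs for mountpoints that are enabled.

    'list_response' is the plugin's "list" response as a parsed dict;
    private mountpoints never appear in it, so only 'enabled' is checked.
    """
    return [
        (mp["id"], mp["description"])
        for mp in list_response.get("list", [])
        if mp.get("enabled", True)
    ]

if __name__ == "__main__":
    sample = {
        "streaming": "list",
        "list": [
            {"id": 1, "type": "rtp", "description": "Radio", "enabled": True},
            {"id": 2, "type": "rtp", "description": "Offline", "enabled": False},
        ],
    }
    print(public_enabled_mountpoints(sample))  # [(1, 'Radio')]
```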
We've seen how you can create a new mountpoint via configuration file, but you can create one via API as well, using the create request. Most importantly, you can also choose whether or not a create request should result in the mountpoint being saved to the configuration file, so that it's still available after a server restart. The common syntax for all create requests is the following:
{
        "request" : "create",
        "admin_key" : "<plugin administrator key; mandatory if configured>",
        "type" : "<type of the mountpoint to create; mandatory>",
        "id" : <unique ID to assign the mountpoint; optional, will be chosen by the server if missing>,
        "name" : "<unique name for the mountpoint; optional, will be chosen by the server if missing>",
        "description" : "<description of mountpoint; optional>",
        "metadata" : "<metadata of mountpoint; optional>",
        "secret" : "<secret to query/edit the mountpoint later; optional>",
        "pin" : "<PIN required for viewers to access mountpoint; optional>",
        "is_private" : <true|false, whether the mountpoint should be listable; true by default>,
        "media" : [
                {
                        "type" : "<audio|video|data>",
                        "mid" : "<unique mid to assign to this stream in negotiated PeerConnections>",
                        "msid" : "<msid to add to the m-line, if needed>",
                        "port" : <port to bind to, to receive media to relay>,
                        ...
                },
                ... other streams, if any ...
        ],
        ...
        "permanent" : <true|false, whether the mountpoint should be saved to the configuration file or not; false by default>,
        ...
}
Of course, different mountpoint types will have different properties you can specify in a create. Please refer to the documentation on configuration files to see the fields you can pass. The only important difference to highlight is that, unlike in configuration files, you will NOT have to escape semicolons with a trailing slash in those properties where a semicolon might be needed (e.g., audiofmtp or videofmtp).
Notice that, just as we introduced the possibility of configuring multistream mountpoints statically with a media array, the same applies when using the API to create them: just add a media JSON array containing the list of streams to create and the related properties, as you would do statically (that is, using generic properties like port, fmtp, etc., rather than the hardcoded audioport and the like), and it will work for dynamically created mountpoints as well.
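As an illustrative sketch (build_create_request is a hypothetical helper, not part of any official client library), this is how a client might assemble a multistream create request programmatically; per-stream codec details are omitted for brevity and would be added as needed:

```python
import json

def build_create_request(description, streams, admin_key=None, permanent=False):
    """Build a 'create' request for an RTP mountpoint with a media array.

    'streams' is a list of (type, mid, port) tuples, one per media stream.
    """
    request = {
        "request": "create",
        "type": "rtp",
        "description": description,
        "permanent": permanent,
        "media": [
            {"type": mtype, "mid": mid, "port": port}
            for (mtype, mid, port) in streams
        ],
    }
    # Only needed if the plugin was configured with an admin_key.
    if admin_key is not None:
        request["admin_key"] = admin_key
    return request

if __name__ == "__main__":
    req = build_create_request(
        "Multistream test",
        [("audio", "a", 5102), ("video", "v1", 5104), ("video", "v2", 5106)],
        admin_key="supersecret",
    )
    print(json.dumps(req, indent=2))
```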
A successful create will result in a created response:
{
        "streaming" : "created",
        "create" : "<unique name of the just created mountpoint>",
        "permanent" : <true|false, depending on whether the mountpoint was saved to the configuration file or not>,
        "stream" : {
                "id" : <unique ID of the just created mountpoint>,
                "type" : "<type of the just created mountpoint>",
                "description" : "<description of the just created mountpoint>",
                "is_private" : <true|false, depending on whether the new mountpoint is listable>,
                "ports" : [        // Only for RTP mountpoints
                        {
                                "type" : "<audio|video|data>",
                                "mid" : "<unique mid of stream #1>",
                                "msid" : "<msid of this stream, if configured>",
                                "port" : <port the plugin is listening on for this stream's media>
                        },
                        {
                                // Other streams, if available
                        }
                ]
                ...
        }
}
Notice that the additional information, namely the ports the mountpoint bound to, will only be added for new RTP mountpoints; otherwise, this is all a created response will contain. If you want to double check that everything in your create request went as expected, you may want to issue a followup info request and compare the results.
Once you've created a mountpoint, you can modify some (but not all) of its properties via an edit request. Namely, you can only modify generic properties like the mountpoint description, the secret, the PIN and whether or not the mountpoint should be listable. All other properties are considered immutable. Again, you can choose whether the changes should be permanent, e.g., saved to the configuration file, or not. Notice that an edit request requires the right secret, if the mountpoint has one, or it will return an error instead. The edit request must be formatted like this:
{
        "request" : "edit",
        "id" : <unique ID of the mountpoint to edit; mandatory>,
        "secret" : "<secret to edit the mountpoint; mandatory if configured>",
        "new_description" : "<new description for the mountpoint; optional>",
        "new_metadata" : "<new metadata for the mountpoint; optional>",
        "new_secret" : "<new secret for the mountpoint; optional>",
        "new_pin" : "<new PIN for the mountpoint, PIN will be removed if set to an empty string; optional>",
        "new_is_private" : <true|false, depending on whether the mountpoint should now be listable; optional>,
        "permanent" : <true|false, whether the changes should be saved to the configuration file or not; false by default>,
        "edited_event" : <true|false, whether an event will be sent to all viewers when the metadata is updated; false by default>
}
A successful edit will result in an edited response:
{
        "streaming" : "edited",
        "id" : <unique ID of the just edited mountpoint>,
        "permanent" : <true|false, depending on whether the changes were saved to the configuration file or not>
}
In case edited_event was set to true in the request, a successful edit will also result in an edited event being sent to all viewers when the metadata has changed:
{
        "streaming" : "edited",
        "id" : <unique ID of the just edited mountpoint>,
        "metadata" : "<updated metadata for the mountpoint>"
}
Just as you can create and edit mountpoints, you can of course also destroy them. Again, this applies to all mountpoints, whether created statically via configuration file or dynamically via API, and the mountpoint destruction can be made permanent in the configuration file as well. A destroy request must be formatted as follows:
{
        "request" : "destroy",
        "id" : <unique ID of the mountpoint to destroy; mandatory>,
        "secret" : "<secret to destroy the mountpoint; mandatory if configured>",
        "permanent" : <true|false, whether the mountpoint should be removed from the configuration file or not; false by default>
}
If successful, the result will be confirmed in a destroyed event:

{
        "streaming" : "destroyed",
        "id" : <unique ID of the just destroyed mountpoint>
}
Notice that destroying a mountpoint while viewers are still subscribed to it will result in all viewers being removed, and their PeerConnection closed as a consequence.
You can also dynamically enable and disable mountpoints via API. A disabled mountpoint is a mountpoint that exists, and still works as expected, but is not accessible to viewers until it's enabled again. This is a useful property, especially for mountpoints that need to be prepared in advance but must not be accessible until a specific moment, and a much better alternative than creating the mountpoint at the very last minute and destroying it afterwards. The syntax for both the enable and disable requests is the same, and looks like the following:
{
        "request" : "enable",
        "id" : <unique ID of the mountpoint to enable; mandatory>,
        "secret" : "<secret to enable the mountpoint; mandatory if configured>"
}
If successful, a generic ok is returned:
{ "streaming" : "ok" }
{
        "request" : "disable",
        "id" : <unique ID of the mountpoint to disable; mandatory>,
        "stop_recording" : <true|false, whether the recording should also be stopped or not; true by default>,
        "secret" : "<secret to disable the mountpoint; mandatory if configured>"
}
If successful, a generic ok is returned:
{ "streaming" : "ok" }
You can kick all viewers from a mountpoint using the kick_all request. Notice that this only removes all current viewers, but does not prevent them from starting to watch the mountpoint again. Please note this request works with all mountpoint types, except for on-demand streaming. The kick_all request has to be formatted as follows:
{
        "request" : "kick_all",
        "id" : <unique ID of the mountpoint to kick all viewers from; mandatory>,
        "secret" : "<mountpoint secret; mandatory if configured>"
}
If successful, a kicked_all response is returned:

{
        "streaming" : "kicked_all"
}
Finally, you can record a mountpoint to the internal Janus .mjr format using the recording request. The same request can also be used to stop recording. Although the same request is used in both cases, the syntax for the two use cases differs a bit, namely in terms of the type of some properties. Notice that, while for backwards compatibility you can still use the old audio, video and data named properties, they're now deprecated, and so you're highly encouraged to use the new drill-down media list instead.
To start a new recording of a mountpoint, the request should be formatted like this:
{
        "request" : "recording",
        "action" : "start",
        "id" : <unique ID of the mountpoint to manipulate; mandatory>,
        "media" : [        // Drill-down recording controls
                {
                        "mid" : "<mid of the stream to start recording>",
                        "filename" : "<base path/filename to use for the recording>"
                },
                {
                        // Recording controls for other streams, if provided
                }
        ]
}
To stop a recording, instead, this is the request syntax:
{
        "request" : "recording",
        "action" : "stop",
        "id" : <unique ID of the mountpoint to manipulate; mandatory>,
        "media" : [        // Drill-down recording controls
                {
                        "mid" : "<mid of the stream to stop recording>"
                },
                {
                        // Recording controls for other streams, if provided
                }
        ]
}
When using the deprecated properties, the audio, video and data properties are strings when starting a recording, and specify the base path to use for the recording filename; when stopping a recording, instead, they're interpreted as booleans. This is one more reason why you should migrate to the new media list, as it doesn't have this ambiguity between the two different requests. Notice that, as with all APIs that wrap .mjr recordings, the filename you specify here is not the actual filename: an .mjr extension is always added by the Janus core, so you should take this into account when tracking the related recording files.
Whether you started or stopped a recording, a successful request will always result in a simple ok response:
{ "streaming" : "ok" }
All the requests we've gone through so far are synchronous. This means that they return a response right away. That said, many of the requests this plugin supports are asynchronous instead, which means Janus will send an ack when they're received, and a response will only follow later on. This is especially true for requests dealing with the management and setup of mountpoint viewers, e.g., for the purpose of negotiating a WebRTC PeerConnection to receive media from a mountpoint.
To subscribe to a specific mountpoint, an interested viewer can make use of the watch request. As suggested by the request name, this instructs the plugin to set up a new PeerConnection to allow the new viewer to watch the specified mountpoint. The watch request must be formatted like this:
{
        "request" : "watch",
        "id" : <unique ID of the mountpoint to subscribe to; mandatory>,
        "pin" : "<PIN required to access the mountpoint; mandatory if configured>",
        "media" : [ <array of mids to subscribe to, as strings; optional, missing or empty array subscribes to all mids> ],
        "offer_audio" : <true|false; deprecated; whether or not audio should be negotiated; true by default if the mountpoint has audio>,
        "offer_video" : <true|false; deprecated; whether or not video should be negotiated; true by default if the mountpoint has video>,
        "offer_data" : <true|false; deprecated; whether or not datachannels should be negotiated; true by default if the mountpoint has datachannels>
}
As you can see, it's just a matter of specifying the ID of the mountpoint to subscribe to and, if needed, the PIN to access it in case it's protected. The media array is particularly interesting, as it allows you to subscribe to only a subset of the mountpoint media, which you can address via the related mid property of each stream. By default, in fact, a watch request will result in the plugin preparing a new SDP offer trying to negotiate all the media streams available in the mountpoint; in case the viewer knows they don't support one of the mountpoint codecs, though (e.g., the video in the mountpoint is VP8, but they only support H.264), or they're not interested in getting all the media (e.g., they're ok with just audio and not video, or don't have enough bandwidth for both), they can use this property to shape the SDP offer to their needs. The media array is optional, as a missing or empty array will simply be interpreted as a willingness to subscribe to all the streams in the mountpoint, which is the default behaviour. Notice that the order of the mids in the media array is irrelevant, as is how many times the same mid is listed: the presence of a mid is simply interpreted as an "on" switch for that stream, meaning it will be offered in the SDP.
The offer_audio, offer_video and offer_data properties are also available. They too allow you to subscribe to only a subset of the mountpoint media, but with a cruder approach: specifically, they dictate whether or not any audio, video or data stream should be offered. As anticipated, if successful this request will generate a new JSEP SDP offer, which will be attached to a preparing status event:
{ "status" : "preparing" }
At this stage, to complete the setup of the subscription, the viewer is supposed to send a JSEP SDP answer back to the plugin. This is done by means of a start request, which in this case MUST be associated with a JSEP SDP answer but otherwise requires no arguments:
{ "request" : "start" }
If successful, this request returns a starting status event:
{ "status" : "starting" }
Once this is done, all that's needed is waiting for the WebRTC PeerConnection establishment to succeed. As soon as that happens, the Streaming plugin can start relaying media from the subscribed mountpoint to the viewer.
Notice that the same exact steps we just went through (watch request, followed by a JSEP offer from the plugin, followed by a start request with a JSEP answer from the viewer) are also used when renegotiations are needed, e.g., for the purpose of ICE restarts.
As a viewer, you can temporarily pause and resume the whole media delivery with a pause and, again, a start request (in this case without any JSEP SDP answer attached). Neither expects other arguments, as the context is implicitly derived from the handle they're sent on:
{ "request" : "pause" }
{ "request" : "start" }
Unsurprisingly, they just result in, respectively, pausing and starting status events:
{ "status" : "pausing" }
{ "status" : "starting" }
For more drill-down manipulation of a subscription, a configure request can be used instead. This request allows viewers to dynamically change some properties associated with their media subscription, e.g., in terms of what should and should not be sent at a specific time. A configure request must be formatted as follows:
{
        "request" : "configure",
        "streams" : [
                {
                        "mid" : <mid of the m-line to tweak>,
                        "send" : <true|false, depending on whether the media addressed by the above mid should be relayed or not; optional>,
                        "substream" : <substream to receive (0-2), in case simulcasting is enabled; optional>,
                        "temporal" : <temporal layers to receive (0-2), in case simulcasting is enabled; optional>,
                        "fallback" : <how much time (in us, default 250000) without receiving packets will make us drop to the substream below; optional>,
                        "spatial_layer" : <spatial layer to receive (0-1), in case VP9-SVC is enabled; optional>,
                        "temporal_layer" : <temporal layers to receive (0-2), in case VP9-SVC is enabled; optional>,
                        "min_delay" : <minimum delay to enforce via the playout-delay RTP extension, in blocks of 10ms; optional>,
                        "max_delay" : <maximum delay to enforce via the playout-delay RTP extension, in blocks of 10ms; optional>
                },
                // Other streams, if any
        ]
}
While the deprecated audio, video and data properties can still be used as media-level pause/resume functionality, a better option is to specify the mid of the stream instead, and a send boolean property to specify whether that specific stream should be relayed or not. The pause and start requests, instead, pause and resume all streams at the same time. The substream and temporal properties, finally, only make sense when the mountpoint is configured with video simulcasting support, and as such the viewer is interested in receiving a specific substream or temporal layer rather than any of the other available ones. The spatial_layer and temporal_layer properties have exactly the same meaning, but within the context of VP9-SVC mountpoints, and will have no effect on mountpoints using a different video codec. In both cases, make sure you specify the mid of the stream in case multiple videos are available in a mountpoint, or the request may have no effect.
Another interesting feature in the Streaming plugin is the so-called mountpoint "switching". Basically, when subscribed to a specific mountpoint and receiving media from there, you can at any time "switch" to a different mountpoint, and as such start receiving media from that other mountpoint instead. Think of it as changing channel on a TV: you keep on using the same PeerConnection, and the plugin simply changes the source of the media transparently. Of course, while powerful and effective, this request has some limitations. First of all, it only works with RTP mountpoints, and not with other mountpoint types; besides, the two mountpoints must have the same media configuration, that is, use the same codecs, the same payload types, etc. In fact, since the same PeerConnection is used for this feature, switching to a mountpoint with a different configuration might result in media incompatible with the PeerConnection setup being relayed to the viewer, and as such in no audio/video being played. That said, a switch request must be formatted like this:
{
        "request" : "switch",
        "id" : <unique ID of the new mountpoint to switch to; mandatory>
}
If successful, you'll be unsubscribed from the previous mountpoint, and subscribed to the new mountpoint instead. The event to confirm the switch was successful will look like this:
{
        "switched" : "ok",
        "id" : <unique ID of the new mountpoint>
}
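The "same media configuration" constraint described above can be sanity-checked client-side before issuing a switch. The sketch below is purely illustrative and is not a check the plugin itself exposes: it compares the per-mid type, codec and payload type from the 'media' arrays that an info request returns (codec and pt details require providing the mountpoint secret):

```python
def can_switch(current_info, target_info):
    """Rough client-side check that two RTP mountpoints look switch-compatible.

    Both arguments are the 'info' objects returned by an authenticated
    'info' request. Streams must match by mid, type, codec and payload type.
    """
    def fingerprint(info):
        return sorted(
            (m.get("mid"), m.get("type"), m.get("codec"), m.get("pt"))
            for m in info.get("media", [])
        )
    return fingerprint(current_info) == fingerprint(target_info)
```

A mismatch doesn't make the switch request fail per se, but it does mean the relayed media may be incompatible with the negotiated PeerConnection, as explained above.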
Finally, to stop the subscription to a mountpoint and tear down the related PeerConnection, you can use the stop request. Since the context is implicit, no other arguments are required:
{ "request" : "stop" }
If successful, the plugin will attempt to tear down the PeerConnection, and will send back a stopping status event:
{ "status" : "stopping" }
Once a PeerConnection has been torn down and the subscription closed, as a viewer you're free to subscribe to a different mountpoint instead. In fact, while you can't watch more than one mountpoint at the same time on the same handle, there's no limit on how many mountpoints you can watch in sequence, again on the same handle. If you're interested in subscribing to multiple mountpoints at the same time, instead, you'll have to create multiple handles for the purpose.