Galène videoconferencing server discussion list archives
From: Juliusz Chroboczek <>
Subject: [Galene] Merged simulcast and SVC into master
Date: Sat, 15 May 2021 13:16:07 +0200

Dear all,

I've just merged all of my simulcast and SVC code into master.  This means
that the server is now feature-complete for 0.4; all that's needed is some
work on the client.

The code is running on, and I need you to test it,
especially with larger groups.  ( is still running
the 0.3 branch.)

It's been a fair amount of work, so please be patient while I ramble
a bit and describe the different pieces.

1. The sender checks if there are at least three users in the group.  If
   that's the case, it sends two video tracks for each video -- one
   scaled-down stream at 100 kbps, and the normal stream.

   Additionally, both streams have a special "scalable" or "layered"
   structure, where frames are indexed with an integer (the "temporal
   index" or "layer number") and only depend on frames with an equal or
   lower layer index.  If you're visually inclined, see figure 3 of

   Caveats: we disable the feature on Firefox, which uses a different
   protocol for simulcast that Pion doesn't support yet.  It is unknown
   whether it works on Safari.  I've seen strange behaviour with VP9,
   I need to experiment some more.
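   As a rough sketch of the sender-side rule above (a hypothetical Go
   helper for illustration -- the real sender logic lives in the
   JavaScript client):

```go
package main

import "fmt"

// videoTracks sketches the rule described above: with at least three
// users in the group, a second, scaled-down track capped at roughly
// 100 kbps is sent alongside the normal one.
// (Hypothetical helper, not Galène's actual code.)
func videoTracks(groupSize int) []string {
	if groupSize >= 3 {
		return []string{"low (~100 kbps, scaled down)", "normal"}
	}
	return []string{"normal"}
}

func main() {
	fmt.Println(videoTracks(2)) // small group: single track
	fmt.Println(videoTracks(5)) // larger group: simulcast pair
}
```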

2. When distributing flows to clients, the server picks among the
   available tracks and sends just one track.  The choice of track is
   controlled by the receiving client (using the "request" and
   "streamRequest" protocol messages, which are exported by protocol.js
   as the ServerConnection.request and Stream.request methods).
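   The selection itself is simple; here is a sketch of the idea (a
   hypothetical helper, not Galène's actual code or wire format):

```go
package main

import "fmt"

// pickTrack sketches the server-side selection described above:
// among the simulcast tracks available for a stream, exactly one is
// forwarded, and the choice is driven by what the receiving client
// requested.  (Hypothetical helper, not Galène's actual code.)
func pickTrack(available []string, requested string) string {
	for _, t := range available {
		if t == requested {
			return t
		}
	}
	return available[0] // requested track unavailable: fall back
}

func main() {
	tracks := []string{"low", "normal"}
	fmt.Println(pickTrack(tracks, "normal")) // prints "normal"
	fmt.Println(pickTrack(tracks, "high"))   // not offered: prints "low"
}
```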

3. Instead of sending video at the slowest rate among all clients, we
   pick the rate as

       min(slowest * 2^(layers - 1), fastest)

   Note that this degenerates to the slowest rate if the video is not
   scalable, just like before.  The ratio might need to be tweaked,
   I haven't done much testing yet.

   Caveat: since the video rate is now higher than it used to be, this
   might put more load on your server.  I'll implement a per-group
   throughput limit if this turns out to be a problem.
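   A worked sketch of the formula, with hypothetical names and rates in
   bits per second:

```go
package main

import "fmt"

// sendRate implements the rule quoted above:
//     min(slowest * 2^(layers-1), fastest)
// With layers == 1 (non-scalable video) it degenerates to the slowest
// receiver's rate, just like before.
func sendRate(slowest, fastest uint64, layers uint) uint64 {
	r := slowest << (layers - 1) // slowest * 2^(layers-1)
	if r > fastest {
		return fastest
	}
	return r
}

func main() {
	// Three temporal layers: a 300 kbps slowest receiver allows
	// sending at up to 1.2 Mbps, clamped by the fastest receiver.
	fmt.Println(sendRate(300_000, 2_000_000, 3)) // 1200000
	fmt.Println(sendRate(300_000, 800_000, 3))   // clamped: 800000
	fmt.Println(sendRate(300_000, 2_000_000, 1)) // non-scalable: 300000
}
```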

4. The server detects whether a given flow is scalable.  If the client is
   congested, the server uses the layered structure to decimate the
   flow -- a congested client only receives frames from the lower layers.
   The nice property is that the resulting flow is perfectly compliant
   VP8, and no special support is needed on the receiver.

   We switch layers at most once every 500ms, which is the rate at which
   we get congestion indications from the receiver.  There are no visible
   artifacts when switching -- SVC is as close to magic as you can get.

   The rewriting has some overhead, both in space and time.  Both are
   negligible: the space cost is on the order of 1kB per flow, while the
   cost of rewriting the packets is just a couple percent in my tests.  We
   detect the case where no rewriting is necessary, and take a fast path.

   Caveat: the result is that congested clients receive jerky video
   instead of the video quality decreasing for everyone equally.  This is
   good for lectures, but not necessarily for dancing lessons.  VP9 and
   AV1 allow spatial scalability (reducing the resolution instead of the
   framerate), and this appears to be implemented in Chromium 92 for AV1
   (but not for VP9).

   Caveat: the highest layer consists of frames that can be individually
   discarded, which implies that we could do finer congestion control by
   discarding just a fraction of the higher-layer frames.  We don't do
   that right now; we simply fall back to the intermediate layer.
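   To make the decimation concrete, here is a sketch (hypothetical
   helper, not Galène's actual code) using the layer indices of one
   cycle of a typical three-layer temporal pattern:

```go
package main

import "fmt"

// decimate sketches the layered decimation described above: because
// frames only depend on frames with an equal or lower layer index,
// dropping every frame above maxLayer still yields a perfectly
// compliant flow.  (Hypothetical helper, not Galène's actual code.)
func decimate(layers []int, maxLayer int) []int {
	var kept []int
	for _, l := range layers {
		if l <= maxLayer {
			kept = append(kept, l)
		}
	}
	return kept
}

func main() {
	// Layer indices of one cycle of a typical three-layer pattern.
	gop := []int{0, 2, 1, 2, 0, 2, 1, 2}
	fmt.Println(len(decimate(gop, 2))) // 8 frames: full framerate
	fmt.Println(len(decimate(gop, 1))) // 4 frames: half framerate
	fmt.Println(len(decimate(gop, 0))) // 2 frames: quarter framerate
}
```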

5. The client dynamically selects the flow it requests.  Currently, the
   logic is as follows:

     - if the video is in full screen, we request the high-resolution flow;
     - if the video's (scrollWidth * scrollHeight) is less than 76800
       (i.e. 320x240), we request the low-resolution flow;
     - otherwise, we request the high-resolution flow.

   This can be overridden in the side menu (choose "Receive: low").

   Caveat: a client switching flows requests a new keyframe, which causes
   a degradation of quality for all other clients.  Perhaps it's worth
   adding some hysteresis; we'll see.

   Caveat: if the browser window is small, the server initially sends
   the high-resolution flow, and the client immediately requests the
   low-resolution flow.  A solution would be to use two-step negotiation,
   which would add one RTT to every flow establishment.  I'll consider
   that, but not for the next version.

   Caveat: in the current implementation, switching flows causes visible
   flicker.  A solution would be to create a new <video> element instead
   of changing the stream mid-video.  Another solution would be to stitch
   streams in the server rather than switch streams in the client, but
   that would mean that both streams need to use the same codec; I'm
   hoping that at some point I can use VP8 for the low-resolution stream
   and AV1 for the high-resolution one, which would enable sending
   high-resolution AV1 to clients that support it and low-resolution VP8
   to the others.
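   The selection rules above can be sketched as follows (a hypothetical
   Go helper for illustration -- the real code lives in the JavaScript
   client):

```go
package main

import "fmt"

// requestedFlow sketches the client-side choice described above.  The
// threshold 76800 is 320*240 pixels.  An explicit "Receive: low"
// override from the side menu takes precedence over the automatic
// choice.  (Hypothetical helper, not Galène's actual code.)
func requestedFlow(override string, fullScreen bool, scrollWidth, scrollHeight int) string {
	if override != "" {
		return override
	}
	if fullScreen {
		return "high"
	}
	if scrollWidth*scrollHeight < 76800 {
		return "low"
	}
	return "high"
}

func main() {
	fmt.Println(requestedFlow("", true, 320, 180))      // full screen: high
	fmt.Println(requestedFlow("", false, 320, 180))     // small window: low
	fmt.Println(requestedFlow("", false, 1280, 720))    // large window: high
	fmt.Println(requestedFlow("low", true, 1920, 1080)) // override wins: low
}
```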


-- Juliusz
