Galène videoconferencing server discussion list archives
* [Galene] Logging
@ 2021-01-07 21:38 Juliusz Chroboczek
  2021-01-07 22:45 ` [Galene] Logging Michael Ströder
                   ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-07 21:38 UTC (permalink / raw)
  To: galene

It turns out that Pion's logging is controlled by environment variables.
This is described here:

  https://github.com/pion/webrtc/wiki/Debugging-WebRTC

Galène could potentially:

  - set the environment variables;
  - hook into Pion's internals to redirect the log somewhere (to a web page?).
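
For the second option, something along these lines might work (a rough,
untested sketch; it assumes pion/webrtc v3 and the DefaultLoggerFactory
from github.com/pion/logging, and "logSink" stands for whatever io.Writer
we decide to redirect the output to -- a file, a ring buffer served on
a web page, etc.):

  factory := &logging.DefaultLoggerFactory{
      Writer:          logSink,
      DefaultLogLevel: logging.LogLevelWarn,
      ScopeLevels: map[string]logging.LogLevel{
          "ice": logging.LogLevelInfo, // per-subsystem override
      },
  }
  s := webrtc.SettingEngine{LoggerFactory: factory}
  api := webrtc.NewAPI(webrtc.WithSettingEngine(s))
  // ...and create peer connections with api.NewPeerConnection() instead
  // of webrtc.NewPeerConnection(), so that they pick up the factory.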

Opinions?

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-07 21:38 [Galene] Logging Juliusz Chroboczek
@ 2021-01-07 22:45 ` Michael Ströder
  2021-01-08  0:35 ` Antonin Décimo
  2021-01-08 12:40 ` Toke Høiland-Jørgensen
  2 siblings, 0 replies; 32+ messages in thread
From: Michael Ströder @ 2021-01-07 22:45 UTC (permalink / raw)
  To: Juliusz Chroboczek, galene

On 1/7/21 10:38 PM, Juliusz Chroboczek wrote:
> It turns out that Pion's logging is controlled by environment variables.
> This is described here:
> 
>   https://github.com/pion/webrtc/wiki/Debugging-WebRTC

Great!

Not so great for you because I will probably ask more questions. ;-)

> Galène could potentially:
>   - set the environment variables;
>   - hook into Pion's internals to redirect the log somewhere (to a web page?).

To me it seems the systemd unit can handle all this. See examples below.

Ciao, Michael.

----- /etc/systemd/system/galene.service -----
[Unit]
Description=Galene Videoconferencing Server
Requires=local-fs.target network.target
After=local-fs.target nss-lookup.target time-sync.target

[Service]
Type=simple
EnvironmentFile=-/etc/sysconfig/galene
ExecStart=/usr/sbin/galene $ARGS
User=galene
Group=galene
Restart=on-failure
LimitNOFILE=65536

# redirect stderr and stdout to log file
StandardOutput=append:/var/log/galene/stdout.log
StandardError=append:/var/log/galene/stderr.log

[Install]
WantedBy=multi-user.target

----- /etc/sysconfig/galene -----
# Configure command-line arguments for Galène video conference server
ARGS="-http 0.0.0.0:8444 -data /etc/galene -groups /var/lib/galene/groups -static /usr/share/galene/static -recordings /var/lib/galene/recordings"

# pion debug messages set by env vars
#PION_LOG_TRACE="all"
#PION_LOG_DEBUG="all"
PIONS_LOG_INFO="ice"
PIONS_LOG_WARNING="all"

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-07 21:38 [Galene] Logging Juliusz Chroboczek
  2021-01-07 22:45 ` [Galene] Logging Michael Ströder
@ 2021-01-08  0:35 ` Antonin Décimo
  2021-01-08 12:40 ` Toke Høiland-Jørgensen
  2 siblings, 0 replies; 32+ messages in thread
From: Antonin Décimo @ 2021-01-08  0:35 UTC (permalink / raw)
  To: Juliusz Chroboczek, galene

> Galène could potentially:
>
> - set the environment variables;

I think it is better done outside of Galène, in the shell or the
service manager.

> - hook into Pion's internals to redirect the log somewhere

Would there be a need to dissociate Galène's own logs from Pion's,
which could not be achieved by a header in the log format?

> to a web page?

Would you retain the logs for the whole execution of the program, tee
the logs in real-time, re-read the file where the logs would have been
written, get them from the service manager, or something else?

Is there a noticeable performance impact of logging everything and
letting the service manager handle the logging level? There's at least
one big service manager on Linux that supports that if the logs are in
syslog format. A distribution-provided service file could always
enable all the logs from Galène and Pion and let the system
administrator discard unwanted levels of logs.
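
For what it's worth, the "syslog format" part is just a line prefix; here
is a minimal sketch of what that could look like on the Galène side (plain
standard library, assuming the unit logs to the journal rather than to
files; the messages are made up):

  package main

  import "log"

  const (
      prioWarning = "<4>" // syslog LOG_WARNING, see sd-daemon(3)
      prioInfo    = "<6>" // syslog LOG_INFO
      prioDebug   = "<7>" // syslog LOG_DEBUG
  )

  func main() {
      log.SetFlags(0) // journald adds its own timestamps
      log.Print(prioInfo + "galene: listening on :8443")
      log.Print(prioWarning + "galene: TURN server did not answer")
  }

systemd-journald then stores each line with the corresponding priority,
and the administrator filters with e.g. "journalctl -u galene -p warning".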

IMO Galène is fine as-is (it could log more), as all the parameters
can be tuned outside of Galène.

-- Antonin

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-07 21:38 [Galene] Logging Juliusz Chroboczek
  2021-01-07 22:45 ` [Galene] Logging Michael Ströder
  2021-01-08  0:35 ` Antonin Décimo
@ 2021-01-08 12:40 ` Toke Høiland-Jørgensen
  2021-01-08 13:28   ` Juliusz Chroboczek
  2 siblings, 1 reply; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-08 12:40 UTC (permalink / raw)
  To: Juliusz Chroboczek, galene

Juliusz Chroboczek <jch@irif.fr> writes:

> It turns out that Pion's logging is controlled by environment variables.
> This is described here:
>
>   https://github.com/pion/webrtc/wiki/Debugging-WebRTC
>
> Galène could potentially:
>
>   - set the environment variables;
>   - hook into Pion's internals to redirect the log somewhere (to a web page?).
>
> Opinions?

I just tried setting PIONS_LOG_INFO=ice and restarting. Which got me
this:

Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Setting new connection state: Checking
Jan 08 12:35:14 video galene[532657]: ice WARNING: 2021/01/08 12:35:14 pingAllCandidates called with no candidate pairs. Connection is not possible yet.
Jan 08 12:35:14 video galene[532657]: ice WARNING: 2021/01/08 12:35:14 pingAllCandidates called with no candidate pairs. Connection is not possible yet.
Jan 08 12:35:14 video galene[532657]: ice WARNING: 2021/01/08 12:35:14 pingAllCandidates called with no candidate pairs. Connection is not possible yet.
Jan 08 12:35:14 video galene[532657]: ice WARNING: 2021/01/08 12:35:14 pingAllCandidates called with no candidate pairs. Connection is not possible yet.
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 45.145.xx.xx:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 10.36.yy.yy:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp6 host 2a0c:4d80:zz:zz::2:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 45.145.xx.xx:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 10.36.yy.yy:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp6 host 2a0c:4d80:zz:zz::2:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 45.145.xx.xx:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 10.36.yy.yy:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp6 host 2a0c:4d80:zz:zz::2:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 45.145.xx.xx:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp4 host 10.36.yy.yy:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Ignoring remote candidate with tcpType active: tcp6 host 2a0c:4d80:zz:zz::2:9
Jan 08 12:35:14 video galene[532657]: ice INFO: 2021/01/08 12:35:14 Setting new connection state: Connected
Jan 08 12:35:54 video galene[532657]: ice INFO: 2021/01/08 12:35:54 Setting new connection state: Closed

Which doesn't really tell me much. What's happening here, is the Turn
server working? What's with the remote candidates being ignored?

What would be most useful in terms of checking configuration is if
Galene would *on startup* emit something like:

Checking Turn server candidate turn.example.com:443?transport=tcp: success!

for each configured TURN server.
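
Even something as crude as this would help (very rough sketch using
github.com/pion/stun; it only checks that the server answers a STUN
binding request over UDP, whereas a real check would do a full TURN
allocation and also cover the ?transport=tcp case):

  func checkTurnServer(addr string) error {
      c, err := stun.Dial("udp", addr)
      if err != nil {
          return err
      }
      defer c.Close()
      req := stun.MustBuild(stun.TransactionID, stun.BindingRequest)
      var probeErr error
      if err := c.Do(req, func(res stun.Event) {
          probeErr = res.Error
      }); err != nil {
          return err
      }
      return probeErr // nil means the server replied
  }

called once per configured server at startup, with the result logged.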

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 12:40 ` Toke Høiland-Jørgensen
@ 2021-01-08 13:28   ` Juliusz Chroboczek
  2021-01-08 13:52     ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-08 13:28 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

> Which doesn't really tell me much. What's happening here, is the Turn
> server working?

The direct connection was successful, so the TURN server was never contacted.

> What's with the remote candidates being ignored?

They're active TCP candidates, and I've disabled support for ICE-TCP in Galène:

  https://github.com/pion/webrtc/issues/1356

ICE-TCP has the potential to make TURN redundant, so it would greatly
simplify the deployment of Galène.  Unfortunately, I've found it to be
unreliable in Pion, so I've disabled it until I can find the time to work
out what the issue is.

> What would be most useful in terms of checking configuration is if
> Galene would *on startup* emit something like:

> Checking Turn server candidate turn.example.com:443?transport=tcp: success!

Ah.  What you want is not logging, but active monitoring.  Could you
please file an issue?

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 13:28   ` Juliusz Chroboczek
@ 2021-01-08 13:52     ` Toke Høiland-Jørgensen
  2021-01-08 14:33       ` Michael Ströder
  2021-01-08 15:34       ` Juliusz Chroboczek
  0 siblings, 2 replies; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-08 13:52 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>> Which doesn't really tell me much. What's happening here, is the Turn
>> server working?
>
> The direct connection was successful, so the TURN server was never
> contacted.

Right, I see. What are the conditions where a direct connection fails?
Is it just when the client is behind a NAT?

>> What's with the remote candidates being ignored?
>
> They're active TCP candidates, and I've disabled support for ICE-TCP in Galène:
>
>   https://github.com/pion/webrtc/issues/1356
>
> ICE-TCP has the potential to make TURN redundant, so it would greatly
> simplify the deployment of Galène.  Unfortunately, I've found it to be
> unreliable in Pion, so I've disabled it until I can find the time to work
> out what the issue is.

Right, gotcha.

>> What would be most useful in terms of checking configuration is if
>> Galene would *on startup* emit something like:
>
>> Checking Turn server candidate turn.example.com:443?transport=tcp: success!
>
> Ah. What you want is not logging, but active monitoring.

Well, both, ideally :)
As for just logging, Galene does propagate JSON parsing errors from the
Go JSON parser, but because the files are not read on startup you can
start up the daemon and everything looks fine until someone tries to
connect...
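
A startup pass over the groups directory would already catch the syntax
errors -- a hand-wavy sketch, not based on the actual galene code; it only
checks that each description is well-formed JSON and deliberately allows
unknown fields:

  // imports: encoding/json, log, os, path/filepath, strings
  func checkGroupDescriptions(dir string) {
      filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
          if err != nil || info.IsDir() || !strings.HasSuffix(path, ".json") {
              return err
          }
          data, err := os.ReadFile(path)
          if err != nil {
              log.Printf("%v: %v", path, err)
              return nil
          }
          var desc map[string]interface{}
          if err := json.Unmarshal(data, &desc); err != nil {
              log.Printf("%v: %v", path, err)
          }
          return nil
      })
  }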

> Could you please file an issue?

Sure: https://github.com/jech/galene/issues/30

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 13:52     ` Toke Høiland-Jørgensen
@ 2021-01-08 14:33       ` Michael Ströder
  2021-01-08 15:13         ` Juliusz Chroboczek
  2021-01-08 15:34       ` Juliusz Chroboczek
  1 sibling, 1 reply; 32+ messages in thread
From: Michael Ströder @ 2021-01-08 14:33 UTC (permalink / raw)
  To: galene

On 1/8/21 2:52 PM, Toke Høiland-Jørgensen wrote:
> Juliusz Chroboczek <jch@irif.fr> writes:
>> Ah. What you want is not logging, but active monitoring.
> 
> Well, both, ideally :)

Me too.

I'd love to see more connection meta data in /stats so I can tell which
user has which problem:

- connection's IP address
- nick name / user name
- browser's user agent header
- maybe SDP

Or have this connection meta data logged, together with the connection/channel ID.

I suspect most of the users' minor or major issues are caused by
problems with their  local setup. And more information would help
tracking down those.

FWIW: Did anyone look at the admin/monitor page of the Janus gateway?
Unfortunately that's one of the non-functional use cases on their public
demo page [1], for very good reasons I guess. I could try to give Galène
devs access to the demo pages of my own Janus installation if that would
be useful.

Ciao, Michael.

[1] https://janus.conf.meetecho.com/demos.html

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 14:33       ` Michael Ströder
@ 2021-01-08 15:13         ` Juliusz Chroboczek
  2021-01-08 17:34           ` Michael Ströder
  0 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-08 15:13 UTC (permalink / raw)
  To: Michael Ströder; +Cc: galene

> I'd love to see more connection meta data in /stats so I can tell which
> user has which problem:

> - connection's IP address
> - nick name / user name

The stats page is deliberately anonymised.  Also, you cannot spy on
a conversation without appearing in the user list even if you have op
privileges, just like you cannot unmute a user.

> I suspect most of the users' minor or major issues are caused by
> problems with their  local setup. And more information would help
> tracking down those.

Be assured that I am well aware that it complicates troubleshooting -- you
can imagine how difficult it was in the early days, when Galène would
misbehave during a department meeting with important people.  But that's
a price well worth paying in exchange for respecting the users' privacy.

> FWIW: Did anyone look at admin/monitor page of Janus gateway?

Perhaps you could publish a screenshot?

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 13:52     ` Toke Høiland-Jørgensen
  2021-01-08 14:33       ` Michael Ströder
@ 2021-01-08 15:34       ` Juliusz Chroboczek
  2021-01-08 19:34         ` Toke Høiland-Jørgensen
  1 sibling, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-08 15:34 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

>> The direct connection was successful, so the TURN server was never
>> contacted.

> Right, I see. What are the conditions where a direct connection fails?
> Is it just when the client is behind a NAT?

If only one side is behind NAT, you should get a direct connection with no
outside help.

If both sides are behind NAT, they will attempt to establish a paired set
of NAT mappings by synchronising through a STUN server, and then establish
a direct connection (TURN is a superset of STUN, no need to set up
a dedicated STUN server if you've got TURN).  This is usually successful,
but might occasionally fail due to packet loss.

If there is a firewall in the way that drops UDP traffic, or traffic to
high-numbered ports, then the peers will fall back to TURN.  TURN supports
both UDP and TCP proxying, since some networks allow outgoing UDP only to
some ports (typically 1194 for OpenVPN and 10000 for the Cisco VPN client).

ICE-TCP adds an additional step by allowing direct connections over TCP.
It is not designed for peer-to-peer communication, only for client-server
communication, so it should be a perfect fit for Galène.

Once a functioning pair is found, ICE will stick to it.  In Galène, we
restart ICE whenever the connection fails, and also when the user types
"/renegotiate".  (I'd like to detect client-side network changes and restart
ICE automatically, but current browsers don't give me a suitable event
handler.)
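
An ICE restart is just an offer with fresh ICE credentials; roughly the
following with Pion (a sketch assuming pion/webrtc v3, not the actual
Galène code):

  func restartICE(pc *webrtc.PeerConnection) error {
      offer, err := pc.CreateOffer(&webrtc.OfferOptions{ICERestart: true})
      if err != nil {
          return err
      }
      if err := pc.SetLocalDescription(offer); err != nil {
          return err
      }
      // the offer then goes to the peer over the signalling channel;
      // applying its answer restarts ICE probing, while media keeps
      // flowing on the old candidate pair until a new one is validated
      return nil
  }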

All of this happens in parallel, with arbitrary timeouts, so the actual
behaviour tends to be non-deterministic and next to impossible to debug.
See for example

  https://github.com/pion/ice/issues/305

> As for just logging, Galene does propagate JSON parsing errors from the
> Go JSON parser, but because the files are not read on startup you can
> start up the daemon and everything looks fine until someone tries to
> connect...

Hmm... I'm actually using the current behaviour, by using undefined fields
in my JSON files ("contact" and "comment").

But I think you're right.  This will happen in the background, so you'll only
get warnings in the log.  After 0.2 is out, though, since Galène is frozen
right now.

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 15:13         ` Juliusz Chroboczek
@ 2021-01-08 17:34           ` Michael Ströder
  2021-01-08 18:00             ` Juliusz Chroboczek
  0 siblings, 1 reply; 32+ messages in thread
From: Michael Ströder @ 2021-01-08 17:34 UTC (permalink / raw)
  To: galene

On 1/8/21 4:13 PM, Juliusz Chroboczek wrote:
> Michael Ströder wrote:
>> I'd love to see more connection meta data in /stats so I can tell which
>> user has which problem:
>> - connection's IP address
>> - nick name / user name
> 
> The stats page is deliberately anonymised.

Hmm, is the /stats page meant to be publicly available? It's definitely
not at my site and never will be (limited by IP addresses and
password-protected).

So I'd argue that in my case the IP addresses, user agent info and user
names are already visible in the logs to the same personnel who can see
the /stats page.

Could this be made configurable?

> Also, you cannot spy on a conversation without appearing in the user
> list even if you have op privileges, just like you cannot unmute a
> user.
Good.

>> I suspect most of the users' minor or major issues are caused by
>> problems with their  local setup. And more information would help
>> tracking down those.
> 
> Be assured that I am well aware that it complicates troubleshooting -- you
> can imagine how difficult it was in the early days, when Galène would
> misbehave during a department meeting with important people.

And the result clearly shows the advantage of this eat-your-own-dogfood
approach. :-)

>> FWIW: Did anyone look at the admin/monitor page of the Janus gateway?
> 
> Perhaps you could publish a screenshot?

Too big/long for a screenshot. Also you can click into the connection
information.

Ciao, Michael.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 17:34           ` Michael Ströder
@ 2021-01-08 18:00             ` Juliusz Chroboczek
  0 siblings, 0 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-08 18:00 UTC (permalink / raw)
  To: Michael Ströder; +Cc: galene

>> The stats page is deliberately anonymised.

> Could this be made configurable?

Sorry.  You're welcome to patch your own copy, you're welcome to log by
using a reverse proxy, but I don't feel comfortable adding code upstream
that makes it easy to spy upon the users.

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 15:34       ` Juliusz Chroboczek
@ 2021-01-08 19:34         ` Toke Høiland-Jørgensen
  2021-01-08 19:56           ` Juliusz Chroboczek
  0 siblings, 1 reply; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-08 19:34 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>>> The direct connection was successful, so the TURN server was never
>>> contacted.
>
>> Right, I see. What are the conditions where a direct connection fails?
>> Is it just when the client is behind a NAT?
>
> If only one side is behind NAT, you should get a direct connection with no
> outside help.

And one of these sides is always the Galene server, right? So if that
has public IPs, TURN is only used as an alternative port if the client
is behind a firewall blocking UDP? Or am I missing something?

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 19:34         ` Toke Høiland-Jørgensen
@ 2021-01-08 19:56           ` Juliusz Chroboczek
  2021-01-09  0:18             ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-08 19:56 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

> And one of these sides is always the Galene server, right? So if that
> has public IPs, TURN is only used as an alternative port if the client
> is behind a firewall blocking UDP?

Assuming no firewall, that's correct, except for the case of repeated UDP
packet losses causing ICE fallback to TURN.

Like you, I was hoping I could get away without using a TURN server.  In
practice, I have found that there are just too many networks that block
outgoing traffic.  For example, the university's WiFi network is as
restrictive as it can possibly get away with without violating the Eduroam
service definition (see page 32 of [1]), while the network in the computer
rooms allows no outgoing traffic whatsoever (I need to go over WiFi when
I do so-called "hybrid" teaching, where some of the students are at home).

(And now you know why I implemented the "Blackboard mode".)

[1] https://www.eduroam.org/wp-content/uploads/2016/05/GN3-12-192_eduroam-policy-service-definition_ver28_26072012.pdf

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-08 19:56           ` Juliusz Chroboczek
@ 2021-01-09  0:18             ` Toke Høiland-Jørgensen
  2021-01-09 13:34               ` Juliusz Chroboczek
  0 siblings, 1 reply; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-09  0:18 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>> And one of these sides is always the Galene server, right? So if that
>> has public IPs, TURN is only used as an alternative port if the client
>> is behind a firewall blocking UDP?
>
> Assuming no firewall, that's correct, except for the case of repeated UDP
> packet losses causing ICE fallback to TURN.
>
> Like you, I was hoping I could get away without using a TURN server.  In
> practice, I have found that there are just too many networks that block
> outgoing traffic.

So in this case, how is using a TURN server different than just having
Galene itself listen on a bunch of UDP ports and offer each of those?

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-09  0:18             ` Toke Høiland-Jørgensen
@ 2021-01-09 13:34               ` Juliusz Chroboczek
  2021-01-10 13:47                 ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-09 13:34 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

> So in this case, how is using a TURN server different than just having
> Galene itself listen on a bunch of UDP ports and offer each of those?

Short answer: Galène needs hundreds or even thousands of ports.  TURN can
use a single port, which you can pick such that it's accessible from your
clients.

Long answer.  Galène uses WebRTC.  WebRTC uses RTP.  RTP transmits
a collection of tracks.  RTP can work in two modes:

  - port multiplexing: use the UDP port to associate packets with tracks;
  - SSRC multiplexing: multiplex over a single UDP 5-tuple, and use data in
    the RTP header to associate packets with tracks (this is called
    "bundling" in WebRTC).

Now, there are two limitations:

  1. Pion is unable to use a single local UDP port for multiple RTP
     sessions: it uses a distinct UDP port for every RTP session, even if
     sessions have distinct remote addresses;

  2. Galène uses a distinct RTP session for every stream (pair of audio
     and video tracks); in other words, Galène only bundles at most one
     video and one audio track.

Fixing (2) is not a high priority, since it would make the code way more
complex (bundling is fragile, there are some limitations to what you can
do when bundling streams with different configurations and some limitations
to how you can mutate a bundle without tearing it down).  It might
cause issues with AQMs (is it a good idea to use the same 5-tuple for
different videos?), but you're the specialist here.
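
What can be done today, short of fixing (1), is to confine Pion's ports to
a known range, so that the firewall rule is at least predictable even if
the number of ports isn't.  A sketch, assuming pion/webrtc v3's
SettingEngine (the 50000-51000 range is arbitrary):

  func newAPI() (*webrtc.API, error) {
      s := webrtc.SettingEngine{}
      // every RTP session still gets its own UDP port, but they all fall
      // within a range that can be opened on the firewall
      if err := s.SetEphemeralUDPPortRange(50000, 51000); err != nil {
          return nil, err
      }
      return webrtc.NewAPI(webrtc.WithSettingEngine(s)), nil
  }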

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-09 13:34               ` Juliusz Chroboczek
@ 2021-01-10 13:47                 ` Toke Høiland-Jørgensen
  2021-01-10 15:14                   ` [Galene] Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
  2021-01-10 15:17                   ` [Galene] Re: Logging Juliusz Chroboczek
  0 siblings, 2 replies; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-10 13:47 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>> So in this case, how is using a TURN server different than just having
>> Galene itself listen on a bunch of UDP ports and offer each of those?
>
> Short answer: Galène needs hundreds or even thousands of ports.  TURN can
> use a single port, which you can pick such that it's accessible from your
> clients.
>
> Long answer.  Galène uses WebRTC.  WebRTC uses RTP.  RTP transmits
> a collection of tracks.  RTP can work in two modes:
>
>   - port multiplexing: use the UDP port to associate packets with tracks;
>   - SSRC multiplexing: multiplex over a single UDP 5-tuple, and use data in
>     the RTP header to associate packets with tracks (this is called
>     "bundling" in WebRTC).

Ah, I see. I think I was mentally assuming SSRC multiplexing, so that's
what I was missing. Thanks for the clarification!

> Now, there are two limitations:
>
>   1. Pion is unable to use a single local UDP port for multiple RTP
>      sessions: it uses a distinct UDP port for every RTP session, even if
>      sessions have distinct remote addresses;
>
>   2. Galène uses a distinct RTP session for every stream (pair of audio
>      and video tracks); in other words, Galène only bundles at most one
>      video and one audio track.
>
> Fixing (2) is not a high priority, since it would make the code way more
> complex (bundling is fragile, there are some limitations to what you can
> do when bundling streams with different configurations and some limitations
> to how you can mutate a bundle without tearing it down).  It might
> cause issues with AQMs (is it a good idea to use the same 5-tuple for
> different videos?), but you're the specialist here.

No, I don't think multiplexing more streams over the same five-tuple is
a good idea if it can be avoided. If the bottleneck does per-flow
queueing (like FQ-CoDel), you'd want each video flow to be scheduled
separately I think.

Another couple of ideas for packet-level optimisations that may be worth
trying (both originally articulated by Dave Taht):

In the presence of an FQ-CoDel'ed bottleneck it may be better to put
audio and video on two separate 5-tuples: That would cause the audio
stream to be treated as a 'sparse flow' with queueing priority and fewer
drops when congested.

(As an aside, is there a reference for the codec constraints in
browsers? And is it possible to tweak codec parameters, say to burn some
bandwidth to enable really high-fidelity audio for special use cases? Or
is Opus so good that it doesn't matter?)

Another packet-based optimisation that could be interesting to try out
is to mark packets containing video key frames as ECN-capable. If the
bottleneck AQM respects ECN, that would prevent key frames from being
dropped. Ideally you'd also actually respond to CE markings, of course,
but just doing selective marking of the ECN-capable bit inside the
stream would theoretically make it possible to protect key frames while
still reacting normally to drops of other data packets...
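
Socket-wide marking is easy enough -- a sketch with golang.org/x/net/ipv4,
where udpConn is the *net.UDPConn carrying the RTP; marking only the
key-frame packets would take more plumbing (per-packet control messages or
a separate socket):

  func markECT0(udpConn *net.UDPConn) error {
      p := ipv4.NewPacketConn(udpConn)
      // TOS 0x02 sets the ECN field to ECT(0) (and leaves DSCP at zero)
      // on every packet sent through this socket
      return p.SetTOS(0x02)
  }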

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Congestion control and WebRTC [was: Logging]
  2021-01-10 13:47                 ` Toke Høiland-Jørgensen
@ 2021-01-10 15:14                   ` Juliusz Chroboczek
  2021-01-10 15:23                     ` [Galene] " Juliusz Chroboczek
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
  2021-01-10 15:17                   ` [Galene] Re: Logging Juliusz Chroboczek
  1 sibling, 2 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-10 15:14 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

> No, I don't think multiplexing more streams over the same five-tuple is
> a good idea if it can be avoided. If the bottleneck does per-flow
> queueing (like FQ-CoDel), you'd want each video flow to be scheduled
> separately I think.

There's a tradeoff here.  Using port multiplexing gives more information to
middleboxes, but using SSRC multiplexing reduces the amount of ICE
negotiation -- adding a new track to an already established flow requires
zero packet exchanges after negotiation (you just start sending data with
a fresh SSRC), while adding a new flow for port multiplexing requires
a new set of ICE probes, which might take a few seconds in the TURN case.

> Another couple of ideas for packet-level optimisations that may be worth
> trying (both originally articulated by Dave Taht):

Why is Dave not here?

> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
> audio and video on two separate 5-tuples: That would cause the audio
> stream to be treated as a 'sparse flow' with queueing priority and fewer
> drops when congested.

Uh-huh.  I'll send you a patch to do that, in case you find the time to
test it.

> (As an aside, is there a reference for the codec constraints in
> browsers? And is it possible to tweak codec parameters, say to burn some
> bandwidth to enable really high-fidelity audio for special use cases? Or
> is Opus so good that it doesn't matter?)

A typical laptop microphone has rather poor frequency response, so Opus at
48kbit/s is as good as the original.  It's just not worth reducing the
audio rate upon congestion, it's the video rate that gets reduced.

As to the video rate, you've got plenty of exciting knobs.

1. Congestion control.  As implemented in modern browsers, WebRTC uses two
congestion controllers: a fairly traditional loss-based controller, and an
interesting delay-based one.  This is described here:

  https://tools.ietf.org/html/draft-ietf-rmcat-gcc-02

Unlike some other video servers, which merely forward congestion indications
from the receivers to the sender, Galène terminates congestion control on
both legs.  It currently obeys congestion indications from the receivers, and
implements the loss-based congestion controller for data received from the
sender.

  https://github.com/jech/galene/blob/master/rtpconn/rtpconn.go#L956
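
For reference, the loss-based rule in section 6 of the draft boils down to
the following (a simplified sketch, not a copy of the code linked above):

  // multiplicative decrease above 10% loss, gentle 5% increase below
  // 2% loss, hold in between
  func adjustRate(rate uint64, loss float64) uint64 {
      switch {
      case loss > 0.1:
          return uint64(float64(rate) * (1 - 0.5*loss))
      case loss < 0.02:
          return uint64(float64(rate) * 1.05)
      default:
          return rate
      }
  }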

We do not currently implement the delay-based controller, which causes
collapse if the sender is on a low-rate bufferbloated network.  That is
why Galène's client limits the rate to 700kbit/s by default (in the
« Send » entry in the side menu).

Implementing the delay-based controller is number 1 on my wishlist.  Your
help would be greatly appreciated.

2. Sender-side tweaks.  The sender has a number of knobs they can tweak,
notably maximum bitrate (separately for each track), and hints about
whether to prefer framerate or image quality upon congestion.  The sender
can also pick the webcam resolution, and they can request downscaling
before encoding.

3. SVC.  The technology that excites me right now is scalable video coding
(SVC), which I believe will make simulcast obsolete.  With VP8, the
sender can request that some frames should not be used as reference for
intra prediction; these « discardable » frames can be dropped by the
server without causing corruption.  VP9 implements full scalability:
temporal scalability, as in VP8, spatial scalability, where the codec
generates a low resolution flow and a high resolution flow that uses the
low resolution flow for intra prediction, and quality scalability, where
the codec generates frames with varying quality.

  https://en.wikipedia.org/wiki/Scalable_Video_Coding

I'm currently planning to skip simulcasting, which I feel is an
obsolescent technology, and experiment with SVC instead.  Implementing the
delay-based controller is a higher priority, though.

> Another packet-based optimisation that could be interesting to try out
> is to mark packets containing video key frames as ECN-capable.

Keyframes can be huge (120 packets is not unusual), it wouldn't be
reasonable to mark such a burst as ECN-capable without actually reacting to
CE.  And if we drop part of the keyframe, we'll NACK the missing packets
and recover 20ms + 1RTT later.

> Ideally you'd also actually respond to CE markings,

RFC 6679.  I don't know if it's implemented in browsers.

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Logging
  2021-01-10 13:47                 ` Toke Høiland-Jørgensen
  2021-01-10 15:14                   ` [Galene] Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
@ 2021-01-10 15:17                   ` Juliusz Chroboczek
  1 sibling, 0 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-10 15:17 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

Another note: NACKs are only used for video, not for audio.  For audio, it
would be great to do FEC, but the FEC protocol that is useful for
a server, flexfec, is not implemented for audio in the browsers, while the
one that is, Opus FEC, is not something that the server can control.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 15:14                   ` [Galene] Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
@ 2021-01-10 15:23                     ` Juliusz Chroboczek
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
  1 sibling, 0 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-10 15:23 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

>> Ideally you'd also actually respond to CE markings,

> RFC 6679.  I don't know if it's implemented in browsers.

It is not.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 15:14                   ` [Galene] Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
  2021-01-10 15:23                     ` [Galene] " Juliusz Chroboczek
@ 2021-01-10 22:23                     ` Toke Høiland-Jørgensen
  2021-01-10 22:44                       ` Dave Taht
                                         ` (3 more replies)
  1 sibling, 4 replies; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-10 22:23 UTC (permalink / raw)
  To: Juliusz Chroboczek, Dave Taht; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>> No, I don't think multiplexing more streams over the same five-tuple is
>> a good idea if it can be avoided. If the bottleneck does per-flow
>> queueing (like FQ-CoDel), you'd want each video flow to be scheduled
>> separately I think.
>
> There's a tradeoff here.  Using port multiplexing gives more information to
> middleboxes, but using SSRC multiplexing reduces the amount of ICE
> negotiation -- adding a new track to an already established flow requires
> zero packet exchanges after negotiation (you just start sending data with
> a fresh SSRC), while adding a new flow for port multiplexing requires
> a new set of ICE probes, which might take a few seconds in the TURN case.

So in this instance a new flow happens when a new user joins and their
video flow has to be established to every peer?

>> Another couple of ideas for packet-level optimisations that may be worth
>> trying (both originally articulated by Dave Taht):
>
> Why is Dave not here?

I dunno; why aren't you here, Dave? :)

>> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
>> audio and video on two separate 5-tuples: That would cause the audio
>> stream to be treated as a 'sparse flow' with queueing priority and fewer
>> drops when congested.
>
> Uh-huh.  I'll send you a patch to do that, in case you find the time to
> test it.

Sounds good, thanks!

>> (As an aside, is there a reference for the codec constraints in
>> browsers? And is it possible to tweak codec parameters, say to burn some
>> bandwidth to enable really high-fidelity audio for special use cases? Or
>> is Opus so good that it doesn't matter?)
>
> A typical laptop microphone has rather poor frequency response, so Opus at
> 48kbit/s is as good as the original.  It's just not worth reducing the
> audio rate upon congestion, it's the video rate that gets reduced.

Right, I see. Looking at the commit that introduced codec support, it
looks pretty straight-forward to crank up the bitrate; maybe I'll
experiment with that a bit (but not using my laptop's microphone).

> As to the video rate, you've got plenty of exciting knobs.
>
> 1. Congestion control.  As implemented in modern browsers, WebRTC uses two
> congestion controllers: a fairly traditional loss-based controller, and an
> interesting delay-based one.  This is described here:
>
>   https://tools.ietf.org/html/draft-ietf-rmcat-gcc-02
>
> Unlike some other video servers, which merely forward congestion indications
> from the receivers to the sender, Galène terminates congestion control on
> both legs.  It currently obeys congestion indications from the receivers, and
> implements the loss-based congestion controller for data received from the
> sender.
>
>   https://github.com/jech/galene/blob/master/rtpconn/rtpconn.go#L956
>
> We do not currently implement the delay-based controller, which causes
> collapse if the sender is on a low-rate bufferbloated network.  That is
> why Galène's client limits the rate to 700kbit/s by default (in the
> « Send » entry in the side menu).

Right, but the browsers do?

> Implementing the delay-based controller is number 1 on my wishlist.  Your
> help would be greatly appreciated.

Can't promise any hacking time, unfortunately, at least not short-term.
Happy to test out stuff, though :)

> 2. Sender-side tweaks.  The sender has a number of knobs they can tweak,
> notably maximum bitrate (separately for each track), and hints about
> whether to prefer framerate or image quality upon congestion.  The sender
> can also pick the webcam resolution, and they can request downscaling
> before encoding.

Ah, hence the "blackboard mode" - gotcha!

> 3. SVC.  The technology that excites me right now is scalable video coding
> (SVC), which I believe will make simulcast obsolete.  With VP8, the
> sender can request that some frames should not be used as reference for
> intra prediction; these « discardable » frames can be dropped by the
> server without causing corruption.  VP9 implements full scalability:
> temporal scalability, as in VP8, spatial scalability, where the codec
> generates a low resolution flow and a high resolution flow that uses the
> low resolution flow for intra prediction, and quality scalability, where
> the codec generates frames with varying quality.
>
>   https://en.wikipedia.org/wiki/Scalable_Video_Coding
>
> I'm currently planning to skip simulcasting, which I feel is an
> obsolescent technology, and experiment with SVC instead.  Implementing the
> delay-based controller is a higher priority, though.

Uh, hadn't heard about that before; neat!

>> Another packet-based optimisation that could be interesting to try out
>> is to mark packets containing video key frames as ECN-capable.
>
> Keyframes can be huge (120 packets is not unusual), it wouldn't be
> reasonable to mark such a burst as ECN-capable without actually reacting to
> CE.  And if we drop part of the keyframe, we'll NACK the missing packets
> and recover 20ms + 1RTT later.

Hmm, right, okay I see what you mean...

>>> Ideally you'd also actually respond to CE markings,
>> RFC 6679.  I don't know if it's implemented in browsers.
>
> It is not.

Ah, too bad :(

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
@ 2021-01-10 22:44                       ` Dave Taht
  2021-01-11  0:07                       ` Juliusz Chroboczek
                                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 32+ messages in thread
From: Dave Taht @ 2021-01-10 22:44 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Juliusz Chroboczek, galene

On Sun, Jan 10, 2021 at 2:23 PM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
> Juliusz Chroboczek <jch@irif.fr> writes:
>
> >> No, I don't think multiplexing more streams over the same five-tuple is
> >> a good idea if it can be avoided. If the bottleneck does per-flow
> >> queueing (like FQ-CoDel), you'd want each video flow to be scheduled
> >> separately I think.
> >
> > There's a tradeoff here.  Using port multiplexing gives more information to
> > middleboxes, but using SSRC multiplexing reduces the amount of ICE
> > negotiation -- adding a new track to an already established flow requires
> > zero packet exchanges after negotiation (you just start sending data with
> > a fresh SSRC), while adding a new flow for port multiplexing requires
> > a new set of ICE probes, which might take a few seconds in the TURN case.
>
> So in this instance a new flow happens when a new user joins and their
> video flow has to be established to every peer?
>
> >> Another couple of ideas for packet-level optimisations that may be worth
> >> trying (both originally articulated by Dave Taht):
> >
> > Why is Dave not here?
>
> I dunno; why aren't you here, Dave? :)
>
> >> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
> >> audio and video on two separate 5-tuples: That would cause the audio
> >> stream to be treated as a 'sparse flow' with queueing priority and fewer
> >> drops when congested.
> >
> > Uh-huh.  I'll send you a patch to do that, in case you find the time to
> > test it.
>
> Sounds good, thanks!

This is what we did in a videoconferencing app... in the 90s... with
sfq... it gives a "clock" from the audio that should, with fq_codel or
especially cake, give very fast congestion feedback....

awesome. I am really liking galene...

>
> >> (As an aside, is there a reference for the codec constraints in
> >> browsers? And is it possible to tweak codec parameters, say to burn some
> >> bandwidth to enable really high-fidelity audio for special use cases? Or
> >> is Opus so good that it doesn't matter?)
> >
> > A typical laptop microphone has rather poor frequency response, so Opus at
> > 48kbit/s is as good as the original.  It's just not worth reducing the
> > audio rate upon congestion, it's the video rate that gets reduced.
>
> Right, I see. Looking at the commit that introduced codec support, it
> looks pretty straight-forward to crank up the bitrate; maybe I'll
> experiment with that a bit (but not using my laptop's microphone).
>
> > As to the video rate, you've got plenty of exciting knobs.
> >
> > 1. Congestion control.  As implemented in modern browsers, WebRTC uses two
> > congestion controllers: a fairly traditional loss-based controller, and an
> > interesting delay-based one.  This is described here:
> >
> >   https://tools.ietf.org/html/draft-ietf-rmcat-gcc-02
> >
> > Unlike some other video servers, which merely forward congestion indications
> > from the receivers to the sender, Galène terminates congestion control on
> > both legs.  It currently obeys congestion indications from the receivers, and
> > implements the loss-based congestion controller for data received from the
> > sender.
> >
> >   https://github.com/jech/galene/blob/master/rtpconn/rtpconn.go#L956
> >
> > We do not currently implement the delay-based controller, which causes
> > collapse if the sender is on a low-rate bufferbloated network.  That is
> > why Galène's client limits the rate to 700kbit/s by default (in the
> > « Send » entry in the side menu).
>
> Right, but the browsers do?
>
> > Implementing the delay-based controller is number 1 on my wishlist.  Your
> > help would be greatly appreciated.
>
> Can't promise any hacking time, unfortunately, at least not short-term.
> Happy to test out stuff, though :)
>
> > 2. Sender-side tweaks.  The sender has a number of knobs they can tweak,
> > notably maximum bitrate (separately for each track), and hints about
> > whether to prefer framerate or image quality upon congestion.  The sender
> > can also pick the webcam resolution, and they can request downscaling
> > before encoding.
>
> Ah, hence the "blackboard mode" - gotcha!
>
> > 3. SVC.  The technology that excites me right now is scalable video coding
> > (SVC), which I believe will make simulcast obsolete.  With VP8, the
> > sender can request that some frames should not be used as reference for
> > intra prediction; these « discardable » frames can be dropped by the
> > server without causing corruption.  VP9 implements full scalability:
> > temporal scalability, as in VP8, spatial scalability, where the codec
> > generates a low resolution flow and a high resolution flow that uses the
> > low resolution flow for intra prediction, and quality scalability, where
> > the codec generates frames with varying quality.
> >
> >   https://en.wikipedia.org/wiki/Scalable_Video_Coding
> >
> > I'm currently planning to skip simulcasting, which I feel is an
> > obsolescent technology, and experiment with SVC instead.  Implementing the
> > delay-based controller is a higher priority, though.
>
> Uh, hadn't heard about that before; neat!
>
> >> Another packet-based optimisation that could be interesting to try out
> >> is to mark packets containing video key frames as ECN-capable.
> >
> > Keyframes can be huge (120 packets is not unusual), it wouldn't be
> > reasonable to mark such a burst as ECN-capable without actually reacting to
> > CE.  And if we drop part of the keyframe, we'll NACK the missing packets
> > and recover 20ms + 1RTT later.
>
> Hmm, right, okay I see what you mean...
>
> >>> Ideally you'd also actually respond to CE markings,
> >> RFC 6679.  I don't know if it's implemented in browsers.
> >
> > It is not.
>
> Ah, too bad :(
>
> -Toke



-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
  2021-01-10 22:44                       ` Dave Taht
@ 2021-01-11  0:07                       ` Juliusz Chroboczek
  2021-01-11  0:20                         ` Toke Høiland-Jørgensen
  2021-01-11  0:30                       ` Dave Taht
  2021-01-11 13:38                       ` [Galene] Re: Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
  3 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-11  0:07 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Dave Taht, galene

> So in this instance a new flow happens when a new user joins and their
> video flow has to be established to every peer?

When a user joins, nothing much happens: remember that Galène is meant to
be usable for lectures with hundreds of students.  When a user clicks
« Ready », a stream with up to two tracks is established in the client->server
direction.  Galène then determines the set of clients that have expressed
interest in this flow (through the "request" message), and establishes n -
1 streams in the server->client direction.

In the bundle case (which is what we currently do), each of the flows is
an RTP session (a UDP flow).  In the non-bundle case, each of the flows is
one or two RTP sessions, one per track.

>> We do not currently implement the delay-based controller, which causes
>> collapse if the sender is on a low-rate bufferbloated network.  That is
>> why Galène's client limits the rate to 700kbit/s by default (in the
>> « Send » entry in the side menu).

> Right, but the browsers do?

They do, and Galène provides them with all the data they need to do their
job.  So currently you have state-of-the-art congestion control in the
server->client direction, but only basic loss-based congestion control in
the client->server direction.  Which is good enough for lecturing (you
usually try to give your lecture over an uncongested link) but sometimes
suboptimal during departmental meetings (some people need to lock
themselves in the attic in order to get away from their children).

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11  0:07                       ` Juliusz Chroboczek
@ 2021-01-11  0:20                         ` Toke Høiland-Jørgensen
  2021-01-11  0:28                           ` Juliusz Chroboczek
  0 siblings, 1 reply; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-11  0:20 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: Dave Taht, galene

Juliusz Chroboczek <jch@irif.fr> writes:

>> So in this instance a new flow happens when a new user joins and their
>> video flow has to be established to every peer?
>
> When a user joins, nothing much happens: remember that Galène is meant to
> be usable for lectures with hundreds of students.  When a user clicks
> « Ready », a stream with up to two tracks is established in the client->server
> direction.  Galène then determines the set of clients that have expressed
> interest in this flow (through the "request" message), and establishes n -
> 1 streams in the server->client direction.
>
> In the bundle case (which is what we currently do), each of the flows is
> an RTP session (a UDP flow).  In the non-bundle case, each of the flows is
> one or two RTP sessions, one per track.

OK. So in the current case, the latency for each other user to see a new
video flow when someone clicks "enable video" is a bit longer because
there's a handshake for each peer. Whereas in SSRC multiplexing you
could skip the handshake?

>>> We do not currently implement the delay-based controller, which causes
>>> collapse if the sender is on a low-rate bufferbloated network.  That is
>>> why Galène's client limits the rate to 700kbit/s by default (in the
>>> « Send » entry in the side menu).
>
>> Right, but the browsers do?
>
> They do, and Galène provides them with all the data they need to do their
> job.  So currently you have state-of-the-art congestion control in the
> server->client direction, but only basic loss-based congestion control in
> the client->server direction.  Which is good enough for lecturing (you
> usually try to give your lecture over an uncongested link) but sometimes
> suboptimal during departmental meetings (some people need to lock
> themselves in the attic in order to get away from their children).

Oh, right, so the server will also respond to the delay signals from the
clients' receiver-side? I.e., the only thing you haven't implemented is
the delay processing on the receiver side (from the last paragraph of
section 3 of draft-ietf-rmcat-gcc-02)?

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11  0:20                         ` Toke Høiland-Jørgensen
@ 2021-01-11  0:28                           ` Juliusz Chroboczek
  0 siblings, 0 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-11  0:28 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Dave Taht, galene

> OK. So in the current case, the latency for each other user to see a new
> video flow when someone clicks "enable video" is a bit longer because
> there's a handshake for each peer. Whereas in SSRC multiplexing you
> could skip the handshake?

Establishing a new RTP session requires:

 1. SDP negotiation (1 RTT);
 2. ICE probing (1 RTT best case, 2-3s in the TURN case).

Adding a track to an existing RTP session requires just an SDP
renegotiation (1 RTT).
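
In Pion terms the cheap case looks roughly like this (a sketch assuming
pion/webrtc v3; the track and stream ids are arbitrary):

  func addVideoTrack(pc *webrtc.PeerConnection) error {
      track, err := webrtc.NewTrackLocalStaticRTP(
          webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeVP8},
          "video", "stream")
      if err != nil {
          return err
      }
      if _, err := pc.AddTrack(track); err != nil {
          return err
      }
      // renegotiate: one offer/answer round trip, no new ICE probing
      offer, err := pc.CreateOffer(nil)
      if err != nil {
          return err
      }
      return pc.SetLocalDescription(offer)
  }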

> Oh, right, so the server will also respond to the delay signals from the
> clients' receiver-side? I.e., the only thing you haven't implemented is
> the delay processing on the receiver side (from the last paragraph of
> section 3 of draft-ietf-rmcat-gcc-02)?

Exactly.

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
  2021-01-10 22:44                       ` Dave Taht
  2021-01-11  0:07                       ` Juliusz Chroboczek
@ 2021-01-11  0:30                       ` Dave Taht
  2021-01-11  6:23                         ` Dave Taht
  2021-01-11 13:38                       ` [Galene] Re: Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
  3 siblings, 1 reply; 32+ messages in thread
From: Dave Taht @ 2021-01-11  0:30 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Juliusz Chroboczek, galene

On Sun, Jan 10, 2021 at 2:23 PM Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> >> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
> >> audio and video on two separate 5-tuples: That would cause the audio
> >> stream to be treated as a 'sparse flow' with queueing priority and fewer
> >> drops when congested.
> >
> > Uh-huh.  I'll send you a patch to do that, in case you find the time to
> > test it.
>
> Sounds good, thanks!
>
> >> (As an aside, is there a reference for the codec constraints in
> >> browsers? And is it possible to tweak codec parameters, say to burn some
> >> bandwidth to enable really high-fidelity audio for special use cases? Or
> >> is Opus so good that it doesn't matter?)
> >
> > A typical laptop microphone has rather poor frequency response, so Opus at
> > 48kbit/s is as good as the original.  It's just not worth reducing the
> > audio rate upon congestion, it's the video rate that gets reduced.

There has been some good work (across town) on jacktrip, for
multi-party collaborative audio.

https://www.npr.org/2020/11/21/937043051/musicians-turn-to-new-software-to-play-together-online

but zoom and google etc simply don't cut it.

I am personally extremely interested in high quality, positional
collaborative 3D audio, with the video component highly secondary.

Here's the low end device I've been using of late, mounted to the roof
of my boat.

https://zoomcorp.com/en/us/handheld-recorders/handheld-recorders/h3-vr-360-audio-recorder/

An example of what can be achieved with it, mixed down to binaural,
instead of 5.1
http://www.taht.net/~d/circle/wish_youwerehere_Binaural.mp3

You can clearly hear the airplane go *overhead*, totally
coincidentally, in the outro. I note that I'm not in best voice on
that recording, and one of the big songs I've mostly been recording
this year is over here: https://www.youtube.com/watch?v=tUun-jFFoU4



>
> Right, I see. Looking at the commit that introduced codec support, it
> looks pretty straight-forward to crank up the bitrate; maybe I'll
> experiment with that a bit (but not using my laptop's microphone).

Cool. What comes out of that device is 4 channels of various
ambisonics encodings. Whether opus can feed that into a decoder on the
other side (imagine trying to sit next to the drummer, or piano
player) dunno.

> > As to the video rate, you've got plenty of exciting knobs.
> >
> > 1. Congestion control.  As implemented in modern browsers, WebRTC uses two
> > congestion controllers: a fairly traditional loss-based controller, and an
> > interesting delay-based one.  This is described here:
> >
> >   https://tools.ietf.org/html/draft-ietf-rmcat-gcc-02

I liked this one. Except that I wanted to add either classic or
SCE-based ECN to the whole thing.

> > Unlike some other video servers, which merely forward congestion indications
> > from the receivers to the sender, Galène terminates congestion control on
> > both legs.  It currently obeys congestion indications from the receivers, and
> > implements the loss-based congestion controller for data received from the
> > sender.
> >
> >   https://github.com/jech/galene/blob/master/rtpconn/rtpconn.go#L956
> >
> > We do not currently implement the delay-based controller, which causes
> > collapse if the sender is on a low-rate bufferbloated network.  That is
> > why Galène's client limits the rate to 700kbit/s by default (in the
> > « Send » entry in the side menu).
>
> Right, but the browsers do?
>
> > Implementing the delay-based controller is number 1 on my wishlist.  Your
> > help would be greatly appreciated.
>
> Can't promise any hacking time, unfortunately, at least not short-term.
> Happy to test out stuff, though :)
>
> > 2. Sender-side tweaks.  The sender has a number of knobs they can tweak,
> > notably maximum bitrate (separately for each track), and hints about
> > whether to prefer framerate or image quality upon congestion.  The sender
> > can also pick the webcam resolution, and they can request downscaling
> > before encoding.
>
> Ah, hence the "blackboard mode" - gotcha!
>
> > 3. SVC.  The technology that excites me right now is scalable video coding
> > (SVC), which I believe will make simulcast obsolete.  With VP8, the
> > sender can request that some frames should not be used as reference for
> > inter prediction; these « discardable » frames can be dropped by the
> > server without causing corruption.  VP9 implements full scalability:
> > temporal scalability, as in VP8; spatial scalability, where the codec
> > generates a low resolution flow and a high resolution flow that uses the
> > low resolution flow for prediction; and quality scalability, where
> > the codec generates frames with varying quality.
> >
> >   https://en.wikipedia.org/wiki/Scalable_Video_Coding
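
(A nice property of the VP8 case is that the "discardable" marking is
visible in the RTP payload descriptor itself, the N bit from RFC 7741,
so a forwarder can drop those packets without parsing the encoded
bitstream.  A rough check, assuming the payload starts with the payload
descriptor:

  // isDiscardableVP8 reports whether an RTP payload carrying VP8
  // (RFC 7741) is marked as a non-reference frame: the N bit in the
  // first octet of the payload descriptor.  Such packets may be
  // dropped without corrupting later frames.
  func isDiscardableVP8(payload []byte) bool {
          if len(payload) < 1 {
                  return false
          }
          return payload[0]&0x20 != 0 // N bit
  }

Whether Galène keys off exactly this bit I don't know; this is just the
wire-format view.)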
> >
> > I'm currently planning to skip simulcasting, which I feel is an
> > obsolescent technology, and experiment with SVC instead.  Implementing the
> > delay-based controller is a higher priority, though.
>
> Uh, hadn't heard about that before; neat!
>
> >> Another packet-based optimisation that could be interesting to try out
> >> is to mark packets containing video key frames as ECN-capable.
> >
> > Keyframes can be huge (120 packets is not unusual), it wouldn't be
> > reasonable to mark such a burst as ECN-capable without actually reacting to
> > CE.  And if we drop part of the keyframe, we'll NACK the missing packets
> > and recover 20ms + 1RTT later.
>
> Hmm, right, okay I see what you mean...
>
> >>> Ideally you'd also actually respond to CE markings,
> >> RFC 6679.  I don't know if it's implemented in browsers.
> >
> > It is not.
>
> Ah, too bad :(
>
> -Toke



-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11  0:30                       ` Dave Taht
@ 2021-01-11  6:23                         ` Dave Taht
  2021-01-11 12:55                           ` [Galene] Multichannel audio [was: Congestion control...] Juliusz Chroboczek
  0 siblings, 1 reply; 32+ messages in thread
From: Dave Taht @ 2021-01-11  6:23 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Juliusz Chroboczek, galene

On Sun, Jan 10, 2021 at 4:30 PM Dave Taht <dave.taht@gmail.com> wrote:
>
> On Sun, Jan 10, 2021 at 2:23 PM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
> > >> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
> > >> audio and video on two separate 5-tuples: That would cause the audio
> > >> stream to be treated as a 'sparse flow' with queueing priority and fewer
> > >> drops when congested.
> > >
> > > Uh-huh.  I'll send you a patch to do that, in case you find the time to
> > > test it.
> >
> > Sounds good, thanks!
> >
> > >> (As an aside, is there a reference for the codec constraints in
> > >> browsers? And is it possible to tweak codec parameters, say to burn some
> > >> bandwidth to enable really high-fidelity audio for special use cases? Or
> > >> is Opus so good that it doesn't matter?)
> > >
> > > A typical laptop microphone has rather poor frequency response, so Opus at
> > > 48kbit/s is as good as the original.  It's just not worth reducing the
> > > audio rate upon congestion, it's the video rate that gets reduced.
>
> There has been some good work (across town) on JackTrip, for
> multi-party collaborative audio.
>
> https://www.npr.org/2020/11/21/937043051/musicians-turn-to-new-software-to-play-together-online
>
> but Zoom and Google etc. simply don't cut it.
>
> I am personally extremely interested in high quality, positional
> collaborative 3D audio, with the video component highly secondary.
>
> Here's the low end device I've been using of late, mounted to the roof
> of my boat.
>
> https://zoomcorp.com/en/us/handheld-recorders/handheld-recorders/h3-vr-360-audio-recorder/
>
> An example of what can be achieved with it, mixed down to binaural
> instead of 5.1:
> http://www.taht.net/~d/circle/wish_youwerehere_Binaural.mp3
>
> You can clearly hear the airplane go *overhead*, totally
> coincidentally, in the outro. I note that I'm not in my best voice on
> that recording; one of the big songs I've mostly been recording
> this year is over here: https://www.youtube.com/watch?v=tUun-jFFoU4
>
>
>
> >
> > Right, I see. Looking at the commit that introduced codec support, it
> > looks pretty straight-forward to crank up the bitrate; maybe I'll
> > experiment with that a bit (but not using my laptop's microphone).
>
> Cool. What comes out of that device is 4 channels of various
> ambisonics encodings. Whether Opus can carry that to a decoder on the
> other side (imagine trying to sit next to the drummer, or the piano
> player), I don't know.

Can galene, or a web browser, carry multichannel opus audio?
>
> > > As to the video rate, you've got plenty of exciting knobs.
> > >
> > > 1. Congestion control.  As implemented in modern browsers, WebRTC uses two
> > > congestion controllers: a fairly traditional loss-based controller, and an
> > > interesting delay-based one.  This is described here:
> > >
> > >   https://tools.ietf.org/html/draft-ietf-rmcat-gcc-02
>
> I liked this one, except that I wanted to add either classic or
> SCE-based ECN to the whole thing.
>
> > > Unlike some other video servers, which merely forward congestion indications
> > > from the receivers to the sender, Galène terminates congestion control on
> > > both legs: it currently obeys congestion indications from the receivers, and
> > > implements the loss-based congestion controller for data received from the
> > > sender.
> > >
> > >   https://github.com/jech/galene/blob/master/rtpconn/rtpconn.go#L956
> > >
> > > We do not currently implement the delay-based controller, which causes
> > > collapse if the sender is on a low-rate bufferbloated network.  That is
> > > why Galène's client limits the rate to 700kbit/s by default (in the
> > > « Send » entry in the side menu).
> >
> > Right, but the browsers do?
> >
> > > Implementing the delay-based controller is number 1 on my wishlist.  Your
> > > help would be greatly appreciated.
> >
> > Can't promise any hacking time, unfortunately, at least not short-term.
> > Happy to test out stuff, though :)
> >
> > > 2. Sender-side tweaks.  The sender has a number of knobs they can tweak,
> > > notably maximum bitrate (separately for each track), and hints about
> > > whether to prefer framerate or image quality upon congestion.  The sender
> > > can also pick the webcam resolution, and they can request downscaling
> > > before encoding.
> >
> > Ah, hence the "blackboard mode" - gotcha!
> >
> > > 3. SVC.  The technology that excites me right now is scalable video coding
> > > (SVC), which I believe will make simulcast obsolete.  With VP8, the
> > > sender can request that some frames should not be used as reference for
> > > inter prediction; these « discardable » frames can be dropped by the
> > > server without causing corruption.  VP9 implements full scalability:
> > > temporal scalability, as in VP8; spatial scalability, where the codec
> > > generates a low resolution flow and a high resolution flow that uses the
> > > low resolution flow for prediction; and quality scalability, where
> > > the codec generates frames with varying quality.
> > >
> > >   https://en.wikipedia.org/wiki/Scalable_Video_Coding
> > >
> > > I'm currently planning to skip simulcasting, which I feel is an
> > > obsolescent technology, and experiment with SVC instead.  Implementing the
> > > delay-based controller is a higher priority, though.
> >
> > Uh, hadn't heard about that before; neat!
> >
> > >> Another packet-based optimisation that could be interesting to try out
> > >> is to mark packets containing video key frames as ECN-capable.
> > >
> > > Keyframes can be huge (120 packets is not unusual), it wouldn't be
> > > reasonable to mark such a burst as ECN-capable without actually reacting to
> > > CE.  And if we drop part of the keyframe, we'll NACK the missing packets
> > > and recover 20ms + 1RTT later.
> >
> > Hmm, right, okay I see what you mean...
> >
> > >>> Ideally you'd also actually respond to CE markings,
> > >> RFC 6679.  I don't know if it's implemented in browsers.
> > >
> > > It is not.
> >
> > Ah, too bad :(
> >
> > -Toke
>
>
>
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
>
> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729



-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Multichannel audio [was: Congestion control...]
  2021-01-11  6:23                         ` Dave Taht
@ 2021-01-11 12:55                           ` Juliusz Chroboczek
  2021-01-11 17:25                             ` [Galene] " Dave Taht
  0 siblings, 1 reply; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-11 12:55 UTC (permalink / raw)
  To: Dave Taht; +Cc: galene

> Can galene, or a web browser, carry multichannel opus audio?

It's apparently possible under Chrome, but they use a non-standard payload
format, so it'd require some hacking.  I'll see what I can do.
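
For the record, what Chrome seems to expect is an SDP along these lines,
with a "multiopus" codec name that is not in any RFC; the parameter names
are what I've seen in libwebrtc, and the values below are just the usual
5.1 example, so treat this as a guess rather than a recipe:

  a=rtpmap:102 multiopus/48000/6
  a=fmtp:102 channel_mapping=0,4,1,2,3,5;num_streams=4;coupled_streams=2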

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-10 22:23                     ` Toke Høiland-Jørgensen
                                         ` (2 preceding siblings ...)
  2021-01-11  0:30                       ` Dave Taht
@ 2021-01-11 13:38                       ` Juliusz Chroboczek
  2021-01-11 15:17                         ` Toke Høiland-Jørgensen
  2021-01-12  1:38                         ` Juliusz Chroboczek
  3 siblings, 2 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-11 13:38 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

>>> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
>>> audio and video on two separate 5-tuples:

>> Uh-huh.  I'll send you a patch to do that, in case you find the time to
>> test it.

> Sounds good, thanks!

It turns out that there's no API to disable bundling, and it's not
immediately obvious how to achieve that through SDP munging, so doing that
would require somewhat more hacking than I'm willing to do right now.
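
The obvious first step would be to drop the group attribute from the SDP
before applying it, something like the sketch below; but that alone is
almost certainly not enough, since the m-lines would still share a single
transport, and it is not clear that the browser would accept the result.

  import "strings"

  // stripBundle removes the "a=group:BUNDLE ..." attribute from a
  // session description, as a naive first step towards negotiating
  // audio and video on separate 5-tuples (separate transports).
  func stripBundle(sdp string) string {
          var out []string
          for _, line := range strings.Split(sdp, "\r\n") {
                  if strings.HasPrefix(line, "a=group:BUNDLE") {
                          continue
                  }
                  out = append(out, line)
          }
          return strings.Join(out, "\r\n")
  }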

-- Juliusz

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11 13:38                       ` [Galene] Re: Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
@ 2021-01-11 15:17                         ` Toke Høiland-Jørgensen
  2021-01-11 17:20                           ` Dave Taht
  2021-01-12  1:38                         ` Juliusz Chroboczek
  1 sibling, 1 reply; 32+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-11 15:17 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

Juliusz Chroboczek <jch@irif.fr> writes:

>>>> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
>>>> audio and video on two separate 5-tuples:
>
>>> Uh-huh.  I'll send you a patch to do that, in case you find the time to
>>> test it.
>
>> Sounds good, thanks!
>
> It turns out that there's no API to disable bundling, and it's not
> immediately obvious how to achieve that through SDP munging, so doing that
> would require somewhat more hacking than I'm willing to do right now.

Alright, fair enough. Let's leave this in the mental "things it would be
nice to do at some point" pile, then :)

-Toke

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11 15:17                         ` Toke Høiland-Jørgensen
@ 2021-01-11 17:20                           ` Dave Taht
  0 siblings, 0 replies; 32+ messages in thread
From: Dave Taht @ 2021-01-11 17:20 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Juliusz Chroboczek, galene

On Mon, Jan 11, 2021 at 7:17 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
> Juliusz Chroboczek <jch@irif.fr> writes:
>
> >>>> In the presence of an FQ-CoDel'ed bottleneck it may be better to put
> >>>> audio and video on two separate 5-tuples:
> >
> >>> Uh-huh.  I'll send you a patch to do that, in case you find the time to
> >>> test it.
> >
> >> Sounds good, thanks!
> >
> > It turns out that there's no API to disable bundling, and it's not
> > immediately obvious how to achieve that through SDP munging, so doing that
> > would require somewhat more hacking than I'm willing to do right now.
>
> Alright, fair enough. Let's leave this in the mental "things it would be
> nice to do at some point" pile, then :)

Grump. That's where we left it 5+ years back.
>
> -Toke
> _______________________________________________
> Galene mailing list -- galene@lists.galene.org
> To unsubscribe send an email to galene-leave@lists.galene.org



-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Multichannel audio [was: Congestion control...]
  2021-01-11 12:55                           ` [Galene] Multichannel audio [was: Congestion control...] Juliusz Chroboczek
@ 2021-01-11 17:25                             ` Dave Taht
  0 siblings, 0 replies; 32+ messages in thread
From: Dave Taht @ 2021-01-11 17:25 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene

On Mon, Jan 11, 2021 at 4:55 AM Juliusz Chroboczek <jch@irif.fr> wrote:
>
> > Can galene, or a web browser, carry multichannel opus audio?
>
> It's apparently possible under Chrome, but they use a non-standard payload
> format, so it'd require some hacking.  I'll see what I can do.

Thx!

I keep hoping to find a killer app not just for multiparty audio but
for better congestion control. I have to confess that back in March, a
zillion companies called me up and asked what they could do to make
for better videoconferencing, and I told 'em
"encourage replacement of all the home routers in the world"... and
none of them called me back. Or did anything.

I decided I'd done enough to fix the internet, and just made music
with my friends outdoors every weekend. That made a lot of people
happier.

Here, have a smile: http://www.taht.net/~d/circle/smile.mp3 with April
on vocals and Darin on bass, playing together for the first time.

While I'm trying to get back into it this year, finding something
genuinely useful and profitable to do has been on my mind. Wi-Fi 6 is
a disaster so far... and certainly there's a lot of top-level interest
in better videoconferencing, and I love the relative simplicity of
Galène vs., for example, Jitsi.

> -- Juliusz



-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Galene] Re: Congestion control and WebRTC [was: Logging]
  2021-01-11 13:38                       ` [Galene] Re: Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
  2021-01-11 15:17                         ` Toke Høiland-Jørgensen
@ 2021-01-12  1:38                         ` Juliusz Chroboczek
  1 sibling, 0 replies; 32+ messages in thread
From: Juliusz Chroboczek @ 2021-01-12  1:38 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: galene

> It turns out that there's no API to disable bundling, and it's not
> immediately obvious how to achieve that through SDP munging, so doing that
> would require somewhat more hacking than I'm willing to do right now.

It turns out that this is not supported, at least by Chrome.

  https://bugs.chromium.org/p/webrtc/issues/detail?id=4260

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2021-01-12  1:38 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-07 21:38 [Galene] Logging Juliusz Chroboczek
2021-01-07 22:45 ` [Galene] Logging Michael Ströder
2021-01-08  0:35 ` Antonin Décimo
2021-01-08 12:40 ` Toke Høiland-Jørgensen
2021-01-08 13:28   ` Juliusz Chroboczek
2021-01-08 13:52     ` Toke Høiland-Jørgensen
2021-01-08 14:33       ` Michael Ströder
2021-01-08 15:13         ` Juliusz Chroboczek
2021-01-08 17:34           ` Michael Ströder
2021-01-08 18:00             ` Juliusz Chroboczek
2021-01-08 15:34       ` Juliusz Chroboczek
2021-01-08 19:34         ` Toke Høiland-Jørgensen
2021-01-08 19:56           ` Juliusz Chroboczek
2021-01-09  0:18             ` Toke Høiland-Jørgensen
2021-01-09 13:34               ` Juliusz Chroboczek
2021-01-10 13:47                 ` Toke Høiland-Jørgensen
2021-01-10 15:14                   ` [Galene] Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
2021-01-10 15:23                     ` [Galene] " Juliusz Chroboczek
2021-01-10 22:23                     ` Toke Høiland-Jørgensen
2021-01-10 22:44                       ` Dave Taht
2021-01-11  0:07                       ` Juliusz Chroboczek
2021-01-11  0:20                         ` Toke Høiland-Jørgensen
2021-01-11  0:28                           ` Juliusz Chroboczek
2021-01-11  0:30                       ` Dave Taht
2021-01-11  6:23                         ` Dave Taht
2021-01-11 12:55                           ` [Galene] Multichannel audio [was: Congestion control...] Juliusz Chroboczek
2021-01-11 17:25                             ` [Galene] " Dave Taht
2021-01-11 13:38                       ` [Galene] Re: Congestion control and WebRTC [was: Logging] Juliusz Chroboczek
2021-01-11 15:17                         ` Toke Høiland-Jørgensen
2021-01-11 17:20                           ` Dave Taht
2021-01-12  1:38                         ` Juliusz Chroboczek
2021-01-10 15:17                   ` [Galene] Re: Logging Juliusz Chroboczek

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox