Galène videoconferencing server discussion list archives
* [Galene] using up more ports in ipv6 for better congestion control
@ 2021-07-10 15:15 Dave Taht
  2021-07-10 15:19 ` [Galene] " Dave Taht
  2021-07-10 16:36 ` T H Panton
  0 siblings, 2 replies; 12+ messages in thread
From: Dave Taht @ 2021-07-10 15:15 UTC (permalink / raw)
  To: galene; +Cc: T H Panton

Tim Panton wrote me just now on LinkedIn:

"It is still possible. Just set bundle-policy to max-compat and you'll
get one stream for audio and one for video. Turn off rtcp-mux and
you'll get 4 ports (voice, video, voice-RTCP, video-RTCP) - But your
ability to connect will drop significantly (according to Google's
data) and your connection setup time will increase. Even with port
muxing congestion control is still possible, it just works
_differently_ - which arguably it should, because realtime video has
quite different needs from streaming or file transfer. Happy to have a
chat about this...."

(so... chatting via email is preferred for me)

I am also under the impression that the congestion-control
notifications in RTCP are essentially obsolete in the RFC, which mandates
a 500 ms interval instead of something sane, like once per frame?

My other fantasy is to somehow start using UDP-Lite for more things.

The context for this, of course, is my never-ending quest for an
IP-based video and audio streaming system good enough to have a band
playing together across town.


-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-10 15:15 [Galene] using up more ports in ipv6 for better congestion control Dave Taht
@ 2021-07-10 15:19 ` Dave Taht
  2021-07-10 16:36 ` T H Panton
  1 sibling, 0 replies; 12+ messages in thread
From: Dave Taht @ 2021-07-10 15:19 UTC (permalink / raw)
  To: galene; +Cc: T H Panton

The thread is over here:
https://www.linkedin.com/posts/dtaht_apples-new-facetime-a-sip-perspective-activity-6817609260348911616-p0kv/

On Sat, Jul 10, 2021 at 8:15 AM Dave Taht <dave.taht@gmail.com> wrote:
>
> tim panton wrote me just now on linked in:
>
> "It is still possible. Just set bundle-policy to max-compat and you'll
> get one stream for audio and one for video. Turn off rtcp-mux and
> you'll get 4 ports (voice, video, voice-RTCP, video-RTCP) - But your
> ability to connect will drop significantly (according to Google's
> data) and your connection setup time will increase. Even with port
> muxing congestion control is still possible, it just works
> _differently_ - which arguably it should, because realtime video has
> quite different needs from streaming or file transfer. Happy to have a
> chat about this...."
>
> (so... chatting via email is preferred for me)
>
> I am also under the impression that the congestion control
> notifications in rtcp are essentially obsolete in the rfc, mandating a
> 500ms interval instead of something sane, like a frame?
>
> My other fantasy is to somehow start using udplite for more things.
>
> The context for this of course is my never ending quest to have an IP
> based video and audio streaming system good enough to have a band
> playing with each other across town.
>
>
> --
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>
> Dave Täht CTO, TekLibre, LLC



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-10 15:15 [Galene] using up more ports in ipv6 for better congestion control Dave Taht
  2021-07-10 15:19 ` [Galene] " Dave Taht
@ 2021-07-10 16:36 ` T H Panton
  2021-07-10 16:48   ` Dave Taht
  2021-07-11 11:19   ` Juliusz Chroboczek
  1 sibling, 2 replies; 12+ messages in thread
From: T H Panton @ 2021-07-10 16:36 UTC (permalink / raw)
  To: Dave Taht; +Cc: galene



> On 10 Jul 2021, at 17:15, Dave Taht <dave.taht@gmail.com> wrote:
> 
> tim panton wrote me just now on linked in:
> 
> "It is still possible. Just set bundle-policy to max-compat and you'll
> get one stream for audio and one for video. Turn off rtcp-mux and
> you'll get 4 ports (voice, video, voice-RTCP, video-RTCP) - But your
> ability to connect will drop significantly (according to Google's
> data) and your connection setup time will increase. Even with port
> muxing congestion control is still possible, it just works
> _differently_ - which arguably it should, because realtime video has
> quite different needs from streaming or file transfer. Happy to have a
> chat about this...."
> 
> (so... chatting via email is preferred for me)
> 
> I am also under the impression that the congestion control
> notifications in rtcp are essentially obsolete in the rfc, mandating a
> 500ms interval instead of something sane, like a frame?

Oh, no, congestion control is _very_ much alive in WebRTC, but under the guise of
bandwidth estimation. The "simplest" is Google's REMB RTCP message, which basically
looks at the arrival time of packets and uses any _lengthening_ in the time of flight to deduce the onset of
additional buffering in the path. Transport-CC tries to expand this to apply to all the streams muxed over a port
by adding an RTP header extension with an accurate (NTP) clock in it.

The thing that drives this design is that losing a packet is catastrophic for realtime video:
one dropped packet makes seconds' (aka megabytes') worth of data un-renderable.
At 50 fps the frame interval is shorter than the round-trip time on the path (for VDSL users anyway - perhaps not fibre),
so a NAK/resend will fix it too late.

So the strategy is to dynamically estimate the capacity of the path and try to surf _just_ under that, ensuring minimal packet loss.
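The delay-gradient idea behind this "surf just under capacity" strategy can be sketched in a few lines. This is a toy illustration, not Google's actual estimator (which uses a Kalman/trendline filter); the EWMA weights and the 2 ms threshold are arbitrary values chosen for the sketch:

```python
def delay_gradients(send_ts, recv_ts):
    """Per-packet one-way delay variation, in ms.

    A sustained positive drift means each packet is delayed a little
    more than the last, i.e. a queue is building somewhere on the path.
    """
    return [(recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
            for i in range(1, len(send_ts))]

def overusing(grads, threshold_ms=2.0):
    """Flag the onset of congestion when the smoothed gradient exceeds
    a threshold.  A simple EWMA stands in for GCC's Kalman filter."""
    acc = 0.0
    for g in grads:
        acc = 0.9 * acc + 0.1 * g
        if acc > threshold_ms:
            return True
    return False
```

A sender reacting to `overusing` by backing off before the queue overflows is what lets the flow sit just under the path capacity with minimal loss.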

I realise this is anathema to TCP folks; it certainly came as a shock to me….

T.

> 
> My other fantasy is to somehow start using udplite for more things.
> 
> The context for this of course is my never ending quest to have an IP
> based video and audio streaming system good enough to have a band
> playing with each other across town.
> 
> 
> -- 
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> 
> Dave Täht CTO, TekLibre, LLC



* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-10 16:36 ` T H Panton
@ 2021-07-10 16:48   ` Dave Taht
  2021-07-11 11:19   ` Juliusz Chroboczek
  1 sibling, 0 replies; 12+ messages in thread
From: Dave Taht @ 2021-07-10 16:48 UTC (permalink / raw)
  To: T H Panton; +Cc: galene, Cake List

On Sat, Jul 10, 2021 at 9:36 AM T H Panton <tim@pi.pe> wrote:
>
>
>
> > On 10 Jul 2021, at 17:15, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > tim panton wrote me just now on linked in:
> >
> > "It is still possible. Just set bundle-policy to max-compat and you'll
> > get one stream for audio and one for video. Turn off rtcp-mux and
> > you'll get 4 ports (voice, video, voice-RTCP, video-RTCP) - But your
> > ability to connect will drop significantly (according to Google's
> > data) and your connection setup time will increase. Even with port
> > muxing congestion control is still possible, it just works
> > _differently_ - which arguably it should, because realtime video has
> > quite different needs from streaming or file transfer. Happy to have a
> > chat about this...."
> >
> > (so... chatting via email is preferred for me)
> >
> > I am also under the impression that the congestion control
> > notifications in rtcp are essentially obsolete in the rfc, mandating a
> > 500ms interval instead of something sane, like a frame?
>
> Oh, no, the congestion control is _very_ much alive in webRTC but under guise of
> bandwidth estimation - The ‘simplest' is google’s REMB RTCP message which basically
> looks at the arrival time of packets and uses any _lengthening_ in the tof to deduce the onset of
> additional buffering in the path. Transport CC tries to expand this to apply to all the streams muxed over a port
> by adding an RTP header extension with an accurate (NTP) clock in it.

Thank you very much for this update. We can do MUCH better than NTP
these days, GPS being so common.

>
> The thing that drives design this is that losing a packet is catastrophic for realtime video,
> one dropped packet makes seconds (aka megabytes) worth of data un-renderable.

At least in my fq_codeled world, RFC 3168 ECN is alive and well. It's
just that marking and responding to ECN in the
Go library we are using (Pion) is deeply buried.

also SCE might finally take flight after the next ietf.

It's easy to see fq_codel on the wifi chipsets we use over-aggressively
attempting to drop packets for non-paced videoconferencing streams. (I
never said fq_codel was perfect.) I demoed that to Juliusz a while
back. It's totally fixed if you mark at least the keyframes as ECN-capable.

Even more fixable if sch_cake with its diffserv4 support is on the
bottleneck path.

> At 50 fps the frame interval is shorter than the roundtrip time on the path (for VDSL users anyway - perhaps not fiber)
> so a NAK/resend will fix it too late.

In terms of the jamaphone across town, I care mostly that the audio
streams are perfect. We can lose a video frame here or there. But
I grok that we have issues here.

Ideally I'd like to be running at 250 fps (4 ms), which is like being 4
feet away from someone else in the band.

>
> So the strategy is to dynamically estimate the capacity of the path and try to surf _just_ under that, ensuring minimal packet loss.

I generally find that enforcing the observed minimum via cake on a
Starlink, 5G, or wifi link and staying there is best for
videoconferencing apps, and it turns out low latency is often best
for web and other forms of applications.

Only if you are addicted to speedtest results do you need to care
about bandwidth.

>
> I realise this is anathema  to TCP folks, it certainly came as a shock to me….

To *most* TCP folks, not all. RTTs are what I hope an increasingly
high percentage will care about.

> T.
>
> >
> > My other fantasy is to somehow start using udplite for more things.
> >
> > The context for this of course is my never ending quest to have an IP
> > based video and audio streaming system good enough to have a band
> > playing with each other across town.
> >
> >
> > --
> > Latest Podcast:
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> > Dave Täht CTO, TekLibre, LLC
>


-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-10 16:36 ` T H Panton
  2021-07-10 16:48   ` Dave Taht
@ 2021-07-11 11:19   ` Juliusz Chroboczek
  2021-07-15 14:17     ` Sean DuBois
  2021-07-15 16:26     ` T H Panton
  1 sibling, 2 replies; 12+ messages in thread
From: Juliusz Chroboczek @ 2021-07-11 11:19 UTC (permalink / raw)
  To: T H Panton; +Cc: Dave Taht, galene

Tim is right about everything, as usual.

Tim said:

>> "It is still possible. Just set bundle-policy to max-compat and you'll
>> get one stream for audio and one for video.

This mode of operation is not currently supported by Pion.

Tim also said:

> Oh, no, the congestion control is _very_ much alive in webRTC but under
> guise of bandwidth estimation - The ‘simplest' is google’s REMB RTCP
> message which basically looks at the arrival time of packets and uses
> any _lengthening_ in the tof to deduce the onset of additional buffering
> in the path.

Yes.  Galène implements both REMB and loss-based congestion control in the
sender->SFU direction, and just loss-based in the SFU->receiver direction.
Implementing REMB in the SFU->receiver direction is on my to-do list, but
I've got more important things to do.

Dave said:

>> I am also under the impression that the congestion control
>> notifications in rtcp are essentially obsolete in the rfc, mandating a
>> 500ms interval instead of something sane, like a frame?

You're speaking about the loss-based congestion controller, which is used
in combination with the delay-based controller that Tim is referring to.
The 500ms interval is the default, but nothing prevents the receiver from
sending more frequent feedback, for example just after a loss episode.
(Galène doesn't currently do that.)
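The loss-based controller can be sketched as a single rate-update step. The shape of the rule (multiplicative decrease above 10% loss, a 5% probe below 2% loss, hold in between) follows the description in the GCC draft (draft-ietf-rmcat-gcc); this is an illustration of that rule, not Galène's actual code:

```python
def loss_based_rate(rate_bps, loss_fraction):
    """One update of a GCC-style loss-based controller.

    loss_fraction is the fraction of packets reported lost since the
    last feedback message (e.g. from an RTCP receiver report).
    """
    if loss_fraction > 0.10:
        # Heavy loss: back off in proportion to how bad it is.
        return rate_bps * (1.0 - 0.5 * loss_fraction)
    if loss_fraction < 0.02:
        # Negligible loss: probe gently upward.
        return rate_bps * 1.05
    # In-between region: hold the current rate.
    return rate_bps
```

Because the receiver controls when it sends the loss reports, feeding this loop more often than every 500 ms (say, right after a loss episode) tightens its reaction time without changing the rule itself.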

Tim said:

> Transport CC tries to expand this to apply to all the streams muxed over
> a port by adding an RTP header extension with an accurate (NTP) clock in
> it.

I still don't understand why Transport-wide CC (TWCC) is better than REMB.
In my view, it's just moving the REMB congestion controller from the
receiver to the sender, which requires huge amounts of timely per-packet
feedback.

Tim, I'd be very grateful if you could explain what advantages TWCC has
over REMB.  For now, I'm sticking with REMB.

> I realise this is anathema to TCP folks, it certainly came as a shock to me

It depends on your background, I guess.  It was fairly natural for me, but
then I was brought up on the literature on ECN on the one hand and
TCP Vegas on the other, both of which aim to perform congestion control
without any packet drops.

Ceterum autem censeo that Dave is right, and that we should be working on
ECN support in WebRTC.

-- Juliusz


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-11 11:19   ` Juliusz Chroboczek
@ 2021-07-15 14:17     ` Sean DuBois
  2021-07-16  1:36       ` Juliusz Chroboczek
  2021-07-15 16:26     ` T H Panton
  1 sibling, 1 reply; 12+ messages in thread
From: Sean DuBois @ 2021-07-15 14:17 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: T H Panton, Dave Taht, galene

On Sun, Jul 11, 2021 at 01:19:47PM +0200, Juliusz Chroboczek wrote:
> Tim is right about everything, as usual.
> 
> Tim said:
> 
> >> "It is still possible. Just set bundle-policy to max-compat and you'll
> >> get one stream for audio and one for video.
> 
> This mode of operation is not currently supported by Pion.
> 
> Tim also said:
> 
> > Oh, no, the congestion control is _very_ much alive in webRTC but under
> > guise of bandwidth estimation - The ‘simplest' is google’s REMB RTCP
> > message which basically looks at the arrival time of packets and uses
> > any _lengthening_ in the tof to deduce the onset of additional buffering
> > in the path.
> 
> Yes.  Galène implements both REMB and loss-based congestion control in the
> sender->SFU direction, and just loss-based in the SFU->receiver direction.
> Implementing REMB is on my to-do list, but I've got more important things
> to do.
> 
> Dave said:
> 
> >> I am also under the impression that the congestion control
> >> notifications in rtcp are essentially obsolete in the rfc, mandating a
> >> 500ms interval instead of something sane, like a frame?
> 
> You're speaking about the loss-based congestion controller, which is used
> in combination with the delay-based controller that Tim is referring to.
> The 500ms interval is the default, but nothing prevents the receiver from
> sending more frequent feedback, for example just after a loss episode.
> (Galène doesn't currently do that.)
> 
> Tim said:
> 
> > Transport CC tries to expand this to apply to all the streams muxed over
> > a port by adding an RTP header extension with an accurate (NTP) clock in
> > it.
> 
> I still don't understand why Transport-wide CC (TWCC) is better than REMB.
> In my view, it's just moving the REMB congestion controller from the
> receiver to the sender, which requires huge amounts of timely per-packet
> feedback.
> 
> Tim, I'd be very grateful if you could explain what advantages TWCC has
> over REMB.  For now, I'm sticking with REMB.
> 

TWCC gives you the inter-arrival time of packets. You also get this with
abs-send-time and REMB. I would be interested to know the maths of the
data costs of abs-send-time vs TWCC (reference time + deltas).

With TWCC the sender knows the metadata of lost packets. If you lose a packet
with REMB you don't know the send time or the size of the packet. That
seems like it could be useful information?

> > I realise this is anathema to TCP folks, it certainly came as a shock to me
> 
> It depends on your background, I guess.  It was fairly natural for me, but
> then I've been brought up on the litterature on ECN on the one hand and
> TCP-Vegas on the other, both of which aim to perform congestion control
> without any packet drops.
> 
> Ceterum autem censeo that Dave is right, and that we should be working on
> ECN support in WebRTC.

Getting things into Chromium has been hard for me. If ECN gets
acceptance, that would be amazing. My mindset is that TWCC is most
effort/least reward.

Pion just got a network conditioner, so I am hoping to put a basic
TWCC-driven congestion controller in pion/transport. A few papers were
published on Google Congestion Control. The code [0] is nicely split up,
though (delay-based and loss-based BWE each have their own classes).

> 
> -- Juliusz

[0] https://source.chromium.org/chromium/chromium/src/+/main:third_party/webrtc/modules/congestion_controller/goog_cc/


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-11 11:19   ` Juliusz Chroboczek
  2021-07-15 14:17     ` Sean DuBois
@ 2021-07-15 16:26     ` T H Panton
  2021-07-16  1:37       ` Juliusz Chroboczek
  1 sibling, 1 reply; 12+ messages in thread
From: T H Panton @ 2021-07-15 16:26 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: Dave Taht, galene




> On 11 Jul 2021, at 13:19, Juliusz Chroboczek <jch@irif.fr> wrote:
> 
> Tim, I'd be very grateful if you could explain what advantages TWCC has
> over REMB.  For now, I'm sticking with REMB.

I can’t speak from experience (I’ve only used REMB), but my sense is that the
difference really kicks in when you have multiple media streams using the same path -
so perhaps video, screen share, and audio. REMB treats each stream separately, if I recall.

T.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-15 14:17     ` Sean DuBois
@ 2021-07-16  1:36       ` Juliusz Chroboczek
  2021-07-16 14:25         ` Sean DuBois
  0 siblings, 1 reply; 12+ messages in thread
From: Juliusz Chroboczek @ 2021-07-16  1:36 UTC (permalink / raw)
  To: Sean DuBois; +Cc: T H Panton, Dave Taht, galene

>> Tim, I'd be very grateful if you could explain what advantages TWCC has
>> over REMB.  For now, I'm sticking with REMB.

> With TWCC the sender knows the metadata of lost packets. If you lose a packet
> with REMB you don't know the send time or the size of the packet. That
> seems like it could be useful information?

I understand that TWCC is more chatty, and sends more detailed information
to the sender.  What I don't understand is why this information is useful:
REMB performs the exact same computation as TWCC, but it does it on the
receiver side, and only sends the result to the sender, thus avoiding the
chattiness but yielding the exact same result.

What am I missing?  What exactly does TWCC buy you?

-- Juliusz


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-15 16:26     ` T H Panton
@ 2021-07-16  1:37       ` Juliusz Chroboczek
  2021-07-16 14:46         ` T H Panton
  0 siblings, 1 reply; 12+ messages in thread
From: Juliusz Chroboczek @ 2021-07-16  1:37 UTC (permalink / raw)
  To: T H Panton; +Cc: galene

>> Tim, I'd be very grateful if you could explain what advantages TWCC has
>> over REMB.  For now, I'm sticking with REMB.

> I can’t speak from experience (I’ve only used REMB) - but my sense is
> that the difference really kicks in when you have multiple video media
> streams using the same path.  So perhaps video, screen share and
> audio. REMB treats each stream separately if I recall.

Then REMB could be modified to perform per-connection congestion control,
just like TWCC, without all of the chattiness of TWCC.

I really feel that I'm missing something.

-- Juliusz


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-16  1:36       ` Juliusz Chroboczek
@ 2021-07-16 14:25         ` Sean DuBois
  0 siblings, 0 replies; 12+ messages in thread
From: Sean DuBois @ 2021-07-16 14:25 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: T H Panton, Dave Taht, galene

On Fri, Jul 16, 2021 at 03:36:31AM +0200, Juliusz Chroboczek wrote:
> >> Tim, I'd be very grateful if you could explain what advantages TWCC has
> >> over REMB.  For now, I'm sticking with REMB.
> 
> > With TWCC the sender knows the metadata of lost packets. If you lose a packet
> > with REMB you don't know the send time or the size of the packet. That
> > seems like it could be useful information?
> 
> I understand that TWCC is more chatty, and sends more detailed information
> to the sender.  What I don't understand is why this information is useful:
> REMB performs the exact same computation as TWCC, but it does it on the
> receiver side, and only sends the result to the sender, thus avoiding the
> chattiness but yielding the exact same result.
> 
> What am I missing?  What exactly does TWCC buy you?
> 
> -- Juliusz

Receiver-side BWE can't know the size and send time of lost packets.

I am not aware of any other reasons, though. In the GCC paper [0] it
looked like the calculations were designed to happen on both ends. Maybe it
was more maintainable to have all the logic in one peer?

[0] https://www.aitrans.online/static/paper/Gcc-analysis.pdf


* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-16  1:37       ` Juliusz Chroboczek
@ 2021-07-16 14:46         ` T H Panton
  2021-07-16 17:48           ` Juliusz Chroboczek
  0 siblings, 1 reply; 12+ messages in thread
From: T H Panton @ 2021-07-16 14:46 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: galene



> On 16 Jul 2021, at 03:37, Juliusz Chroboczek <jch@irif.fr> wrote:
> 
>>> Tim, I'd be very grateful if you could explain what advantages TWCC has
>>> over REMB.  For now, I'm sticking with REMB.
> 
>> I can’t speak from experience (I’ve only used REMB) - but my sense is
>> that the difference really kicks in when you have multiple video media
>> streams using the same path.  So perhaps video, screen share and
>> audio. REMB treats each stream separately if I recall.
> 
> Then REMB could be modified to perform per-connection congestion control,
> just like TWCC, without all of the chattiness of TWCC.

I don’t think it could. Isn't REMB based on watching the packet arrival interval, which is
pretty consistent on a single stream? But imagine multiplexing some Opus (50 fps),
a screen share (10 fps), a thumbnail (15 fps) and a presenter view (60 fps); you now have
multiple valid packet intervals (in some sort of repeating pattern).
I imagine that would make the smoother in REMB go badly wrong.

TWCC is based on watching the time of flight - which doesn’t have that problem.

> 
> I really feel that I'm missing something.
> 
> -- Juliusz



* [Galene] Re: using up more ports in ipv6 for better congestion control
  2021-07-16 14:46         ` T H Panton
@ 2021-07-16 17:48           ` Juliusz Chroboczek
  0 siblings, 0 replies; 12+ messages in thread
From: Juliusz Chroboczek @ 2021-07-16 17:48 UTC (permalink / raw)
  To: T H Panton; +Cc: galene

> I don’t think it could. - isn't REMB based on watching the packet
> arrival interval

No, it acts on the derivative of the packet arrival *delay*.

Recall that every RTP packet is timestamped at the sender (that's the
Timestamp field of the RTP header).  When the receiver receives a packet
with timestamp t, it computes a local timestamp u.  When it receives
a second packet with remote timestamp t' and local timestamp u', it
computes

  (u' - t') - (u - t)

which is the variation of the packet delay.  Note that this formula is
equivalent to

  (u' - u) - (t' - t)

which shows that the offsets between the local and remote clocks cancel
out, so the clocks don't need to be synchronised.
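The cancellation is easy to check numerically. Everything below is a made-up example: two packets sent 20 ms apart, a 50 ms base path delay, a receiver clock running a full second ahead of the sender's, and 7 ms of extra queueing on the second packet:

```python
def delay_variation(t, u, t2, u2):
    """Variation in one-way delay between two packets, given remote
    (sender) timestamps t, t2 and local (receiver) timestamps u, u2.
    The sender/receiver clock offset cancels, so the clocks need not
    be synchronised."""
    return (u2 - t2) - (u - t)

t, t2 = 0, 20                              # sender timestamps (ms)
u = t + 1000 + 50                          # +1000 ms clock offset, +50 ms path delay
u2 = t2 + 1000 + 50 + 7                    # second packet queues an extra 7 ms
assert delay_variation(t, u, t2, u2) == (u2 - u) - (t2 - t) == 7
```

Only the 7 ms of extra queueing survives; the 1000 ms clock offset and the 50 ms base delay both drop out, exactly as the algebra says.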

-- Juliusz


end of thread, other threads:[~2021-07-16 17:48 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-10 15:15 [Galene] using up more ports in ipv6 for better congestion control Dave Taht
2021-07-10 15:19 ` [Galene] " Dave Taht
2021-07-10 16:36 ` T H Panton
2021-07-10 16:48   ` Dave Taht
2021-07-11 11:19   ` Juliusz Chroboczek
2021-07-15 14:17     ` Sean DuBois
2021-07-16  1:36       ` Juliusz Chroboczek
2021-07-16 14:25         ` Sean DuBois
2021-07-15 16:26     ` T H Panton
2021-07-16  1:37       ` Juliusz Chroboczek
2021-07-16 14:46         ` T H Panton
2021-07-16 17:48           ` Juliusz Chroboczek


This inbox may be cloned and mirrored by anyone:

	git clone --mirror https://lists.galene.org/galene

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V1 galene galene/ https://lists.galene.org/galene \
		galene@lists.galene.org
	public-inbox-index galene
