From: Michael Ströder <michael@stroeder.com>
To: galene@lists.galene.org
Date: Tue, 12 Jan 2021 20:29:30 +0100
Subject: [Galene] Re: fq-codel trashing
In-Reply-To: <871reqqb52.wl-jch@irif.fr>

On 1/12/21 7:10 PM, Juliusz Chroboczek
wrote:
> We just had a meeting with 70 people and at around 40 cameras
> switched on,

Which send quality were they all using? "normal"?

Frankly, I understand only ~5% of what you're talking about in this
thread. But do you really expect all users' end devices to decode 40
video streams? Not to mention all the flaky Internet connections,
already overloaded while other family members are watching Netflix,
or similar situations in shared flats.

As I said before, I'm running Galène in a VM on insanely slow
hardware. But even with this setup and the send quality set to
"lowest", we managed to overwhelm some older iPads and older laptops
with just 7 video streams. In my local tests with slow, 10+-year-old
laptops I even get video drop-outs within my LAN with only 3 video
streams.

> Galène became unusable -- there were too many voice drops, which
> indicates two issues:
>
> * I need to think of a better way of prioritising voice over video
>   when under load;

We had one hearing-impaired user who hears a little with in-ear
devices. Normally she also follows spoken text by lip-reading to get
more context, but that is nearly impossible for her in a video
session because audio and video are not sufficiently synchronised
with our setup. It would probably be helpful if the prioritisation of
voice over video could be adjusted per user, to at least give such a
special case a chance.

> * there are fairness issues -- some clients were receiving okay-ish
>   audio, others were not.

I'm not sure that's really a fairness issue within Galène. I can see
differing latencies in /stats for different connections. A connection
with higher latency, usually on all its "Down" streams, keeps the
higher latency consistently throughout the whole session. I suspect
the receiver side is the issue.

> Galène recovered after some people switched their cameras off; I
> didn't need to restart anything.
I can confirm that Galène behaves quite predictably when streams are
turned on or off.

> At the highest point, Galène was at 270% CPU, and the TURN server
> was using another 50%. That's on a four-core VM.

I'm far away from such a setup, so I wonder whether my response is
useful at all.

Ciao, Michael.
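
P.S.: To make the per-user prioritisation idea a bit more concrete,
here is a rough Go sketch (Galène is written in Go). Everything in it
is hypothetical -- the types, the VideoPriority knob, the byte budget
-- none of this is Galène's actual code or API. It only illustrates
"under congestion, drop video before audio, with a per-user override":

```go
package main

import "fmt"

// Packet is a hypothetical stand-in for an outgoing RTP packet.
type Packet struct {
	Audio bool
	Size  int
}

// Client is a hypothetical per-user sender state. VideoPriority is the
// imagined per-user knob: 0 means video is dropped first under load,
// 1 means video is treated like audio.
type Client struct {
	Name          string
	VideoPriority float64
	budget        int // bytes we may still send in this interval
}

// Send decides whether to forward a packet. Within the budget,
// everything goes out; over budget, audio always gets through while
// video is forwarded only if the user's priority demands it.
func (c *Client) Send(p Packet) bool {
	if p.Size <= c.budget {
		c.budget -= p.Size
		return true
	}
	if p.Audio {
		return true
	}
	return c.VideoPriority >= 1.0
}

func main() {
	// Default user: video is sacrificed first when congested.
	c := &Client{Name: "alice", VideoPriority: 0, budget: 1200}
	fmt.Println(c.Send(Packet{Audio: false, Size: 1000})) // fits budget
	fmt.Println(c.Send(Packet{Audio: false, Size: 1000})) // over budget: dropped
	fmt.Println(c.Send(Packet{Audio: true, Size: 300}))   // audio still forwarded
}
```

A user who depends on lip-reading, like the one I described, would
then get VideoPriority set to 1 so her video stays in step with the
audio for as long as possible.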