[1.x] System under heavy load stops handling calls via WS API #3358
Comments
May we have some partial debug_locks implementation, only for locks on session creation, to debug it?
Do other requests work though? E.g. an "info" request, or attaching a handle for a different plugin. That could help understand whether the deadlock lies in the transport (WebSocket) or somewhere else.
Do you mean
debug_locks has a massive impact on verbosity; that's probably the reason for the performance issue. If you are using Janus in a containerized environment with cgroups v2, the huge log file growth might increase the memory allocated (due to pages being kept in the buffer) and might explain the OOM.
There is no such option available. You might try customizing the code to just log the
I mean the following request isn't working
I haven't tried this, but I can try next time this happens.
Taking a deeper look at the logs you shared, the issue seems to start with some errors:
Those remind me of situations where the host memory is being exhausted (like the cgroups v2 issue I already mentioned). Are you running Janus in containers with a memory limit? If you suspect a memory leak, try running your Janus app in a lab environment under
We are running it in containers, but without memory limits set.
Yes.
All right, this is a long shot, but can you check the status of the memory in the containers?
Replace long-id with the id of the docker container. If you see the
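As a rough sketch (not necessarily the exact commands referred to above), checking a container's memory status with Docker and cgroups v2 might look like this, with long-id standing in for the container id:

```sh
# Overall usage vs. limit as seen by Docker
docker stats --no-stream long-id

# With cgroups v2, inspect the container's memory controller directly:
docker exec long-id cat /sys/fs/cgroup/memory.current   # current usage in bytes
docker exec long-id cat /sys/fs/cgroup/memory.max       # configured limit ("max" if unlimited)
docker exec long-id cat /sys/fs/cgroup/memory.stat      # anon vs. file (page cache) breakdown
```

A large "file" value in memory.stat compared to "anon" would point at page cache (e.g. from huge log files) rather than a leak in the process itself.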
Got this on another customer. We are also using this in our production, so in case we hit it on our own servers we will enable debug_locks; since we don't have such a high load as on the customers' servers, we will try to catch it there.
Got info from our customer after the Janus restart.
We will try to reproduce it in our lab by enabling recording and setting memory limits on the docker container.
We are unable to reproduce it on our test stand, but it consistently repeats on customers' environments.
That data is useless after a restart; we need it while the issue exists (btw, the
If you suspect a deadlock, wait for the issue and then provide the output of this
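As an assumption of what such a check might involve, dumping all thread backtraces from the running janus process with gdb could look like this:

```sh
# Hypothetical sketch: attach to the running janus process and dump all
# thread backtraces non-interactively (needs gdb installed and ptrace
# permissions, e.g. the SYS_PTRACE capability inside a container).
gdb -p "$(pidof janus)" -batch \
    -ex "set pagination off" \
    -ex "thread apply all bt full" > janus-threads.txt
```

If two or more threads are stuck in pthread_mutex_lock / futex waits on each other's locks, that is a strong hint of a deadlock.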
Got a new log for the issue. Unfortunately the customer's Janus is running in docker with autoheal, so we can't get gdb output yet, but we got something strange in the log:
Full log in the attachment; the restart of Janus by autoheal happened at 2024-05-15T08:41:28.
I see a whole bunch of
It seems to happen after a restart of it. Our clients are not connected directly to Janus over WS and don't manage Janus handles; we use server-side WS connections to Janus and server-side management of handles. It now seems the clients tried to ICE-restart their subscriber connections after the Janus restart, but we don't handle Janus restarts correctly on the server yet: we don't detect Janus restarts, we don't recreate the necessary rooms, and the logic in our app doesn't recover from such a situation.
We found a leak of Janus sessions in our code. Is it possible that a big number of open Janus sessions causes something like an overflow and leads Janus to stop accepting new ones?
This is something we never tested. There are a lot of maps and structs involved (relying on integer indexes) when setting up new sessions (either in the core or in the plugin).
We fixed the session leaks, but it doesn't help with the Janus hangs: today Janus stopped accepting connections on our production, and it also hung four times today on one of our customers.
@spscream I just pushed a PR that contains a patch we prepared for one of our customers some time ago, where they were experiencing a deadlock in the AudioBridge, especially when closing PeerConnections. Due to the way we use some atomics for locking purposes, there was a small chance of inverted locks that could cause a deadlock. Your latest log now seems to indicate the same problem, so I've released the patch as a PR so that you can test that too. Please let us know if that helps, as we were waiting for feedback from them too before merging it.
PS: this is a patch @atoppi made, I just pushed it as a PR (credit where credit's due!)
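For context, an inverted-lock-ordering deadlock of the kind described above can be reduced to a minimal C sketch (illustrative only, not the actual AudioBridge code): two threads take the same pair of mutexes in opposite order, each ends up holding one lock while waiting for the other, and both block forever.

```c
/* Minimal illustration of a lock-inversion deadlock (not Janus code):
 * thread A holds lock1 and waits for lock2, thread B holds lock2 and
 * waits for lock1. Compile with: gcc deadlock.c -o deadlock -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
	(void)arg;
	pthread_mutex_lock(&lock1);   /* A: takes lock1 first */
	usleep(1000);                 /* widen the race window */
	pthread_mutex_lock(&lock2);   /* A: then waits for lock2... */
	pthread_mutex_unlock(&lock2);
	pthread_mutex_unlock(&lock1);
	return NULL;
}

static void *thread_b(void *arg) {
	(void)arg;
	pthread_mutex_lock(&lock2);   /* B: takes lock2 first (inverted order) */
	usleep(1000);
	pthread_mutex_lock(&lock1);   /* B: then waits for lock1 -> deadlock */
	pthread_mutex_unlock(&lock1);
	pthread_mutex_unlock(&lock2);
	return NULL;
}

int main(void) {
	pthread_t a, b;
	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL);        /* if the race triggers, this never returns */
	pthread_join(b, NULL);
	printf("done (only reached if the race did not trigger)\n");
	return 0;
}
```

The general remedy for this class of bug is to enforce a single, consistent acquisition order for the locks involved (or to use trylock with back-off).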
@spscream any update on the PR? Did it help?
It looks like it helps; we have had no problems with it.
Ack, thanks! We merged the PR, and we'll close this issue then. |
What version of Janus is this happening on?
master
Have you tested a more recent version of Janus too?
we use master
Was this working before?
We had no such issue on earlier versions, but we had another issue with memory leaks on them, so we got OOMs sometimes.
Is there a gdb or libasan trace of the issue?
no
Under some circumstances Janus stops handling any API calls via WS: WS is accepting connections, but create_session times out and Janus doesn't work. We tried to enable debug_locks, but we got into a situation where the Janus server eats all memory and IO and gets stuck, so we had performance problems before the issue appeared. This happens on one of our customers with a very heavy load on their Janus instances.
How can we debug this and what could be the cause of it?
https://gist.github.com/spscream/84aa7bca6f8e3f43e07d4c58f414e9cd - recent log of such situation with debug_level = 4