
ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below): clickhouse-1 | #5707

Open
mahesh1b opened this issue Mar 25, 2024 · 44 comments

Comments

@mahesh1b

Self-Hosted Version

24.4.0.dev

CPU Architecture

x86_64

Docker Version

26.0.0

Docker Compose Version

2.25.0

Steps to Reproduce

  • Clone the project from GitHub: https://github.com/getsentry
  • Run ./install.sh in the cloned folder
  • Once the containers are up, check the clickhouse container logs (see the command sketch below)
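
For reference, the logs can be tailed with something like this (assuming the compose service is simply named clickhouse, as it is in getsentry/self-hosted; adjust the name if yours differs):

docker compose logs -f clickhouse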

Expected Result

The clickhouse container should work without throwing any errors in the logs, and CPU consumption should be normal.

Actual Result

clickhouse-1  | 2024.03.25 15:38:16.970267 [ 46 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1  |
clickhouse-1  | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1  | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1  | 2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x101540cd in /usr/bin/clickhouse
clickhouse-1  | 3. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6fd5 in /usr/bin/clickhouse
clickhouse-1  | 4. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1  | 5. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1  | 6. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1  | 7. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1  | 8. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1  | 9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1  | 10. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1  |  (version 21.8.13.1.altinitystable (altinity build))
clickhouse-1  | 2024.03.25 15:38:17.081968 [ 513 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1  |
clickhouse-1  | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1  | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1  | 2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
clickhouse-1  | 3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1  | 4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1  | 5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1  | 6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1  | 7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1  | 8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1  | 9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1  |  (version 21.8.13.1.altinitystable (altinity build))
clickhouse-1  | 2024.03.25 15:38:17.749096 [ 513 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1  |
clickhouse-1  | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1  | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1  | 2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
clickhouse-1  | 3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1  | 4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1  | 5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1  | 6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1  | 7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1  | 8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1  | 9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1  |  (version 21.8.13.1.altinitystable (altinity build))

Event ID

No response

@csvan

csvan commented Mar 26, 2024

Seeing the same thing on 24.3.0. It is unclear when and how it started, but it was not there when we set up the instance initially, nor after upgrading to 24.3.0.

It is also unclear if it has any actual impact on functionality.

@mahesh1b
Author

@csvan I suspect that ClickHouse is causing the spikes in CPU usage; the CPU usage for the server has not been stable.
[screenshot: server CPU usage graph]

@csvan

csvan commented Mar 26, 2024

Looking at our internal graphs, I have not noticed any significant deviations in CPU usage.

@mahesh1b
Author

Do you think having too many projects can cause CPU spikes? I have a total of 67 projects in Sentry, and 23 of them are actively used for monitoring.

@jap

jap commented Mar 26, 2024

I also came across this, and additionally saw the following in the logs early on while booting:

clickhouse-1                                    | 2024.03.26 13:59:42.424894 [ 44 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 21.8.13.1.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>

which makes sense, as there is no IPv6 in our Docker setup.

I've added a <listen_host>0.0.0.0</listen_host> to clickhouse/config.xml and rebuilt things.
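
For context, the override is just one element under the root of clickhouse/config.xml, roughly like this (a sketch assuming the <yandex> root element that this ClickHouse 21.8 build uses; newer builds also accept <clickhouse>):

<yandex>
    <!-- listen on IPv4 only, since the Docker network here has no IPv6 -->
    <listen_host>0.0.0.0</listen_host>
</yandex>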

I've now gotten another error from clickhouse (which has scrolled out of my terminal's history unfortunately) about not being able to bind to several ports, but some prodding with nsenter and ss tells me that it was able to bind, and tcpdumping confirms that requests are being made and processed.

Note that I'm not seeing any CPU spikes either.

(all of this on 24.3.0)

@azaslavsky

A duplicate of this error is at getsentry/self-hosted#2876. Have you tried updating to a nightly build past the PR listed there?

@mahesh1b
Author

@azaslavsky For now I have just rolled back to 24.1.0, and ClickHouse stopped throwing the error, but I am still seeing a lot of CPU spikes on the server.

@aldy505

aldy505 commented Mar 27, 2024

Do you think having too many projects can cause CPU spikes? I have a total of 67 projects in Sentry, and 23 of them are actively used for monitoring.

@mahesh1b to answer this: no, having too many projects doesn't cause CPU spikes. I have 100+ projects with only an 8-core CPU, and the average CPU usage is around 19%–24%.

I also came across this, and additionally saw the following in the logs early on while booting:

clickhouse-1                                    | 2024.03.26 13:59:42.424894 [ 44 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 21.8.13.1.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>

which makes sense, as there is no IPv6 in our Docker setup.

I've added a <listen_host>0.0.0.0</listen_host> to clickhouse/config.xml and rebuilt things.

@jap There is IPv6 support in Docker, though it's not enabled by default yet: https://docs.docker.com/config/daemon/ipv6/

@aldy505

aldy505 commented Mar 27, 2024

Wild guess, but try changing every rust-consumer entry in the docker-compose.yml file to just consumer, and see if that solves the problem.
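
If it helps, the change is roughly just the first word of each Snuba consumer service's command; something like this, going by the layout in getsentry/self-hosted (service names and flags here are a sketch and vary by release):

snuba-errors-consumer:
  <<: *snuba_defaults
  # before: command: rust-consumer --storage errors --consumer-group snuba-consumers ...
  command: consumer --storage errors --consumer-group snuba-consumers ...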

@mahesh1b
Author

@aldy505 For now I have rolled back to version 24.1.0 as it was production. Should I try this on version 23.4.0?

@aldy505

aldy505 commented Mar 27, 2024

@aldy505 For now I have rolled back to version 24.1.0 as it was production. Should I try this on version 23.4.0?

It's up to you; using consumer works, though, as it's not a deprecated command. But if you're not facing any issues by using consumer instead of rust-consumer, we might need to reconsider some things about the use of the Rust consumers.

@mahesh1b
Author

mahesh1b commented Mar 27, 2024

@aldy505 I have set up a new Sentry server with version 23.4.0; I will try it and let you know.
I am a bit confused about whether changing the consumer will resolve the ClickHouse error or the CPU usage.

Thanks.

@mmerickel

Just upgraded from 23.9.1 to 24.3.0 and am seeing this connection error. Also, events are not being processed by the instance; it seems very broken. I followed the instructions in getsentry/self-hosted#2876 (comment) to stop using the rust-consumer and add the billing worker, and it seems to have fixed the issues for now.

@mahesh1b
Author

Replacing rust-consumer with consumer in the docker-compose.yml file resolved the errors; I no longer see the error in the clickhouse container.
But I am still not sure why the CPU usage is so unstable. I am using a t3a.2xlarge instance.
[screenshot: server CPU usage graph]

@csvan

csvan commented Jun 12, 2024

I run 24.5.1 on an 8-core, 32 GB VM and am still being absolutely spammed by these logs, so I am not sure the VM size is related.

@theneva

theneva commented Jun 13, 2024

I run 24.5.1 on an 8-core, 32 GB VM and am still being absolutely spammed by these logs, so I am not sure the VM size is related.

I see. Well, it was worth a shot, thanks!

@gmisiolek-sbm

gmisiolek-sbm commented Jun 19, 2024

Any update on this? I hit the same problems after upgrading Sentry to 24.6.0.

I find that replacing rust-consumer with consumer is the current workaround. That's sad, tbh.

@theneva

theneva commented Jun 19, 2024

I just upgraded my self-hosted stack to 24.6.0, and I'm still seeing the error messages a whole bunch 😞

@gmisiolek-sbm

I just upgraded my self-hosted stack to 24.6.0, and I'm still seeing the error messages a whole bunch 😞

I had the same, changing rust-consumer to consumer in docker-compose.yml helped.

@stumbaumr

Same issue here; #5707 (comment) fixed it.

The ClickHouse log was full of:

2024.06.13 15:27:25.034661 [ 18085 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::error(int, String const&) @ 0x0000000015b3dbf2 in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x0000000015b40376 in /usr/bin/clickhouse
2. DB::HTTPServerRequest::HTTPServerRequest(std::shared_ptr<DB::IHTTPContext>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x0000000013154417 in /usr/bin/clickhouse
3. DB::HTTPServerConnection::run() @ 0x0000000013152ba4 in /usr/bin/clickhouse
4. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
5. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
6. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
7. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
8. ? @ 0x00007f3b5b13f609 in ?
9. ? @ 0x00007f3b5b064353 in ?
 (version 23.8.11.29.altinitystable (altinity build))
2024.06.13 15:27:25.491262 [ 18085 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::error(int, String const&) @ 0x0000000015b3dbf2 in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x0000000015b40376 in /usr/bin/clickhouse
2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x000000000c896cc6 in /usr/bin/clickhouse
3. DB::HTTPServerRequest::HTTPServerRequest(std::shared_ptr<DB::IHTTPContext>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x000000001315451b in /usr/bin/clickhouse
4. DB::HTTPServerConnection::run() @ 0x0000000013152ba4 in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
9. ? @ 0x00007f3b5b13f609 in ?
10. ? @ 0x00007f3b5b064353 in ?
 (version 23.8.11.29.altinitystable (altinity build))
root@sentry01(dc1.prd):~/getsentry/self-hosted (git:(24.6.0))# docker exec -it sentry-self-hosted-clickhouse-1 /bin/bash
root@7271d67d55dc:/# ls -alh /var/log/clickhouse-server/
total 423M
drwxrwxrwx 2 clickhouse clickhouse   23 Jun 13 04:35 .
drwxr-xr-x 5 root       root         12 Apr 30 03:07 ..
-rw-r----- 1 clickhouse clickhouse 996M Jun 22 14:56 clickhouse-server.err.log
-rw-r----- 1 clickhouse clickhouse  20M Jun 13 04:35 clickhouse-server.err.log.0.gz
-rw-r----- 1 clickhouse clickhouse  20M Jun  3 06:20 clickhouse-server.err.log.1.gz
-rw-r----- 1 clickhouse clickhouse  19M May 23 15:06 clickhouse-server.err.log.2.gz
-rw-r----- 1 clickhouse clickhouse  19M May 22 03:53 clickhouse-server.err.log.3.gz
-rw-r----- 1 clickhouse clickhouse  19M May 20 21:42 clickhouse-server.err.log.4.gz
-rw-r----- 1 clickhouse clickhouse  19M May 19 15:43 clickhouse-server.err.log.5.gz
-rw-r----- 1 clickhouse clickhouse  19M May 18 10:50 clickhouse-server.err.log.6.gz
-rw-r----- 1 clickhouse clickhouse  19M May 17 05:01 clickhouse-server.err.log.7.gz
-rw-r----- 1 clickhouse clickhouse  19M May 15 23:16 clickhouse-server.err.log.8.gz
-rw-r----- 1 clickhouse clickhouse 996M Jun 22 14:56 clickhouse-server.log
-rw-r----- 1 clickhouse clickhouse  20M Jun 13 04:35 clickhouse-server.log.0.gz
-rw-r----- 1 clickhouse clickhouse  20M Jun  3 06:20 clickhouse-server.log.1.gz
-rw-r----- 1 clickhouse clickhouse  19M May 23 15:06 clickhouse-server.log.2.gz
-rw-r----- 1 clickhouse clickhouse  19M May 22 03:53 clickhouse-server.log.3.gz
-rw-r----- 1 clickhouse clickhouse  19M May 20 21:42 clickhouse-server.log.4.gz
-rw-r----- 1 clickhouse clickhouse  19M May 19 15:43 clickhouse-server.log.5.gz
-rw-r----- 1 clickhouse clickhouse  19M May 18 10:50 clickhouse-server.log.6.gz
-rw-r----- 1 clickhouse clickhouse  19M May 17 05:01 clickhouse-server.log.7.gz
-rw-r----- 1 clickhouse clickhouse  19M May 15 23:16 clickhouse-server.log.8.gz
-rw-r----- 1 clickhouse clickhouse  19M May 14 17:27 clickhouse-server.log.9.gz
root@7271d67d55dc:/#

@crinjes

crinjes commented Jun 25, 2024

Hello @lynnagara, sorry for the ping. Do you have any timeline regarding this issue?

Can you or other people who have responded to this issue make it more clear what the problem is?

The original message just contains some error logs from the clickhouse container in a self-hosted environment. Taken alone, I wouldn't assume those are anything more than (recoverable) transient networking issues. That container is now at least one major version behind the lowest major version we support (22.8, soon to move to 23.3).

I'd like to close this issue out, or narrow down the problem (if CPU usage is too high, then on which containers?)

The main reason I ended up here, and applied this workaround, is disk usage from log spam. Both ClickHouse log files fill up with the same error message at a rate of multiple gigabytes per day (see the comment above). The high CPU usage may just be another symptom of whatever is going on.

It's the error logs from the original message, but at a constant rate of multiple per second.

Their frequency may depend on the type and amount of activity, so it may not show up on an idle test instance.

@csvan

csvan commented Jun 26, 2024

Hello @lynnagara, sorry for the ping. Do you have any timeline regarding this issue?

Can you or other people who have responded to this issue make it more clear what the problem is?
The original message just contains some error logs from the clickhouse container in a self-hosted environment. Taken alone, I wouldn't assume those are anything more than (recoverable) transient networking issues. That container is now at least one major version behind the lowest major version we support (22.8, soon to move to 23.3).
I'd like to close this issue out, or narrow down the problem (if CPU usage is too high, then on which containers?)

The main reason I ended up here, and applied this workaround, is disk usage from log spam. Both ClickHouse log files fill up with the same error message at a rate of multiple gigabytes per day (see the comment above). The high CPU usage may just be another symptom of whatever is going on.

It's the error logs from the original message, but at a constant rate of multiple per second.

Their frequency may depend on the type and amount of activity, so it may not show up on an idle test instance.

You can cap the size of container log files in your Docker daemon config (/etc/docker/daemon.json), e.g.

  {
    "log-driver": "local",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    }
  }
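
Note that this only caps what Docker captures from the container's stdout/stderr. The files under /var/log/clickhouse-server (as in the listing above) grow on their own and follow ClickHouse's <logger> settings, which could also be tightened in the config override; a rough sketch (element names are from the stock ClickHouse config, exact defaults vary by version):

<yandex>
    <logger>
        <!-- rotate earlier and keep fewer archived files than the defaults -->
        <size>100M</size>
        <count>3</count>
    </logger>
</yandex>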

@JanMikes

You can cap the size of container log files in your Docker daemon config (/etc/docker/daemon.json), e.g.

  {
    "log-driver": "local",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    }
  }

Hi (yeah, I have the same issue 😺). Though you can cap the size of the logs, that is not really a solution, nor even a workaround, when using centralized log storage. For example, we use the Grafana stack (Promtail + Loki), and it takes a lot of storage; ignoring logs for the container is not a solution either.

@stumbaumr

The solution to hammering the logging system is not to reduce the log file size!
Fix it at the source...

@csvan

csvan commented Jun 27, 2024

@stumbaumr it's a workaround until the issue is resolved; nobody said it was a solution.

@liukch

liukch commented Jul 1, 2024

Any progress on this issue, or is there a schedule? It has been several months since this issue was created.
If it's not resolved, every Sentry upgrade will continue to be troublesome and will keep needing the workaround.
@aldy505 @lynnagara

@patschi

patschi commented Jul 21, 2024

I'm also experiencing this issue, still with the most recent 24.7.0. Adjusting rust-consumer to consumer as mentioned in #5707 (comment) does indeed seem to solve the issue.

I'm now running the following after each update/install and before starting the stack:

cp /opt/sentry/onpremise/docker-compose.yml /opt/sentry/onpremise/docker-compose.yml.bak
sed -e "s/rust-consumer/consumer/g" -i /opt/sentry/onpremise/docker-compose.yml
docker compose --env-file .env.custom up -d
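
To double-check that nothing was missed, a quick grep of the rewritten file should find no remaining entries:

grep -n "rust-consumer" /opt/sentry/onpremise/docker-compose.yml || echo "no rust-consumer entries left"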

I only replaced the consumer; no other changes were made. RAM usage also dropped slightly. So it seems there is indeed an issue with the new consumer written in Rust.

@neseih

neseih commented Jul 23, 2024

I'm also experiencing this issue, still with the most recent 24.7.0. Adjusting rust-consumer to consumer as mentioned in #5707 (comment) does indeed seem to solve the issue.

I'm now running the following after each update/install and before starting the stack:

cp /opt/sentry/onpremise/docker-compose.yml /opt/sentry/onpremise/docker-compose.yml.bak
sed -e "s/rust-consumer/consumer/g" -i /opt/sentry/onpremise/docker-compose.yml
docker compose --env-file .env.custom up -d

I only replaced the consumer; no other changes were made. RAM usage also dropped slightly. So it seems there is indeed an issue with the new consumer written in Rust.

Thanks, this worked for me as well and also resolved some other nasty DuplicateKeyExceptions in Postgres. It should be integrated into master as soon as possible.

@Solvik

Solvik commented Oct 24, 2024

It seems ClickHouse fixed the logging issue, and the fix was released in the ClickHouse 24.8 LTS release (2024-08-20).

@aarnaud

aarnaud commented Nov 6, 2024

Same issue with 24.10.0

Josh5 added a commit to Josh5/sentry-docker-swarm that referenced this issue Nov 8, 2024
There is a known bug that has yet to be fixed that causes lots of errors and excess resource consumption in ClickHouse.
REF: getsentry/snuba#5707 (comment)
@sebzur

sebzur commented Nov 20, 2024

In version 24.11.0 this issue still persists. I applied the suggested fix (https://github.com/getsentry/snuba/issues/5707#issuecomment-2027710056), and everything appears to be working now. @patschi seems to be correct: there is indeed an issue with rust-consumer. I had previously fixed it a few months ago by replacing the Rust version with the standard one, but unfortunately the issue has not been fully resolved.

@aamarques

I'm also experiencing this issue, still with the most recent 24.7.0. Adjusting rust-consumer to consumer as mentioned in #5707 (comment) does indeed seem to solve the issue.
I'm now running the following after each update/install and before starting the stack:

cp /opt/sentry/onpremise/docker-compose.yml /opt/sentry/onpremise/docker-compose.yml.bak
sed -e "s/rust-consumer/consumer/g" -i /opt/sentry/onpremise/docker-compose.yml
docker compose --env-file .env.custom up -d

I only replaced the consumer; no other changes were made. RAM usage also dropped slightly. So it seems there is indeed an issue with the new consumer written in Rust.

Thanks, this worked for me as well and also resolved some other nasty DuplicateKeyExceptions in Postgres. It should be integrated into master as soon as possible.

I still get duplicate key value violates unique constraint "sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq" errors.

@sgohl

sgohl commented Nov 28, 2024

Socket is not connected still happens on 24.12.0.dev0 (master).
I did not try the rust-consumer to consumer workaround, though.

I also have errors from Postgres about duplicates that I cannot get rid of:

sentry-self-hosted-postgres-1  | 2024-11-28 09:43:27.622 UTC [50626] STATEMENT:  INSERT INTO "sentry_environmentproject" ("project_id", "environment_id", "is_hidden") VALUES (9, 13, NULL) RETURNING "sentry_environmentproject"."id"
sentry-self-hosted-postgres-1  | 2024-11-28 09:44:02.370 UTC [53361] ERROR:  duplicate key value violates unique constraint "sentry_environmentprojec_project_id_environment_i_91da82f2_uniq"
sentry-self-hosted-postgres-1  | 2024-11-28 09:44:02.370 UTC [53361] DETAIL:  Key (project_id, environment_id)=(5, 82) already exists.

@evanh
Member

evanh commented Dec 9, 2024

We're working on getting to the latest version of ClickHouse to fix the Rust consumer issue (per this comment).

Assigning this to the self-hosted team to triage the Postgres issue.
