We ran a failover today that moved proxysql to a different nodepool, and the new one was not set to autoscale. This led to two pods being unschedulable, but the agent was still trying to cluster with those pods.
I thought there was code to prevent clustering if the pod doesn't have an IP yet; I will check on that.
Alternatively something is screwy with k8s, but I don't know what that'd be; pods should not be behind a service until they report healthy.
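The guard described above can be sketched as follows. This is a minimal Python illustration, not ProxySQL's actual C++ implementation; the function name `resolvable_peers` is hypothetical. The idea is simply to drop any peer whose hostname does not currently resolve, which is the condition behind the `[-2]` (`EAI_NONAME`) error in the monitor log below:

```python
import socket

def resolvable_peers(hostnames, port=6032):
    """Return only the peers whose hostnames currently resolve.

    Hypothetical sketch of the guard discussed above: a pod that is
    unscheduled has no IP, so getaddrinfo fails with EAI_NONAME (-2),
    and we leave it out of the cluster peer list.
    """
    up = []
    for host in hostnames:
        try:
            socket.getaddrinfo(host, port)
        except socket.gaierror:
            # No IP yet (pod not scheduled) -- skip this peer for now.
            continue
        up.append(host)
    return up
```

With a guard like this, the agent would retry unresolved peers on the next pass instead of handing them to the cluster monitor thread.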
```
2025-02-13 13:55:58 MySQL_Monitor.cpp:4502:monitor_dns_resolver_thread(): [ERROR] An error occurred while resolving hostname: [-2]
2025-02-13 13:56:00 ProxySQL_Cluster.cpp:273:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer :6032 . Error: Can't connect to local serve
2025-02-13 13:56:02 ProxySQL_Cluster.cpp:273:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer :6032 . Error: Can't connect to local serve
2025-02-13 13:56:04 ProxySQL_Cluster.cpp:273:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer :6032 . Error: Can't connect to local serve
2025-02-13 13:56:06 ProxySQL_Cluster.cpp:273:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer :6032 . Error: Can't connect to local serve
2025-02-13 13:56:08 ProxySQL_Cluster.cpp:273:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer :6032 . Error: Can't connect to local serve
2025-02-13 13:56:08 [INFO] Received LOAD PROXYSQL SERVERS TO RUNTIME command
```