No VM Console access #12199
---
Hi Community,
Replies: 12 comments 10 replies
---
Post-upgrade, I removed the whole zone and restarted the first configuration from scratch. On launching the zone, it is stuck at the first step, "Creating physical Networks"; I have been waiting for about 30 minutes. Meanwhile, inside the server, `nmcli connection show` added a dark-grey line: a bridge named "cloud0" instead of the one I had previously set up, named cloudbr0 as per the guide.
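A quick way to see which bridges exist on the host and whether they are up is `ip -br link show type bridge`; a minimal sketch (the sample text below stands in for real command output, with made-up MAC addresses):

```shell
#!/bin/sh
# Sample mimicking `ip -br link show type bridge` output on the host;
# names, states, and MACs here are illustrative only.
sample='cloudbr0         UP             aa:bb:cc:dd:ee:01
cloud0           DOWN           aa:bb:cc:dd:ee:02'

# Print any bridge whose operational state is not UP:
echo "$sample" | awk '$2 != "UP" { print $1 " is " $2 }'
```

On the real server you would pipe the live command output into the same `awk` filter instead of the sample.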
---
Nope, it can't. I can ping it from inside the physical server but not from my
desktop, which is routed with no firewall rules to that machine (I access the
server, which is in the same subnet, directly via SSH and HTTPS for the Rocky
Linux console).
Thank you for your reply
Raff
On Mon, 8 Dec 2025 at 10:35, prashanthr2 ***@***.***>
wrote:
… @poltraf <https://github.com/poltraf> Could you verify whether your local
machine can reach the Console Proxy VM’s public IP on port 443?
When you open a VM console in CloudStack, your browser establishes an
HTTPS (port 443) connection directly to the Console Proxy VM’s public IP.
If this port is unreachable, the console will fail to load.
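The port-443 check suggested above can be done from the desktop with a plain TCP connect; a minimal sketch (192.0.2.10 is an RFC 5737 placeholder, so substitute the CPVM public IP shown under Infrastructure -> System VMs):

```shell
#!/bin/bash
# CPVM_IP is a placeholder documentation address, not a real Console Proxy.
CPVM_IP=192.0.2.10

# Attempt a plain TCP connect to port 443 with a 3-second timeout
# (roughly equivalent to `nc -z -w 3 $CPVM_IP 443`).
if timeout 3 bash -c "exec 3<>/dev/tcp/$CPVM_IP/443" 2>/dev/null; then
  echo "443 reachable"
else
  echo "443 unreachable"
fi
```

If this reports unreachable from the desktop but reachable from the host, the problem is routing/firewalling between the two, not the CPVM itself.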
---
Again, I destroyed and recreated everything. This time I also set the VLAN ID for all the traffic types, management and public. Still no access to the console. The weird thing is that from inside the server I can ping the system VM's private address, but not the public one (which is on the same VLAN/subnet), nor the link-local/control IP; although the latter is maybe normal.
---
@poltraf You should be able to ping the public IPs of the system VMs. Are your pod IP range and public IP range in the same subnet? I'm asking because I suspect it could be a route conflict on the CPVM.
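One quick way to spot that kind of overlap, assuming both ranges are /24s (the addresses below are RFC 5737 placeholders, not values from this thread):

```shell
#!/bin/sh
# Report whether two IPv4 addresses fall inside the same /24 network
# by comparing everything before the last octet.
same_net24() {
  if [ "${1%.*}" = "${2%.*}" ]; then
    echo "$1 and $2: same /24"
  else
    echo "$1 and $2: different /24"
  fi
}

same_net24 192.0.2.50 192.0.2.200    # e.g. a pod IP vs a public IP
```

If the pod range and public range do land in the same subnet, the CPVM ends up with two interfaces in one network, which is exactly the kind of route conflict suspected above.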
---
Well, better if I explain my network now: 2 physical NICs, bonded, with 2 VLAN IDs (but just one in use).
---
Update: I reinstalled the whole stack. I had missed adding
---
New update. After a long, long time, the 2 VMs were deployed, but the agent status is Unknown. Accessing them via virsh, I noticed there is no agent at all; the directory /usr/local/cloud is empty. Permissions on the primary storage are 775, and the owner is root (qemu is running as root).
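For the record, a minimal sketch of checking mode and owner on a storage path (the `mktemp` directory below stands in for the real primary-storage mount point):

```shell
#!/bin/sh
# Stand-in directory for the primary-storage mount point.
PRIMARY=$(mktemp -d)
chmod 775 "$PRIMARY"

# On the real host this should report 775 and the expected owner
# (root here, since qemu runs as root).
stat -c '%a %U' "$PRIMARY"
```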
---
@poltraf Can you confirm the version of CloudStack and also the system VM template? I see you have the bridges as below on your physical server. Have you configured them under Zone --> Physical Network --> Traffic Types? (Make sure your management and public traffic types match the right labels.) The quick installation guide assumes all traffic types are associated with cloudbr0, which is not the case here based on the networking info you shared in #12199 (comment). Hence, make sure the traffic types in your physical network match the right labels (bridge names). If the labels are not right, update them and destroy the system VMs; new system VMs will be created automatically.
---
Hi Pras,
---
Unfortunately not :-(

```
virsh console v-7-VM
root@v-7-VM:/usr/local/cloud# ls
root@v-7-VM:/usr/local/cloud# mount          (NO NFS)
root@v-7-VM:/usr/local/cloud# cat /var/log/cloud.log
root@v-7-VM:/usr/local/cloud# dmesg | grep -i nfs
root@v-7-VM:/usr/local/cloud# service cloud status
Dec 11 10:37:35 v-7-VM systemd[1]: cloud.service: Scheduled restart job, restar>
```

So: the cloud agent isn't starting because it is not installed, and it is not installed because the directory is empty. My idea is that the VM cannot mount the secondary storage, which is inside the same KVM host and has permissions 777... I do not have any further ideas...
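One way to confirm the missing NFS mount from inside the VM is to look at /proc/mounts; a sketch, where the sample line mimics what a healthy system VM would show and HOST/paths are placeholders:

```shell
#!/bin/sh
# Sample /proc/mounts line for a healthy NFS secondary-storage mount;
# HOST and the paths are placeholders, not values from this thread.
sample='HOST:/export/secondary /mnt/sec nfs rw,relatime 0 0'

# Field 3 is the filesystem type; this matches both nfs and nfs4.
echo "$sample" | awk '$3 ~ /^nfs/ { print "secondary mounted at " $2 }'

# On the real VM:
#   awk '$3 ~ /^nfs/' /proc/mounts             # any NFS mounts at all?
#   showmount -e HOST                          # is the export visible?
#   mount -t nfs HOST:/export/secondary /mnt   # try the mount by hand
```

If the manual mount also fails, the problem is the export or network path, not the agent.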
---
Great! Done in seconds after another destroy-and-recreate, and the VMs are up and running correctly!
---
Oh, so it's something that has already happened... Being network-related, maybe it could be reproduced by building the same network chain I did: NICs, then bond, then VLANs, and finally bridges on top of everything. At the beginning I tried to do it via nmtui, but the bridges didn't pick up their slaves, so I moved to manual configuration of the ifcfg-* files.
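For reference, that chain in ifcfg-* terms might look like the sketch below; device names, the VLAN ID, and the bonding options are illustrative assumptions, not the exact files from this setup:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eno1  -- one physical slave
DEVICE=eno1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# ifcfg-bond0  -- the bond itself
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes

# ifcfg-bond0.100  -- VLAN 100 tagged on top of the bond
DEVICE=bond0.100
VLAN=yes
BRIDGE=cloudbr0
ONBOOT=yes

# ifcfg-cloudbr0  -- the bridge CloudStack uses as its traffic label
DEVICE=cloudbr0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
```

The second slave NIC would get a file like ifcfg-eno1 with its own DEVICE name; the bridge (not the VLAN or bond) is where any host IP address belongs.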
---
@poltraf
cloud0 interface is DOWN, try