Modernize 0readme_ethernet.txt by recommending virtualization / containerization of simh? #444

Open
jwbrase opened this issue Jan 28, 2025 · 1 comment

Comments


jwbrase commented Jan 28, 2025

Properly doing the manual setup to get a tap interface bridged to your Ethernet NIC, making it persistent across boots, and not doing it in a way that causes weird networking failures down the line can be a bit of a pain. However, pretty much every virtualization platform I'm aware of already has functionality built in to do this automatically for the user, since VMs generally want a NIC exposed to the local LAN. And most consumer CPUs these days have hardware virtualization features.
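
For reference, the manual setup I'm trying to avoid looks roughly like this (a minimal sketch using iproute2; eth0, br0, tap0, and the user name are placeholders, and making it survive reboots is distro-specific on top of this):

# Create a bridge, create a tap owned by the unprivileged simh user,
# and enslave both the physical NIC and the tap to the bridge.
ip link add name br0 type bridge
ip tuntap add dev tap0 mode tap user simh
ip link set eth0 master br0
ip link set tap0 master br0
ip link set br0 up
ip link set tap0 up
# The host's own IP configuration then has to move from eth0 to br0,
# which is exactly the step that tends to cause the weird failures mentioned above.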

So a very appealing option is to configure a VM with multiple NICs (one for the VM's own OS and one or more for simh), set up simh inside the VM, let the hypervisor and its associated middleware handle the tap interfaces and the bridging, and just present plain old (virtual) Ethernet cards to simh. You still need to set up libpcap in the VM so that non-root simh instances can access the extra Ethernet devices, but the hypervisor on the host handles the heavy lifting of multiplexing it all onto the physical NIC.
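
The libpcap part is just the usual capability grant on the simulator binary, something like this (the path is a placeholder for whichever simulator you actually run):

# Let a non-root simh binary open the extra NICs via libpcap.
setcap cap_net_raw,cap_net_admin=eip /opt/simh/bin/vax
getcap /opt/simh/bin/vax    # verify the capabilities took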

I've got a homelab Proxmox/Ceph cluster in which I've tried this out, and it seems to work well. I've even got some udev rules set up in the VM so that virtualized disks whose serial number (as passed in by the hypervisor) begins with "simh" after the QEMU portion of the serial have their device files assigned to group "simh" and symlinked under /dev/disk/emuguest/. (Similar things may be possible to automate the handling of new NICs added to the VM, but I haven't really tried that yet.)
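
The disk rule is roughly along these lines (a sketch, not my exact rules; the file name is arbitrary and it assumes udev exposes the hypervisor-assigned serial as ID_SERIAL):

# /etc/udev/rules.d/90-simh-disks.rules (sketch)
# Any block device whose hypervisor-assigned serial contains "simh" goes to
# group "simh" and gets a stable symlink under /dev/disk/emuguest/.
SUBSYSTEM=="block", ENV{ID_SERIAL}=="*simh*", GROUP="simh", MODE="0660", SYMLINK+="disk/emuguest/$env{ID_SERIAL}"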

Might running simh inside a VM or container be a good course of action to recommend in 0readme_ethernet.txt as an alternative to a manual bridged tap setup?

More generally, I could see a VM or container template being a fairly portable way to distribute a binary release of simh: for example, an OVA VM with a Debian install on a main disk image, 10 or 20 NICs configured (to be assigned to emulated machines as the user desires), and simh binaries on an image mounted as /opt/simh.
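
The container flavor of the same idea might look roughly like this (hypothetical image and network names; macvlan is one way to give each emulated machine its own presence on the physical LAN):

# Put containers directly on the LAN via macvlan, then run a simulator
# from a prebuilt image (simh-lan, simh-debian, and the paths are hypothetical).
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 simh-lan
docker run -it --rm --network simh-lan \
    --cap-add NET_RAW --cap-add NET_ADMIN \
    -v /srv/simh/machines:/machines \
    simh-debian /opt/simh/bin/vax /machines/vax780.ini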


psmode commented Feb 6, 2025

This is the approach I have been using for some time in my Rocky Linux (formerly CentOS) KVM environment. To keep my sanity, I name the virtual NICs at the host level so I can more easily keep track of which is which. Here is the shell script I use to generate the guest VM.

[root@wort kvm]# cat make_rocky-3disk-2nic
#!/bin/bash

#BOOT_ISO=$(ls /repo/rocky-linux/8/isos/x86_64/Rocky-8.*x86_64-boot.iso)
#DVD1_ISO=$(ls /repo/rocky-linux/8/isos/x86_64/Rocky-8.*x86_64-dvd1.iso)
DVD1_ISO=$(ls /repo/rocky-linux/9/isos/x86_64/Rocky-x86_64-dvd.iso)

NEW_MACHINE=$1                 # guest VM name
MACO6=$2                       # last octet (hex) of the MAC for the OS NIC
let tmpv=0x$MACO6+1            # the simulation NIC gets the next octet up
MACO61=$(printf '%x' "$tmpv")
VPORT=$3                       # VNC port for the guest console

virt-install \
    --connect qemu:///system -n $NEW_MACHINE \
    --ram 4096 --vcpus=4 \
    --cdrom=$DVD1_ISO \
    --disk device=disk,bus=virtio,discard='unmap',path=/dev/T0/$NEW_MACHINE-disk0 \
    --disk device=disk,bus=virtio,path=/dev/T3/$NEW_MACHINE-disk1 \
    --disk device=disk,bus=virtio,path=/dev/T1/$NEW_MACHINE-disk2 \
    --disk device=disk,bus=virtio,discard='unmap',path=/dev/T0/$NEW_MACHINE-disk3 \
    --network=bridge:kvm-guests,model='virtio',target.dev=vn.$1.0 \
    --mac=00:16:3e:01:08:$MACO6 \
    --network=bridge:kvm-guests,model='virtio',target.dev=vn.$1.1 \
    --mac=00:16:3e:01:08:$MACO61 \
    --noreboot --vnc --vncport=$VPORT \
    --os-type linux --os-variant=rocky9.0 --accelerate

I keep the MAC addresses assigned at the host largely the same, with the last octet incremented by 1 between the OS MAC and the simulation MAC. Virtual disks created for the guest are implemented on LVs created prior to running the script. Backups are easy, handy, and safe with this approach (think of a HUGE ODS5 disk backup target mounted on demand in the simulation for VMSBACKUP, but subsequently backed up with Veeam in the guest using logical block tracking).

Within the guest, I end up with one device used by the OS and the other used exclusively by the VAX simulator; ens4 does not get configured by the Linux OS at all:

[psmode@ripple MORGAN]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:01:08:51 brd ff:ff:ff:ff:ff:ff
altname enp0s4
3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:01:08:50 brd ff:ff:ff:ff:ff:ff
altname enp0s3

In the simh config file, attach xq ens4 and you are on your way (after setting a MAC on the xq).
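
i.e. something like this in the VAX ini file (the MAC shown is the one the host assigned to ens4 above; adjust for your own numbering scheme):

set xq mac=00:16:3e:01:08:51
attach xq ens4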

I use this setup to run a mixed-architecture cluster against a FreeAXP Alpha running in a Windows 10 KVM guest on the same KVM host.

I would have liked to run multiple simh VAXen inside the same Linux guest. However, I got stuck: the Ethernet frames of one simulation were not visible to the other simulation within the same guest. Sort of a "VAX children should be seen but not heard" thing.

Testing multiple VAXen with this approach across multiple guests would be wise, preferably with cluster traffic.
