Implement basic varlink API and rhc-server daemon #366
Conversation
Force-pushed from 207c4af to 7671637 (Compare)
pkoprda
left a comment
Thanks for the PR, I have left some comments below.
Force-pushed from 0ac7e54 to 860dc2b (Compare)
Hi @mjcr99, […] Thanks in advance.
Force-pushed from 28eb80d to 83e397c (Compare)
jirihnidek
left a comment
Thanks for the PR. I have a few comments and requests below.
I have another request here: could you please add an rhc-server.service file and modify the rhc.spec file? Your PR contains some code that is not possible to test without these changes.
pkoprda
left a comment
Some comments below.
Please also fix the typo in the commit message d367b32, feat(varlink api): added preliminary varlink API implementation (preliminar -> preliminary), and remove the trailing dot so that it fits the conventional commit format.
Force-pushed from 83e397c to 54a40d6 (Compare)
pkoprda
left a comment
Can you also write some unit tests?
jirihnidek
left a comment
Thanks for your update. I can confirm that it works as expected: the service starts automatically when rhc-server.socket is enabled and I call the varlink method using varlinkctl.
However, when I stop rhc-server.service using systemctl stop rhc-server.service, something deletes the /run/rhc directory, including the socket file. It is then not possible to call the varlink method again, because the socket unit no longer has its socket file and cannot trigger rhc-server.service. Maybe there is something wrong in the service configuration, or some function in rhc-server has a side effect that deletes this directory.
Force-pushed from 54a40d6 to 6285684 (Compare)
Force-pushed from bc7ea71 to 6df2545 (Compare)
* Card ID: CCT-1753 This initial implementation supports a test call that returns the same received message. Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
* Card ID: CCT-1753 Add a minimal backend implementation that handles varlink Test method requests following the pattern established in go-varlink/example. The Test method accepts an input string and returns it prefixed with "Echo from rhc-server:" to demonstrate bidirectional communication. Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
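The echo behaviour described in this commit can be sketched as a plain Go method. Only the "Echo from rhc-server:" prefix comes from the commit message; the type and field names here (Backend, TestInput, TestOutput, Message) are illustrative assumptions, and the real handler is registered with the varlink registry rather than called directly:

```go
package main

import "fmt"

// TestInput and TestOutput mirror the hypothetical payloads of the
// varlink Test method; the real types are generated from the interface
// definition in the PR.
type TestInput struct {
	Message string `json:"message"`
}

type TestOutput struct {
	Message string `json:"message"`
}

// Backend handles requests for the internal RHC varlink interface.
type Backend struct{}

// Test echoes the input back, prefixed so that callers can verify
// bidirectional communication with the daemon.
func (b *Backend) Test(in TestInput) TestOutput {
	return TestOutput{Message: fmt.Sprintf("Echo from rhc-server: %s", in.Message)}
}

func main() {
	b := &Backend{}
	out := b.Test(TestInput{Message: "hello"})
	fmt.Println(out.Message) // Echo from rhc-server: hello
}
```

In the PR this method is invoked through the varlink protocol, so the prefix makes it obvious in a client transcript that the round trip reached the daemon and came back.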
* Card ID: CCT-1753 Implement the rhc-server daemon that exposes a varlink API over a Unix socket for internal RHC communication.
  Key features:
  - Uses govarlink.NewRegistry() to handle varlink requests
  - Systemd socket activation support with fallback to manual socket creation
  - Unix socket at /run/rhc/com.redhat.rhc with 0666 permissions
  - Graceful shutdown on SIGINT/SIGTERM signals
  - Integrates with the internal/rhc-server backend for request handling
  The server follows the architecture pattern from go-varlink/example, registering the internal API handler and serving requests through the varlink protocol.
  Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
* Card ID: CCT-1753 It also updates the .gitignore file to avoid committing the rhc-server binary, and updates the clean target to remove the rhc-server binary as well. Signed-off-by: mjcr99 <macano@redhat.com>
* Card ID: CCT-1753 Add a PID file-based locking mechanism to ensure only one instance of rhc-server can run at a time. This addresses the issue where starting rhc-server multiple times could lead to socket conflicts and communication failures.
  The PID lock prevents the race condition where:
  - the first rhc-server instance creates socket A
  - a second rhc-server instance removes socket A and creates socket B
  - the second instance terminates and removes socket B
  - the first instance becomes unreachable
  With this change, the second instance will fail immediately with: "another instance of rhc-server is already running"
  The implementation is robust against crashes: flock() is automatically released by the kernel when the process terminates (even with kill -9), allowing new instances to start cleanly after a crash.
  Key changes:
  1. PID lock implementation:
     - Introduce an acquirePIDLock() function that creates and locks /run/rhc/rhc-server.pid using flock(LOCK_EX|LOCK_NB)
     - The non-blocking lock ensures immediate failure if another instance is already running
     - Returns a cleanup function for proper lock release and PID file removal
     - The lock is automatically released by the kernel if the process crashes or is killed
  2. Improve systemd socket activation:
     - Iterate through all listeners from activation.Listeners() instead of blindly using the first one
     - Verify listener.Addr().Network() == "unix" to ensure we get the correct socket type
     - Return a descriptive error if systemd provides listeners but none are unix sockets
     - This makes the code more robust for future scenarios where multiple listeners of different types might be provided
  3. Socket removal safety:
     - Add comments explaining that removing an existing socket is now safe because we hold the PID lock
     - If the socket exists, the previous instance terminated abnormally
     - There is no risk of removing the socket of a running instance
  4. Permission changes:
     - Change socket permissions from 0666 to 0660 for better security
  Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
Force-pushed from 6df2545 to 6749c23 (Compare)
ReadWritePaths=/run/rhc
RuntimeDirectory=rhc
RuntimeDirectoryMode=0755
RuntimeDirectoryPreserve=yes
…erver
* Card ID: CCT-1753
* Card ID: CCT-1754
  Introduce systemd service and socket units to enable rhc-server to run as a system service with socket activation support. Update the RPM spec file to build, install, and manage the rhc-server daemon.
  New systemd units:
  1. rhc-server.socket:
     - Defines the Unix socket at /run/rhc/com.redhat.rhc
     - Enables socket activation (the service starts on demand when a client connects)
     - Socket mode 0660
  2. rhc-server.service:
     - Executes the /usr/libexec/rhc/rhc-server binary
     - Requires and orders after rhc-server.socket for socket activation
     - The RuntimeDirectory directive ensures /run/rhc is created with correct permissions (0755)
     - ReadWritePaths grants write access to /run/rhc for the PID lock
     - Logs to the systemd journal for centralized log management
     - RuntimeDirectoryPreserve ensures the /run/rhc folder is not deleted when the service is stopped
  RPM packaging changes (rhc.spec):
  - Build phase: compile the rhc-server binary alongside the rhc client
  - Install phase: install rhc-server to /usr/libexec/rhc/ (a daemon, not user-facing); install the systemd units to /usr/lib/systemd/system/; change the rhc binary installation from a wildcard to an explicit path to avoid unintended files
  - Scriptlets: %post enables rhc-server.socket after package installation; %preun stops the socket and service before package removal; %postun restarts the service after a package update; %files declares the rhc-server binary and systemd units as package files
  Socket activation benefits:
  - The service starts automatically when a client connects (lazy loading)
  - Systemd handles the socket lifecycle and file descriptor passing
  - Compatible with the rhc-server PID lock for instance management
  Deployment workflow:
  1. Install the RPM package
  2. systemctl enable --now rhc-server.socket
  3. The service auto-starts when an rhc client connects to the socket
  4. The PID lock prevents multiple instances from running concurrently
  Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
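The directives listed in this commit message suggest unit files roughly like the following. This is a sketch reconstructed from the description above, not the verbatim units from the PR, and the Description and Install lines are assumptions:

```ini
# rhc-server.socket (sketch)
[Unit]
Description=RHC server varlink socket

[Socket]
ListenStream=/run/rhc/com.redhat.rhc
SocketMode=0660

[Install]
WantedBy=sockets.target

# rhc-server.service (sketch)
[Unit]
Description=RHC server daemon
Requires=rhc-server.socket
After=rhc-server.socket

[Service]
ExecStart=/usr/libexec/rhc/rhc-server
RuntimeDirectory=rhc
RuntimeDirectoryMode=0755
RuntimeDirectoryPreserve=yes
ReadWritePaths=/run/rhc
StandardOutput=journal
StandardError=journal
```

RuntimeDirectoryPreserve=yes is the directive that keeps /run/rhc (and therefore the socket file) in place when the service stops, which matches the /run/rhc deletion problem reported earlier in the review.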
…ming conventions * Card ID: CCT-1753 Reorganize the rhc-server code structure and rename the varlink interface files to follow the fully qualified naming convention used in varlink ecosystems. Also move the backend implementation from internal/rhc-server to cmd/rhc-server. No functional changes; this is purely a structural refactoring. Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
Add comprehensive unit tests for the Backend struct and its Test method in cmd/rhc-server. Tests cover various input scenarios. Signed-off-by: mjcr99 <macano@redhat.com> Assisted-by: Claude Code
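Unit tests for a method like Backend.Test are typically table-driven in Go. This sketch is written as a standalone program for illustration rather than a _test.go file; echoTest stands in for the real method, with only the "Echo from rhc-server:" prefix taken from the earlier commit message:

```go
package main

import "fmt"

// echoTest mirrors the hypothetical Backend.Test behaviour: it returns
// the input prefixed with "Echo from rhc-server:". The real method
// lives in cmd/rhc-server.
func echoTest(input string) string {
	return "Echo from rhc-server: " + input
}

func main() {
	// Table-driven cases covering various input scenarios, in the
	// style of the unit tests this commit describes.
	cases := []struct{ name, in, want string }{
		{"simple", "hello", "Echo from rhc-server: hello"},
		{"empty", "", "Echo from rhc-server: "},
		{"unicode", "žluťoučký", "Echo from rhc-server: žluťoučký"},
	}
	for _, c := range cases {
		if got := echoTest(c.in); got != c.want {
			panic(fmt.Sprintf("%s: got %q, want %q", c.name, got, c.want))
		}
	}
	fmt.Println("all cases passed") // all cases passed
}
```

In the repository itself these cases would live in a TestBackend_Test function using the testing package, so that go test ./cmd/rhc-server runs them.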
Force-pushed from 6749c23 to a3fa20e (Compare)
pkoprda
left a comment
Looks good from my side, I will let @jirihnidek and @m-horky finalize the review :)
Description
Hi team,
This PR implements a preliminary version of the varlink API we will need to communicate with the rhc-server daemon. The daemon itself is also implemented as a first approach, providing very basic functionality to test the communication.
Testing
I created a custom package from the branch on RHEL 10.1; the following dependencies are required for a working environment:
From the repository folder, run:
Install the custom package:
sudo dnf install /var/lib/mock/fedora-43-x86_64/result/rhc-0.3.8-1.fc43.x86_64.rpm
We can now enable the socket:
sudo systemctl enable --now rhc-server.socket
The socket now shows as active, but the rhc-server daemon is not:
Then we can send a request to the socket: rhc-server will start automatically, we will get a response, and the server will keep listening for further calls:
The service can be safely stopped, and the socket remains:
The socket can be reconnected, and the service will start automatically again.