Replies: 5 comments 2 replies
-
Great initiative, always love to peer into how other teams handle this! 👍 ⭐ We (Furway) keep our deployment process simple and cost-effective while ensuring consistency between development and production.
CI/CD Workflow:
Automation & Deployment: On our Ubuntu server, a lightweight
With this setup, everything runs smoothly on an $8-12/month server. There is no need to add complexity unless it is truly necessary. 😸 I do not see any immediate improvements needed for our current needs 💭
-
At Finfur Animus we have main and dev branches with a GitHub workflow that automatically deploys to the production or test server depending on which branch is committed to. We should probably think about ways to make any solutions we come up with here easy to deploy and easy to pick up. We want any software to be deployable with as little friction as possible. We could aim to serve 90% of cases and offer some ways to fine-tune things for the remaining 10%. GitHub workflows are nice since they react to branch changes, but it might be hard to come up with a generalized workflow that works for many solutions, so maybe we should focus on providing a set of simple commands to be used in that workflow so anyone can easily get the build out and then add their own deployment, though we could also provide some examples.
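To make that concrete, here's a rough sketch of what such a branch-based deploy workflow could look like; the secret names, hosts, and the `deploy.sh` script are placeholders for illustration, not our actual setup:

```yaml
# Hypothetical workflow: deploy to the test or production server
# depending on which branch was pushed.
name: deploy
on:
  push:
    branches: [main, dev]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Pick the target server based on the branch that triggered the run.
      - name: Select target server
        run: |
          if [ "${GITHUB_REF_NAME}" = "main" ]; then
            echo "DEPLOY_HOST=${{ secrets.PROD_HOST }}" >> "$GITHUB_ENV"
          else
            echo "DEPLOY_HOST=${{ secrets.TEST_HOST }}" >> "$GITHUB_ENV"
          fi
      # The "simple command" that produces the build; each project could swap
      # in its own deployment step here.
      - name: Build and deploy
        run: ./deploy.sh "$DEPLOY_HOST"
```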
-
Eurofurence has a Helm chart here on GitHub, which is deployed via Argo CD into a self-hosted k8s cluster. (I've changed this into a proper new comment instead of a reply...)
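For anyone unfamiliar with that pattern, here's a generic sketch of an Argo CD Application that deploys a Helm chart from a Git repo; the repo URL, chart path, and namespaces are placeholders, not the actual Eurofurence setup:

```yaml
# Hypothetical Argo CD Application pointing at a Helm chart in a Git repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: registration
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-helm-chart
    targetRevision: main
    path: charts/registration      # path to the chart inside the repo
    helm:
      valueFiles:
        - values-production.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: registration
  syncPolicy:
    automated:
      prune: true       # remove resources deleted from the chart
      selfHeal: true    # revert manual changes in the cluster
```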
-
Just sharing my company setup:
For local:
-
Fanime ConOps (I'm not involved with the rest of Fanime Services):
Logging: this is essentially a simple web interface to a database that is basically forms.
Radio Checkout: this is ConRAM, which I rewrote a few years ago and have been slowly adding features to as requested and as I have time (installation instructions are in the repo README). ConRAM is also used at other cons. Some deploy it locally. I've deployed it for BLFC as a DigitalOcean App with an external Postgres DB. This is basically a 1-click deploy.
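For reference, a rough sketch of what a DigitalOcean App Platform spec for that kind of deploy can look like; the repo, region, port, and database URL below are placeholders, not the real BLFC config:

```yaml
# Hypothetical App Platform spec: one web service built from a GitHub repo,
# pointed at an external Postgres database via an env var.
name: conram-example
region: sfo
services:
  - name: web
    github:
      repo: example-org/conram
      branch: main
      deploy_on_push: true     # redeploy automatically on push to main
    http_port: 8080
    instance_size_slug: basic-xxs
    instance_count: 1
    envs:
      - key: DATABASE_URL
        value: postgres://user:pass@external-host:5432/conram
        type: SECRET
```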
-
This came up in tonight's general meeting as an open question/spec gathering idea: how is everyone deploying and running their current software, and what is the cost for a basic running instance? How does a prod environment differ from local/dev deploys? What are things you'd like to improve about your setup?
For ubersystem, we have a Dockerfile that handles building the app and a corresponding docker compose file to orchestrate the app's pieces (web, redis, db, rabbitmq, celery-beat, and celery-worker being the core app). These are all that's required to develop locally, as the default docker-compose file mounts your local code for the app so you can just restart containers to update the server. More details for exactly how to deploy locally are in our developer documentation.
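As a rough illustration (this is an abbreviated sketch, not our actual compose file; the images, ports, and commands are placeholders), the shape of that compose setup is something like:

```yaml
# Sketch of a compose file with the core pieces described above; the web and
# celery services build from the local Dockerfile and mount the local code.
services:
  web:
    build: .
    volumes:
      - ./:/app              # mount local code so restarting the container picks up changes
    ports:
      - "8282:8282"
    depends_on: [db, redis, rabbitmq]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devpassword
  redis:
    image: redis:7
  rabbitmq:
    image: rabbitmq:3
  celery-worker:
    build: .
    command: celery -A app worker
    depends_on: [rabbitmq, db]
  celery-beat:
    build: .
    command: celery -A app beat
    depends_on: [rabbitmq]
```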
From there, MAGFest has a CI integration that relies on a DOCKER_BUILDS GitHub variable to define what branches should get built and what plugins to include for each event, then pushes those images to ghcr.io (e.g., we want a `magwest` tag that is built from the `main` branch in the main repo and includes the `main` branch from the magwest plugin repo). We orchestrate our config in Terraform using a separate repo and trigger our builds from Terraform (there's also an action that triggers a Terraform build if you update the `main` branch of the config repo itself). We use this repo for the app config (controlling business logic per event) and also to configure things like how many web shards to run per server. Terraform takes this and puts it on AWS. We have staging servers that run pretty much all year in addition to actual event servers, plus archival event servers, so our monthly server costs are pretty significant, but that means each MAGFest event can test features year-round and look at the past 3 years of data.
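As a very rough sketch of that build pattern (not our actual workflow; the JSON format of the variable and the tag names are invented for illustration), a matrix build driven by a repository variable and pushed to ghcr.io could look like:

```yaml
# Hypothetical workflow: build one image per entry in a DOCKER_BUILDS-style
# repository variable and push it to ghcr.io.
name: build-images
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    strategy:
      matrix:
        # e.g. DOCKER_BUILDS = '[{"tag": "magwest-example", "branch": "main"}]'
        include: ${{ fromJSON(vars.DOCKER_BUILDS) }}
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ matrix.branch }}
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ matrix.tag }}
```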
Differences between local and prod are minimized thanks to the Docker setup, so generally it's just that prod deploys usually offload pieces of the app out of Docker containers and into, e.g., RDS, plus general prod server things like setting up SSL. There is also the question of whether you build your own local Docker image on the server or pull down an image. Building on the server is easier but seems to be pretty intense and makes, e.g., AWS micro instances unresponsive if you try to do it.
For that reason, a really basic AWS deploy costs about $25/month since it needs a t4g.medium; I'd really like to be able to reduce that so smaller cons can use the software without spending three figures a year just on server costs. Also, MAGFest's system isn't particularly portable or easy to replicate for other events, and it would be nice if there was a way for other events to more easily set up their own deploys, particularly ones that have seamless restarts/updates, multiple server shards, and "I don't have to SSH into a server to run a deploy"-ability (MAGFest ticks all these boxes, other events running ubersystem don't).
I think I've covered everything -- I'm not actually very good at devops/server admin so if I'm missing any details that are useful let me know. And of course please contribute with your own setups!