Self Hosting: Docker Compose setup and flow #640
My attempt at implementing Issue #627.
I'm for sure missing some stuff, so please don't hesitate to mention it.
The `force_ssl` environment configs are based on this discussion.

@Radu-C-Martin nice start here!
I think the goal for this issue will be to merge an "all in one" solution that is well tested and validated across the entire self-hosting flow, which will include:
`docker.md` document here - https://github.com/maybe-finance/maybe/tree/main/docs/self-hosting

Any thoughts on how we can test and validate amongst the community? I'd imagine we'll have quite a few different types of setups to run through.
I'm going to mark this pull request as a draft and would encourage any other self-hosters to collaborate on this branch to get this to completion. Would love to hear more thoughts around:
I played around with this some more.
I adjusted the deployment workflow to tag not only the complete version, but also a major.minor tag and a major-only tag (this probably would only make sense if breaking changes only happen in major versions?).
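Assuming the workflow uses `docker/metadata-action` (as the tag snippets later in this thread suggest), that fan-out might look roughly like this sketch (patterns only, not the exact workflow config):

```yaml
tags: |
  type=semver,pattern={{version}}          # full version, e.g. 1.2.3
  type=semver,pattern={{major}}.{{minor}}  # major.minor, e.g. 1.2
  type=semver,pattern={{major}}            # major only, e.g. 1
```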
I also added a step to prune the untagged images when deploying, since creating a new image for every commit will generate quite a few images :)
Additionally, I updated the documentation a bit to reflect the Docker deployment. Of the commercial VPS providers, I could only test Linode, but it works well there.
I left the references to my images in the docker-compose file for now, so that people can play around with that and suggest changes.
One thing to note is that the `latest` tag follows the most recently tagged commit. So, for example, if you first tag a `v2.1.2` and then a fix for `v1.5.6` or something, the `latest` tag will point to `v1.5.6`. But in that case we would probably need more logic in the workflow anyway, so tell me if that's something you want me to consider :)

Coming along nicely! Would love to hear from additional self-hosters on their own use-cases and review of this flow.
One question, why do we need redis here?
On the "Build and push Docker image" step (quoting the tag config):

> type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
> type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v') }}

It would be nice to build a Docker image for arm64.
That's probably more a question for the core maintainers, since that could bring some additional overhead :D
I based the docker-compose on the devcontainer one, which has redis as well, but as far as I can see it's not actually used?
We won't need Redis for this first pass. It is not being used within the app currently.
For this first workflow, I think we should focus on keeping things as simple and opinionated as possible. Once we have a workflow merged and in place, we can introduce subsequent PRs to enhance the flow with additional architecture targets, etc.
The `RUN_DB_MIGRATIONS_IN_BUILD_STEP` variable is specific to the Render deployments, so I believe it can safely be removed here.

It's working perfectly fine on arm -> https://github.com/DorianMazur/maybe/actions/runs/8822599388/job/24221266397
@DorianMazur couple of questions:
@zachgoll Maybe we could reduce build time by using `macos-latest` runners (arm64 architecture), because currently my workflow uses `ubuntu-latest`, which is x86_64.

I am working on arm64: I have multiple home lab servers (each of them using 5W of power) on arm64, and my MacBook is also arm64. That's why I need this, and probably not only me :)
I just tested this docker compose and it's not working for me :/

EDIT: It's working now, I removed the special characters from my password. It seems they are not supported.
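For anyone who hits this: if the password ends up inside a connection URL (e.g. `DATABASE_URL`), special characters usually need to be percent-encoded rather than removed. A quick sketch (the example password is made up):

```shell
# Percent-encode a password so it can be embedded in a postgres:// URL.
# 'p@ss:w/rd' is a made-up example value.
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss:w/rd'
# prints: p%40ss%3Aw%2Frd
```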
Gotcha, had forgotten about the M1 chip and Docker compatibility. I'm guessing we'll have a decent number of self-hosters running this on a Mac, so it's probably worth supporting.
That said, I think we either need to get that build time significantly reduced or we could change the frequency in which we're running these builds.
I was originally thinking we'd build and publish on every commit to `main`, but it may be more practical to only build and publish for each new release (which we're eventually going to hit a weekly/bi-weekly cycle on).

@zachgoll Out of curiosity, why are you worried about build time on a public repo? GitHub Actions are free on public repos :D
Anyway, I can reduce build time by using macOS runners. I can create a PR in the future, or maybe add some commits to this one.
Not worried about cost; mostly want to be conscious of the overall feedback loop of making modifications to this flow in the future.
If we add this architecture, it looks like we can use the `runs-on: ${{ matrix.os }}` syntax (reference) to speed things up.

Looks good, can't wait to see this merged! Just a couple of questions/comments.
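For reference, the matrix idea might look something like this (runner/platform pairings are illustrative only, not a tested config):

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - os: ubuntu-latest   # x86_64 runner
            platform: linux/amd64
          - os: macos-latest    # arm64 runner, per the suggestion above
            platform: linux/arm64
    runs-on: ${{ matrix.os }}
```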
Used by rails directly https://api.rubyonrails.org/v7.1.3.2/classes/Rails/Application.html#method-i-secret_key_base
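For anyone wondering how to generate that value without a Rails checkout: Rails only needs a long random secret string, so something like this works (a sketch, equivalent in spirit to `bin/rails secret`):

```shell
# Generate a 128-character hex string suitable for SECRET_KEY_BASE.
SECRET_KEY_BASE="$(openssl rand -hex 64)"
echo "length: ${#SECRET_KEY_BASE}"
# prints: length: 128
```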
i assume this will change to whoever owns the image?
unless you're expecting devs/contributors to use this file to run it locally, i'd rename it to something like `docker-compose.example.yml`. this might be a little too prescriptive, though anyone who knows what they're doing will likely just copy the contents into a `docker-compose.yml` and adapt it
That's a good point; we will not be supporting a dev-optimized compose file since that is already covered by `.devcontainer`. It should be made clear that this file should not be used for development.

We should also add a noticeable comment at the top of the file with some links to the documentation on how to get it running for self-hosting.
For some additional context, this was originally introduced as an alternative to using `credentials:edit` for "one-click" Render deploys. My thinking was that generating this key and saving it in an env var would be easier for self-hosters than going through the process of securing `config/master.key`. This may or may not be the best way to handle the Docker self-hosting flow; suggestions around this are welcome!

36e8b14486/render.yaml (L29-L34)

Generally, self-hosters who are on the more "advanced" side will skip guides like this altogether, so I don't think we need to worry about confusion on that end.
IMO, the goal for these instructions should be prescriptive enough so that someone with minimal experience with self-hosting and Docker can successfully get the project running on a fresh VPS.
This is where the one-liner script I was suggesting could come in handy. Steps like "Create a new folder", "Copy the docker compose file", and "Copy the .env.example file" could all be great things to throw into a script like that. That way, we could simplify these instructions to something like:
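As a sketch of what such a script could automate (file names and URLs here are placeholders, not the final layout):

```shell
#!/usr/bin/env bash
# Hypothetical setup-script sketch; paths and URLs are placeholders.
set -u
APP_DIR="maybe"
REPO_RAW="https://example.com/maybe-finance/maybe/main"  # placeholder URL

# 1. Create a new folder
mkdir -p "$APP_DIR"

# 2. Copy the docker compose file (download failures are non-fatal in this sketch)
curl -fsSLo "$APP_DIR/docker-compose.yml" "$REPO_RAW/docker-compose.example.yml" || true

# 3. Copy the .env.example file
curl -fsSLo "$APP_DIR/.env" "$REPO_RAW/.env.example" || true

echo "Setup folder created at ./$APP_DIR; edit .env, then run: docker compose up -d"
```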
Unless I'm not thinking about this correctly, we'd set this to Maybe's latest published image.
@Radu-C-Martin I'm guessing this was just here so you could test it on your fork correct?
yes, originally I even pre-filled it with the maybe-finance image name, but then you said you'd like people to test it first, which is a fair point 🤷
Yes IMO it absolutely is 👍🏼
i'm a big fan of one-liner scripts (over prescriptive setup instructions) for those less savvy users who will just copy the instructions anyways. makes it less likely they'll mess something up when copying instructions.
i wonder what the split is between "savvy users who understand docker and compose" and "users running on a cheap vps/rpi" 🤔
Am I missing something, or is there not really a docker image actually available yet?
Not available yet, it will be available once this gets merged. This PR also includes the publish workflow 😅
If you want to check if everything works, you could use the `ghcr.io/radu-c-martin/maybe` image (cf. my fork).

Gotcha, thanks!
@Radu-C-Martin I'll do some local testing on all of this today/tomorrow and we'll get it merged!
I think we can remove `HOSTING_PLATFORM`. The original reason for introducing this variable was in relation to auto-upgrades, but those assumptions are no longer relevant based on the iterations we've gone through here.

Can we consolidate all of this into the `self-hosting/docker.md` file? This file (`docs/self-hosting.md`) could be as short as:

And then `self-hosting/docker.md` would provide these sections:

Could be wrong, but I don't think GitHub charges anything for storage on GHCR for public repos? Either way, I think we can remove this final step here for simplicity.
Can we remove these two steps? It looks like the `ubuntu-latest` runner already has Docker Buildx 0.14.0 installed: https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md#tools
@Radu-C-Martin had some time this afternoon to thoroughly test this. I was able to get it all running on a Digital Ocean droplet, so I think the sample compose file looks good.
Just left some comments around config and documentation that I think we should address before merging this in.
I'm thinking the only two tags that we'll need to support for now are the "latest" and "release" tags. By using default configs for `metadata-action`, I believe we can simplify to this:

In addition to this, we will need to hide both "auto updates" and "provider settings" in the self-hosted settings, since these do not apply to an app running via docker compose:

dc024d63b0/app/views/settings/hostings/show.html.erb (L6-L48)

On the `environment:` section of the compose file: we should probably add `RAILS_ENV=production` here too.

In my experience, the `latest` tag always points to the latest release. Do you plan for the main branch to always be production-ready?

In theory (since we're using trunk-based development), yes, `main` should always be production-ready. If it's not, the checks should fail and the Docker image should not be published. I would expect `latest` to represent the most recently published image, which in our case would always be the latest commit.

But there probably is some value in providing an alias for self-hosters who want to use the "latest release". What do you think of using the following tag scheme?

- `latest` - the latest published image (i.e. `main`)
- `stable` - the latest published release
- `[release_semver_tag]` - the semver tag of the release
- `[commit_sha]` - the commit sha of the docker image, so self-hosters can lock into a certain version

So something like:
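A sketch of how that scheme might map onto `docker/metadata-action` inputs (the `stable` condition and the image name are assumptions, not the merged config):

```yaml
- uses: docker/metadata-action@v5
  with:
    images: ghcr.io/maybe-finance/maybe   # assumed image name
    tags: |
      type=raw,value=latest,enable={{is_default_branch}}
      type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v') }}
      type=semver,pattern={{version}}
      type=sha
```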
On the `config.action_cable` settings in `Rails.application.configure`:

> # config.action_cable.url = "wss://example.com/cable"
> # config.action_cable.allowed_request_origins = [ "http://example.com", /http:\/\/example.*/ ]

I think the default assumption is that you're using a reverse proxy to terminate incoming HTTPS traffic.
I believe this should be:
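Reconstructing the suggestion from the rest of the thread, the `ports` entry would bind to the loopback interface, e.g.:

```yaml
ports:
  - 127.0.0.1:3000:3000
```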
so the Rails backend is not exposed to the internet without a reverse proxy in front of it
I don't think that's needed, and even then, if you want to stop it being exposed to the host (not the internet, since that would require port-forwarding), just don't bind the ports and use a reverse proxy container.
We've been back and forth on this one quite a bit, but as I was going back through prior discussions, I found this comment, which IMO is the clearest solution offered for this:
https://github.com/maybe-finance/maybe/issues/308#issuecomment-2081522347
Resolving this so we consolidate into one comment.
@gantoine I think your vision of how this works is incorrect, especially if you're talking about servers that are exposed to the internet directly.

`3000:3000` is equivalent to `0.0.0.0:3000:3000`, which means "listen on port 3000 on ALL interfaces". So if your server has a network interface that's exposed to the internet (literally any VPS), then it will accept incoming connections on port 3000 from virtually anyone. That's not a safe/sane default.

That's another assumption that's going to make it harder for anyone interested in self-hosting: your reverse proxy container will listen on ports 80 and 443, and you will have to put any/all subsequent containers in the same Docker network and configure the reverse proxy serving your Maybe instance to serve your other containers, or use some convoluted setup involving Traefik and labeling stuff.
I, for example, have Caddy on the host doing all the reverse proxying to all the containers I have on my server, like Immich, Maybe, Nextcloud, and a ton of other stuff.
Just to confirm: you're optimizing for a self-hosting scenario here where you're running this app on your local machine and never want it exposed to the internet?
In your reverse-proxy setup are you assuming you'd run a container (e.g. Traefik), or would you be setting up something like Nginx on the host machine instead?
@zachgoll Me? No, I'm optimizing for any sane self-hosting solution: there's a reverse proxy in front of the Rails application. If we put a reverse proxy in the Docker Compose configuration and the self-hosting user has some other services on the server, then they have two options:

- modify the `docker-compose.yml` so the reverse proxy doesn't listen on ports 80 and 443, and point their preferred reverse proxy to... reverse-proxy to our reverse proxy

Maybe there's some convoluted setup leveraging Traefik, but that's not what we're aiming for.
What we could do is have an additional `docker-compose.override.yml` sample which has a reverse proxy configured inside. Then we can point everyone who doesn't particularly care about those details to use it alongside the default `docker-compose.yml`. WDYT?
@Quintasan Your comment is perfectly reasonable, it's just that I've never seen an example docker-compose bind the network interface (in, like, any self-hosted software I use or have tried).

I'm not thinking about the VPS case; I'm more so thinking about the user who's first getting started with self-hosting on a small server/old laptop/RPi at home.

That's how I have it set up at home, and it works pretty well 🤷🏼
A machine running in my home, not exposed to the internet, accessible via tailscale or cloudflare tunnel. This is the most common setup for users getting into self-hosting, and the one most recommended by reddit, blog posts and youtube channels (which is likely how they're learning to self-host).
@Quintasan @gantoine thanks for this additional detail, this is helpful.
While I think both of you are probably stronger than me on the config details here, here's the general direction I was thinking:
- `docker-compose.example.yml` - this should represent a setup that intersects the 1) most common and 2) most "beginner-friendly" setup. It should nearly be an "out of the box" copy/paste solution, or as close as we can get.
- `docker-compose.[specific-setup].example.yml` - we could introduce variations of the default as sample files, OR add instructions to our documentation.

@gantoine Fair enough. I'm not really going to say that "everyone does it, so it's a good idea" is ever going to convince me, especially once you realize how much automated traffic scanning is going on nowadays. Exposing a web application directly is usually a bad idea, considering that webservers like Puma, WEBrick, etc. are not well-equipped to serve static assets.

@zachgoll I think we should provide a `docker-compose.yml` in the repository and a `docker-compose.override.yml` example in the documentation. By default, Docker Compose uses both `docker-compose.yml` and `docker-compose.override.yml` when you invoke `docker compose up -d`, so we can support two scenarios out of the box with no convoluted tinkering required:

- use only the `docker-compose.yml` in the repository and configure the reverse proxy on their own
- "I don't care, just let me run this" - in this case, we would ask the self-hoster to additionally create a `docker-compose.override.yml`, which would set up the reverse proxy for them, so literally no work would be required. Just copy both files, run `docker compose up -d`, and you're ready to go.

In both scenarios, binding the backend port to the loopback interface (`127.0.0.1`) only prevents external connections from accessing the backend directly on port 3000. If someone actually wants to expose the backend directly, then they can use `docker-compose.override.yml` to override our default binding.

I'm just a self-hosting bystander here, but I would tend to agree with @gantoine. I've never seen Docker Compose setups specifically bind the network interface. And even on a VPS setup, you would need to specifically allow that port in the firewall; at least that's how it has been for any fresh box I have installed.

I also think that if most people are using a Docker setup, there's no real need to "dumb it down". They should already understand the risks of simple networking concepts like ports, and how to securely access your apps over the internet. I think the Render deploy option serves that purpose well.
A very, very simplified example:
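A minimal sketch of what that override file could look like, assuming the app service is named `app` and using Caddy's built-in `reverse-proxy` subcommand (the service name and domain are placeholders):

```yaml
# docker-compose.override.yml (sketch; service name and domain are assumptions)
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    command: caddy reverse-proxy --from maybe.example.com --to app:3000
```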
If you `docker compose up -d` with those two files, then you end up with a backend container and a reverse proxy for it with automatic HTTPS. We can supply a custom Caddyfile, OR use Traefik as the reverse proxy if we need a more complicated setup.

@csmith1210 All OVH and Hetzner boxes I have had so far accept all connections on all ports, unless you explicitly configure them not to. Assuming your boxes don't accept all connections (which is a good thing), then binding to `127.0.0.1:<port>` changes exactly nothing in your case, while being a sane default for all boxes which don't have a firewall. I remain unconvinced that binding to `0.0.0.0` and exposing the backend container directly to the internet is a good idea.

I don't think we're dumbing anything down here. We're trying to provide sane and secure defaults while giving people the option to "just run it" without any tinkering. If someone wants to tinker with this setup, then they have all the escape hatches (`docker-compose.override.yml` and the ability to bring their own reverse proxy).

This is a fair point, and since it personally wouldn't affect me (as I know how to edit a compose file), I think it's fine to set it that way in the default. 👍🏼
As long as the tags are semver-compatible, the existing tag scheme makes it very easy to select an upgrade schedule using automatic image pulls. One can have `maybe:latest`, which always auto-upgrades (and should not be done), while one can use `maybe:2` or `maybe:2.3` and get just the patch upgrades.

That's how I've seen it done in other projects. It's intuitive, and it also works well with Renovate/Dependabot. But I guess that would impose stricter adherence to semver versioning to be totally useful, so it's more for the core maintainers to decide. It can always be added later :D
I think @Quintasan's proposal here makes sense overall:
Still not 100% certain on the port binding (mostly because I've never seen or done this), but I'd rather be conservative to start and then make modifications (or allow the user to make those mods) if needed.
@Radu-C-Martin this looks good, thanks for the changes! The documentation and overall structure is good with me.
If I'm not mistaken, the only pending item we have left is the reverse proxy config and the "override" file mentioned by @Quintasan. I think this may be a good follow-up PR.
Unless there are objections, I think we can merge this as-is so we can start getting some images published to GHCR for self-hosters to start playing around with.
Hey, Zach! Thanks for merging my PR 🙂 I tried accessing the image and apparently I'm not authorized. I think there could be some settings missing in the organization settings? (This is what I found, though it could be out of date: https://github.com/orgs/community/discussions/26014; I've never had to deal with this before.) I see that the CI passes correctly, so I assume the image works for you at least?
@Radu-C-Martin thanks for the heads up, we'll get the org setting updated shortly.
All good now!