
Ensure Container is available at Domain Name before proceeding to Let's Encrypt Checks #19


Open
claytondaley opened this issue Nov 12, 2018 · 4 comments · Fixed by #23

Comments

@claytondaley
Contributor

claytondaley commented Nov 12, 2018

Your image is perfect for Amazon's Elastic Container Service (ECS) because it requires no local bindings/files. Unfortunately, it's hard to provision a static IP on ECS unless you use (and pay for) a load balancer.

I don't need/want to pay for a load balancer, so I must manually update my DNS (and wait for it to propagate) each time I deploy a new container. As a result, I'm running afoul of Let's Encrypt's rate limits, specifically the Failed Validation limit of 5 failures per account, per hostname, per hour.

Given that hard cap, I'd like to suggest adjusting the retry interval to something like minute 0, 1, 5, 15, and every 15 minutes after that (i.e. 30, 45, 60, 75). In theory, the attempt at minute 45 (and probably minute 60) will still hit the rate limit, since it's the sixth failure within an hour, but this schedule is a simple rule of thumb that is otherwise rate-friendly.
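For concreteness, here's a minimal sketch of that schedule in Python (pure illustration -- the `attempt_issue` callable and function names are hypothetical, not part of this image):

```python
import time

def retry_delays():
    """Yield seconds to sleep before each retry, producing attempts at
    minutes 0, 1, 5, 15, and every 15 minutes thereafter."""
    for gap_minutes in (1, 4, 10):   # gaps between minutes 0->1, 1->5, 5->15
        yield gap_minutes * 60
    while True:                      # minutes 30, 45, 60, 75, ...
        yield 15 * 60

def issue_with_backoff(attempt_issue, max_attempts=8):
    """attempt_issue is a hypothetical callable returning True on success."""
    if attempt_issue():              # attempt at minute 0
        return True
    for attempt, delay in enumerate(retry_delays(), start=2):
        if attempt > max_attempts:
            return False
        time.sleep(delay)
        if attempt_issue():
            return True
```

With `max_attempts=8`, this covers exactly the attempts at minutes 0, 1, 5, 15, 30, 45, 60, and 75 described above.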

EDIT: Per the discussion in #23, the long-term goal is to simulate the ACME check:

  • Create a small file in a random (but known) location
  • Ensure that a connection to the domain can be made and retrieves this file (possibly checking multiple times)

PR #23 (merged) is a first step in this direction, providing a simple check that a server (but not necessarily this one) responds with 200 to a call to the domain name. This issue has been left open to track potential improvements.
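As a rough illustration of that long-term goal, a self-check might look something like the following (the webroot path, the `DOMAIN` variable, and the URL layout are all assumptions for the sketch, not this image's actual behavior):

```python
import os
import secrets
import urllib.request

WEBROOT = "/usr/share/nginx/html"  # assumed webroot; not necessarily this image's path
DOMAIN = os.environ.get("DOMAIN", "example.com")

def self_check(retries=3):
    """Write a token file into the webroot, then fetch it via the public
    domain name to confirm requests for DOMAIN reach this container."""
    token = secrets.token_urlsafe(16)
    directory = os.path.join(WEBROOT, ".self-check")
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, token)
    with open(path, "w") as f:
        f.write(token)
    try:
        for _ in range(retries):  # "possibly multiple times"
            with urllib.request.urlopen(
                    f"http://{DOMAIN}/.self-check/{token}", timeout=10) as resp:
                if resp.read().decode() != token:
                    return False  # a server answered, but not with our token
        return True
    except OSError:  # covers connection refused, timeouts, HTTP errors
        return False
    finally:
        os.remove(path)
```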

@claytondaley
Contributor Author

claytondaley commented Nov 12, 2018

Come to think of it, it would be even better if you could guard against the issue entirely. Before starting the Let's Encrypt process:

  1. Make sure the DNS name resolves to an A record
  2. (configurable) Make a request against that domain to ensure it reaches the right machine (the instance making the request). This would guard against old A records that are no longer valid.

The second check needs to be optional as it's theoretically possible that the container won't be able to initiate outbound connections.
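A minimal sketch of those two checks follows. Here check 2 is approximated by comparing the resolved A records against the instance's own public IP rather than by making an HTTP request; how the instance learns that IP (e.g. from cloud metadata) is deployment-specific and left as an assumption:

```python
import socket

def get_a_records(domain):
    """Check 1: return the set of IPv4 addresses the name resolves to."""
    try:
        infos = socket.getaddrinfo(domain, None, socket.AF_INET)
        return {info[4][0] for info in infos}  # sockaddr tuple -> IP string
    except socket.gaierror:
        return set()

def preflight(domain, own_ip=None):
    """own_ip is this instance's public IP (hypothetical).
    Pass None to skip the optional second check."""
    records = get_a_records(domain)
    if not records:
        return False          # no A record yet -- DNS hasn't propagated
    if own_ip is not None:    # optional: guard against stale A records
        return own_ip in records
    return True
```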

@claytondaley
Contributor Author

claytondaley commented Nov 12, 2018

  • both checks need to be optional

If the server can't make an outbound connection, it might not be able to resolve a DNS name either. I think both of these situations are very rare -- I just try to anticipate the worst case.

@echohtp

echohtp commented Nov 13, 2018

I'm having a similar issue, and hitting Let's Encrypt rate limits without wanting to!

@claytondaley
Contributor Author

claytondaley commented Nov 13, 2018

To further refine my suggestion: this thread says Let's Encrypt uses Google's DNS servers, and this thread says they don't cache the requests.

  • If possible, you should run your DNS checks against Google's DNS (i.e. 8.8.8.8 or 2001:4860:4860::8888).
  • You can flush the Google DNS cache by completing the form here. Your debug/helper text for the DNS check phase could point users to this form to accelerate the change on Google's DNS.
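For example, with the third-party dnspython package (an assumption; the image could equally shell out to a tool like `dig @8.8.8.8`), the check against Google's resolvers might look like:

```python
import dns.resolver  # pip install dnspython

def a_records_via_google(domain):
    """Resolve A records using Google's public resolvers directly,
    bypassing whatever resolver the container is configured with."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8", "2001:4860:4860::8888"]
    try:
        answer = resolver.resolve(domain, "A")  # dnspython >= 2.0
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [rr.address for rr in answer]
```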

claytondaley added a commit to claytondaley/docker-nginx-ssl-proxy that referenced this issue Apr 25, 2019
…eding to let's encrypt verification (to avoid rate limits, see issue DanielDent#19).  Note that `dns-servers` flag would be ideal but is not available.
@DanielDent DanielDent reopened this Apr 26, 2019
@claytondaley claytondaley changed the title from "Handle Rate Limits Better" to "Ensure Container is available at Domain Name before proceeding to Let's Encrypt Checks" Apr 26, 2019