Too aggressive caching for soljson-latest.js in solc-bin #11572

Closed · cameel opened this issue Jun 22, 2021 · 4 comments · Fixed by ethereum/solc-bin#97

Comments

cameel (Member) commented Jun 22, 2021

This came up in juanfranblanco/vscode-solidity#256 (comment).

Some people are getting a stale version of soljson-latest.js from https://binaries.soliditylang.org. It's 0.8.5 rather than 0.8.6.

The file is served from an S3 bucket with a CloudFront CDN in front of it (the config is described in ethereum/solc-bin#47 (comment)). Binaries are cached because they are supposed to never change, but file lists (i.e. any file matching */.list.*) are exempt from caching.
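The actual CloudFront configuration isn't reproduced in this thread, but one common way to express such an exemption is to set Cache-Control metadata on the objects at upload time and let the CDN honour it. A minimal sketch with boto3, assuming the distribution's cache policy respects origin Cache-Control headers; the bucket name, commit hash, and header values are illustrative assumptions, not the real config:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-solc-bin"  # hypothetical bucket name

# Immutable release binaries: safe to cache aggressively, since a
# released soljson-v<version>+commit.<hash>.js never changes.
s3.upload_file(
    "bin/soljson-v0.8.6+commit.xxxxxxxx.js",  # placeholder commit hash
    BUCKET,
    "bin/soljson-v0.8.6+commit.xxxxxxxx.js",
    ExtraArgs={"CacheControl": "public, max-age=31536000, immutable"},
)

# Mutable files (the lists, and aliases like soljson-latest.js):
# force caches to revalidate on every request.
for key in ("bin/list.json", "bin/soljson-latest.js"):
    s3.upload_file(key, BUCKET, key, ExtraArgs={"CacheControl": "no-cache"})
```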

We should probably exempt soljson-latest.js and the other files whose content can change as well. The problem is that these files get a lot of traffic. For example, bin/soljson-latest.js received over 200k requests in the last month (98.93% of them served from cache; only 1350 were misses), amounting to 5 TB of traffic for this file alone, all but 30 GB of it from cache. It is the most downloaded file (though a few others are close).

So before doing that we need to check how much it would affect the cost. I think the impact might not be that big, since serving from an S3 bucket should be relatively cheap anyway, but it's better to check. If the impact turns out to be too big, we could instead invalidate the cache for these specific files after every update, in the GitHub Action responsible for mirroring solc-bin, rather than disabling caching for all accesses to them.
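A targeted invalidation from the mirroring action could look roughly like the sketch below (boto3; the distribution ID is a placeholder since the real one is not public, and the exact path list is an assumption):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "EXXXXXXXXXXXX"  # placeholder distribution ID

# Invalidate only the files whose content changes between releases,
# instead of disabling caching for them entirely.
response = cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {
            "Quantity": 2,
            "Items": ["/bin/soljson-latest.js", "/bin/list.json"],
        },
        # Any unique string; a timestamp keeps retried runs distinct.
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Id"], response["Invalidation"]["Status"])
```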

chriseth (Contributor)

This is about browser caches, right? (In that case we cannot really clear the cache.) If it's not about browser caches, I would really have expected better logic between S3 and CloudFront to detect changes in the files...

cameel (Member, Author) commented Jun 22, 2021

Not browser caches. It's the CloudFront cache, server-side.

Updating files in the S3 bucket is actually supposed to invalidate it automatically, but that can take up to 24 hours according to https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serving-outdated-content-s3/. If that's too long, we have to invalidate it manually or disable caching.
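One way to confirm that a stale copy is coming from the CDN rather than a local browser cache is to inspect CloudFront's response headers: it reports cache status in X-Cache and the object's time in the edge cache in Age. A quick sketch:

```python
import requests

resp = requests.head(
    "https://binaries.soliditylang.org/bin/soljson-latest.js",
    allow_redirects=True,
)
# "Hit from cloudfront" means the edge cache answered without
# consulting the S3 origin; "Miss from cloudfront" means it did.
print(resp.headers.get("X-Cache"))
# Seconds the cached copy has been sitting at the edge.
print(resp.headers.get("Age"))
# The ETag can be compared against the expected release's object.
print(resp.headers.get("ETag"))
```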

I guess in this case it's also that people were very eager to skip 0.8.5 and get 0.8.6 immediately; such a delay might not be a big deal for future updates. Still, I think we might be better off invalidating the cache in the sync action anyway. We could then enable caching for the lists too, and these also get a lot of traffic (though still less than the binaries).

chriseth (Contributor)

Yep, invalidating the cache during sync sounds like a good idea, but I would not disable the cache. Also, people would be better off using the list and then fetching the correct version instead of relying on soljson-latest. By the way, is soljson-latest a file or a "temporary redirect"? Using a temporary redirect might also help.
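If soljson-latest.js were replaced with a redirect, one way to do it on S3 is a zero-byte object carrying website-redirect metadata. A sketch with boto3; this assumes CloudFront points (or would point) at the bucket's website endpoint, since the redirect header is ignored on the plain REST endpoint, and the target file name uses a placeholder commit hash:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-solc-bin"  # hypothetical bucket name

# Replace the full copy with a zero-byte object that redirects.
# S3 serves this as a redirect only from the bucket's *website*
# endpoint, not from the REST endpoint.
s3.put_object(
    Bucket=BUCKET,
    Key="bin/soljson-latest.js",
    Body=b"",
    WebsiteRedirectLocation="/bin/soljson-v0.8.6+commit.xxxxxxxx.js",
)
```

Note that S3 issues a permanent (301) redirect here, so a true temporary (302) redirect would have to be generated elsewhere, e.g. at the CDN layer.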

cameel (Member, Author) commented Jun 23, 2021

> By the way, is soljson-latest a file or a "temporary redirect"? Using a temporary redirect might also help.

It's a file. All symlinks are turned into copies by the s3 sync action. Back when I implemented it, we decided that was good enough and left it at that.

Invalidation and making it a symlink will be a similar amount of work (i.e. not much code, but as always, testing/debugging CI stuff is annoying). I can actually do both at the same time.
