
Considerations for concurrent requests #11

Closed
wolfgang42 opened this issue Feb 9, 2022 · 6 comments

Comments

@wolfgang42

wolfgang42 commented Feb 9, 2022

The draft says:

If the request is retried, while the original request is still being processed, the resource server MUST reply with an HTTP 409 status code[...]

I'm wondering what the considerations were behind this requirement. In particular:

  • Why must this be a "Conflict" in particular rather than, say, "423 Locked"? (I'm probably just missing some subtlety of HTTP status codes here.)

  • What should a client do if they get this response back? The draft says “Clients MUST correct the requests before performing a retry operation,” but I’m not clear on what sort of correction would be expected here before retrying. (For example, I'm imagining a scenario where the client loses its connection to the reverse proxy, but the backend is still busy processing the request. When the client retries, they might get 409 if the server hasn't happened to finish the original request yet.)

  • Why MUST the server detect this condition specially, rather than treating it as a replay? Are there some special considerations that are involved in concurrent requests? In particular I'm thinking of:

    • A single-threaded server, which may not even have a way of detecting concurrent requests without special handling.
    • A server which processes write requests in a queue, or uses a database with serializable transactions. In such cases concurrent requests would be idempotent, but the concurrency would not be detected without extra checks.

    In either case the concurrency would not seem at first glance to pose any problems; is there a scenario here that I'm missing?

Thanks for your work on this spec, it looks very promising so far.
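To make the behavior under discussion concrete, here is a minimal sketch of how a server might track idempotency keys so that a retry arriving while the original request is still in flight gets a 409, while a later retry replays the saved response. This is an illustrative in-memory model only; the class and method names (`IdempotencyStore`, `begin`, `finish`) are hypothetical and not taken from the draft.

```python
class IdempotencyStore:
    """Illustrative in-memory tracking of idempotency keys.

    A key is "in_flight" while the original request is being processed,
    and "done" once a response has been saved for replay.
    """

    def __init__(self):
        self._state = {}  # key -> ("in_flight", None) or ("done", response)

    def begin(self, key):
        """Return None if the caller should process the request normally,
        or an (http_status, body) pair that must be returned instead."""
        entry = self._state.get(key)
        if entry is None:
            self._state[key] = ("in_flight", None)
            return None  # first attempt: process normally
        status, response = entry
        if status == "in_flight":
            # Concurrent retry: original request still being processed.
            return (409, "original request still being processed")
        # Completed earlier: replay the saved response.
        return (200, response)

    def finish(self, key, response):
        """Save the response so later retries can be replayed."""
        self._state[key] = ("done", response)


store = IdempotencyStore()
assert store.begin("k1") is None              # first attempt proceeds
assert store.begin("k1")[0] == 409            # concurrent retry rejected
store.finish("k1", {"id": 42})
assert store.begin("k1") == (200, {"id": 42})  # later retry replays response
```

Note that the single-threaded and queue-based servers described above would need exactly this kind of extra bookkeeping to distinguish "in flight" from "done", which is part of what the question is probing.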

@dret
Collaborator

dret commented Feb 10, 2022 via email

@LukeMathWalker

I was reading through the draft and I wondered: wouldn't "waiting", on the server-side, be an acceptable strategy to deal with concurrent requests? I.e. wait until the lock is released and then return the same saved response.

This would provide better ergonomics when there is an expectation that clients might not be sophisticated enough to handle a different status code (e.g. 409) gracefully.
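The waiting strategy suggested here can be sketched with a condition variable: a retry blocks until the original attempt completes, then receives the saved response instead of an error code. This is a hypothetical illustration (the `WaitingStore` name and its methods are invented for this sketch, not part of the draft).

```python
import threading

class WaitingStore:
    """Illustrative 'wait' strategy: a concurrent retry blocks until the
    original request finishes, then returns the saved response."""

    def __init__(self):
        self._cond = threading.Condition()
        self._responses = {}    # key -> saved response
        self._in_flight = set()

    def begin(self, key):
        """Return None if the caller should process the request; otherwise
        wait for the original attempt and return its saved response."""
        with self._cond:
            if key in self._responses:
                return self._responses[key]   # already done: replay
            if key not in self._in_flight:
                self._in_flight.add(key)
                return None                   # first attempt: process it
            # Concurrent retry: wait instead of returning 409.
            while key not in self._responses:
                self._cond.wait()
            return self._responses[key]

    def finish(self, key, response):
        """Save the response and wake any waiting retries."""
        with self._cond:
            self._in_flight.discard(key)
            self._responses[key] = response
            self._cond.notify_all()


store = WaitingStore()
assert store.begin("k") is None               # first attempt proceeds
results = []
t = threading.Thread(target=lambda: results.append(store.begin("k")))
t.start()                                     # retry blocks in begin()
store.finish("k", "saved")                    # original completes
t.join()
assert results == ["saved"]                   # retry got the saved response
```

The trade-off raised later in the thread is visible here: each blocked retry holds a server thread (or connection) for the full duration of the original request, which is why waiting can be expensive.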

@jayadeba
Collaborator

jayadeba commented May 6, 2022

@LukeMathWalker waiting is an expensive strategy :-) That said, if you implement the wait approach (which is a server-side implementation choice), you would be returning a 2XX code, and that is fine as per the spec.

@jayadeba
Collaborator

jayadeba commented May 6, 2022

@wolfgang42 @dret thanks for the discussion. I softened the language on specific error codes from MUST to SHOULD and removed the requirement that the client correct the request for 409 (good catch, thank you) in the PR: https://github.com/ietf-wg-httpapi/idempotency/pull/13/files#diff-63f88e5bb8aa74d91f2757586789b64a30958730c720c7850b542867eb4297db. I'll be resolving/closing this issue.

@jayadeba jayadeba closed this as completed May 6, 2022
@LukeMathWalker

@jayadeba: waiting is definitely expensive. It is not always a suitable strategy, but it does offer a good client-side experience when affordable.

@slinkydeveloper
Contributor

@jayadeba I think your PR relaxing MUST to SHOULD for concurrent requests missed a sentence here: #30
