QUIC: To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets it sends to three times the cumulative size of the packets that are received and attributed to the connection. #895
@Sashan, do some digging and place notes here.
The text from the RFC:
Based on the structure of ch_tx in the QUIC implementation, I believe we meet this criterion:
If the remote end sends us a connection close event, then we enter the draining state and immediately stop sending any frames. If we initiate the close, then we may send a remaining packet, but that would be under the 3x limit imposed by the RFC. If we receive another packet in the interim, then ch->conn_close_queued gets set to one, but on the next transmit it would be cleared to zero again, leaving us once again below the 3x received-size limit. @Sashan could you please check me on this to be sure I'm not missing anything?
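For illustration only, not the actual code: the pattern I'm describing amounts to a flag that is set on receive and cleared on transmit, so we emit at most one CONNECTION_CLOSE reply per packet received while closing. All names below are hypothetical.

```c
/* Illustrative sketch of the "one close reply per received packet"
 * pattern described above; not the real ch_tx code. */
typedef struct closing_state_st {
    int conn_close_queued; /* set on RX while closing, cleared on TX */
} CLOSING_STATE;

/* RX path: a packet arrived while we are in the closing state. */
static void closing_on_rx(CLOSING_STATE *cs)
{
    cs->conn_close_queued = 1; /* permit exactly one more close reply */
}

/* TX path: returns 1 if a CONNECTION_CLOSE packet should be sent now. */
static int closing_should_tx(CLOSING_STATE *cs)
{
    if (!cs->conn_close_queued)
        return 0;
    cs->conn_close_queued = 0; /* one reply per received packet */
    return 1;
}
```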
I think we are still missing such a limit for clients which are not validated yet; a client address counts as validated either once the connection is established after handling a RETRY packet from the server, or once the SSL handshake completes (which also counts as validation).
I believe the relevant section of the RFC is 10.2.1:
Given that 10.2.1 relates to connections in the closing/terminating states (i.e. QUIC_CHANNEL_STATE_TERMINATING_CLOSING and QUIC_CHANNEL_STATE_TERMINATING_DRAINING), I don't think we need to worry about checks during connection establishment (i.e. connections in the QUIC_CHANNEL_STATE_IDLE or QUIC_CHANNEL_STATE_ACTIVE states). That said, there might be a concern about packet sizes here. The RFC says the cumulative size of packets sent should stay under 3x the amount of data received, but we don't compute packet sizes, we just limit the number of packets. So even though we never send more than one packet in response to a packet received during connection termination, if the received packet is very small and we pack a datagram with a full MTU worth of QUIC packets, we might strictly violate the 3x rule above. Though I'm not sure quite how much that matters.
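If we ever wanted to be strict about it, the check would need to be on cumulative bytes rather than packet counts. A minimal sketch with hypothetical counters, not existing fields:

```c
#include <stddef.h>

/* Hypothetical size-based accounting for the closing state. The only
 * point is that the comparison is on cumulative bytes, not packet counts. */
typedef struct closing_acct_st {
    size_t bytes_rx; /* cumulative bytes received since entering closing */
    size_t bytes_tx; /* cumulative bytes sent since entering closing */
} CLOSING_ACCT;

/* Call before packing a close datagram of dgram_len bytes. */
static int closing_tx_within_limit(const CLOSING_ACCT *a, size_t dgram_len)
{
    return a->bytes_tx + dgram_len <= 3 * a->bytes_rx;
}
```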
There is also section 21.1.1.1. Anti-amplification:
My understanding is the amplification limit must be implemented on the server side. I think we don't track the amount of data we send as a reply before we validate the client's address (either via a retry or by completing the handshake).
Ok, I see what you're saying, but I'm not sure I see how we might send more than 3x data to an unvalidated address. With the addition of the unvalidated address code, the receipt of any Initial packet omitting a retry token will only ever get a single retry packet in response, discarding any other packets in the received datagram which triggered the retry. This would change if we modified that code to support validating only in times of congestion, but at the moment, that's what I see. Am I missing something?
If we do not do a retry, the client's address is not validated until the handshake completes. Could it happen, especially with PQC, that the handshake data sent in response to the initial packet from the client is bigger than 3x the data that was received? The server sends its certificate chain to the client within that handshake data, and I think that can definitely be bigger than the client hello. The problem is how to account for that.
Right, but again from 21.1.1.1:
That to me says that clients doing a handshake, given that the connection is not yet established, are excused from the amplification limit. Or do you read that differently? |
That talks about there being no requirement for the client to validate the server address when establishing a new connection or during connection migration. Which is logical, as the traffic from the server would be a response to the client's packets. On the server side, however, the client's address must be validated (either implicitly or explicitly).
Ok, that's fair, but that says to me that the only way to avoid failing to complete a handshake when the server hello is 3x the size of the client hello is to validate the client address first, which is what our server address validation feature currently does. I'm just struggling to see how, in the event that a server hello is 3x the size of the client hello, we can make forward progress on a QUIC connection without validating the client address. Perhaps that is the answer (i.e. that we should validate client addresses in that situation, or more simply, always do validation, which is what we currently do).
Always doing the validation adds an unnecessary round trip, which is bad. We should probably look at how other QUIC libraries solved this.
Note that the initial packet contains padding; the RFC requires the client's Initial datagram to be at least 1200 bytes (RFC 9000, section 8.1). This padding provides two things: it confirms the network path can carry reasonably sized datagrams, and it reduces the amplitude of amplification attacks by guaranteeing the server at least 1200 bytes of received data to count against.
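To make the numbers concrete: since the first Initial datagram is at least 1200 bytes, the server starts with at least 3 * 1200 = 3600 bytes of send allowance toward the unvalidated address. If we enforced the limit without a retry, the accounting would look roughly like the sketch below; none of these names exist in our code.

```c
#include <stddef.h>

/* Sketch of per-connection anti-amplification credit for an unvalidated
 * client address, illustrating the RFC 9000 section 8 rule only. */
typedef struct amp_limit_st {
    int    addr_validated; /* set after retry token check or handshake */
    size_t bytes_rx;       /* bytes received from the unvalidated address */
    size_t bytes_tx;       /* bytes sent to the unvalidated address */
} AMP_LIMIT;

static void amp_on_rx(AMP_LIMIT *al, size_t dgram_len)
{
    al->bytes_rx += dgram_len; /* the first Initial adds at least 1200 */
}

/* Returns 1 if we may send dgram_len more bytes right now. */
static int amp_can_tx(const AMP_LIMIT *al, size_t dgram_len)
{
    if (al->addr_validated)
        return 1;
    return al->bytes_tx + dgram_len <= 3 * al->bytes_rx;
}
```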
What are the elements of the handshake which are likely to extend the size beyond the 3x limit? Can we somehow estimate the size based on those elements and force a retry if we're likely to go over? E.g. if we've been configured with an excessively large cert chain, then always retry. |
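Sketching that heuristic under the assumption that the cert chain dominates the first server flight; the names and the overhead estimate below are made up:

```c
#include <stddef.h>

/* RFC 9000: client Initial datagrams are at least 1200 bytes, so an
 * unvalidated address earns at least 3 * 1200 bytes of send credit. */
#define MIN_INITIAL_DGRAM_LEN   1200
#define NON_CERT_FLIGHT_EST      512 /* rough guess for non-cert handshake data */

/* Hypothetical configuration-time check: if the configured cert chain plus
 * an estimate of the rest of the first flight cannot fit in the minimum
 * pre-validation allowance, always force a retry. */
static int force_retry_for_chain(size_t cert_chain_len)
{
    size_t allowance = 3 * (size_t)MIN_INITIAL_DGRAM_LEN;

    return cert_chain_len + NON_CERT_FLIGHT_EST > allowance;
}
```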
Yeah, one option that IMO is good enough for the server MVP would be to simply send a retry on each initial connection from a unique address and not require it on subsequent connections from the same address, as those are already validated.
Doesn't that create the possibility for spoofing though? I.e. if a given client validates their address, another bad actor could use ARP poisoning to claim that IP and bypass validation.
One way around it, and the simplest for us right now, is to always send a retry packet to validate the client. This way we won't need to do packet accounting to enforce the anti-amplification limit.
As an addendum to that, we can optionally add the sending of a NEW_TOKEN packet to allow clients to bypass the retry round trip on subsequent connections.
But this adds a round trip, which is undesirable. As noted above, I think the simplest solution is just to always send a retry if our cert chain is over a certain size. This seems to me the most likely cause of the 3x limit being breached?
I think I have a proposal here; see the token format write-up in the issue description below.
FYI, we have issue #912 to track the updates to the token changes, allowing this to be closed, as we will validate every incoming connection either via a retry token or the use of a NEW_TOKEN.
I'm afraid this becomes a MUST with openssl/openssl#26114. Re-opening.
being handled in #949 |
This is a missing MUST item from the spreadsheet here:
https://docs.google.com/spreadsheets/d/1is0eRNrmNwzqcCTmTPYJwC3fswpYpqmY87-5CylraLc/edit?gid=1067533579#gid=1067533579
What we need is a token format that can do the following:
Based on how quic-go does this in token_generator.go, it seems like we need a token structure that contains the following information:
a) a timestamp (allowing expiration at various times in the future, based on whether the token was sent in a retry packet or a NEW_TOKEN packet)
b) some correlating information to allow address validation (either the dcid of the new connection sent in the new initial packet, or in the next initial packet in the case of NEW_TOKEN usage)
In quic-go, retry tokens are marshalled as ASN1 encodings containing the following information:
a) for retry packets, the token consists of the data:
IsRetryToken: bool (true)
RemoteAddr: bytes (remote addr of the client requesting the connection)
odcid: bytes (dcid of the first initial connection attempt)
rscid: bytes (retry source connection id of the connection)
timestamp: uint64_t (time that the token was minted)
b) for NEW_TOKEN packets:
IsRetryToken: bool (false)
RemoteAddr: bytes (remote addr of the client requesting the connection)
Timestamp: uint64_t (time that the token was minted)
Validation in quic-go consists of unmarshalling the ASN1 object in the token header field and doing the following:
a) Validate the remote address to confirm that it matches the IP address of the client trying to establish the connection
b) Validate the timestamp against one of two timeout values. For retry tokens the timeout is very short as the token is used immediately; for NEW_TOKEN tokens the timeout is longer
If any of those checks fail for a retry token, we respond with an INVALID protocol error. For NEW_TOKEN tokens, we respond with a retry packet.
It seems pretty straightforward to implement in our use case, I think. A rough sketch follows below.
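Purely for discussion, a hedged sketch of the token contents and checks described above; names, sizes, and lifetimes are invented, and a real token would be authenticated/encrypted rather than stored as a plain struct:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define RETRY_TOKEN_LIFETIME  10          /* seconds: used immediately */
#define NEW_TOKEN_LIFETIME    (24 * 3600) /* seconds: much longer */

typedef struct addr_token_st {
    int      is_retry_token;   /* 1 for retry tokens, 0 for NEW_TOKEN */
    uint8_t  remote_addr[16];  /* client address the token was minted for */
    size_t   remote_addr_len;
    uint8_t  odcid[20];        /* original DCID (retry tokens only) */
    size_t   odcid_len;
    uint8_t  rscid[20];        /* retry source CID (retry tokens only) */
    size_t   rscid_len;
    uint64_t timestamp;        /* seconds since epoch when minted */
} ADDR_TOKEN;

/* Returns 1 if the token is valid for this peer. On failure the caller
 * reacts as described above: a protocol error for a bad retry token, or
 * a retry packet for a bad NEW_TOKEN token. */
static int validate_addr_token(const ADDR_TOKEN *t,
                               const uint8_t *peer_addr, size_t peer_addr_len)
{
    uint64_t now = (uint64_t)time(NULL);
    uint64_t lifetime = t->is_retry_token ? RETRY_TOKEN_LIFETIME
                                          : NEW_TOKEN_LIFETIME;

    if (peer_addr_len != t->remote_addr_len
        || memcmp(peer_addr, t->remote_addr, peer_addr_len) != 0)
        return 0;
    if (now < t->timestamp || now - t->timestamp > lifetime)
        return 0;
    return 1;
}
```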