
QUIC: To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets it sends to three times the cumulative size of the packets that are received and attributed to the connection. #895

Closed
nhorman opened this issue Oct 11, 2024 · 22 comments

@nhorman
Contributor

nhorman commented Oct 11, 2024

This is a missing MUST item from the spreadsheet here:
https://docs.google.com/spreadsheets/d/1is0eRNrmNwzqcCTmTPYJwC3fswpYpqmY87-5CylraLc/edit?gid=1067533579#gid=1067533579

What we need is a token format that can do the following:

  1. be expired after a configured period (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.4)
  2. be used only once (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.4)
  3. Allow for validation of the client via address correlation (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.3)

Based on how quic-go does this in token_generator.go, it seems like we need a token structure that contains the following information:

a) a timestamp (to allow expiration at various points in the future, based on whether it was sent in a Retry packet or a NEW_TOKEN frame)
b) some correlating information to allow address validation (either the DCID of the new connection sent in the new Initial packet, or in the next Initial packet in the case of NEW_TOKEN usage)

In quic-go, retry tokens are marshalled as ASN.1 encodings containing the following information:
a) for Retry packets, the token consists of the data:
IsRetryToken: bool (true)
RemoteAddr: bytes (remote addr of the client requesting the connection)
odcid: bytes (DCID of the first initial connection attempt)
rscid: bytes (retry source connection ID of the connection)
timestamp: uint64_t (time that the token was minted)

b) for NEW_TOKEN frames:
IsRetryToken: bool (false)
RemoteAddr: bytes (remote addr of the client requesting the connection)
Timestamp: uint64_t (time that the token was minted)

Validation in quic-go consists of unmarshalling the ASN.1 object in the token header field and doing the following:
a) validate the remote address to confirm that it matches the IP address of the client trying to establish the connection
b) validate the timestamp against one of two timeout values; for Retry tokens the timeout is very short as the token is used immediately, for NEW_TOKEN tokens the timeout is longer

If any of those checks fail for a retry token, we respond with an INVALID_TOKEN protocol error. For NEW_TOKEN tokens, we respond with a Retry packet.

It seems pretty straightforward to implement in our use case I think
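
For illustration, here is a minimal sketch of what such a token could look like as a C structure (the field names and fixed sizes are assumptions for discussion, not an existing OpenSSL structure, and the actual encoding, ASN.1 or otherwise, is still to be decided):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical token body covering both the Retry and NEW_TOKEN cases. */
typedef struct quic_addr_token_st {
    int      is_retry_token;   /* 1 if minted for a Retry packet, 0 for NEW_TOKEN */
    uint64_t timestamp;        /* time the token was minted, for expiry checks */
    uint8_t  remote_addr[16];  /* client IP address the token was issued to */
    size_t   remote_addr_len;
    uint8_t  odcid[20];        /* original DCID (Retry tokens only) */
    size_t   odcid_len;
    uint8_t  rscid[20];        /* Retry source connection ID (Retry tokens only) */
    size_t   rscid_len;
} QUIC_ADDR_TOKEN;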

@vavroch2010

@Sashan to do some digging and place notes here

@vavroch2010 vavroch2010 moved this from Refining to Todo in Development Board Oct 21, 2024
@vavroch2010 vavroch2010 moved this from Refining to Todo in Project Board 3.5.0 planning Oct 21, 2024
@nhorman nhorman self-assigned this Nov 1, 2024
@nhorman
Contributor Author

nhorman commented Nov 1, 2024

The text from the RFC:

An endpoint MAY drop packet protection keys when entering the closing state and send a packet containing a CONNECTION_CLOSE frame in response to any UDP datagram that is received. However, an endpoint that discards packet protection keys cannot identify and discard invalid packets. To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets it sends to three times the cumulative size of the packets that are received and attributed to the connection. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet in response to any received packet.

Based on the structure of ch_tx in the QUIC implementation, I believe we meet this criterion:

static int ch_tx(QUIC_CHANNEL *ch, int *notify_other_threads)
{
    QUIC_TXP_STATUS status;
    int res;

    /*
     * RFC 9000 s. 10.2.2: Draining Connection State:
     *      While otherwise identical to the closing state, an endpoint
     *      in the draining state MUST NOT send any packets.
     * and:
     *      An endpoint MUST NOT send further packets.
     */
    if (ossl_quic_channel_is_draining(ch))
        return 0;

    if (ossl_quic_channel_is_closing(ch)) {   <= NH: If we have transitioned to the closing state
        /*
         * While closing, only send CONN_CLOSE if we've received more traffic
         * from the peer. Once we tell the TXP to generate CONN_CLOSE, all
         * future calls to it generate CONN_CLOSE frames, so otherwise we would
         * just constantly generate CONN_CLOSE frames.
         *
         * Confirming to RFC 9000 s. 10.2.1 Closing Connection State:
         *      An endpoint SHOULD limit the rate at which it generates
         *      packets in the closing state.
         */
        if (!ch->conn_close_queued)  <= NH: And that transition is pending
            return 0;  <= NH: Then don't send anything

        ch->conn_close_queued = 0;
    }

If the remote end sends us a connection close event, then we enter the draining state and immediately stop sending any frames. If we initiate the close then we may send a remaining packet, but that would be under the 3x limit imposed by the RFC. If we receive another packet in the interim, then ch->conn_close_queued gets set to one again, but on the next transmit it would be reset to zero, leaving us once again below the 3x received-size limit.

@Sashan could you please check me on this to be sure I'm not missing anything?

@nhorman nhorman moved this from Todo to In Progress in Project Board 3.5.0 planning Nov 1, 2024
@nhorman nhorman moved this from Todo to In Progress in Development Board Nov 1, 2024
@Sashan

Sashan commented Nov 4, 2024

I think we are still missing such a limit for clients that are not yet validated; a client is validated either by establishing the connection after handling a RETRY packet from the server, or by completing the SSL handshake (which also counts as validation).

@nhorman
Contributor Author

nhorman commented Nov 11, 2024

The relevant section of the RFC I believe is from 10.2.1:

To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets it sends to three times the cumulative size of the packets that are received and attributed to the connection. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet in response to any received packet.

Given that 10.2.1 relates to connections in the closing/terminating states (i.e. QUIC_CHANNEL_STATE_TERMINATING_CLOSING and QUIC_CHANNEL_STATE_TERMINATING_DRAINING), I don't think we need to worry about checks during connection establishment (i.e. connections in the QUIC_CHANNEL_STATE_IDLE or QUIC_CHANNEL_STATE_ACTIVE states).

That said, there may be a concern about packet sizes here. The RFC says we should keep the cumulative size of packets sent below 3x the amount of data received, but we don't compute packet sizes, we only limit the number of packets. So even though we never send more than one packet in response to a packet received during connection termination, if the received packet is very small and we pack a datagram with a full MTU's worth of QUIC packets, we could strictly violate the 3x rule above. Though I'm not sure quite how much that matters.
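
As a minimal sketch of what strict byte accounting could look like (the term_bytes_rx/term_bytes_tx counters and the helper are assumptions for illustration, not existing QUIC_CHANNEL fields):

#include <stddef.h>

/*
 * Hypothetical cumulative-size check for the closing state, per
 * RFC 9000 s. 10.2.1: packets sent must stay within three times the
 * cumulative size of packets received and attributed to the connection.
 */
static int ch_closing_tx_within_limit(QUIC_CHANNEL *ch, size_t next_dgram_len)
{
    if (ch->term_bytes_tx + next_dgram_len > 3 * ch->term_bytes_rx)
        return 0;   /* sending this datagram would exceed the 3x budget */

    ch->term_bytes_tx += next_dgram_len;
    return 1;
}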

@Sashan

Sashan commented Nov 11, 2024

There is also section 21.1.1.1. Anti-amplification:

      |  Note: The anti-amplification limit only applies when an
      |  endpoint responds to packets received from an unvalidated
      |  address.  The anti-amplification limit does not apply to
      |  clients when establishing a new connection or when initiating
      |  connection migration.

My understanding is that the amplification limit must be implemented on the server side:
when replying to an IP address that is not yet validated, the server must limit the
amount of data it sends back to that address.

I think we don't track the amount of data we send in reply before we either validate
the address or establish the TLS session.
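
A rough sketch of the kind of tracking that would be needed on the server side (the addr_validated flag and the byte counters are illustrative assumptions, not existing fields in our QUIC_CHANNEL):

#include <stddef.h>
#include <stdint.h>

/* How many more bytes may we send to this peer right now? */
static size_t ch_unvalidated_tx_budget(const QUIC_CHANNEL *ch)
{
    size_t limit;

    if (ch->addr_validated)
        return SIZE_MAX;   /* no anti-amplification limit once validated */

    /* RFC 9000 s. 8.1: at most 3x the data received from an unvalidated address. */
    limit = 3 * ch->bytes_rx_before_validation;
    if (ch->bytes_tx_before_validation >= limit)
        return 0;
    return limit - ch->bytes_tx_before_validation;
}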

@nhorman
Contributor Author

nhorman commented Nov 11, 2024

Ok, I see what you're saying, but I'm not sure I see how we might send more than 3x data to an unvalidated address. With the addition of the unvalidated address code, the receipt of any Initial packet omitting a retry token will only ever be answered with a single Retry packet, discarding any other packets in the received datagram that triggered the retry. This would change if we modified that code to support validating only in times of congestion, but at the moment that's what I see. Am I missing something?

@t8m
Member

t8m commented Nov 12, 2024

If we do not do a retry, the client's address is not validated until the handshake completes. Could it happen, especially with PQC, that the handshake data sent in response to the initial packet from the client is bigger than 3x the data that was received? The server sends its certificate chain to the client within that handshake data; I think it can definitely be bigger than the client hello. The problem is how to account for that.

@nhorman
Contributor Author

nhorman commented Nov 12, 2024

Right, but again from 21.1.1.1:

The anti-amplification limit does not apply to
clients when establishing a new connection or when initiating
connection migration.

That to me says that clients doing a handshake, given that the connection is not yet established, are excused from the amplification limit. Or do you read that differently?

@t8m
Member

t8m commented Nov 12, 2024

Right, but again from 21.1.1.1:

The anti-amplification limit does not apply to
clients when establishing a new connection or when initiating
connection migration.

That to me says that clients doing a handshake, given that the connection is not yet established, are excused from the amplification limit. Or do you read that differently?

That talks about there being no requirement for the client to validate the server's address when establishing a new connection or during connection migration, which is logical, as the traffic from the server is a response to the client's packets. However, on the server side the client's address must be validated (either implicitly or explicitly).

@nhorman
Contributor Author

nhorman commented Nov 12, 2024

Ok, that's fair, but that says to me that the only way to complete a handshake when the server hello is more than 3x the size of the client hello is to validate the client address first, which is what our server address validation feature currently does.

I'm just struggling to see how, in the event that the server hello is 3x the size of the client hello, we can make forward progress on a QUIC connection without validating the client address. Perhaps that is the answer (i.e. that we should validate client addresses in that situation, or more simply, always do validation, which is what we currently do).

@t8m
Member

t8m commented Nov 12, 2024

Always doing the validation adds an unnecessary round trip, which is bad. We should probably look at how other QUIC libraries solved this.

@Sashan

Sashan commented Nov 12, 2024

If we do not do a retry, the client's address is not validated until the handshake completes. Could it happen, especially with PQC, that the handshake data sent in response to the initial packet from the client is bigger than 3x the data that was received? The server sends its certificate chain to the client within that handshake data; I think it can definitely be bigger than the client hello. The problem is how to account for that.

Note the initial packet contains padding; the RFC requires the datagram carrying a client Initial packet to be at least 1200 bytes (RFC 9000, section 8.1). This padding provides two things:

  • the server should be able to send enough data to complete the TLS handshake effectively (it can send roughly 3.6 KB of data back to the client for every such packet it receives)
  • clients (or attackers) need to send a fair amount of data themselves, so short Initial packets from unvalidated clients are suspicious; we can use that as a trigger to validate such an address (Retry packets are short and cannot be used for effective amplification)
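
For example, assuming every client Initial datagram is padded to the 1200-byte minimum, a server honoring the 3x limit can send up to 3 × 1200 = 3600 bytes before the address is validated; whether that is enough depends largely on the size of the certificate chain and, as noted above, is less likely to be enough with PQC.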

@mattcaswell
Member

What are the elements of the handshake which are likely to extend the size beyond the 3x limit? Can we somehow estimate the size based on those elements and force a retry if we're likely to go over? E.g. if we've been configured with an excessively large cert chain, then always retry.
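
Something along these lines might work as a heuristic (the helper name, the overhead allowance and the use of the configured chain size as the dominant term are all assumptions for discussion, not existing OpenSSL APIs):

#include <stddef.h>

/*
 * Decide whether to force a Retry before continuing the handshake.
 * cert_chain_len is the encoded size of the configured certificate chain,
 * client_initial_len is the size of the datagram we just received.
 */
static int server_should_force_retry(size_t cert_chain_len,
                                     size_t client_initial_len)
{
    /* Rough allowance for the non-certificate handshake messages. */
    const size_t other_handshake_bytes = 1000;
    size_t estimated_response = cert_chain_len + other_handshake_bytes;

    /* Force a Retry if our first flight would blow the 3x budget. */
    return estimated_response > 3 * client_initial_len;
}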

@t8m
Member

t8m commented Nov 12, 2024

Yeah, one option IMO good enough for the server MVP would be to simply send a retry on each initial connection from a unique address and not require it on subsequent connections from the same address, as they are already validated.

@nhorman
Contributor Author

nhorman commented Nov 12, 2024

Doesn't that create the possibility of spoofing though? I.e., if a given client validates their address, another bad actor could use ARP poisoning to claim that IP and bypass validation.

@Sashan

Sashan commented Nov 12, 2024

One way around it, and the simplest for us right now, is to always send a Retry packet to validate the client. That way we won't need to do packet accounting to enforce the anti-amplification limit.

@nhorman
Contributor Author

nhorman commented Nov 12, 2024

As an addendum to that, we can optionally add the sending of a NEW_TOKEN frame to allow clients to bypass the retry round trip on subsequent connections.

@mattcaswell
Member

One way around it, and the simplest for us right now, is to always send a Retry packet to validate the client. That way we won't need to do packet accounting to enforce the anti-amplification limit.

But this adds a round trip, which is undesirable. As noted above, I think the simplest solution is just to always send a retry if our cert chain is over a certain size. That seems to me the most likely cause of the 3x limit being breached??

@nhorman
Contributor Author

nhorman commented Nov 13, 2024

I think I have a proposal here

What we need is a token format that can do the following:

  1. be expired after a configured period (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.4)
  2. be used only once (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.4)
  3. Allow for validation of the client via address correlation (https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.3)

Based on how quic-go does this in token_generator.go, it seems like we need a token structure that contains the following information:

a) a timestamp (to allow expiration at various points in the future, based on whether it was sent in a Retry packet or a NEW_TOKEN frame)
b) some correlating information to allow address validation (either the DCID of the new connection sent in the new Initial packet, or in the next Initial packet in the case of NEW_TOKEN usage)

In quic-go, retry tokens are marshalled as ASN.1 encodings containing the following information:
a) for Retry packets, the token consists of the data:
IsRetryToken: bool (true)
RemoteAddr: bytes (remote addr of the client requesting the connection)
odcid: bytes (DCID of the first initial connection attempt)
rscid: bytes (retry source connection ID of the connection)
timestamp: uint64_t (time that the token was minted)

b) for NEW_TOKEN frames:
IsRetryToken: bool (false)
RemoteAddr: bytes (remote addr of the client requesting the connection)
Timestamp: uint64_t (time that the token was minted)

Validation in quic-go consists of unmarshalling the ASN.1 object in the token header field and doing the following:
a) validate the remote address to confirm that it matches the IP address of the client trying to establish the connection
b) validate the timestamp against one of two timeout values; for Retry tokens the timeout is very short as the token is used immediately, for NEW_TOKEN tokens the timeout is longer

If any of those checks fail for a retry token, we respond with an INVALID_TOKEN protocol error. For NEW_TOKEN tokens, we respond with a Retry packet.

It seems pretty straightforward to implement in our use case I think
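
A sketch of that validation flow, reusing the hypothetical QUIC_ADDR_TOKEN layout sketched under the issue description (the helper name, lifetimes and failure handling are assumptions, not existing OpenSSL functions):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RETRY_TOKEN_LIFETIME_S  10           /* used immediately after a Retry */
#define NEW_TOKEN_LIFETIME_S    (24 * 3600)  /* much longer for NEW_TOKEN tokens */

/* Returns 1 if the token is acceptable for the peer at peer_addr, else 0. */
static int validate_token(const QUIC_ADDR_TOKEN *tok,
                          const uint8_t *peer_addr, size_t peer_addr_len,
                          uint64_t now)
{
    uint64_t lifetime = tok->is_retry_token ? RETRY_TOKEN_LIFETIME_S
                                            : NEW_TOKEN_LIFETIME_S;

    /* a) the token must have been issued to this client address */
    if (tok->remote_addr_len != peer_addr_len
            || memcmp(tok->remote_addr, peer_addr, peer_addr_len) != 0)
        return 0;

    /* b) the token must not have expired */
    if (now > tok->timestamp + lifetime)
        return 0;

    return 1;
}

On failure, per the scheme above, a Retry token failure would map to the protocol error and a NEW_TOKEN failure would simply fall back to sending a Retry packet.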

@nhorman
Contributor Author

nhorman commented Nov 14, 2024

FYI, we have issue #912 to track the updates to the token changes, allowing this to be closed, as we will validate every incoming connection either via a retry token or the use of a NEW_TOKEN

@nhorman nhorman closed this as completed Nov 14, 2024
@github-project-automation github-project-automation bot moved this from In Progress to Done in Development Board Nov 14, 2024
@github-project-automation github-project-automation bot moved this from In Progress to Done in Project Board 3.5.0 planning Nov 14, 2024
@github-project-automation github-project-automation bot moved this from New to Done in Project Board Nov 14, 2024
@Sashan

Sashan commented Dec 9, 2024

I'm afraid this becomes a MUST with openssl/openssl#26114. Re-opening.

@Sashan Sashan reopened this Dec 9, 2024
@Sashan Sashan moved this from Done to Pre-Refinement in Development Board Dec 9, 2024
@nhorman
Contributor Author

nhorman commented Dec 16, 2024

being handled in #949

@nhorman nhorman closed this as completed Dec 16, 2024
@github-project-automation github-project-automation bot moved this from Pre-Refinement to Done in Development Board Dec 16, 2024