webrtc: wait for FIN_ACK before closing data channels #2615
Conversation
Partial review.
Discussed offline: If we move closing of the stream scope to the transports, we don't need to introduce the AsyncCloser interface. We need to decide if that structure makes sense first.
p2p/transport/webrtc/connection.go
Outdated
c.closeErr = errors.New("connection closed")
c.cancel()
return c.pc.Close()
func (c *connection) closeTimedOut() error {
Add a comment?
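For illustration, a sketch of the kind of doc comment the reviewer is asking for. The comment wording, the method body, and the stand-in types below are assumptions based on the surrounding diff, not the actual change:

```go
package webrtc

import (
	"context"
	"errors"
)

// Minimal stand-ins so this sketch compiles on its own; the real connection
// type wraps a pion PeerConnection and carries more state.
type peerConnectionCloser interface{ Close() error }

type connection struct {
	closeErr error
	cancel   context.CancelFunc
	pc       peerConnectionCloser
}

// closeTimedOut closes the connection after we have given up waiting for the
// peer to acknowledge (FIN_ACK) the closing of our data channels.
// (Hypothetical comment wording and body; the author's actual change may differ.)
func (c *connection) closeTimedOut() error {
	c.closeErr = errors.New("connection closed: timed out waiting for peer")
	c.cancel()
	return c.pc.Close()
}
```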
@Stebalien I have removed … The difference between webrtc and yamux/quic is that with those you're relying on the transport stack to discard all the buffers when you do a Close. This happens with yamux. With quic, the receive side of the stream is not immediately discarded, but the peer doesn't get credits to open a new stream till it cleanly finishes the stream after we have sent …. No such mechanism exists in webrtc: there is no limit to the streams you can open (at least not in pion), and no unilateral stream closing, since streams are reusable. In webrtc ….
There is one way to handle this other than doing an explicit async close without waiting for 1 RTT, but it would require us to detect a misbehaving peer that is forcing us to keep all the datachannels open, and just close the connection to such a peer. We can discuss that strategy in a separate issue. That'd also require pion/webrtc#2672.
Can't we do the same thing here? My objection is adding this hack to libp2p itself, as this is a transport-specific detail.
That's a fair point. I'll open an issue to discuss.
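For context, a rough, self-contained sketch of the close flow under discussion: send our FIN, then wait (bounded by a timeout) for the peer's FIN_ACK before releasing the data channel. The names (closeWithFINACK, sendFIN, finAckReceived) and the timeout value are illustrative assumptions, not the PR's actual API.

```go
package webrtc

import (
	"context"
	"time"
)

// Illustrative stand-ins for the real stream internals.
type stream struct {
	finAckReceived chan struct{} // closed when the peer's FIN_ACK arrives
	sendFIN        func() error  // writes our FIN control message
	closeChannel   func() error  // releases the underlying data channel
}

// closeWithFINACK sends FIN and keeps the data channel open until the peer
// acknowledges it (or a timeout/context cancellation fires), so buffered data
// is not discarded prematurely on either side.
func (s *stream) closeWithFINACK(ctx context.Context) error {
	if err := s.sendFIN(); err != nil {
		return err
	}
	select {
	case <-s.finAckReceived:
	case <-time.After(10 * time.Second): // assumed bound: give up and close anyway
	case <-ctx.Done():
	}
	return s.closeChannel()
}
```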
I have addressed this and removed the …
Some comments, mostly questions, otherwise LGTM.
p2p/transport/webrtc/stream_read.go
Outdated
@@ -35,18 +37,28 @@ func (s *stream) Read(b []byte) (int, error) {
var msg pb.Message
if err := s.reader.ReadMsg(&msg); err != nil {
s.mx.Lock()
nit: not related, but this locking is strange. We could write this as:
s.mx.Unlock()
var msg pb.Message
err := s.reader.ReadMsg(&msg)
s.mx.Lock()
if err != nil {
// ...
}
// no second "lock" call needed.
This is much better. Thanks!
p2p/transport/webrtc/stream.go
Outdated
defer s.mx.Unlock()

s.closeForShutdownErr = closeErr
s.SetReadDeadline(time.Now().Add(-1 * time.Hour))
This seems like a strange way to do this. It'll also be racy (e.g., if the user calls the same function to change the read deadline).
Do we need to change the read deadline? If this is the only way to do this, let's add something to SetReadDeadline so it can't be changed again by the user.
This is the only way to unblock a waiting reader. Even closing the datachannel won't unblock it till 1 RTT later, when the peer replies with a close of the datachannel on its part.
I've changed this to make SetReadDeadline a no-op after the read half is closed, so the user can't change it again.
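A minimal sketch of the behaviour described above: SetReadDeadline becomes a no-op once the read half is closed, so the past deadline set internally to unblock a pending Read cannot be overwritten. The field and constant names are assumptions for illustration:

```go
package webrtc

import (
	"sync"
	"time"
)

type receiveState int

const (
	receiveStateReceiving receiveState = iota
	receiveStateClosed // read half closed (fully read, or reset)
)

type readDeadlineSetter interface {
	SetReadDeadline(time.Time) error
}

type stream struct {
	mx           sync.Mutex
	receiveState receiveState
	dataChannel  readDeadlineSetter
}

// SetReadDeadline forwards the deadline only while the read half is still
// open; afterwards, user calls are ignored, so the deadline used internally
// to unblock a pending Read stays in effect.
func (s *stream) SetReadDeadline(t time.Time) error {
	s.mx.Lock()
	defer s.mx.Unlock()
	if s.receiveState != receiveStateReceiving {
		return nil
	}
	return s.dataChannel.SetReadDeadline(t)
}
```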
p2p/transport/webrtc/stream.go
Outdated
select {
case s.sendStateChanged <- struct{}{}:
default:
Future cleanup: make some form of notifyStateChanged function.
I've replaced the three channels with a single writeStateChanged channel and added a method to notify on it, removing all the select blocks everywhere.
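A sketch of that pattern: a single buffered channel of capacity 1 plus a non-blocking notify helper replaces the repeated select blocks. The writeStateChanged name comes from the comment above; the surrounding struct is illustrative:

```go
package webrtc

type stream struct {
	// Buffered with capacity 1 so a pending notification is never lost
	// and the sender never blocks.
	writeStateChanged chan struct{}
}

func newStream() *stream {
	return &stream{writeStateChanged: make(chan struct{}, 1)}
}

// notifyWriteStateChanged wakes up any goroutine waiting for a write-state
// transition, without blocking if nobody is currently waiting.
func (s *stream) notifyWriteStateChanged() {
	select {
	case s.writeStateChanged <- struct{}{}:
	default:
	}
}
```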
p2p/transport/webrtc/stream.go
Outdated
s.mx.Lock()
defer s.mx.Unlock()
// Unblock any Read call waiting on reader.ReadMsg
s.SetReadDeadline(time.Now().Add(-1 * time.Hour))
same race comment as above.
@@ -17,6 +18,7 @@ import (
quicproxy "github.com/quic-go/quic-go/integrationtests/tools/proxy"

"golang.org/x/crypto/sha3"
"golang.org/x/exp/rand"
Is this the rand you want to use?
I did it mostly because of this: golang/go#21835.
But it looks like math/rand.Read is deprecated and the recommendation is to always use crypto/rand.Read, so I'm using that now.
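For reference, a small example of the replacement; crypto/rand.Read fills a buffer with cryptographically secure random bytes (the 32-byte size here is arbitrary):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	buf := make([]byte, 32)
	// crypto/rand.Read never returns a short read without an error,
	// so checking err is sufficient.
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", buf)
}
```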
I have addressed your concerns here. I want to do a release today. Happy to take any comments in a follow-up PR, and if something major is missed we can do a patch release.
Nope, LGTM!
part of #2656
This removes the control message queue, because control messages only ever add 500 bytes to the queue and will not cause too much HOL blocking for other streams.
There is no problem with backpressure, since SCTP writes never actually block; there is no blocking on sctp Write.
To verify: