Wasm: add a transport that binds to a JS libp2p transport #974
Oh well, someone who knows js-libp2p is probably better suited than me. Alternatively, I can do it if someone is available to answer my questions about js-libp2p.
Once done, we could remove the browser-specific code from
This might interest you. I made a small lib that lets you treat websockets as a tokio AsyncRead/AsyncWrite in wasm. No need to do anything more "browser specific". All you need is to compile to wasm and be able to work with AsyncRead/AsyncWrite. The lib is very young and still needs review (unit tests fail in release mode, I don't know why yet). It might get included in gloo at some point. It's aimed at async-await, but if a version for futures 0.1 is wanted, it's not too hard to convert it and make a legacy version.
Hey @tomaka this sounds really cool and something we'll need at some point. I can give you a hand with the JS side - let me know how you want to do this.
@najamelan Thanks for the help. I also have a working implementation of WebSockets-in-JavaScript-in-Rust in #970, but I think the approach of this PR is better (provided that the API of the JavaScript libraries doesn't change).
@dryajov I've had a few issues:
Hey @tomaka

This should be possible by calling the transport's `dial` method. The flow in js looks something like this:

```js
const conn = ws.dial([mas], options, () => {
  // the connection is ready to use
})

// ...

// close an established connection or cancel an ongoing dial attempt
conn.close(() => { /* ... */ })
```

The callback should always be called, either with an error if no addresses are available or with the resolved addresses; it should never hang or fail to execute the callback. If you're seeing otherwise, it's either a bug in the transport or something else related to js/webassembly interop.
Yes, this is how it works right now, unfortunately. A fix was started some time ago, but I'm not sure how far along this effort is or whether it's a priority ATM. Here is a (lengthy) thread for a bit of background.
I believe the answer is no, if I'm reading this correctly.
Yes, it closes it forever. Once the source has ended, it should not be usable anywhere else. What is the specific use case you're looking at with pull streams?

Just as an example, here are two of the most commonly used pieces: `pull.values` (a source) and `pull.drain` (a sink). The way this looks in code:

```js
const pull = require('pull-stream')

pull(
  pull.values([1, 2, 3, 4]), // drain calls the closure returned from values repeatedly until it "returns" true in the read callback
  pull.drain((val) => {
    console.log(val) // prints 1, 2, 3, 4, one per line
  },
  (err) => console.log(err))
)
```

Hope this answers it for pull-streams.
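The pull-stream contract described above is small enough to sketch without the library itself. The following is a hand-rolled illustration (not the actual `pull-stream` package): the `values` and `drain` names mirror the library's helpers, but the implementations here are simplified assumptions.

```javascript
// A pull-stream source is a function (abort, cb): each call yields one
// value via cb(null, value), signals end via cb(true), or acknowledges
// an abort via cb(abort). A sink takes a source and drives it.

// Source, in the spirit of pull.values: emits each array element, then ends.
function values (items) {
  let i = 0
  return function source (abort, cb) {
    if (abort) return cb(abort)            // caller asked us to stop
    if (i >= items.length) return cb(true) // true signals end-of-stream
    cb(null, items[i++])
  }
}

// Sink, in the spirit of pull.drain: reads until the source ends or errors.
function drain (onData, onEnd) {
  return function sink (source) {
    source(null, function next (end, data) {
      if (end) return onEnd && onEnd(end === true ? null : end)
      onData(data)
      source(null, next)                   // pull the next value
    })
  }
}

// Wire them together the way pull(...) would.
const out = []
drain(v => out.push(v), () => console.log(out.join('')))(values([1, 2, 3, 4]))
// prints "1234"
```

This is the shape a Rust implementation would have to mimic: the "write" direction works by handing the sink a source, which is why the Rust side needs to speak this callback protocol rather than a plain read/write interface.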
That's very helpful, thanks!
That seems to be a misunderstanding on my part. I was assuming that the callback was called once per address. Are you saying that the callback is only called once, with an array of addresses?
I need to implement the pull-streams interface on the Rust side as well (because writing data is done by passing a source to a sink), and I'm trying to make sure that I'm doing this correctly.
For what it's worth, I've got some code in #970 that is working fine on the dialing side.
Yep, it should be called once with an array of multiaddrs.
I'm not sure what sort of interop Rust has with js streams, but it is possible to convert between node streams and pull streams - https://github.com/pull-stream/stream-to-pull-stream and https://github.com/pull-stream/pull-stream-to-stream. Maybe rewriting pull streams in Rust is not necessary? Here is some more background on pull streams: ipfs/js-ipfs#362.

One more thing: in the case of the WS transport specifically, in the browser only the dialer is used; the listening part is ignored on libp2p boot. Sadly, we can't listen for websocket connections in the browser.

I was thinking about it a little more, and I wonder if directly using javascript transports in Rust is the right way to go about it. For one, the Rust and JS implementations are quite different; on the other hand, the transports are really thin wrappers around some lower-level lib. In the browser it falls back to whatever underlying primitive there is for it (i.e. https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API). So I wonder if it's worth the trouble of trying to make things interop, or whether we should just use the js implementation as inspiration but fork wherever it makes sense from Rust's standpoint.
I'll take a closer look at #970 and try to lend a hand wherever I can 👍
My objective is to make rust-libp2p work in the browser. There's only one possible way to do this: by interfacing with some JavaScript code. I can either interface with the browser/nodejs's APIs, or I can interface with the js-libp2p transports. Doing the latter would be a huge time saver. |
Yeah, I totally get the intent of this effort. My point is that there are lots of js-libp2p specifics in how these transports are implemented, for example pull-streams; that decision was based mostly on the limitations around node streams, but it does introduce a requirement for Rust to implement it. Another thing, which you brought up, is the close functionality that drops all connections, which is most likely going to take some time to complete. What I'm thinking is: what if we took the JS implementations, forked them, and tweaked them to work with Rust, rather than adapting/reimplementing the Rust parts?

For example the WS transport - we can expose an interface that's easiest to consume for Rust, but keep the internals mostly as is, i.e. it continues using pull-ws internally. The pros as I see it are: a) no need to implement missing JS parts in Rust (pull-streams), b) the Rust transport interface doesn't have to accommodate js-libp2p specifics, c) connection management can be done in a way that is more convenient for the Rust implementation. Cons: you end up with Rust-specific JS transports. Maybe not such a biggie given how different both implementations are.
Thinking about it, the right solution is probably to write some bindings on the Rust side corresponding to what the Rust expects, and write a compatibility layer in JavaScript. |
The reason behind this is that writing JavaScript from within Rust is incredibly painful, as one has to manually handle all the possible corner cases (what if a method doesn't exist? what if this is null? what if this callback is called twice despite the documentation saying it should only be called once? and so on). By writing a compatibility layer in JavaScript, we delegate all these concerns to the code that uses rust-libp2p instead of having them in rust-libp2p itself. Having a compatibility layer also means that we decouple WASM from JavaScript, which is never a bad thing.
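The kind of compatibility layer described here could look something like the sketch below: a defensive JS wrapper that absorbs the corner cases (missing method, sync throw, callback fired twice) before anything reaches the Rust/wasm side. The `wrapTransport` name and the `dial(addrs, options, cb)` shape are assumptions based on the flow discussed earlier in this thread, not an existing API:

```javascript
// Hypothetical compatibility shim in front of a js-libp2p transport,
// so the Rust/wasm side only ever sees a well-behaved callback API.
function wrapTransport (transport) {
  return {
    dial (addrs, options, cb) {
      let done = false
      const once = (err, conn) => {   // guarantee: cb fires exactly once
        if (done) return
        done = true
        cb(err || null, conn || null)
      }
      if (!transport || typeof transport.dial !== 'function') {
        return once(new Error('transport has no dial method'))
      }
      try {
        return transport.dial(addrs, options, once)
      } catch (err) {
        once(err)                     // turn a sync throw into a callback
      }
    }
  }
}
```

With this in place, the Rust bindings can assume "callback called exactly once, error-first" as an invariant instead of re-checking it at every call site.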
As I stated before, I would go as far as not trying to interface with the JS transports themselves. The JS transports are mostly wrappers around some lower-level primitives, in place to make things easier and more ergonomic to consume from JS itself; it would be a mistake to wrap around the wrapper.
Using wasm-bindgen, we can write an implementation of `Transport` that gets passed a `JsValue` which is expected to conform to https://github.com/libp2p/interface-transport. This way, we can use the existing js-libp2p transports when compiling Rust for the browser, instead of having to rewrite everything in Rust.