feat(pool): add a Singleton pool type #226
base: master
Conversation
this is really exciting work @seanmonstar! thanks for your patience in review, it took me a couple days to take a deep read through this.
broadly i believe this is already in tremendous shape. my primary concern is bounding the number of pending callers while waiting for the connection to be established.
besides that, i have a question about how this might be extended in the future to accommodate an `Arc<T>`-backed inner service. that is mostly a forward-facing design question rather than a concern with this initial introduction to the `client::pool` module.
thanks for breaking this work out into distinct pull requests, and for the illustrative outline and examples in hyperium/hyper#3948.
```rust
//! The singleton pool combines a MakeService that should only produce a single
//! active connection. It can bundle all concurrent calls to it, so that only
//! one connection is made. All calls to the singleton will return a clone of
//! the inner service once established.
```
Suggested change:

```diff
-//! The singleton pool combines a MakeService that should only produce a single
-//! active connection. It can bundle all concurrent calls to it, so that only
-//! one connection is made. All calls to the singleton will return a clone of
-//! the inner service once established.
+//! This ensures that only one connection is made.
+//!
+//! The singleton pool wraps a `MakeService<T>` so that it only produces a
+//! single [`Service<T>`]. Once that service is instantiated, all successive
+//! calls to the `Singleton` pool will return a clone of the inner service.
```
take-it-or-leave-it, docs nit.
```rust
use std::task::{self, Poll};

use tokio::sync::oneshot;
use tower_service::Service;
```
a question that occurs to me: should these types work with `hyper::service::Service` instead? should they implement both `tower_service::Service` and `hyper`'s own trait?

`tower_service` naturally makes these types play well with libraries like `tower-util` and the rest of that ecosystem, though not providing implementations of `hyper`'s traits means that callers must use this with `TowerToHyperService`.

this is a question that might be worth bubbling up to hyperium/hyper#3948, since this will affect #227 and #228.
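for reference, a minimal sketch of the adapter route, assuming callers keep using hyper-util's existing `TowerToHyperService` wrapper (the `adapt` helper here is purely illustrative and not part of this PR):

```rust
// a sketch, not part of this PR: adapting a tower_service::Service so it can be
// used where hyper's own Service trait is required, via hyper-util's wrapper.
use hyper_util::service::TowerToHyperService;

// `svc` stands in for any pool service produced by this module.
fn adapt<S>(svc: S) -> TowerToHyperService<S> {
    TowerToHyperService::new(svc)
}
```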
```rust
//! one connection is made. All calls to the singleton will return a clone of
//! the inner service once established.
//!
//! This fits the HTTP/2 case well.
```
i'm happy that this is called out!
```rust
#[derive(Debug)]
enum State<S> {
    Empty,
    Making(Vec<oneshot::Sender<S>>),
```
this should be bounded in some manner! if the `SingletonFuture` gets stuck whilst establishing a connection, we could end up with an unbounded number of waiters here.
it'd be nice if configuring that was optional (e.g. with a builder-style method?) with a relatively high default value.
ideally, we also introduce some lightweight test coverage to ensure that only N waiters can enter the queue once we've begun "making" the inner service.
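to make the suggestion concrete, here's a hypothetical sketch of that builder-style bound; none of these names or struct fields come from the PR, they're just illustrative:

```rust
// hypothetical sketch only: cap how many oneshot senders may queue up while the
// inner service is still being made. names and fields here are assumptions.
use tokio::sync::oneshot;

/// a relatively high default (value is illustrative).
const DEFAULT_MAX_WAITERS: usize = 1024;

enum State<S> {
    Empty,
    Making(Vec<oneshot::Sender<S>>),
    Made(S),
}

struct Singleton<M, S> {
    maker: M,
    state: State<S>,
    max_waiters: usize,
}

impl<M, S> Singleton<M, S> {
    fn new(maker: M) -> Self {
        Singleton {
            maker,
            state: State::Empty,
            max_waiters: DEFAULT_MAX_WAITERS,
        }
    }

    /// builder-style override for the waiter bound.
    fn max_waiters(mut self, max: usize) -> Self {
        self.max_waiters = max;
        self
    }

    /// queue a waiter while the inner service is being made, refusing once the
    /// cap is reached instead of growing without bound.
    fn try_enqueue(&mut self, tx: oneshot::Sender<S>) -> Result<(), oneshot::Sender<S>> {
        let max = self.max_waiters;
        match self.state {
            State::Making(ref mut waiters) if waiters.len() < max => {
                waiters.push(tx);
                Ok(())
            }
            _ => Err(tx),
        }
    }
}
```

returning the sender to the caller on overflow leaves the choice of error type (or backpressure strategy) open.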
```rust
State::Empty | State::Made(_) => {
    // shouldn't happen!
}
```
it'd be nice if this emitted a `tracing::warn!` should we ever hit this, ideally behind a feature gate.
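a sketch of what that could look like, assuming a `tracing` cargo feature (both the feature name and the standalone helper below are assumptions):

```rust
// illustrative only: warn in the "shouldn't happen" arm, gated behind a
// hypothetical `tracing` feature so the dependency stays optional.
use tokio::sync::oneshot;

enum State<S> {
    Empty,
    Making(Vec<oneshot::Sender<S>>),
    Made(S),
}

fn register_waiter<S>(state: &mut State<S>, tx: oneshot::Sender<S>) {
    match state {
        State::Making(waiters) => waiters.push(tx),
        State::Empty | State::Made(_) => {
            // shouldn't happen!
            #[cfg(feature = "tracing")]
            tracing::warn!("singleton pool registered a waiter in an unexpected state");
            // without the feature, the waiter is silently dropped as before.
            drop(tx);
        }
    }
}
```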
```rust
State::Made(ref svc) => SingletonFuture::Made {
    svc: Some(svc.clone()),
},
```
something we may wish to add as a follow-on to this would be a similar "singleton" pool that is backed by an `Arc<T>`. upon my initial read, i was slightly surprised we weren't already `Arc<T>`'ing the generated `Service<T>`.

as is, this calls `mk_svc.call()` once, but does result in multiple distinct copies of the `M::Response` service. an `Arc<T>`-backed singleton pool would be useful for services that are expensive to `Clone`.
i don't believe any of this blocks this specific change, but i'm curious to ask where we think it might reasonably live. ideally we could rely on a lot of this infrastructure, since 99% of it will be the same! 😸
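as a sketch of what that could look like (names here are hypothetical, and a real version would need more care around `poll_ready`, e.g. something along the lines of tower's `Buffer`):

```rust
// hypothetical sketch, not part of this PR: one expensive-to-clone service shared
// behind Arc<Mutex<_>>, so cloning the handle only bumps the Arc.
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use tower_service::Service;

struct SharedService<S>(Arc<Mutex<S>>);

impl<S> Clone for SharedService<S> {
    fn clone(&self) -> Self {
        // cheap: no `S: Clone` bound required.
        SharedService(Arc::clone(&self.0))
    }
}

impl<S, Req> Service<Req> for SharedService<S>
where
    S: Service<Req>,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // caveat: readiness reserved here could be consumed by another clone of
        // the handle; that's why a buffered/channel-based approach may be better.
        self.0.lock().unwrap().poll_ready(cx)
    }

    fn call(&mut self, req: Req) -> Self::Future {
        self.0.lock().unwrap().call(req)
    }
}
```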
```rust
/// The singleton wraps an inner service maker, bundling all calls to ensure
/// only one service is created. Once made, it returns clones of the made
/// service.
```
i'd suggest that we clarify this slightly, and indicate that we lazily instantiate the inner service.

i could imagine an alternative "eager" singleton connection pool that invokes the `MakeService` immediately, for example.
This creates a `Singleton` pool service which wraps an inner make-service, and queues up all calls while waiting for the one instance to be created, and then hands out clones. This pattern fits HTTP/2 well.

The current implementation mirrors much of how the legacy pool handles connecting with HTTP/2 prior knowledge.
Closes hyperium/hyper#3953
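As a toy illustration of the bundling behavior described above (this is not the PR's implementation; it uses `tokio::sync::OnceCell` as a stand-in): many concurrent callers, a single establishment, and a clone handed to everyone.

```rust
// toy illustration of the singleton/bundling pattern, not this PR's code:
// eight tasks race, only one "connection" is ever established, and every task
// ends up with a clone of that same connection.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::sync::OnceCell;

#[derive(Clone)]
struct FakeConn(usize);

#[tokio::main]
async fn main() {
    let established = Arc::new(AtomicUsize::new(0));
    let cell: Arc<OnceCell<FakeConn>> = Arc::new(OnceCell::new());

    let mut handles = Vec::new();
    for _ in 0..8 {
        let cell = cell.clone();
        let established = established.clone();
        handles.push(tokio::spawn(async move {
            cell.get_or_init(|| async {
                // only the first caller actually "connects"...
                let id = established.fetch_add(1, Ordering::SeqCst);
                FakeConn(id)
            })
            .await
            .clone() // ...and everyone else receives a clone of the same service
        }));
    }

    for h in handles {
        assert_eq!(h.await.unwrap().0, 0);
    }
    assert_eq!(established.load(Ordering::SeqCst), 1);
}
```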