FEAT: Improve interactions of user pools #1009
Conversation
mj-will left a comment
Looks great, I look forward to trying it! I've added some initial comments but might need to have a second look.
```python
    parameters=priors.sample(),
) as _pool:
    start_time = datetime.datetime.now()
    sampler.pool = _pool
```
Is this safe to do? Depending on how the sampler uses the pool, are there settings where the pool the sampler has stored is not updated?
For example, if the sampler has constructed a likelihood using `pool.map` from the initial input pool, will this break?
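To make the concern concrete, here is a minimal illustration with dummy stand-in classes (not bilby's actual objects) of how a bound `pool.map` captured at likelihood-construction time keeps pointing at the original pool even after the sampler's pool attribute is reassigned:

```python
# Dummy stand-ins; names are illustrative, not bilby's API.
class DummyPool:
    def __init__(self, name):
        self.name = name

    def map(self, fn, iterable):
        return [fn(x) for x in iterable]


old_pool = DummyPool("old")
new_pool = DummyPool("new")

# Likelihood built against the *initial* pool captures its bound map.
captured_map = old_pool.map

# Analogous to `sampler.pool = _pool` later on:
sampler_pool = new_pool

# The captured method still belongs to the old pool, so any likelihood
# that closed over it would keep using the original pool.
assert captured_map.__self__ is old_pool
assert sampler_pool.name == "new"
```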
I'm trying to work through what this would look like.
If the initial input pool is not None, all of the pool objects referenced here should be the same, so I don't think that will make a difference.
One potential issue is that if a specific sampler implementation handles the pool internally itself, this will create two pools, and then, in the best case, we have a bunch of extra processes we don't need. I think this is what nessai does, so maybe we should game out that specific case.
Coming back to this, would it make sense to have an update_pool method or similar? That could have some additional logic to prevent multiple pools etc.
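A minimal sketch of what such an `update_pool` method could look like; the `Sampler` class and attribute names here are illustrative, not bilby's actual API, and the guard logic is one possible way to prevent the double-pool situation discussed above:

```python
# Hypothetical sketch, not bilby's implementation.
class Sampler:
    def __init__(self, pool=None):
        self.pool = pool

    def update_pool(self, pool):
        """Attach a pool, refusing to silently replace an existing one.

        This avoids ending up with two live pools when both the caller
        and the sampler implementation construct their own.
        """
        if self.pool is not None and pool is not self.pool:
            raise RuntimeError(
                "Sampler already has a pool; close it before "
                "attaching a new one"
            )
        self.pool = pool
```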
```python
    npool=None,
    pool=None,
    parameters=None,
):
```
I think this would benefit from a doc-string with example usage.
```python
    npool=None,
    pool=None,
    parameters=None,
):
```
This would benefit from a docstring. In particular, I think we should make clear what happens if npool is None or 1.
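A sketch of the kind of docstring being requested; the described behaviour for `npool` is an assumption to be checked against the implementation, and the signature simply mirrors the one quoted in the diff:

```python
def create_pool(likelihood=None, npool=None, pool=None, parameters=None):
    """Set up a multiprocessing pool for likelihood evaluation.

    Parameters
    ----------
    npool: int, optional
        Number of worker processes. If None or 1, no pool is created
        and evaluation falls back to the serial built-in ``map``
        (assumed behaviour; confirm against the implementation).
    pool: multiprocessing.Pool, optional
        An existing pool to reuse instead of creating a new one.
    likelihood, parameters: optional
        Objects made available to the workers at initialization.
    """
```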
```python
my_pool = create_pool(likelihood=this_logl, npool=npool)
if my_pool is None:
    map_fn = map
else:
    map_fn = partial(my_pool.imap, chunksize=chunksize)
likelihood_fn = partial(_safe_likelihood_call, this_logl)

log_l = list(tqdm(
    map_fn(likelihood_fn, dict_samples[starting_index:]),
    desc='Computing likelihoods',
    total=n,
))
close_pool(my_pool)
```
Any reason to use open/close over the context manager?
I suggest switching to a context manager then, since I think it's more robust to exceptions.
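One way to get that robustness is to wrap the existing open/close helpers in a `contextlib.contextmanager`, so the pool is closed even if the likelihood loop raises. This is a sketch under the assumption that `create_pool` returns None for serial execution; the helper names stand in for the ones in the diff:

```python
from contextlib import contextmanager
from multiprocessing import Pool


def create_pool(npool=None):
    # Assumed behaviour: None or 1 means serial execution, no pool.
    if npool is None or npool == 1:
        return None
    return Pool(processes=npool)


def close_pool(pool):
    if pool is not None:
        pool.close()
        pool.join()


@contextmanager
def pool_context(npool=None):
    """Yield a pool (or None for serial) and always clean it up."""
    pool = create_pool(npool)
    try:
        yield pool
    finally:
        # Runs even if the body raises, unlike a bare close_pool call.
        close_pool(pool)
```

Usage mirrors the quoted hunk: `with pool_context(npool) as my_pool:` and then `map_fn = map if my_pool is None else my_pool.map`.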
ColmTalbot left a comment
I don't remember exactly what the behaviour for some of these questions is, so I'll go back and check, and ideally write docstrings about them.
mj-will left a comment
Forgot to submit my comments as a review, but see them above.
```python
_pool = multiprocessing.Pool(
    processes=npool,
    initializer=_initialize_global_variables,
```
I think it would be useful to allow users to specify their own (additional) initializer.
For nessai, there's another variable that needs to be global for the pool to work, and it can't be initialized unless using the fork start method.
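One way to support this is to chain a user-supplied initializer after the internal one. In this sketch `_initialize_global_variables` is a stand-in for the internal initializer quoted above, and `_chain_initializers` is a hypothetical helper, not bilby's API:

```python
import multiprocessing

# Per-worker state; populated by the initializers in each worker process.
_WORKER_STATE = {}


def _initialize_global_variables(likelihood):
    # Placeholder for the internal per-worker setup.
    _WORKER_STATE["likelihood"] = likelihood


def _chain_initializers(user_initializer, user_initargs, *internal_args):
    # Run the internal setup first, then the user's extra setup.
    _initialize_global_variables(*internal_args)
    if user_initializer is not None:
        user_initializer(*user_initargs)


def _user_init(flag):
    # Example of the extra global setup a user (e.g. nessai) might need.
    _WORKER_STATE["flag"] = flag


def _read_state(_):
    return _WORKER_STATE.get("flag")


def demo(n=4):
    with multiprocessing.Pool(
        processes=2,
        initializer=_chain_initializers,
        initargs=(_user_init, ("ready",), "dummy-likelihood"),
    ) as pool:
        return pool.map(_read_state, range(n))
```

Note that with the fork start method the user initializer can also rely on inherited globals, which matters for the nessai case mentioned above.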
I noticed that it was difficult to pass a user-specified pool through `run_sampler`, and it isn't used at all in the post processing. This PR:
- `with multiprocessing.pool()...`
- `Sampler._setup_pool` and `Sampler._close_pool`
- `parallel_bilby` ... moot.