RPC Communication in Distributed RL Training #303

@Sharad24

Description

There are three ways I can think of to implement distributed training:

  1. Use PyTorch's distributed training infrastructure (e.g. torch.distributed.rpc, as in the title). This would require establishing communication protocols specific to deep RL, and would most likely be implemented entirely in Python unless we find a way around that. A rough sketch follows this list.
  2. Use Reverb:
    • Use TF-based Datasets (@threewisemonkeys-as)
    • Write a PyTorch wrapper that converts the received NumPy arrays to tensors (short-term, up for grabs; see the conversion sketch after this list)
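
For option 1, here is a minimal sketch of what the RPC plumbing could look like with torch.distributed.rpc: a learner process pulls rollouts from actor processes over synchronous RPC. The process names, the `collect_rollout` stand-in, and the tensor shapes are all placeholders, not existing code in this repo:

```python
import os

import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp


def collect_rollout(steps: int) -> torch.Tensor:
    """Stand-in for an actor's environment-interaction loop."""
    return torch.randn(steps, 4)  # fake (steps, obs_dim) batch


def run(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    name = "learner" if rank == 0 else f"actor{rank}"
    rpc.init_rpc(name, rank=rank, world_size=world_size)

    if rank == 0:
        # Learner pulls one rollout from each actor over RPC.
        for actor_rank in range(1, world_size):
            batch = rpc.rpc_sync(f"actor{actor_rank}", collect_rollout, args=(8,))
            print(f"got batch of shape {tuple(batch.shape)} from actor{actor_rank}")

    rpc.shutdown()  # blocks until all outstanding RPC work is done


if __name__ == "__main__":
    world_size = 3  # one learner + two actors
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)
```

The RL-specific protocol questions (push vs. pull, how parameters flow back to actors, batching) sit on top of this and are exactly what we would need to design.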
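For the short-term piece of option 2, here is a rough sketch of the NumPy-to-PyTorch conversion around Reverb's client API. The table name, transition layout, and shapes are placeholder assumptions:

```python
import numpy as np
import reverb
import torch

# In-process Reverb server with one uniform-sampling FIFO table.
server = reverb.Server(tables=[
    reverb.Table(
        name="replay",  # placeholder table name
        sampler=reverb.selectors.Uniform(),
        remover=reverb.selectors.Fifo(),
        max_size=1000,
        rate_limiter=reverb.rate_limiters.MinSize(1),
    )
])
client = reverb.Client(f"localhost:{server.port}")

# An actor would insert (obs, action, reward) transitions like this.
client.insert(
    [np.zeros((4,), np.float32), np.int64(1), np.float32(0.5)],
    priorities={"replay": 1.0},
)

# PyTorch-side wrapper: sample and convert the received NumPy arrays.
for sample in client.sample("replay", num_samples=1):
    obs, action, reward = (torch.from_numpy(np.asarray(x)) for x in sample[0].data)
    print(obs.shape, action.item(), reward.item())

server.stop()
```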
