Currently, odota/parser just outputs a stream of logs (converted to JSON from the protobuf messages in the replay), and we have a series of JS processors that convert this stream into a parsed data blob. This takes about 3 seconds of CPU time.
Java performance is better than JS, so we could likely save a second or two per parse by doing that processing there.
This also has the potential to simplify our code a bit: instead of `curl <REPLAY_URL> | bunzip2 | curl -X POST <JAVA_PARSER> | node createParsedDataBlob.mjs <MATCH_ID>`, we could just do `curl "<JAVA_PARSER>/parse?replay_url=<REPLAY_URL>&match_id=<MATCH_ID>"` and get back the blob to insert. (The Java code would take care of fetching and unzipping the replay.)
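A minimal sketch of the query handling the proposed `/parse` endpoint would need. Only the endpoint shape (`/parse?replay_url=...&match_id=...`) comes from this issue; the class and method names are illustrative, and the actual fetch/bunzip/parse steps are omitted.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical request parsing for a /parse?replay_url=...&match_id=... endpoint.
public class ParseRequest {
    // Splits a raw query string into decoded key/value pairs.
    public static Map<String, String> parseQuery(String query) {
        Map<String, String> out = new HashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) continue; // skip malformed pairs
            String k = URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8);
            String v = URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8);
            out.put(k, v);
        }
        return out;
    }
}
```

The Java side would then fetch `replay_url`, decompress it, run the parse, and respond with the blob keyed by `match_id`.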
This also makes the parser worker very lightweight, so we would only need one instance of it with high parallelism; we could run it on the backend node and point it at parser URLs, whether those parsers run on Google or an external cloud provider. (Currently, we have many parser nodes that each grab work from the queue individually; each node runs both the JS and Java Docker containers.)
Downside: we would need to add a load balancer if we aren't using a singleton parser instance.
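If a real load balancer feels like overkill, the lightweight worker could spread work over the parser URLs itself. A sketch under that assumption; the class name and URL values are made up for illustration.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin picker over parser URLs, as a stand-in for a
// dedicated load balancer in front of multiple parser instances.
public class ParserPool {
    private final List<String> urls;
    private final AtomicInteger next = new AtomicInteger();

    public ParserPool(List<String> urls) {
        this.urls = List.copyOf(urls); // immutable snapshot
    }

    // Thread-safe: each call advances the counter and wraps around.
    public String nextUrl() {
        int i = Math.floorMod(next.getAndIncrement(), urls.size());
        return urls.get(i);
    }
}
```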
EDIT: We can actually decouple the architectural change from the porting: we could just install Node in the Java parser image and have Java call the JS code as an external process. We wouldn't get the perf improvement, but that could be done separately.
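The external-process approach above could look roughly like the sketch below. The method is written generically; for the real setup the command would presumably be something like `["node", "createParsedDataBlob.mjs", matchId]`, but that invocation and the timeout are assumptions, not confirmed details.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical bridge: Java streams the log lines into a child process's
// stdin and returns its stdout (the parsed data blob).
public class JsBridge {
    public static String pipeThrough(List<String> command, InputStream input)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        // Pass the child's stderr through instead of buffering it, so a
        // chatty child can't fill the pipe and deadlock us.
        pb.redirectError(ProcessBuilder.Redirect.INHERIT);
        Process p = pb.start();
        try (OutputStream stdin = p.getOutputStream()) {
            input.transferTo(stdin); // stream the logs in, then close stdin
        }
        String output = new String(p.getInputStream().readAllBytes());
        if (!p.waitFor(60, TimeUnit.SECONDS) || p.exitValue() != 0) {
            throw new IOException("child process failed: " + command);
        }
        return output;
    }
}
```

This keeps the JS processors unchanged while the HTTP-facing piece moves into the Java image; porting them later would only replace the body of this call.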