Fix ML processing that is causing jobs to fail for medium-large jobs #782

@mihow

Description

A continuation of #773

Even after switching to RabbitMQ, our processing jobs are still failing after roughly 100 images.

I believe this is because of the huge responses introduced in #684. They are collected into `results_json` in the `save_results_async` tasks and appear to exceed 30 MB each.

However, we have now also observed this behavior for some large export jobs, which don't have huge result responses (everything is written to a file). More investigation is needed.
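One possible mitigation, sketched below under the assumption that the oversized payload is a JSON-serializable list of per-image results: split the results into chunks whose serialized size stays under a broker-friendly limit before dispatching each save task, so no single message approaches the 30 MB range. The names (`chunk_results`, `MAX_MESSAGE_BYTES`) are illustrative, not from the codebase.

```python
import json

# Hypothetical size limit per task message; kept well below the point
# where we have seen messages fail. Tune against the broker's limits.
MAX_MESSAGE_BYTES = 1_000_000


def chunk_results(results, max_bytes=MAX_MESSAGE_BYTES):
    """Yield lists of result dicts whose JSON-serialized size stays under max_bytes."""
    chunk, size = [], 2  # 2 bytes accounts for the enclosing "[]"
    for item in results:
        item_size = len(json.dumps(item)) + 1  # +1 for the separating comma
        if chunk and size + item_size > max_bytes:
            yield chunk
            chunk, size = [], 2
        chunk.append(item)
        size += item_size
    if chunk:
        yield chunk


# Example: 500 fake detection results of ~1 KB each, split into
# chunks that each serialize to under 100 KB.
results = [{"image_id": i, "detections": "x" * 1000} for i in range(500)]
chunks = list(chunk_results(results, max_bytes=100_000))
```

Each chunk could then be passed to its own `save_results_async` call (or, going further, written to object storage with only a reference sent through the broker), keeping individual message sizes bounded regardless of how many images a job contains.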

Labels

bug: Something isn't working
