
Training times are slower in Serval than in SILNLP #708

@Enkidu93

Description


It looks like silnlp jobs typically train at almost 60 samples/s on average, whereas Serval sits at around 40 samples/s. Although the training code is roughly similar in silnlp and machine.py, it is organized differently and the implementations vary slightly between the codebases. This is a substantial difference, and fixing it would be a major improvement for our users and our hardware load.
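
For reference, a minimal sketch of how a samples/s figure like the ones above could be measured in a generic training loop, independent of either codebase. The `ThroughputMeter` class, the batch size, and the dummy loop below are illustrative assumptions only, not the actual Serval or SILNLP code:

```python
import time


class ThroughputMeter:
    """Tracks average training throughput in samples per second."""

    def __init__(self) -> None:
        self.start = time.perf_counter()
        self.samples = 0

    def update(self, batch_size: int) -> None:
        # Call once per training step with the number of samples processed.
        self.samples += batch_size

    @property
    def samples_per_second(self) -> float:
        elapsed = time.perf_counter() - self.start
        return self.samples / elapsed if elapsed > 0 else 0.0


if __name__ == "__main__":
    meter = ThroughputMeter()
    # Hypothetical training loop: replace with the real dataloader and step.
    for step in range(100):
        batch_size = 32      # assumed batch size, for illustration only
        time.sleep(0.01)     # stand-in for a real forward/backward pass
        meter.update(batch_size)
    print(f"average throughput: {meter.samples_per_second:.1f} samples/s")
```

Logging a running average like this at fixed step intervals in both pipelines would make the 60 vs. 40 samples/s gap easier to localize to a specific part of the training loop.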

Status: Done