Performance issue #3
Hi @andreabac3.
The following is the result of the test. TorchCRF (this repo)
The two results show that TorchCRF makes more function calls, and I don't know why TorchCRF calls so many matrix transformation functions (view, squeeze, and unsqueeze); this may be bad.
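For anyone who wants to reproduce this kind of call-count comparison, below is a minimal profiling sketch using Python's built-in cProfile. The tensor shapes, dtypes, and the `pad_idx=0` argument are assumptions for illustration, not the settings used in the test above.

```python
# Minimal profiling sketch (assumed shapes/dtypes; not the original benchmark).
import cProfile

import torch
from TorchCRF import CRF

device = "cuda" if torch.cuda.is_available() else "cpu"
batch_size, seq_len, num_labels = 16, 64, 10

crf = CRF(num_labels, pad_idx=0).to(device)
hidden = torch.randn(batch_size, seq_len, num_labels, device=device)
labels = torch.randint(0, num_labels, (batch_size, seq_len), device=device)
mask = torch.ones(batch_size, seq_len, dtype=torch.bool, device=device)  # mask dtype is an assumption

# Sorting by call count makes any view/squeeze/unsqueeze overhead visible.
cProfile.run("crf.forward(hidden, labels, mask)", sort="ncalls")
```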
I greatly appreciate your work, both for its simplicity of use and for your commitment. I may be wrong, but the library is very slow compared to other packages that do the same job.
I checked, and all tensor operations are performed on the GPU (a GTX 1070).
During training, tqdm reports roughly one iteration every two seconds, which adds up to about 2 hours per epoch. Using other libraries with the same model, I get about 15 minutes per epoch.
I can assure you that both the mask and the CRF layer run on the GPU.
I also tried forcing the calls with .to(device), but, as expected, nothing changed:
```python
self.crflayer = CRF(hparams.num_classes, pad_idx=0).to(device)
self.model.crflayer.forward(outputs, goldLabels, mask).to(device)
```
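(For reference: calling `.to(device)` on the value returned by `forward` only moves the output tensor; what matters is that the layer's parameters and all inputs already live on the GPU. A quick sanity check, reusing the attribute and variable names from the snippet above:)

```python
# Sanity check: the CRF parameters and all inputs should report a CUDA device.
print(next(self.crflayer.parameters()).device)          # e.g. cuda:0
print(outputs.device, goldLabels.device, mask.device)   # should all match
```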