Encoder-decoder multihead attention CPU optimization #43

Open · wants to merge 1 commit into main

Conversation

NickNickGo (Contributor)

This PR reduces CPU time for encoder-decoder multihead attention by 25-30%. GPU time is reduced by 10%.

  1. Unnecessary reshapes during the EINSUM op are eliminated.
  2. EINSUM logic is converted to a BMM op, avoiding the CPU overhead of EINSUM (a sketch of this change is shown below).

Overall generation time drops from 47.8 to 44.9.
Before/after profiler screenshots are attached.
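A minimal sketch of the kind of change described in item 2, assuming fairseq-style tensor layouts; the variable names and shapes below are illustrative, not the exact code in this PR:

```python
import torch

bsz, num_heads, tgt_len, src_len, head_dim = 2, 8, 5, 7, 64

# Query/key/value already flattened to (bsz * num_heads, seq_len, head_dim).
q = torch.randn(bsz * num_heads, tgt_len, head_dim)
k = torch.randn(bsz * num_heads, src_len, head_dim)
v = torch.randn(bsz * num_heads, src_len, head_dim)

# Before: einsum carries extra CPU-side subscript parsing/dispatch per call.
attn_weights_einsum = torch.einsum("bqd,bkd->bqk", q, k)

# After: a plain batched matmul on the already-flattened tensors,
# with no intermediate reshapes.
attn_weights_bmm = torch.bmm(q, k.transpose(1, 2))

assert torch.allclose(attn_weights_einsum, attn_weights_bmm, atol=1e-6)

# The attention output is likewise a single bmm with the value tensor.
attn_probs = torch.softmax(attn_weights_bmm, dim=-1)
attn_output = torch.bmm(attn_probs, v)  # (bsz * num_heads, tgt_len, head_dim)
```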

NickNickGo requested a review from a team on October 14, 2020 03:03
JiushengChen (Contributor) commented Oct 14, 2020

Good improvement!

  1. Can we do the same for Huggingface and ProphetNet (its implementation is separate)?
  2. Please also update the benchmarks.

yuyan2do (Member) left a comment


Also update the README and the numbers in the benchmark script.

torch.bool), float("-inf"))
else:
#Not supported
Member

add "assert False, reason"

else:
q = q.contiguous().view(tgt_len, bsz * self.num_heads,
Contributor

Why is contiguous needed here?

Contributor (Author)

This was present in the earlier implementation; I didn't touch it since my changes only affect encoder-decoder attention. I agree it is redundant here and will remove it.

Contributor

There are other places using contiguous. Please check whether those can be removed as well.

Contributor (Author)

I just checked. In all the other places, contiguous appears after permute/transpose operations, where it is essential.
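For context on this thread, a minimal standalone sketch (not code from this PR; shapes are illustrative) of why `.contiguous()` matters after a transpose but is redundant on an already-contiguous projection output:

```python
import torch

tgt_len, bsz, num_heads, head_dim = 5, 2, 8, 64

# A tensor produced by transpose is non-contiguous, so .view() on it fails;
# .contiguous() materializes the permuted memory layout first.
x = torch.randn(bsz, tgt_len, num_heads * head_dim).transpose(0, 1)
try:
    x.view(tgt_len, bsz * num_heads, head_dim)
except RuntimeError:
    pass  # view is incompatible with the tensor's strides here
y = x.contiguous().view(tgt_len, bsz * num_heads, head_dim)

# A freshly projected (tgt_len, bsz, embed_dim) tensor is already contiguous,
# so .view() works without the extra .contiguous() call.
q = torch.randn(tgt_len, bsz, num_heads * head_dim)
q = q.view(tgt_len, bsz * num_heads, head_dim)
```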
