
@Susskind115

Add operators corresponding to the paged attention mechanism in InfiniLM:
Paged Attention: compute attention over a non-contiguous kvcache.
Paged Caching: store the kvcache into the allocated physical pages according to the page table. (A sketch of both operators follows below.)
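
To illustrate what the two operators do, here is a minimal NumPy sketch. It is not the InfiniLM operator interface; the names `paged_caching`, `paged_attention`, `block_table`, and `block_size` are assumptions used only to show how a per-sequence page table maps logical token positions to slots in non-contiguous physical pages (single head, single query, no masking or batching).

```python
import numpy as np

def paged_caching(k_cache, v_cache, block_table, seq_len, new_k, new_v, block_size):
    """Append new key/value vectors into the physical pages assigned to this sequence.

    k_cache, v_cache: (num_pages, block_size, head_dim) physical page pools.
    block_table: logical block index -> physical page id for this sequence.
    Returns the updated logical sequence length.
    """
    for i in range(new_k.shape[0]):
        pos = seq_len + i                       # logical token position
        page = block_table[pos // block_size]   # physical page holding this position
        slot = pos % block_size                 # slot inside that page
        k_cache[page, slot] = new_k[i]
        v_cache[page, slot] = new_v[i]
    return seq_len + new_k.shape[0]

def paged_attention(q, k_cache, v_cache, block_table, seq_len, block_size):
    """Single-query attention over a KV cache scattered across physical pages."""
    d = q.shape[-1]
    # Gather the logical KV sequence from non-contiguous pages via the block table.
    num_blocks = (seq_len + block_size - 1) // block_size
    pages = block_table[:num_blocks]
    k = k_cache[pages].reshape(-1, d)[:seq_len]
    v = v_cache[pages].reshape(-1, d)[:seq_len]
    # Standard scaled dot-product attention on the gathered keys/values.
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v

# Toy usage: pages 5, 2, 7 hold this sequence's cache, in logical order.
block_size, num_pages, d = 4, 8, 16
k_cache = np.zeros((num_pages, block_size, d))
v_cache = np.zeros((num_pages, block_size, d))
block_table = np.array([5, 2, 7])
seq_len = paged_caching(k_cache, v_cache, block_table, 0,
                        np.random.randn(6, d), np.random.randn(6, d), block_size)
out = paged_attention(np.random.randn(d), k_cache, v_cache, block_table, seq_len, block_size)
```

In a real kernel the gather and the attention are fused so the paged cache is never copied into a contiguous buffer; the explicit `reshape` here is only for readability.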
