GPU support of llama.cpp for Vivante, through OpenCL #16324
Unanswered
Mallakbasheer asked this question in Q&A
Replies: 1 comment
- This is an odd way to phrase it; there's no way to have a Vulkan implementation without compute pipeline support.
I am working on deploying a small language model at the edge. Our device is equipped with a Vivante GC7000UL GPU and NPU. The GPU can be accessed using OpenCL, and the NPU using OpenVX. We learned from the manufacturer that GPU access through Vulkan does not support the compute pipeline. Is there any way I can run llama.cpp on either the GPU or the NPU?
Thanks
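For context, llama.cpp does ship an OpenCL backend that can be enabled at build time via the `GGML_OPENCL` CMake option. A hedged sketch of a build follows; note that this backend has primarily been developed and tested against Qualcomm Adreno GPUs, so whether the kernels run correctly on a Vivante GC7000UL is an open question, and the model path below is a placeholder.

```shell
# Sketch: building llama.cpp with its OpenCL backend enabled.
# Assumes an OpenCL SDK (headers + ICD loader) is installed on the build host.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_OPENCL=ON
cmake --build build --config Release

# Offload layers to the GPU with -ngl (number of GPU layers).
# model.gguf is a placeholder for whatever quantized model you deploy.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

If the OpenCL kernels misbehave on Vivante hardware, the CPU build remains a fallback; there is currently no OpenVX backend in llama.cpp, so the NPU path would require separate integration work.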