How to Load and Test the SecAlign Trained Models? #5
I tried to run the Mistral Instruct example with `python setup.py --instruct`. I loaded the base model and merged it with the LoRA adapter, but I got an error.
Here is my code:
Separately, when I test the Llama3-8B-Instruct (0.8 GB) LoRA adapters and merge them with the base model, I get a different error.
Hi SecAlign Team,
Thank you for the great work! I wanted to test the SecAlign trained models, but I’m unsure how to properly load and run them.
Do you provide a Colab notebook or any scripts for easy testing? If not, could you share the best way to load and run inference on these models?
Looking forward to your guidance. Thanks!
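While waiting for an official script, the usual pattern for models released as LoRA adapters is to load the base model, apply the adapter with `peft`, and merge the weights before inference. Below is a minimal sketch of that pattern using Hugging Face `transformers` and `peft`; the base model ID and adapter path are placeholders, not the repo's actual artifact names, and the real SecAlign checkpoints may require different identifiers or dtypes.

```python
# Hypothetical sketch: merge a SecAlign LoRA adapter into its base model.
# The model ID and adapter path below are placeholders (assumptions),
# not confirmed names from the SecAlign repository.

def load_secalign_model(base_id: str, adapter_path: str):
    """Load the base model, apply the LoRA adapter, and merge the weights."""
    # Imports are local so the sketch can be read without the libraries installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, adapter_path)
    model = model.merge_and_unload()  # bake the LoRA deltas into the base weights
    return tokenizer, model


if __name__ == "__main__":
    # Placeholder identifiers -- substitute the actual base model and adapter.
    tokenizer, model = load_secalign_model(
        "mistralai/Mistral-7B-Instruct-v0.1", "path/to/secalign-lora-adapter"
    )
    prompt = "Summarize the following paragraph."
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the errors above occur during `merge_and_unload()`, a common cause is a mismatch between the adapter's recorded base model and the base model actually loaded, so it is worth confirming the two match exactly.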