LLM fine-tuning with LoRA + NVFP4/MXFP8 on NVIDIA DGX Spark (Blackwell GB10)
Updated Dec 22, 2025 - Python
🔧 Fine-tune large language models efficiently on the NVIDIA DGX Spark (Blackwell GB10) using LoRA adapters together with NVFP4/MXFP8 quantization for high throughput and a small memory footprint.
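The core idea behind LoRA is to freeze the base weight matrix and train only a low-rank update, so the adapted layer computes y = xWᵀ + (α/r)·xAᵀBᵀ. A minimal NumPy sketch of that formulation (the shapes, init scheme, and `alpha`/`r` values here are illustrative assumptions, not taken from this repository):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Frozen base projection plus scaled low-rank adapter update."""
    # y = x W^T + (alpha / r) * x A^T B^T
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2                    # toy dimensions (assumed for illustration)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))   # trainable down-projection, small random init
B = np.zeros((d_out, r))                    # trainable up-projection, zero init
x = rng.standard_normal((1, d_in))

# With B initialized to zero, the adapter contributes nothing,
# so the adapted layer reproduces the frozen base layer exactly.
base = x @ W.T
adapted = lora_forward(x, W, A, B, alpha=16, r=r)
assert np.allclose(base, adapted)
```

Only `A` and `B` receive gradients during fine-tuning (r·(d_in + d_out) parameters instead of d_in·d_out), which is what makes LoRA combine well with low-precision base weights such as NVFP4/MXFP8: the frozen matrix can stay quantized while the small adapter trains in higher precision.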