
✅ Jupyter + Hugging Face Local Model Setup (Mac M3 Optimized)

This guide sets up Python 3.11, Jupyter Notebook, and local Hugging Face model loading on a Mac M3, using pyenv for the interpreter and uv for the virtual environment and packages.


1. Install Prerequisites

Install Homebrew (if not installed)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Install pyenv and uv using Homebrew

brew install pyenv uv

Configure pyenv in shell

echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zprofile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zprofile
echo 'eval "$(pyenv init --path)"' >> ~/.zprofile
source ~/.zprofile

2. Install Python 3.11.9

pyenv install 3.11.9
pyenv global 3.11.9

Confirm Python

which python
# should be ~/.pyenv/shims/python
python --version
# should be Python 3.11.9

3. Create Project and .venv with uv

mkdir -p ~/llm/jupyter
cd ~/llm/jupyter
uv venv

4. Add requirements.txt

Save the following file as requirements.txt in the project folder (jupyter is included so the notebook server installs into the same .venv used in step 6):

torch==2.3.0
numpy==1.26.4
transformers==4.52.4
huggingface-hub==0.33.0
datasets==3.6.0
trl==0.14.0
jinja2==3.1.2
markupsafe==2.0.1
tabulate==0.9.0
pandas==2.3.0
jupyter

5. Install all dependencies via uv

uv pip install -r requirements.txt

✅ Verify all packages are installed

uv pip list

Look for: torch, transformers, pandas, etc.
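Optionally, a short Python check (run with .venv/bin/python) confirms the core imports resolve and that PyTorch can see the M3's Metal (MPS) backend; this is a minimal sketch, not part of the original guide:

import torch
import transformers
import pandas

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("pandas:", pandas.__version__)

# On Apple Silicon, PyTorch exposes the GPU through the MPS backend
print("MPS available:", torch.backends.mps.is_available())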


6. Launch Jupyter Notebook

source .venv/bin/activate
jupyter notebook
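Inside a new notebook, one cell is enough to confirm the kernel is running from the project's .venv rather than a global interpreter:

import sys

# Should point into ~/llm/jupyter/.venv
print(sys.executable)
print(sys.version)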

7. Download Model from Hugging Face (CLI option)

First, log in (if required)

huggingface-cli login

Then download the model

huggingface-cli download Qwen/Qwen3-0.6B-Base --local-dir ./models/Qwen/Qwen3-0.6B-Base

(The --local-dir-use-symlinks flag seen in older guides is deprecated in huggingface-hub 0.33 and can be dropped; files are written directly into --local-dir.)
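Alternatively, the download can be scripted with huggingface_hub's snapshot_download, which the CLI wraps; this sketch assumes the same repo and target directory as above:

from huggingface_hub import snapshot_download

# Mirrors the CLI command: downloads every file in the repo into local_dir
snapshot_download(
    repo_id="Qwen/Qwen3-0.6B-Base",
    local_dir="./models/Qwen/Qwen3-0.6B-Base",
)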

8. Load Model in Notebook

from transformers import AutoModelForCausalLM, AutoTokenizer
from pathlib import Path

def load_model_and_tokenizer(model_path, use_gpu=True):
    # Resolve to an absolute path so transformers treats it as a local directory
    resolved_path = str(Path(model_path).resolve())

    # device_map="auto" requires the accelerate package; on an M3 it places
    # the model on the Metal (MPS) backend when available
    model = AutoModelForCausalLM.from_pretrained(
        resolved_path,
        local_files_only=True,
        device_map="auto" if use_gpu else None,
    )

    tokenizer = AutoTokenizer.from_pretrained(
        resolved_path,
        local_files_only=True,
    )

    return model, tokenizer

model, tokenizer = load_model_and_tokenizer("./models/Qwen/Qwen3-0.6B-Base", use_gpu=False)
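A quick follow-up cell can confirm the weights actually loaded and where they live:

# Sanity check on the loaded model
print(f"Parameters: {model.num_parameters():,}")
print("Device:", next(model.parameters()).device)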

9. Test Inference

inputs = tokenizer("Hello, world!", return_tensors="pt")
# Passing the attention mask avoids a generation warning;
# max_new_tokens keeps the output bounded
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
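The same round trip also works through the transformers pipeline API, which handles tokenization and decoding internally; a minimal sketch reusing the objects loaded above:

from transformers import pipeline

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator("Hello, world!", max_new_tokens=20)
print(result[0]["generated_text"])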

✅ You’re Ready!

  • Runs locally on the Mac M3
  • No remote Hugging Face calls at load time (local_files_only=True)
  • Jupyter is isolated in .venv
  • Reproducible with requirements.txt
