Force DeepSeek r1 models to think for as long as you wish

pincente/r1-overthinker

Open In Colab

DeepSeek R1 Overthinker

This app lets you force DeepSeek R1 models to think more deeply by extending their reasoning process. It uses Unsloth-optimized models for better performance and supports unlimited context length (limited only by available VRAM).

The app works by detecting when the model tries to conclude its thoughts too early and replacing those conclusions with prompts that encourage additional reasoning, continuing until the minimum thinking threshold you set is reached.
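The core idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the names `generate_chunk`, `extend_thinking`, and the continuation phrases are assumptions, not the notebook's actual code): watch the stream for the closing `</think>` tag, and if it arrives before the token budget is spent, splice in a phrase that nudges the model to keep reasoning.

```python
import random

THINK_END = "</think>"
CONTINUATIONS = [
    "\nWait, let me reconsider this more carefully.",
    "\nHmm, I should double-check that step.",
]

def extend_thinking(generate_chunk, prompt, min_think_tokens=1000):
    """Keep the model 'thinking' until a minimum token budget is spent.

    generate_chunk(text) is a stand-in for the model's streaming decode
    step: it returns the next chunk of output, or None when finished.
    """
    text = prompt
    spent = 0
    while True:
        chunk = generate_chunk(text)
        if chunk is None:          # model stopped producing output
            return text
        spent += len(chunk.replace(THINK_END, "").split())  # rough token count
        if THINK_END in chunk and spent < min_think_tokens:
            # Premature conclusion: swap the closing tag for a nudge
            # that encourages the model to keep reasoning.
            chunk = chunk.replace(THINK_END, random.choice(CONTINUATIONS))
        text += chunk
        if THINK_END in chunk:     # budget met and model concluded
            return text
```

In the real app the same loop runs against the model's token stream, so the substitution happens transparently mid-generation.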



App by anzorq. If you like it, please consider supporting me:

Buy Me A Coffee



Features

  • 🤔 Force models to think longer and more thoroughly
  • 🔄 Customizable reasoning extensions and thinking thresholds
  • 🎯 Fine-grained control over model parameters (temperature, top-p, etc.)
  • 💭 Visible thinking process with token count tracking
  • 📝 LaTeX support for mathematical expressions
  • 🖥️ Optimized for various VRAM configurations
  • ♾️ Unlimited context length (VRAM-dependent)
  • 🔄 Choose from multiple model sizes (1.5B to 70B parameters)
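To illustrate what the top-p (nucleus sampling) control in the feature list actually does, here is a small self-contained sketch (not the notebook's code): keep only the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, then renormalize before sampling.

```python
def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. probs is a list of probabilities
    indexed by token id."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

Lower `top_p` values prune more of the low-probability tail, making output more focused; temperature works upstream of this, flattening or sharpening the distribution before the cutoff is applied.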

Available Models

You can choose from any of the unsloth-optimized distilled DeepSeek R1 models:

Qwen-based Models

  • DeepSeek-R1-Distill-Qwen-1.5B
  • DeepSeek-R1-Distill-Qwen-7B
  • DeepSeek-R1-Distill-Qwen-14B
  • DeepSeek-R1-Distill-Qwen-32B

LLaMA-based Models

  • DeepSeek-R1-Distill-Llama-8B
  • DeepSeek-R1-Distill-Llama-70B

Choose the model size based on your available VRAM and performance requirements. Larger models generally provide better quality responses but require more VRAM. Qwen and LLaMA architectures may perform differently on various tasks.

Note: You can run models up to 14B parameters on a free Google Colab T4 GPU.
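A rough way to map available VRAM to a model size is sketched below. The thresholds are assumptions (quantization and context length shift them considerably); the only figure the README guarantees is that up to 14B runs on a free Colab T4 (~15 GB).

```python
def pick_model_size(vram_gb: float) -> str:
    """Suggest a distilled R1 model size for a given amount of VRAM.
    Thresholds are rough assumptions, not measured requirements."""
    if vram_gb >= 48:
        return "70B"
    if vram_gb >= 24:
        return "32B"
    if vram_gb >= 15:   # free Colab T4 territory
        return "14B"
    if vram_gb >= 8:
        return "7B"
    return "1.5B"
```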
