- https://github.com/verazuo/jailbreak_llms (jailbreak prompt dataset)
- https://github.com/terminalcommandnewsletter/everything-chatgpt
- https://x.com/dotey/status/1724623497438155031?s=20
- https://github.com/0xk1h0/ChatGPT_DAN
- https://learnprompting.org/docs/category/-prompt-hacking
- https://github.com/MiesnerJacob/learn-prompting/blob/main/08.%F0%9F%94%93%20Prompt%20Hacking.ipynb
- https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516
- https://news.ycombinator.com/item?id=35630801
- https://www.reddit.com/r/ChatGPTJailbreak/
- https://github.com/0xeb/gpt-analyst/
- https://arxiv.org/abs/2312.14302 (Exploiting Novel GPT-4 APIs to Break the Rules)
- https://www.anthropic.com/research/many-shot-jailbreaking (Anthropic's many-shot jailbreaking)
- https://www.youtube.com/watch?v=zjkBMFhNj_g (GPT-4 jailbreaking discussed at the 46-minute mark)
- https://twitter.com/elder_plinius/status/1777937733803225287
- https://github.com/2-fly-4-ai/V0-system-prompt
- https://github.com/daveshap/Claude_Sentience (Claude Sentience, a detailed description of Claude)
- https://github.com/elder-plinius/L1B3RT4S (L1B3RT4S, jailbreaks for all flagship AI models)
- https://github.com/LouisShark/claude-code (claude-code, a decompiled version of the product)
- https://github.com/x1xhlol/v0-system-prompts-models-and-tools (full v0 system prompts, models and tools)
- https://manus.im/share/lLR5uWIR5Im3k9FCktVu0k?replay=1 (Manus jailbreak, session replay)