- Less than 30cm away from where I was a nanosecond ago.
- https://github.com/ckuethe
Starred repositories
This repository collects underwater scene datasets and is updated regularly
Code related to "Fundamentals of Astrodynamics and Applications" 5th ed. by David Vallado
Complete software examples for PyCubed
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
Detects aircraft interceptions, in real time or after the fact.
A complete low-power gamma ray spectrometer that can be used by itself or integrated into other projects
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
Framework for testing vulnerabilities of large language models (LLM).
The most comprehensive prompt hacking course available, recording our progress through a prompt engineering and prompt hacking course.
This repository demonstrates the use of a prompt jailbreak to expose information within a system prompt. Specifically, we target any LLM hosted on HuggingFace Inference Endpoints.
Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and oth…
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Repository for our paper "Frustratingly Easy Jailbreak of Large Language Models via Output Prefix Attacks". https://www.researchsquare.com/article/rs-4385503/latest
[USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malicious content, malicious responses from LLMs, 182 real-world j…
LLM Jailbreak, a collection of prompt injection implementations
This repository contains the code for the paper "Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks" by Abhinav Rao, Sachin Vashishta*, Atharva Naik*, Somak Aditya, a…
Code for our NeurIPS 2024 paper Improved Generation of Adversarial Examples Against Safety-aligned LLMs
A list of red teaming, jailbreaks and specification gaming methods on LLMs
Official repo of paper [Effective and Evasive Fuzz Testing-Driven Jailbreaking Attacks against LLMs (arxiv.org)](https://arxiv.org/abs/2409.14866)
Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794)
The most comprehensive and accurate LLM jailbreak attack benchmark by far
A dataset consisting of 6,387 ChatGPT prompts collected from Reddit, Discord, websites, and open-source datasets (including 666 jailbreak prompts).