A curated collection of research papers, datasets, and resources on style transfer across different domains, from traditional methods to the latest diffusion models.
## Image Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
Neural Style Transfer (NST) | 2016 | CVPR | paper | - |
Fast Style Transfer | 2016 | ECCV | paper | code |
Texture Net | 2016 | arXiv | paper | code |
Instance Normalization | 2016 | arXiv | paper | code |
Universal Style Transfer | 2017 | NeurIPS | paper | - |
Light-weight Style Transfer | 2017 | CVPR | paper | code |
Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization | 2017 | ICCV | paper | code |
Histogram Loss | 2017 | CVPR | paper | code |
Multimodal Style Transfer (AdaIN Extension) | 2018 | CVPR | paper | code |
Avatar-Net | 2018 | CVPR | paper | code |
NST-MetaNet | 2018 | CVPR | - | - |
AMST | 2019 | CVPR | - | - |
SEAN | 2020 | CVPR | - | - |
AdaAttN | 2021 | ICCV | paper | code |
ArtFlow | 2021 | CVPR | paper | code |
CLIPstyler | 2022 | CVPR | paper | code |
ST-RAFS | 2021 | CVPR | - | - |
CSD-AST | 2022 | ICCV | - | - |
RCST | 2023 | arXiv | - | - |
OIT-SD | 2023 | arXiv | - | - |
Puff-Net | 2024 | CVPR | - | - |
S2WAT | 2024 | AAAI | - | - |
EUC-HSTN | 2024 | Heliyon | - | - |
ReLU-Oscillator | 2024 | ESWA | - | - |
AEANet | 2024 | arXiv | - | - |
ANST-AS | 2024 | arXiv | - | - |
StyleMamba | 2024 | arXiv | paper | - |
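Several of the arbitrary style transfer methods in the table above (AdaIN and its extensions) reduce stylization to matching channel-wise feature statistics inside a pretrained encoder. The snippet below is a minimal PyTorch sketch of that adaptive instance normalization step, not any paper's reference implementation; the VGG encoder and learned decoder that surround it in the actual methods are omitted, and all names are illustrative.

```python
import torch


def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization: align the per-channel mean/std of the
    content features with those of the style features.
    Both inputs are assumed to be feature maps of shape (N, C, H, W)."""
    n, c = content_feat.shape[:2]

    def stats(x: torch.Tensor):
        # Per-sample, per-channel mean and std over the spatial positions.
        flat = x.reshape(n, c, -1)
        return flat.mean(dim=2).view(n, c, 1, 1), flat.std(dim=2).view(n, c, 1, 1) + eps

    c_mean, c_std = stats(content_feat)
    s_mean, s_std = stats(style_feat)

    # Normalize away the content statistics, then re-scale/shift with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean


if __name__ == "__main__":
    # Toy check with random "VGG-like" feature maps.
    content = torch.randn(1, 512, 32, 32)
    style = torch.randn(1, 512, 32, 32) * 2.0 + 1.0
    print(adain(content, style).shape)  # torch.Size([1, 512, 32, 32])
```

In the full AdaIN pipeline, `content_feat` and `style_feat` would be relu4_1 activations of a frozen VGG-19 encoder, and a decoder trained with content and style losses maps the result back to an image.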
## GAN-based Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
Generative Image Modeling using Style and Structure Adversarial Networks | 2016 | - | paper | - |
ArtGAN | 2017 | arXiv | paper | code |
ComboGAN | 2017 | CVPR | paper | - |
Face Translation between Images and Videos | 2017 | arXiv | paper | - |
MaskGAN | 2018 | ICLR | paper | - |
Perceptually Optimized GAN | 2018 | arXiv | paper | - |
NICE-GAN | 2020 | CVPR | paper | code |
BlendGAN | 2021 | arXiv | paper | - |
SemanticStyleGAN | 2022 | CVPR | paper | - |
GP-UNIT | 2022 | CVPR | paper | code |
Style-Aware-Discriminator | 2022 | CVPR | paper | code |
ISSA | 2023 | arXiv | paper | code |
zGAN | 2024 | - | paper | - |
## Autoencoder- and Diffusion-based Methods

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
WAE | - | - | paper | - |
Representative Feature Extraction During Diffusion Process | 2024 | arXiv | paper | - |
StyleMamba | 2024 | arXiv | paper | code |
## Diffusion-based Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
StyleTokenizer | 2024 | ECCV | paper | code |
MagicFace | 2024 | arXiv | paper | code |
SCEPTER | 2024 | arXiv | paper | code |
B-LoRA | 2024 | arXiv | paper | code |
CreativeSynth | 2024 | arXiv | paper | code |
FreeStyle | 2024 | arXiv | paper | code |
InstantID | 2024 | arXiv | paper | code |
StyleAligned | 2024 | CVPR | paper | code |
StyleID | 2024 | CVPR | paper | code |
Portrait Diffusion | 2023 | arXiv | paper | code |
ProSpect | 2023 | SIGGRAPH | paper | code |
InST | 2023 | CVPR | paper | code |
## Portrait and Face Stylization

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
APDrawingGAN | 2019 | CVPR | paper | code |
WarpGAN | 2019 | CVPR | paper | code |
AiSketcher | 2020 | IROS | paper | code |
Cartoon-StyleGAN | 2021 | arXiv | paper | code |
SPatchGAN | 2021 | ICCV | paper | code |
StyleCariGAN | 2021 | SIGGRAPH | paper | code |
CariMe | 2021 | TMM | paper | code |
BlendGAN | 2021 | NeurIPS | paper | code |
DynaGAN | 2022 | SIGGRAPH | paper | code |
TargetCLIP | 2022 | ECCV | paper | code |
DCT-Net | 2022 | TOG | paper | code |
GODA | 2022 | NeurIPS | paper | code |
Mind the Gap | 2022 | ICLR | paper | code |
MMFS | 2023 | PG | paper | code |
Fix the Noise | 2023 | CVPR | paper | code |
SSR-Encoder | 2024 | CVPR | paper | code |
InstantStyle | 2024 | arXiv | paper | code |
Pair Customization | 2024 | arXiv | paper | code |
ZePo | 2024 | ACM MM | paper | code |
DoesFS | 2024 | CVPR | paper | code |
## Video Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
ReCoNet | 2018 | - | paper | code |
Learning Linear Transformations | 2019 | CVPR | paper | code |
Layered Neural Atlases | 2021 | - | paper | code |
VToonify | 2022 | - | paper | code |
CCPL | 2022 | - | paper | code |
FateZero | 2023 | - | paper | code |
CAP-VSTNet | 2023 | - | paper | code |
Control A Video | 2023 | - | paper | code |
Rerender A Video | 2023 | - | paper | code |
Style-A-Video | 2023 | - | paper | code |
Hallo1 | 2024 | - | paper | code |
Hallo2 | 2024 | - | paper | code |
Hallo3 | 2024 | - | paper | code |
## Motion Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
RSMT | 2023 | SIGGRAPH | paper | code |
DiffuseStyleGesture | 2023 | - | paper | code |
CAMDM | 2024 | - | paper | code |
Local Motion Phases | 2022 | - | paper | code |
## Text Style Transfer

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
Sequence to Better Sequence | 2017 | ICML | paper | code |
Toward Controlled Generation of Text | 2017 | - | paper | code |
Style Transfer from Non-Parallel Text | 2017 | NeurIPS | paper | code |
Adversarially Regularized Autoencoders | 2018 | - | paper | code |
Delete, Retrieve, Generate | 2018 | - | paper | code |
Style Transfer Through Back-Translation | 2018 | - | paper | code |
Disentangled Representation Learning | 2019 | - | paper | code |
Learning Sentiment Memories | 2018 | - | paper | code |
Unsupervised Controllable Text | 2019 | - | paper | code |
Dual Reinforcement Learning | 2019 | - | paper | code |
## Domain Adaptation

Title | Year | Publish | Paper | Code |
---|---|---|---|---|
One-Shot Domain Adaptation | 2020 | CVPR | paper | - |
Meta Face Recognition | 2020 | - | paper | code |
Cross-Domain Document Detection | 2020 | CVPR | paper | code |
StereoGAN | 2020 | CVPR | paper | - |
Domain Adaptation for Dehazing | 2020 | CVPR | paper | - |
PointDAN | 2019 | - | paper | code |
GCAN | 2019 | CVPR | paper | - |
DCAN | 2020 | - | paper | code |
## Image Datasets

Dataset | Year | Size | Description | Link |
---|---|---|---|---|
Danbooru2017 | 2017 | 1.9TB, 2.94M images | Anime | link |
Chinese Style Transfer | 2018 | 1000 content, 100 style images | Chinese Painting | link |
Stylized ImageNet | 2018 | ~134GB | Style Transfer | link |
WikiArt | 2018 | 42,129 images | Style, Artist, Genre | link |
FFHQ | 2019 | 70,000 images | Human Faces | link |
Dark Zurich Dataset | 2019 | 8,779 images | Night, Twilight, Day | link |
Comic Faces | 2020 | 20K images | Paired Synthetic Comics | link |
iFakeFaceDB | 2020 | 87,000 images | Face Images | link |
Ukiyo-e Faces | 2020 | 5,209 images | Aligned Ukiyo-e Faces | link |
DFFD | 2020 | 299,039 images | Face Manipulation | link |
MetFaces | 2020 | 1,336 images | Artistic Faces | link |
AAHQ | 2021 | 25,000 images | Artistic Faces | link |
StyleGAN Human | 2022 | 40K+ images | Human Generation | link |
DiffusionDB | 2022 | 14M images | Text-to-image | link |
4SKST | 2023 | 25 color, 100 sketches | Sketch Style | link |
JourneyDB | 2023 | 4.4M images | Multimodal Vision | link |
DiffusionFace | 2024 | 600,000 images | Face Forgery | link |
Trailer Faces HQ | 2024 | 187K faces | Facial Expressions | link |
StyleShot | 2024 | - | Style Transfer | link |
## Video Datasets

Dataset | Year | Size | Description | Link |
---|---|---|---|---|
UADFV | 2018 | 100 videos | DeepFake Detection | link |
Deepfake-TIMIT | 2018 | 960 videos | Face Swap Videos | link |
DFFD | 2019 | 300 videos | Diverse Fake Face | link |
Celeb-DF | 2020 | 408 original videos | DeepFake | link |
DFDC | 2020 | 100,000 clips | DeepFake Detection | link |
FaceForensics++ | 2019 | 6000 videos | Swapped Face | link |
ForgeryNet | 2021 | 221,247 videos | Forgery Analysis | link |
FFIW-10K | 2021 | 10,000 videos | Face Forgery | link |
Wild Deepfake | 2020 | 7,314 sequences | Deepfake Detection | link |
## Motion Datasets

Dataset | Year | Size | Description | Link |
---|---|---|---|---|
100STYLE | 2022 | 4M frames | Stylized Motion Capture | link |
Motiondataset | 2023 | 36,673 frames | 3D Motion | link |
## Text Datasets

Dataset | Year | Size | Description | Link |
---|---|---|---|---|
Touchdown | 2020 | 9,326 instructions | Navigation | link |
Yelp | 2020 | 6.99M comments | NLP Corpus | link |
GYAFC Corpus | 2018 | - | Formality Transfer (largest stylistic corpus) | link |
ParaDetox | 2020 | 10,000 toxic sentences | Detoxification | link |
## Contributing

We welcome contributions! Here's how you can help:
- 🐛 Report bugs, broken links, and other issues
- 💡 Suggest corrections or new papers, datasets, and resources
- 🔧 Submit pull requests
- ⭐ Star this repository if you find it helpful!

Feel free to open an issue or submit a pull request for any of the above.
## Citation

If you find this repository useful for your research, please consider citing:
## Acknowledgements

Thanks to all researchers and developers who made their work publicly available.