From 09e5e7885c21290b88f6ca0934a699618daed1b3 Mon Sep 17 00:00:00 2001
From: HongCheng
Date: Wed, 28 Feb 2024 12:10:27 +0900
Subject: [PATCH] add OpenDiT

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 15bf252..e14c114 100644
--- a/README.md
+++ b/README.md
@@ -70,6 +70,7 @@ The Mini Sora open-source community is positioned as an open-source community organized spontaneously by community members (**
 | 3) **SiT**: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers | [Paper](https://arxiv.org/abs/2401.08740), [Github](https://github.com/willisma/SiT), [ModelScope](https://modelscope.cn/models/AI-ModelScope/SiT-XL-2-256/summary) |
 | 4) **FiT**: Flexible Vision Transformer for Diffusion Model | [Paper](https://arxiv.org/abs/2402.12376), [Github](https://github.com/whlzy/FiT) |
 | 5) **k-diffusion**: Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers | [Paper](https://arxiv.org/pdf/2401.11605v1.pdf), [Github](https://github.com/crowsonkb/k-diffusion) |
+| 6) **OpenDiT**: An Easy, Fast and Memory-Efficient System for DiT Training and Inference | [Github](https://github.com/NUS-HPC-AI-Lab/OpenDiT) |
 |

Video Generation

| **Paper** | **Links** |
| --- | --- |
| 1) **Animatediff**: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning | [Paper](https://arxiv.org/abs/2307.04725), [Github](https://github.com/guoyww/animatediff/), [ModelScope](https://modelscope.cn/models?name=Animatediff&page=1) |