Z-Image-De-Turbo: A De-Distilled Variant of Z-Image-Turbo

Z-Image-De-Turbo is a de-distilled variant of Tongyi-MAI’s Z-Image-Turbo. The goal behind this version is simple: remove the tight compression created during turbo distillation and reopen the space for training, shaping, and experimenting. Since it is trained directly on images produced by the turbo release, it stays aligned with it while giving creators more freedom.
The model works with both ComfyUI and Diffusers, so it fits easily into most workflows. You can train on it directly, continue building new styles, or run standard inference. No adapters are required, which keeps the setup clean.
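For the Diffusers side, inference can be sketched roughly as follows. This is a minimal sketch, not a confirmed recipe: the generic `DiffusionPipeline` loader, the dtype, and the device choice are assumptions, since the model card does not name a dedicated pipeline class. The guidance and step values follow the 2.0–3.0 CFG and 20–30 step range recommended below.

```python
# Settings drawn from the recommendations in this post; the model id is the
# official repo, everything else is an assumption for illustration.
SETTINGS = {
    "model_id": "ostris/Z-Image-De-Turbo",
    "guidance_scale": 2.5,       # low CFG, within the recommended 2.0-3.0 range
    "num_inference_steps": 25,   # within the recommended 20-30 step range
}


def generate(prompt: str):
    # Imports kept local so the sketch reads even without diffusers installed.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        SETTINGS["model_id"], torch_dtype=torch.bfloat16
    ).to("cuda")
    result = pipe(
        prompt,
        guidance_scale=SETTINGS["guidance_scale"],
        num_inference_steps=SETTINGS["num_inference_steps"],
    )
    return result.images[0]
```

Usage would be as simple as `generate("a watercolor fox in a misty forest").save("out.png")`, with no adapter loading step in between.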
Why It Matters
The original base model was not available at the time this project started, and this version exists to fill that gap. It allows users to explore deeper behavior, try new directions, and push experiments further than what the turbo release normally allows.
For anyone interested in training LoRAs, testing character consistency, or running extended finetunes, this version opens doors that the turbo release keeps closed.
Key Features
| Feature | Details |
|---|---|
| De-distilled structure | Removes compression limits from Z-Image-Turbo. |
| LoRA training support | Works smoothly for new styles and character sets. |
| ComfyUI + Diffusers | Two formats available inside the repo. |
| Inference ready | Works well with low CFG (2.0–3.0) and 20–30 steps. |
| CFG normalization friendly | Responds well to normalized CFG setups. |
| Direct training | Needs no adapter to start. |
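"CFG normalization" in the table refers to rescaling the guided noise prediction so its magnitude stays close to the conditional prediction, which keeps low-CFG outputs from washing out. The sketch below shows one common scheme (per-sample standard-deviation matching, often called guidance rescale) over flat lists of values; the model card does not specify which normalization scheme was used in testing, so treat this as illustrative.

```python
import math


def rescaled_cfg(uncond, cond, scale=2.5, rescale=0.7):
    """Classifier-free guidance with std-based rescaling (illustrative sketch).

    uncond/cond are flat lists standing in for the unconditional and
    conditional noise predictions; rescale=0.0 reduces to plain CFG.
    """
    # Plain classifier-free guidance: push away from the unconditional branch.
    guided = [u + scale * (c - u) for u, c in zip(uncond, cond)]

    def std(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    s_cond, s_guided = std(cond), std(guided)
    if s_guided == 0:
        return guided
    # Blend between the raw guided output and a version whose std matches
    # the conditional prediction, controlled by `rescale`.
    factor = rescale * (s_cond / s_guided) + (1.0 - rescale)
    return [g * factor for g in guided]
```

With `rescale=0.0` this is ordinary CFG; values around 0.5–0.7 are a common starting point when a model responds well to normalized guidance.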
How It Performs
During testing, I found that Z-Image-De-Turbo responds more flexibly than the turbo release. Lower CFG values give crisp outcomes, and higher step counts help it stabilize details without losing structure.
When training LoRAs, it maintains strong alignment with the turbo release, so results feel familiar yet more flexible. It also holds style cues well, even when pushed for longer finetunes.
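Applying a trained LoRA at inference can be sketched with the standard Diffusers LoRA loader. The file name is a placeholder for a LoRA you trained yourself, and the generic pipeline class is an assumption, as above:

```python
def load_with_lora(model_id: str, lora_path: str):
    """Load the de-distilled model and attach a user-trained LoRA.

    Hypothetical sketch: `lora_path` (e.g. "my_style_lora.safetensors") is a
    placeholder, and DiffusionPipeline is the generic Diffusers loader.
    """
    # Imports kept local so the sketch reads even without diffusers installed.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    # Standard Diffusers LoRA loading; no extra adapter stack is required.
    pipe.load_lora_weights(lora_path)
    return pipe.to("cuda")
```

After loading, generation works exactly as in the plain inference sketch, with the same low-CFG, 20–30 step settings.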
It may not match the speed of the turbo setup, but the extra room for shaping and experimenting makes up for it.
My Experience
Using it through zimageturbo.org, I noticed:
- Stable outcomes even at lower CFG
- Better control during LoRA shaping
- More freedom to test unusual prompts
- Good consistency in character-focused work
- Clean outputs when pushing it through 20–30 steps
I especially enjoyed using it for stylized prompts, character shaping, and experimental LoRA sessions. It feels like a model built for people who enjoy exploring and pushing boundaries instead of sticking to a fixed pattern.
If you want a Z-Image experience that gives you more room to grow your own styles, train deeper, and experiment without limits, Z-Image-De-Turbo is a solid choice to explore.
Official Model: https://huggingface.co/ostris/Z-Image-De-Turbo