DREAM: Diffusion Rectification and Estimation-Adaptive Models

Turning the top row into the bottom row by adding only three lines of code.


Abstract

We present DREAM, a novel training framework representing Diffusion Rectification and Estimation-Adaptive Models, requiring minimal code changes (just three lines) yet significantly enhancing the alignment of training with sampling in diffusion models. DREAM features two components: diffusion rectification, which adjusts training to reflect the sampling process, and estimation adaptation, which balances perception against distortion. When applied to image super-resolution (SR), DREAM adeptly navigates the tradeoff between minimizing distortion and preserving high image quality. Experiments demonstrate DREAM's superiority over standard diffusion-based SR methods, showing 2 to 3x faster training convergence and a 10 to 20x reduction in the sampling steps needed to achieve comparable or superior results. We hope DREAM will inspire a rethinking of diffusion model training paradigms.

TL;DR:

We propose a novel training framework for diffusion models that enhances alignment between training and sampling, significantly improving image super-resolution efficiency and quality with only three lines of code changes.
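The page itself does not include the code, so as a hedged illustration only, the sketch below shows what a "three-line" training-time modification of this flavor could look like in a standard ε-prediction diffusion training step: run the network once without gradients to get a self-estimate of the noise, blend it with the true noise, and re-create the noisy input and target from that blended noise. The blending weight `lam` and the exact blending form are assumptions for illustration, not the paper's formulas, and `model` is a hypothetical stand-in for the denoising network.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x_t, t):
    # Hypothetical stand-in for a denoising network epsilon_theta(x_t, t);
    # a real implementation would be a trained neural network.
    return 0.9 * x_t

def dream_style_training_step(x0, t, alpha_bar):
    """One epsilon-prediction training step with a DREAM-style rectification.

    The three lines marked (DREAM, illustrative) are the only additions to
    a standard diffusion training step in this sketch.
    """
    eps = rng.standard_normal(x0.shape)
    sqrt_ab = np.sqrt(alpha_bar[t])
    sqrt_1mab = np.sqrt(1.0 - alpha_bar[t])
    x_t = sqrt_ab * x0 + sqrt_1mab * eps            # standard forward diffusion
    eps_pred = model(x_t, t)                        # (DREAM, illustrative) no-grad self-estimate
    lam = sqrt_ab                                   # (DREAM, illustrative) adaptive weight, assumed form
    eps_bar = eps + lam * (eps_pred - eps)          # (DREAM, illustrative) blended noise target
    x_t_bar = sqrt_ab * x0 + sqrt_1mab * eps_bar    # re-noise input with the blended noise
    loss = np.mean((model(x_t_bar, t) - eps_bar) ** 2)
    return loss

alpha_bar = np.linspace(0.999, 0.01, 1000)  # toy noise schedule
x0 = rng.standard_normal((8, 8))            # toy "clean image"
loss = dream_style_training_step(x0, t=500, alpha_bar=alpha_bar)
print(loss)
```

In a real framework the self-estimate would be computed under a no-gradient context (e.g. `torch.no_grad()`), so the extra forward pass does not affect backpropagation.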


Training acceleration

DREAM enables much faster training convergence.


Sampling efficiency

DREAM allows a significant reduction in necessary sampling steps.


How does DREAM compare to standard diffusion training?


Citation

@article{zhou2023dream,
  title = {DREAM: Diffusion Rectification and Estimation-Adaptive Models},
  author = {Zhou, Jinxin and Ding, Tianyu and Chen, Tianyi and Jiang, Jiachen and Zharkov, Ilya and Zhu, Zhihui and Liang, Luming},
  journal = {arXiv preprint arXiv:2312.00210},
  year = {2023},
}

Acknowledgement

The template for this webpage is borrowed from FreeNeRF and RefNeRF. We sincerely thank the authors for their great work.