zhoujiaming777 committed
Commit 2748cf0 · verified · 1 Parent(s): 05647f1

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -5,5 +5,7 @@ license: cc-by-nc-sa-4.0
 [![arXiv](https://img.shields.io/badge/Paper-arXiv-red.svg)](https://arxiv.org/abs/2507.18452)
 [![deploy](https://img.shields.io/badge/Hugging%20Face-DIFFA-FFEB3B)](https://huggingface.co/zhoujiaming777/DIFFA)
 [![Github](https://img.shields.io/badge/Github-DIFFA-blue)](https://github.com/NKU-HLT/DIFFA)
+
+
 **DIFFA** is the first **diffusion-based large audio-language model** for spoken language understanding.
 It combines a frozen diffusion LLM with **dual adapters** (semantic + acoustic) to enhance **audio perception and reasoning**.
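The README excerpt above describes the design only at a high level: a frozen diffusion LLM fed by two trainable adapters, one for semantic and one for acoustic audio features. As a rough illustration of that wiring (a minimal sketch, not DIFFA's actual code; the module names, feature dimensions, and `inputs_embeds` interface are assumptions):

```python
import torch
import torch.nn as nn

class DualAdapterAudioLM(nn.Module):
    """Illustrative sketch only: two adapters project semantic and acoustic
    audio features into a frozen language model's embedding space.
    Names, dimensions, and the forward interface are placeholders."""

    def __init__(self, frozen_llm: nn.Module, audio_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.llm = frozen_llm
        for p in self.llm.parameters():
            p.requires_grad = False  # the diffusion LLM stays frozen; only the adapters train
        # Semantic adapter: features from a speech encoder carrying linguistic content
        self.semantic_adapter = nn.Sequential(
            nn.Linear(audio_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))
        # Acoustic adapter: lower-level features (prosody, speaker, paralinguistics)
        self.acoustic_adapter = nn.Sequential(
            nn.Linear(audio_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))

    def forward(self, semantic_feats, acoustic_feats, text_embeds):
        # Project both audio streams to "soft tokens" and prepend them to the text embeddings
        audio_tokens = torch.cat([self.semantic_adapter(semantic_feats),
                                  self.acoustic_adapter(acoustic_feats)], dim=1)
        return self.llm(inputs_embeds=torch.cat([audio_tokens, text_embeds], dim=1))
```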