Abstract
A 24-billion-parameter model, trained on synthetic data, generates abstractive summaries with inline citations and outperforms larger models in accuracy and verifiability.
Large language models frequently generate plausible but unfaithful summaries that users cannot verify against source text, a critical limitation in compliance-sensitive domains such as government and legal analysis. We present sui-1, a 24B parameter model that produces abstractive summaries with inline citations, enabling users to trace each claim to its source sentence. Our synthetic data pipeline combines chain-of-thought prompting with multi-stage verification, generating over 22,000 high-quality training examples across five languages from diverse sources including parliamentary documents, web text, and Wikipedia. Evaluation shows sui-1 significantly outperforms all tested open-weight baselines, including models with 3x more parameters. These results demonstrate that task-specific training substantially outperforms scale alone for citation-grounded summarization. Model weights and an interactive demo are publicly available.
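To make the idea of citation-grounded summaries concrete, here is a minimal, hypothetical sketch of what tracing claims back to source sentences could look like. The abstract does not specify sui-1's citation markup, so the "[n]" scheme, the example sentences, and the `cited_ids` helper below are all assumptions for illustration, not the model's actual output format.

```python
import re

# Hypothetical source document: each sentence gets a numeric ID (assumed format).
source_sentences = {
    1: "The committee approved the budget amendment on 12 March.",
    2: "Two members abstained from the vote.",
    3: "The amendment increases infrastructure spending by 4 percent.",
}

# Hypothetical abstractive summary with inline "[n]" citations into the source.
summary = (
    "The budget amendment, which raises infrastructure spending by 4 percent [3], "
    "was approved in March [1]."
)

def cited_ids(text: str) -> list[int]:
    """Extract the sentence IDs referenced by inline [n] citations."""
    return [int(m) for m in re.findall(r"\[(\d+)\]", text)]

# A minimal verifiability check: every citation must point at a real source
# sentence, so a reader (or an automated filter) can trace each claim to its evidence.
for sid in cited_ids(summary):
    assert sid in source_sentences, f"citation [{sid}] has no matching source sentence"
    print(f"[{sid}] -> {source_sentences[sid]}")
```

A structural check of this kind is the sort of filter a multi-stage verification pipeline could apply to synthetic training examples before keeping them, though the paper's actual verification stages are not detailed in the abstract.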
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- GenProve: Learning to Generate Text with Fine-Grained Provenance (2026)
- InstructLR: A Scalable Approach to Create Instruction Dataset for Under-Resourced Languages (2025)
- Enhancing Long Document Long Form Summarisation with Self-Planning (2025)
- PolicyBot - Reliable Question Answering over Policy Documents (2025)
- Disco-RAG: Discourse-Aware Retrieval-Augmented Generation (2026)
- SiamGPT: Quality-First Fine-Tuning for Stable Thai Text Generation (2025)
- DocVAL: Validated Chain-of-Thought Distillation for Grounded Document VQA (2025)
Models citing this paper 2
Datasets citing this paper 0
Spaces citing this paper 1
Collections including this paper 0
