Fairy2i: Training Complex LLMs from Real LLMs with All Parameters in {±1, ±i}
Abstract
Fairy2i converts pre-trained real-valued models to complex form, enabling efficient low-bit quantization while maintaining performance.
Large language models (LLMs) have revolutionized artificial intelligence, yet their massive memory and computational demands necessitate aggressive quantization, increasingly pushing representations toward the theoretical limit of a single bit. While complex-valued LLMs, such as iFairy, offer a superior chance for low-bit representation compared to real-valued counterparts, they require training from scratch, preventing the utilization of the vast ecosystem of pre-trained real-valued foundation models. Here we present Fairy2i, a universal framework that transforms pre-trained real-valued layers into an equivalent widely-linear complex form, enabling extremely low-bit quantization while reusing existing checkpoints. By proving a lossless mathematical equivalence between real and widely-linear maps, we convert standard Transformers into the complex domain and employ a phase-aware quantization scheme with a highly efficient codebook of fourth roots of unity. Furthermore, we introduce a recursive residual quantization mechanism that iteratively minimizes quantization error, allowing inference to proceed via efficient multiplication-free accumulation. We demonstrate that Fairy2i restores the performance of LLaMA-2 7B at an effective 2-bit precision to levels nearly comparable with full-precision baselines, significantly outperforming state-of-the-art real-valued binary and ternary quantization methods. This work bridges the gap between the representational efficiency of complex-valued arithmetic and the practical utility of pre-trained models, paving a new way for efficient inference on commodity hardware.
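To make the phase-aware codebook concrete, here is a minimal NumPy sketch (our illustration, not Fairy2i's exact algorithm; the function name and the closed-form per-row scale are assumptions) that snaps each complex weight to the nearest fourth root of unity, so every weight is coded with 2 bits plus a shared real scale:

```python
import numpy as np

def phase_quantize(W: np.ndarray):
    """Snap complex weights onto the codebook {+1, +i, -1, -i}.

    Minimal sketch of phase-aware quantization: keep only the phase of
    each entry, rounded to the nearest quarter turn (2 bits per weight),
    plus one real scale per output row. The per-row scaling rule here is
    an assumption for illustration, not necessarily Fairy2i's rule.
    """
    k = np.round(np.angle(W) / (np.pi / 2)) % 4      # codes in {0, 1, 2, 3}
    Q = np.exp(1j * (np.pi / 2) * k)                 # {+1, +i, -1, -i}
    # Real scale s per row minimizing ||W_row - s * Q_row||^2.
    s = np.real(np.sum(np.conj(Q) * W, axis=1)) / W.shape[1]
    return s[:, None] * Q, k.astype(np.uint8), s

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
W_hat, codes, scales = phase_quantize(W)
print("distinct 2-bit codes:", np.unique(codes))
```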
Community
Is it possible to run LLMs at 2-bit with virtually NO loss in accuracy?
No with Real numbers, but Yes with Complex ones!
Meet Fairy2i-W2 (2-bit):
QAT from LLaMA-2 7B with complex phase quantization
PPL: 7.85 (vs FP16's 6.63)
Accuracy: 62.00% (vs FP16's 64.72%)

But isn't LLaMA real-valued? Yes, but we built a bridge.
We prove a mathematical equivalence: Any real linear layer can be losslessly re-parameterized into a "Widely-Linear Complex Form".
Which means no retraining needed!
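Here is a tiny numerical check of what that equivalence looks like (our own sketch, not the paper's code): take a real linear layer acting on the stacked vector [x1; x2], pack the activation as z = x1 + i·x2, and the same output comes out of a widely-linear map y = A z + B z̄ built from the four blocks of W. The block-to-(A, B) formulas below are the standard widely-linear identity; Fairy2i's exact parameterization may differ in details.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A real linear layer acting on the stacked real vector [x1; x2].
W = rng.normal(size=(2 * n, 2 * n))
W11, W12 = W[:n, :n], W[:n, n:]
W21, W22 = W[n:, :n], W[n:, n:]

# Widely-linear re-parameterization: y = A z + B conj(z) with z = x1 + i*x2.
A = 0.5 * ((W11 + W22) + 1j * (W21 - W12))
B = 0.5 * ((W11 - W22) + 1j * (W21 + W12))

x = rng.normal(size=2 * n)
x1, x2 = x[:n], x[n:]
z = x1 + 1j * x2

y_real = W @ x                          # original real-valued layer
y_cplx = A @ z + B @ np.conj(z)         # equivalent widely-linear complex form
assert np.allclose(y_real, np.concatenate([y_cplx.real, y_cplx.imag]))
print("real layer == widely-linear complex form (lossless)")
```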

Another secret sauce: Recursive Residual Quantization.
Instead of quantizing just once, we also quantize the remaining error to wipe out the noise.
Best part? These stages are Data-Independent, so they run in PARALLEL.
You get high accuracy with virtually NO latency penalty.
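A toy version of the idea (illustrative only; the one-stage quantizer below is a stand-in, not the paper's): quantize the weights once, then quantize the error that is left, and at inference feed the SAME activation to every stage and just sum the outputs, which is why the stages can run side by side.

```python
import numpy as np

def quantize_stage(M):
    """One stage: snap each entry to {+1, +i, -1, -i} times a per-row real
    scale (a stand-in for Fairy2i's quantizer, used here for illustration)."""
    Q = np.exp(1j * (np.pi / 2) * (np.round(np.angle(M) / (np.pi / 2)) % 4))
    s = np.real(np.sum(np.conj(Q) * M, axis=1, keepdims=True)) / M.shape[1]
    return s * Q

def residual_quantize(W, num_stages=2):
    """Recursive residual quantization (sketch): each new stage quantizes
    the error left behind by the stages before it."""
    approx, residual = [], W
    for _ in range(num_stages):
        W_hat = quantize_stage(residual)
        approx.append(W_hat)
        residual = residual - W_hat
    return approx

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
stages = residual_quantize(W, num_stages=2)

# Inference: every stage multiplies the SAME activation z, so the stage
# matvecs are independent of one another and can run in parallel; only
# the final accumulation sums them.
z = rng.normal(size=64) + 1j * rng.normal(size=64)
y = sum(W_hat @ z for W_hat in stages)

print("weight error, 1 stage :", np.linalg.norm(W - stages[0]))
print("weight error, 2 stages:", np.linalg.norm(W - sum(stages)))
```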

But isn't complex arithmetic slow?
Not with Fairy2i. Since weights are quantized to the unit-circle codebook $\{\pm 1, \pm i\}$, we achieve Multiplication-Free Inference. Heavy matrix muls turn into simple adds, subs, and swaps.
This efficiency is key for running LLMs on edge devices.
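Why is there nothing to multiply? Because ±1 only flips signs, and ±i just swaps the real and imaginary parts (plus one sign flip). A hypothetical scalar-level sketch (plain Python, written for clarity rather than speed):

```python
def mul_by_code(code, re, im):
    """Multiply the complex activation (re + i*im) by a 2-bit codebook
    weight without any real multiplication: only sign flips and swaps.

    code 0 -> +1, 1 -> +i, 2 -> -1, 3 -> -i.
    """
    if code == 0:            # (+1) * (re + i*im) =  re + i*im
        return re, im
    if code == 1:            # (+i) * (re + i*im) = -im + i*re  (swap + flip)
        return -im, re
    if code == 2:            # (-1) * (re + i*im) = -re - i*im  (flip both)
        return -re, -im
    return im, -re           # (-i) * (re + i*im) =  im - i*re  (swap + flip)

def matvec_row(codes, re_vec, im_vec):
    """One output of a quantized matvec: pure add/sub accumulation.
    A per-row real scale (not shown) would restore magnitudes afterwards."""
    acc_re = acc_im = 0.0
    for c, re, im in zip(codes, re_vec, im_vec):
        r, i = mul_by_code(c, re, im)
        acc_re += r
        acc_im += i
    return acc_re, acc_im

# Example: weights [+1, +i, -1, -i] against activations [1+2i, 3+4i, 5+6i, 7+8i]
print(matvec_row([0, 1, 2, 3], [1, 3, 5, 7], [2, 4, 6, 8]))
```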
We've only scratched the surface (QAT on just 30B tokens).
We believe that with more training data, surpassing the full-precision model is just around the corner.
Resources:
arXiv: https://arxiv.org/pdf/2512.02901
Hugging Face: https://huggingface.co/PKU-DS-LAB/Fairy2i-W2
GitHub: https://github.com/PKULab1806/Fairy2i-W2
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- R2Q: Towards Robust 2-Bit Large Language Models via Residual Refinement Quantization (2025)
- Learning Grouped Lattice Vector Quantizers for Low-Bit LLM Compression (2025)
- ELUTQ: Efficient LUT-Aware Quantization for Deploying Large Language Models on Edge Devices (2025)
- SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs (2025)
- Mixed-Precision Quantization for Language Models: Techniques and Prospects (2025)
- FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic (2025)
- BitSkip: An Empirical Analysis of Quantization and Early Exit Composition (2025)