arxiv:2512.02901

Fairy2i: Training Complex LLMs from Real LLMs with All Parameters in {±1, ±i}

Published on Dec 2
Submitted by PKU-DS-LAB on Dec 15

Abstract

Fairy2i converts pre-trained real-valued models to complex form, enabling efficient low-bit quantization while maintaining performance.

AI-generated summary

Large language models (LLMs) have revolutionized artificial intelligence, yet their massive memory and computational demands necessitate aggressive quantization, increasingly pushing representations toward the theoretical limit of a single bit. While complex-valued LLMs, such as iFairy, offer a superior chance for low-bit representation compared to real-valued counterparts, they require training from scratch, preventing the utilization of the vast ecosystem of pre-trained real-valued foundation models. Here we present Fairy2i, a universal framework that transforms pre-trained real-valued layers into an equivalent widely-linear complex form, enabling extremely low-bit quantization while reusing existing checkpoints. By proving a lossless mathematical equivalence between real and widely-linear maps, we convert standard Transformers into the complex domain and employ a phase-aware quantization scheme with a highly efficient codebook of fourth roots of unity. Furthermore, we introduce a recursive residual quantization mechanism that iteratively minimizes quantization error, allowing inference to proceed via efficient multiplication-free accumulation. We demonstrate that Fairy2i restores the performance of LLaMA-2 7B at an effective 2-bit precision to levels nearly comparable with full-precision baselines, significantly outperforming state-of-the-art real-valued binary and ternary quantization methods. This work bridges the gap between the representational efficiency of complex-valued arithmetic and the practical utility of pre-trained models, paving a new way for efficient inference on commodity hardware.

Community

Paper submitter

Is it possible to run LLMs at 2-bit with virtually NO loss in accuracy? 🤔
Not with real numbers, but yes with complex ones!

🚀 Meet Fairy2i-W2 (2-bit):

QAT from LLaMA-2 7B with complex phase quantization
PPL: 7.85 (vs. 6.63 for FP16)
Accuracy: 62.00% (vs. 64.72% for FP16)

[Figure: evaluation results]
But isn't LLaMA real-valued? Yes, but we built a bridge. 🌉
We prove a mathematical equivalence: any real linear layer can be losslessly re-parameterized into a widely-linear complex form.
Which means no retraining from scratch is needed, and existing checkpoints can be reused!

[Figure: widely-linear re-parameterization]
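For concreteness, here is a minimal numpy sketch of one such re-parameterization: a real 2m x 2n weight matrix is split into four blocks, repacked into two complex matrices A and B, and the widely-linear map A z + B conj(z) reproduces the original real layer exactly. The block pairing and variable names here are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                          # complex output/input dims (real dims are 2m, 2n)
W = rng.standard_normal((2 * m, 2 * n))
x = rng.standard_normal(2 * n)

# Split the real map into blocks acting on the "real" and "imaginary" halves of x.
W11, W12 = W[:m, :n], W[:m, n:]
W21, W22 = W[m:, :n], W[m:, n:]

# Widely-linear parameters so that y = A z + B conj(z).
A = 0.5 * ((W11 + W22) + 1j * (W21 - W12))
B = 0.5 * ((W11 - W22) + 1j * (W21 + W12))

z = x[:n] + 1j * x[n:]               # pack the real input into a complex vector
y_wl = A @ z + B @ np.conj(z)        # widely-linear complex layer
y_real = W @ x                       # original real-valued layer

# The two computations agree exactly (up to floating-point error).
assert np.allclose(y_wl, y_real[:m] + 1j * y_real[m:])
```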
Another secret sauce: Recursive Residual Quantization. 🎯

Instead of quantizing just once, we also quantize the remaining error to wipe out the quantization noise.

Best part? These stages are data-independent, so they run in PARALLEL.
You get high accuracy with virtually NO latency penalty.

[Figure: recursive residual quantization]
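A minimal sketch of the idea, assuming a two-stage scheme over the {+1, -1, +i, -i} codebook with one least-squares scale per stage (the scale choice, stage count, and function names are illustrative, not necessarily the paper's):

```python
import numpy as np

CODEBOOK = np.array([1, -1, 1j, -1j])   # fourth roots of unity

def phase_quantize(W):
    """Snap each complex weight to the nearest fourth root of unity, with one real scale."""
    idx = np.argmin(np.abs(W[..., None] - CODEBOOK), axis=-1)   # nearest phase quarter
    Q = CODEBOOK[idx]
    s = np.mean(np.real(np.conj(Q) * W))    # least-squares scale (illustrative choice)
    return s, Q

def recursive_quantize(W, stages=2):
    """Approximate W as sum_k s_k * Q_k by repeatedly quantizing the leftover error."""
    residual, out = W.copy(), []
    for _ in range(stages):
        s, Q = phase_quantize(residual)
        out.append((s, Q))
        residual = residual - s * Q         # the next stage only sees the remaining error
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
stages = recursive_quantize(W, stages=2)
W_hat = sum(s * Q for s, Q in stages)
print("error after 1 stage :", np.linalg.norm(W - stages[0][0] * stages[0][1]))
print("error after 2 stages:", np.linalg.norm(W - W_hat))
```

Each stage only looks at the current weight residual, not at activations, which is why the stages can be computed independently and evaluated in parallel at inference time.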
But isn't complex arithmetic slow? 🤔
Not with Fairy2i. Since the weights are quantized to the unit circle {±1, ±i}, we achieve multiplication-free inference: heavy matrix multiplications turn into simple adds, subtracts, and swaps.
This efficiency is key for running LLMs on edge devices.
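A minimal sketch of why this works: when every weight is one of {+1, -1, +i, -i}, multiplying a complex activation by a weight is just a sign flip and/or a real/imaginary swap, so a matrix-vector product reduces to additions. The 0..3 code encoding below is an illustrative assumption.

```python
import numpy as np

# Code values 0..3 stand for the weights +1, -1, +i, -i respectively (illustrative encoding).
def mul_free_matvec(codes, z):
    """Accumulate W @ z using only additions, subtractions, and real/imag swaps."""
    out = np.zeros(codes.shape[0], dtype=complex)
    for i, row in enumerate(codes):
        acc_re = acc_im = 0.0
        for c, zr, zi in zip(row, z.real, z.imag):
            if c == 0:        # weight +1: add as-is
                acc_re += zr; acc_im += zi
            elif c == 1:      # weight -1: subtract
                acc_re -= zr; acc_im -= zi
            elif c == 2:      # weight +i: i*(zr + i*zi) = -zi + i*zr  (swap, flip real)
                acc_re -= zi; acc_im += zr
            else:             # weight -i: -i*(zr + i*zi) = zi - i*zr  (swap, flip imag)
                acc_re += zi; acc_im -= zr
        out[i] = acc_re + 1j * acc_im
    return out

rng = np.random.default_rng(0)
codes = rng.integers(0, 4, size=(3, 5))
W = np.array([1, -1, 1j, -1j])[codes]            # the same weights as complex numbers
z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
assert np.allclose(mul_free_matvec(codes, z), W @ z)   # identical result, no multiplies
```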
We've only scratched the surface (QAT on just 30B tokens).
We believe that with more training data, surpassing the full-precision model is just around the corner.
Resources:
arXiv: https://arxiv.org/pdf/2512.02901
Hugging Face: https://huggingface.co/PKU-DS-LAB/Fairy2i-W2
GitHub: https://github.com/PKULab1806/Fairy2i-W2
