Non-deterministic outputs with identical inputs (GPTQ-Int4, vLLM 0.18, document extraction)
Description:
Environment
- Model: Qwen3.5-35B-A3B-GPTQ-Int4 (from Qwen/Qwen3.5-35B-A3B-GPTQ-Int4)
- Inference engine: vLLM 0.18.0
- GPU: NVIDIA RTX 5090 (32 GB VRAM)
- OS: Ubuntu 24.04, CUDA 13.0
Problem
We use Qwen3.5-35B-A3B-GPTQ-Int4 for structured data extraction from scanned Russian accounting documents (invoices,
contracts, acts, etc.). The model receives a page image plus a system prompt in Russian and must return strict JSON with
fields such as document number, date, buyer/seller names, INN, KPP, and VAT amount.
The issue: given the exact same inputs, the model produces different outputs across runs.
We repeatedly process the same batch of 37 PDF files with:
- Same system prompt (see below)
- Same user prompt
- Same DPI (230) for PDF→PNG conversion
- Same vLLM parameters
Yet on repeated runs some fields change: the seller name or address may differ slightly, or the VAT amount
appears/disappears. This is a blocker for our use case, since accounting automation requires consistency as much as
accuracy.
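To make the drift concrete, here is a minimal sketch of the field-level diff we look at between two runs (the sample values are illustrative, not real extractions):

```python
def diff_extractions(run_a: dict, run_b: dict) -> dict:
    """Return the fields whose values differ between two extraction runs."""
    keys = set(run_a) | set(run_b)
    return {k: (run_a.get(k), run_b.get(k)) for k in keys if run_a.get(k) != run_b.get(k)}

# Two runs on the same page image: the seller name is stable, but VAT drifts.
run1 = {"продавец": 'ООО "Ромашка"', "ндс": 15000.50}
run2 = {"продавец": 'ООО "Ромашка"', "ндс": None}
print(diff_extractions(run1, run2))  # {'ндс': (15000.5, None)}
```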
vLLM generation parameters
We explicitly set all parameters for maximum determinism:
{
  "model": "document-parser",
  "temperature": 0.0,
  "top_p": 1.0,
  "top_k": 1,
  "seed": 42,
  "chat_template_kwargs": {"enable_thinking": false},
  "max_tokens": 2048
}
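For reference, this is how the request is sent (a standard-library-only sketch; the endpoint and port match the launch command below, and the payload mirrors the parameters above):

```python
import json
import urllib.request

def build_request(messages: list) -> dict:
    """Build a /v1/chat/completions payload with the deterministic settings above."""
    return {
        "model": "document-parser",
        "messages": messages,
        "temperature": 0.0,
        "top_p": 1.0,
        "top_k": 1,  # forwarded by vLLM's OpenAI-compatible server
        "seed": 42,
        "chat_template_kwargs": {"enable_thinking": False},
        "max_tokens": 2048,
    }

if __name__ == "__main__":
    payload = build_request([{"role": "user", "content": "ping"}])
    req = urllib.request.Request(
        "http://localhost:8001/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```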
vLLM launch command
python -m vllm.entrypoints.openai.api_server \
--model Qwen3.5-35B-A3B-GPTQ-Int4 \
--served-model-name document-parser \
--port 8001 \
--dtype float16 \
--gpu-memory-utilization 0.97 \
--max-model-len 16384 \
--max-num-seqs 12 \
--trust-remote-code \
--enable-chunked-prefill \
--seed 42
System prompt (in Russian; English translation shown here)
The prompt instructs the model to extract structured data from Russian accounting documents and return strict JSON:
You are an expert system for extracting data from Russian accounting documents.
The image is a scanned document. Identify the document type and extract the details of both parties.
Return STRICTLY JSON:
{
"тип_документа": "string",
"номер_документа": "string | null",
"дата_документа": "DD.MM.YYYY | null",
"покупатель": "string | null",
"покупатель_инн": "string | null",
"покупатель_кпп": "string | null",
"покупатель_адрес": "string | null",
"продавец": "string | null",
"продавец_инн": "string | null",
"продавец_кпп": "string | null",
"продавец_адрес": "string | null",
"ндс": "number | null",
"сумма_с_ндс": "number | null"
}
PARTY MAPPING:
- продавец (seller) = the SUPPLIER (the party that ships the goods).
- покупатель (buyer) = the party that receives and pays for the goods.
AMOUNTS:
- ндс — the total VAT amount (a number).
- сумма_с_ндс — the total document amount including VAT.
- Numbers: period as decimal separator, no spaces. Example: 15000.50
RULES:
1. Return ONLY valid JSON.
2. If a value is not found — null.
3. Dates strictly DD.MM.YYYY.
/no_think
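Incidentally, to make "strict JSON" checkable per run, we use a lightweight validator along these lines (a sketch, standard library only) that flags missing or mistyped fields before diffing runs:

```python
# Field -> allowed type; every field except "тип_документа" may also be null.
SCHEMA = {
    "тип_документа": str, "номер_документа": str, "дата_документа": str,
    "покупатель": str, "покупатель_инн": str, "покупатель_кпп": str,
    "покупатель_адрес": str, "продавец": str, "продавец_инн": str,
    "продавец_кпп": str, "продавец_адрес": str,
    "ндс": (int, float), "сумма_с_ндс": (int, float),
}

def validate(payload: dict) -> list:
    """Return a list of problems; an empty list means the JSON matches the schema."""
    errors = [f"missing field: {f}" for f in SCHEMA if f not in payload]
    errors += [f"unexpected field: {f}" for f in payload if f not in SCHEMA]
    for field, typ in SCHEMA.items():
        value = payload.get(field)
        if field in payload and value is not None and not isinstance(value, typ):
            errors.append(f"wrong type for {field}: {type(value).__name__}")
    if payload.get("тип_документа") is None:
        errors.append("тип_документа must not be null")
    return errors
```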
Questions
1. Is fully deterministic output achievable with this model (GPTQ-Int4 quantization) given fixed seed, temperature=0,
top_k=1?
2. Could the MoE routing introduce non-determinism even with greedy decoding? (Expert selection might vary due to
floating-point order of operations)
3. Are there any recommended vLLM settings or model configurations to improve reproducibility?
4. Would the full BF16 version be more deterministic than GPTQ-Int4?
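On question 2, the floating-point concern seems real in principle: addition is not associative, so a kernel that reduces in a different order (e.g., because the batch composition differs between runs) can produce bitwise-different logits even with greedy decoding. A minimal illustration:

```python
# Floating-point addition is not associative: the same three terms summed in a
# different order give different results, because 1.0 is absorbed by 1e16.
terms = [1e16, 1.0, -1e16]

left_to_right = (terms[0] + terms[1]) + terms[2]  # (1e16 + 1.0) loses the 1.0
reordered = (terms[0] + terms[2]) + terms[1]      # cancellation first, then + 1.0

print(left_to_right, reordered)  # 0.0 1.0
```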
Any guidance would be greatly appreciated. Thank you!