[figure: perplexity evals]
* evals calculated with llama.cpp llama-perplexity

mistralai/Ministral-3-8B-Instruct-2512 neopolitized with projected shards and fragments of mistralai/Devstral-Small-2-24B-Instruct-2512.

  • projection method: 2
  • merge method: 0
  • layers: 0-7 [x->y(int)]
  • alpha: 0.85-0.85
  • tensors: attn_q, attn_k, attn_v, attn_o.T
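
For context, the sketch below shows one plausible reading of this recipe: each donor attention tensor is projected down to the base tensor's shape, then linearly blended with a constant alpha. The numbered projection and merge methods are not documented here, so the SVD-based projection, the alpha convention (alpha weighting the base), and the function names `project` and `blend` are all assumptions for illustration.

```python
# Hypothetical sketch of the recipe above, NOT the actual implementation:
# "projection method: 2" and "merge method: 0" are opaque identifiers, so
# truncated SVD and linear interpolation are illustrative stand-ins.
# Assumes the donor tensor is at least as large as the base tensor in both
# dimensions, and that alpha weights the base model.
import numpy as np

def project(donor: np.ndarray, shape: tuple[int, int]) -> np.ndarray:
    """Shrink a donor weight matrix to `shape` via its top singular directions."""
    m, n = shape
    u, s, vt = np.linalg.svd(donor, full_matrices=False)
    r = min(m, n, s.shape[0])
    # Rebuild from the leading r singular directions, cropped to the target shape.
    return (u[:m, :r] * s[:r]) @ vt[:r, :n]

def blend(base: np.ndarray, donor: np.ndarray, alpha: float = 0.85) -> np.ndarray:
    """Constant-alpha linear merge: 0.85 * base + 0.15 * projected donor."""
    return alpha * base + (1.0 - alpha) * project(donor, base.shape)
```

In this reading, `blend` would run only over the attn_q, attn_k, attn_v, and attn_o weight matrices of layers 0-7, with attn_o handled in its transposed orientation (the `.T` suffix), and every other tensor taken unchanged from the base model.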
```
                             8 w  w
8d8b. .d88b .d8b. 88b. .d8b. 8 w w8ww .d88
8P Y8 8.dP' 8' .8 8  8 8' .8 8 8  8   8  8
8   8 `Y88P `Y8P' 88P' `Y8P' 8 8  Y8P `Y88
                  8
```
  • format: GGUF, 16-bit
  • model size: 8B params
  • architecture: mistral3

