BLOCKv0.6
BLOCKv0.6 is an image-to-image model for converting a 3D Minecraft character preview into a 2D skin texture file.
Compared with BLOCKv0.5, this release switches the base model from FLUX.2-klein-base-9B to FLUX.2-klein-base-4B and fixes the original train/inference mismatch. The merged checkpoint now aligns with the official Flux2KleinPipeline, so you can call Flux2KleinPipeline.from_pretrained(...) directly, without a custom train-order pipeline.
What This Model Does
- Input: a character preview image with front/back views
- Optional control: text prompt for style/details
- Output: a generated Minecraft skin UV atlas
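To make the output format concrete: a classic Minecraft skin is a 64x64 UV atlas with fixed regions for each body part. The coordinates below follow the commonly documented classic (4-pixel-arm) layout and are listed only to illustrate what the model must reproduce; they are not part of this release.

```python
# Front-face regions (x, y, width, height) on the 64x64 classic skin atlas.
# Coordinates follow the commonly documented skin format; illustrative only.
FRONT_FACES = {
    "head":      (8, 8, 8, 8),
    "body":      (20, 20, 8, 12),
    "right_arm": (44, 20, 4, 12),
    "right_leg": (4, 20, 4, 12),
    "left_arm":  (36, 52, 4, 12),
    "left_leg":  (20, 52, 4, 12),
}

def in_bounds(region, size=64):
    """Check that a region fits inside the square atlas."""
    x, y, w, h = region
    return 0 <= x and 0 <= y and x + w <= size and y + h <= size

# Every front face must lie inside the 64x64 texture.
assert all(in_bounds(r) for r in FRONT_FACES.values())
```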
Quick Start
import torch
from diffusers import Flux2KleinPipeline
from PIL import Image

model_id = "mrqx0195/BLOCKv0.6"

pipe = Flux2KleinPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

preview = Image.open("examples/ex1_preview.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="Image-to-image translation using the reference image. The reference shows the same 3D Minecraft character with front and back views in a single image. Generate the corresponding Minecraft skin UV atlas in 64x64 pixel-art UV layout. High-quality anime-style. Flat shading, sharp pixel edges, no blur, no anti-aliasing. Keep consistent UV placement and mapping; match the same character design from the reference. Model type: classic (auto-detected Minecraft player model).",
    image=preview,
    num_inference_steps=30,
    guidance_scale=4.0,
).images[0]
result.save("generated_skin.png")
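If the pipeline returns the image at the input resolution (512x512 above) rather than 64x64, it still needs to be brought down to the skin format before use. A minimal sketch, assuming a square render: nearest-neighbor resampling with PIL keeps flat shading and sharp pixel edges, matching the "no anti-aliasing" requirement in the prompt. The function name is my own, not part of the release.

```python
from PIL import Image

def to_skin_texture(img, size=64):
    """Downsample a square render to the 64x64 skin format.

    Nearest-neighbor resampling avoids blending colors, preserving
    flat shading and hard pixel edges.
    """
    return img.resize((size, size), Image.NEAREST)

# Example with a synthetic image standing in for the generated result:
render = Image.new("RGB", (512, 512), (120, 80, 40))
skin = to_skin_texture(render)
skin.save("generated_skin_64.png")
```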
Example Results
The example images below are reused from the BLOCKv0.5 release for continuity while this model card focuses on the updated 4B checkpoint and official-pipeline inference path.
Notes
- This checkpoint is designed for Minecraft skin img2img generation, not general text-to-image use.
- BLOCKv0.6 is based on FLUX.2-klein-base-4B.
- The train/inference mismatch from BLOCKv0.5 has been fixed, so the official Flux2KleinPipeline now works directly.
- No custom train-order pipeline is required for inference.
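When converting many previews, the long template prompt from the Quick Start can be assembled programmatically. The helper below is a hypothetical convenience, not part of this release; the template text is taken from this card, and "classic" vs. "slim" refers to the two standard Minecraft player models.

```python
def build_prompt(model_type="classic"):
    """Assemble the Quick Start template prompt for a given player model.

    Hypothetical helper; only the template text comes from the model card.
    """
    if model_type not in ("classic", "slim"):
        raise ValueError("model_type must be 'classic' or 'slim'")
    return (
        "Image-to-image translation using the reference image. "
        "The reference shows the same 3D Minecraft character with front and "
        "back views in a single image. Generate the corresponding Minecraft "
        "skin UV atlas in 64x64 pixel-art UV layout. High-quality anime-style. "
        "Flat shading, sharp pixel edges, no blur, no anti-aliasing. "
        "Keep consistent UV placement and mapping; match the same character "
        "design from the reference. "
        f"Model type: {model_type} (auto-detected Minecraft player model)."
    )
```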
Citation
If you use BLOCKv0.6 or results derived from this model, please cite:
@article{guo2026block,
  title={BLOCK: An Open-Source Bi-Stage MLLM Character-to-Skin Pipeline for Minecraft},
  author={Guo, Hengquan},
  journal={arXiv preprint arXiv:2603.03964},
  year={2026},
  url={http://arxiv.org/abs/2603.03964}
}