# ConSynth-X — Download & Usage
ConSynth-X release v1 is a multi-condition synthetic-augmentation benchmark of
122,156 construction-site images across three sub-datasets, packaged as
Parquet shards with embedded JPEG bytes. See
dataset_description.pdf for full content,
generation pipelines, schema, and known issues.
| Sub | Rows | Annotation | HuggingFace config |
|---|---|---|---|
| `cs10k` | 44,105 | 3-class detection + 4 rule-violation categories + image attributes | `cs10k` |
| `soda_voc` | 56,021 | 15-class detection | `soda_voc` |
| `soda_ktsh` | 22,030 | Caption-only (5 captions/image) | `soda_ktsh` |
## Condition grid preview
Each row shows the same source image transformed across the released conditions
(original, rain_light, rain_heavy, snow_light, snow_heavy,
fog_light, fog_medium, fog_heavy, night / day2night, small).
Empty cells = condition not yet generated for that source id.
- `cs10k` — construction-site detection (3-class + rule violations)
- `soda_voc` — SODA 15-class detection
- `soda_ktsh` — SODA caption sub (5 captions/image)
## Quick start

### 1. Install

```bash
pip install datasets pyarrow pillow huggingface_hub
```
### 2. Stream a single config (no full download)

```python
from datasets import load_dataset

ds = load_dataset("Ben11304/ConSynth-X", "cs10k", split="train", streaming=True)
for row in ds.take(3):
    print(row["image_id"], row["condition"], row["condition_labels"])
```
### 3. Download a single config locally

```python
from datasets import load_dataset

ds = load_dataset("Ben11304/ConSynth-X", "soda_voc")
print(ds)  # DatasetDict with a 'train' split (or whatever HF resolved)
print(ds["train"][0]["image_id"], ds["train"][0]["condition"])
```
### 4. Snapshot-download specific shards via `huggingface_hub`

For granular access without `datasets`:

```python
from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="Ben11304/ConSynth-X",
    repo_type="dataset",
    allow_patterns=["soda_voc/parquet/fog_*/*.parquet"],  # only the fog shards
    local_dir="./consynthx",
)
print(local)
```
CLI form:

```bash
huggingface-cli download Ben11304/ConSynth-X \
  --repo-type dataset \
  --include "cs10k/parquet/rain_heavy/**" \
  --local-dir ./consynthx
```
### 5. Read directly with PyArrow (no `datasets`)

```python
import pyarrow.parquet as pq
from io import BytesIO
from PIL import Image

t = pq.read_table("consynthx/soda_voc/parquet/fog_heavy/fog_heavy.parquet")
print(t.schema)

row = t.slice(0, 1).to_pylist()[0]
img = Image.open(BytesIO(row["image"]))
print(row["image_id"], img.size, row["condition_labels"])
```
## Schema

Common fields across all three subs:

| Field | Type | Description |
|---|---|---|
| `image` | `bytes` | JPEG bytes, byte-identical to the source — never re-encoded |
| `image_id` | `string` | Unique within sub |
| `source_id` | `string \| null` | Original-image identifier (joins back to clean source) |
| `source_dataset` | `string` | `"cs10k"` \| `"soda_voc"` \| `"soda_ktsh"` |
| `condition` | `string` | Primary condition label (`fog_heavy`, `rain_light`, ...) |
| `condition_labels` | `list[string]` | All applicable labels (multi-label filter) |
| `objects` | `list[struct]` | `{class_id: int, class_name: string, bbox: [x1,y1,x2,y2] normalised}` |
| `pipeline` | `struct` | `{method, checkpoint, prompt_template, params}` (`params` is a JSON string) |
| `quality_scores` | `struct \| null` | `{dino_sim, ssim, clip_sim, lpips}`; `null` when not yet computed |
| `quality_alert` | `bool \| null` | `dino_sim < 0.75`; `null` when DINO unavailable |
Sub-specific extensions:
- `cs10k` adds `image_attributes` (`caption`, `illumination`, `camera_distance`, `view`, `quality_of_info`) and `rule_violations` (a list of `{rule_id, bbox, reason}`).
- `soda_ktsh` adds `captions` (a list of 5 natural-language captions per image), populated for 21,789 / 22,030 rows (98.9%). The remaining ~1.1% are rows whose `image_id` does not appear in any caption-bearing source arrow file; they are released with `captions = []`.
Bounding-box conventions:

`objects[].bbox` is `[x1, y1, x2, y2]` normalised to `[0, 1]`, so it is resolution-agnostic and invariant under resize. Convert to a pixel `[x, y, w, h]` box per row by multiplying by the image's `(W, H)`, decoded from the image bytes.
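The conversion described above is pure arithmetic and needs no dataset access; a minimal sketch (the function name `to_pixel_xywh` is illustrative, not part of the dataset's tooling):

```python
def to_pixel_xywh(bbox, width, height):
    """Convert a normalised [x1, y1, x2, y2] box to pixel [x, y, w, h]."""
    x1, y1, x2, y2 = bbox
    return [x1 * width, y1 * height, (x2 - x1) * width, (y2 - y1) * height]

# A 640x480 image with a box covering its right half:
print(to_pixel_xywh([0.5, 0.0, 1.0, 1.0], 640, 480))  # [320.0, 0.0, 320.0, 480.0]
```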
## Recipes

### Decode an image

```python
from io import BytesIO
from PIL import Image

img = Image.open(BytesIO(row["image"]))
img.save("preview.jpg")
```
### Filter by condition

```python
from datasets import load_dataset

ds = load_dataset("Ben11304/ConSynth-X", "soda_voc", split="train")
fog = ds.filter(lambda x: "fog" in x["condition_labels"])  # multi-intensity fog
heavy_rain = ds.filter(lambda x: x["condition"] == "rain_heavy")
```
### Use only high-quality augmentations

`quality_alert = false` means DINOv3 similarity ≥ 0.75 (the synthetic image stays close to its source).

```python
high_quality = ds.filter(lambda x: x["quality_alert"] is False)

# Strict — also drop rows whose DINO scores are missing:
high_quality_only = ds.filter(
    lambda x: x.get("quality_alert") is False
    and x.get("quality_scores") is not None
)
```
### Detection: convert to YOLO format

```python
import os
from io import BytesIO
from PIL import Image

def export_yolo(row, out_dir):
    img = Image.open(BytesIO(row["image"]))
    w, h = img.size
    img_path = os.path.join(out_dir, "images", f"{row['image_id']}.jpg")
    lbl_path = os.path.join(out_dir, "labels", f"{row['image_id']}.txt")
    os.makedirs(os.path.dirname(img_path), exist_ok=True)
    os.makedirs(os.path.dirname(lbl_path), exist_ok=True)
    img.save(img_path, "JPEG", quality=95)
    with open(lbl_path, "w") as f:
        for o in row["objects"]:
            x1, y1, x2, y2 = o["bbox"]  # normalised xyxy
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            bw, bh = x2 - x1, y2 - y1
            f.write(f"{o['class_id']} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")
```
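As a quick sanity check on exported labels, a small inverse helper can parse a YOLO line back into normalised xyxy (`yolo_line_to_xyxy` is a hypothetical helper, not part of the dataset's tooling):

```python
def yolo_line_to_xyxy(line: str):
    """Parse a YOLO label line 'class cx cy w h' back into (class_id, [x1, y1, x2, y2])."""
    cls, cx, cy, bw, bh = line.split()
    cx, cy, bw, bh = map(float, (cx, cy, bw, bh))
    return int(cls), [cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2]

# Round-trips the centre/size form back to corners (up to float rounding):
print(yolo_line_to_xyxy("2 0.500000 0.500000 0.200000 0.400000"))
```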
### cs10k: caption + scene attributes

```python
attrs = row["image_attributes"]
print(attrs["caption"])
print(attrs["illumination"], attrs["camera_distance"], attrs["view"])
```
### cs10k: rule violations

```python
for rv in row["rule_violations"]:
    print(f"rule {rv['rule_id']}: {rv['reason']}")
    if rv["bbox"]:
        print(f"  bbox (normalised xyxy): {rv['bbox']}")
```
### soda_ktsh: 5 captions per image

```python
for caption in row["captions"]:
    print("-", caption)
```
### Decode pipeline params

`pipeline.params` is a JSON-encoded string for forward-compatibility:

```python
import json

params = json.loads(row["pipeline"]["params"])
print(row["pipeline"]["method"], row["pipeline"]["checkpoint"])
print(params)
```
## Citation

```bibtex
@dataset{duong2026consynthx,
  title  = {ConSynth-X: A Large-Scale Synthetic Construction-Site Image Dataset
            for Challenging Field Conditions},
  author = {Duong, Viet Huy and Xiong, Ruoxin},
  year   = {2026},
  url    = {https://huggingface.co/datasets/Ben11304/ConSynth-X}
}
```
Please also cite the upstream datasets:
```bibtex
@misc{chen2025vlm,
  title  = {Are Large Pre-trained Vision Language Models Effective
            Construction Safety Inspectors?},
  author = {Chen, Xuezheng and Zou, Zhengbo},
  year   = {2025}
}
```
```bibtex
@article{duan2022soda,
  title   = {SODA: A large-scale open site object detection dataset for
             deep learning in construction},
  author  = {Duan, Rui and Deng, Hui and Tian, Mao and Deng, Yichuan and Lin, Jiarui},
  journal = {Automation in Construction},
  volume  = {142},
  pages   = {104499},
  year    = {2022},
  doi     = {10.1016/j.autcon.2022.104499}
}
```
## License
- Dataset (Parquet shards with embedded JPEG bytes): CC BY-NC 4.0
- Generation and repack code: Apache-2.0 (separate repository)
Source-data attributions: Construction Site 10k (LouisChen15) and the SODA dataset family (Duan et al., Automation in Construction 2022; SODA-KTSH extension, Deng et al. 2025). Downstream redistribution must respect both the ConSynth-X licence and the upstream source licences.
See dataset_description.pdf for the full data card, including known issues
and ongoing DINO compute coverage.