# Normalized Datasets for Korean LLM

Stage 0.5, the normalization output of the Keural Korean LLM pretraining pipeline. Raw text from 38 source datasets is converted into a single unified JSONL schema. No filtering has been applied; this is the clean, structured raw form.
## Quick Stats
| Metric | Value |
|---|---|
| Total documents normalized | 836,302,981 |
| Number of source datasets | 38 |
| Domains | English, Korean, Code, Science |
| Format | JSONL (one JSON object per line) |
| Schema version | v2 |
| Pipeline stage | Stage 0.5 (after download, before filtering) |
| Last updated | 2026-04-20 |
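Since the format is one JSON object per line, a shard can be streamed without loading it into memory. A minimal reader sketch (the file path in the usage comment is illustrative, not an actual shard name):

```python
import json

def iter_docs(path):
    """Yield one normalized document per JSONL line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example: count Korean-domain documents in one shard.
# ko_docs = sum(1 for d in iter_docs("aihub_books.jsonl") if d["domain"] == "korean")
```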
## Where This Fits in the Pipeline

```
Stage 0           Stage 0.5          Stage 1        Stage 2
Raw Download  --> Normalization  --> Filtering  --> Dedup + Shard
~1.5 TB           ~836M docs         ~649M docs     ~549B tokens
                  (YOU ARE HERE)
```
## What Is Normalization?

Each source dataset stores its text under a different format and field name. For example, Gutenberg uses the field `TEXT`, CCNews uses `plain_text`, and StarCoderData uses `content`.

Normalization solves this by:

- Reading each source dataset in its original format
- Extracting the text from its dataset-specific field
- Writing everything into one unified schema with the same field names
- Adding metadata such as `domain`, `language`, `doc_id`, and `license`

After normalization, all downstream stages (filtering, deduplication) operate on one consistent format, regardless of the original source.
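The steps above can be sketched in a few lines. This is a minimal illustration, not the pipeline's actual code: the field map below covers only three of the 38 sources, and the helper name `normalize_row` is an assumption.

```python
import json

# Hypothetical per-dataset field map; the real pipeline covers 38 sources.
SOURCE_FIELDS = {
    "gutenberg": "TEXT",
    "ccnews": "plain_text",
    "starcoderdata": "content",
}

def normalize_row(source_name, domain, language, row, source_file, index):
    """Reshape one source row into the unified v2 schema without touching the text."""
    text_field = SOURCE_FIELDS.get(source_name, "text")
    return {
        "doc_id": f"{source_name}_{index}",
        "source_name": source_name,
        "domain": domain,
        "language": language,
        "text": row[text_field],          # copied verbatim, never cleaned
        "url": row.get("url"),            # null when the source has no URL
        "license": row.get("license", "Unknown"),
        "source_file": source_file,
        "source_index": index,
        "timestamp": row.get("timestamp"),  # publication time, if the source has one
        "processing_version": "v2",
    }

doc = normalize_row("ccnews", "english", "en",
                    {"plain_text": "Some article."},
                    "data/raw/ccnews/part-000.parquet", 42)
print(json.dumps(doc, ensure_ascii=False))
```

Each downstream stage then only needs to know this one shape, which is the whole point of the normalization pass.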
## Source Datasets & Field Mappings

### English Domain

| Dataset Key | HuggingFace Source | Source Field | Docs Normalized | Language |
|---|---|---|---|---|
| `gutenberg` | sedthh/gutenberg_english | `TEXT` | 48,284 | en |
| `openwebtext` | Skylion007/openwebtext | `text` | 8,013,769 | en |
| `ccnews` | stanford-oval/ccnews | `plain_text` | 71,629,440 | en |
| `falcon-refinedweb` | tiiuae/falcon-refinedweb | `content` | 59,839,870 | en |
| `fineweb` | HuggingFaceFW/fineweb | `text` | 56,181,731 | en |
| `fineweb_edu` | HuggingFaceFW/fineweb-edu | `text` | 57,121,167 | en |
| `hplt_english_1` | HPLT/HPLT2.0_cleaned | `text` | 24,520,357 | en |
| `hplt_english_2` | HPLT/HPLT2.0_cleaned | `text` | 16,085,779 | en |
| `hplt_english_3` | HPLT/HPLT2.0_cleaned | `text` | 22,951,515 | en |
| `wikipedia` | wikimedia/wikipedia (config: 20231101.en) | `text` | 6,407,814 | en |
### Korean Domain

| Dataset Key | Source | Source Field | Docs Normalized | Language |
|---|---|---|---|---|
| `namuwiki` | heegyu/namuwiki-extracted | `text` | 565,293 | ko |
| `wikipedia_ko` | lcw99/wikipedia-korean-20240501 | `text` | 515,425 | ko |
| `oscar_ko_only` | lcw99/oscar-ko-only | `text` | 3,675,420 | ko |
| `korean_webtext` | HAERAE-HUB/KOREAN-WEBTEXT | `text` | 1,284,878 | ko |
| `hplt_korean` | HPLT/HPLT2.0_cleaned | `text` | 38,866,835 | ko |
| `culturax_ko` | uonlp/CulturaX | `text` | 17,072,188 | ko |
| `c4_korean` | allenai/c4 | `text` | 15,618,718 | ko |
| `cc100_documents_korean` | singletongue/cc100-documents | `text` | 35,678,358 | ko |
| `fineweb2_korean` | HuggingFaceFW/fineweb-2 | `text` | 59,423,122 | ko |
| `wanjuan_korean` | opendatalab/WanJuan-Korean | `content` | 68,894,937 | ko |
| `aihub_modu` | AIHub — Korean government open data (local) | parsed from AIHub format | 58,997 | ko |
| `aihub_books` | AIHub — Korean government open data (local) | parsed from AIHub format | 5,823 | ko |
| `aihub_online_colloquial` | AIHub — Korean government open data (local) | parsed from AIHub format | 22,859 | ko |
| `aihub_korean_corpus_literature` | AIHub — Korean government open data (local) | parsed from AIHub format | 908 | ko |
### Code Domain

| Dataset Key | HuggingFace Source | Source Field | Docs Normalized | Language |
|---|---|---|---|---|
| `github-top-code` | ronantakizawa/github-top-code | `content` | 1,121,474 | en (code) |
| `codeparrot_clean` | codeparrot/codeparrot-clean | `content` | 5,365,659 | en (code) |
| `starcoderdata` | bigcode/starcoderdata | `content` | 42,191,832 | en (code) |
| `the_stack_c` | bigcode/the-stack | `content` | 21,383,810 | en (code) |
| `the_stack_java` | bigcode/the-stack | `content` | 31,222,087 | en (code) |
| `the_stack_python` | bigcode/the-stack | `content` | 24,214,204 | en (code) |
### Science Domain

| Dataset Key | HuggingFace Source | Source Field | Docs Normalized | Language |
|---|---|---|---|---|
| `arxiv` | KiteFishAI/arxiv-tex-corpus-full | `text` | 1,089,469 | en |
| `open-web-math` | open-web-math/open-web-math | `text` | 6,315,233 | en |
| `peS2o` | allenai/peS2o | `text` | 69,950,435 | en |
| `s2orc` | sentence-transformers/s2orc | `abstract` | 43,180,281 | en |
| `algebraic_stack` | typeof/algebraic-stack | `text` | 3,440,226 | en |
| `fiwebmath_4plus` | HuggingFaceTB/finemath | `text` | 6,699,493 | en |
| `pubmed_abstracts` | casinca/PUBMED_title_abstracts_2019_baseline | `text` | 15,517,555 | en |
| `open_med_text` | ywchoi/OpenMedText | `text` | 127,736 | en |
## Total Documents by Domain
| Domain | Docs Normalized | % of Total |
|---|---|---|
| English | 322,799,726 | 38.6% |
| Korean | 241,683,761 | 28.9% |
| Code | 125,499,066 | 15.0% |
| Science | 146,320,428 | 17.5% |
| Total | 836,302,981 | 100% |
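The percentage column follows directly from the per-domain counts; a quick arithmetic check:

```python
counts = {
    "English": 322_799_726,
    "Korean": 241_683_761,
    "Code": 125_499_066,
    "Science": 146_320_428,
}
total = sum(counts.values())
print(total)  # matches the Total row: 836,302,981
for domain, n in counts.items():
    print(f"{domain}: {100 * n / total:.1f}% of total")
```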
## Normalization Process

```
Source File               Read original format          Extract text from
(HuggingFace / AIHub) --> (JSONL / Parquet / TXT)  -->  dataset-specific field
                                                        (TEXT / text / content / plain_text)
                                                             |
                                                             v
Assign doc_id             Write unified JSONL           Update checkpoint
= source_name + index     with full metadata       -->  (resumable mid-stream)
```
Normalization is resumable: each dataset tracks `line_index` and `doc_id` in a checkpoint file, so if the process is interrupted, it resumes exactly where it left off.
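A checkpointing scheme like the one described might look as follows. This is a sketch under assumptions: the checkpoint file layout and the function names are hypothetical, since the pipeline's internals are not published.

```python
import json
import os

# Hypothetical checkpoint layout: {"line_index": ..., "doc_id": ...}.
def load_checkpoint(path):
    """Return the last committed line index, or -1 when starting fresh."""
    if not os.path.exists(path):
        return -1
    with open(path, encoding="utf-8") as f:
        return json.load(f)["line_index"]

def save_checkpoint(path, line_index, doc_id):
    """Write progress atomically so a crash never leaves a half-written file."""
    tmp = path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump({"line_index": line_index, "doc_id": doc_id}, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def normalize_resumable(rows, ckpt_path, source_name, write_doc):
    """Skip rows already committed in a previous run, then continue."""
    start = load_checkpoint(ckpt_path) + 1
    for i, row in enumerate(rows):
        if i < start:
            continue  # already normalized before the interruption
        doc_id = f"{source_name}_{i}"
        write_doc(doc_id, row)
        save_checkpoint(ckpt_path, i, doc_id)
```

Restarting the process re-reads the checkpoint and continues at the next row, which is what "resumable mid-stream" means in practice.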
What normalization does NOT do:
- Does not modify or clean text content
- Does not filter documents
- Does not deduplicate
- Does not re-encode or translate
- Only reshapes structure and adds metadata
## Unified Document Schema

Every document in this repository follows this exact schema:

```json
{
  "doc_id": "gutenberg_000000042",
  "source_name": "gutenberg",
  "domain": "english",
  "language": "en",
  "text": "The full original text of the document...",
  "url": "https://source-url.com/page (if available, else null)",
  "license": "Public Domain",
  "source_file": "data/raw/gutenberg/train-00000-of-00001.parquet",
  "source_index": 42,
  "timestamp": "2026-03-15T08:22:11Z",
  "processing_version": "v2"
}
```
### Field Descriptions

| Field | Type | Description |
|---|---|---|
| `doc_id` | string | Unique ID: `{source_name}_{source_index}` |
| `source_name` | string | Dataset key (e.g. `gutenberg`, `ccnews`) |
| `domain` | string | One of: `english`, `korean`, `code`, `science` |
| `language` | string | ISO 639-1 code: `en` or `ko` |
| `text` | string | Raw document text (unmodified from source) |
| `url` | string or null | Original URL if provided by the source dataset |
| `license` | string | Source dataset license |
| `source_file` | string | Local path to the source file the row was read from |
| `source_index` | int | Row index within that source file |
| `timestamp` | string or null | Publication date/time if available in the source |
| `processing_version` | string | Pipeline version (`v2`) |
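A line-level validator for this schema might look like the following sketch. The type table is derived from the field descriptions above; the pipeline's own validation code, if any, is not published.

```python
import json

# Expected v2 field types; a tuple containing NoneType marks a nullable field.
SCHEMA = {
    "doc_id": (str,), "source_name": (str,), "domain": (str,),
    "language": (str,), "text": (str,), "url": (str, type(None)),
    "license": (str,), "source_file": (str,), "source_index": (int,),
    "timestamp": (str, type(None)), "processing_version": (str,),
}

def validate_line(jsonl_line):
    """Return a list of problems for one JSONL line (empty list = valid)."""
    doc = json.loads(jsonl_line)
    problems = [f"missing field: {k}" for k in SCHEMA if k not in doc]
    problems += [f"bad type for {k}" for k, types in SCHEMA.items()
                 if k in doc and not isinstance(doc[k], types)]
    # Some sources zero-pad the index (e.g. gutenberg_000000042),
    # so only the source_name prefix of doc_id is checked here.
    if not str(doc.get("doc_id", "")).startswith(f"{doc.get('source_name')}_"):
        problems.append("doc_id does not start with source_name")
    return problems
```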
## Normalization Statistics (Seen vs Written)

"Seen" = total rows read from the source; "Written" = rows successfully normalized. Rows that were seen but not written either failed to parse (malformed JSON, empty text, encoding errors) or were dropped for the reason noted in the Normalized Rate column.
| Dataset | Seen | Written | Normalized Rate |
|---|---|---|---|
| aihub_books | 5,974 | 5,823 | 97.5% |
| aihub_korean_corpus_literature | 3,276,057 | 908 | 0.0% (field mismatch) |
| aihub_modu | 117,994 | 58,997 | 50.0% (deduped at read) |
| aihub_online_colloquial | 45,894 | 22,859 | 49.8% (deduped at read) |
| algebraic_stack | 3,440,694 | 3,440,226 | ~100% |
| arxiv | 1,089,469 | 1,089,469 | 100% |
| c4_korean | 15,618,718 | 15,618,718 | 100% |
| cc100_documents_korean | 35,678,358 | 35,678,358 | 100% |
| ccnews | 71,629,440 | 71,629,440 | 100% |
| codeparrot_clean | 5,365,659 | 5,365,659 | 100% |
| culturax_ko | 20,557,310 | 20,557,309 | ~100% |
| falcon-refinedweb | 59,839,870 | 59,839,870 | 100% |
| fineweb | 81,595,324 | 81,595,324 | 100% |
| fineweb2_korean | 60,904,429 | 60,904,429 | 100% |
| fineweb_edu | 57,121,167 | 57,121,167 | 100% |
| fiwebmath_4plus | 6,699,493 | 6,699,493 | 100% |
| github-top-code | 1,122,139 | 1,121,474 | ~100% |
| gutenberg | 48,285 | 48,284 | ~100% |
| hplt_english_1 | 37,128,244 | 37,128,244 | 100% |
| hplt_english_2 | 16,085,779 | 16,085,779 | 100% |
| hplt_english_3 | 22,951,515 | 22,951,515 | 100% |
| hplt_korean | 38,866,835 | 38,866,835 | 100% |
| korean_webtext | 1,284,879 | 1,284,878 | ~100% |
| namuwiki | 565,293 | 565,293 | 100% |
| open-web-math | 6,315,233 | 6,315,233 | 100% |
| open_med_text | 127,736 | 127,736 | 100% |
| openwebtext | 8,013,769 | 8,013,769 | 100% |
| oscar_ko_only | 3,675,421 | 3,675,420 | ~100% |
| peS2o | 69,950,435 | 69,950,435 | 100% |
| pubmed_abstracts | 15,518,009 | 15,517,555 | ~100% |
| s2orc | 89,455,939 | 89,455,939 | 100% |
| starcoderdata | 42,191,832 | 42,191,832 | 100% |
| the_stack_c | 21,383,832 | 21,383,810 | ~100% |
| the_stack_java | 42,429,211 | 42,429,204 | ~100% |
| the_stack_python | 24,214,270 | 24,214,204 | ~100% |
| wanjuan_korean | 68,894,938 | 68,894,937 | ~100% |
| wikipedia (en) | 6,407,814 | 6,407,814 | 100% |
| wikipedia_ko | 515,425 | 515,425 | 100% |
## Tokenizer Used for Token Counting

All `tokens_count` values in this dataset are computed using the Keural SentencePiece tokenizer:

- Model: `mkd-ai/keural-tokenizer`
- Type: SentencePiece (Unigram)
- Vocabulary file: `keural_tokenizer.vocab`
- Model file: `keural_tokenizer.model`

This is the same tokenizer used by the Keural LLM model.
## Download & Processing Timeline
| Event | Date (KST) |
|---|---|
| Download of first datasets begins | 2026-04-01 |
| Normalization of first batch complete | 2026-04-08 |
| Initial 19 datasets normalized | 2026-04-09 |
| Additional datasets added (19 more) | 2026-04-10 through 2026-04-19 |
| All 38 datasets normalized | 2026-04-20 |
## Licenses
This dataset contains content from multiple sources with mixed licenses. Each source retains its original license.
| Dataset | License | Commercial Use |
|---|---|---|
| gutenberg | Public Domain | Yes |
| openwebtext | MIT | Yes |
| ccnews | Unknown | Caution |
| falcon-refinedweb | ODC-By 1.0 | Caution (copyright unclear) |
| fineweb | ODC-By | Yes |
| fineweb_edu | ODC-By | Yes |
| fineweb2_korean | ODC-By | Yes |
| hplt_english_1/2/3 | CC0-1.0 | Yes |
| wikipedia (en) | CC-BY-SA 3.0 | Yes (with attribution) |
| namuwiki | CC BY-NC-SA 2.0 KR | No (non-commercial only) |
| wikipedia_ko | Apache-2.0 | Yes |
| oscar_ko_only | Open (Common Crawl) | Yes |
| korean_webtext | Unspecified | Caution |
| hplt_korean | CC0-1.0 | Yes |
| culturax_ko | Mixed (mC4 / OSCAR / CC) | Caution |
| c4_korean | ODC-By | Yes |
| cc100_documents_korean | MIT | Yes |
| wanjuan_korean | Apache-2.0 | Yes |
| aihub_modu | AIHub Terms of Use | No (research only) |
| aihub_books | AIHub Terms of Use | No (research only) |
| aihub_online_colloquial | AIHub Terms of Use | No (research only) |
| aihub_korean_corpus_literature | AIHub Terms of Use | No (research only) |
| github-top-code | MIT | Yes (permissive only repos) |
| codeparrot_clean | Permissive (filtered GitHub) | Yes |
| starcoderdata | BigCode OpenRAIL-M | Restricted |
| the_stack_c | BigCode OpenRAIL-M | Restricted |
| the_stack_java | BigCode OpenRAIL-M | Restricted |
| the_stack_python | BigCode OpenRAIL-M | Restricted |
| arxiv | MIT | Caution (individual paper copyright) |
| open-web-math | ODC-By 1.0 | Yes |
| peS2o | ODC-By | Yes |
| s2orc | ODC-By | Yes |
| algebraic_stack | MIT | Yes |
| fiwebmath_4plus | ODC-By | Yes |
| pubmed_abstracts | CC0 (NLM/PubMed) | Yes |
| open_med_text | Various (medical open access) | Caution |
License Notice: This repository inherits mixed licenses from its source datasets. Please review the license of each individual source before commercial or research use.
## Related Repositories
| Repo | Stage | Description |
|---|---|---|
| This repo | Stage 0.5 | Normalized raw data |
| mkd-chanwoo/filtered-datasets-for-koreanLLM | Stage 1 | Quality + language + toxicity filtered |
| mkd-chanwoo/keural-datasets | Stage 2 | Final deduplicated + sharded production data |
| mkd-chanwoo/simplemodel-270M | Model | LLM trained on this pipeline's output |