Collection of models and datasets related to MixtureVitae, an open and fully reproducible pretraining dataset built from permissive sources.
LAION eV
non-profit
AI & ML interests: datasets, computer vision
The full collection of our EmoNet effort. More info available at: https://huggingface.co/blog/felfri/emonet
Releases related to Open-ψ (Open-Sci) Collective
Re-LAION-5B-research
OpenCLIP models trained on DataComp (https://huggingface.co/papers/2304.14108); a minimal usage sketch follows the list below.
- laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K • Zero-Shot Image Classification • Updated • 36.2k • 122
- laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K • Zero-Shot Image Classification • Updated • 27k • 8
- laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K • Zero-Shot Image Classification • Updated • 8.31k • 8
- laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K • Zero-Shot Image Classification • Updated • 461 • 1
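The checkpoints above are published in the standard OpenCLIP format, so they can be loaded straight from the Hub with the open_clip library. Below is a minimal zero-shot classification sketch, assuming the open_clip_torch, torch, and Pillow packages are installed; the local image.jpg and the candidate captions are placeholders. The same loading pattern should also apply to the LAION-2B OpenCLIP checkpoints listed further down.

```python
import torch
import open_clip
from PIL import Image

# Load a DataComp-trained checkpoint directly from the Hub via the hf-hub: prefix.
model_id = "hf-hub:laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K"
model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)
model.eval()

# Placeholder inputs: one local image and a few candidate captions.
image = preprocess(Image.open("image.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize and compute caption probabilities for the image.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one row of probabilities over the candidate captions
```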
CLAP is to audio what CLIP is to image.
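For the audio counterpart, a minimal sketch using the laion_clap package is shown below; the .wav filename and captions are placeholders, and load_ckpt() downloads a default pretrained checkpoint on first use.

```python
import laion_clap

# Instantiate CLAP and load a default pretrained checkpoint (downloaded on first use).
model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()

# Embed audio files and free-form text into a shared space, as CLIP does for images and text.
audio_embed = model.get_audio_embedding_from_filelist(x=["dog_bark.wav"], use_tensor=False)
text_embed = model.get_text_embedding(["a dog barking", "a cat meowing"], use_tensor=False)

print(audio_embed.shape, text_embed.shape)  # e.g. (1, 512) and (2, 512)
```

Cosine similarity between the resulting audio and text embeddings then ranks captions against clips, mirroring CLIP's image-text matching.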
Models and datasets related to the OpenThoughts 4 experiments; a minimal generation sketch follows the list below.
- laion/openthoughts-4-code-qwen3-32b-annotated-32k_qwen3-1.7B_32k • 2B • Updated • 80
- laion/openthoughts-4-code-qwen3-32b-annotated-32k_qwen2.5-1.5B_32k • Text Generation • 2B • Updated • 67
- laion/openthoughts-3-QwQ-32b-annotated-16k_qwen2.5-1.5B_16k • Text Generation • 2B • Updated • 30
- laion/openthoughts-4-code-qwen3-32b-annotated-7k_qwen3-1.7B_10k • Text Generation • 2B • Updated • 68
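The fine-tuned checkpoints above carry the Text Generation pipeline tag, which suggests they can be run with the standard Hugging Face transformers pipeline. A minimal sketch, assuming the checkpoint ships transformers-compatible weights and that the installed transformers release supports the underlying Qwen architectures; the prompt is a placeholder.

```python
from transformers import pipeline

# Assumes the checkpoint ships standard transformers weights (suggested by its
# Text Generation pipeline tag) and a transformers release with Qwen support.
pipe = pipeline(
    "text-generation",
    model="laion/openthoughts-4-code-qwen3-32b-annotated-32k_qwen3-1.7B_32k",
)

out = pipe("Write a Python function that reverses a string.", max_new_tokens=128)
print(out[0]["generated_text"])
```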
openMaMMUT/openCLIP models trained on DataComp-1.4B, DFN-1.4B, and Re-LAION-2B. Pre-trained models at various scales, including intermediate checkpoints.
- laion/openMaMMUT-ViT-L-14-DataComp-1.4B-s12.8B-b180K • Zero-Shot Image Classification • Updated • 17 • 5
- Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets • Paper • 2506.04598 • Published • 7
- laion/openMaMMUT-ViT-L-14-512x512-pt_datacomp1b-ft_DFN512x512-s293M-b32k • Zero-Shot Image Classification • Updated • 15 • 2
- laion/scaling-laws-for-comparison • Updated • 2
Re-LAION-5B research safe
OpenCLIP models trained on LAION-2B
- laion/CLIP-ViT-bigG-14-laion2B-39B-b160k • Zero-Shot Image Classification • Updated • 75.1k • 304
- laion/CLIP-ViT-g-14-laion2B-s34B-b88K • Zero-Shot Image Classification • Updated • 8.73k • 27
- laion/CLIP-ViT-g-14-laion2B-s12B-b42K • 1B • Updated • 19.5k • 44
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K • Zero-Shot Image Classification • 1.0B • Updated • 525k • 435