DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling
Paper: [arXiv:2406.11617](https://arxiv.org/abs/2406.11617)
⚠️ Note: This model requires the ChatML chat template.
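Since generation quality depends on the prompt being wrapped in ChatML markers, a minimal usage sketch is shown below; the repository id is a placeholder for wherever the merged model is hosted.

```python
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual location of the merged model.
tokenizer = AutoTokenizer.from_pretrained("your-org/Cactus-Dream-Horror-12B")

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening line of a desert horror story."},
]

# apply_chat_template renders the messages with the ChatML markers
# (<|im_start|>role ... <|im_end|>) defined by the merged tokenizer.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```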
This is a merge of pre-trained language models created using mergekit.
The model is partially censored but can be jailbroken or ablated if needed.
This model was merged using the DELLA merge method, with p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop as the base model.
This merge required the enable_fix_mistral_regex_true.md patch for tokenizer stability. The graph_v18.py patch was also helpful for running the merge with GPU acceleration on 8 GB of VRAM.
The following models were included in the merge:
* BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
* crestf411/MN-Slush
* D1rtyB1rd/Egregore-Alice-RP-NSFW-12B
* D1rtyB1rd/Looking-Glass-Alice-Thinking-NSFW-RP
* Delta-Vector/Francois-PE-V2-Huali-12B
* Delta-Vector/Ohashi-NeMo-12B
* Delta-Vector/Rei-V3-KTO-12B
* Epiculous/Violet_Twilight-v0.2
* elinas/Chronos-Gold-12B-1.0
* inflatebot/MN-12B-Mag-Mell-R1
* MarinaraSpaghetti/NemoMix-Unleashed-12B
* Sao10K/MN-12B-Vespa-x1
* TheDrummer/Rocinante-12B-v1.1
* TheDrummer/UnslopNemo-12B-v4.1
* Vortex5/Crimson-Constellation-12B
The following YAML configuration was used to produce this model:
```yaml
architecture: MistralForCausalLM
base_model: B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop
models:
  - model: B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop
  - model: B:/12B/models--BeaverAI--MN-2407-DSK-QwQify-v0.1-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--crestf411--MN-Slush
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--D1rtyB1rd--Looking-Glass-Alice-Thinking-NSFW-RP
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Francois-PE-V2-Huali-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Ohashi-NeMo-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Delta-Vector--Rei-V3-KTO-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Epiculous--Violet_Twilight-v0.2
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--elinas--Chronos-Gold-12B-1.0
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--inflatebot--MN-12B-Mag-Mell-R1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--MarinaraSpaghetti--NemoMix-Unleashed-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Sao10K--MN-12B-Vespa-x1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--TheDrummer--Rocinante-12B-v1.1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--TheDrummer--UnslopNemo-12B-v4.1
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
  - model: B:/12B/models--Vortex5--Crimson-Constellation-12B
    parameters:
      density: 0.9
      weight: 0.1
      epsilon: 0.099
# --lazy-unpickle --random-seed 420 --cuda --fix-mistral-regex
merge_method: della
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: "union"
  tokens:
    # Force ChatML EOS tokens
    "<|im_start|>":
      source: "B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B"
      force: true
    "<|im_end|>":
      source: "B:/12B/models--D1rtyB1rd--Egregore-Alice-RP-NSFW-12B"
      force: true
    # Keep Mistral tokens
    "[INST]":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
      # source: "B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407"
      # The tokenizer system requires all models referenced in token
      # configurations to be present in the merge's model list to build
      # proper embedding permutations.
    "[/INST]":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
    # Force </s> as fallback EOS
    "</s>":
      source: "B:/12B/models--p-e-w--Mistral-Nemo-Instruct-2407-heretic-noslop"
      force: true
chat_template: "chatml"
name: 🌵 Cactus-Dream-Horror-12B
```
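A configuration like this is normally passed to mergekit's `mergekit-yaml` CLI. The sketch below assumes the config is saved as `cactus-dream-horror.yaml` (a placeholder name); the flags mirror the comment embedded in the config, and `--fix-mistral-regex` comes from the local tokenizer patch mentioned above rather than stock mergekit.

```sh
# Illustrative invocation; the config file name and output path are placeholders.
mergekit-yaml cactus-dream-horror.yaml ./Cactus-Dream-Horror-12B \
  --lazy-unpickle --random-seed 420 --cuda --fix-mistral-regex
```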