Merge method from the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099).
This is a merge of pre-trained language models created using mergekit.
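mergekit can replay a merge like this from the YAML configuration shown below. A minimal sketch using mergekit's Python API, assuming `pip install mergekit` and that the config is saved locally as `config.yaml` (the CLI equivalent is `mergekit-yaml config.yaml ./merged`):

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe (the YAML shown below, saved locally).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to ./merged.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```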
This model was merged using the DARE TIES merge method, with SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA (with the Azazelle/L3-Daybreak-8b-lora LoRA applied) as the base.
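For intuition: DARE randomly drops a fraction of each fine-tuned model's parameter deltas and rescales the survivors, and TIES then resolves sign conflicts between the sparsified deltas before they are summed onto the base. A toy sketch of the DARE step (illustrative tensors only, not mergekit's internal API):

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Drop And REscale: keep each delta parameter with probability
    `density` (0.55 in the config below) and rescale the survivors by
    1/density so the expected value of the delta is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Per-tensor toy example: delta = fine-tuned weights minus base weights.
base = torch.randn(64, 64)
tuned = base + 0.01 * torch.randn(64, 64)
merged = base + 0.4 * dare(tuned - base, density=0.55)  # weight: 0.4
```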
The following models were included in the merge:

* LuxiaSL/luxia-selfsim-8b
* Hastagaras/Llama-3.1-Jamet-8B-MK.I
The following YAML configuration was used to produce this model:
```yaml
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA+Azazelle/L3-Daybreak-8b-lora
chat_template: llama3
dtype: float32
merge_method: dare_ties
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 32]
            model: LuxiaSL/luxia-selfsim-8b
            parameters:
              density: 0.55
              weight: 0.4
          - layer_range: [0, 32]
            model: Hastagaras/Llama-3.1-Jamet-8B-MK.I
            parameters:
              density: 0.55
              weight: 0.4
          - layer_range: [0, 32]
            model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA+Azazelle/L3-Daybreak-8b-lora
parameters:
  normalize: 0.0
tokenizer:
  pad_to_multiple_of: 32
```
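Since the config sets `chat_template: llama3`, the merged model works with the standard transformers chat workflow. A usage sketch, where the repo id is a placeholder for wherever the merged weights end up:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/your-merge"  # placeholder: local path or Hub id of the merged model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # bf16 loading is optional
)

# Build a Llama 3 chat prompt from the bundled chat template.
messages = [{"role": "user", "content": "Give me a two-line poem about merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```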