Merge method paper: DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling (arXiv:2406.11617)
How to use DarkArtsForge/Magistaroth-24B-v1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DarkArtsForge/Magistaroth-24B-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DarkArtsForge/Magistaroth-24B-v1")
model = AutoModelForCausalLM.from_pretrained("DarkArtsForge/Magistaroth-24B-v1")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use DarkArtsForge/Magistaroth-24B-v1 with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DarkArtsForge/Magistaroth-24B-v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DarkArtsForge/Magistaroth-24B-v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
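The same request can be issued from Python. A minimal sketch using only the standard library; the endpoint and model name match the `vllm serve` command above, and nothing is sent until `urlopen` is called:

```python
import json
from urllib import request

# OpenAI-compatible chat completion payload; the endpoint and model name
# match the `vllm serve` command above.
payload = {
    "model": "DarkArtsForge/Magistaroth-24B-v1",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}

req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment with the server running:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```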
How to use DarkArtsForge/Magistaroth-24B-v1 with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DarkArtsForge/Magistaroth-24B-v1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DarkArtsForge/Magistaroth-24B-v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DarkArtsForge/Magistaroth-24B-v1" \
    --host 0.0.0.0 \
    --port 30000

# Then call the server using the same curl command as above.
```

How to use DarkArtsForge/Magistaroth-24B-v1 with Docker Model Runner:
```shell
docker model run hf.co/DarkArtsForge/Magistaroth-24B-v1
```
⚠️ Warning: this model can produce narratives and roleplay containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Mistral Tekken chat template.

A highly creative merge. It still refuses occasionally, but you can work around that with jailbreaks or by ablating the model. A `normalize: true` version was also tested; `normalize: false` did better overall — slightly less censored, more detailed, and more creative.

Scores 14152 at Q0 Bench (Pass Q0G).
This model was merged with the DELLA merge method, using the following configuration:
```yaml
architecture: MistralForCausalLM
models:
  - model: B:\24B\!models--mistralai--Magistral-Small-2509\textonly
  - model: B:\24B\!models--Gryphe--Tiamat-24B-Magistral\textonly
    parameters:
      density: 0.9
      weight: 0.4
      epsilon: 0.099
  - model: B:\24B\!models--TheDrummer--Magidonia-24B-v4.3
    parameters:
      density: 0.9
      weight: 0.4
      epsilon: 0.099
  - model: B:\24B\!models--TheDrummer--Precog-24B-v1
    parameters:
      density: 0.9
      weight: 0.4
      epsilon: 0.099
  - model: B:\24B\!models--zerofata--MS3.2-PaintedFantasy-v3-24B
    parameters:
      density: 0.9
      weight: 0.4
      epsilon: 0.099
  - model: B:\24B\!models--zerofata--MS3.2-PaintedFantasy-v4.1-24B
    parameters:
      density: 0.9
      weight: 0.4
      epsilon: 0.099
# Seed: 420
merge_method: della
base_model: B:\24B\!models--mistralai--Magistral-Small-2509\textonly
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: B:\24B\!models--TheDrummer--Magidonia-24B-v4.3
# chat_template: auto
name: 🌌 Magistaroth-24B-v1
```
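For reference, the core idea behind DELLA (the paper linked at the top) is to drop delta parameters with probabilities tied to their magnitudes — larger deltas are more likely to survive — and rescale the survivors so the merge stays unbiased in expectation. This is a toy numpy sketch of that magnitude-based sampling, not the actual merge implementation; the function name and the exact rank-to-probability schedule are illustrative:

```python
import numpy as np

def della_drop(delta, density=0.9, epsilon=0.099, rng=None):
    """Toy magnitude-based sampling of a delta (task) vector, DELLA-style.

    Keep probabilities are spread across [density - epsilon/2,
    density + epsilon/2] according to each parameter's magnitude rank
    (larger magnitude -> higher keep probability), then kept entries
    are rescaled by 1/p to preserve the expected value.
    """
    rng = rng or np.random.default_rng(420)  # seed taken from the config above
    n = delta.size
    # Rank parameters by absolute magnitude: rank 0 = smallest |delta|.
    ranks = np.argsort(np.argsort(np.abs(delta)))
    # Linearly map ranks to keep probabilities centered on `density`.
    probs = (density - epsilon / 2) + epsilon * ranks / max(n - 1, 1)
    mask = rng.random(n) < probs
    # Rescale survivors so the merged delta is unbiased in expectation.
    return np.where(mask, delta / probs, 0.0)

delta = np.array([0.01, -0.5, 0.03, 0.9, -0.02, 0.4])
pruned = della_drop(delta)
```

With `density: 0.9` and `epsilon: 0.099` as in the config, keep probabilities range roughly from 0.85 for the smallest deltas to 0.95 for the largest.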