Instructions for using cstr/multilingual-e5-small-GGUF with libraries, inference providers, and local apps.
- Libraries
- llama-cpp-python
How to use cstr/multilingual-e5-small-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# multilingual-e5-small is an embedding model, so load it in embedding mode
llm = Llama.from_pretrained(
    repo_id="cstr/multilingual-e5-small-GGUF",
    filename="multilingual-e5-small-q4_k.gguf",
    embedding=True,
)

embeddings = llm.embed(["query: Once upon a time"])
print(embeddings)
```
- Local Apps
- llama.cpp
How to use cstr/multilingual-e5-small-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cstr/multilingual-e5-small-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf cstr/multilingual-e5-small-GGUF:Q8_0
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cstr/multilingual-e5-small-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf cstr/multilingual-e5-small-GGUF:Q8_0
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cstr/multilingual-e5-small-GGUF:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf cstr/multilingual-e5-small-GGUF:Q8_0
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cstr/multilingual-e5-small-GGUF:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cstr/multilingual-e5-small-GGUF:Q8_0
```
Use Docker
```sh
docker model run hf.co/cstr/multilingual-e5-small-GGUF:Q8_0
```
- LM Studio
- Jan
- Ollama
How to use cstr/multilingual-e5-small-GGUF with Ollama:
```sh
ollama run hf.co/cstr/multilingual-e5-small-GGUF:Q8_0
```
- Unsloth Studio
How to use cstr/multilingual-e5-small-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for cstr/multilingual-e5-small-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for cstr/multilingual-e5-small-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for cstr/multilingual-e5-small-GGUF to start chatting
```
- Docker Model Runner
How to use cstr/multilingual-e5-small-GGUF with Docker Model Runner:
```sh
docker model run hf.co/cstr/multilingual-e5-small-GGUF:Q8_0
```
- Lemonade
How to use cstr/multilingual-e5-small-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull cstr/multilingual-e5-small-GGUF:Q8_0
```
Run and chat with the model
```sh
lemonade run user.multilingual-e5-small-GGUF-Q8_0
```
List all available models
```sh
lemonade list
```
multilingual-e5-small GGUF
GGUF format of intfloat/multilingual-e5-small for use with CrispEmbed and Ollama.
Files
| File | Quantization | Size |
|---|---|---|
| multilingual-e5-small-q4_k.gguf | Q4_K | 0 MB |
| multilingual-e5-small-q8_0.gguf | Q8_0 | 0 MB |
| multilingual-e5-small.gguf | F32 | 0 MB |
Recommended: Q8_0 for quality (cosine similarity vs. the HF model: 0.9999), Q4_K for size (0.990).
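The "cos vs HF" figures compare embeddings from the quantized GGUF against the original HuggingFace model on the same inputs. A minimal sketch of that comparison metric (the `cosine` helper is illustrative, not part of CrispEmbed; the vectors are stand-ins for real model output):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical vectors score 1.0; quantization error shows up as a drop below 1.0.
reference = np.array([0.1, 0.3, -0.2])  # e.g. from the HF model
quantized = np.array([0.1, 0.3, -0.2])  # e.g. from the Q8_0 GGUF
print(round(cosine(reference, quantized), 4))  # prints 1.0 for identical inputs
```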
Quick Start
CrispEmbed
```sh
./crispembed -m multilingual-e5-small "Hello world"
./crispembed-server -m multilingual-e5-small --port 8080
```
Ollama (with CrispStrobe fork)
```sh
# Create model
echo "FROM multilingual-e5-small-q8_0.gguf" > Modelfile
ollama create multilingual-e5-small -f Modelfile

# Embed
curl http://localhost:11434/api/embed -d '{"model":"multilingual-e5-small","input":["Hello world"]}'
```
Python (CrispEmbed)
```python
from crispembed import CrispEmbed

model = CrispEmbed("multilingual-e5-small-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
```
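Note that the upstream E5 models are trained with instruction prefixes: prepend `query: ` to search queries and `passage: ` to documents before encoding, or retrieval quality degrades. Once texts are encoded, ranking is plain cosine similarity over the vectors. A hedged sketch of that ranking step (pure NumPy; the vectors here are stand-ins for real `encode` output):

```python
import numpy as np

def rank(query_vec: np.ndarray, passage_vecs: np.ndarray) -> np.ndarray:
    """Return passage indices sorted by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q
    return np.argsort(-scores)

# Inputs would normally come from model.encode(["query: ..."]) and
# model.encode(["passage: ...", "passage: ..."]).
query = np.array([1.0, 0.0])
passages = np.array([[0.9, 0.1], [0.0, 1.0]])
print(rank(query, passages))  # prints [0 1]: the first passage is closest
```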
Model Details
| Property | Value |
|---|---|
| Architecture | BERT |
| Parameters | 118M |
| Embedding Dimension | 384 |
| Layers | 12 |
| Pooling | mean |
| Tokenizer | SentencePiece |
| Language | multilingual |
| Q8_0 vs HuggingFace | 0.9999 |
| Q4_K vs HuggingFace | 0.990 |
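The table lists mean pooling: the token embeddings from the final BERT layer are averaged over non-padding positions (per the attention mask) to form the 384-dimensional sentence vector. A sketch of that step, with the embedding dimension cut down for readability:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over real (non-padding) positions."""
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq_len, 1)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

# 4 tokens, last one is padding; dim 3 instead of the model's 384
tokens = np.array([[1.0, 2.0, 3.0],
                   [3.0, 2.0, 1.0],
                   [2.0, 2.0, 2.0],
                   [9.0, 9.0, 9.0]])
mask = np.array([1, 1, 1, 0])
print(mean_pool(tokens, mask))  # prints [2. 2. 2.]
```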
Server API
CrispEmbed server supports four API dialects:
- `POST /embed` (native)
- `POST /v1/embeddings` (OpenAI-compatible)
- `POST /api/embed` (Ollama-compatible)
- `POST /api/embeddings` (Ollama legacy)
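The OpenAI and Ollama dialects follow those projects' published request schemas; the shape assumed here for the native `/embed` body is a guess. A sketch of building the request body per dialect:

```python
import json

MODEL = "multilingual-e5-small"

def body_for(dialect: str, texts: list[str]) -> dict:
    """Build the request body for each supported API dialect."""
    if dialect == "openai":          # POST /v1/embeddings
        return {"model": MODEL, "input": texts}
    if dialect == "ollama":          # POST /api/embed
        return {"model": MODEL, "input": texts}
    if dialect == "ollama-legacy":   # POST /api/embeddings takes a single prompt
        return {"model": MODEL, "prompt": texts[0]}
    return {"model": MODEL, "input": texts}  # assumed shape for native /embed

print(json.dumps(body_for("openai", ["Hello world"])))
```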
Credits
- Original model: intfloat/multilingual-e5-small
- Inference: CrispEmbed (MIT, ggml-based)
- Downloads last month
- 1,064
Model tree for cstr/multilingual-e5-small-GGUF
Base model
intfloat/multilingual-e5-small