Instructions for using Menlo/Jan-nano-128k with libraries, notebooks, and local apps are collected below.
- Libraries
- Transformers
How to use Menlo/Jan-nano-128k with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Menlo/Jan-nano-128k")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Menlo/Jan-nano-128k")
model = AutoModelForCausalLM.from_pretrained("Menlo/Jan-nano-128k")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Menlo/Jan-nano-128k with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Menlo/Jan-nano-128k"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Menlo/Jan-nano-128k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```bash
docker model run hf.co/Menlo/Jan-nano-128k
```
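The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch, assuming the `vllm serve` command above is running on localhost:8000; the `openai` client and the `EMPTY` api key placeholder are assumptions, not part of the original instructions:

```python
# Sketch: call the local vLLM server started above via its OpenAI-compatible API.
# Assumes the server is listening on localhost:8000; the api_key value is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```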
- SGLang
How to use Menlo/Jan-nano-128k with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Menlo/Jan-nano-128k" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Menlo/Jan-nano-128k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Menlo/Jan-nano-128k" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Menlo/Jan-nano-128k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use Menlo/Jan-nano-128k with Docker Model Runner:
```bash
docker model run hf.co/Menlo/Jan-nano-128k
```
Jan-Nano-128k: Empowering deeper research through extended context understanding.
Note: Jan-Nano is a non-thinking model.
Authors: Alan Dao, Bach Vu Dinh
Overview
Jan-Nano-128k represents a significant advancement in compact language models for research applications. Building upon the success of Jan-Nano, this enhanced version features a native 128k context window that enables deeper, more comprehensive research capabilities without the performance degradation typically associated with context extension methods.
Key Improvements:
- 🔍 Research Deeper: Extended context allows for processing entire research papers, lengthy documents, and complex multi-turn conversations
- ⚡ Native 128k Window: Built from the ground up to handle long contexts efficiently, maintaining performance across the full context range
- 📈 Enhanced Performance: Unlike traditional context extension methods, Jan-Nano-128k shows improved performance with longer contexts
This model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.
Evaluation
Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor.
Why Jan-Nano-128k?
Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often result in performance degradation as context length increases. Jan-Nano-128k breaks this paradigm.
This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.
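For reference, YaRN-style RoPE scaling is expressed as a `rope_scaling` dictionary. Below is a minimal sketch of loading the model with transformers using the same scaling values that the Deployment section below passes to vLLM; whether an explicit override is needed depends on what ships in the model's config.json, so treat this as an illustration rather than the card's official recipe.

```python
# Sketch: YaRN RoPE scaling expressed as a config override when loading with transformers.
# The factor and original_max_position_embeddings values mirror the vLLM flags in the
# Deployment section below; this is an illustration, not the card's official recipe.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Menlo/Jan-nano-128k",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 3.2,
        "original_max_position_embeddings": 40960,
    },
    max_position_embeddings=131072,  # matches --max-model-len in the vLLM command below
)
```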
🖥️ How to Run Locally
Support in the Jan desktop app is a work in progress. In the meantime, you can use the deployment options below, which we have tested.
For additional tutorials and community guidance, visit our Discussion Forums.
Deployment
Deploy using vLLM:
```bash
vllm serve Menlo/Jan-nano-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
```
Or llama-server from llama.cpp:
```bash
llama-server ... --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960
```
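The vLLM command above enables automatic tool choice with the Hermes tool-call parser, so requests can carry OpenAI-style tool definitions. A minimal sketch against that server (port 1234 matches the command above; the `get_weather` tool, the prompt, and the `EMPTY` api key are illustrative assumptions):

```python
# Sketch: OpenAI-style tool calling against the vLLM server launched above
# (--enable-auto-tool-choice with the hermes parser). The get_weather tool,
# the EMPTY api key, and the prompt are illustrative only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```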
Note: The chat template is included in the tokenizer. For troubleshooting, download the Non-think chat template.
Recommended Sampling Parameters
- Temperature: 0.7
- Top-p: 0.8
- Top-k: 20
- Min-p: 0.0
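A minimal sketch of applying these values with the transformers pipeline; `do_sample=True` and the `max_new_tokens` value are assumptions for illustration, and `min_p` requires a reasonably recent transformers release:

```python
# Sketch: generate with the recommended sampling parameters via the transformers pipeline.
# do_sample=True and max_new_tokens=256 are illustrative choices, not from the model card.
from transformers import pipeline

pipe = pipeline("text-generation", model="Menlo/Jan-nano-128k")
messages = [{"role": "user", "content": "Who are you?"}]
result = pipe(
    messages,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```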
FAQ:
- I have a Jinja template issue with LM Studio. How can I fix it? See here.
🤝 Community & Support
- Discussions: HuggingFace Community
- Issues: GitHub Repository
- Documentation: Official Docs
📄 Citation
```bibtex
@misc{dao2025jannanotechnicalreport,
  title={Jan-nano Technical Report},
  author={Alan Dao and Dinh Bach Vu},
  year={2025},
  eprint={2506.22760},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.22760},
}
```