TPTT: Transforming Pretrained Transformer into Titans
Paper: [arXiv:2506.17671](https://arxiv.org/abs/2506.17671)
Titanesque version of google/gemma-3-270m with parallel linearized attention (TPTT 😊) and PEFT.
The architecture was presented in the paper TPTT.
Classic model parameters with LiZA injection:
| Subfolder | Max Self Attn Length | Mag Weight | Cross Gate | Max Chunk Size | Bidirectional | LoRA | Description |
|---|---|---|---|---|---|---|---|
| delta_rule | 8192 (default) | 0.5 | False | 64 | False | Yes | Parallel linearized attention with delta_rule operator |
| delta_rule_gelu | 8192 (default) | 0.5 | False | 64 | False | Yes | Non-linear operator with gelu activation |
| delta_product | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with derivative trick |
| delta_product_r | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with rotative trick |
| delta_product_c | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with combined trick |
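
For example, to load one of the variants above, pass its subfolder name (e.g. `delta_rule`) when loading the model: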
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ffurfaro/Titanesque-gemma-3-270m",
    subfolder="tptt_subfolder",  # replace with a subfolder from the table above, e.g. "delta_rule"
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m")

prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
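
Since the checkpoints ship with LoRA adapters (see the table above), further fine-tuning can follow the standard PEFT workflow. Below is a minimal sketch, assuming the wrapped model exposes the usual `q_proj`/`v_proj` projection names (inspect `model.named_modules()` to confirm before training):

```python
from peft import LoraConfig, get_peft_model

# Assumed target module names; adjust to the actual projections exposed by the model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # sanity check: only adapter weights are trainable
```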
If you use TPTT in your academic work, please cite the TPTT paper (Furfaro, arXiv:2506.17671). For questions or support, please open an issue on the GitHub repository or contact the maintainer.
Base model: [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m)