nielsr (HF Staff) committed
Commit 0917a2f · verified · 1 Parent(s): 4ce6757

Add `library_name`, update paper link, and remove redundant license section


This PR improves the model card by:
- Adding `library_name: transformers` to the metadata, enabling the "how to use" widget on the Hugging Face Hub, consistent with the model's `config.json` and tokenizer configuration (a minimal loading sketch follows this list).
- Updating the placeholder paper link in the "Resources" section to the correct Hugging Face paper page: [VideoNSA: Native Sparse Attention Scales Video Understanding](https://huggingface.co/papers/2510.02295).
- Removing the redundant "
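
As a quick illustration of what that widget exposes, here is a minimal loading sketch. It assumes the checkpoint follows the Qwen2.5-VL architecture named in the card's tags; the repo id is a placeholder, not taken from this page.

```python
# Minimal loading sketch enabled by `library_name: transformers`.
# The repo id below is hypothetical; substitute the model's actual Hub id.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Espere-1119-Song/VideoNSA"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # automatic device placement (requires `accelerate`)
)
```

If the repository ships custom sparse-attention code, loading may additionally require `trust_remote_code=True`; that is an assumption, not something stated in this card.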

Files changed (1):
  1. README.md (+4, -7)
README.md CHANGED:

@@ -2,13 +2,14 @@
 language:
 - en
 license: apache-2.0
+pipeline_tag: video-text-to-text
 tags:
 - video-understanding
 - sparse-attention
 - vision-language
 - qwen2.5-vl
 - multimodal
-pipeline_tag: video-text-to-text
+library_name: transformers
 ---
 
 # VideoNSA: Native Sparse Attention for Video Understanding
@@ -75,10 +76,6 @@ For installation, training, and evaluation instructions, please refer to:
 
 ## Resources
 
-- 📄 [Paper](https://arxiv.org/abs/TODO)
+- 📄 [Paper](https://huggingface.co/papers/2510.02295)
 - 🌐 [Project Page](https://enxinsong.com/VideoNSA-web/)
-- 💻 [GitHub Repository](https://github.com/Espere-1119-Song/VideoNSA)
-
-## License
-
-This model is released under the Apache 2.0 License.
+- 💻 [GitHub Repository](https://github.com/Espere-1119-Song/VideoNSA)
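
For reference, the README front matter after this change, assembled from the first hunk above:

```yaml
language:
- en
license: apache-2.0
pipeline_tag: video-text-to-text
tags:
- video-understanding
- sparse-attention
- vision-language
- qwen2.5-vl
- multimodal
library_name: transformers
```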