| --- |
| language: |
| - gl |
| license: apache-2.0 |
| base_model: openai/whisper-large |
| tags: |
| - whisper-event |
| - generated_from_trainer |
| datasets: |
| - mozilla-foundation/common_voice_13_0 |
| metrics: |
| - wer |
| model-index: |
| - name: Whisper Large Galician |
| results: |
| - task: |
| name: Automatic Speech Recognition |
| type: automatic-speech-recognition |
| dataset: |
| name: mozilla-foundation/common_voice_13_0 gl |
| type: mozilla-foundation/common_voice_13_0 |
| config: gl |
| split: test |
| args: gl |
| metrics: |
| - name: Wer |
| type: wer |
| value: 6.939845474613686 |
| --- |
| |
| # Whisper Large Galician |
|
|
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the Galician (`gl`) configuration of the [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3605
- WER: 6.9398
| |
| ## Model description |
| |
This is [openai/whisper-large](https://huggingface.co/openai/whisper-large), OpenAI's multilingual encoder-decoder speech recognition model, fine-tuned for Galician on Common Voice 13.0. The architecture, tokenizer, and feature extractor are inherited unchanged from the base checkpoint.
| |
| ## Intended uses & limitations |
| |
This model is intended for transcribing Galician speech. It was evaluated only on the Common Voice 13.0 `gl` test split (WER 6.94), which consists of read, prompted speech; performance on spontaneous, noisy, or domain-specific audio may be lower. Like other Whisper checkpoints, it processes audio as 16 kHz log-Mel spectrograms in 30-second windows, so longer recordings need chunking, as in the sketch below.
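
A minimal transcription sketch using the `transformers` `pipeline` API; the repository ID below is a placeholder for this model's Hub ID:

```python
from transformers import pipeline

# Placeholder: substitute this model's actual Hub repository ID.
MODEL_ID = "<this-model-repo-id>"

# Build an ASR pipeline; chunking lets it handle audio longer than 30 s.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID, chunk_length_s=30)

# Transcribe a local audio file, forcing Galician transcription.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "galician", "task": "transcribe"},
)
print(result["text"])
```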
| |
| ## Training and evaluation data |
| |
The model was fine-tuned and evaluated on the Galician (`gl`) configuration of [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0). The results above are reported on the test split.
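
For reference, the evaluation split can be loaded with `datasets` (a sketch; Common Voice is gated on the Hub, so it requires accepting the dataset terms and authenticating first):

```python
from datasets import Audio, load_dataset

# Gated dataset: accept the terms on the Hub and log in first
# (e.g. `huggingface-cli login`).
cv_gl_test = load_dataset("mozilla-foundation/common_voice_13_0", "gl", split="test")

# Whisper models expect 16 kHz audio; Common Voice ships 48 kHz MP3s.
cv_gl_test = cv_gl_test.cast_column("audio", Audio(sampling_rate=16_000))
```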
| |
| ## Training procedure |
| |
| ### Training hyperparameters |
| |
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
| - learning_rate: 1e-05 |
| - train_batch_size: 32 |
| - eval_batch_size: 16 |
| - seed: 42 |
| - gradient_accumulation_steps: 2 |
| - total_train_batch_size: 64 |
| - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
| - lr_scheduler_type: linear |
| - lr_scheduler_warmup_steps: 500 |
| - training_steps: 20000 |
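
As a rough sketch only (not the original training script), these settings map onto Hugging Face `Seq2SeqTrainingArguments` as follows; `output_dir` is an assumed placeholder:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-gl",  # assumed placeholder, not from the original run
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 32 * 2 = 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=20000,
)
```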
| |
| ### Training results |
| |
| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| | 0.0126 | 4.01 | 1000 | 0.2128 | 8.3558 | |
| | 0.0032 | 9.01 | 2000 | 0.2262 | 6.9416 | |
| | 0.0022 | 14.01 | 3000 | 0.2528 | 7.1123 | |
| | 0.0025 | 19.01 | 4000 | 0.2643 | 7.3641 | |
| | 0.0015 | 24.01 | 5000 | 0.2596 | 7.3365 | |
| | 0.0014 | 29.01 | 6000 | 0.2723 | 7.6366 | |
| | 0.0008 | 34.01 | 7000 | 0.2778 | 7.6090 | |
| | 0.0003 | 39.01 | 8000 | 0.2880 | 7.2261 | |
| | 0.0004 | 44.01 | 9000 | 0.2920 | 7.6745 | |
| | 0.0001 | 49.01 | 10000 | 0.2854 | 7.4089 | |
| | 0.0 | 54.01 | 11000 | 0.3027 | 7.4365 | |
| | 0.0 | 59.01 | 12000 | 0.3159 | 7.4055 | |
| | 0.0 | 64.01 | 13000 | 0.3242 | 7.3693 | |
| | 0.0 | 69.01 | 14000 | 0.3312 | 7.3072 | |
| | 0.0 | 74.01 | 15000 | 0.3379 | 7.0226 | |
| | 0.0 | 79.01 | 16000 | 0.3442 | 7.0019 | |
| | 0.0 | 84.01 | 17000 | 0.3500 | 6.9933 | |
| | 0.0 | 89.01 | 18000 | 0.3550 | 6.9605 | |
| | 0.0 | 94.01 | 19000 | 0.3589 | 6.9467 | |
| | 0.0 | 99.01 | 20000 | 0.3605 | 6.9398 | |
| |
| |
| ### Framework versions |
| |
| - Transformers 4.33.0.dev0 |
| - Pytorch 2.0.1+cu117 |
| - Datasets 2.14.4 |
| - Tokenizers 0.13.3 |
| |
| ## Citation |
| |
If you use this model in your research, please cite:
| |
| ```bibtex |
| @misc{dezuazo2025whisperlmimprovingasrmodels, |
| title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages}, |
| author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja}, |
| year={2025}, |
| eprint={2503.23542}, |
| archivePrefix={arXiv}, |
| primaryClass={cs.CL}, |
| url={https://arxiv.org/abs/2503.23542}, |
| } |
| ``` |
| |
Please see the paper preprint at
[arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
for more details.
| |
| ## Licensing |
| |
| This model is available under the |
| [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0). |
| You are free to use, modify, and distribute this model as long as you credit |
| the original creators. |