Tags: Feature Extraction · Transformers · PyTorch · xlm-roberta · biomedical · bionlp · entity linking · embedding · bert · text-embeddings-inference
How to use andorei/BERGAMOT-multilingual-GAT with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="andorei/BERGAMOT-multilingual-GAT")

# Or load the model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("andorei/BERGAMOT-multilingual-GAT")
model = AutoModel.from_pretrained("andorei/BERGAMOT-multilingual-GAT")
```
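The feature-extraction pipeline returns one hidden-state vector per token, while entity-level work needs a single fixed-size vector per mention. Below is a minimal sketch of turning the model's `last_hidden_state` into mention embeddings via CLS (first-token) pooling; the pooling choice is an assumption here (check the paper for the exact strategy), and a random tensor stands in for real model output so the sketch runs without downloading weights.

```python
import torch

def cls_pool(last_hidden_state: torch.Tensor) -> torch.Tensor:
    """Take the first-token ([CLS]) embedding of each sequence and L2-normalize it.

    NOTE: CLS pooling is an assumption for this model; consult the paper
    for the pooling actually used during pretraining.
    """
    cls = last_hidden_state[:, 0, :]  # (batch, hidden)
    return torch.nn.functional.normalize(cls, dim=-1)

# Stand-in for `model(**tokenizer(names, ...)).last_hidden_state`:
# a batch of 2 mentions, 8 tokens each, hidden size 768 (XLM-R base).
hidden = torch.randn(2, 8, 768)
emb = cls_pool(hidden)
print(emb.shape)         # torch.Size([2, 768])
print(emb.norm(dim=-1))  # each row has unit norm after normalization
```

With the real model, `hidden` would come from `model(**tokenizer(["heart attack", "infarto"], padding=True, return_tensors="pt")).last_hidden_state`.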
A multilingual BERGAMOT (Biomedical Entity Representation with Graph-Augmented Multi-Objective Transformer) model, pre-trained on UMLS (version 2020AB) using a Graph Attention Network (GAT) encoder.
For technical details see our NAACL 2024 paper.
A poster summarizing the paper is also available.
For the pretraining code, see our GitHub repository: https://github.com/Andoree/BERGAMOT.
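Embeddings from a UMLS-pretrained encoder like this are typically used for biomedical entity linking: embed a dictionary of concept names once, then link each mention to its nearest neighbor by cosine similarity. A toy sketch of that retrieval step, using synthetic unit vectors in place of real BERGAMOT embeddings (the concept names are illustrative, not from UMLS):

```python
import torch

torch.manual_seed(0)

# Hypothetical concept dictionary; in practice these would be UMLS concept
# names embedded with the model, and each row would map to a CUI.
concepts = ["myocardial infarction", "migraine", "type 2 diabetes"]
dictionary = torch.nn.functional.normalize(torch.randn(3, 768), dim=-1)

# Pretend the mention "heart attack" embeds close to concept 0 by adding
# a small perturbation to that concept's vector.
mention = torch.nn.functional.normalize(dictionary[0] + 0.1 * torch.randn(768), dim=-1)

# For unit vectors, the dot product IS the cosine similarity.
scores = dictionary @ mention
best = int(scores.argmax())
print(concepts[best])  # → "myocardial infarction"
```

Real systems precompute the dictionary matrix and use approximate nearest-neighbor search when the concept inventory is large.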
Citation
```bibtex
@inproceedings{sakhovskiy-et-al-2024-bergamot,
    title = "Biomedical Entity Representation with Graph-Augmented Multi-Objective Transformer",
    author = "Sakhovskiy, Andrey and Semenova, Natalia and Kadurin, Artur and Tutubalina, Elena",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
}
```