# ONNX version of LtG/norbert3-base
This repository contains ONNX-converted weights for the Norwegian language model LtG/norbert3-base.
The conversion enables this state-of-the-art Norwegian model to run directly in browsers or Node.js environments using Transformers.js.
It includes two variants:
- Quantized (int8, `model_quantized.onnx`): smaller and faster (default).
- Full precision (float32, `model.onnx`): higher accuracy.
## Usage (Node.js / Web)

First, install the library:

```bash
npm install @huggingface/transformers
```
### Option 1: Use the quantized model (recommended)

This is the default behavior: it loads `model_quantized.onnx` (roughly 4x smaller, with faster inference).

```javascript
import { pipeline } from '@huggingface/transformers';

// Load the model (the quantized version is selected by default)
const embedder = await pipeline(
  'feature-extraction',
  'lebchen/norbert3-base-onnx',
  { device: 'auto' }
);

const sentences = [
  "Dette er en setning på norsk.",
  "Norbert er en språkmodell fra UiO."
];

// NorBERT generally benefits from mean pooling for sentence representations
const output = await embedder(sentences, { pooling: 'mean', normalize: true });
console.log(output.tolist());
```
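The resulting embeddings can be compared with cosine similarity; since the pipeline call above passes `normalize: true`, the vectors are unit-length and the computation reduces to a dot product. A minimal helper (a sketch, not part of the library):

```javascript
// Cosine similarity between two embedding vectors.
// With normalized embeddings this is equivalent to a plain dot product.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With the pipeline output above:
// const [embA, embB] = output.tolist();
// console.log(cosineSimilarity(embA, embB));
```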
### Option 2: Use the full-precision model

To load the uncompressed `model.onnx`, explicitly request full-precision weights with `dtype: 'fp32'` (in Transformers.js v3, `dtype` replaces the older `quantized: false` flag):

```javascript
const embedder = await pipeline(
  'feature-extraction',
  'lebchen/norbert3-base-onnx',
  {
    device: 'auto',
    dtype: 'fp32' // full precision; loads model.onnx
  }
);
```
## Credits & Attribution

The original model, NorBERT 3, was developed by the Language Technology Group (LTG) at the University of Oslo.

- Original repository: LtG/norbert3-base
- Paper/citation: please refer to the original model card for the proper citation if you use this model in academic work.

This distribution is converted to ONNX for compatibility with Transformers.js and retains the original Apache 2.0 license.