
This model was released on 2024-04-05 and added to Hugging Face Transformers on 2026-05-07.

PyTorch

RF-DETR

RF-DETR proposes a Receptive Field Detection Transformer (DETR) architecture designed to compete with and surpass the dominant YOLO series for real-time object detection. It achieves a new state-of-the-art balance between speed (latency) and accuracy (mAP) by combining recent transformer advances with efficient design choices.

The RF-DETR architecture is characterized by its simple and efficient structure: a DINOv2 Backbone, a Projector, and a shallow DETR Decoder. It enhances the DETR architecture for efficiency and speed using the following core modifications:

  1. DINOv2 Backbone: Uses a powerful DINOv2 backbone for robust feature extraction.
  2. Group DETR Training: Utilizes Group-Wise One-to-Many Assignment during training to accelerate convergence.
  3. Richer Input: Aggregates multi-level features from the backbone and passes them through a C2f projector (similar to the C2f block in YOLOv8) to produce multi-scale features for the decoder.
  4. Faster Decoder: Employs a shallow 3-layer DETR decoder with deformable cross-attention for lower latency.
  5. Optimized Queries: Uses a mixed-query scheme combining learnable content queries and generated spatial queries.

You can find all the available RF-DETR checkpoints under the stevenbucaille organization. The original code can be found here.

Thanks to the weight conversion mapping, RfDetr is compatible with models from the original rf-detr library as well as models trained on the Roboflow platform. This means you can train a model on the Roboflow platform, import the weights into RfDetr in transformers, and deploy it anywhere (see the sketch below).
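
Because loading goes through the standard from_pretrained machinery, a converted checkpoint behaves like any other; a minimal sketch, in which the local directory name is hypothetical:

from transformers import RfDetrForObjectDetection

# "./my_converted_rf_detr" is a hypothetical local directory holding weights
# converted from the original rf-detr library or a Roboflow-trained model
model = RfDetrForObjectDetection.from_pretrained("./my_converted_rf_detr")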

Click on the RF-DETR models in the right sidebar for more examples of how to apply RF-DETR to different object detection tasks.

The example below demonstrates how to perform object detection with the Pipeline and the AutoModel class.

Pipeline
from transformers import pipeline

# build an object-detection pipeline from the RF-DETR base checkpoint
detector = pipeline("object-detection", model="stevenbucaille/rf-detr-base", device_map="auto")

detector("http://images.cocodataset.org/val2017/000000039769.jpg")

Resources

RfDetrConfig

class transformers.RfDetrConfig

( transformers_version: str | None = None architectures: list[str] | None = None output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None backbone_config: dict | transformers.configuration_utils.PreTrainedConfig | None = None hidden_expansion: float = 0.5 c2f_num_blocks: int = 3 activation_function: str = 'silu' dropout: float = 0.1 decoder_ffn_dim: int = 2048 decoder_n_points: int = 4 decoder_layers: int = 3 decoder_self_attention_heads: int = 8 decoder_cross_attention_heads: int = 16 decoder_activation_function: str = 'relu' num_queries: int = 300 attention_bias: bool = True attention_dropout: float | int = 0.0 activation_dropout: float | int = 0.0 group_detr: int = 13 init_std: float = 0.02 disable_custom_kernels: bool = True class_cost: int | float = 2 bbox_cost: int | float = 5 giou_cost: int | float = 2 class_loss_coefficient: int | float = 1 dice_loss_coefficient: int | float = 1 bbox_loss_coefficient: int | float = 5 giou_loss_coefficient: int | float = 2 eos_coefficient: float = 0.1 focal_alpha: float = 0.25 auxiliary_loss: bool = True d_model: int = 256 layer_norm_eps: float = 1e-05 num_feature_levels: int = 1 mask_loss_coefficient: int | float = 1 mask_point_sample_ratio: int = 16 mask_downsample_ratio: int = 4 mask_class_loss_coefficient: int | float = 5.0 mask_dice_loss_coefficient: int | float = 5.0 segmentation_head_activation_function: str = 'gelu' intermediate_size: int = 1024 )

Parameters

  • backbone_config (Union[dict, ~configuration_utils.PreTrainedConfig], optional) — The configuration of the backbone model.
  • hidden_expansion (float, optional, defaults to 0.5) — Expansion factor for hidden dimensions in the projector layers.
  • c2f_num_blocks (int, optional, defaults to 3) — Number of blocks in the C2F layer.
  • activation_function (str, optional, defaults to "silu") — The non-linear activation function in the projector. Supported values are "silu", "relu", "gelu".
  • dropout (float, optional, defaults to 0.1) — The ratio for all dropout layers.
  • decoder_ffn_dim (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
  • decoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the decoder.
  • decoder_layers (int, optional, defaults to 3) — Number of decoder layers in the transformer.
  • decoder_self_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the decoder self-attention.
  • decoder_cross_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the decoder cross-attention.
  • decoder_activation_function (str, optional, defaults to "relu") — The non-linear activation function in the decoder. Supported values are "relu", "silu", "gelu".
  • num_queries (int, optional, defaults to 300) — Number of object queries, i.e. detection slots. This is the maximal number of objects RfDetrModel can detect in a single image.
  • attention_bias (bool, optional, defaults to True) — Whether to use a bias in the query, key, value and output projection layers during self-attention.
  • attention_dropout (Union[float, int], optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • activation_dropout (Union[float, int], optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
  • group_detr (int, optional, defaults to 13) — Number of groups for Group DETR attention mechanism, which helps reduce computational complexity.
  • init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • disable_custom_kernels (bool, optional, defaults to True) — Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom kernels are not supported by PyTorch ONNX export.
  • class_cost (Union[int, float], optional, defaults to 2) — Relative weight of the classification error in the Hungarian matching cost.
  • bbox_cost (Union[int, float], optional, defaults to 5) — Relative weight of the L1 bounding box error in the Hungarian matching cost.
  • giou_cost (Union[int, float], optional, defaults to 2) — Relative weight of the generalized IoU loss in the Hungarian matching cost.
  • class_loss_coefficient (float, optional, defaults to 1) — Relative weight of the classification loss in the object detection loss.
  • dice_loss_coefficient (float, optional, defaults to 1) — Relative weight of the DICE/F-1 loss in the object detection loss.
  • bbox_loss_coefficient (float, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss.
  • giou_loss_coefficient (float, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss.
  • eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss.
  • focal_alpha (float, optional, defaults to 0.25) — Alpha parameter in the focal loss.
  • auxiliary_loss (bool, optional, defaults to True) — Whether auxiliary decoding losses (losses at each decoder layer) are to be used.
  • d_model (int, optional, defaults to 256) — Dimension of the layers.
  • layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
  • num_feature_levels (int, optional, defaults to 1) — Number of feature levels used in the multiscale deformable attention.
  • mask_loss_coefficient (float, optional, defaults to 1) — Relative weight of the Focal loss in the instance segmentation mask loss.
  • mask_point_sample_ratio (int, optional, defaults to 16) — The ratio of points to sample for the mask loss calculation.
  • mask_downsample_ratio (int, optional, defaults to 4) — The downsample ratio for the segmentation masks compared to the input image resolution.
  • mask_class_loss_coefficient (float, optional, defaults to 5.0) — Relative weight of the Focal loss in the instance segmentation loss.
  • mask_dice_loss_coefficient (float, optional, defaults to 5.0) — Relative weight of the DICE/F-1 loss in the instance segmentation loss.
  • segmentation_head_activation_function (str, optional, defaults to "gelu") — The non-linear activation function in the segmentation head. Supported values are "relu", "silu", "gelu".
  • intermediate_size (int, optional, defaults to 1024) — Dimension of the MLP representations.

This is the configuration class to store the configuration of a RfDetrModel. It is used to instantiate an RF-DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the stevenbucaille/rf-detr-base architecture.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Examples:

>>> from transformers import RfDetrConfig, RfDetrModel

>>> # Initializing a RF-DETR stevenbucaille/rf-detr-base style configuration
>>> configuration = RfDetrConfig()

>>> # Initializing a model (with random weights) from the stevenbucaille/rf-detr-base style configuration
>>> model = RfDetrModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
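
Beyond the defaults, the documented architecture parameters can be overridden at construction time; a minimal sketch that only touches parameters listed above:

>>> # Shrink the decoder and the number of detection slots
>>> custom_configuration = RfDetrConfig(decoder_layers=2, num_queries=100)
>>> custom_model = RfDetrModel(custom_configuration)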

RfDetrDinov2Config

class transformers.RfDetrDinov2Config

( transformers_version: str | None = None architectures: list[str] | None = None output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 mlp_ratio: int = 4 hidden_act: str = 'gelu' hidden_dropout_prob: float | int = 0.0 attention_probs_dropout_prob: float | int = 0.0 initializer_range: float = 0.02 layer_norm_eps: float = 1e-06 image_size: int | list[int] | tuple[int, int] = 224 patch_size: int | list[int] | tuple[int, int] = 14 num_channels: int = 3 qkv_bias: bool = True layerscale_value: float = 1.0 drop_path_rate: float | int = 0.0 use_swiglu_ffn: bool = False _out_features: list[str] | None = None _out_indices: list[int] | None = None apply_layernorm: bool = True reshape_hidden_states: bool = True use_mask_token: bool = True num_windows: int = 4 )

Parameters

  • hidden_size (int, optional, defaults to 768) — Dimension of the hidden representations.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • mlp_ratio (int, optional, defaults to 4) — Ratio of the MLP hidden dim to the embedding dim.
  • hidden_act (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. For example, "gelu", "relu", "silu", etc.
  • hidden_dropout_prob (Union[float, int], optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_probs_dropout_prob (Union[float, int], optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • layer_norm_eps (float, optional, defaults to 1e-06) — The epsilon used by the layer normalization layers.
  • image_size (Union[int, list[int], tuple[int, int]], optional, defaults to 224) — The size (resolution) of each image.
  • patch_size (Union[int, list[int], tuple[int, int]], optional, defaults to 14) — The size (resolution) of each patch.
  • num_channels (int, optional, defaults to 3) — The number of input channels.
  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
  • layerscale_value (float, optional, defaults to 1.0) — Initial value to use for layer scale.
  • drop_path_rate (float, optional, defaults to 0.0) — Stochastic depth rate per sample (when applied in the main path of residual layers).
  • use_swiglu_ffn (bool, optional, defaults to False) — Whether to use the SwiGLU feedforward neural network.
  • apply_layernorm (bool, optional, defaults to True) — Whether to apply layer normalization to the feature maps in case the model is used as backbone.
  • reshape_hidden_states (bool, optional, defaults to True) — Whether to reshape the feature maps to 4D tensors of shape (batch_size, d_model, height, width) in case the model is used as backbone. If False, the feature maps will be 3D tensors of shape (batch_size, seq_len, d_model).
  • use_mask_token (bool, optional, defaults to True) — Whether to use mask_token in embeddings.
  • num_windows (int, optional, defaults to 4) — Number of windows to use for windowed attention. If 1, no windowed attention is used.

This is the configuration class to store the configuration of a RfDetrDinov2Backbone. It is used to instantiate an RF-DETR DINOv2 backbone according to the specified arguments, defining the backbone architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the backbone used in stevenbucaille/rf-detr-base.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Example:

>>> from transformers import RfDetrDinov2Config, RfDetrDinov2Backbone

>>> # Initializing a RfDetrDinov2 base style configuration
>>> configuration = RfDetrDinov2Config()

>>> # Initializing a model (with random weights) from the base style configuration
>>> model = RfDetrDinov2Backbone(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
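
Because RfDetrConfig accepts a backbone_config (see its parameter list above), the two configuration classes compose; a minimal sketch, with all other values left at their defaults:

>>> from transformers import RfDetrConfig, RfDetrDinov2Config, RfDetrModel

>>> # Customize the DINOv2 backbone, then hand it to the top-level config
>>> backbone_configuration = RfDetrDinov2Config(num_windows=2)
>>> configuration = RfDetrConfig(backbone_config=backbone_configuration)
>>> model = RfDetrModel(configuration)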

RfDetrModel

class transformers.RfDetrModel

( config: RfDetrConfig )

Parameters

  • config (RfDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare RF-DETR Model (consisting of a backbone and decoder Transformer) outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

( pixel_values: FloatTensor pixel_mask: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) RfDetrModelOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.
  • pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:

    • 1 for pixels that are real (i.e. not masked),
    • 0 for pixels that are padding (i.e. masked).

    What are attention masks?

Returns

RfDetrModelOutput or tuple(torch.FloatTensor)

A RfDetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RfDetrConfig) and inputs.

Forward pass of the RF-DETR model. The pipeline proceeds as follows:

  1. Generate an initial set of object query embeddings and spatial location proposals from the backbone’s flattened output.
  2. Initialize storage for refined encoder-stage predictions (accommodating multi-group query structures) and iteratively refine object queries and their coordinates for each query group to capture the highest-confidence candidates from the encoder stage.
  3. Initialize learnable query features and spatial reference points (restricting to the primary group during inference for efficiency).
  4. Project the base reference points across the batch, refine them with the predicted coordinate refinements (shifting attention to the discovered object locations before decoding), and expand the target query features to match the batch dimensions.
  5. Pass the refined queries and updated reference points through the transformer decoder to aggregate detailed spatial context from the multi-scale features.
  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) — Sequence of hidden-states at the output of the last layer of the model.

  • init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.

  • intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, d_model)) — Stacked intermediate hidden states (output of each layer of the decoder).

  • intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).

  • enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background).

  • enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.

  • hidden_states (tuple[torch.FloatTensor, ...], optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple[torch.FloatTensor, ...], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple[torch.FloatTensor, ...], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • backbone_features (list of torch.FloatTensor of shape (batch_size, config.num_channels, config.image_size, config.image_size)) — Features from the backbone.

Examples:

>>> from transformers import AutoImageProcessor, RfDetrModel
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("stevenbucaille/rfdetr_small_60e_coco")
>>> model = RfDetrModel.from_pretrained("stevenbucaille/rfdetr_small_60e_coco")

>>> inputs = image_processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 200, 256]

RfDetrForObjectDetection

class transformers.RfDetrForObjectDetection

( config: RfDetrConfig )

Parameters

  • config (RfDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

RF-DETR Model (consisting of a backbone and decoder Transformer) with object detection heads on top, for tasks such as COCO detection.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

( pixel_values: FloatTensor = None pixel_mask: torch.LongTensor | None = None labels: list[dict] | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) RfDetrObjectDetectionOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.
  • pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:

    • 1 for pixels that are real (i.e. not masked),
    • 0 for pixels that are padding (i.e. masked).

    What are attention masks?

  • labels (list[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
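
For the labels argument above, a minimal sketch of a single-image batch, reusing model and inputs from the example further down; the normalized (center_x, center_y, width, height) box format is an assumption carried over from other DETR-style models in the library:

>>> import torch

>>> # one dict per image; boxes are assumed normalized to [0, 1] in (cx, cy, w, h) format
>>> labels = [
...     {
...         "class_labels": torch.tensor([1, 17]),
...         "boxes": torch.tensor([[0.5, 0.5, 0.2, 0.3], [0.3, 0.4, 0.1, 0.1]]),
...     }
... ]
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss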

Returns

RfDetrObjectDetectionOutput or tuple(torch.FloatTensor)

A RfDetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RfDetrConfig) and inputs.

The forward pass proceeds as follows:

  1. Process the visual input through the base RF-DETR model to obtain the transformer’s last hidden state and the final sequence of reference points.
  2. First stage: Generate classification logits from the encoder’s proposed object query embeddings.
  3. Second stage: Predict the final classification labels and refined bounding boxes using the decoder’s last hidden state and the most recent reference points.
  • loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss.

  • loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.

  • logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.

  • pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use ~DeformableDetrProcessor.post_process_object_detection to retrieve the unnormalized bounding boxes.

  • auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer.

  • init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) — Sequence of hidden-states at the output of the last layer of the model.

  • intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, d_model)) — Stacked intermediate hidden states (output of each layer of the decoder).

  • intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).

  • enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background).

  • enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.

  • hidden_states (tuple[torch.FloatTensor, ...], optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple[torch.FloatTensor, ...], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple[torch.FloatTensor, ...], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • backbone_features (list of torch.FloatTensor of shape (batch_size, config.num_channels, config.image_size, config.image_size)) — Features from the backbone.

Examples:

>>> from transformers import AutoImageProcessor, RfDetrForObjectDetection
>>> from PIL import Image
>>> import requests
>>> import torch

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("stevenbucaille/rf-detr-base")
>>> model = RfDetrForObjectDetection.from_pretrained("stevenbucaille/rf-detr-base")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)

>>> # convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[
...     0
... ]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     print(
...         f"Detected {model.config.id2label[label.item()]} with confidence "
...         f"{round(score.item(), 3)} at location {box}"
...     )
Detected cat with confidence 0.8 at location [16.5, 52.84, 318.25, 470.78]
Detected cat with confidence 0.789 at location [342.19, 24.3, 640.02, 372.25]
Detected remote with confidence 0.633 at location [40.79, 72.78, 176.76, 117.25]

RfDetrForInstanceSegmentation

class transformers.RfDetrForInstanceSegmentation

( config: RfDetrConfig )

forward

( pixel_values: FloatTensor = None pixel_mask: torch.LongTensor | None = None labels: list[dict] | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] )

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.
  • pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:

    • 1 for pixels that are real (i.e. not masked),
    • 0 for pixels that are padding (i.e. masked).

    What are attention masks?

  • labels (list[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).

Forward pass of the RF-DETR model for instance segmentation. The pipeline proceeds as follows:

  1. Process the visual input through the base RF-DETR model to obtain multi-scale spatial features, query embeddings, and their transformation history.
  2. Generate classification logits and initial segmentation masks from the encoder’s proposed object query embeddings (first stage).
  3. Predict the final classification labels and refined bounding boxes using the decoder’s last hidden state (second stage).
  4. Pass the high-resolution spatial features and query hidden states through the segmentation head to produce the final, detailed instance masks.
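
Example:

No dedicated segmentation checkpoint is referenced in this section, so the sketch below builds a randomly initialized model from the default configuration; the 224x224 dummy input is an assumption based on the backbone's default image_size and patch_size:

>>> from transformers import RfDetrConfig, RfDetrForInstanceSegmentation
>>> import torch

>>> # randomly initialized weights; swap in from_pretrained(...) for a real checkpoint
>>> model = RfDetrForInstanceSegmentation(RfDetrConfig())

>>> pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
>>> with torch.no_grad():
...     outputs = model(pixel_values=pixel_values)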

RfDetrDinov2Backbone

class transformers.RfDetrDinov2Backbone

( config )

Parameters

  • config (RfDetrDinov2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

RfDetrDinov2 backbone, to be used with frameworks like DETR and MaskFormer.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

( pixel_values: Tensor **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) BackboneOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.

Returns

BackboneOutput or tuple(torch.FloatTensor)

A BackboneOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RfDetrConfig) and inputs.

The RfDetrDinov2Backbone forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

  • feature_maps (tuple(torch.FloatTensor) of shape (batch_size, num_channels, height, width)) — Feature maps of the stages.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) or (batch_size, num_channels, height, width), depending on the backbone.

    Hidden-states of the model at the output of each stage plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Only applicable if the backbone uses attention.

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
>>> model = AutoBackbone.from_pretrained(
...     "facebook/dinov2-base", out_features=["stage2", "stage5", "stage8", "stage11"]
... )

>>> inputs = processor(image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> feature_maps = outputs.feature_maps
>>> list(feature_maps[-1].shape)
[1, 768, 16, 16]