This is the HuggingFace community for the SAFE: Image Edit Detection and Localization Challenge 2025.
The challenge is hosted by the UL Digital Safety Research Institute (DSRI) and is co-located with the SynRDinBAS: Synthetic Realities and Data in Biometric Analysis and Security Workshop @ WACV 2026.
How to participate
To participate in the challenge, you need to do three things:
- Visit the challenge home page and sign up using the linked registration form. After verifying your team's email, you will receive access credentials for the submission platform.
- Implement your detector model. You can use the example-submission repository as a starting point, but you don't have to.
- Submit your detector model for evaluation. You can build your submission package yourself and submit it using a CLI tool (preferred), or you can build your submission in a HuggingFace Space and submit the Space using a web form.
How to make a submission
The infrastructure for the challenge runs on DSRI's Dyff platform. Submissions to the challenge must be in the form of a containerized web service that serves a simple JSON HTTP API.
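To make the "containerized web service that serves a simple JSON HTTP API" concrete, here is a minimal stdlib-only sketch. The actual endpoint paths and request/response schema are defined by the example-submission repository; the route-agnostic handler and the `{"score": ...}` response shape below are illustrative assumptions only.

```python
# Minimal sketch of a JSON-over-HTTP detector service using only the
# Python standard library. The real API contract (routes, payload schema,
# listen port) comes from the challenge's example-submission repo; the
# names below are placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class DetectorHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")

        # A real detector would decode the image payload and run inference
        # here; this stub just echoes back the keys it received.
        response = {"score": 0.5, "received_keys": sorted(request.keys())}

        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8080) -> None:
    """Run the service; inside the container this listens on the submission port."""
    HTTPServer(("0.0.0.0", port), DetectorHandler).serve_forever()
```

Packaging this behind a Dockerfile `CMD` is all the containerization the platform requires; the evaluation harness then talks to it over HTTP.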
If you're comfortable building a Docker image yourself, the preferred way to make a submission is to upload and submit a built image using the Dyff client.
Alternatively, you can create a Docker-based HuggingFace Space and submit it using a web form. The advantage of using an HF Space is that it builds the Docker image for you. However, HF Spaces also have some limitations that you'll need to account for.
Getting started
Check out the example submission repository and the pilot task dataset.
The example-submission demonstrates how to package an inference system for submission. You will package your detector model as a containerized web service that serves a simple JSON API. You will then upload your detector to DSRI's testing platform, where it will be evaluated on private datasets.
The pilot-1 dataset is an example of the data format you can expect in the challenge. The pilot task data are public and scores on pilot tasks will not count toward challenge rankings.
⚠️ Important notes ⚠️
⚠️ Submissions run without Internet access
For security reasons, your submitted system will be blocked from accessing the public Internet. Your submission package must include all necessary files. Make sure your system does not attempt to download model files at run-time, and note that downloading on demand is the default behavior of many popular ML packages like transformers and huggingface_hub. You can test that your system starts successfully without Internet access by using a command like:
docker run --network none ...
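On the application side, you can also tell the HuggingFace libraries up front never to touch the network. A short sketch, assuming your Dockerfile copies the weights into the image (the `/app/models/detector` path is an assumption, not a challenge requirement):

```python
# Sketch: force offline behavior for common HF libraries so the service
# never attempts a download at run-time. These env vars must be set before
# the libraries are imported.
import os

os.environ.setdefault("HF_HUB_OFFLINE", "1")        # huggingface_hub: no network
os.environ.setdefault("TRANSFORMERS_OFFLINE", "1")  # transformers: no network

# Assumed location where the Dockerfile bakes the weights into the image.
LOCAL_MODEL_DIR = "/app/models/detector"

# With the flags above, from_pretrained(LOCAL_MODEL_DIR) loads only from
# disk and fails fast instead of hanging on a blocked download, e.g.:
#   model = AutoModel.from_pretrained(LOCAL_MODEL_DIR)
```

Combined with the `docker run --network none` check above, this catches accidental download-on-demand before you submit.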
⚠️ Submissions run as ordinary users
Your submitted system will run as an ordinary user (i.e., not root). Ensure that your submission does not require elevated privileges on the host system. Also note that your submission will not run with a full user account: the user will have no home directory and no user name, and commands that depend on the passwd database will not work. Your system also should not depend on the user having a particular UID or GID. If you use the USER directive in your Dockerfile, the UID/GID you set will be overridden.
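A small defensive sketch for this situation: give the process a usable `HOME` before any library asks for one. The `/tmp` fallback is an assumption that fits the writable-`/tmp` setup described below.

```python
# Sketch: make code robust when the process has no passwd entry and no
# home directory. Many libraries call os.path.expanduser("~"), which falls
# back to the passwd database when HOME is unset and an unknown UID yields
# no usable answer; pointing HOME at /tmp sidesteps that.
import os


def safe_home() -> str:
    """Return a writable home directory, defaulting HOME to /tmp if needed."""
    home = os.environ.get("HOME")
    if not home or not os.path.isdir(home):
        home = "/tmp"
        os.environ["HOME"] = home
    return home
```

Call this once at startup, before importing packages that expand `~`.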
⚠️ Submissions run with a read-only filesystem
Your submitted system will run with a read-only filesystem. Ensure that your system does not attempt to create files in read-only locations. A common source of errors is packages that assume that ~/.config or ~/.cache are writable.
If you need to create files at run-time, you should create them under the /tmp directory. Most packages that use cache directories allow you to configure the cache location with environment variables. Note that there is a 100MiB storage limit for /tmp.
To check whether your system works with a read-only filesystem, you can run it with a command like:
docker run --read-only --tmpfs "/tmp" ...
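The cache-relocation advice above can be sketched in Python. The set of environment variables is illustrative (these are the documented cache variables for the XDG spec, huggingface_hub, PyTorch, and Matplotlib); add whichever ones your dependencies actually read, and keep the 100MiB `/tmp` limit in mind.

```python
# Sketch: point common cache/config locations at /tmp so a read-only root
# filesystem doesn't break packages that write to ~/.cache or ~/.config.
# Set these before importing the libraries that read them.
import os

CACHE_ROOT = "/tmp/cache"  # /tmp is the only writable location (100MiB limit)

for var in ("XDG_CACHE_HOME", "HF_HOME", "TORCH_HOME", "MPLCONFIGDIR"):
    os.environ.setdefault(var, os.path.join(CACHE_ROOT, var.lower()))
    os.makedirs(os.environ[var], exist_ok=True)
```

Running this at startup, then exercising your service under `docker run --read-only --tmpfs "/tmp"`, surfaces most read-only-filesystem failures early.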
⚠️ CUDA version compatibility
Submissions that request a GPU will be allocated 1x Nvidia L4. L4 GPUs require CUDA 11.8 or higher. Our infrastructure is running NVIDIA driver version 535.
You must include compatible CUDA Toolkit libraries in your submission's Docker image. We recommend using nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04 or a similar image as your base image. This image has been validated to work correctly.
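As a pre-submission sanity check, you can encode these constraints in a small helper. The 12.6 ceiling reflects the validated base image and the unsupported-versions note below; treat it as an assumption to revisit if the organizers update their driver.

```python
# Sketch: check that a CUDA Toolkit version fits the challenge constraints.
# Floor: L4 GPUs need CUDA >= 11.8. Ceiling: 12.6 is the validated version
# (12.8 and 13.x fail on the infrastructure); the ceiling is an assumption.
def cuda_toolkit_supported(version: str, ceiling: tuple = (12, 6)) -> bool:
    major, minor = (int(x) for x in version.split(".")[:2])
    return (11, 8) <= (major, minor) <= ceiling

# Examples:
#   cuda_toolkit_supported("12.6.3") -> True
#   cuda_toolkit_supported("12.8")   -> False
#   cuda_toolkit_supported("11.7")   -> False  (L4 needs CUDA >= 11.8)
```

You can run this against the toolkit version reported inside your built image (e.g., from `nvcc --version`) before uploading.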
⚠️ CUDA 12.8 and CUDA 13.x are not supported by our infrastructure
If your image uses an unsupported CUDA version, you will see an error like the following:
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx