GarmentGPT Models

This repository contains all the necessary model components for the GarmentGPT project.

Models Included

This repository hosts three key components:

  1. Vision-Language Model (VLM): A fine-tuned multi-modal model that generates discrete garment tokens from an input image.
  2. Edge Codec: A VQ-VAE-based model that decodes edge indices into high-fidelity geometric curves. The configuration is in codec_config.yaml and the weights are in codec_model.pth.
  3. RT Codec: A VQ-VAE-based model that decodes location indices into 3D panel rotation and translation. The configuration is in rt_config.yaml and the weights are in rt_model.pth. (A loading sketch for both codecs follows this list.)

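Both codecs use the usual config-plus-checkpoint layout. The snippet below is a rough loading sketch only; the class names EdgeCodec and RTCodec are placeholders, and the real model definitions live in the main Garment-GPT repository.

```python
# Loading sketch. EdgeCodec / RTCodec are placeholder names; the actual
# model classes are defined in the main Garment-GPT codebase.
import yaml
import torch

# Edge codec: hyperparameters + weights
with open("codec_config.yaml") as f:
    edge_cfg = yaml.safe_load(f)
edge_state = torch.load("codec_model.pth", map_location="cpu")

# RT codec: hyperparameters + weights
with open("rt_config.yaml") as f:
    rt_cfg = yaml.safe_load(f)
rt_state = torch.load("rt_model.pth", map_location="cpu")

# edge_codec = EdgeCodec(**edge_cfg)   # placeholder class name
# edge_codec.load_state_dict(edge_state)
# rt_codec = RTCodec(**rt_cfg)         # placeholder class name
# rt_codec.load_state_dict(rt_state)
```
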
Usage

These models are designed to be used with the main application code available at https://github.com/ChimerAI-MMLab/Garment-GPT. The inference script will automatically download these files.
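If you prefer to fetch the files yourself rather than relying on the inference script, a minimal sketch using huggingface_hub is shown below; the repo_id is a placeholder and should be replaced with this repository's actual Hub identifier.

```python
# Manual download sketch; repo_id is a placeholder for this repository's Hub ID.
from huggingface_hub import hf_hub_download

repo_id = "<this-repo-id>"  # assumption: replace with the actual model repo ID
files = ["codec_config.yaml", "codec_model.pth", "rt_config.yaml", "rt_model.pth"]

for filename in files:
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    print(f"{filename} -> {local_path}")
```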
