How to use Shamima/Blip-finetuned-sd-1k with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Shamima/Blip-finetuned-sd-1k")
```
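As a minimal usage sketch (assuming transformers 4.x, where the "image-to-text" pipeline is still available), the pipeline can be called directly on an image; the image path below is only a placeholder:

```python
# Minimal pipeline usage sketch (transformers 4.x).
# "path/to/image.png" is a placeholder; a URL or PIL.Image also works.
from transformers import pipeline

pipe = pipeline("image-to-text", model="Shamima/Blip-finetuned-sd-1k")
captions = pipe("path/to/image.png")
print(captions)  # e.g. [{"generated_text": "..."}]
```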
```python
# Load the model and processor directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Shamima/Blip-finetuned-sd-1k")
model = AutoModelForImageTextToText.from_pretrained("Shamima/Blip-finetuned-sd-1k")
```
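A minimal captioning sketch with the directly loaded model; the image file name is a placeholder assumption:

```python
# Caption a local image with the processor and model loaded above.
# "example.jpg" is a placeholder; use any local image file.
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Shamima/Blip-finetuned-sd-1k")
model = AutoModelForImageTextToText.from_pretrained("Shamima/Blip-finetuned-sd-1k")

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```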
BLIP base fine-tuned on a 1k-sample subset of the DiffusionDB dataset, for image-to-text (image captioning) tasks.