0 votes
0 answers
78 views

How to solve device mismatch issue when using offloading with QwenImageEditPlus pipeline and GGUF weights

After failing to make the QwenImageEditPlus run (https://huggingface.co/spaces/discord-community/README/discussions/9#68d260e32053323e6bfab30c), I tried a different approach (thanks to all the example ...
asked by Siladittya (1,215 rep)
1 vote
0 answers
294 views

Encountering an AttributeError: module 'torch' has no attribute 'xpu'

I'm encountering an error AttributeError: module 'torch' has no attribute 'xpu' when running the diffusers library in a Google Colab environment with a CUDA GPU. I'm trying to use DiffusionPipeline....
asked by Beverly Sellers-Robinson
2 votes
1 answer
102 views

How to stop a Hugging Face pipeline operation

I need to stop a Hugging Face pipeline operation. I tried to achieve this using a method from the following question, but it didn't work. I set a breakpoint on the line return flag and expected ...
asked by Intolighter
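
One way to do this, not taken from the question above, is to raise an exception from diffusers' per-step callback; the model id and the stop flag below are placeholders, and this assumes a diffusers release recent enough to support callback_on_step_end:

```python
# Minimal sketch: abort a running diffusers pipeline by raising an exception
# from the callback that fires after every denoising step.
import torch
from diffusers import StableDiffusionPipeline

class StopGeneration(Exception):
    """Raised to abort the denoising loop early."""

stop_requested = False  # e.g. flipped from a UI handler or another thread

def on_step_end(pipe, step, timestep, callback_kwargs):
    if stop_requested:
        raise StopGeneration(f"aborted at step {step}")
    return callback_kwargs

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

try:
    image = pipe("a photo of an astronaut", callback_on_step_end=on_step_end).images[0]
except StopGeneration as err:
    print(err)  # generation was interrupted; no image is produced
```
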
0 votes
1 answer
157 views

Is HuggingFace Accelerate's init_empty_weights Context Manager (Properly) Implemented for a Diffuser?

HuggingFace Accelerate's init_empty_weights() properly loads all the text encoders I tested to the PyTorch meta device and consumes no apparent memory or disk space while loaded. However, it ...
asked by Matthew Ross
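
For reference, a minimal sketch of init_empty_weights with a diffusers model built from a config rather than from_pretrained (the model id is just an example, not from the question):

```python
# Sketch: instantiate a diffusers UNet on the PyTorch meta device with
# accelerate's init_empty_weights, so no real storage is allocated.
from accelerate import init_empty_weights
from diffusers import UNet2DConditionModel

config = UNet2DConditionModel.load_config(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
with init_empty_weights():
    unet = UNet2DConditionModel.from_config(config)

print(next(unet.parameters()).device)  # meta -> parameters hold no memory yet
```
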
0 votes
0 answers
207 views

IP-adapter plus face model not working as expected

I came from these two links: https://huggingface.co/h94/IP-Adapter-FaceID https://stable-diffusion-art.com/consistent-face/ Both mention that I can preserve the face ID with the ControlNet model. So I ...
asked by daisy (23.7k rep)
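
A sketch of one way to load the plus-face IP-Adapter in diffusers; the file names follow the h94/IP-Adapter repository layout and the reference image path is hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter-plus-face_sd15.bin")
pipe.set_ip_adapter_scale(0.7)  # how strongly the reference face is applied

face = load_image("reference_face.png")  # hypothetical local file
image = pipe("a person hiking in the alps", ip_adapter_image=face).images[0]
```

Note that the FaceID checkpoints from the first link use a different, face-embedding-based workflow than the plus-face checkpoint sketched here.
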
5 votes
1 answer
23k views

ImportError: cannot import name 'cached_download' from 'huggingface_hub'

huggingface_hub==0.27.1 diffusers==0.28.0 I am getting this error: Traceback (most recent call last): File "/data/om/Lotus/infer.py", line 11, in <module> from diffusers.utils ...
asked by Om Rastogi (1,067 rep)
0 votes
1 answer
2k views

ModuleNotFoundError: No module named 'diffusers.models.unet_2d_blocks'

When I use diffusers in the https://github.com/alvinliu0/HumanGaussian project, I get this error: Traceback (most recent call last): File "launch.py", line 239, in <module> ...
asked by x k G (1 rep)
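
Newer diffusers releases moved this module under diffusers.models.unets, so older third-party code can break on the import. A small compatibility shim is one workaround (the exact version where the move happened is not verified here):

```python
# Sketch: try the old import path first, fall back to the new location.
try:
    from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D  # older diffusers
except ModuleNotFoundError:
    from diffusers.models.unets.unet_2d_blocks import CrossAttnDownBlock2D  # newer diffusers
```

Pinning the diffusers version the project was written against is the other common fix.
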
0 votes
0 answers
178 views

Diffusers pipeline: InstantID with IP-Adapter

I want to use an implementation of InstantID with IP-Adapter using the Diffusers library. So far I have: import diffusers from diffusers.utils import load_image from diffusers.models import ControlNetModel ...
asked by Felox (502 rep)
1 vote
1 answer
141 views

Differences in the number of ResNet blocks in up blocks and the number of channels for the UNet2D model in diffusers

I have been reading about UNets and Stable Diffusion and want to train one. I understand the original UNet architecture and how its channels, height, and width evolve over down blocks and up ...
asked by Krishna Dave
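
For reference, a sketch of how those quantities are configured in diffusers' UNet2DModel; the values below are illustrative, not from the question:

```python
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=64,
    in_channels=3,
    out_channels=3,
    layers_per_block=2,                       # ResNet blocks per down stage
    block_out_channels=(128, 256, 512, 512),  # channels at each resolution
    down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
)
# Up blocks get layers_per_block + 1 ResNet blocks so they can consume the
# skip connections arriving from the corresponding down blocks.
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```
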
2 votes
1 answer
404 views

Huge memory consumption with SD3.5-medium

I have a g4dn.xlarge AWS GPU instance with 16 GB of memory + 48 GB of swap, and a Tesla T4 GPU with 16 GB of VRAM. According to the Stability blog, that should be sufficient to run the SD3.5 Medium model. ...
asked by daisy (23.7k rep)
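
A sketch of the usual memory-saving setup for a 16 GB card (the prompt is a placeholder):

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # keep only the active sub-model on the GPU

image = pipe("a watercolor fox", num_inference_steps=28).images[0]
image.save("fox.png")
```
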
0 votes
0 answers
397 views

Stable Diffusion 3.5 Turbo extremely slow using diffusers library

I'm running the example code directly from the Hugging Face Stable Diffusion 3.5 page (link) and getting extremely slow run times, averaging 90 seconds per iteration. For reference, when I use Stable ...
asked by ProfessionalFrog
1 vote
1 answer
467 views

Cannot merge Lora weights back to the Flux Dev base model

I have a Flux-Dev base model which has been trained with the LoRA technique using the SimpleTuner framework (https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md). The ...
asked by user1875136
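
A sketch of one way to bake LoRA weights into the base model with diffusers' LoRA loader; the paths are hypothetical, and this assumes the SimpleTuner output is in a diffusers-compatible LoRA format:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("/path/to/lora", weight_name="pytorch_lora_weights.safetensors")
pipe.fuse_lora(lora_scale=1.0)   # merge the LoRA deltas into the base weights
pipe.unload_lora_weights()       # drop the now-redundant adapter modules
pipe.save_pretrained("/path/to/merged-flux-dev")
```
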
0 votes
1 answer
810 views

Flux.1 Schnell image generator issue: GPU resources exhausted after 1 prompt

So, I tried to train a prompt-based image generation model using FLUX.1-schnell. I used Lightning AI Studio (an alternative to Google Colab), which gave me access to an L40 GPU with 48 GB ...
asked by ACHINTYA GUPTA
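
A sketch of keeping VRAM flat across prompts; the model id is the public FLUX.1-schnell checkpoint and the prompts are placeholders:

```python
import gc
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # stream sub-models to the GPU on demand

for prompt in ["a red bicycle", "a lighthouse at dusk"]:
    image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
    image.save(f"{prompt.replace(' ', '_')}.png")
    gc.collect()
    torch.cuda.empty_cache()  # release cached blocks between prompts
```
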
4 votes
2 answers
2k views

Issue loading FluxPipeline components

import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_pretrained('C:\\Python\\Projects\\test1\\flux1dev', torch_dtype=torch.bfloat16) pipe.enable_sequential_cpu_offload() prompt = ...
asked by Donald Moore
1 vote
1 answer
321 views

Shapes mismatch while training diffusers/UNet2DConditionModel

I am trying to train diffusers/UNet2DConditionModel from scratch. Currently I get an error on the UNet forward pass: mat1 and mat2 shapes cannot be multiplied (288x512 and 1280x512). I noticed that mat1's first ...
asked by u1ug (11 rep)
1 vote
1 answer
925 views

Stable Diffusion 3 does not work with diffusers

I'm trying to use Stable Diffusion 3 on my desktop, but it doesn't work. I made a test.py file that is mostly the same as the sample code on Hugging Face; the only difference is the authentication. The sample ...
asked by Tadashi (11 rep)
1 vote
0 answers
102 views

GPU out of memory using Hugging Face

PyTorch is throwing a GPU out-of-memory error. This is the code: from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler from diffusers.utils import load_image ...
asked by sreerag m
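
A sketch of the usual VRAM reductions for a ControlNet pipeline; the model ids are the standard public ones, not necessarily the asker's:

```python
import torch
from diffusers import (StableDiffusionControlNetPipeline, ControlNetModel,
                       UniPCMultistepScheduler)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # stream sub-models to the GPU on demand
pipe.enable_attention_slicing()  # trade a little speed for lower peak memory
```
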
0 votes
1 answer
238 views

stabilityai/stable-cascade takes 7+ hours to generate an image

I am using this model: https://huggingface.co/stabilityai/stable-cascade from diffusers import StableCascadeCombinedPipeline print("LOADING MODEL") pipe = StableCascadeCombinedPipeline....
asked by x89 (3,522 rep)
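
Run times measured in hours usually mean the pipeline is executing on the CPU. A sketch roughly following the model card, with a placeholder prompt:

```python
import torch
from diffusers import StableCascadeCombinedPipeline

pipe = StableCascadeCombinedPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.bfloat16
).to("cuda")  # without .to("cuda") everything runs on the CPU

image = pipe("an astronaut riding a horse").images[0]
image.save("astronaut.png")
```
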
0 votes
1 answer
84 views

Runtime Error: StableCascadeCombinedPipeline: Expected all tensors to be on the same device

In a nutshell: attempting to pass an image into StableCascadeCombinedPipeline gives a runtime error complaining that the tensors are not all on cuda. The app works perfectly if I comment out the image ...
asked by Mike Ellis (1,330 rep)
2 votes
0 answers
461 views

How to use batch prediction with Diffusers' StableDiffusionXLImg2ImgPipeline

I'm currently exploring the StableDiffusion Image to Image library within HuggingFace. My goal is to generate images similar to the ones I have stored in a folder. Currently, I'm using the following ...
asked by Adarsh Wase (1,931 rep)
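
A sketch of batched img2img: the pipeline accepts a list of prompts and a matching list of init images (the image paths here are hypothetical):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

init_images = [load_image(p) for p in ["img_0.png", "img_1.png"]]
prompts = ["same scene, golden hour", "same scene, in winter"]

results = pipe(prompt=prompts, image=init_images, strength=0.6).images
for i, img in enumerate(results):
    img.save(f"variation_{i}.png")
```
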
-1 votes
1 answer
439 views

How does the diffusers progress bar work?

I'm upscaling an image using StableDiffusionUpscalePipeline, but the second timestamp does not appear to be the total estimated duration after all. What is it, then, and how do I get a useful estimate? ...
asked by Cees Timmerman
1 vote
0 answers
397 views

When I use "ControlNetModel.from_single_file" in diffusers and give it the correct path, it doesn't work

Code: from diffusers import StableDiffusionControlNetPipeline, ControlNetModel import torch import cv2 img = cv2.imread('/home/leo/blues.png') controlnet_model_path = "/mnt/d/models/...
asked by leo-ke (11 rep)
3 votes
3 answers
3k views

Generating preview images with Stable Diffusion XL pipeline results in black images

I'm working with the Stable Diffusion XL (SDXL) model from Hugging Face's diffusers library and encountering an issue where my callback function, intended to generate preview images during the ...
asked by kamza (105 rep)
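
A sketch of a per-step preview callback; black previews are often the fp16 SDXL VAE overflowing, so this decodes with the community fp16-fix VAE (prompt and file names are placeholders):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix",
                                    torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

def save_preview(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    with torch.no_grad():
        decoded = pipeline.vae.decode(
            latents / pipeline.vae.config.scaling_factor
        ).sample
    pipeline.image_processor.postprocess(decoded)[0].save(f"preview_{step:03d}.png")
    return callback_kwargs

image = pipe("a lighthouse at dawn", callback_on_step_end=save_preview).images[0]
```
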
1 vote
0 answers
507 views

How to change the text_encoder for a Stable Diffusion model for fine-tuning?

I want to use Stable Diffusion model weights to generate class-conditional images; however, I don't want these images to be conditioned on a text prompt, but rather on a number of binary class ...
asked by Tomas Premoli Muniagurria
-2 votes
1 answer
793 views

I use Diffusers to train a LoRA. The training images are my photos, but the resulting image doesn't look like me

Here is my training code. from accelerate.utils import write_basic_config write_basic_config() import os os.environ["MODEL_NAME"] = "runwayml/stable-diffusion-v1-5" os.environ[&...
asked by Han Pengbo (1,436 rep)
0 votes
1 answer
650 views

How to generate images with the same seed but with different kinds of noise schedulers using Diffusers

I'm trying to use the Diffusers library to generate images with different schedulers (just generating images; I don't want to have to prompt). For this I followed this tutorial: https://huggingface.co/docs/...
asked by Gesser (1 rep)
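
A sketch of the usual pattern: fix the seed with a torch.Generator and swap schedulers via from_config, so only the sampler changes between runs (model id and prompt are placeholders):

```python
import torch
from diffusers import (StableDiffusionPipeline, DDIMScheduler,
                       EulerDiscreteScheduler, DPMSolverMultistepScheduler)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

schedulers = {
    "ddim": DDIMScheduler,
    "euler": EulerDiscreteScheduler,
    "dpmpp": DPMSolverMultistepScheduler,
}

for name, cls in schedulers.items():
    pipe.scheduler = cls.from_config(pipe.scheduler.config)
    generator = torch.Generator("cuda").manual_seed(42)  # same seed every run
    image = pipe("a misty forest", generator=generator).images[0]
    image.save(f"forest_{name}.png")
```
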
1 vote
0 answers
110 views

How to wipe gradients from UNet2DConditionModel

I am working with the "CompVis/ldm-text2im-large-256" building on top of the prompt-to-prompt code. model = DiffusionPipeline.from_pretrained(model_id, height=IMAGE_RES, width=IMAGE_RES).to(...
asked by python_noob
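
A sketch of clearing accumulated gradients on the UNet inside that pipeline (the extra from_pretrained arguments from the question are omitted):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
unet = pipe.unet

# After a backward pass, drop the stored gradients:
unet.zero_grad(set_to_none=True)  # standard torch.nn.Module API

# Or avoid building gradients on the UNet in the first place:
for p in unet.parameters():
    p.requires_grad_(False)
```
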
1 vote
1 answer
2k views

cannot import name 'AutoPipelineForText2Imag' from 'diffusers'

I am trying to run a Hugging Face AI model, and I get an error when I try to import from the diffusers module. From here I took this model: the Hugging Face text-to-image generation model. Error log: ...
asked by Ahmad Mujtaba
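
The traceback points at a misspelled class name; the class exported by diffusers is AutoPipelineForText2Image (with a trailing "e"). A sketch with a placeholder model id:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a cup of coffee on a wooden table").images[0]
```
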
1 vote
1 answer
1k views

RAM won't clear (diffusers)

I'm writing a script that sometimes needs to switch models (diffusers & llama-cpp-python). I don't have much RAM or VRAM, so I need to free RAM and VRAM after using a model. llama is fine, I just ...
asked by minto (11 rep)
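
A sketch of the usual teardown when swapping diffusers models (the model id is a placeholder):

```python
import gc
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# ... use the pipeline ...

del pipe                   # drop the Python reference
gc.collect()               # let Python free the host-side tensors
torch.cuda.empty_cache()   # return cached VRAM blocks to the driver
```
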
0 votes
1 answer
1k views

Diffusers bug: "UNet2DConditionModel" has no attribute "weight"

When using the official script "convert_lora_safetensor_to_diffusers.py", I tried to load a basemodel by using pipeline = StableDiffusionPipeline.from_pretrained(base_model_path, ...
asked by borylee
1 vote
0 answers
486 views

How do the VAE and UNet sample sizes work in HF Diffusers?

Does anyone know how sample size works in SD's VAE and UNet? All I know is that SD v1.5 was trained at 512*512, so it generates 512*512 more reliably. But when I set the pipeline to something like 384*384 or ...
asked by MAPLE LEAF
-1 votes
1 answer
3k views

stabilityai/stable-diffusion-2-1-base does not appear to have a file named scheduler_config.json

python preprocess.py --data_path data/horse.jpg --inversion_prompt 'A black horse runs in the sand' Traceback (most recent call last): File "/data1/wz/anaconda3/envs/pnp-diffusers/lib/python3.9/...
asked by Our bank
1 vote
2 answers
3k views

Adding multiple LoRa safetensors to my HuggingFace model in Python

Suppose I use this script to load one fine-tuned model: (example taken from https://towardsdatascience.com/hugging-face-diffusers-can-correctly-load-lora-now-a332501342a3) import torch from diffusers ...
asked by D.Giunchi (1,960 rep)
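
A sketch of diffusers' multi-adapter LoRA API (requires peft; the repository ids and adapter names here are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("some-user/watercolor-lora", adapter_name="watercolor")
pipe.load_lora_weights("some-user/lineart-lora", adapter_name="lineart")
pipe.set_adapters(["watercolor", "lineart"], adapter_weights=[0.8, 0.5])

image = pipe("a castle on a cliff").images[0]
```
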
2 votes
0 answers
567 views

SDXL requires_aesthetics_score=True error

I'm trying to figure this one out, been at it for a while and can't seem to make any headway. Any help is greatly appreciated!!! It is my first time using Stable Diffusion so maybe I'm missing ...
asked by Mark Dabler
11 votes
5 answers
36k views

How to fix the NSFW error for Stable Diffusion?

I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using stable ...
asked by Niklas Mohler
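
The black image comes from the pipeline's safety checker. A sketch of disabling it, at your own responsibility (the model id is a placeholder); retrying with another prompt or seed is the lighter-touch option:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None,            # disables the NSFW filter entirely
    requires_safety_checker=False,  # silences the warning about removing it
).to("cuda")

image = pipe("portrait photo of a person reading a book").images[0]
```
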