
Suppose I use this script to load one fine-tuned model: (example taken from https://towardsdatascience.com/hugging-face-diffusers-can-correctly-load-lora-now-a332501342a3)

import torch
from diffusers import StableDiffusionPipeline

text2img_pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/deliberate-v2",
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda:0")

lora_path = "<path/to/lora.safetensors>"  # a single .safetensors file, not a folder
text2img_pipe.load_lora_weights(lora_path)

This loads a single safetensors file. How can I load multiple safetensors files? I tried the use_safetensors argument when instantiating the StableDiffusionPipeline, but it is unclear where I should put the safetensors folder I have. I get the following error:

OSError: Could not found the necessary safetensors weights in {'vae/diffusion_pytorch_model.safetensors', 'text_encoder/pytorch_model.bin', 'safety_checker/model.safetensors', 'vae/diffusion_pytorch_model.bin', 'text_encoder/model.safetensors', 'unet/diffusion_pytorch_model.bin', 'safety_checker/pytorch_model.bin', 'unet/diffusion_pytorch_model.safetensors'} (variant=None)

I have also tried loading the weights one after the other, but the results suggest that each load replaces the previously loaded weights.
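
A minimal sketch of that attempt (paths are placeholders):

text2img_pipe.load_lora_weights("<path/to/lora_a.safetensors>")
text2img_pipe.load_lora_weights("<path/to/lora_b.safetensors>")

After this, only the second LoRA seems to have any effect on the output.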

2 Answers


You can load multiple LoRAs, and in fact you always could, although it used to require Kohya's LoRA scripts. Now Diffusers can load them directly. Below is a method I use to load and unload LoRAs by simply passing in the set of LoRAs I currently want active.

import hashlib
import json

def hash_dict(d):
    # Hash the LoRA settings so we can detect changes between calls
    dict_string = json.dumps(d, sort_keys=True)
    return hashlib.sha256(dict_string.encode()).hexdigest()

def load_loras(pipe, settings):
    active_adapters = pipe.get_active_adapters()
    set_adapters_hash = hash_dict(settings["lora"])
    set_loras = []
    set_weights = []
    if len(settings["lora"]) > 0:
        pipe.enable_lora()
        print("Checking if the LoRA settings have changed...")
        print(f"Stored: {getattr(pipe, 'set_adapters_hash', None)}")
        print(f"Current: {set_adapters_hash}")

        # I make and compare a hash to check whether the LoRAs changed,
        # as I keep my pipe in memory between calls.
        if getattr(pipe, 'set_adapters_hash', None) == set_adapters_hash:
            print("LoRA settings have not changed")
            return 'LoRA settings have not changed'

        # Unfuse the previous configuration before changing adapters
        pipe.unfuse_lora()

        for lora in settings["lora"]:
            file_name = lora["file_name"] or lora["name"]
            adapter_name = file_name.replace(".", "")
            # get_active_adapters() returns adapter names, so compare
            # against adapter_name rather than file_name
            if adapter_name not in active_adapters:
                print(f"Loading LoRA: {file_name}")
                try:
                    pipe.load_lora_weights(
                        f"./assets/lora/{file_name}.safetensors",
                        weight_name=f"{file_name}.safetensors",
                        adapter_name=adapter_name,
                    )
                except Exception:
                    print("Probably loaded already")
            else:
                print(f"LoRA: {file_name} already loaded")
            set_loras.append(adapter_name)
            set_weights.append(lora["weight"])

        pipe.set_adapters(set_loras, set_weights)
        pipe.set_adapters_hash = set_adapters_hash
        pipe.fuse_lora()
    else:
        pipe.disable_lora()
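
For reference, here is a sketch of the settings structure this function expects (the file names and weights are hypothetical, and the files are assumed to live under ./assets/lora/):

settings = {
    "lora": [
        # "name" is only a fallback used when "file_name" is empty
        {"file_name": "style_lora", "name": None, "weight": 1.0},
        {"file_name": "detail_lora", "name": None, "weight": 0.6},
    ]
}
load_loras(text2img_pipe, settings)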

Below is another example without all the loops and hashing. Hopefully it is easier to understand.

def load_two_loras(pipe):
    # Hardcoded Loras information
    lora1 = {"file_name": "lora1", "weight": 1.0}
    lora2 = {"file_name": "lora2", "weight": 0.5}

    # Enable Lora if it was disabled
    pipe.enable_lora()

    # Unfuse previous settings if any
    pipe.unfuse_lora()

    # Load Lora 1
    try:
        pipe.load_lora_weights(
            f"./assets/lora/{lora1['file_name']}.safetensors",
            weight_name=f"{lora1['file_name']}.safetensors",
            adapter_name=lora1['file_name'].replace(".", "")
        )
    except Exception:
        print(f"LoRA: {lora1['file_name']} probably loaded already")

    # Load Lora 2
    try:
        pipe.load_lora_weights(
            f"./assets/lora/{lora2['file_name']}.safetensors",
            weight_name=f"{lora2['file_name']}.safetensors",
            adapter_name=lora2['file_name'].replace(".", "")
        )
    except Exception:
        print(f"LoRA: {lora2['file_name']} probably loaded already")

    # Set and fuse the loaded Loras
    set_loras = [lora1['file_name'], lora2['file_name']]
    set_weights = [lora1['weight'], lora2['weight']]
    pipe.set_adapters(set_loras, set_weights)
    pipe.fuse_lora()
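
A minimal usage sketch, assuming the pipe from the question and that lora1.safetensors and lora2.safetensors exist under ./assets/lora/:

load_two_loras(text2img_pipe)
image = text2img_pipe(
    "a portrait photo, highly detailed",  # example prompt
    num_inference_steps=30,
).images[0]
image.save("output.png")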

2 Comments

NameError: name 'hash_dict' is not defined?
Keep reading and go to the second example. The first one adds a hash to check whether the LoRAs are already loaded, but you probably just want the second one, which simply loads multiple LoRAs, nothing fancy.

Currently you can only use one LoRA; loading multiple LoRAs is a planned Diffusers feature, currently labeled as WIP: https://github.com/huggingface/diffusers/issues/2613

