Textual inversion not working #46
I am unable to run textual inversion, so I tried the provided example, but even that fails. Please help.
It says there is no CUDA GPU, yet everything is set up: the model itself works and generates the obj output. Why is this not working?
(magic123) wakeel_furqan@coeai-bahria:~/Magic123$ bash scripts/textual_inversion/textual_inversion.sh runwayml/stable-diffusion-v1-5 data/demo/a-full-body-ironman/rgba.png out/textual_inversion/ironman ironman ironman --max_train_steps 3000
scripts/textual_inversion/textual_inversion.sh: line 13: module: command not found
===> Anaconda env loaded
scripts/textual_inversion/textual_inversion.sh: line 18: magic123/bin/activate: No such file or directory
Sun Dec 3 23:55:25 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A6000 Off | 00000000:2D:00.0 On | Off |
| 30% 29C P8 23W / 300W | 1143MiB / 49140MiB | 4% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2837 C+G ...ome-remote-desktop-daemon 264MiB |
| 0 N/A N/A 95585 G /usr/lib/xorg/Xorg 695MiB |
| 0 N/A N/A 95751 G /usr/bin/gnome-shell 50MiB |
| 0 N/A N/A 153562 G ...8/usr/lib/firefox/firefox 23MiB |
+-----------------------------------------------------------------------------+
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
coeai-bahria
number of gpus: 1
usage: textual_inversion.py [-h] [--save_steps SAVE_STEPS] [--only_save_embeds] --pretrained_model_name_or_path PRETRAINED_MODEL_NAME_OR_PATH [--revision REVISION] [--tokenizer_name TOKENIZER_NAME]
--train_data_dir TRAIN_DATA_DIR --placeholder_token PLACEHOLDER_TOKEN --initializer_token INITIALIZER_TOKEN [--learnable_property LEARNABLE_PROPERTY] [--repeats REPEATS]
[--output_dir OUTPUT_DIR] [--seed SEED] [--resolution RESOLUTION] [--center_crop] [--train_batch_size TRAIN_BATCH_SIZE] [--num_train_epochs NUM_TRAIN_EPOCHS]
[--max_train_steps MAX_TRAIN_STEPS] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--gradient_checkpointing] [--learning_rate LEARNING_RATE] [--scale_lr]
[--lr_scheduler LR_SCHEDULER] [--lr_warmup_steps LR_WARMUP_STEPS] [--dataloader_num_workers DATALOADER_NUM_WORKERS] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2]
[--adam_weight_decay ADAM_WEIGHT_DECAY] [--adam_epsilon ADAM_EPSILON] [--push_to_hub] [--hub_token HUB_TOKEN] [--hub_model_id HUB_MODEL_ID] [--logging_dir LOGGING_DIR]
[--mixed_precision {no,fp16,bf16}] [--allow_tf32] [--report_to REPORT_TO] [--validation_prompt VALIDATION_PROMPT] [--num_validation_images NUM_VALIDATION_IMAGES]
[--validation_steps VALIDATION_STEPS] [--validation_epochs VALIDATION_EPOCHS] [--local_rank LOCAL_RANK] [--checkpointing_steps CHECKPOINTING_STEPS]
[--checkpoints_total_limit CHECKPOINTS_TOTAL_LIMIT] [--resume_from_checkpoint RESUME_FROM_CHECKPOINT] [--enable_xformers_memory_efficient_attention] [--use_augmentations]
textual_inversion.py: error: unrecognized arguments: 3000
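The `unrecognized arguments: 3000` error means `argparse` received `3000` as a stray positional, so the wrapper script is probably not forwarding the extra flags intact. A minimal sketch (an assumption about the script, not its actual contents): pass everything through with `"$@"` so each flag and its value stay separate argv entries:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: forward every extra CLI argument untouched with "$@".
# Quoting keeps "--max_train_steps" and "3000" as two separate argv entries,
# so argparse pairs them up instead of seeing a stray positional.
run_textual_inversion() {
  # echo the command instead of running it, so the forwarding is visible
  echo python textual_inversion.py "$@"
}
run_textual_inversion --max_train_steps 3000
# prints: python textual_inversion.py --max_train_steps 3000
```

If the script instead splices extra options into a single quoted string, the shell hands them to Python as one word and argparse cannot match them.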
Loading pipeline components...: 57%|██████████████████████████████████████████████████████████████████████████████████▎ | 4/7 [00:00<00:00, 5.61it/s]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 6.49it/s]
learned_embeds_path ironman does not exist!
Traceback (most recent call last):
File "/home/wakeel_furqan/Magic123/guidance/sd_utils.py", line 682, in <module>
sd = StableDiffusion(device, opt.fp16, opt.vram_O,
File "/home/wakeel_furqan/Magic123/guidance/sd_utils.py", line 189, in __init__
pipe.to(device)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 852, in to
module.to(device, dtype)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2271, in to
return super().to(*args, **kwargs)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
return self._apply(convert)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/wakeel_furqan/.conda/envs/magic123/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
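Since `nvidia-smi` above shows a working A6000, "No CUDA GPUs are available" usually means this particular process cannot see the device, not that the driver is broken. A common culprit (an assumption worth checking here, given the wrapper script's earlier `module`/`activate` failures) is `CUDA_VISIBLE_DEVICES` being exported empty or wrong by the launch script:

```python
import os

# Hypothetical diagnostic, not part of Magic123: show what a CUDA process
# launched with a given environment would be allowed to see.
def visible_gpu_indices(env):
    """Return the GPU indices visible under `env`, None meaning 'all GPUs'."""
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None          # unset: every GPU is visible
    value = value.strip()
    if not value:
        return []            # empty string: no GPUs -> torch raises the error above
    return [int(i) for i in value.split(",")]

print(visible_gpu_indices(dict(os.environ)))  # what the current shell exports
```

Running `echo $CUDA_VISIBLE_DEVICES` right before the failing line, or `python -c "import torch; print(torch.cuda.is_available())"` inside the same environment, should narrow this down.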
Hi, could you check the official textual inversion example from diffusers: https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion
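To bypass the wrapper script entirely, a direct call can be built from nothing but the usage message printed above. This is a sketch: the paths and token names simply mirror the original command and are assumptions to adapt (the command is echoed here rather than executed):

```shell
# Hypothetical direct invocation; only flags from the printed usage are used.
cmd="python textual_inversion.py \
  --pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 \
  --train_data_dir data/demo/a-full-body-ironman \
  --placeholder_token ironman \
  --initializer_token ironman \
  --output_dir out/textual_inversion/ironman \
  --max_train_steps 3000"
echo "$cmd"
```

Note that `--max_train_steps` is passed as its own flag with its own value, which is exactly what the failing run never delivered to argparse.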