I'm currently experiencing a token-related issue #30
(venv_magic123) (base) root@36d231460096:~/Magic123# bash scripts/textual_inversion/textual_inversion.sh runwayml/stable-diffusion-v1-5 data/nerf4/chair chair chair --max_train_steps=3000
===> Anaconda env loaded
36d231460096
number of gpus: 4
09/08/2023 05:00:40 - INFO - main - Namespace(adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, adam_weight_decay=0.01, allow_tf32=False, center_crop=False, checkpointing_steps=500, checkpoints_total_limit=None, dataloader_num_workers=0, enable_xformers_memory_efficient_attention=False, gradient_accumulation_steps=1, gradient_checkpointing=False, hub_model_id=None, hub_token=None, initializer_token='', learnable_property='object', learning_rate=0.0001, local_rank=-1, logging_dir='logs', lr_scheduler='constant', lr_warmup_steps=0, max_train_steps=3000, mixed_precision='no', num_train_epochs=100, num_validation_images=4, only_save_embeds=False, output_dir='chair', placeholder_token='--max_train_steps=3000', pretrained_model_name_or_path='data/nerf4/chair', push_to_hub=False, repeats=100, report_to='tensorboard', resolution=512, resume_from_checkpoint=None, revision=None, save_steps=500, scale_lr=False, seed=None, tokenizer_name=None, train_batch_size=16, train_data_dir='chair', use_augmentations=True, validation_epochs=None, validation_prompt=None, validation_steps=100)
09/08/2023 05:00:40 - INFO - main - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cpu
Mixed precision type: no
Traceback (most recent call last):
  File "textual-inversion/textual_inversion.py", line 927, in <module>
    main()
  File "textual-inversion/textual_inversion.py", line 628, in main
    tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
  File "/root/Magic123/venv_magic123/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1838, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'data/nerf4/chair'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'data/nerf4/chair' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
I'm currently experiencing a token-related issue as shown above. Could you please assist me in resolving this? Thank you
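A likely cause is visible in the Namespace printed above: `pretrained_model_name_or_path='data/nerf4/chair'` and `placeholder_token='--max_train_steps=3000'`, i.e. every positional argument is shifted one slot to the left. The working invocation later in this thread passes a GPU index as the first argument to the script, so omitting it would produce exactly this shift. A minimal sketch of the effect, using a hypothetical parser (not the actual Magic123 argument list):

```python
import argparse

# Hypothetical positional order, assuming the shell script expects a GPU
# index first (as in the working command elsewhere in this thread).
parser = argparse.ArgumentParser()
parser.add_argument("gpu_index")
parser.add_argument("pretrained_model_name_or_path")
parser.add_argument("train_data_dir")

# The failing invocation omitted the leading GPU index, so each
# positional argument lands one slot to the left:
args = parser.parse_args(
    ["runwayml/stable-diffusion-v1-5", "data/nerf4/chair", "chair"]
)
print(args.gpu_index)                      # model id misread as GPU index
print(args.pretrained_model_name_or_path)  # data path misread as model path
```

This matches the Namespace output, where the tokenizer is then looked up under `data/nerf4/chair` and fails.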
I want to implement data parallel processing due to memory limitations. Are there any methods available to use multiple GPUs?
bash scripts/textual_inversion/textual_inversion.sh 0 runwayml/stable-diffusion-v1-5 data/demo/a-fullbody-ironman/rgba.png out/textual_inversion/ironman ironman ironman
-> torch.cuda.OutOfMemoryError: CUDA out of memory.
bash scripts/textual_inversion/textual_inversion.sh runwayml/stable-diffusion-v1-5 data/demo/a-fullbody-ironman/rgba.png out/textual_inversion/ironman ironman ironman --max_train_steps 3000
-> RuntimeError: No CUDA GPUs are available
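The two failures are consistent with the same argument shift: with the leading `0`, the arguments line up but the single GPU runs out of memory; without it, the GPU index slot receives the model id instead. A sketch of why that yields "No CUDA GPUs are available", assuming the script exports its first positional argument as the visible device (an assumption — check `textual_inversion.sh` itself):

```shell
# Simulate the second invocation, which omits the leading GPU index:
set -- runwayml/stable-diffusion-v1-5 data/demo/a-fullbody-ironman/rgba.png
export CUDA_VISIBLE_DEVICES="$1"   # becomes a model name, not "0"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

With `CUDA_VISIBLE_DEVICES` set to a non-numeric string, PyTorch sees no usable devices and raises exactly this RuntimeError.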
@kimsh0622 you can use threestudio's reimplementation. It supports multi-GPU. https://github.com/threestudio-project/threestudio#magic123-
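Before switching implementations, the flags already visible in the Namespace above can reduce memory on a single GPU. A sketch (the values are illustrative, and it assumes the wrapper script forwards extra flags to `textual_inversion.py`):

```shell
bash scripts/textual_inversion/textual_inversion.sh 0 runwayml/stable-diffusion-v1-5 \
  data/demo/a-fullbody-ironman/rgba.png out/textual_inversion/ironman ironman ironman \
  --train_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention
```

Lowering `train_batch_size` while raising `gradient_accumulation_steps` keeps the effective batch size while cutting peak memory; gradient checkpointing and xFormers attention trade compute time for further savings.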
Did you find the solution?
I am also having the same issue, but I have a single GPU.
https://github.com/guochengqian/Magic123/issues/46