assets/advanced.md
# Code organization & Advanced tips

This is a simple description of the most important implementation details.
If you are interested in improving this repo, this might be a starting point.
Any contribution would be greatly appreciated!
* The SDS loss is located at `./guidance/sd_utils.py > StableDiffusion > train_step`:

```python
# 1. we need to interpolate the NeRF rendering to 512x512 to feed it to SD's VAE.
pred_rgb_512 = F.interpolate(pred_rgb, (512, 512), mode='bilinear', align_corners=False)

# 2. image (512x512) --- VAE --> latents (64x64); this is SD's difference from Imagen.
latents = self.encode_imgs(pred_rgb_512)

... # timestep sampling, noise adding and UNet noise predicting

# 3. the SDS loss
w = (1 - self.alphas[t])
grad = w * (noise_pred - noise)

# since the UNet part is ignored and cannot simply be autodiffed, we have two ways to set the grad:

# 3.1. call backward and set the grad now (need to retain the graph, since we will call a second backward for the other losses later)
latents.backward(gradient=grad, retain_graph=True)
return 0 # dummy loss

# 3.2. use a custom function to set a hook in backward, so we only call backward once (credits to @elliottzheng)
class SpecifyGradient(torch.autograd.Function):
    @staticmethod
    @custom_fwd
    def forward(ctx, input_tensor, gt_grad):
        ctx.save_for_backward(gt_grad)
        # we return a dummy value 1, which will be scaled by amp's scaler, so we get the scale in backward.
        return torch.ones([1], device=input_tensor.device, dtype=input_tensor.dtype)

    @staticmethod
    @custom_bwd
    def backward(ctx, grad_scale):
        gt_grad, = ctx.saved_tensors
        gt_grad = gt_grad * grad_scale
        return gt_grad, None

loss = SpecifyGradient.apply(latents, grad)
return loss # functional loss
```
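As a sanity check, the second approach can be exercised in isolation. Below is a minimal self-contained sketch (the amp `custom_fwd`/`custom_bwd` decorators and the surrounding trainer are left out; without amp, `grad_scale` is simply 1):

```python
import torch

class SpecifyGradient(torch.autograd.Function):
    # forward returns a dummy scalar; backward injects the precomputed gradient.
    @staticmethod
    def forward(ctx, input_tensor, gt_grad):
        ctx.save_for_backward(gt_grad)
        return torch.ones([1], device=input_tensor.device, dtype=input_tensor.dtype)

    @staticmethod
    def backward(ctx, grad_scale):
        gt_grad, = ctx.saved_tensors
        return gt_grad * grad_scale, None

latents = torch.randn(2, 4, 8, 8, requires_grad=True)
grad = torch.full_like(latents, 0.5)  # stands in for w * (noise_pred - noise)
loss = SpecifyGradient.apply(latents, grad)
loss.backward()
matches = torch.allclose(latents.grad, grad)
print(matches)  # True: latents receive exactly the specified gradient
```

Calling `loss.backward()` once is enough; the dummy loss carries the hook, so the SDS gradient can be combined with the other losses in a single backward pass.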
* Other regularizations are in `./nerf/utils.py > Trainer > train_step`.
    * The generation seems quite sensitive to regularizations on weights_sum (the accumulated alphas along each ray). The original opacity loss tends to make the NeRF disappear (zero density everywhere), so we replace it with an entropy loss for now, which encourages each alpha to be either 0 or 1.
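The entropy regularization described above can be sketched as follows (a minimal hypothetical version; the repo's exact weighting and variable names may differ):

```python
import torch

def alpha_entropy_loss(weights_sum: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # binary entropy of the per-ray opacity: minimal near 0 or 1, maximal at 0.5
    a = weights_sum.clamp(eps, 1 - eps)
    return (-a * torch.log(a) - (1 - a) * torch.log(1 - a)).mean()

# nearly empty or nearly opaque rays are barely penalized...
low = alpha_entropy_loss(torch.tensor([0.01, 0.99]))
# ...while half-transparent rays are pushed toward 0 or 1
high = alpha_entropy_loss(torch.tensor([0.5]))
print(low.item() < high.item())  # True
```

Unlike an L2 penalty on opacity, this loss has minima at both 0 and 1, so it does not push every ray toward zero density.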
* NeRF rendering core function: `./nerf/renderer.py > NeRFRenderer > run & run_cuda`.
* Shading & normal evaluation: `./nerf/network*.py > NeRFNetwork > forward`.
    * Light direction: the current implementation uses a plane light source instead of a point light source.
* View-dependent prompting: `./nerf/provider.py > get_view_direction`.
    * Use `--angle_overhead, --angle_front` to set the borders.
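The idea behind view-dependent prompting can be sketched like this (a simplified scalar version; the real `get_view_direction` works on batched tensors, and the thresholds and region handling here are illustrative):

```python
def view_direction_sketch(azimuth_deg: float, polar_deg: float,
                          angle_overhead: float = 30.0, angle_front: float = 60.0) -> str:
    # map camera angles to the word appended to the text prompt ("..., front view" etc.);
    # angle_overhead / angle_front play the role of the --angle_overhead / --angle_front borders
    if polar_deg < angle_overhead:
        return 'overhead'
    az = azimuth_deg % 360
    if az < angle_front / 2 or az >= 360 - angle_front / 2:
        return 'front'
    if abs(az - 180) < angle_front / 2:
        return 'back'
    return 'side'

print(view_direction_sketch(0, 90), view_direction_sketch(180, 90))  # front back
```

Appending the inferred view word to the prompt gives SD a hint about which side of the object it is denoising, which helps mitigate the Janus problem.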
* Network backbone (`./nerf/network*.py`) can be chosen by the `--backbone` option.
* Spatial density bias (density blob): `./nerf/network*.py > NeRFNetwork > density_blob`.
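The density blob is an additive bias on the raw density that initializes the scene as a ball of matter at the origin rather than empty space. A minimal sketch (the repo's exact formula, scale, and radius defaults may differ):

```python
import torch

def density_blob(x: torch.Tensor, blob_density: float = 10.0, blob_radius: float = 0.5) -> torch.Tensor:
    # Gaussian bias centered at the origin, added to the network's raw density output
    d2 = (x * x).sum(-1)  # squared distance to the origin, shape (N,)
    return blob_density * torch.exp(-d2 / (2 * blob_radius ** 2))

center = density_blob(torch.zeros(1, 3)).item()  # largest bias at the scene center
far = density_blob(torch.ones(1, 3)).item()      # decays quickly with distance
print(center > far)  # True
```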

# Debugging

`debugpy-run` is a convenient way to remotely debug this project. Simply replace a command like this one:

```bash
python main.py --text "a hamburger" --workspace trial -O --vram_O
```

... with:

```bash
debugpy-run main.py -- --text "a hamburger" --workspace trial -O --vram_O
```

For more details: https://github.com/bulletmark/debugpy-run

# Axes and directions of polar, azimuth, etc. in NeRF and Zero123

<img width="1119" alt="NeRF_Zero123" src="https://github.com/ashawkey/stable-dreamfusion/assets/22424247/a0f432ff-2d08-45a4-a390-bda64f5cbc94">
assets/update_logs.md
### 2023.4.19

* Fix depth supervision; migrate the depth estimation model to omnidata.
* Add normal supervision (also by omnidata).

https://user-images.githubusercontent.com/25863658/232403294-b77409bf-ddc7-4bb8-af32-ee0cc123825a.mp4
### 2023.4.7

* Improve mesh quality & add DMTet finetuning support.

https://user-images.githubusercontent.com/25863658/230535363-298c960e-bf9c-4906-8b96-cd60edcb24dd.mp4
### 2023.3.30

* Adopt ideas from [Fantasia3D](https://fantasia3d.github.io/): concatenate normal and mask as the latent code in a warm-up stage, which shows faster convergence of shape.

https://user-images.githubusercontent.com/25863658/230535373-6ee28f16-bb21-4ec4-bc86-d46597361a04.mp4
### 2023.1.30

* Use an MLP to predict the surface normals as in Magic3D, avoiding finite differences / second-order gradients; generation quality is greatly improved.
* More efficient two-pass raymarching in training, inspired by nerfacc.

https://user-images.githubusercontent.com/25863658/215996308-9fd959f5-b5c7-4a8e-a241-0fe63ec86a4a.mp4
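For context, the finite-difference normal estimation that an MLP normal head avoids looks roughly like this (a sketch; `density_fn` and `eps` are illustrative, not the repo's API):

```python
import torch

def finite_difference_normal(density_fn, x: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    # central differences of the density field; each of the 6 extra density
    # queries goes through the network, and normals point against the gradient
    offsets = torch.eye(3, device=x.device) * eps
    grads = torch.stack(
        [density_fn(x + offsets[i]) - density_fn(x - offsets[i]) for i in range(3)],
        dim=-1,
    )
    return torch.nn.functional.normalize(-grads, dim=-1)

# a density field peaked at the origin should give outward-pointing normals
density = lambda p: torch.exp(-(p * p).sum(-1))
n = finite_difference_normal(density, torch.tensor([[1.0, 0.0, 0.0]]))
print(n)  # ≈ [[1., 0., 0.]]
```

A predicted-normal MLP replaces these extra queries with one forward pass, and keeps backpropagation first-order.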

### 2022.12.3

* Support the Stable Diffusion 2.0 base model.

### 2022.11.15

* Add a vanilla pure-PyTorch backbone.

### 2022.10.9

* Shading (partially) starts to work; at least it won't empty the scene. For some prompts it shows better results (a less severe Janus problem). The textureless rendering mode is still disabled.
* Enable shading by default (--latent_iter_ratio 1000).

### 2022.10.5

* Basic reproduction finished.
* The non-`--cuda_ray` and `--tcnn` modes are not working and need to be fixed.
* Shading is not working and is disabled in utils.py for now. Surface normals are bad.
* Use an entropy loss to regularize weights_sum (alpha); the original L2 regularization always leads to degenerate geometry.

https://user-images.githubusercontent.com/25863658/194241493-f3e68f78-aefe-479e-a4a8-001424a61b37.mp4