From 03471a99b319a3efb207a4234cf5872bbce07a77 Mon Sep 17 00:00:00 2001
From: Gordon Guocheng Qian 钱国成
Date: Tue, 22 Aug 2023 13:48:40 -0700
Subject: [PATCH] Update readme.md

---
 readme.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/readme.md b/readme.md
index 5554ebc..960ef12 100644
--- a/readme.md
+++ b/readme.md
@@ -17,13 +17,20 @@ Training convergence of a demo example:
 
 Compare Magic123 without textual inversion against ablations using only the 2D prior (SDS) or only the 3D prior (Zero123):
 
-https://github.com/guochengqian/Magic123/assets/48788073/e5a3c3cb-bcb1-4b10-8bfb-2c2eb79a9289
+
+https://github.com/guochengqian/Magic123/assets/48788073/c91f4c81-8c2c-4f84-8ce1-420c12f7e886
 
 Effects of the joint prior: increasing the strength of the 2D prior leads to more imagination and more detail, but less 3D consistency.
+
+
+
+https://github.com/guochengqian/Magic123/assets/48788073/98cb4dd7-7bf3-4179-9b6d-e8b47d928a68
+
+
 
 Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors. Code is built upon the [Stable-DreamFusion](https://github.com/ashawkey/stable-dreamfusion) repo.