
Text Instructions for Mesh Generation? #41

Open
rrichardson opened this issue Jan 24, 2025 · 1 comment

Comments

@rrichardson

Hi. Thank you for making and open-sourcing such an amazing tool. Setting this up and using it has been a lot of fun.

This is probably out of scope for this transformer, but on some occasions the mesh generation model makes choices that I'd prefer it didn't make. For example, I had it generate a mesh from a character, and it put a large, full bag on its back (images below). What I'd like to do is describe the intended end result so that it doesn't make stylistic choices like that. Is this something that can be done, or would I have to somehow further specialize the model?

My command:

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='gromble1.png')

Source:

Image

Result:

Image

@Zeqiang-Lai
Collaborator

For now, you could try generating multiple meshes with different seeds to see whether the artifact disappears.

Additionally, we are working on multi-view input support, which could also help in this case.
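A minimal sketch of the seed-variation approach, assuming the pipeline accepts a `torch.Generator` via a `generator` keyword in the diffusers style (check the Hunyuan3D-2 docs for the exact parameter name; the seed values and output filenames here are just illustrative):

```python
# Sketch: re-run mesh generation with several fixed seeds and save each result,
# then pick the mesh without the unwanted stylistic choice.
# Assumptions: `generator=` keyword is supported, and the returned mesh
# exposes an `export()` method (as trimesh objects do).
import torch
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')

for seed in (0, 1, 2, 3):
    generator = torch.Generator().manual_seed(seed)  # deterministic per seed
    mesh = pipeline(image='gromble1.png', generator=generator)
    mesh.export(f'gromble1_seed{seed}.glb')  # inspect each candidate manually
```

This requires downloading the model weights, so it is not something you can verify without the environment set up; the point is only that fixing the seed makes each run reproducible, so once a good seed is found you can regenerate the same mesh.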
