Hacker News

I have done exactly that... the results were basically the same as what I get from the DiffusionBee app for Stable Diffusion

i.e. regions of the image are locally impressive and it has clearly understood the prompt, but the overall picture is incoherent... the head or one or more legs may be missing, or legs and wings sprout from odd places

like, it gets the 'texture' spot on but the 'structure' is off



So I was wondering: does it need a more complicated procedure? Some lateral thinking?

Should I be asking it for a picture of a cat in the style and setting I want, and then using image-to-image to replace the cat with a dragon?
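For what it's worth, the two-step idea above can be sketched with Hugging Face's `diffusers` library (an assumption on my part; DiffusionBee itself doesn't expose a scripting API). The checkpoint name, prompts, and `strength` value are illustrative, not a recommendation:

```python
# Sketch: generate a scene with a stand-in subject, then use image-to-image
# to swap the subject while keeping the composition. Assumes `diffusers`,
# `torch`, etc. are installed; model id and parameters are examples only.

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # example checkpoint
PROMPT_BASE = "a cat perched on a castle tower at sunset, oil painting"
PROMPT_SWAP = "a dragon perched on a castle tower at sunset, oil painting"
STRENGTH = 0.6  # lower = stay closer to the input image's composition


def cat_to_dragon(model_id: str = MODEL_ID):
    # Heavy imports kept inside the function so the module loads without
    # downloading anything.
    from diffusers import (
        StableDiffusionImg2ImgPipeline,
        StableDiffusionPipeline,
    )

    # Step 1: text-to-image -- draft the scene with the stand-in subject.
    txt2img = StableDiffusionPipeline.from_pretrained(model_id)
    base = txt2img(PROMPT_BASE).images[0]

    # Step 2: image-to-image -- keep the layout, swap cat for dragon.
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id)
    return img2img(prompt=PROMPT_SWAP, image=base, strength=STRENGTH).images[0]


if __name__ == "__main__":
    cat_to_dragon().save("dragon.png")
```

The intuition is that img2img with a moderate `strength` preserves the global layout (the "structure") from the base image while the new prompt redraws the subject, which might sidestep the incoherent-anatomy problem. No idea yet whether it actually does.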




