OpenPose & ControlNet
ControlNet is a way of adding conditional control to the output of Text-to-Image diffusion models, such as Stable Diffusion. In layman's terms, it allows us to direct the model to maintain or prioritize a particular pattern when generating output. There are many applications of this idea, but an incredibly common use case is generating a consistent pose for human subjects.
OpenPose, meanwhile, is a human pose detection library that works by detecting multiple "keypoints" in a human body and converting that information into a consistent "skeleton" representing the person.
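To make the "keypoints and skeleton" idea concrete, here is a minimal sketch of an 18-keypoint COCO-style body skeleton of the kind OpenPose's body detector produces. The keypoint ordering and limb pairings below follow the commonly used COCO output format, but treat them as illustrative assumptions; OpenPose also supports other formats (such as BODY_25), and the helper function is our own, not part of the library.

```python
# Sketch of an OpenPose-style 18-keypoint COCO body "skeleton".
# Keypoint ordering follows the commonly used COCO output format;
# OpenPose also supports richer formats (e.g. BODY_25), so treat
# these indices as illustrative rather than definitive.

KEYPOINTS = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

# Limbs are pairs of keypoint indices. Drawing each pair as a colored
# line segment on a black canvas produces the familiar stick-figure
# image used as a pose reference.
LIMBS = [
    (1, 2), (2, 3), (3, 4),        # right arm
    (1, 5), (5, 6), (6, 7),        # left arm
    (1, 8), (8, 9), (9, 10),       # right leg
    (1, 11), (11, 12), (12, 13),   # left leg
    (1, 0), (0, 14), (14, 16),     # head, right side
    (0, 15), (15, 17),             # head, left side
]

def limb_segments(keypoints_xy):
    """Given one detected person as a list of 18 (x, y) coordinates
    (None for keypoints the detector missed), return the drawable
    line segments of the skeleton overlay."""
    segments = []
    for a, b in LIMBS:
        if keypoints_xy[a] is not None and keypoints_xy[b] is not None:
            segments.append((keypoints_xy[a], keypoints_xy[b]))
    return segments
```

Because every person is reduced to the same fixed set of keypoints, the same skeleton image can describe a pose independently of who was photographed, which is exactly what makes it useful as a conditioning signal.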
Combine the two and we gain the ability to use OpenPose skeletons to control the pose of subjects in Stable Diffusion outputs, removing a great deal of the randomness and allowing us to be more intentional with our results than ever before.
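In practice, the combination looks something like the sketch below, which feeds a pre-rendered skeleton image into Stable Diffusion via Hugging Face's `diffusers` library. This is one common way to wire the two together, not the only one; the model identifiers and prompt are assumptions for illustration, and actually running the generation requires a GPU plus the `diffusers`, `transformers`, and `torch` packages.

```python
# Sketch: conditioning Stable Diffusion on an OpenPose skeleton image
# with ControlNet, via the `diffusers` library. Model IDs below are
# assumptions for illustration; substitute the checkpoints you use.

POSE_MODEL_ID = "lllyasviel/sd-controlnet-openpose"
BASE_MODEL_ID = "runwayml/stable-diffusion-v1-5"

def generate_from_pose(pose_image, prompt):
    """Generate an image whose subject follows `pose_image`,
    a pre-rendered OpenPose skeleton supplied as a PIL image."""
    # Heavy imports kept local so the module loads without a GPU stack.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        POSE_MODEL_ID, torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        BASE_MODEL_ID, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
    # The skeleton conditions the denoising process: the output keeps
    # the pose while the text prompt controls everything else.
    result = pipe(prompt, image=pose_image, num_inference_steps=20)
    return result.images[0]
```

The same pipeline accepts any skeleton image, so a single downloaded pose can be reused across arbitrary prompts, subjects, and styles.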
Of course, OpenPose is not the only model available for ControlNet. Multiple other models, such as Semantic Segmentation, User Scribbles, and HED Boundary, are available. Consult the ControlNet GitHub page for a full list.
This Site
These OpenPose skeletons are provided free of charge, and can be freely used in any project, commercial or otherwise. That said, we do encourage you to share the site if you find these resources useful, so that others can benefit from them too!
If you want to help us out, feel free to donate a dollar via PayPal. While the site itself is hosted on Netlify's free tier, donations certainly help with the domain name and AWS data transfer costs.
If you're familiar with Automatic1111's WebUI for Stable Diffusion, getting started with ControlNet and OpenPose should be straightforward. If you're lost, take a look at this Reddit post to point you in the right direction.
