
GenAI with ComfyUI

Course Description

Learn to generate high-quality images and videos using ComfyUI, a powerful visual interface built around Stable Diffusion and many other popular AI models. Whether you're a digital artist, content creator, creative developer, or AI enthusiast, this course will show you how to turn your ideas into stunning visuals, with no coding required.

This hands-on course walks you through the essentials of ComfyUI, a node-based system that gives you full control over the generative process. You'll start with the basics of text-to-image generation, then move into more advanced workflows, including:

- Image-to-image, inpainting, and outpainting
- Compositing image layers together
- Installing the ComfyUI Manager
- Improving resolution and quality with ESRGAN upscaling
- Comparing model checkpoints: Flux Schnell, Dev & Kontext
- Stable Video Diffusion
- Frame interpolation for smoother videos and motion sequences
- Canny, Depth, and OpenPose ControlNets
- IP-Adapters for SD1.5, SDXL, and Flux
- ControlNets driven by videos
- AnimateDiff and the Video Helper Suite
- ControlNeXt SVD
- LTXV text-to-video, image-to-video & IC Pose
- Wan 2.2 T2V, I2V, FLF2V
- VACE video editing, Animate character replacement, and S2V lip syncing
- Multiple camera angles with Qwen Edit
- Character LoRA creation
- Editing lighting
- LTXV 2.3 T2VA, I2VA, FL2VA, I2VCA

You'll gain a solid understanding of how different nodes interact, including samplers, models, prompts, and schedulers, and how to combine them for powerful creative outputs. Along the way, we'll cover best practices for exporting assets for use in creative or commercial projects.

By the end of the course, you'll be able to confidently design and execute complete image and video workflows in ComfyUI. This course is perfect for learners who want creative control without writing code, and who are ready to move beyond "prompt-only" AI tools into building custom visual workflows that are fast, flexible, and future-ready.
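To give a feel for how those nodes fit together, here is a minimal sketch (not part of the course materials) of a text-to-image graph in ComfyUI's JSON "API format", written as a Python dict. The node class names (CheckpointLoaderSimple, CLIPTextEncode, KSampler, and so on) are ComfyUI built-ins; the checkpoint filename and prompts are placeholders.

```python
# Minimal ComfyUI text-to-image graph in "API format": each key is a node id,
# and each input that comes from another node is written as [node_id, output_index].
# Node class names are ComfyUI built-ins; the checkpoint filename is a placeholder.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder checkpoint
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a mountain lake at sunrise", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # the sampler ties everything together
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# The KSampler node receives the model, both prompts, and the latent image,
# mirroring the wires you drag between nodes on the ComfyUI canvas.
print(len(workflow), "nodes")
```

A graph in this shape is what ComfyUI executes behind the canvas; it can also be submitted to a running ComfyUI server's HTTP API. In the course itself, you build these graphs visually rather than writing JSON by hand.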