Browse 20 ComfyUI nodes with detailed metadata coverage. See what Numonic captures from each node, what inputs and outputs are tracked, and where provenance gaps exist.
Combines AnimateDiff frames into video or GIF output.
Injects a motion model for coherent animation generation.
Applies ControlNet guidance from a reference image.
Encodes a text prompt into conditioning for the sampler.
Loads dual CLIP encoders for SDXL and Flux models.
Creates a blank latent canvas for txt2img generation.
Runs the diffusion process to generate images from noise.
Extended sampler with start/end step and noise control.
Loads a checkpoint and outputs MODEL, CLIP, and VAE.
Loads a ControlNet model for image-guided generation.
Loads an image from disk for img2img or ControlNet.
Applies a LoRA weight file to a model and CLIP encoder.
Loads a standalone VAE to override the checkpoint default.
Loads a video and extracts frames for processing.
Saves images to PNG with embedded workflow metadata.
Resizes an image using configurable interpolation.
Resizes latent for hi-res fix and multi-pass workflows.
Converts latent output into a visible pixel image.
Encodes a pixel image into latent space for img2img.
Combines image frames into MP4, WebM, or GIF video.
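As a sketch of how several of the nodes above fit together, here is a minimal txt2img graph expressed in ComfyUI's API ("prompt") JSON format as a Python dict. The `class_type` names (`CheckpointLoaderSimple`, `CLIPTextEncode`, `EmptyLatentImage`, `KSampler`, `VAEDecode`, `SaveImage`) are the stock ComfyUI identifiers for the nodes described above; the exact input field names and output indices shown are assumptions based on the standard node definitions, not something this catalog specifies.

```python
# Minimal txt2img workflow in ComfyUI's API ("prompt") format:
# a dict mapping node id -> {"class_type": ..., "inputs": ...}.
# A link is encoded as ["source_node_id", output_index]; per the catalog,
# the checkpoint loader outputs MODEL (0), CLIP (1), and VAE (2).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",        # blank latent canvas
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",                # diffusion from noise
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",               # latent -> pixels
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",               # PNG with metadata
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

def referenced_nodes(wf):
    """Return every node id used as a link source anywhere in the graph."""
    refs = set()
    for node in wf.values():
        for value in node["inputs"].values():
            if isinstance(value, list) and len(value) == 2:
                refs.add(value[0])
    return refs

# Sanity check: every link points at a node that exists in the graph.
assert referenced_nodes(workflow) <= set(workflow)
```

A graph in this shape is what tools like Numonic would inspect to track each node's inputs and outputs: every link is an explicit `["node_id", output_index]` pair, so provenance can be walked from `SaveImage` back to the checkpoint.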