What It Does
AnimateDiffLoader takes a standard image generation model and augments it with a motion module—a lightweight neural network trained on video data that teaches the model how to produce temporally coherent frame sequences.
The motion module filename is critical provenance data: different motion models produce dramatically different animation styles (realistic motion, anime-style movement, camera pans, etc.). Combined with the base checkpoint, LoRAs, and prompts, the motion model selection determines the character of the animation.
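In ComfyUI's API-format workflow JSON, this node appears as an entry whose inputs carry both the motion model filename and a link to the upstream base model. A minimal sketch of pulling the filename out of such a workflow (the node IDs, checkpoint name, and `mm_sd_v15_v2.ckpt` filename below are illustrative, not part of this documentation):

```python
import json

# Hypothetical API-format ComfyUI workflow fragment.
workflow = json.loads("""
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15_base.safetensors"}},
  "2": {"class_type": "AnimateDiffLoader",
        "inputs": {"model_name": "mm_sd_v15_v2.ckpt",
                   "model": ["1", 0]}}
}
""")

def motion_models(wf):
    """Collect (node_id, motion model filename) for every AnimateDiffLoader node."""
    return [(node_id, node["inputs"]["model_name"])
            for node_id, node in wf.items()
            if node.get("class_type") == "AnimateDiffLoader"]

print(motion_models(workflow))  # [('2', 'mm_sd_v15_v2.ckpt')]
```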
Inputs
model (MODEL): Base diffusion model to augment.
model_name (STRING): AnimateDiff motion model filename.
Outputs
MODEL (MODEL): Model with AnimateDiff motion module injected.
What Numonic Captures
- Motion model filename
- Base model connection (traceable in workflow graph)
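The base model connection is traceable because each node input in API-format workflow JSON stores a `[source_node_id, output_index]` link. A hedged sketch of following that link back to the checkpoint loader (node IDs and filenames are hypothetical):

```python
import json

# Hypothetical workflow fragment; IDs and filenames are illustrative.
workflow = json.loads("""
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15_base.safetensors"}},
  "2": {"class_type": "AnimateDiffLoader",
        "inputs": {"model_name": "mm_sd_v15_v2.ckpt",
                   "model": ["1", 0]}}
}
""")

def upstream_node(wf, node_id, input_name):
    """Follow a [source_node_id, output_index] link to the producing node."""
    link = wf[node_id]["inputs"][input_name]
    return wf[link[0]]

base = upstream_node(workflow, "2", "model")
print(base["class_type"])  # CheckpointLoaderSimple
```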
Known Gaps
- Motion model hash
- Motion model version/training details
- Effective frame count and motion strength
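The missing motion model hash is straightforward to compute yourself if you have the model file on disk: a content hash identifies the model even when files are renamed or two models share a filename. A minimal sketch using the standard library (the function name is ours, not a Numonic or ComfyUI API):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large motion models need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Point it at the motion model file in your ComfyUI models directory to record a stable identifier alongside the filename.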
Extension Pack: ComfyUI-AnimateDiff-Evolved
This node is not built into ComfyUI. It requires the ComfyUI-AnimateDiff-Evolved custom node package. Numonic detects and extracts metadata from this extension when it appears in workflows.