Animation · Partial Capture · ComfyUI-AnimateDiff-Evolved

AnimateDiff Loader

class_type: AnimateDiffLoader

Source repo

Loads an AnimateDiff motion model and injects it into the base diffusion model, enabling frame-coherent animation generation. AnimateDiff adds temporal awareness to standard image generation models.

What It Does

AnimateDiffLoader takes a standard image generation model and augments it with a motion module—a lightweight neural network trained on video data that teaches the model how to produce temporally coherent frame sequences.

The motion module filename is critical provenance data: different motion models produce dramatically different animation styles (realistic motion, anime-style movement, camera pans, etc.). Combined with the base checkpoint, LoRAs, and prompts, the motion model selection determines the animation character.

Inputs

model (MODEL)

Base diffusion model to augment.

model_name (STRING)

AnimateDiff motion model filename.

Outputs

MODEL (MODEL)

Model with AnimateDiff motion module injected.
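In ComfyUI's exported API-format workflow JSON, each node appears as an entry keyed by node id, with its `class_type` and `inputs`. A sketch of how this node might look in that JSON; the node ids and motion model filename below are illustrative, not taken from a real workflow:

```json
{
  "3": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": { "ckpt_name": "some_base_model.safetensors" }
  },
  "5": {
    "class_type": "AnimateDiffLoader",
    "inputs": {
      "model": ["3", 0],
      "model_name": "mm_motion_model.ckpt"
    }
  }
}
```

The `model` input is a link (source node id plus output index), which is what makes the base model connection traceable in the workflow graph, while `model_name` is a plain string.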

What Numonic Captures

  • Motion model filename
  • Base model connection (traceable in workflow graph)
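The capture described above can be sketched as a walk over the API-format workflow JSON. This is a minimal illustration, not Numonic's implementation; the function name and the assumption that links are `[node_id, output_index]` pairs are ours:

```python
def extract_animatediff_metadata(workflow: dict) -> list[dict]:
    """Collect motion model filenames and base-model links from a
    ComfyUI API-format workflow (node id -> {class_type, inputs})."""
    captured = []
    for node_id, node in workflow.items():
        if node.get("class_type") != "AnimateDiffLoader":
            continue
        inputs = node.get("inputs", {})
        # Links in API-format JSON are [source_node_id, output_index]
        model_link = inputs.get("model")
        captured.append({
            "node_id": node_id,
            "motion_model": inputs.get("model_name"),
            "base_model_node": model_link[0] if isinstance(model_link, list) else None,
        })
    return captured
```

Following the `base_model_node` id back through the graph is how the base checkpoint (and any LoRAs applied along the way) can be attributed to the animation.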

Known Gaps

  • Motion model hash
  • Motion model version/training details
  • Effective frame count and motion strength
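The first gap is fillable on your side: since only the filename is captured, hashing the motion model file yourself gives a stable identifier even if the file is later renamed. A minimal sketch, assuming you know the path to the model file on disk:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large motion models
    (often hundreds of MB) are hashed without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording this digest alongside the captured filename lets you verify later that two workflows really used the same motion model.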

Extension Pack: ComfyUI-AnimateDiff-Evolved

This node is not built into ComfyUI. It requires the ComfyUI-AnimateDiff-Evolved custom node package. Numonic detects and extracts metadata from this extension when it appears in workflows.
