Auto-Detection
Import a ComfyUI image and Numonic extracts every model reference — LoRAs, checkpoints, VAEs, CLIP encoders, ControlNets. No manual tagging.
From import to browsable library in seconds
All detected models appear in a filterable list organized by architecture and type. Search by name, filter by base model (SDXL, Flux, SD 1.5), or browse by type.
See every asset generated with a specific model. Click through to view the full gallery — same thumbnails, same navigation as your main asset library.
Numonic reads the embedded ComfyUI workflow JSON and identifies every model reference in the graph — across all node types.
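The idea can be sketched in a few lines. This is an illustrative simplification, not Numonic's actual implementation: ComfyUI embeds the workflow as JSON in the image metadata, and a scan over every node's inputs for model-file names surfaces the references. The node structure and field names below mirror common loader nodes but are assumptions, not an exhaustive schema.

```python
import json

# Common weight-file extensions used as a heuristic for "this input is a model"
MODEL_EXTENSIONS = (".safetensors", ".ckpt", ".pt", ".pth", ".bin")

def extract_model_refs(workflow_json: str) -> set[str]:
    """Walk every node's inputs and collect anything that looks like a model file."""
    graph = json.loads(workflow_json)
    refs = set()
    for node in graph.values():  # API-format workflow: {node_id: {...}}
        for value in node.get("inputs", {}).values():
            if isinstance(value, str) and value.lower().endswith(MODEL_EXTENSIONS):
                refs.add(value)
    return refs

# Hypothetical embedded workflow with a checkpoint loader and a LoRA loader
sample = json.dumps({
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "detail_tweaker.safetensors",
                     "strength_model": 0.8}},
})
print(sorted(extract_model_refs(sample)))
# → ['detail_tweaker.safetensors', 'dreamshaper_8.safetensors']
```

Because the scan keys off input values rather than specific node classes, it picks up checkpoints, LoRAs, VAEs, and ControlNets alike, regardless of which loader node referenced them.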
Base model architecture is inferred automatically from naming patterns, covering 30+ CivitAI conventions. Names are normalized so dreamshaper_8.safetensors and DreamShaper v8 resolve to the same entry.
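A minimal sketch of that normalization step, assuming a simple rule set (the real matcher covering 30+ conventions is more involved): strip the file extension, lowercase, fold "v"-prefixed version numbers, and drop separators so variant spellings collapse to one key.

```python
import re

def normalize_model_name(name: str) -> str:
    """Reduce a model name to a canonical key for deduplication."""
    # Strip common weight-file extensions
    name = re.sub(r"\.(safetensors|ckpt|pt|pth|bin)$", "", name, flags=re.I)
    name = name.lower()
    # Fold a "v" prefix on version numbers: "v8" -> "8"
    name = re.sub(r"\bv(\d)", r"\1", name)
    # Drop spaces, underscores, hyphens, dots so word boundaries align
    return re.sub(r"[\s_\-.]+", "", name)

print(normalize_model_name("dreamshaper_8.safetensors"))  # → dreamshaper8
print(normalize_model_name("DreamShaper v8"))             # → dreamshaper8
```

Both spellings resolve to the same key, so assets referencing either form link to a single library entry.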
Every model type detected across your library appears once — deduplicated, labelled, and linked to every asset it touched.
Built for how creative work actually happens
Track which LoRA gives you the best results. Record trigger words so you don't forget them. Link to CivitAI or HuggingFace source pages. Build a personal model reference library that grows automatically.
Know which models your team is using across projects. Standardize on approved checkpoints. Track model versions and quality over time. Share model knowledge without relying on tribal memory.