Blog/Product Update

From Chaos to Catalog: How Numonic Auto-Builds Your Model Library from ComfyUI Workflows

Stop manually tracking LoRAs and checkpoints. Learn how Numonic auto-detects models from ComfyUI workflows and builds a browsable, searchable library.

6 min read · Numonic Team

You’ve got 50-plus LoRAs sitting in a folder. Some are SDXL. Some are Flux. A handful are for Pony Diffusion. You can’t remember which one produced that portrait last week—the one with the perfect lighting and subtle texture. You open CivitAI, scroll the same results you’ve already seen, download another version, drop it in the folder, and add to the pile.

The model management problem is universal among ComfyUI users. You accumulate models faster than any manual system can track them. Checkpoints, LoRAs, VAEs, ControlNets—each stored somewhere on disk, each referenced in dozens of workflows, none of them surfaced as a browsable thing you can actually reason about. The output exists. The model that made it is invisible.

Today we’re shipping the Model Library—a feature that reads your ComfyUI workflow metadata and automatically builds a browsable, searchable catalog of every model you’ve ever used, without a single manual entry.

The Problem: Models Are Invisible

Every ComfyUI image carries its full workflow JSON in the PNG metadata—including every model reference. The checkpoint name is there. The LoRA path is there. The VAE, the ControlNet weight, the CLIP encoder. The data exists. But nothing surfaces it as a browsable entity.
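You can verify this yourself in a few lines of Python. The sketch below is illustrative, not Numonic's implementation: it uses Pillow to read the PNG text chunks where ComfyUI embeds the graph, under the `workflow` and `prompt` keys.

```python
import json

from PIL import Image


def read_comfyui_workflow(png_source):
    """Return the workflow JSON embedded in a ComfyUI PNG, or None.

    ComfyUI stores the full node graph under the "workflow" text chunk
    and the executed prompt under "prompt"; non-ComfyUI PNGs have neither.
    Accepts a file path or a file-like object.
    """
    image = Image.open(png_source)
    raw = image.info.get("workflow") or image.info.get("prompt")
    return json.loads(raw) if raw else None
```

Run it against any image saved from ComfyUI and the full node graph comes back as a plain Python dict, with every model reference intact.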

The “which LoRA was that?” problem is more expensive than it looks. You see a result you love, but you can’t trace back to the model combination that produced it without reopening the original workflow file and reading the node graph. At 10 models this is inconvenient. At 50 it’s a context switch that kills momentum. At 200 it’s effectively impossible.

Manual tracking breaks down predictably. Spreadsheets start with good intentions. A notes file grows then stops being updated. You rely on filenames, but filenames were never designed to carry architecture compatibility, trigger words, or training intent. portrait_style_v2_FINAL.safetensors tells you nothing about whether it targets SDXL or Flux, or which prompts unlock it.

Compounding the problem: different model types demand different mental models. LoRAs apply style or subject modifications with a weight. Checkpoints define the base generation space. VAEs handle encode/decode quality. ControlNets condition on structural inputs. CLIP encoders shape text understanding. When all of these live in the same flat pile, filtered only by subfolder name, the cognitive overhead of knowing what you have is constant and high.

How Auto-Detection Works

Import a ComfyUI image into Numonic and the workflow JSON is read automatically. Every node in the graph is inspected for model references. When a reference is found, Numonic decomposes it into structured metadata: the model name, the node type it appeared in, and the node’s position in the workflow.
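The inspection step can be sketched as a walk over the graph. This is a simplified illustration, not Numonic's code: it assumes the "prompt" form of the graph (node id mapped to `class_type` and `inputs`), and the loader-to-input-key mapping is an assumption based on common ComfyUI node definitions.

```python
# Which node input typically holds the model filename, per loader type.
# Illustrative assumption; real workflows include many more loader types.
MODEL_INPUT_KEYS = {
    "CheckpointLoaderSimple": "ckpt_name",
    "LoraLoader": "lora_name",
    "VAELoader": "vae_name",
    "ControlNetLoader": "control_net_name",
    "UpscaleModelLoader": "model_name",
}


def extract_model_refs(prompt_graph):
    """Collect (node_id, node_type, model_name) for every model reference."""
    refs = []
    for node_id, node in prompt_graph.items():
        node_type = node.get("class_type", "")
        key = MODEL_INPUT_KEYS.get(node_type)
        if key and key in node.get("inputs", {}):
            refs.append((node_id, node_type, node["inputs"][key]))
    return refs
```

Each tuple carries exactly the structured metadata described above: the model name, the node type it appeared in, and where in the graph it sits.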

From the node type, Numonic infers the model category. A LoraLoader node means a LoRA. CheckpointLoaderSimple means a base checkpoint. VAELoader means a VAE. The same inference runs across UNET loaders, ControlNet loaders, CLIP encoders, upscaler nodes, IP-Adapter loaders, and embedding references—nine model types in total, all detected automatically from the workflow graph.
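In code, that inference amounts to a lookup table with a fuzzy fallback. The mapping below is a minimal sketch, not the shipped rule set; the substring fallback is an assumption that lets loader variants (such as a model-only LoRA loader) resolve to the same category.

```python
# Exact node-type matches first; illustrative subset only.
NODE_TYPE_TO_CATEGORY = {
    "LoraLoader": "lora",
    "CheckpointLoaderSimple": "checkpoint",
    "UNETLoader": "unet",
    "VAELoader": "vae",
    "ControlNetLoader": "controlnet",
    "CLIPLoader": "clip",
    "UpscaleModelLoader": "upscaler",
}


def infer_category(node_type):
    """Map a ComfyUI node type to a model category, or "unknown"."""
    if node_type in NODE_TYPE_TO_CATEGORY:
        return NODE_TYPE_TO_CATEGORY[node_type]
    lowered = node_type.lower()
    # Fallback: substring match so loader variants still resolve.
    for fragment, category in (("lora", "lora"), ("checkpoint", "checkpoint"),
                               ("unet", "unet"), ("vae", "vae"),
                               ("controlnet", "controlnet"), ("clip", "clip"),
                               ("upscale", "upscaler"), ("ipadapter", "ipadapter"),
                               ("embedding", "embedding")):
        if fragment in lowered:
            return category
    return "unknown"
```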

The base model architecture is inferred from the model’s filename using 30-plus pattern rules drawn from CivitAI naming conventions. Names containing flux, sdxl, pony, illustrious, or sd15 variants resolve immediately. When pattern matching cannot determine the architecture, the model is created with an “Unknown” base and you can set it manually—once, not every time you use the model.
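A handful of those pattern rules, sketched as ordered regexes. These five are illustrative stand-ins for the full 30-plus rule set; the ordering matters, since the first match wins.

```python
import re

# Small illustrative subset of the filename pattern rules.
ARCHITECTURE_PATTERNS = [
    (re.compile(r"flux", re.I), "Flux"),
    (re.compile(r"sdxl", re.I), "SDXL"),
    (re.compile(r"pony", re.I), "Pony"),
    (re.compile(r"illustrious", re.I), "Illustrious"),
    (re.compile(r"sd[_-]?1[._-]?5", re.I), "SD 1.5"),
]


def infer_architecture(filename):
    """Guess the base architecture from a model filename, else "Unknown"."""
    for pattern, architecture in ARCHITECTURE_PATTERNS:
        if pattern.search(filename):
            return architecture
    return "Unknown"
```

Note what happens with the filename from earlier: `portrait_style_v2_FINAL.safetensors` matches nothing and falls through to "Unknown", which is exactly the case where a one-time manual override is the right answer.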

Every detected model is promoted to a first-class entity in your library. It gets its own page. It accumulates metadata across every image import that references it. And the work is front-loaded at import time—when a model appears for the first time, it’s created; when it appears again in a later import, the asset is linked to the existing record. The catalog builds itself.
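The create-or-link behavior is an upsert keyed on the model name. A minimal sketch of the idea, with an in-memory dict standing in for whatever store the real system uses:

```python
class ModelCatalog:
    """Sketch of import-time upsert: one record per model name,
    with each new asset linked to the existing record on repeat appearances."""

    def __init__(self):
        self.models = {}  # model name -> {"assets": [asset ids]}

    def record_usage(self, model_name, asset_id):
        # First appearance creates the record; later ones reuse it.
        entry = self.models.setdefault(model_name, {"assets": []})
        entry["assets"].append(asset_id)
        return entry
```

Two imports referencing the same LoRA produce one catalog entry with two linked assets, which is what makes the per-model output gallery possible later.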

The detection supports the full range of model types active in production ComfyUI workflows: LoRAs, base checkpoints, UNETs, VAEs, ControlNets, CLIP encoders, upscaler models, textual inversion embeddings, and IP-Adapters. If it appears as a node input in your workflow graph, it is captured.

Browsing Your Model Library

The Model Library lives at /product/models and presents every detected model as a card. The default view gives you a filterable grid with two primary controls: architecture filter and model type filter. Both filters are built dynamically from your actual data—you only see the architectures and types present in your library, not a fixed dropdown of every possible option.
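Deriving filter options from the data rather than a fixed list is a small but deliberate choice, and easy to sketch. The field names here are illustrative assumptions:

```python
def filter_options(models):
    """Build filter choices from the models actually in the library,
    so empty categories never appear in the dropdowns."""
    architectures = sorted({m["architecture"] for m in models})
    model_types = sorted({m["type"] for m in models})
    return architectures, model_types
```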

Clicking through to any model opens its detail page. This is where the library earns its keep. The detail page shows the model name and inferred base architecture, a description field you can edit, and a trigger words section—a place to record the prompt tokens that activate the model’s effect. Trigger words are notoriously easy to forget; they are now a persistent, searchable field attached to the model itself.

The most useful panel is the Assets tab. It shows every image in your library that was generated using this model—a full gallery with thumbnails, dates, and links to the originating collection. This is the answer to the “which LoRA did that?” question run in reverse: given a model, find every output it contributed to. You can browse a LoRA’s entire history of outputs without opening a single workflow file.

A toggle between list view and grid view lets you scan large libraries quickly. Search works across model names. The architecture and type filters compose, so you can isolate, for example, all Flux LoRAs in a single click. The interface adapts to the size of your library—a catalog of 15 models and a catalog of 200 models use the same UI without degradation.

What This Unlocks

The immediate benefit is visibility. Models that were previously implicit references buried in workflow JSON are now browsable, named, typed entities. But the downstream unlocks are more significant.

The Model Library is also the foundation for features we’re building next. Model comparison—side-by-side output galleries for two LoRAs trained on similar subjects—requires a model catalog as its prerequisite. Quality scoring, where usage frequency and curation signals rank models by reliability, requires knowing which models exist and how often they appear. Team model governance, where a studio maintains a vetted set of approved models, requires a shared canonical list.

None of those features are possible if models remain invisible strings in workflow JSON. The Model Library makes models visible. Everything else follows.

Start Building Your Catalog

The Model Library populates automatically from the ComfyUI images you import. There is no setup step, no CSV to fill in, no integration to configure. Import your first ComfyUI image and your model catalog begins.

If you already have images in Numonic, any ComfyUI PNG you imported carries the workflow metadata needed to backfill your library. Browse to /product/models to see what’s already been detected.

Your Model Library, Built Automatically

Import your first ComfyUI images and Numonic builds your model catalog from the workflow metadata—no manual entry required.