
03 / 05 · Gate

You only see what passed.

Most generation pipelines hand you everything they made and let you cull. Axion Studio culls before you see a single asset. The Gate runs three judges — one fast, two slow — and only assets that pass make it to your review queue.

Layer 1 — fast

SigLIP-2 baseline on every asset

SigLIP-2 (running on RunPod self-hosted A100s) checks brand similarity against your reference set. Outputs a 0–1 score. Anything below your workspace's threshold (default 0.85) gets rejected immediately and queued for re-generation with feedback. ~80% of obvious misses caught here.
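The layer-1 routing can be sketched as a simple threshold gate. This is a minimal illustration, not Axion's implementation: the `layer1_gate` function and the 0.92 grey-zone upper bound are assumptions drawn from the numbers on this page, and the SigLIP-2 scoring itself is treated as an upstream input.

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    passed: bool
    score: float
    route: str  # "reject", "grey_zone", or "accept"

def layer1_gate(score: float, threshold: float = 0.85, grey_upper: float = 0.92) -> GateResult:
    """Route an asset by its 0-1 SigLIP-2 brand-similarity score."""
    if score < threshold:
        # Below the workspace threshold: rejected and queued for re-generation.
        return GateResult(False, score, "reject")
    if score < grey_upper:
        # SigLIP isn't sure: escalate to the layer-2 dual-judge.
        return GateResult(True, score, "grey_zone")
    # Confident pass: straight to the review queue.
    return GateResult(True, score, "accept")
```

A workspace with a stricter brand could raise `threshold` and shrink the grey zone accordingly.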

Layer 2 — slow

Dual-judge in the grey zone

Anything that scores between 0.85 and 0.92 — the grey zone where SigLIP isn't sure — goes to two model judges: GPT-4o vision and Gemini 2.5 Pro vision. Each gets the brand reference set, the brief, and the asset. Each returns a structured verdict with reasoning.

Both must agree for the asset to pass. If they split, a human review is queued (rare — <5% of assets reach this point).
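The agreement rule above reduces to a small decision table. A minimal sketch, with assumed names (`Verdict`, `combine_verdicts`) standing in for whatever structured format the real judges return:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    passed: bool
    reasoning: str

def combine_verdicts(gpt4o: Verdict, gemini: Verdict) -> str:
    """Both judges must agree for a pass; a split queues human review."""
    if gpt4o.passed and gemini.passed:
        return "pass"
    if not gpt4o.passed and not gemini.passed:
        return "reject"
    # Disagreement is the rare (<5%) case escalated to a human.
    return "human_review"
```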

Auto-regen

Misses don't waste budget — they teach

When a judge rejects, the rejection reason is fed back into the next generation pass as negative guidance. The system learns your brand's grey zone over time, and the re-generation hit-rate climbs from ~50% to ~85% by the time a workspace hits its 100th approval.
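The feedback step amounts to folding prior rejection reasons into the next generation prompt as negative guidance. The function and prompt format below are illustrative assumptions, not the actual internal format:

```python
def build_regen_prompt(brief: str, rejection_reasons: list[str]) -> str:
    """Fold prior rejection reasons into the next pass as negative guidance."""
    if not rejection_reasons:
        return brief
    negatives = "; ".join(rejection_reasons)
    return f"{brief}\n\nNegative guidance (avoid): {negatives}"
```

Each failed pass appends to the reason list, so later attempts carry the accumulated grey-zone knowledge for that workspace.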

By the numbers

What the Gate prevents

~80% · Misses rejected by SigLIP layer 1
~95% · Compound accept-rate after dual-judge
<5% · Assets escalated to human review
0 · Off-brand outputs pushed to channels

Vertex AI Eval Service

The gate trains the LoRA

Every gate decision is logged to BigQuery and fed into Vertex AI's Evaluation Service. That dataset trains the next LoRA refinement pass — so the model gets better at hitting your brand without ever needing you to label data manually.