Generated Media Quality Scorecard
Category: analysis
Subcategory: quality-evaluation
Difficulty: beginner
Target models: gpt, gemini-pro, claude-opus
Variables:
{{asset_goal}}, {{asset_type}}, {{review_context}}, {{quality_criteria}}, {{risk_tolerance}}, {{sample_assets}}
Updated February 28, 2026
The Prompt
You are a quality analyst for AI-generated media.
ASSET GOAL:
{{asset_goal}}
ASSET TYPE:
{{asset_type}}
REVIEW CONTEXT:
{{review_context}}
QUALITY CRITERIA:
{{quality_criteria}}
RISK TOLERANCE:
{{risk_tolerance}}
SAMPLE ASSETS:
{{sample_assets}}
Create a scorecard package with:
1. A weighted scoring matrix (criteria + weight + pass threshold).
2. A reviewer rubric for image, video, or audio quality, tailored to the asset type.
3. A defect taxonomy (critical, major, minor) with examples.
4. A decision policy: approve, revise, or reject.
5. A fallback checklist for teams doing manual review without analytics tools.
Rules:
- Keep the framework model- and tool-agnostic.
- Include both technical quality and communication effectiveness.
- Highlight rights/compliance and hallucination-like failure risks where relevant.
When to Use
Use this when teams are generating many assets and need consistent quality decisions instead of ad hoc opinions. It is useful in brand review, creative ops, product content QA, and campaign production.
Variables
- asset_goal: What the media asset is meant to achieve.
- asset_type: Image, video, audio, or mixed package.
- review_context: Team, channel, and decision timeline.
- quality_criteria: Preferred dimensions to evaluate.
- risk_tolerance: How strict approval thresholds should be.
- sample_assets: Asset list, links, or descriptions under review.
Tips & Variations
- Add criterion-specific examples to reduce reviewer interpretation drift.
- Request separate scorecards for “publish” and “internal-only” use cases.
- For lower-capability workflows, use a binary pass/fail form plus comments (see the sketch after this list).
- If teams disagree often, ask for calibration examples before full review rounds.
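To illustrate the binary fallback, here is a minimal sketch of a pass/fail review form. The check names and the approve/revise wording are assumptions for illustration only; substitute your own criteria.

```python
# Minimal binary pass/fail fallback for manual review (check names are illustrative).
CHECKS = [
    "Subject matches the brief",
    "No visible artifacts (warped hands, garbled text, audio glitches)",
    "On-brand colors, fonts, and tone",
    "No rights/compliance red flags (logos, likenesses, music)",
]

def review(answers: dict[str, bool], comments: str = "") -> str:
    """Approve only if every check passes; otherwise send back for revision."""
    failed = [c for c in CHECKS if not answers.get(c, False)]
    if not failed:
        return "approve"
    return f"revise: {'; '.join(failed)}" + (f" ({comments})" if comments else "")

print(review({c: True for c in CHECKS}))  # -> approve
```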
Example Output
A good output includes a weighted matrix, reviewer instructions, defect definitions, and explicit approval rules, helping teams make faster and more consistent go/no-go calls.
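As a concrete illustration, here is a minimal sketch of how the weighted matrix and decision policy might be encoded. The criterion names, weights, thresholds, and the 1-5 scale are assumptions for illustration, not part of the framework.

```python
# Illustrative weighted scorecard: criteria, weights, and per-criterion pass thresholds.
CRITERIA = {
    # name: (weight, pass_threshold) -- scores assumed to be on a 1-5 scale
    "technical_quality":  (0.30, 3.5),
    "message_clarity":    (0.25, 3.5),
    "brand_fit":          (0.20, 3.0),
    "rights_compliance":  (0.15, 4.5),  # stricter floor: compliance failures are critical
    "hallucination_risk": (0.10, 4.0),  # garbled text, impossible anatomy, invented claims
}

def decide(scores: dict[str, float], approve_at: float = 4.0, revise_at: float = 3.0) -> str:
    """Approve, revise, or reject based on weighted total plus per-criterion floors."""
    if any(scores[name] < floor for name, (_, floor) in CRITERIA.items()):
        return "revise"  # a failed criterion floor always blocks approval
    total = sum(scores[name] * weight for name, (weight, _) in CRITERIA.items())
    if total >= approve_at:
        return "approve"
    return "revise" if total >= revise_at else "reject"

print(decide({
    "technical_quality": 4.5, "message_clarity": 4.0, "brand_fit": 4.0,
    "rights_compliance": 5.0, "hallucination_risk": 4.5,
}))  # -> approve (weighted total 4.35, all floors met)
```

Keeping per-criterion floors separate from the weighted total prevents a strong overall score from masking a critical failure, such as a rights or compliance issue.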