StructEval

Benchmarking LLMs' Capabilities to Generate Structural Outputs

Jialin Yang*†, Dongfu Jiang*†, Lipeng He, Sherman Siu, Yuxuan Zhang1, Disen Liao, Zhuofeng Li3, Huaye Zeng, Yiming Jia1, Haozhe Wang2, Benjamin Schneider, Chi Ruan4, Wentao Ma, Zhiheng Lyu, Yifei Wang, Yi Lu1, Quy Duc Do, Ziyan Jiang, Ping Nie4, Wenhu Chen5

University of Waterloo 1 University of Toronto 2 HKUST
3 Shanghai University 4 Independent Contributor 5 Vector Institute
*Equal Contribution
StructEval Overview

Our benchmark encompasses 18 distinct formats and 44 task types organized into two complementary subsets: StructEval-T, which assesses the generation of text-only structures such as JSON and TOML, and StructEval-V, which evaluates the quality of visually rendered outputs from code such as HTML and SVG.

Abstract

As Large Language Models (LLMs) become integral to software development workflows, their ability to generate structured outputs has become critically important. We introduce StructEval, a comprehensive benchmark for evaluating LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks, StructEval systematically evaluates structural fidelity across diverse formats through two paradigms: (1) generation tasks, producing structured output from natural language prompts, and (2) conversion tasks, translating between structured formats.

Our benchmark encompasses 18 formats and 44 task types, with novel metrics for format adherence and structural correctness. Results reveal significant performance gaps: even state-of-the-art models such as o1-mini achieve only a 75.58% average score, with open-source alternatives lagging roughly 10 points behind. We find generation tasks more challenging than conversion tasks, and producing correct visual content more difficult than generating text-only structures.


Dataset Overview

StructEval comprises 2,035 examples covering 44 unique task types (generation and conversion) across 18 structured output formats. The dataset is organized into two main subsets:

All 18 Supported Formats
JSON
CSV
XML
YAML
TOML
HTML
React
Vue
Angular
SVG
LaTeX
Markdown
Mermaid
TikZ
Matplotlib
Canvas
Vega
Typst

📝 StructEval-T

Evaluates text-only structured outputs


  • Formats: JSON, XML, YAML, CSV, TOML
  • Tasks: 19 (5 generation, 14 conversion)
  • Examples: 950
  • Focus: Syntactic validity & structural correctness

🎨 StructEval-V

Evaluates visually rendered outputs


  • Formats: HTML, React, SVG, LaTeX, Mermaid, etc.
  • Tasks: 25 (13 generation, 12 conversion)
  • Examples: 1,085
  • Focus: Visual correctness via VQA evaluation

Example Tasks

📝 StructEval-T Generation Example

Task Prompt:
Please output JSON code.

Task:
Summarize metadata about a fictional scientific article.

Feature Requirements:
1. Top-level field "title" is a string
2. Field "authors" is a list of exactly two items
3. Each author has "name" and "affiliation"
4. Field "publication.year" is an integer
5. Field "keywords" is a list of strings
Expected Keywords:
  • title
  • authors[0].name
  • authors[1].affiliation
  • publication.year
  • keywords[2]
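
For illustration only, a response satisfying all five requirements might look like the Python sketch below; the field values are invented for this example and are not drawn from the dataset.

import json

# Hypothetical model output for the task above; every value is made up.
candidate = {
    "title": "Adaptive Sparse Attention for Long-Context Models",        # 1. string title
    "authors": [                                                          # 2. exactly two authors
        {"name": "A. Researcher", "affiliation": "Example University"},   # 3. name + affiliation
        {"name": "B. Scientist", "affiliation": "Example Institute"},
    ],
    "publication": {"year": 2025},                                        # 4. integer year
    "keywords": ["attention", "sparsity", "long context"],                # 5. list of strings (so keywords[2] exists)
}

print(json.dumps(candidate, indent=2))  # the raw JSON string a model would emit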

🎨 StructEval-V Generation Example

Task Prompt:
Please output HTML code.

Task:
Design a webpage for a travel itinerary.

Feature Requirements:
• Centered <h1> with "Trip Summary"
• Use a <table> with 3 rows and 2 columns
• Apply class "highlight" to second row
• Add <button> labeled "Export PDF"
VQA Evaluation:
  • Q: What text is in the h1 header?
    A: Trip Summary
  • Q: How many rows are in the table?
    A: 3
  • Q: What class is on the second row?
    A: highlight
  • Q: What text is on the button?
    A: Export PDF
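
As an illustration, a candidate response might look like the HTML string below; the accompanying checks are a crude stand-in for the benchmark's render-and-VQA pipeline and are only meant to mirror the listed requirements.

# Hypothetical candidate output for the travel-itinerary task above.
candidate_html = """
<html><body>
  <h1 style="text-align:center">Trip Summary</h1>
  <table>
    <tr><td>Day 1</td><td>Arrive and check in</td></tr>
    <tr class="highlight"><td>Day 2</td><td>City tour</td></tr>
    <tr><td>Day 3</td><td>Departure</td></tr>
  </table>
  <button>Export PDF</button>
</body></html>
"""

# Crude textual checks mirroring the feature requirements; the real benchmark
# renders the page and asks a vision model the VQA questions instead.
assert "Trip Summary" in candidate_html
assert candidate_html.count("<tr") == 3           # 3 rows
assert 'class="highlight"' in candidate_html      # class on the second row
assert "Export PDF" in candidate_html             # button label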

Evaluation Framework

Evaluation Pipeline

Our evaluation framework employs four core metrics:

🔧 Render Score (T/V)

Binary metric (0 or 1) indicating whether the generated code can be successfully loaded or rendered without syntax errors.
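
For a text-only format such as JSON, this check can be as simple as attempting to parse the output; the sketch below is illustrative, and the official harness may use different parsers and error handling. For renderable formats, the code is instead loaded in an actual renderer (e.g. a browser for HTML), and any failure scores 0.

import json

def render_score_json(output: str) -> int:
    """Return 1 if the output parses as valid JSON, else 0 (text-only case)."""
    try:
        json.loads(output)
        return 1
    except json.JSONDecodeError:
        return 0

print(render_score_json('{"a": 1}'))   # 1
print(render_score_json('{"a": 1,}'))  # 0 -- trailing comma is invalid JSON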

✓ Syntax Score (T)

Verifies structural correctness (existence of required keys, relationships between keys, etc.) using dot-path rules. Calculated as the percentage of dot-path rules satisfied by the generated output.
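
A minimal sketch of how such dot-path rules could be checked against parsed output; the benchmark's actual rule language also covers value and relationship constraints that this simplified version omits.

import re

def resolve(obj, dot_path: str):
    """Follow a dot-path such as 'authors[0].name' through parsed JSON/YAML data."""
    for part in dot_path.split("."):
        m = re.match(r"(\w+)(?:\[(\d+)\])?$", part)
        obj = obj[m.group(1)]
        if m.group(2) is not None:
            obj = obj[int(m.group(2))]
    return obj

def syntax_score(data, rules) -> float:
    """Fraction of dot-path rules that resolve without error."""
    satisfied = 0
    for rule in rules:
        try:
            resolve(data, rule)
            satisfied += 1
        except (KeyError, IndexError, TypeError, AttributeError):
            pass
    return satisfied / len(rules)

data = {"title": "Example", "authors": [{"name": "A"}, {"affiliation": "B"}],
        "publication": {"year": 2025}, "keywords": ["k1", "k2", "k3"]}
rules = ["title", "authors[0].name", "authors[1].affiliation", "publication.year", "keywords[2]"]
print(syntax_score(data, rules))  # 1.0 -- every rule resolves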

🔍 Keyword Matching (V)

Evaluates presence of desired keywords using exact string matching. Calculated as the percentage of keywords found in the raw generated output code.
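
In its simplest form this is a substring check over the raw response; case and whitespace normalization in the official harness may differ from this sketch.

def keyword_score(raw_output: str, keywords: list[str]) -> float:
    """Fraction of required keywords that appear verbatim in the raw generated code."""
    return sum(kw in raw_output for kw in keywords) / len(keywords)

print(keyword_score('<h1>Trip Summary</h1><button>Export PDF</button>',
                    ["Trip Summary", "Export PDF", "highlight"]))  # ~0.67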

👁️ VQA Score (V)

Assesses visual correctness of rendered content through question-answer pairs. Calculated as the percentage of Q&A pairs satisfied by the rendered output.
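
Conceptually, this reduces to the fraction of Q&A pairs a visual judge confirms against a screenshot of the rendered output. In the sketch below, the judge argument is a hypothetical callable standing in for whichever vision-language model the harness uses; it is not part of any specific library.

from typing import Callable

def vqa_score(screenshot_path: str,
              qa_pairs: list[tuple[str, str]],
              judge: Callable[[str, str, str], bool]) -> float:
    """Fraction of Q&A pairs the visual judge marks as satisfied by the rendering.

    judge(screenshot_path, question, expected_answer) is a placeholder for a
    vision-language model call.
    """
    correct = sum(judge(screenshot_path, q, a) for q, a in qa_pairs)
    return correct / len(qa_pairs)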

Score Aggregation Methods

🎨 Renderable Formats

HTML, React, SVG, LaTeX, Mermaid, etc.

Final Score = (0.2 × Render Score) + 
             (0.1 × Keyword Matching) + 
             (0.7 × VQA Score)
📝 Non-Renderable Formats

JSON, XML, YAML, CSV, TOML

Final Score = (0.2 × Render Score) + 
             (0.8 × Syntax Score)
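
Both aggregation rules are plain weighted sums; a minimal sketch with all component scores normalized to the range 0 to 1:

def final_score(render: float, *, renderable: bool,
                keyword: float = 0.0, vqa: float = 0.0, syntax: float = 0.0) -> float:
    """Weighted aggregation as defined above (component scores in [0, 1])."""
    if renderable:   # HTML, React, SVG, LaTeX, Mermaid, ...
        return 0.2 * render + 0.1 * keyword + 0.7 * vqa
    return 0.2 * render + 0.8 * syntax        # JSON, XML, YAML, CSV, TOML

print(final_score(1.0, renderable=True, keyword=1.0, vqa=0.75))  # 0.825
print(final_score(1.0, renderable=False, syntax=0.80))           # 0.84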

Leaderboard

We evaluate various state-of-the-art LLMs in a zero-shot setting. The table below shows the performance breakdown across our four task categories.

Models are grouped into open-source and closed-source families. Columns: Model, Type, StructEval-T (Generation / Conversion), StructEval-V (Generation / Conversion), Average.

Performance Analysis

Figures: performance by task type, and challenging formats (average score below 50%).


Key Findings

Performance Gap

Even state-of-the-art models struggle with structured output generation. GPT-4o achieves only a 76.02% average score, while the best open-source model (Qwen3-4B) trails at 67.04%.

Task Difficulty

Generation tasks are generally more challenging than conversion tasks. Visual rendering (StructEval-V) proves harder than text-only structures (StructEval-T).

Challenging Formats

Several tasks remain particularly difficult for all models: Text→TOML (35.8%) and Text→Mermaid (18.9%) generation, and Matplotlib→TikZ (28.4%) conversion.

Saturated Tasks

Some tasks are effectively solved with scores above 90%: JSON, HTML, and CSV generation, as well as YAML→JSON and React→HTML conversion, show near-perfect performance.


Citation

@misc{yang2025structeval,
  title={StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs},
  author={Jialin Yang and Dongfu Jiang and Lipeng He and Sherman Siu and Yuxuan Zhang and Disen Liao and Zhuofeng Li and Huaye Zeng and Yiming Jia and Haozhe Wang and Benjamin Schneider and Chi Ruan and Wentao Ma and Zhiheng Lyu and Yifei Wang and Yi Lu and Quy Duc Do and Ziyan Jiang and Ping Nie and Wenhu Chen},
  year={2025},
  eprint={2505.20139},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  doi={10.48550/arXiv.2505.20139}
}