Papers
arxiv:2604.27419

InteractWeb-Bench: Can Multimodal Agent Escape Blind Execution in Interactive Website Generation?

Published on Apr 30
· Submitted by Mathsion Wong on May 1
Abstract

InteractWeb-Bench presents the first multimodal interactive benchmark for website generation under non-expert low-code conditions, addressing semantic misalignment through diverse user agents and interactive execution environments.

AI-generated summary

With the advancement of multimodal large language models (MLLMs) and coding agents, website development has shifted from manual programming to agent-based, project-level code synthesis. Existing benchmarks rely on idealized assumptions, such as well-structured, information-rich inputs and static execution settings. In contrast, real-world development is constrained by a critical bottleneck: the semantic misalignment between the ambiguous, low-quality instructions of non-expert users and model understanding, which results in a failure mode we term blind execution. To address this gap, we introduce InteractWeb-Bench, the first multimodal interactive benchmark for website generation under non-expert, low-code user conditions. InteractWeb-Bench introduces four types of user agents and persona-driven instruction perturbations to systematically simulate diverse user behaviors, including ambiguity, redundancy, and contradiction, grounded in requirements-engineering defect taxonomies. We develop an interactive execution environment for agents, featuring a unified action space comprising Clarify, Implement, Verify, and Submit, which enables iterative intent refinement, code synthesis, and visual-feedback-based validation. Extensive experiments and analysis reveal that frontier MLLM-based agents remain trapped in blind execution, exposing limitations in intent recognition and adaptive interaction.

Community

Paper submitter

InteractWeb-Bench is a multimodal interactive benchmark for evaluating website generation agents under real-world, non-expert user conditions.
It simulates ambiguous, noisy, and conflicting user instructions through persona-driven user agents, and introduces a dynamic action space (Clarify, Implement, Verify, Submit) to assess agents’ ability to escape “blind execution” and align with user intent.
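For intuition, the Clarify / Implement / Verify / Submit loop described above can be sketched as a minimal interaction harness. This is an illustrative Python sketch, not the benchmark's actual API: `run_episode`, `toy_policy`, and `toy_user` are hypothetical names, and a real agent policy would query an MLLM rather than follow a fixed script.

```python
from enum import Enum, auto

class Action(Enum):
    """The unified action space described in the paper."""
    CLARIFY = auto()    # ask the user agent a question
    IMPLEMENT = auto()  # synthesize or edit website code
    VERIFY = auto()     # check rendered output against intent
    SUBMIT = auto()     # finalize and end the episode

def run_episode(agent_policy, user_agent, max_turns=10):
    """Drive one interactive episode: the agent picks actions,
    and Clarify turns are answered by the simulated user agent."""
    history = []
    for _ in range(max_turns):
        action, payload = agent_policy(history)
        history.append((action, payload))
        if action is Action.CLARIFY:
            # the persona-driven user agent answers the question
            history.append(("user", user_agent(payload)))
        elif action is Action.SUBMIT:
            break
    return history

# Toy policy: clarify once, then implement, verify, and submit.
def toy_policy(history):
    agent_turns = sum(1 for a, _ in history if isinstance(a, Action))
    script = [Action.CLARIFY, Action.IMPLEMENT, Action.VERIFY, Action.SUBMIT]
    return script[agent_turns], None

def toy_user(question):
    return "I want a dark-themed landing page with a contact form."

episode = run_episode(toy_policy, toy_user)
```

A policy that never emits Clarify in this loop is exactly the "blind execution" failure mode: it implements and submits without ever resolving the ambiguity in the instruction.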

If you find our work interesting, we would really appreciate your support and upvote! 🌿🚀

InteractWeb-Bench is open-sourced at https://github.com/AIforIP/InteractWeb-Bench
Project page: https://interactweb-bench.wangqiyao.me/


Get this paper in your agent:

hf papers read 2604.27419
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
