"From Code to Compliance: Assessing ChatGPT's Utility in Designing an Accessible Webpage -- A Case Study"

A podcast on this paper was generated with Google's Illuminate.

Screenshot-guided prompts make ChatGPT better at fixing web accessibility problems.

This study evaluates ChatGPT's ability to generate and improve accessible webpages, showing how prompt engineering and visual inputs enhance accessibility compliance in web development.

-----

https://arxiv.org/abs/2501.03572

Original Problem 🔍:

96% of frequently used websites fail accessibility standards despite WCAG guidelines. This widespread non-compliance affects the 1.3 billion people with disabilities worldwide, while web complexity has increased by 11.8%.

-----

Solution in this Paper 💡:

→ The researchers used ChatGPT (GPT-4o) to generate a fully functional webpage with common website elements.

→ They combined automated tools (WAVE, Axe) with manual testing to evaluate accessibility (see the Axe sketch after this list).

→ Screenshots were provided alongside prompts so ChatGPT could analyze visual context for better accessibility fixes (see the prompt sketch after this list).

→ The study tracked iterations needed to fix each accessibility issue and used weighted averages to measure complexity.

→ Prompt engineering techniques were refined through structured feedback and visual context integration.
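
As a rough illustration of the automated pass, here is a minimal sketch of an Axe scan driven from Python, assuming the axe-selenium-python package and a locally available Chrome driver; neither tooling choice nor the file path comes from the paper, which only names WAVE and Axe as the evaluation tools.

```python
# Hypothetical sketch: running an automated Axe scan on a ChatGPT-generated page.
# Assumes the axe-selenium-python package and Selenium with Chrome.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
driver.get("file:///path/to/generated_page.html")  # placeholder path to the generated page

axe = Axe(driver)
axe.inject()               # inject the axe-core script into the loaded page
results = axe.run()        # run the WCAG rule checks
axe.write_results(results, "axe_report.json")

print(f"{len(results['violations'])} violation(s) found")
driver.quit()
```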
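And here is a minimal sketch of the screenshot-guided prompting step, assuming the OpenAI Python SDK (1.x) and GPT-4o's image input; the prompt wording and file names are illustrative, not the prompts actually used in the study.

```python
# Minimal sketch of a screenshot-guided repair prompt (illustrative, not the paper's prompt).
import base64
from openai import OpenAI

client = OpenAI()

with open("page_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This screenshot shows the rendered page. The highlighted "
                     "button fails WCAG 1.4.3 contrast. Return the corrected "
                     "CSS only, keeping the existing layout intact."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```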

-----

Key Insights 📊:

→ ChatGPT's default code lacks accessibility features but can implement them when properly prompted

→ Simple accessibility issues require 1-2 iterations, while complex ones need 20+ iterations

→ Visual reasoning capability significantly improves contrast and layout accessibility fixes (see the contrast-ratio sketch after this list)

→ Fixing one component often breaks others, requiring careful iteration management
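
For context on the contrast fixes, here is a small sketch of the WCAG 2.1 contrast-ratio check such fixes must satisfy. The formula follows the WCAG definition of relative luminance; the colour values are made-up examples, not ones from the paper.

```python
# WCAG 2.1 contrast-ratio check (the kind of fix visual context helps with).
def relative_luminance(rgb):
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: light grey text (#777777) on white yields ~4.48, just under the
# 4.5:1 minimum WCAG AA requires for normal-size text (AAA requires 7:1).
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))
```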

-----

Results 📈:

→ Achieved 90.91% accuracy in resolving manual accessibility errors

→ Automated errors required an average of 1.4 iterations to fix

→ Manual errors needed an average of 13.4 iterations to resolve (see the sketch below)

→ Successfully resolved all contrast and navigation issues
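
A sketch of the iteration bookkeeping behind these averages, using plain means over hypothetical per-issue counts (the paper's exact weighting scheme is not reproduced here); only the reported 1.4 and 13.4 averages and the 90.91% figure come from the study.

```python
# Hypothetical per-issue iteration counts, chosen only so the means match the
# reported averages (1.4 automated, 13.4 manual); they are not the paper's data.
automated_fixes = {"missing_alt_text": 1, "aria_label": 1, "form_label": 1,
                   "empty_link": 2, "duplicate_id": 2}
manual_fixes = {"contrast": 1, "focus_indicator": 5, "skip_navigation": 9,
                "keyboard_trap": 22, "dynamic_menu": 30}

def mean_iterations(fixes):
    """Average number of prompt iterations needed per issue category."""
    return sum(fixes.values()) / len(fixes)

print(f"automated avg: {mean_iterations(automated_fixes):.1f} iterations")  # 1.4
print(f"manual avg:    {mean_iterations(manual_fixes):.1f} iterations")     # 13.4
# The reported 90.91% accuracy is consistent with 10 of 11 manual errors fixed.
print(f"resolution accuracy: {10 / 11:.2%}")  # 90.91%
```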
