Many internet users understand how to avoid digital pitfalls like catfishing, password hacking and malware. But not all the dangers in the digital world are so straightforward: For example, just browsing the web may set you up for privacy violations and even financial losses.
That’s because many websites have deceptive or “dark” patterns woven into them, which often force users to sign up for unwanted subscriptions, expose them to spam or online tracking, steal or sell their personal information, or make it difficult to unsubscribe. Oftentimes, users click a button or pop-up window simply to make it go away, since digging into terms of service is a hassle when you’re just trying to browse for fall sweaters.
Now a new framework called AutoBot can help identify and avoid these deceptive patterns as you go about your shopping, scrolling and searching.
Developed by Kassem Fawaz, an associate professor of electrical and computer engineering at the University of Wisconsin-Madison, and PhD student Asmit Nayak, the work earned a distinguished paper award at the Association for Computing Machinery Conference on Computer and Communications Security in Taipei, Taiwan, in October 2025. The meeting is the premier conference in the field of digital security and privacy.
Privacy researchers have explored different options for identifying and alerting users to deceptive patterns—including manually tagging websites and using large language models like GPT-4 and Gemini to analyze screenshots and website source code. But each of these techniques has shortcomings that prevent it from being deployed broadly.
“Recently, folks have tried using GPT-4 to analyze the source code of websites, but such methods don’t work for a majority of the websites whose source code is so large that it won’t fit in the model,” says Nayak. “Some tried using screenshots. However, even the latest models today cannot perform accurate localization of these patterns even if they can detect them.”
That’s why Nayak, Fawaz and their team decided to take another approach. Instead of creating an algorithm that digs into website code, which can often be changed or obfuscated, AutoBot takes a screenshot of the webpage. Then, a custom vision model analyzes the image and extracts essential information about the design and words on the page, creating a text-only document.
Next, a language model analyzes the document to detect deceptive patterns and categorize them into one of the two dozen categories identified by privacy researchers, with colorful names like roaching, nagging, privacy zuckering, and forced continuity.
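The two-stage pipeline described above can be sketched roughly as follows. This is an illustrative mock-up, not AutoBot's actual code: the real framework uses a custom vision model and a large language model, while here both stages are stubbed (the element list and keyword table are invented) so the structure of the approach is visible.

```python
# Hypothetical sketch of AutoBot's two-stage pipeline.
# Stage 1 (vision model) and Stage 2 (language model) are replaced
# with stubs; all names, elements, and keywords are illustrative.

from dataclasses import dataclass

@dataclass
class UIElement:
    text: str
    bbox: tuple  # (x, y, width, height) position on the rendered page

def extract_elements(screenshot_path):
    """Stage 1 stub: a vision model would read the screenshot and
    return the visible text plus layout coordinates as a text-only
    document. Here we fake its output for an imagined checkout page."""
    return [
        UIElement("Sign up for our newsletter (pre-checked)", (40, 300, 200, 20)),
        UIElement("Continue", (40, 340, 80, 30)),
    ]

# Invented keyword table standing in for the LLM's pattern taxonomy.
DARK_PATTERN_KEYWORDS = {
    "pre-checked": "preselection",
    "only 2 left": "scarcity",
}

def classify_elements(elements):
    """Stage 2 stub: a language model would label elements with one of
    ~two dozen deceptive-pattern categories. A keyword lookup stands in
    for the LLM so this sketch actually runs."""
    findings = []
    for el in elements:
        for keyword, label in DARK_PATTERN_KEYWORDS.items():
            if keyword in el.text.lower():
                # Keeping the bbox is what lets a UI draw a highlight
                # box around the offending element on screen.
                findings.append((label, el.bbox))
    return findings

findings = classify_elements(extract_elements("checkout.png"))
print(findings)
```

Because the screenshot is reduced to a compact text document with coordinates, the classifier both detects a pattern and knows where it sits on the page, which is what makes on-screen highlighting possible.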
The team tested AutoBot on a curated dataset of more than 1,100 websites, many with known deceptive patterns, and found that the framework detected dark patterns with 93% accuracy.
ECE PhD student Asmit Nayak is developing new techniques for identifying deceptive patterns on the web. Photo: Joel Hallberg.
“The main reason why our framework performs well compared to others is that first, we focus on what a user sees by taking screenshots, which overcomes a bunch of issues that exist with using website source codes,” says Nayak. “Second, by converting the screenshots into a text-only format we are able to reduce the ‘info load’ on the large language model, allowing it to focus on the relevant portions. Not only that, the way our framework creates the text-only table allows us to localize these patterns on the user’s screen.”
While they showed the overall framework of AutoBot is accurate and can work at scale, the researchers wanted to also find ways of making it useful for average web users. So they developed several applications powered by AutoBot. The first is a browser extension that takes a screenshot of each website visited and performs an analysis using a small, distilled language model that runs on a user’s computer. If the extension detects a deceptive pattern, a pop-up appears, identifying the type of deceptive pattern and highlighting problematic parts of the website in red boxes.
The researchers also integrated AutoBot into Lighthouse, a tool many developers use to audit website quality. The team says developers often don’t realize they are using deceptive patterns and are willing to make changes if the issues are pointed out to them.
Fawaz’s team also developed a large-scale analysis tool designed for researchers and regulators. Using more than 11,000 sites listed by the website ranking service Tranco and the e-commerce platform Shopify, the researchers showed how the tool can characterize the prevalence of deceptive patterns across the broader online landscape.
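A large-scale analysis like the one described above boils down to aggregating per-site detections into prevalence statistics. A minimal sketch, assuming AutoBot-style findings are already available per site; the site names and detected labels below are fabricated examples, not results from the study:

```python
# Hypothetical prevalence analysis over scanned sites. The scan results
# are invented placeholders standing in for AutoBot's per-site output.

from collections import Counter

# Each entry: (site, [detected deceptive-pattern categories]).
scan_results = [
    ("shop-a.example", ["nagging", "forced continuity"]),
    ("shop-b.example", ["nagging"]),
    ("news-c.example", []),
    ("shop-d.example", ["privacy zuckering", "nagging"]),
]

# How often each pattern category appears across the crawl.
pattern_counts = Counter(p for _, patterns in scan_results for p in patterns)

# How many sites show at least one deceptive pattern.
sites_affected = sum(1 for _, patterns in scan_results if patterns)

print(pattern_counts.most_common())
print(f"{sites_affected}/{len(scan_results)} sites show at least one pattern")
```

Run over thousands of sites from a ranking list such as Tranco, tallies like these are what let researchers and regulators characterize which patterns dominate and how widespread they are.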
In the future, the team hopes that AutoBot will be integrated into developer workflows, flagging deceptive designs and helping to stop these patterns before they ever reach the web. The researchers plan to expand AutoBot’s capability to detect more types of deceptive patterns and to train AutoBot on languages other than English. Over time, they believe the framework will identify enough deceptive websites to produce even better training data for future researchers, in turn improving the identification of deceptive patterns.
Kassem Fawaz is the Grainger Institute for Engineering Associate Professor.
Other UW-Madison authors include Shirley Zhang, Yash Wani, and Rishabh Khandelwal. The authors acknowledge support from the National Science Foundation through awards CNS-1942014 and CNS-224738, and a research grant from the Google PSS Privacy Faculty Award program.
Featured image caption: A mock website created by the researchers shows how AutoBot uses red boxes to flag suspected deceptive patterns in websites. Credit: Submitted.