A public ecosystem to report and map algorithmic bias, manipulation, and dark patterns. We are building immunity to digital nudging.
Our framework for a transparent digital future.
Shining a light on black-box algorithms. We believe users have the right to know how and why content is served to them.
Restoring control to the individual. We provide tools to identify and resist manipulative design patterns.
Aggregating data to prove systemic bias. We stand with marginalized communities disproportionately affected by AI errors.
Crowdsourced evidence of manipulative interfaces.
Booking engine showing fake countdowns to pressure purchase.
Newsletter service requiring 5 clicks and a phone call to cancel.
AI SaaS pricing page designed to obfuscate true monthly costs.
Platform generating synthetic user activity to simulate engagement.
E-commerce AI automatically adding "recommended" insurance to cart.
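Reports like the ones above can be captured in a simple structured record. A minimal sketch, assuming a hypothetical schema (the field names `url`, `pattern`, and `description` are illustrative, not the project's actual format):

```python
# Hypothetical sketch of a single crowdsourced dark-pattern report.
# All field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DarkPatternReport:
    url: str                  # page where the pattern was observed
    pattern: str              # e.g. "fake_countdown", "forced_continuity"
    description: str          # reporter's free-text evidence
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

report = DarkPatternReport(
    url="https://example.com/checkout",
    pattern="fake_countdown",
    description="Timer resets on page reload; no real deadline exists.",
)
```

A structured record like this makes reports easy to cluster and moderate later, rather than sifting free text alone.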
Our Bias Heatmaps aggregate thousands of user reports to identify which AI models exhibit high rates of demographic stereotyping and discriminatory output.
Data is updated hourly from global user submissions.
We track specific model versions (e.g., GPT-4, Midjourney v5) to pinpoint regressions.
Volunteer moderators review flagged clusters for validity.
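The pipeline above (per-model tracking plus moderator review) can be sketched as a simple aggregation. This is a minimal illustration, not the production system; the field names and the `moderator_approved` flag are assumptions:

```python
# Hypothetical sketch: roll moderator-approved reports up into
# per-(model, version, category) counts for a bias heatmap.
from collections import Counter

def build_heatmap(reports):
    """Count validated reports per (model, version, bias category)."""
    cells = Counter()
    for r in reports:
        # Only clusters that volunteer moderators reviewed as valid count.
        if r.get("moderator_approved"):
            cells[(r["model"], r["version"], r["category"])] += 1
    return cells

reports = [
    {"model": "GPT-4", "version": "0613", "category": "gender_stereotype",
     "moderator_approved": True},
    {"model": "GPT-4", "version": "0613", "category": "gender_stereotype",
     "moderator_approved": True},
    {"model": "Midjourney", "version": "v5", "category": "racial_bias",
     "moderator_approved": False},
]
heatmap = build_heatmap(reports)
# → Counter({("GPT-4", "0613", "gender_stereotype"): 2})
```

Keying counts on the exact model version is what lets the heatmap surface a regression in one release rather than blaming the model family as a whole.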
We are a community of ethicists, sociologists, and everyday users. Whether you are an aggrieved user or a volunteer moderator, your contribution matters.