A zero-friction web game that loads immediately and presents a new pair of real vs. AI-generated face images with each round. Users click to guess and receive instant visual feedback.
After each guess, the interface provides corrective feedback, often highlighting specific artifacts in the AI-generated image (like distorted backgrounds or jewelry) that gave it away.
Uses a carefully assembled dataset of image pairs where the AI-generated image comes from advanced models like StyleGAN, and the real image is a verified photograph.
A dedicated section of the website provides written guides and visual examples explaining key indicators of AI-generated faces, such as irregularities in ears, hairlines, and backgrounds.
The tool is built and maintained by university researchers studying misinformation and AI, ensuring its design is informed by scientific understanding of human perception and media literacy.
Educators in middle schools, high schools, and universities use the tool as an interactive module in media literacy, computer science, or ethics courses. Students play the game individually or as a class activity, sparking discussions about AI ethics, source verification, and the future of digital information. This hands-on experience makes abstract concepts about synthetic media tangible and memorable.
Professionals who verify visual content for news organizations use the tool to sharpen their ability to spot potential deepfakes or AI-generated profile pictures attached to fake accounts. Regular practice helps them develop a critical eye for subtle artifacts that might indicate manipulation, adding a layer of defense against visually based disinformation campaigns.
Security teams and human resources departments use the tool to train employees about social engineering risks. A common tactic involves fake profiles with AI-generated photos on LinkedIn or other platforms. Training staff to be skeptical and visually literate helps prevent phishing and impersonation attacks that rely on synthetic identities.
Non-profits, libraries, and community organizations share the tool in workshops aimed at improving digital citizenship. It serves as an engaging entry point for people of all ages to understand the capabilities and potential dangers of generative AI, empowering them to be more critical consumers of online imagery.
Researchers working on generative models or detection algorithms use the tool informally to gauge the perceptual realism of current AI outputs. By testing their own ability to discriminate, they gain intuitive insight into which visual flaws remain challenging for models to overcome, informing future research directions.
15Five operates in the people analytics and employee experience space, where platforms aggregate HR and feedback data to give organizations insight into their workforce. These tools typically support engagement surveys, performance or goal tracking, and dashboards that help leaders interpret trends. They are intended to augment HR and management decisions, not to replace professional judgment or context. For specific information about 15Five's metrics, integrations, and privacy safeguards, you should refer to the vendor resources published at https://www.15five.com.
20-20 Technologies is a comprehensive interior design and space planning software platform primarily serving kitchen and bath designers, furniture retailers, and interior design professionals. The company provides specialized tools for creating detailed 3D visualizations, generating accurate quotes, managing projects, and streamlining the entire design-to-sales workflow. Their software enables designers to create photorealistic renderings, produce precise floor plans, and automatically generate material lists and pricing. The platform integrates with manufacturer catalogs, allowing users to access up-to-date product information and specifications. 20-20 Technologies focuses on bridging the gap between design creativity and practical business needs, helping professionals present compelling visual proposals while maintaining accurate costing and project management. The software is particularly strong in the kitchen and bath industry, where precision measurements and material specifications are critical. Users range from independent designers to large retail chains and manufacturing companies seeking to improve their design presentation capabilities and sales processes.
3D Generative Adversarial Network (3D-GAN) is a pioneering research project and framework for generating three-dimensional objects using Generative Adversarial Networks. Developed primarily in academia, it represents a significant advancement in unsupervised learning for 3D data synthesis. The framework learns a latent space of volumetric 3D shapes through adversarial training on voxelized shape data, enabling the generation of novel, realistic 3D shapes such as furniture, vehicles, and basic structures; its 3D-VAE-GAN extension can additionally infer a 3D shape from a single 2D image. It is used by researchers, computer vision scientists, and developers exploring 3D content creation, synthetic data generation for robotics and autonomous systems, and advancements in geometric deep learning. The project demonstrates how adversarial training can be applied to 3D convolutional networks, producing high-quality voxel-based outputs. It serves as a foundational reference implementation for subsequent work in 3D generative AI, often cited in papers exploring 3D shape completion, single-view reconstruction, and neural scene representation. While not a commercial product with a polished UI, it provides code and models for the research community to build upon.
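To make the architecture concrete, here is a minimal sketch of a 3D-GAN-style generator in PyTorch: a latent vector is expanded by a stack of transposed 3D convolutions into a voxel occupancy grid. The layer widths, latent dimension, and 32³ output resolution are illustrative choices for this sketch, not the exact configuration from the original 3D-GAN release.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Sketch of a 3D-GAN-style generator: latent vector -> 32^3 voxel grid.

    Each transposed 3D convolution with kernel 4, stride 2, padding 1
    doubles the spatial resolution: 4^3 -> 8^3 -> 16^3 -> 32^3.
    Layer sizes here are illustrative, not the paper's exact configuration.
    """

    def __init__(self, z_dim: int = 200):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            # Latent z is reshaped to (z_dim, 1, 1, 1) and projected to a 4^3 volume.
            nn.ConvTranspose3d(z_dim, 128, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),   # -> 8^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),    # -> 16^3
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),     # -> 32^3
            nn.Sigmoid(),  # per-voxel occupancy probability in [0, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Treat the latent vector as a 1x1x1 volume with z_dim channels.
        return self.net(z.view(z.size(0), self.z_dim, 1, 1, 1))

generator = Generator3D()
voxels = generator(torch.randn(2, 200))
print(tuple(voxels.shape))  # (2, 1, 32, 32, 32)
```

A matching discriminator would mirror this stack with strided `Conv3d` layers mapping a voxel grid back to a real/fake score; training then alternates the usual adversarial updates between the two networks.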