
How Researchers Use AI to Help Mitigate Misinformation


Researchers tackling the challenge of visual misinformation — think the TikTok video of Tom Cruise supposedly golfing in Italy during the pandemic — must continuously advance their tools to identify AI-generated images.

NVIDIA is furthering this effort by collaborating with researchers to develop and test detector algorithms on our state-of-the-art image-generation models.

By crafting a dataset of highly realistic images with StyleGAN3, its state-of-the-art media-generation algorithm, NVIDIA provided crucial test data for researchers measuring how well their detector algorithms perform on AI-generated images created by previously unseen techniques. These detectors help experts identify and analyze synthetic images to combat visual misinformation.

“This has been a unique situation in that people doing image generation detection have worked closely with the people at NVIDIA doing image generation,” said Edward Delp, a professor at Purdue University and principal investigator of one of the research teams. “This collaboration with NVIDIA has allowed us to build even better and more robust detectors. The early access approach used by NVIDIA is an excellent way to further forensics research.”

Advancing Media Forensics With StyleGAN3 Images

When researchers know an image-generation technique’s underlying code or neural network, developing a detector that can identify images created by that AI model is a comparatively straightforward task.

It’s more challenging — and valuable — to build a detector that can spot images generated by brand-new AI models.
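
To make the distinction concrete, here is a minimal sketch of how a detector for a known generator might be trained: a small binary classifier that learns to separate real images from images produced by that specific model. The architecture, shapes, and hyperparameters are illustrative assumptions, not the setup used by any team mentioned in this article.

```python
# Hypothetical sketch of a real-vs-synthetic image classifier in PyTorch.
# Everything here (architecture, batch, hyperparameters) is illustrative.
import torch
import torch.nn as nn

class SyntheticImageDetector(nn.Module):
    """Small CNN mapping a 3x256x256 image to a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logit; > 0 means "synthetic"

model = SyntheticImageDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a batch of real (label 0) and
# generated (label 1) images from a *known* generator.
images = torch.randn(8, 3, 256, 256)  # stand-in for a real data loader
labels = torch.tensor([0., 0., 0., 0., 1., 1., 1., 1.]).unsqueeze(1)
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

A detector like this can perform well on the generator it was trained against; the hard, open problem is whether its learned cues transfer to images from models it has never seen.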

StyleGAN3, a model developed by NVIDIA Research and presented at the NeurIPS 2021 AI conference in December, advances the state of the art in generative adversarial networks used to synthesize images. The breakthrough applies principles from signal and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled, or translated.
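
StyleGAN3's fix is architectural, but the signal-processing problem it addresses is easy to reproduce. The short NumPy/SciPy sketch below, a one-dimensional illustration of aliasing rather than code from StyleGAN3, shows how resampling without a low-pass filter folds a high frequency into a spurious low one, just as naive image resizing or rotation can.

```python
# 1-D illustration of aliasing, the artifact StyleGAN3's design avoids.
# All rates and frequencies here are arbitrary example values.
import numpy as np
from scipy.signal import decimate

fs = 100                                  # original sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)            # a 40 Hz sinusoid

# Naive 4x downsampling: the new Nyquist limit is 12.5 Hz, so the
# 40 Hz content cannot be represented and aliases down to 10 Hz.
x_naive = x[::4]
freqs = np.fft.rfftfreq(x_naive.size, d=4 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x_naive)))]
print(f"apparent frequency after naive downsampling: {peak:.1f} Hz")  # ~10.0

# Low-pass filtering before downsampling (a proper anti-aliasing filter)
# removes the 40 Hz component instead of corrupting the output.
x_clean = decimate(x, 4)
residual = np.abs(np.fft.rfft(x_clean)).max()
print(f"largest spectral peak after anti-aliased downsampling: {residual:.3f}")
```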

Researchers developed StyleGAN3 using a publicly released dataset of 70,000 images. Another 27,000 unreleased images from that collection, alongside AI-generated images from StyleGAN3, were shared with forensic research collaborators as a test dataset.

The collaboration with researchers enabled the community to assess how various detector approaches identify images synthesized by StyleGAN3 — before the generator’s code was publicly released.

These detectors work in many different ways: some may look for telltale correlations among groups of pixels produced by the neural network, while others might look for inconsistencies or asymmetries that give away synthetic images. Still others attempt to reverse engineer the synthesis approach to estimate whether a particular neural network could have created the image.
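
As a concrete, hedged example of the first family, the sketch below flags images whose Fourier spectra carry unusually strong high-frequency energy, a telltale pattern that upsampling layers in some GANs leave behind. The feature and threshold are placeholders to be calibrated on labeled data; this is a generic illustration, not the method of any team in the program.

```python
# Illustrative frequency-domain cue for synthetic-image detection.
# The feature and threshold are placeholders, not a published method.
import numpy as np

def spectral_feature(image: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency center of a
    grayscale image; GAN upsampling often leaves telltale peaks there."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    r = min(h, w) // 4
    center = spectrum[h // 2 - r:h // 2 + r, w // 2 - r:w // 2 + r]
    return (spectrum.sum() - center.sum()) / spectrum.sum()

def looks_synthetic(image: np.ndarray, threshold: float = 0.25) -> bool:
    # In practice the threshold would be calibrated on labeled data.
    return spectral_feature(image) > threshold

image = np.random.rand(256, 256)   # stand-in for a loaded grayscale image
print(spectral_feature(image), looks_synthetic(image))
```

In practice, detectors combine many such cues and learn the decision boundary from data rather than relying on a single hand-set threshold.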

One of these detectors, GAN-Scanner, reaches up to 95 percent accuracy in identifying synthetic images generated with StyleGAN3, despite never having seen an image created by that model during training. Another detector, developed by Politecnico di Milano, achieves an area under the curve of 0.999 (where a perfect classifier would achieve an AUC of 1.0).
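
For readers unfamiliar with the metric, AUC measures how reliably a detector's scores rank synthetic images above real ones across all possible decision thresholds. A minimal scikit-learn sketch, using made-up labels and scores:

```python
# Computing ROC AUC for a detector on a labeled evaluation set.
# The labels and scores below are invented for illustration.
from sklearn.metrics import roc_auc_score

labels = [0, 0, 0, 0, 1, 1, 1, 1]    # 0 = real photo, 1 = StyleGAN3 image
scores = [0.02, 0.10, 0.07, 0.40,    # detector confidence that each
          0.95, 0.88, 0.91, 0.35]    # image is synthetic
print(roc_auc_score(labels, scores))  # 1.0 would be a perfect detector
```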

Our work with researchers on StyleGAN3 showcases and supports the critical, cutting-edge research done by media forensics groups. We hope it inspires others in the image-synthesis research community to participate in forensics research as well.

The GAN detector collaboration is part of Semantic Forensics (SemaFor), a program focused on media forensic analysis organized by DARPA, the U.S. federal agency for technology research and development.

