It’s Getting Harder for Computers to Identify AI Images

The refrain “Pics or it didn’t happen” is about to lose all meaning.

As the world’s largest tech companies push deeper into artificial intelligence, even the programs designed specifically to identify phony AI-generated images can’t tell what is real and what is fake, The Wall Street Journal reported. Now governments are stepping in to try to rein in the Tomorrowland tech.

What is Reality?

Most free AI programs still produce fairly janky images. For example, visit the AI image generator site Craiyon (formerly known as Dall-E Mini), type in “Tony Soprano,” and you’ll likely end up with a monstrosity that looks like the app smashed 10 different photos of the fictional mafia dad together and then went over the result with the smudge tool.

What was once futuristic fodder for sci-fi writers like Arthur C. Clarke and Harlan Ellison has quickly become run-of-the-mill. And, much as those writers did, it’s hard not to notice AI’s potential for harm and deception. Fake images like “Puffer Coat Pope” and Emmanuel Macron running from protesters are worth a chuckle for now, but with Microsoft, OpenAI, Alibaba, and others pouring massive investments into AI tech, keeping up with the advances won’t be easy:

• Tech company Optic runs a website called AI or Not, which until recently had a 95% accuracy rate. After AI image generator Midjourney’s latest update, that figure dropped to 89%. At one point, the tool was even fooled by “Puffer Coat Pope.”

• Companies like Microsoft are trying to get ahead of the technology by imposing restrictions on their generators; Bing’s Image Creator, for example, doesn’t let users enter prompts featuring prominent public figures. Midjourney relies on human moderators, but it’s rolling out an algorithm to process user requests, company founder David Holz told the WSJ.

The CEO of Hive – another company that detects AI content – says it’s an arms race. “We look at all the tools out there and every time they’re updating their models, we have to update ours and keep up the pace,” Kevin Guo told the WSJ.