
Remember the early days of AI image generation? Oh how we laughed when our prompts resulted in people with too many fingers, rubbery limbs, and other details easily pointing to fakes. But if you haven't been keeping up, I regret to inform you that the joke is over. AI image generators are getting way better at creating realistic fakes, partly thanks to a surprising new development: making image quality a little bit worse.

If you can believe it, OpenAI debuted its image generation tool DALL-E a little less than five years ago. In its first iteration, it could only generate 256 x 256 pixel images: tiny thumbnails, basically. A year later, DALL-E 2 debuted as a huge leap forward. Images were 1024 x 1024, and surprisingly real-looking. But there were always tells.

In Casey Newton's hands-on with DALL-E 2 just after it launched in beta, he included an image made from his prompt: "A shiba inu dog dressed as a firefighter." It's not bad, and it might fool you if you saw it at a glance.
Read at The Verge