Generative Adversarial Networks (GANs) are a two-part system in which a generator learns from a discriminator. The discriminator is trained to tell the difference between real and fake images, and the generator learns from the discriminator’s feedback, adjusting its generation process to “fool” it.
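As a rough sketch of that loop — not the architecture behind the post’s images — here is a toy one-dimensional GAN, assuming NumPy: the “real” data is samples from a Gaussian, the generator is a linear map of noise, the discriminator is a logistic classifier, and the two alternate gradient steps so the generator’s output drifts toward the real distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: x_fake = w * z + b, with noise z ~ N(0, 1)
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a * x + c), "probability x is real"
a, c = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)   # "real" data: N(4, 0.5)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    # gradients of the binary cross-entropy loss w.r.t. a and c
    grad_a = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- Generator step: push D(fake) -> 1, i.e. "fool" the critic ---
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    d_fake = sigmoid(a * fake + c)
    g_grad = (d_fake - 1.0) * a          # dLoss/dfake, chained below
    w -= lr * np.mean(g_grad * z)
    b -= lr * np.mean(g_grad)

print(round(b, 2))  # the generator's mean shifts from 0 toward the real mean
```

The generator never sees the real data directly; it only sees the discriminator’s gradient, which is exactly the “learns from the discriminator” dynamic described above.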
The cover image for this post is an interpretation of what the Roman Authority likely looked like as the AI learned from depictions in marble…
3 replies on “GANs; The AI-Tech Behind Deep Fakes?”
Thanks so much… loved the insights, but found the constantly morphing faces hard to watch… but adored Dagogo… so I subscribed…
I watched the video and it was very interesting, thanks for posting that link.
I don’t find anything exceptionally ominous about this technology, except that someday it might be used to fake important, evidentiary videos or still images. But it took a supercomputer at Nvidia to generate a not very convincing image, so this isn’t a “right now” concern, though it may become one.
In the name Generative Adversarial Networks, “adversarial” does not mean intrusive or hostile; it means the same as in “adversarial legal system,” like our process of trying and convicting people for crimes. Just because the counsel for the defense and the prosecution are adversaries doesn’t mean either is getting away with anything; the reason for that system is just the opposite.
One of the more interesting applications I can see wasn’t mentioned in the video, and it lands a reality bomb square on fiction. In movies and TV shows we’ve all seen a license plate, or a reflection across a busy street, “enhanced and enlarged” for dramatic purposes. Until something like GANs becomes a reality, that is all the purest fiction: you get whatever the pixels in the picture are able to render and nothing more. Enlarging an image just pixelates it once the resolution limit of the camera is reached, and very few security cameras have resolutions high enough to be useful in this regard. GANs could change that from fiction to reality, and that might not be an all bad thing.
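The pixel-limit point can be sketched in a few lines, assuming NumPy and a hypothetical 2×2 “image”: naive enlargement only repeats the pixels the sensor captured, so no new detail appears, no matter how far you zoom.

```python
import numpy as np

# A 2x2 "image": four pixel values are all the information the camera captured.
small = np.array([[10, 200],
                  [60, 90]], dtype=np.uint8)

# Naive 4x "enhance and enlarge": each pixel becomes a 4x4 block of itself.
big = np.kron(small, np.ones((4, 4), dtype=np.uint8))

# The result is 8x8 but still contains only the original four distinct
# values -- enlargement added pixels, not information.
print(big.shape, np.unique(big))
```

A GAN-based upscaler differs in that it would fill those blocks with plausible, learned detail rather than repeats, which is why the commenter’s caveat matters: the added detail is a guess, not a recovery of what the camera missed.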
Could this technology be used to create convincing fakes? Maybe, maybe not, but if it can, the result will be a drop in confidence in evidentiary video, not necessarily a worldwide manipulation of the populace via fake images. Suppose ultra-realistic videos can be made and this becomes common knowledge: no matter how convincing a video may be in appearance, more than appearance goes into validating any sort of information, and that includes videos. If wholly accurate and lifelike fake videos are commonly produced, people will simply lose confidence in videos altogether. People don’t like to be taken for fools and will generally reject even the most obvious of truths rather than have their world view challenged, or be thought fools for having been fooled. So no matter how good a video or image may be, there will always be the human factor to contend with.
The embed tool on the site doesn’t seem to work properly.