The use of GPT-powered deepfakes has become a "powder keg," according to an article in Fast Company.
Startups such as Hour One and Synthesia use generative AI to create virtual people for online learning, personalized ads, news reports and other content. While these virtual humans have mundane, legitimate applications, they also raise concerns that deepfakes could be used to spread disinformation, propaganda and political hoaxes.
The combination of GPT language models, face-synthesis tools and voice-cloning software could make it easier and cheaper for scammers and propagandists to create convincing deepfakes. To counter this, efforts are underway to build tools that can detect synthetic people in media and verify content authenticity.