Recently, a Google Alert informed me that I am the subject of deepfake pornography. For more than a year, I have been the target of a widespread online harassment campaign, and deepfake porn, whose creators use artificial intelligence to generate explicit video clips that seem to show real people in sexual situations that never actually occurred, has become a prized weapon in the arsenal misogynists use to try to drive women out of public life. The only emotion I felt as I informed my lawyers about the latest violation of my privacy was a profound disappointment in the technology, and in the lawmakers and regulators who have offered no justice to people who appear in porn clips without their consent.

Many commentators have been tying themselves in knots over the potential threats posed by artificial intelligence: deepfake videos that tip elections or start wars, job-destroying deployments of ChatGPT and other generative technologies. Yet policy makers have all but ignored an urgent AI problem that is already affecting many lives, including mine.

Last year, I resigned as head of the Department of Homeland Security's Disinformation Governance Board, a policy-coordination body that the Biden administration let founder amid criticism, mostly from the right.