AI Deepfakes Create New Workplace Harassment Risks
Severity: Medium (Score: 58.5)
Sources: Mondaq, CBIA
Summary
AI-generated deepfakes are increasingly being used in workplace harassment, creating significant liability exposure for employers. Employees have used doctored images and audio to target coworkers; in two reported cases, a Washington State trooper and a Nashville meteorologist sued over deepfake content. The Equal Employment Opportunity Commission has warned that sharing AI-generated intimate imagery can give rise to unlawful harassment claims under Title VII. Employers may face legal repercussions even if they did not create the deepfakes, depending on their knowledge of and response to the incidents. Employee handbooks often lack specific policies addressing AI misuse, increasing litigation risk. Several states have enacted laws allowing victims to pursue civil and criminal penalties for deepfake-related harms, and federal legislation addressing these issues is also advancing. The landscape is evolving rapidly, and HR teams must adapt to mitigate risks associated with AI-generated content.
Key Points
- AI deepfakes are being used for workplace harassment, creating new liability for employers.
- The EEOC warns that sharing AI-generated intimate content can lead to Title VII claims.
- Employers should update handbooks to address AI misuse and reduce litigation risk.
Key Entities
- Hospitality (industry)
- Life Sciences (industry)
- Technology (industry)