ELVIS Act & Voice Rights: Complete Guide
“A deepfake audio clip was created using your voice. It sounds exactly like you announcing an endorsement for a casino. You never consented. The clip has 10 million views. You have no clear legal recourse—until now.”

The ELVIS Act (Ensuring Likeness Voice and Image Security Act) was signed into Tennessee law in March 2024 and took effect on July 1, 2024. It is the first U.S. state law to explicitly protect an individual's voice against unauthorized synthetic media (deepfakes and AI-generated voice replicas), updating Tennessee's existing right-of-publicity statute, the Personal Rights Protection Act.

Before the ELVIS Act, voice protection was fragmented: some states had “right of publicity” laws, some had personality rights statutes, and others had nothing. A deepfake could be created in one jurisdiction with few legal consequences, then distributed globally. Creators and celebrities had limited recourse.

Voice rights are now a critical legal protection. The ELVIS Act establishes a clear statutory standard, creates a cause of action for unauthorized use of a person's voice in synthetic media, and provides for damages and injunctive relief. But the law leaves many questions unresolved: Does it apply to parody? What about accidental infringement? How do you prove infringement?

This guide explains the ELVIS Act, how it protects your voice, state-level voice rights more broadly, enforcement mechanisms, damages, and how to protect yourself from unauthorized synthetic voice use.