Student-athlete NIL protections in peril as Congress moves to ban state AI laws
May 20, 2025

The East Front of the United States Capitol in Washington, DC. | Christian Offenberg/Alamy
The safety net for student-athletes’ digital identities is about to be cut.
Several states—including Tennessee, Utah, Arkansas, California, Virginia, and New Jersey—have raised the bar for athlete protections, passing laws that make it a crime to create or share AI-generated deepfakes or impersonations targeting student-athletes, public figures, and everyday people.
These laws give athletes and others a new line of defense in the digital age—putting these states at the forefront of efforts to shield NIL rights from AI-driven impersonation, abuse, and digital harassment. For student-athletes navigating the high-stakes era of NIL, these protections are much-needed.
But that net could be yanked away. A new provision tucked into the Republican-led federal budget bill would block states from enforcing any AI regulations for the next ten years. If the bill passes, these hard-won deepfake and impersonation protections would become unenforceable, leaving athletes and public figures exposed.
While the focus here is on NIL safeguards, the proposed ban goes much further: it would block nearly any state law regulating AI, from healthcare and education to employment. The provision carves out a few narrow exceptions for state laws that merely facilitate AI adoption and deployment, but those exceptions offer no real protections for privacy, civil rights, or consumers.
The House is expected to vote on this sweeping budget package as early as next week.
Why state AI laws matter—especially for student-athletes
For student-athletes, NIL rights are more than just a buzzword—they’re currency. With AI tools now able to spin up convincing fake videos, audio, and images in minutes, the risks are real: reputations can be tarnished, scholarships jeopardized, and endorsement deals lost to a single viral deepfake.
In states with strong AI impersonation and deepfake laws, athletes have a way to fight back with criminal penalties for offenders and clear processes for getting fakes taken down. These protections are especially important as AI impersonation and digital harassment grow more sophisticated.
What the federal bill would do
The budget bill’s AI provision is sweeping. It would:
- Ban state and local governments from passing or enforcing any law that regulates AI or automated decision-making for a decade.
- Nullify existing state laws, including these new deepfake and impersonation protections.
- Put all the power in federal hands, even though Congress has yet to pass any comprehensive AI safeguards.
What’s at stake if state protections disappear
If the federal ban becomes law, here’s what’s on the line:
- No quick fixes: Athletes and public figures would lose state-level tools to fight deepfakes or AI-driven identity theft.
- NIL rights in limbo: Without state protections, the value of an athlete's name, image, and likeness could take a hit, with fewer options to stop misuse.
- No local recourse: State attorneys general and local courts would be sidelined, leaving victims to navigate a patchwork of uncertain federal rules, or nothing at all.
- Companies off the hook: AI firms could sidestep state liability, even if their tech is used for harassment, fraud, or worse.
The bigger picture
These states aren’t alone in this fight. At least 25 states have passed laws to regulate deepfakes in elections, and at least 40 new state laws addressing AI-generated deepfakes have been enacted since 2019. States like Colorado and New Jersey have gone further, targeting algorithmic discrimination and imposing penalties for harmful deepfake content. Legislatures in Ohio, Maryland, Louisiana, and Minnesota are also considering new bills this year.
President Trump signed the Take It Down Act on Monday, making it a federal crime to share or threaten to share nonconsensual, sexually explicit images online, including AI-generated deepfakes. But here's the catch: the law only covers intimate, sexual content. It doesn't touch AI-driven impersonation scams or deepfakes that damage an athlete's reputation or NIL deals if they're not sexually explicit.
If Congress passes the budget bill as written, state protections would be off the table for a decade. With a House vote expected as early as next week, the outcome could reshape the landscape for student-athletes' rights and AI accountability nationwide.
For those who want to keep state-level protections in play, now’s the time to speak up. Call or email your U.S. House Representative and Senators to share why safeguards against AI abuse matter—especially when it comes to NIL rights, privacy, and digital safety. Personal stories can make all the difference as lawmakers in both chambers decide what comes next.
It's also worth noting that the bill faces big hurdles in the Senate, where the AI provision could be struck under rules that bar policy measures unrelated to the budget. So while the threat is real, the path to becoming law isn't guaranteed.