Lawmakers again explore possible artificial intelligence regulation in Nebraska
LINCOLN — An interim study ahead of possible 2025 legislation to regulate artificial intelligence in Nebraska elections could hinge on the fate of legislation in at least 19 other states.
State Sen. Tom Brewer, R-Gordon, chairman of the Government, Military and Veterans Affairs Committee. Brewer is term-limited. October 20, 2023.
On Thursday, State Sens. Tom Brewer of north-central Nebraska and John Cavanaugh of Omaha each brought the question of whether the state should regulate artificial intelligence before their colleagues. Brewer’s Legislative Resolution 362 focuses on possible dangers to the general election, while Cavanaugh’s LR 412 examines the use of artificial intelligence in political campaigns.
Cavanaugh introduced Legislative Bill 1203 this year, which stalled in February. He said he and others are still learning about artificial intelligence and how to address it, especially when balancing possible dangers against potential uses protected by the First Amendment.
“There’s not going to be a simple solution because technology is constantly changing and we’re trying to keep up right now,” Cavanaugh said. “But, together with the stakeholders in this room and across the state of Nebraska, I believe we can come to some common ground.”
Cavanaugh’s legislation would place artificial intelligence regulation under the auspices of the Nebraska Accountability and Disclosure Commission and require clear and conspicuous disclosures on paid state or local advertising for candidates or ballot issues.
But as at the February hearing on LB 1203, the interim study drew a lukewarm response from State Sen. Danielle Conrad, D-Lincoln, an attorney and a member of the Government, Military and Veterans Affairs Committee, which Brewer chairs.
Omaha Sen. John Cavanaugh co-chaired a public hearing on property taxes at the University of Nebraska-Lincoln. July 22, 2024.
Conrad pushed back repeatedly and said she was “very skeptical” that new rules on political speech were needed.
“I think it’s a violation of the First Amendment,” Conrad said. “If not illegal, I think it will have a chilling effect even on speech that we find confusing, offensive or misleading.”
What other states have done
Adam Cooke and Ben Williams of the National Conference of State Legislatures, a bipartisan organization that assists legislators and their staffs across the country and tracks legislation, said at least 19 states have expressly addressed artificial intelligence and political messaging, with legislation that cuts across political ideologies.
Artificial intelligence has been “the hottest topic” over the past two years, but is “just the latest in a long line of technological changes impacting national campaigns and elections,” Cooke said. Other changes include television, social media and cryptocurrency campaign donations, for example, he said.
Many of these laws use different terms for generative AI, such as “synthetic media,” “deceptive media,” or “deep fakes,” but even among researchers, there isn’t an agreed-upon definition. No state has completely banned deceptive AI political messages.
Instead, Cooke said, states prohibit the creation of deceptive messages at some point before an election or require disclosure that the material was generated by artificial intelligence.
Williams said some states impose civil fines, ranging from the lowest in the nation at $500 for a first violation in Michigan to as high as $10,000 for a second violation in Minnesota. Several states, including New Mexico and Utah, impose fines of $1,000 per violation, while Colorado imposes a 10% fine for promoting deepfakes.
Other states, such as Texas and Mississippi, impose criminal penalties of up to one year in prison; in Mississippi, a message intended to cause violence can carry up to five years in prison.
Williams said Texas lawmakers were the first to pass an artificial intelligence law, in 2019, defining “deepfakes” as videos only, rather than images or audio, and banning such content within 30 days before an election.
Minnesota has a “two strikes” rule before possible jail time. Like Arizona, it prohibits deepfakes in the 90 days before an election unless it is clearly disclosed that it was generated by artificial intelligence.
Cooke said some states, such as Colorado, require disclosures in the form of digital metadata attached to the file rather than in the visible message itself. The metadata must specify who created the content, when it was created and edited, and which content is based on artificial intelligence.
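For readers curious what such a metadata-style disclosure could look like, here is a minimal sketch in Python. The field names are hypothetical and are not drawn from Colorado’s statute or any other state law; the sketch only illustrates the kind of machine-readable record described above: who made the content, when it was created and edited, and which elements were AI-generated.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch only: hypothetical field names, not any state's actual schema.
# It models a provenance record that could ride along in a file's metadata,
# naming the creator, the creation and edit times, and the AI-generated elements.
disclosure = {
    "creator": "Example Campaign Committee",  # hypothetical creator name
    "created_at": datetime(2024, 9, 1, tzinfo=timezone.utc).isoformat(),
    "last_edited_at": datetime(2024, 9, 3, tzinfo=timezone.utc).isoformat(),
    "ai_generated_elements": ["narration_audio", "background_image"],
    "ai_tool_disclosed": True,
}

# The record would be embedded in the file's metadata rather than shown
# on screen, which is the distinction Cooke described.
print(json.dumps(disclosure, indent=2))
```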
Congress has not yet passed legislation but has considered bills that would require federal agencies to monitor the use of artificial intelligence, and the Federal Election Commission is considering new regulations, according to the NCSL representatives.
A “reflexive” approach?
Conrad said she found many laws problematic in their definition or enforcement, saying political satire, parody and cherry-picking of someone’s words have been around “since the dawn of politics.”
“I’m just worried about a reflexive approach,” Conrad said, noting potential new penalties.
State Sen. Danielle Conrad, D-Lincoln, participates in a public hearing on property taxes on Monday, July 22, 2024, at the University of Nebraska-Lincoln.
Cavanaugh responded that he didn’t want to be reflexive, but thoughtful and deliberate, and to make a clear decision about whether any action should be taken.
“I think it’s probably a wiser thing to come out and have a conversation before something really comes up,” Cavanaugh said.
Jim Timm, president and executive director of the Nebraska Broadcasters Association, testified in February asking for any legislation to specifically exempt broadcasters from liability. He noted that under federal law, broadcasters must run political ads regardless of content.
Timm used the analogy of a fever or a sprained knee, which can be checked quickly with a “reliable thermometer” or a trip to the doctor. For artificial intelligence, he said, there is no such detector.
“We don’t have magical powers to make these decisions,” Timm said.
The Nebraska Accountability and Disclosure Commission, which handles certain complaints against elected officials or candidates and oversees campaign finance, opposed Cavanaugh’s bill in February, saying it was outside the commission’s mandate.
“NADC’s mission is not to try to determine the truth or falsity of claims made in campaigns,” NADC Executive Director David Hunter testified at the time. “We are not equipped to be fact-checkers.”
State Senator Tom Brewer, who represents north-central Nebraska, met with panelists following the Nebraska Citizen Community Forum on Artificial Intelligence and Democracy. February 24, 2024.
Brewer, who participated in the Nebraska Citizen Community Forum on Artificial Intelligence and Democracy in February, said again Thursday that some uses of artificial intelligence “could be very scary if they were to become reality.” At the February event, professors, researchers and a county elections commissioner explained how artificial intelligence could be used to spread misinformation or disinformation about elections and described how terrorists could use artificial intelligence.
Other impacts could include false claims that voting deadlines or polling locations have changed, or, as in New Hampshire’s 2024 primary, an AI-generated imitation of President Joe Biden’s voice telling voters to “save” their ballots and stay home until the general election.
“Right now, a lot of the information is very conceptual, but today’s concepts may be tomorrow’s problems,” Brewer said during Thursday’s hearing.
“Not necessarily sinister”
Cavanaugh pointed to a separate project published in the Nebraska Examiner in December that used artificial intelligence to replicate the voices of seven state senators, including Conrad. Cavanaugh said the results of the experiment were “troubling.”
He worries that artificial intelligence will become so good that people will have trouble distinguishing reality, especially in contentious elections where candidates dispute even “half-truths.”
Spike Eickholt is an attorney and registered lobbyist for several organizations, including the ACLU of Nebraska. February 27, 2024.
Spike Eickholt, an attorney and lobbyist for the ACLU of Nebraska, urged lawmakers not to create any new crimes. He also asked lawmakers to be cautious because most AI-related legislation has been signed into law so recently that it has not yet been tested in court.
“They might be unconstitutional, they might be suspect, who knows,” Eickholt said.
Eickholt questioned where the boundaries lie for other software that edits images, audio or video, such as photo-editing programs or filters, which do not require disclosure. Candidates might also use AI to bolster campaigns or offices with limited budgets, such as to respond to voters’ questions, which Eickholt said is not deceptive.
Eickholt noted that courts do not protect false or defamatory statements, pointing to various state laws prohibiting impersonation of a public servant, theft by deception or fraud, election falsification, voter registration fraud or interference, and electioneering.
“This technology doesn’t have to be sinister. It doesn’t have to be scary,” Eickholt said. “We shouldn’t always be afraid of everything just because it’s new.”