An amendment to the data bill requiring AI companies to reveal which copyrighted material is used in their models was backed by peers, despite government opposition.
It is the second time parliament’s upper house has demanded tech companies make clear whether they have used copyright-protected content.
The vote came days after hundreds of artists and organisations including Paul McCartney, Jeanette Winterson, Dua Lipa and the Royal Shakespeare Company urged the prime minister not to “give our work away at the behest of a handful of powerful overseas tech companies”.
The bill will now return to the House of Commons. If the government removes the Kidron amendment, it will set the scene for another confrontation in the Lords next week.
Lady Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.
“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”
The government’s copyright proposals are the subject of a consultation due to report back this year, but opponents of the plans have used the data bill as a vehicle for registering their disapproval.
The main government proposal is to let AI firms use copyright-protected work to build their models without permission, unless the copyright holders signal they do not want their work to be used in that process – a solution that critics say is impractical and unworkable.
The government insists, however, that the present situation is holding back both the creative and tech sectors and needs to be resolved by new legislation. It has already tabled one concession in the data bill, by committing to an economic impact assessment of its proposals.
A source close to the tech secretary, Peter Kyle, said this month that the “opt out” scenario was no longer his preferred option but one of several being given consideration.
A spokesperson for the Department for Science, Innovation and Technology said the government would not rush any decisions on copyright or bring forward related legislation “until we are confident that we have a practical plan which delivers on each of our objectives”.
Opinion: The time for smart, responsible AI regulation is now
The RAISE Act ensures groundbreaking AI developers have a safety plan.
By Alex Bores and Andrew Gounardes
Artificial intelligence is evolving faster than any technology in human history. It’s driving groundbreaking scientific advances, developing life-changing medicines, unlocking new creative pathways and automating mundane tasks.
In the wrong hands, it also poses existential risks to humanity.
This isn’t hyperbole or the stuff of science fiction. AI developers, leading scientists and international bodies have all warned of an imminent future where advanced AI could be used to conduct devastating cyberattacks, aid in the production of bioweapons, or inflict severe financial harm on consumers and companies.
American AI models have been used in citizen surveillance in China, scams originating in Cambodia and as part of a “global cybercrime network.” OpenAI found that its latest model “can help experts with the operational planning of reproducing a known biological threat” and is “on the cusp” of being able to help novices. A recent International AI Safety Report identified an AI model capable of producing plans for biological weapons that were “rated superior to plans generated by experts with a PhD 72% of the time” and that included “details that expert evaluators could not find online.”
We’re only a few years away from a time when AI models will code themselves; already, over 25% of Google’s new code is written by AI. In a lab experiment, the firm Apollo Research found that AI models told to pursue a goal at all costs would try to make copies of themselves on new servers and lie to humans about their actions if they thought they would be shut down.
Increasingly, calls for regulation are coming from within the tech industry itself. In March 2023, over 1,000 tech leaders from across the political spectrum signed a letter calling for a temporary pause in AI advancement and warned that developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control.”
That was two years ago. More recently, leading AI company Anthropic warned that “the window for proactive risk prevention is closing fast” and called on governments to implement AI regulation by April 2026 at the latest. The company also warned that the federal legislative process might not be “fast enough to address risks on the timescale about which we’re concerned” and “urgency may demand it is instead developed by individual states.”
Our laws haven’t kept up with this rapidly developing technology. In the absence of federal action, it’s up to states like New York to urgently implement smart, responsible safeguards to keep our communities safe and ensure the burgeoning AI industry amplifies the best of humanity, rather than its worst.
That’s why we’ve introduced the Responsible AI Safety and Education Act, or RAISE Act, which puts four simple responsibilities on the companies developing advanced AI models:
Have a safety plan.
Have that plan audited by a third party.
Disclose critical safety incidents.
Protect employees or contractors that flag risks.
These safeguards are clear, simple and commonsense. In fact, the RAISE Act codifies what some responsible AI companies have already promised to do. By writing these protections into law, we ensure no company has an economic incentive to cut corners or put profits over safety, as some are already starting to do. Our bill only applies to the largest AI companies that spend hundreds of millions of dollars annually developing the most advanced systems. It imposes no burden on any academic or startup. It also doesn’t attempt to be a catch-all for every potential issue raised by AI. Instead, it focuses on the most urgent, severe risks that could cause over $1 billion in damage or hundreds of deaths or injuries.
Smart AI legislation should be designed to safeguard us from those risks while allowing beneficial uses of AI to flourish. That’s why the RAISE Act takes a flexible approach to governing a rapidly changing industry. Our bill doesn’t create hyper-specific rules for research or establish a new regulatory entity. Instead, it holds companies to their own commitments, creates transparency around how AI companies are managing severe risks and protects whistleblowers who sound the alarm about dangerous development. Our bill also ensures smaller AI startups can continue to compete in the marketplace by requiring the biggest companies to play by the rules.
With commonsense safeguards, we can ensure a thriving, competitive AI industry that meets New Yorkers’ needs instead of putting our safety at risk. The RAISE Act is a key step into the future we all want and deserve.
Alex Bores is an Assembly member representing Assembly District 73 in Manhattan. Andrew Gounardes is a state senator representing the 26th Senate District in Brooklyn.
More than two dozen collective bargaining agreements now include language covering artificial intelligence in their newsrooms. There are some gold-standard examples that cover three priorities: protecting bargaining unit work, clearly defining the scope of AI, and requiring interaction with and oversight by bargaining unit employees in creating work products.
While generative artificial intelligence is not new, the pace of advancement has been almost exponential in the last decade. As the technology improves, more employers have introduced and expanded its use in the absence of clear regulation or guidance. Some employers have been publicly embarrassed for publishing false, misleading or problematic posts.
The use of artificial intelligence to perform bargaining unit work is a mandatory subject of bargaining, but employers are reluctant to agree to contract terms that set enforceable parameters around the use of ever-evolving technology. Guild members have escalated actions, including going on strike, to win language that protects their work standards and job security and provides better transparency for the public.
Members have engaged in public-facing campaigns such as the Politico PEN Guild’s “Journalists, Not Robots” social media action. The Ziff Davis Guild built overwhelming member support for internal actions that pushed management to accept strong contract language.
The New Republic won language that says generative AI “may be used by bargaining unit employees as a complementary tool in editorial work, but it may not be used as a primary tool for creation of such.” It further states that AI shall not result in layoffs, be used to fill vacant positions or reduce pay for Guild-represented workers. Other contracts have similar language that sets these clear lines that the employers may not cross when introducing or expanding the use of AI in the workplace. While some contracts may not completely prohibit AI from reducing or eliminating bargaining unit work, they provide for transfers to other roles with appropriate training and enhanced severance for employees who do not continue employment.
Another common provision is the requirement that members receive training on the use of AI, including how its use complies with ethical standards. This type of training and clarity is especially important if members could potentially be disciplined for improper use of AI.
Other contracts include language that requires the clear labeling of any content that was generated by or with assistance from AI technology. The Ziff Davis agreement says that if the company “uses AI to create, curate, or modify, in whole or in part, any content appearing in the same publication in which bylines of current or former bargaining unit employees appear (‘AI-Generated Content’), the Company must clearly identify it as ‘AI-Generated Content,’” and sets out several guidelines to provide transparency and disclosure, including for multimedia content.
NewsGuild members have been creative by drafting language that is responsive to the potential uses for AI in their shop. Because this is a dynamic technology, many contracts also require the formation of a joint committee of union and management representatives to be a forum for conversation and information sharing.
The New York Times Tech workers won a contract after striking for eight days that creates a committee “to discuss the potential impact of Generative Artificial Intelligence.” The committee is required to meet semi-annually at the request of the Guild.
Less than two years ago, NewsGuild members participated in a study showing their support for stronger campaigns against AI denigrating their work and expressing solidarity with the Hollywood strikes, where AI was a major theme.
There’s no one-size-fits-all approach to bargaining for AI protections, but there are best practices that our members are proud to share. If you are a NewsGuild-CWA member, leader or staffer who wants to learn more, email dnewsome@cwa-union.org to be invited to our quarterly AI meetings and gain access to member resources.
UCLA Luskin Professor of Public Policy John Villasenor testified on May 8, 2025, before the U.S. Congress Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet.
Villasenor, who also holds appointments in electrical and computer engineering, management and law at UCLA, was among experts from academia, government and the private sector who joined the Washington, D.C. hearing convened to examine the role of trade secret protection in U.S. artificial intelligence development and countering economic espionage by foreign competitors and nation-states. The panel of experts also commented on protecting U.S. intellectual property as legislation and governmental policy are being developed regarding AI competition, transparency, and other issues.
“America is the clear global leader in AI, a technology that is foundational to our continued economic prosperity and national security,” said Villasenor, faculty co-director of the UCLA Institute for Technology, Law, and Policy at UCLA Luskin. But, he noted, because of that competitive differentiation, the U.S. also is vulnerable in several ways.
Villasenor explained that because American AI companies are so innovative and market-leading, they are prime targets for trade secret theft. He also cautioned that policy discussions on AI regulation give insufficient consideration to potential collateral damage to trade secret rights, emphasizing that overly expansive transparency rules would undermine U.S. AI leadership.
In the global context, he remarked that the preeminence of American AI companies also creates an asymmetry: policymakers outside the U.S. may have less concern than their U.S. counterparts about the collateral damage to trade secrets resulting from AI regulation.
“They will have little incentive to regulate in a manner that preserves the competitive advantage of U.S. AI companies,” Villasenor said.
Watch Villasenor’s testimony (starting after minute 51). Read his testimony.
Shira Perlmutter, who has served as register of copyrights since 2020, was informed Saturday afternoon that her employment had been “terminated,” according to internal communications from the Library of Congress reviewed by Politico.
Her dismissal comes just two days after the White House fired Librarian of Congress Carla Hayden, the official responsible for appointing and overseeing the Copyright Office.
Hayden, who was confirmed by the Senate in 2016 for a 10-year term, had appointed Perlmutter.
Neither dismissal came with a formal explanation, but lawmakers are already drawing connections between Perlmutter’s ouster and a recent Copyright Office report that questioned the legality of how artificial intelligence companies use copyrighted content to train generative models — a core business issue for Elon Musk, a longtime Trump ally.
“It is no coincidence [Trump] acted less than a day after [Perlmutter] refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” said Rep. Joe Morelle (D-NY), the ranking Democrat on the House Administration Committee, which has oversight of the Library of Congress and the Copyright Office.
Perlmutter’s office had just released a detailed report on copyright and artificial intelligence, the third installment in an ongoing series examining the legal and economic implications of AI-generated content.
While the report stopped short of recommending immediate regulatory action, it cast doubt on the sweeping “fair use” defenses that many AI firms rely on to justify scraping copyrighted materials.
“But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the report stated.
Though the report encouraged the development of licensing markets and floated ideas like extended collective licensing to address gaps, it warned against premature government intervention — a stance that may not align with the priorities of tech moguls seeking fewer legal roadblocks.
Morelle accused the Trump administration of overstepping its constitutional boundaries.
“This action once again tramples on Congress’s Article One authority and throws a trillion-dollar industry into chaos,” he said. “When will my Republican colleagues decide enough is enough?”
The White House has not responded to requests for comment.
Musk, who helped launch OpenAI and now leads the rival xAI (which is merging with X, formerly Twitter), recently backed a call by Jack Dorsey to “delete all IP law.”
His AI ventures are among several facing lawsuits from content creators alleging copyright infringement.
In May 2024, OpenAI and The Post’s parent company, News Corp., announced a landmark multi-year agreement granting OpenAI access to a vast array of News Corp.’s current and archived news content.
The Post has sought comment from News Corp. and the News/Media Alliance.
Under current law, the register of copyrights is appointed by the librarian of Congress, not the president — although the librarian’s position itself is subject to presidential nomination and Senate confirmation.
Trump’s direct involvement in the dismissals has prompted alarm over political interference in what has traditionally been a nonpartisan regulatory domain.
With the leadership of both the library and Copyright Office now vacant, it remains unclear how future disputes over AI and copyright will be handled.
The Trump administration reportedly fired the head of the US copyright office over the weekend – within days of the dismissed official having published a report about how the development of artificial intelligence (AI) technology could run afoul of fair use law.
The sacking of Shira Perlmutter as the register of copyrights and director of the copyright office on Saturday, as reported by the Washington Post and NBC News, came two days after Donald Trump fired the librarian of Congress, who oversees the copyright office.
Perlmutter took over the copyright office in 2020, and some of her employees suspect her firing may stem from her recent report on how using copyrighted material to train AI tech could overstep laws governing fair use, according to the Post’s reporting.
The New York congressman Joe Morelle, a Democrat, also speculated that Perlmutter’s report may have motivated the Trump administration to fire her, calling her dismissal a “brazen, unprecedented power grab”.
The report from Perlmutter was not highly critical of the use of AI, saying the copyright office believed “government intervention would be premature at this time”.
Since the second Trump administration took office in January, the so-called “department of government efficiency” (Doge), overseen by the billionaire Elon Musk, has been tasked with slashing federal spending. And Doge has reportedly been attempting to use AI to make cuts to federal funding.
Additionally, Musk, a staunch Trump ally who owns an AI firm himself, has publicly supported deleting intellectual property laws.
Perlmutter’s firing evidently signals another step in the Trump administration’s effort to reshape the federal government by ousting officials whom the president believes may resist his agenda.
Just days earlier, Trump abruptly fired Carla Hayden as librarian of Congress. Hayden was the first woman and the first Black person to serve in the role. According to the White House, her firing was due to her pursuing diversity, equity and inclusion (DEI) programs which Trump has pledged to eliminate.
Hayden had been targeted by rightwing groups who accused her of promoting children’s books that the groups claim are inappropriate. The conservative American Accountability Foundation had urged the Trump administration to fire her, saying she was “woke” and “anti-Trump”.
The Library of Congress in Washington DC is available to the public, holding millions of items, including books and historical documents. It also administers copyright law through its oversight of the copyright office.
On Saturday, in his first formal address to cardinals, Pope Leo XIV described artificial intelligence as a transformative force akin to the Industrial Revolution.
What Happened: “In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence,” the new pope said in Italian, reported CNN. He added, “These pose new challenges for the defense of human dignity, justice and labor.”
Born Robert Prevost in Chicago, Leo XIV became the first U.S.-born pontiff when he was elected Thursday.
He chose his papal name in honor of Pope Leo XIII, who in 1891 issued Rerum Novarum, a foundational document of Catholic social teaching that addressed the upheavals of the Industrial Revolution, the report noted.
The new pope also signaled strong continuity with the late Pope Francis, praising his “complete dedication to service and to sober simplicity of life.”
Why It’s Important: Previously, a review of the new pope’s social media activity on X revealed that Pope Leo XIV had shared posts critical of Donald Trump-era policies, including opposition to anti-immigrant rhetoric, the death penalty and congressional inaction on gun reform.
He has also reposted content challenging Vice President JD Vance’s interpretations of Christianity.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Big Tech companies train their AI models mostly on the work of other people, like scientists, journalists, filmmakers, or artists.
Those creators have long objected to the practice. Now, the US Copyright Office appears to have joined their side.
The office released on Friday its latest in a series of reports exploring copyright laws and artificial intelligence. The report addresses whether the copyrighted content AI companies use to train their AI models qualifies under the fair use doctrine.
AI companies are probably not going to like what they read.
AI companies are desperate for data. Most of them believe that the more information a model can digest, the better it will be. But with that insatiable consumption, they risk running afoul of copyright laws.
Companies like OpenAI have faced a slew of lawsuits from creators who say training AI models on their copyrighted work without permission infringes on their rights. AI execs argue they haven’t violated copyright laws because the training falls under fair use.
According to the US Copyright Office’s new report, however, it’s not that simple.
“Although it is not possible to prejudge the result in any particular case, precedent supports the following general observations,” the office said. “Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs — all of which can affect the market.”
The office made a distinction between AI models for research and commercial AI models.
“When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training,” the office said. “But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”
In the report, the office compared artificial intelligence outputs that essentially copy their training materials to outputs with added elements and new value.
“On one end of the spectrum, training a model is most transformative when the purpose is to deploy it for research, or in a closed system that constrains it to a non-substitutive task,” the office said. “For example, training a language model on a large collection of data, including social media posts, articles, and books, for deployment in systems used for content moderation does not have the same educational purpose as those papers and books.”
Training an artificial intelligence model to create outputs “substantially similar to copyrighted works in the dataset” is less likely to be considered transformative.
“Unlike cases where copying computer programs to access their functional elements was necessary to create new, interoperable works, using images or sound recordings to train a model that generates similar expressive outputs does not merely remove a technical barrier to productive competition,” the office said. “In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative.”
In another section, the office said it rejected two “common arguments” about the “transformative nature of AI training.”
“As noted above, some argue that the use of copyrighted works to train AI models is inherently transformative because it is not for expressive purposes. We view this argument as mistaken,” the office said.
“Nor do we agree that AI training is inherently transformative because it is like human learning,” it added.
A day after the office released the report, President Donald Trump fired its director, Shira Perlmutter, a spokesperson told Business Insider.
“On Saturday afternoon, the White House sent an email to Shira Perlmutter saying ‘your position as the Register of Copyrights and Director at the US Copyright Office is terminated effective immediately,’” the spokesperson said.
While Trump, with the help of Elon Musk, who has his own AI model, Grok, has sought to reduce the federal workforce and shutter some agencies, some saw the timing of Perlmutter’s dismissal as suspect. New York Rep. Joe Morelle, a Democrat, addressed Perlmutter’s firing in an online statement.
“Donald Trump’s termination of Register of Copyrights, Shira Perlmutter, is a brazen, unprecedented power grab with no legal basis. It is surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” the statement said.
Big Tech and AI companies have rallied around Trump since his election, led by Musk, who became the face of the White House DOGE Office and the administration’s effort to reduce federal spending. Other tech billionaires, like Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman, have also cozied up to Trump in recent months.
A representative for the White House did not respond to a request for comment from Business Insider.