Category: Uncategorized

  • From Mars to minus: Elon Musk’s approval ratings plummet in Donald Trump’s 100-day shadow, finds poll

It is not just Donald Trump: approval ratings for the US President’s DOGE major-domo, Elon Musk, also look very glum.

As Donald Trump is set to complete 100 days in office, 57 percent of Americans said they disapprove of Elon Musk’s performance in the Trump administration, as per a joint poll by ABC News, The Washington Post and Ipsos.

Meanwhile, 35 percent of US citizens approved of the Mars mission leader’s performance in the second Trump administration.

    Why is Elon Musk’s approval rating low?

Angst against Trump’s federal cuts, the layoffs spearheaded by Elon Musk, and the shutting down of the Education Department are the major factors behind the Tesla boss’s negative ratings, as per the ABC News/Washington Post/Ipsos poll released ahead of Donald Trump’s 100-day mark in office.

    Trump going too far in federal cuts?

As per the latest polls, 56 percent of respondents said they think Donald Trump is going too far in laying off federal workers, an effort led by Elon Musk, and around 57 percent said they think Trump is going too far in closing federal agencies.

Among people who think the layoffs are going too far, Musk has a dismal 6%-89% approve-disapprove rating. That compares with 72%-16% among those who say the level of layoffs is about right or has not gone far enough, ABC News reported.

    Ratings against shutting of Education Department

As per the ABC News/Washington Post/Ipsos poll, 66 percent of respondents opposed closing the Department of Education, and 77 percent opposed reducing federal funding for medical research. Those who oppose such cuts also tend to have a negative view of Musk’s efforts.

Opinions about Musk’s work are also linked to perceptions of waste within the federal government. Forty-three percent of respondents believe waste has decreased since Trump took office, and Musk has a 67%-26% approval rating among this group, ABC News reported.

    Donald Trump’s approval rating ‘horrible’?

    According to a CNN poll, Donald Trump received an approval rating of 41% — the lowest for any newly elected president at the 100-day mark — dating back at least to Dwight Eisenhower. This rating is even lower than Trump’s approval at the same point in his first term.

    CNN’s chief data analyst, Harry Enten, was blunt in his analysis: “These numbers are just horrible, there’s no way to sugarcoat it.”

  • Trump’s First 100 Days of AI: Stargate, Less Regulation, and Brutal Memes

    Can Donald Trump impose his ‘America First’ approach onto artificial intelligence research? Here’s what his administration has done so far.
  • Stop worrying about whether content is AI-generated

    Log onto LinkedIn on any given day, and you’re bound to see impassioned debates about how to tell if something is AI-generated.

    If it has an em dash, it was AI! If it uses the phrase “in today’s challenging environment,” a robot did it! On and on, tips for sniffing out AI that begin to sound more like articles of faith than helpful advice, a search for human connection in a time of technological uncertainty.

    As a writer and editor who reads submissions from writers every day, I understand these concerns. In the early days of AI, I used to try to read the tea leaves to determine whether or not something was AI. It worried me! I want to give my readers the best. Could a robot really do that? Of course not.

    I was so confident that a few times, I asked people, as kindly as I could: Did you use AI to write this?

    And every time, the answer was no.

Eventually, I came to the realization that it doesn’t matter if AI wrote the content I was reading, just as it didn’t matter whether the writer used Microsoft Word or Google Docs, or did their research on Bing or Google. What ultimately mattered was: did the piece do what it needed to do?

    If it did, then did it matter if there was an AI assist?

    Now, I believe that at this moment, in the second quarter of 2025, humans will succeed in that goal more than AI will. Generative AI, in its current state, is an aggregation of massive amounts of data written by humans. It can only rearrange those pieces like a giant Mad Lib. It isn’t capable of creating anything truly new.

    But in many cases, neither are humans. I read rehashed submissions long before AI came onto the scene, just like people used em dashes before ChatGPT was invented.

    Ultimately, whether a piece was whipped up by a robot with a great prompt or painstakingly written letter by letter by a human doesn’t matter. Here’s what does:

    • Is the piece accurate? These are table stakes. If the content isn’t trustworthy, nothing else matters. And both AI and people have their struggles in this regard.
    • Is the piece interesting or useful? Not every piece of content is going to have you on the edge of your seat — nor should it. But it should, generally, either entice you with great storytelling or give you the information that you need. Otherwise, why does it exist?
    • Is the piece ethical? If AI is writing about some human emotion it can’t experience, that’s a problem. If its use isn’t transparent, that’s an issue. If it’s stealing content, that’s an issue. But humans lying about facts is also an issue. Keep it all above board.
    • Does the piece have some form of originality? Not every item reinvents the wheel — nor should it. But whether we’re talking about an anecdote, a flash of humor or personalization, something about the content should stand out.
    • Does the piece achieve its goal? Content can be designed to inform, persuade, move to action, entertain and on and on. If it’s an educational piece that doesn’t teach the audience anything, it isn’t successful.

    In other words, communicators should focus more on how content is received than how it’s created. You can achieve this through all the usual methods: analyzing page views, read time, email open rates, pulse surveys or tracking when journalists respond to your pitches. Or heck, you could just show it to another human and ask for their opinion the old-fashioned way.

    AI isn’t the enemy. Bad content is. No matter who the author is.

    Allison Carter is editorial director of PR Daily and Ragan.com. Follow her on LinkedIn.

  • Public sours on Musk’s role, is skeptical that government is cutting waste

A Washington Post-ABC News-Ipsos poll finding negative reactions to some cuts made by Musk’s U.S. DOGE Service.
  • AI-powered PCs lag behind big promises

    Microsoft and Apple’s fumbles leave room for an OpenAI computing device.
  • What has DOGE done in Trump’s first 100 days? : NPR

    Elon Musk wielding a chainsaw at the Conservative Political Action Conference (CPAC) on Feb. 20, 2025 in Oxon Hill, Md.

Andrew Harnik/Getty Images

    When President Trump returned to the White House in January, he promised to “restore competence and effectiveness” to the federal government by establishing a Department of Government Efficiency.

    In the lead-up to his inauguration, DOGE evolved from a meme to an outside commission to a White House office given carte blanche to upend the executive branch in the name of combating perceived waste, fraud and abuse.

    A small cadre of software engineers and others with connections to billionaire Elon Musk quickly fanned out across federal agencies, where they have encouraged the firing of tens of thousands of federal employees, overseen the effective dismantling of agencies, slashed spending on foreign food aid, medical research and basic office supplies and burrowed into multiple sensitive data systems.

    Last week, Musk said he would spend less time on DOGE and focus on Tesla, as the 130-day clock on his appointment as a “special government employee” runs down. “The DOGE team has made a lot of progress in addressing waste and fraud,” he said.

    In an interview with TIME last week, Trump called DOGE a “very big success.” “We found hundreds of billions of dollars of waste, fraud, and abuse,” he said. “It’s a scam. It’s illegal, in my opinion, so much of the stuff that we found, but I think DOGE has been a big success from that standpoint.”

    Despite those claims, 100 days into Trump’s second term, DOGE has not delivered on its promised savings, efficiency or transparency in meaningful ways.

    Musk’s vision of DOGE taking a chainsaw to government spending has hit repeated snags. An initial savings goal of $2 trillion was lowered to $1 trillion before being downgraded again recently to $150 billion — less than a tenth of Musk’s original promise. Even that number may be difficult to reach, given DOGE’s history of inaccurate and overstated claims combined with Trump’s desire to shield spending on Social Security and Medicare, which are major drivers of the federal budget.

    Many of DOGE’s initiatives have been reversed or delayed after legal setbacks and backlash in the court of public opinion. Since Jan. 20, dozens of federal lawsuits have challenged DOGE’s activities or mentioned its actions, according to NPR’s review of district court dockets across the U.S.

    Still, DOGE has already reshaped the federal government in significant ways — and is amassing unprecedented power over government data. With Trump’s blessing, Musk’s group has tried to grant itself virtually unfettered access to the most sensitive personal and financial systems the federal government maintains.

    From a meme to the White House

    DOGE’s very genesis was marked by inefficiency: A week after the November election, Trump announced the entity would be co-led by Musk, the billionaire CEO of Tesla and SpaceX, and Vivek Ramaswamy, a biotech entrepreneur and former Republican presidential candidate.

    “Together, these two wonderful Americans will pave the way for my Administration to dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies,” Trump wrote on his Truth Social platform.

Not only did DOGE have two leaders, but the two had competing visions of how to accomplish its goal. Ramaswamy pushed to work through the courts and Congress, WIRED and The Washington Post reported. But by Inauguration Day, Ramaswamy had left and Musk’s version prevailed: directly reshaping the federal bureaucracy through mass firings, similar to what he did when he bought Twitter in 2022, and by seizing control of technology across agencies.

    While DOGE was originally described as operating outside the government — a sort of blue-ribbon commission that would make recommendations — an executive order Trump signed on his first day in office gave DOGE a home in the White House and a mandate for direct action.

    Musk speaks alongside President Trump in the Oval Office on Feb. 11, 2025.

Andrew Harnik/Getty Images

    But the DOGE spelled out by Trump’s order and the DOGE that has embedded itself across and beyond the executive branch share few similarities, NPR’s reporting over the first 100 days of the administration has found.

    Some agencies have upwards of a dozen DOGE-affiliated personnel, while others have just one or two. A small number of DOGE-linked staffers have been working at multiple federal agencies at the same time.

DOGE’s nebulous organizational structure extends beyond rank-and-file employees: Musk’s role as the de facto head of DOGE has been touted by the White House but downplayed by Justice Department lawyers when legally expedient. Still, Trump has repeatedly described Musk as leading DOGE, and the bulk of its work has been in service of Musk’s stated goal to dramatically slash the federal deficit by the Sept. 30 end of the fiscal year.

    ‘Savings’ claims were off from the start

    There’s little evidence to support the claim that DOGE is saving agencies significant money or changing the fact that the federal government spends more money than it collects — mainly on non-discretionary programs like Medicaid and Social Security. In fact, as of March 31, government spending is up 10% from the same period last year while revenue is only up 3%, leading to a 23% increase in the deficit, according to Treasury Department data.

    Even after Musk’s latest downward revision of DOGE’s savings goal to $150 billion, that number is unlikely to be reached.

    On its website, DOGE claims $160 billion has been saved through canceling contracts, firing workers and other measures. As NPR has reported, that tracker is plagued with inaccuracies, errors, omissions and overstatements.

As of late April, out of $160 billion in claimed savings, DOGE’s “wall of receipts” provides data to account for just $63 billion.

    The five contract cancellations with the most claimed savings, accounting for nearly $7.5 billion in the DOGE tracker, actually amount to just under $1 billion in potential savings. They include a contract that was never awarded, one that was already terminated and another that doesn’t appear to be canceled at all, as DOGE continues to use misleading math.

    Cutting contracts and stopping spending

    NPR’s reporting shows the contracts DOGE has terminated and spending it has frozen largely reflect policy disagreements with the Biden administration rather than waste, fraud or abuse. In some cases, DOGE targeted spending on the types of software modernization and efficiency efforts that its mandate claims to support. It eliminated 18F, a tech unit inside the General Services Administration that helped improve digital services across agencies, including developing the IRS’ free online tax-filing software.

    According to the DOGE tracker, many of the contracts terminated would not actually result in any money saved.

Protesters gather on the National Mall for the “Hands-Off” protest against the Trump administration on April 5, 2025.

Dominic Gwinn/AFP via Getty Images

    Federal employees say it appears little thought has been given to many of the cuts beyond trying to reach Musk’s savings target.

    “They are roving in search of cuts they can put up on their wall to get that number up, whether it’s cutting staff, contracts, leases, grants, programs, offices, whatever,” said one General Services Administration worker who asked to remain anonymous for fear of retaliation from the Trump administration.

    Reshaping the federal workforce

    From the beginning, Trump and Musk zeroed in on federal workers, saying they want to “dismantle government bureaucracy” and root out what Trump calls “rogue bureaucrats.”

    An opening salvo haphazardly targeted tens of thousands of workers still in probationary periods because they had recently been hired or promoted into new roles.

    Some firings were so abrupt that agencies scrambled to bring back terminated staff, including those at the Department of Energy’s National Nuclear Security Administration who oversee the nation’s nuclear weapons stockpile. Others saw employees fired, then unfired, before being fired again.

    While challenges to those terminations have worked their way through the legal system, some agencies have said court-ordered reinstatement caused “significant administrative burdens.” The Supreme Court and a federal appeals court paused those rulings this month, clearing the way for firings to continue.

    Then there was the “fork in the road” resignation offer for federal employees to get paid through September without having to work, similar to a push Musk made after taking over Twitter. Some workers who accepted the offer have since been told they can’t actually take it.

A terminated federal worker leaves the offices of the U.S. Agency for International Development in Washington, D.C. on Feb. 28, 2025 after being laid off following Trump’s order to cut funding to the agency. That decision was driven by DOGE’s work.

Bryan Dozier/AFP via Getty Images

A Department of Agriculture employee, who spoke to NPR on condition of anonymity because they feared retaliation in their job, was approved to take the resignation offer and was supposed to go on administrative leave on May 1. On April 23, the employee received an email notifying them their job was considered “mission critical” and asking them to “reconsider their enrollment.” They still plan to resign.

    “At this point, it’s their loss after firing [probationary employees], rehiring, uncertainty, mental anguish, being kept in the dark about decisions that affect your livelihood,” the employee said. “It isn’t fair to Americans because we will all feel the effects of agencies that can’t run effectively, but that’s not my fault. It’s the fault of the decision makers.”

    At some agencies, workers say the number of people who are leaving, between firings, buyouts and early retirements, is affecting the government’s ability to provide services to the public.

    “People are dropping like flies in terms of those who are eligible for retirement,” said an employee at the Internal Revenue Service, to whom NPR granted anonymity because they fear retaliation from the Trump administration. “That makes a lot of work for everyone else, especially since they can’t hire. So many people can’t do their jobs because of the lack of people.”

    “Extreme levels of fraud” — but no proof

    Another Musk-driven initiative asked employees to send weekly emails to the Office of Personnel Management outlining five things they accomplished. Those emails sparked confusion among workers and Cabinet officials who gave conflicting guidance on whether their employees should comply. Musk and Trump claimed the email was meant to identify federal workers who don’t actually exist — an allegation for which they provided no evidence.

    “We think there are a number of people on the government payroll who are dead, which is probably why they can’t respond, and some people who are not real people. Like, there are literally fictional individuals that are collecting paychecks,” Musk said in a February Cabinet meeting.

    Musk has similarly claimed, without providing proof, that the Social Security system is plagued by “extreme levels of fraud,” including benefits checks going to dead people and recipients who are impossibly listed as well over 100 years old in the SSA database. His claims have been debunked by the Social Security Administration’s inspector general and its acting commissioner, Leland Dudek.

    Nowhere to work and nothing to work with

    Federal workers who still have jobs have been ordered back into offices, only to face shortages of desks, internet bandwidth, and even toilet paper. Dozens of workers across multiple agencies told NPR the return to office mandate has made them less productive and flies in the face of previous efforts to encourage telework, including under Trump’s first term, which the federal government estimates has saved hundreds of millions of dollars in reduced costs.

“The goal of remote work and telework was to bring down the taxpayer burden, to be more efficient,” said a Food and Drug Administration employee who was assigned to an office with insufficient space. “This is not sustainable. They are going to have to get bigger spaces,” said the employee, who requested anonymity because they feared retribution for speaking publicly.

    However, even as the administration has demanded workers return to the office, it’s also looking to shrink the federal government’s real estate footprint by up to 25%, an NPR analysis found. Some federal employees have been told the offices they are assigned to work in may close in the near future – and some planned closures have been reversed after public outcry.

A 2019 photo of the U.S. Department of Housing and Urban Development building in Washington, DC. The Trump administration has put the building up for sale as part of its effort to trim the federal government’s real estate holdings.

Alastair Pike/AFP via Getty Images

    Many workers say their ability to do their jobs is also being stymied by a freeze on government-issued payment cards, which has disrupted their ability to buy supplies and services, book travel, and carry out statutorily mandated work. Routine spending now has to be approved by leadership at some agencies, leading to long delays.

    “We are literally jumping for joy over here in our local office because HQ/DOGE has approved our expenses to pump [a] vault toilet at one of our field offices,” said one worker at the Bureau of Land Management, who requested anonymity because they fear retaliation from the Trump administration. “It took weeks to get this approved when it was not an issue before.”

    DOGE in court

    While DOGE has changed how the federal government operates, its own work has largely been conducted in secret, with most of the information about its actions coming from court filings.

    An NPR review of thousands of pages of filings in federal lawsuits over DOGE’s actions finds an alarming pattern across agencies, where DOGE has given conflicting information about what data it has accessed, who has that access and, most importantly, why.

    In a case against the Office of Personnel Management, the Treasury Department and Education Department, a federal judge found agencies shared data with DOGE affiliates “who had no need to know the vast amount of sensitive personal information to which they were granted access.”

    Another judge wrote that DOGE gaining broad access to Social Security data instead of a more narrow approach “is tantamount to hitting a fly with a sledgehammer.” In a different case, the court expressed concern that sensitive Treasury Department data was potentially shared outside of the agency, in violation of federal law.

    In more than a dozen court cases alleging DOGE illegally accessed sensitive personal and financial data at agencies across the government, filings reveal evidence of DOGE staffers violating data-sharing rules and skirting required training.

    Other court documents reveal that a small number of DOGE employees have essentially unlimited access to different federal systems that could be combined to create dossiers about American citizens and noncitizens in violation of privacy laws.

    What is DOGE doing with government data?

    Concerns about data abuse are not just hypothetical. This month, a whistleblower provided evidence that DOGE may have taken sensitive data from the National Labor Relations Board and hidden its tracks.

    Democrats on the House Oversight Committee have alleged that other whistleblowers have evidence that DOGE is creating a master database of Americans’ private information.

    Already, DOGE appears to be using its access to disparate datasets, including Social Security records, to advance baseless claims about noncitizen voting and massive fraud within government programs.

    Its data access is also being used to further the Trump administration’s immigration policies: The Department of Homeland Security announced last week that DOGE helped overhaul an immigration database to serve as “a single, reliable source for verifying non-citizen status nationwide.”

    As Musk steps back, DOGE’s work continues

It’s too early to say what long-term impact DOGE will have on the federal government. Trump’s order gives the temporary DOGE organization a deadline of July 4, 2026, to accomplish its goals.

    What is certain is that DOGE has already reshaped the federal workforce: More than 100,000 federal workers have been fired or taken buyouts to leave the civil service so far, though ongoing court battles mean that number is likely to change in the coming months. Add in planned reduction-in-force efforts across agencies and close to 10% of the 2.5 million-person federal workforce could be gone by the end of the fiscal year.

    Musk speaks during a cabinet meeting at the White House on March 24, 2025.

Win McNamee/Getty Images

    The layoffs are likely to have far-reaching ramifications in communities across the country: the federal government is the nation’s largest employer and more than 80% of its employees live outside of the Washington, D.C., metro area.

Legal and logistical challenges could still block some of DOGE’s initiatives. But in the meantime, changes are already underway that will be hard to unwind, from cutting off funding for scientific research to reduced foreign policy influence as the U.S. cedes soft power to other countries. Additionally, a number of people who have worked at Musk’s companies are installed in key positions at agencies throughout the government.

    As for Musk, he said he expects to still spend “a day or two a week” on government work.

    “I’ll have to continue doing it for, I think, probably the remainder of the president’s term, just to make sure that the waste and fraud that we stop does not come roaring back, which will do if it has the chance,” he said.

    Have information or evidence to share about DOGE’s access to data and other activities inside the federal government? Reach out to these authors through encrypted communications on Signal: Stephen Fowler is available at stphnfwlr.25 and Shannon Bond is available at shannonbond.01. Please use a nonwork device.

  • The Middle East’s AI Warfare Laboratory

    On a chilly morning in November 1911, Lt. Giulio Gavotti, an Italian pilot, leaned out from the cockpit of his monoplane over the oases and farmlands of modern-day Libya and tossed four small grenades onto an encampment of Ottoman soldiers. Widely covered in the international press, the bombardment was ultimately ineffective and caused no casualties. Yet it is acknowledged today as the start of a revolution in military affairs — the first recorded instance of explosives that were dropped from a powered aircraft during an armed conflict, ushering in the age of Guernica, Dresden, and Hiroshima.

More than a century later, in March 2020, Western media was abuzz again with reports of another quantum military leap, this one occurring only a few kilometers from the Italian aviator’s sortie. According to a U.N. investigation, a Turkish-made Kargu-2 drone engaged the vehicles and troops of a Libyan militia “without requiring data connectivity between the operator and the munition effect.” As such, it may have been the first instance of an attack by a “lethal autonomous weapons system.” However, an exhaustive investigation and years of debate left experts dubious about whether the strike had taken place without human input.

    What largely escaped the discussions over the Kargu incident is the fact that, like the innovation of the Italian pilot at the turn of the previous century, it happened during a conflict of marginal importance at the time, waged by second-tier state powers and their local proxies on the periphery of Eurasia, a region that was then and is still believed to be the world’s geopolitical core. While much of the focus in the West today regarding the weaponization of AI has been directed at that Eurasian heartland — on the developmental threat posed by China and on Ukraine and Russia’s battlefield innovations — the Middle East and North Africa region remains uniquely vulnerable to the uncontrolled and lethal application of these technologies.

To begin with, many Middle Eastern conflicts fall short of total war, waged by state and non-state actors using a range of unconventional tactics. In such contexts, AI technologies hold out the promise of conferring unique and decisive advantages, while adding new operational and ethical challenges. Moreover, wars in this region are characterized by the routine flouting of international norms of warfare — often abetted by outside powers — which puts the region even more at risk from the misuse of weaponized AI.

    In many respects, Israel’s recent military campaign in the Gaza Strip epitomizes the intersection of these trends, while also highlighting the great dangers to civilians posed by this technology. It has also demonstrated Israel’s qualitative edge in the regional AI arms race, which has been joined by other ambitious and interventionist Middle Eastern states. The escalatory spiral of this competition, along with the region’s history of conflicts and entrenched rivalries, underscores the urgent need to regulate the development and use of AI weapons better through informal compacts that emerge from within the Middle East itself.

    Why Middle East Conflicts Are So Conducive to the Use of Weaponized AI

The Middle East and North Africa are arguably the most conflict-ridden and militarized regions in the world. Four of the 11 “extreme conflicts” identified in 2024 by the Armed Conflict Location & Event Data Project occurred there, and six of the region’s 16 countries were listed as conflict zones. Wars in the region often occur on the lower rungs of the escalation ladder and are waged through a blend of irregular tactics, subversion, disinformation, cyberattacks, and the deployment of standoff weapons like drones and ballistic missiles. Because of their promise of imparting greater precision, speed, lethality, and deniability, AI systems are likely to integrate well into these dominant modes of warfighting, amplifying their physical and psychological effects.

    The topography of many Middle Eastern conflicts adds to the allure of deploying artificial intelligence. Presently, AI-enabled technologies show the most promise in the aerial domain of warfare, especially in accelerating the targeting cycle of air-to-ground strikes. For such functions, the object recognition capabilities of the current class of algorithms work best in topographically less complex and less populated environments, such as the deserts or shrub steppes predominating the region. During its recent campaigns against non-state militants in Iraq, Syria, and Yemen, for example, the U.S. military made extensive use of algorithms developed under Project Maven to distinguish between different classes of non-human targets including tanks, trucks, and air defense sites.

    Relatively basic algorithms are also well-suited to maritime military operations, especially in the littoral sea lanes and chokepoints of the Suez Canal, the Bab al-Mandeb, and the Strait of Hormuz. In those environments, AI systems currently favor defensive and surveillance functions, such as the use of pattern and signature recognition to provide forewarning of seaborne and aerial attacks on ships and coastal infrastructure. Even so, future AI advancements will also improve offensive naval capabilities, as shown already by the Ukrainian military’s use of sea drones for deep strikes against Russian warships in the Black Sea. Such systems will undoubtedly prove attractive to Middle Eastern states and non-state actors that have made maritime disruption a centerpiece of their warfighting strategy, most notably Iran and its Yemeni proxy, the Houthis.

    Given the Middle East’s high degree of urbanization, it is unsurprising that wars have often reached their decisive climax in cities and suburbs, such as Aleppo, Raqqa, Mosul, and Sirte. Combat in the three-dimensional battlespaces of such densely built settings imposes acute hardships on belligerents, offsetting advantages in mass, mobility, and firepower and degrading command-and-control. Artificial intelligence therefore presents the promise of easing, if not erasing, some of these challenges. While the current generation of AI solutions still faces difficulties in urban settings, especially from the presence of visual, sonic, and thermal “clutter,” it is only a matter of time before the technology evolves to overcome these limitations.

    Among the most relevant AI applications for urban warfare are battle management systems that can help commanders at all levels obtain a clearer picture of a dynamic cityscape. At the tactical level, small autonomous drones and robots fitted with sensors or munitions can move over mounds of rubble, through the interior rooms of buildings, and inside sewers and tunnels. More controversially, algorithmic pattern-recognition tools, based often on behavioral and biometric data, can provide early warning of an impending insurgent or terrorist attack in densely populated areas. Yet without the appropriate safeguards, this capability is fraught with ethical risks, especially when it is directed at specific ethnolinguistic or religious communities — a particular concern in the Middle East, given the salience of these identities as factors in conflicts.

    Finally, there is a normative aspect to Middle Eastern wars that should raise additional worries about the militarization of AI. State and non-state actors in regional conflicts have historically flouted international conventions governing warfare with alarming frequency, especially those regarding the protection of civilians. In many instances, outside powers have enabled and condoned these transgressions to shield their Middle Eastern allies and clients from scrutiny and sanctions. In Libya, for example, repeated breaches of U.N. arms embargos by the United Arab Emirates and other regional actors went unpunished, in part because the United States and other Security Council members wanted to protect their local partners. More recently, the Biden and Trump administrations have armed, funded, and defended Israel’s military campaign in Gaza despite its repeated violations of international law, while China and Russia have stayed silent on the egregious abuses of Iran’s regional proxies. External actors have also committed rather than simply enabled these violations, as demonstrated by recent U.S. airstrikes against the Houthis, further normalizing such behavior.

    How Middle Eastern States Are Driving the AI Arms Race

    Despite these risks, ambitious and powerful Middle Eastern states are pressing ahead in the race for weaponized AI. Many are led by deeply autocratic regimes that are using this technology to fight terrorists and criminals, but also to silence political dissidents and journalists. Yet the clear regional leader in applying AI for both internal security and military operations is not an autocracy, but a democracy — albeit an increasingly imperiled one.

    Prior to the Gaza War, the Israel Defense Forces had invested in and deployed militarized AI, benefitting from the country’s well-funded technological sector and its close collaboration with the military. In the past, AI was used mostly for population surveillance and border policing, exemplified by AI-powered robotic twin gun turrets installed atop a wall on the occupied West Bank. In 2021, Israel utilized AI-enabled intelligence processing and targeting systems during the Unity Intifada, which Israeli commentators described as the “world’s first AI war.” Two years later, it built upon this experience to deploy this technology on a much larger scale, with the launch of Israel’s incursion into the Gaza Strip following Hamas’s Oct. 7 massacres and hostage-taking.

    The results have been troubling. On the one hand, AI has enhanced commanders’ situational awareness, lessened the human costs of tunnel mapping, improved the speed of military strikes, and boosted the survivability of troops. Yet the benefits afforded by such functions have been overshadowed by mounting civilian deaths reportedly resulting from this technology’s safeguards being loosened or eliminated. For example, the Israel-based +972 Magazine found that the “Where’s Daddy?” application has been used to alert Israeli military personnel when a suspected Hamas militant entered a specified area, often a family home, upon which unguided “dumb” bombs were then dropped. U.N. experts have expressed deep concern about the Israeli military’s use of this and other AI targeting systems, including “Lavender” and “Gospel,” warning about the “lowered human due diligence to avoid or minimise civilian casualties.”

    Elsewhere in the region, the oil-rich Gulf states of Saudi Arabia and the United Arab Emirates are using their wealth to fuel domestic development of AI and to attract investment from abroad, especially China and the United States. Undertaken as part of an economic diversification strategy, their investments have improved the capacity of their domestic security services to counter illicit networks and violent extremist actors, but also to monitor and suppress activists.

    Externally, the two Gulf monarchies are harnessing AI technology to guard against cyberattacks and misinformation, bolster their air and coastal defenses, and assist in the development of semi-autonomous loitering munitions and drone swarms. Riyadh and Abu Dhabi currently appear to be prioritizing domestic economic development, but in the past they have pursued destabilizing military interventions, conducting joint airstrikes and waging proxy wars in Yemen, and, in the case of the UAE, in Libya and Sudan. The adoption of AI into their militaries could embolden them toward greater adventurism abroad, especially if the new technology is perceived to lower the costs of such meddling.

    Across the Gulf, the Islamic Republic of Iran is endeavoring to build an AI arms industry in the face of crippling sanctions. It lacks the sophistication of other regional powers’ programs, but the regime in Tehran likely sees great value in incorporating AI into its efforts to rebuild Iran’s power projection capabilities in the wake of Israel’s punishing strikes against Hamas and Hizballah. Specifically, Iranian officials may believe that transferring this technology to its proxy allies could signal Iran’s continued viability as a patron and help reassert control, while restoring some measure of psychological deterrence against its foes. This logic may also be guiding Iran’s recent pronouncements about AI enhancements to its fleets of long-range missiles and unmanned aerial vehicles, including the formidable Shahed loitering munition. Tehran is also enlisting AI to strengthen its cyber operations capabilities, which constitute yet another critical pillar of its national defense strategy.

    Turkey’s prowess in AI lags other Middle Eastern powers, despite its much-hyped Kargu-2 strike in Libya. To be sure, Turkey has carved out a niche in so-called “drone diplomacy,” marketing and deploying its unmanned aerial vehicles across Europe, Asia, and Africa. Yet this quantitative edge is not matched by a commensurate level in quality: bereft of Gulf states’ oil wealth and Israel’s technological base, Ankara lacks the resources to emerge as a regional frontrunner in military-use AI. Currently, Turkey appears to be prioritizing its drone production over AI research and development, although President Erdogan’s ambitions to project power across the Mediterranean and into Africa may compel it to make such investments.

    The Need for Region-Led Dialogue on Military AI

    The net effect of weaponized AI on the regional balance of power remains unclear. Yet as long as it is perceived to reduce the physical and political risks of war-making, AI arms may prolong existing conflicts and spark new ones. Moreover, the future use of AI is not limited to the conventional militaries of the Middle East’s powerhouses: If the proliferation pathways of drones are any guide, these technologies may find their way to non-state militants through a combination of smuggling, homegrown experimentation, and sales. Such diffusion is likely to offset the deterrent power or battlefield advantages the region’s leading states believe they currently possess through AI.

    These risks underscore the urgency of governing and regulating this technology. Such discussions have already started. In 2023, Austria tabled a motion at the United Nations to apply international law to lethal autonomous weapons. The resolution enjoyed widespread support, but five of the eight abstentions were by the Middle East’s leading users of military-use AI. Concurrent with these deliberations, the Biden administration pushed for greater governance of military AI, but those initiatives are now in jeopardy as President Donald Trump pushes for AI deregulation and chafes at international arms control agreements.

    In light of Washington’s ambivalence and the United Nations’ halting progress, it is vital that regulations or norms governing this technology’s use emerge from within the Middle East. The prospects for this happening through a formal multilateral institution are not encouraging, given the region’s repeated failure to establish a region-wide security forum. Any near-term discussions on militarized AI will likely happen between clusters of like-minded states, such as Gulf Cooperation Council members or parties to the Abraham Accords, who may agree on arms control and monitoring mechanisms as a form of confidence-building. These modest steps should be encouraged, along with the establishment of informal norm-making bodies, such as an international experts’ group tasked with investigating the civilian impacts of these systems.

    Ultimately, the deep-seated drivers behind the AI arms race in the Middle East are unlikely to be resolved soon. That said, the aftermath of the Gaza War, along with doubts about America’s future as a security guarantor, has hastened the realization in many capitals that the best hope for stability in the Middle East lies in local dialogue and de-escalation. Ensuring that discussions about the military use of artificial intelligence are included in these initiatives seems the most feasible way to mitigate the risks of this new technology in an already conflict-racked region.

     

     

    Frederic Wehrey is a senior fellow in the Middle East Program at the Carnegie Endowment for International Peace and a former U.S. Air Force intelligence officer.

    Andrew Bonney is a former research assistant in the Carnegie Middle East Program.

    Image: Midjourney

  • “Robots Will Outperform Best Surgeons In 5 Years”: Elon Musk

    “Robots Will Outperform Best Surgeons In 5 Years”: Elon Musk


    New Delhi:

    Amid significant medical breakthroughs being achieved by robots, billionaire Elon Musk on Monday said they have the potential to surpass the best human surgeons within five years.

    The Tesla and SpaceX CEO said that his brain-computer interface company Neuralink depended on robots for the brain-computer electrode insertion as the task was impossible to achieve with humans.

    “Robots will surpass good human surgeons within a few years and the best human surgeons within five years,” Musk shared in a post on the social media platform X.

    “Neuralink had to use a robot for the brain-computer electrode insertion, as it was impossible for a human to achieve the required speed and precision,” he added.

    The post came in response to another post by influencer Mario Nawfal who highlighted a recent breakthrough of robotics in medicine by the US-based medical device company Medtronic.

    Nawfal said that Medtronic successfully deployed its Hugo robotic system in “137 real surgeries — fixing prostates, kidneys, and bladders”.

    The surgery results were “better than doctors expected” and saw “a success rate of over 98 per cent”.

    Complication rates were also low for prostate surgeries (3.7 per cent) and kidney surgeries (1.9 per cent), though higher for bladder surgeries (17.9 per cent).

    Of the 137 surgeries, only two needed to switch back to regular surgery — one because of a robot glitch, and one because of a tricky patient case, Nawfal said.

    Meanwhile, Musk’s Neuralink is currently engaged in a clinical trial of its brain-computer interface technology. The company aims to create brain-controlled devices for people with paralysis or neurodegenerative diseases.

    While none of the devices are yet commercial, three people have successfully received a Neuralink brain implant.

    “If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within 5 years, millions within 10 years,” Musk said on X in 2024.

    (Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)


  • China-based Huawei to test AI chip aiming to rival Nvidia: Report

    China-based Huawei to test AI chip aiming to rival Nvidia: Report

    Chinese tech giant Huawei has reportedly developed a powerful artificial intelligence chip that could rival high-end processors from US chip maker Nvidia.

    The Shenzhen-based Huawei is poised to start testing a new AI chip called the Ascend 910D, and has approached local tech firms, which are slated to receive the first batch of sample chips by late May, The Wall Street Journal reported on April 27, citing people familiar with the matter.

    The development is still at an early stage, and a series of tests will be needed to assess the chip’s performance and get it ready for customers.

    Huawei hopes its latest Ascend AI processor will be more powerful than Nvidia’s H100 chip, which has been used for AI training since 2022.

    Huawei is also poised to ship more than 800,000 earlier model Ascend 910B and 910C chips to customers, including state-owned telecoms operators and private AI developers such as TikTok parent ByteDance.

    Beijing has also reportedly encouraged Chinese AI developers to increase purchases of domestic chips as trade tensions between China and the US escalate. 

    In mid-April, Nvidia stated that it was expecting around $5.5 billion in charges associated with its AI chip inventory due to significant export restrictions imposed by the US government affecting its business with China. 

    The Trump administration added Nvidia’s H20 chip, the most powerful processor the company could still sell to China, to a growing list of semiconductors restricted for sale to the country.

    Some key components for AI chips, such as the latest high-bandwidth memory units, have also been restricted for export to China by the US. 

    Huawei is also focusing on building faster and more efficient systems, such as CloudMatrix 384, a computing system unveiled in April that connects Ascend 910C chips. The approach leverages large arrays of chips, relying on brute force rather than making individual processors more powerful.

    China seeks self-reliance on AI

    Reuters reported on April 26, citing state media reports, that Chinese President Xi Jinping pledged “self-reliance and self-strengthening” to develop AI in the country.

    “We must recognise the gaps and redouble our efforts to comprehensively advance technological innovation, industrial development, and AI-empowered applications,” Xi said at a Politburo meeting study session on April 25.

    Image: Donald Trump (left) meeting with Xi Jinping (right) in 2018 at the G20. Source: Dan Scavino


    “We must continue to strengthen basic research, concentrate our efforts on mastering core technologies such as high-end chips and basic software, and build an independent, controllable, and collaborative artificial intelligence basic software and hardware system,” Xi added.

    US President Donald Trump has repeatedly urged Xi to contact him for discussions about a potential trade deal after his administration imposed 145% tariffs on most Chinese goods. 

    China has stated that it is not having any talks with the US and that the country should “stop creating confusion.”
