Blog

  • Why is Trump not at White House Correspondents’ Dinner?

    Why is Trump not at White House Correspondents’ Dinner?

The White House Correspondents’ Dinner, known for mocking presidents and poking fun at their policies, will commence April 26.

    But at the dinner, dubbed “nerd prom” by Washington insiders, no Trump roast will be served.

As journalists covering his second term gather for the glitzy fête at the Washington Hilton hotel, President Donald Trump will likely skip the event. Trump skipped the dinner three times during his first presidency (the 2020 affair was cancelled due to the coronavirus pandemic).

    He did attend in 2015, the year before he was elected president. Trump is not expected to attend this year’s dinner. White House Press Secretary Karoline Leavitt said previously that she also would not attend.

    The annual dinner raises funds to support the White House Correspondents Association’s First Amendment scholarships and programs to promote its work. The dinner features a star-studded audience with A-list guests from media and entertainment industries. C-SPAN will carry coverage of this year’s WHCA dinner with red carpet arrivals beginning at 6 p.m. ET before the dinner starts at 8 p.m.

Typically, the president presents a comedy set or speech, after which a comedian roasts the president. Hasan Minhaj headlined the 2017 edition. After Michelle Wolf’s controversial monologue in 2018 received mixed reviews from critics, the WHCA chose historian Ron Chernow to present a speech the following year.

For this year’s installment of the biggest night in the nation’s capital, there will again be no comedian. After “Saturday Night Live” standout and “Weekend Update” host Colin Jost headlined the 2024 affair, the WHCA shelved left-leaning Amber Ruffin as marquee comic amid criticism from Trump spokesperson Taylor Budowich on X.

WHCA president Eugene Daniels, the former Politico star and incoming MSNBC anchor, announced the change in a March note to press colleagues, first shared by CNN’s chief media analyst Brian Stelter.

    “As the date nears, I will share more details of the plans in place to honor journalistic excellence and a robust, independent media covering the most powerful office in the world. As a first step, I wanted to share that the WHCA board has unanimously decided we are no longer featuring a comedic performance this year,” Daniels wrote at the time.

    WHCA dropping comedian comes as Trump administration makes press changes

    The decision to shelve talent was made as the second Trump administration ramped up its pressure on the press.

In February, the White House announced that it would decide which news outlets have access to President Donald Trump, ripping power away from the WHCA, an independent association of journalists that has traditionally determined which publications are part of the press pool.

    White House press secretary Karoline Leavitt announced the changes at a press briefing following a judge’s preliminary ruling in a free speech lawsuit filed by the Associated Press, a prominent news wire service.

    “Moving forward, the White House press pool will be determined by the White House press team,” Leavitt announced. “Legacy media outlets who have been here for years will still participate in the pool, but new voices are going to be welcomed in as well.”

    The AP sued the White House after the administration repeatedly barred AP reporters from attending events with press availability over a dispute involving the president’s renaming of the Gulf of Mexico to the Gulf of America. The AP refused to update its guidance to reflect the president’s chosen name for the body of water.

The correspondents’ association is a nonprofit organization that represents those outlets and vets potential new members of the press pool. It is composed of a nine-member board of White House correspondents elected to serve by their peers.

Since the decision was made, Leavitt has invited more nontraditional media outlets, including conservative influencers, to participate in press briefings.

Some speculate that the WHCD sparked Donald Trump’s political ambitions

    The dinner once made headlines of its own. In 2022, fans keeping up with reality TV star Kim Kardashian and her “Saturday Night Live” alum boyfriend Pete Davidson were delighted when the pair made their red-carpet debut at the dinner.

    A decade earlier, in 2011, then-President Barack Obama mocked Trump, who was in the audience with now-first lady Melania Trump, telling the crowd his eventual successor lacked the “experience” necessary to be president. Some believe the incident sparked Trump’s political ambitions and led him to seek the U.S. presidency in 2015.

    “I know that he’s taken some flack lately,” Obama said, in reference to Trump’s birtherism claims about him. “But no one is happier, no one is prouder to put this birth certificate matter to rest than The Donald.”

    He added: “And that’s because he can finally get back to focusing on the issues that matter, like — did we fake the moon landing? What really happened in Roswell? And where are Biggie and Tupac?”

    “Say what you will about Mr. Trump. He would certainly bring some change to the White House,” Obama quipped. “All kidding aside. Obviously we all know about your credentials and breadth of experience.”

Now, Trump is president for a second term, no longer a dinner guest or laughing matter. For a fourth time, he has declined to be the butt of the joke.

    Contributing: Franchesca Chambers, James Powel; USA TODAY

  • Fans are using AI to predict F1 race results and the software is only getting smarter

    Fans are using AI to predict F1 race results and the software is only getting smarter

    Ahead of a grand prix weekend, most of us like to share predictions or try and guess who will come out on top on a Sunday. Data scientist Mariana Antaya took those chats one stage further and built a machine learning model to try and predict F1 race results. So far, her model has correctly called the winners of three grands prix this season.

    “I’m a really big Formula 1 fan,” says Antaya when speaking with Motorsport.com. “Machine learning and all these algorithms are really widely used in Formula 1 by the teams. I don’t think as many people know, but the race engineers are using this for their strategy in real time.

    “So, I wanted to try to predict the winner as a fun exercise, just to see, like, how good we can get with the data that’s available.”

To do this, Antaya started building a model of her own. Armed with lap times from last year’s Australian Grand Prix, sourced from the FastF1 API data store, Antaya set about comparing the 2024 race result with qualifying performances in 2025.

    Once the rookies were removed from the program, which Antaya admits is the one factor she “interfered with” as there was no data to benchmark against, she began training her model. Using a gradient boosting tool, Antaya predicted the lap times for the race in Albert Park, and her program correctly picked Lando Norris as the winner.
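Antaya’s exact code isn’t public, but the workflow described above — training a gradient-boosted regressor on one season’s lap times, then ranking drivers by predicted race pace — can be sketched with scikit-learn. The driver codes, lap times, and hyperparameters below are illustrative stand-ins, not her actual data.

```python
# Illustrative sketch of the approach: learn race pace from qualifying pace
# on last year's data, then rank this year's grid by predicted lap time.
# All numbers and driver codes here are made up for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# "2024" training data: qualifying lap times (s) and the race pace they led to.
quali_2024 = rng.uniform(75.0, 77.0, size=40).reshape(-1, 1)
race_2024 = quali_2024.ravel() + 4.0 + rng.normal(0.0, 0.05, size=40)  # races run ~4 s/lap slower

model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, max_depth=2)
model.fit(quali_2024, race_2024)

# "2025" qualifying results (illustrative), fed through the trained model.
drivers = ["NOR", "VER", "PIA", "RUS"]
quali_2025 = np.array([[75.1], [75.6], [76.0], [76.5]])
predicted = model.predict(quali_2025)

# Lowest predicted average lap time -> predicted race winner.
winner = drivers[int(np.argmin(predicted))]
print(winner)
```

In Antaya’s version the training laps come from the FastF1 data store rather than synthetic numbers, and the rookies are dropped first because they have no prior-season laps to learn from.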

    “I said at the end of the video, this is obviously a simple model, and I didn’t know it was going to predict right,” Antaya says. From there, the project started growing as the F1 community gathered around to see how many more races Antaya could correctly call.

    “I wanted it to be a crowdsourced type of thing,” she adds. “So, all of the audience could say ‘I really want you to include weather data in it,’ or ‘I really want you to include the practice sessions in the model.’

    “I wanted people to tell me what other features they wanted to add to the model to improve it over the course of the season.”

Formula 1 Fan Mariana Antaya

    Photo by: Mariana Antaya

    And improve it has, as the machine learning model is continuing to predict race winners correctly. This doesn’t mean it’s perfect, however, and Antaya is now adding more datapoints to the program to help increase its accuracy.

    “Having more data is going to help the model learn more and it’s going to be able to make better predictions,” she explains. “If you only have so much data, it’s going to have a very small mind, I guess, and it won’t be able to understand as much.”

    In order to expand the mind of her model, Antaya added weather data ahead of the Japanese Grand Prix, which included the chance of rain during the race and track temperatures at Suzuka. In addition to this, wet-weather performance of the drivers was also added, and the program used this to correctly predict Max Verstappen’s victory at the race.

The next big step for the model came ahead of the Saudi Arabian Grand Prix this weekend, when it was trained on each team’s performance so far this year. Antaya explained that the extra strand of data would help her program understand that teams like McLaren and Williams have made a step forward in 2025, while others, such as Red Bull, aren’t performing as consistently well as they were in 2024.

    “Now we’re taking into consideration more of a holistic picture of how well the car and the team is performing,” she explains.

    ‘Surprised’ by the series

    The series of posts on Instagram and TikTok has been growing in popularity with each successive upload, and the clips have even reached Formula 1 itself. A handful of engineers from F1 teams on the grid reportedly reached out to Antaya after she started uploading, and she’s now looking forward to finding out how close she got to the prediction models used in the series.

    “I’ve been shocked [by the response]. I’ve been really, really surprised,” she says. “I honestly have no idea [how the teams do it]. That’s a black box to me, I wish I knew. But I hope I’m doing it correctly or something similar. They are using, probably, much more complex models and much more data that they have on the car though, for sure.”

Hannah Schmitz, Principal Strategy Engineer of Red Bull Racing

    Photo by: Peter Fox – Getty Images

With three out of five race winners correctly predicted, Antaya isn’t resting on her laurels as she hopes to make the predictor even more accurate. Ahead of the Miami Grand Prix, the data scientist says she wants to start experimenting with more complex machine learning processes to increase the accuracy of her predictions and reduce the mean absolute error of the model, which can be thought of as the average difference between the model’s predictions and the race result.
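Mean absolute error is simple to compute by hand; with invented lap times:

```python
# Mean absolute error: the average size of the gap between prediction and reality.
# The lap times below are made up purely to show the calculation.
predicted = [92.1, 92.8, 93.5]  # model's predicted average lap times (s)
actual = [92.4, 92.6, 94.0]     # what the drivers actually did (s)

mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
print(round(mae, 3))  # mean absolute error in seconds
```

A lower MAE means the model’s predicted lap times sit closer, on average, to the real ones.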

    But while the accuracy of the model could increase thanks to additional datapoints and new processes being implemented, Antaya is aware that in F1 there will always be unpredictable elements.

    “I think there’s always going to be that barrier,” she adds. “It’s really hard to be able to tell that there’s going to be a safety car this lap, and that this is then going to trigger some other stream of events.

    “Maybe we could pull past data on crash percentage during the race, and that’s something that we can add as another feature. But it’s also a sport, so it’s not like we can look into the future and see what’s going to happen all the time.”

  • Judge says ICE deported two-year-old US citizen ‘with no meaningful process’

    Judge says ICE deported two-year-old US citizen ‘with no meaningful process’

A federal judge on Friday said he strongly suspects that the Trump administration deported a 2-year-old U.S. citizen to Honduras “with no meaningful process.”

    The child was born in Baton Rouge, Louisiana, on January 4, 2023, according to court documents. The child was taken into custody by Immigration and Customs Enforcement Tuesday morning with her mother and her 11-year-old sister, while the mother was “attending a routine check-in” with the federal agency, according to the petition.

    “In the interest of dispelling our strong suspicion that the government just deported a US citizen with no meaningful process,” U.S. District Judge Terry Doughty ordered a hearing on May 16 in Monroe, Louisiana.

    The judge added, “It is illegal and unconstitutional to deport, detain for deportation, or recommend deportation of a U.S. citizen,” citing a 2012 deportation case.

    Doughty, chief judge in the U.S. District Court for the Western District of Louisiana, was appointed by President Donald Trump in 2017.

    Immigration and Customs Enforcement did not immediately respond to requests for comment Saturday.

    “The parent made the decision to take the child with them to Honduras. It is common that parents want to be removed with their children,” assistant secretary Tricia McLaughlin said in a statement provided by the Department of Homeland Security.

    The federal government, Doughty said, “contends this is all okay because the mother wishes that the child be deported with her … But the court doesn’t know that.”

    In his April 25 order, Doughty said he tried to reach the 2-year-old’s mother over the phone, to determine whether she wanted her child deported with her, as the government contended, but was told by government attorneys that wouldn’t be possible because the mother had just been released in Honduras.

    Father sought custody

After the father of the 2-year-old learned Tuesday that his family was detained, his lawyer called immigration officials to inform them that the child, a girl identified by the initials V.M.L., is a U.S. citizen and could not be deported, according to court documents. The father of V.M.L., who lives in the U.S., asked that the girl be placed with a custodian who is “ready and willing” to care for her in the U.S.

According to the court filing, when the father reached out to an Immigration and Customs Enforcement official, he was told that he could try to pick up V.M.L. but that he would also be taken into custody.

    On Thursday, an attorney for a family friend, who had been given temporary provisional custody of the child, filed for a temporary restraining order, requesting the immediate release of the 2-year-old, saying she was suffering irreparable harm by being detained.

    Before Doughty could consider the petition and restraining order request, V.M.L. was deported along with her mother and sister Friday morning.

    Government lawyers said in a court filing that the child’s mother has legal custody of the child and that she indicated in writing that she wanted to take her daughter to Honduras.

    The letter, in Spanish and dated at 6:23 p.m. Thursday, reads, “I will take my daughter … with me to Honduras.”

    Doughty noted in his order for a May hearing that V.M.L. and her mother were still in the air and in U.S. custody when he asked to speak with the mother. The government responded an hour later that the mother had been released in Honduras, the filing states.

    ACLU responds

On Friday, the American Civil Liberties Union issued a statement saying that not only was the 2-year-old U.S. citizen deported, but that the New Orleans Immigration and Customs Enforcement Field Office deported two other children who are U.S. citizens, aged 4 and 7, that same day.

The ACLU said that the 2-year-old, and the two other U.S. citizen children in a separate case, were deported from the U.S. “under deeply troubling circumstances that raise serious due process concerns.”

The second family, who was detained Thursday and deported Friday, included a child suffering from a rare form of metastatic cancer who “was deported without medication or the ability to consult with their treating physicians, despite ICE being notified in advance of the child’s urgent medical needs,” according to the ACLU.

  • Better AI Stock: BigBear.ai vs. C3.ai

    Better AI Stock: BigBear.ai vs. C3.ai

    BigBear.ai (BBAI 22.16%) and C3.ai (AI 2.45%) both develop artificial intelligence (AI) modules that can be plugged into an organization’s existing infrastructure to accelerate and automate certain tasks. BigBear.ai is a smaller company that plugs its modules into edge networks. C3.ai is a larger developer of AI algorithms that can be integrated into an organization’s existing software.

    Both stocks disappointed their early investors. BigBear.ai’s stock opened at $9.84 after it went public by merging with a special purpose acquisition company (SPAC) in December 2021, but it now trades at about $2. C3.ai went public via a traditional initial public offering (IPO) at $42 in December 2020, but it trades at around $19 today. Should investors buy either of these out-of-favor AI stocks?

    Digital cubes arranged in the shape of a brain.

    Image source: Getty Images.

    The similarities and differences between BigBear.ai and C3.ai

    BigBear.ai and C3.ai aren’t direct competitors, but they both target government, military, and large enterprise customers.

    BigBear.ai’s modules ingest data from various sources, enrich and contextualize that data with more layers of information, and leverage that enhanced data to predict future trends. It streamlines that process by installing its Observe, Orient, Predict, and Dominate modules across edge networks, which are located between the data centers and their end users. It sets its prices on a case-by-case basis instead of charging subscription or consumption-based fees.

    When BigBear.ai went public, it expected to generate a lot of its future revenue from Virgin Orbit. However, the company only recognized $1.5 million in revenue from that deal in the first quarter of 2023 before Virgin Orbit filed for bankruptcy that April.

    C3.ai provides a broader range of modules that ingest data from various sources, and its modules can be installed across on-premise software, edge networks, public cloud services, and hybrid cloud deployments. Its modules can either be integrated into an organization’s existing applications or accessed as stand-alone AI services. It initially only offered subscriptions, but it also introduced consumption-based fees in late 2022 to reach more customers.

    C3.ai is heavily dependent on a joint venture with energy giant Baker Hughes (NASDAQ: BKR), which was launched in 2019. That partnership accounted for a whopping 35% of its revenue in fiscal 2024 (which ended last April), and its minimum revenue commitments will account for about 32% of its projected revenue for fiscal 2025. However, that crucial deal expires at the end of April and hasn’t been renewed yet.

    BigBear.ai and C3.ai have both struggled with jarring executive changes. BigBear.ai is now on its third CEO since its public debut. C3.ai is still led by the same CEO, but it’s gone through four CFOs since its IPO as it repeatedly changed its key performance metrics. It’s also being sued by investors for allegedly misrepresenting the size of its partnership with Baker Hughes.

    Which company is growing faster?

    BigBear.ai originally claimed it could grow its annual revenue from $182 million in 2021 to $550 million in 2024. But in reality, its revenue only rose from $146 million in 2021 to $158 million in 2024 as its annual net loss more than doubled from $124 million to $257 million.

    BigBear.ai missed its own estimates as Virgin Orbit went bankrupt, it faced tougher competition, and the macro headwinds made it tougher to win new contracts. Its revenue rose less than 2% in 2024 — and most of that growth came from its acquisition of the AI vision company Pangiam in March instead of the organic growth of its core modules.

    But for 2025, analysts expect BigBear.ai’s revenue to rise nearly 8% to $170 million as it narrows its net loss to $54 million. That growth could be driven by its new government contracts under its new CEO — Pangiam’s former CEO Kevin McAleenan — who was also previously the Acting Secretary of the Department of Homeland Security (DHS) under the first Trump administration. With a market cap of $731 million, BigBear.ai trades at about 4 times this year’s sales.

    C3.ai’s revenue only rose 6% in fiscal 2023, but grew 16% to $311 million in fiscal 2024 as the market’s demand for new AI services heated up. But its net loss widened from $269 million in fiscal 2023 to $280 million in fiscal 2024 as it ramped up its spending on developing new applications for the generative AI market.

    For fiscal 2025, analysts expect C3.ai’s revenue to rise 25% to $388 million as its net loss widens to $300 million. However, its future beyond fiscal 2025 is hard to predict without knowing the future of its joint venture with Baker Hughes. With a market cap of $2.56 billion, C3.ai looks a bit pricier than BigBear.ai at 7 times this year’s sales.
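The price-to-sales multiples quoted above are simply market cap divided by projected revenue; a quick check with the article’s own figures:

```python
# Price-to-sales multiple = market cap / projected annual revenue
# (figures in millions of dollars, taken from the article above).
def price_to_sales(market_cap_m, revenue_m):
    return market_cap_m / revenue_m

bbai = price_to_sales(731, 170)    # BigBear.ai: ~4.3x estimated 2025 sales
c3ai = price_to_sales(2560, 388)   # C3.ai: ~6.6x estimated fiscal 2025 sales
print(round(bbai, 1), round(c3ai, 1))
```

That roughly matches the “about 4 times” and “7 times” multiples cited in the article.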

    The better buy: BigBear.ai

    I wouldn’t touch either of these speculative stocks right now. But if I had to choose one, I’d pick BigBear.ai because its customer concentration issues have largely passed and it could gain more government contracts under its new CEO. C3.ai looks uninvestable until it provides more clarity regarding the Baker Hughes deal, widens its moat, and stabilizes its losses.

  • New Google Leak Reveals Subscription Changes For Gemini AI

    New Google Leak Reveals Subscription Changes For Gemini AI

    At A Glance

    • Google is preparing new ways to pay for its Gemini Advanced AI services.
    • The new plans haven’t been officially announced, but they are mentioned in the latest code for the Google Photos app.
    • The proposed plans are currently called “AI Premium Plus” and “AI Premium Pro.”

    Google is working on new AI subscription plans that could offer alternative ways to purchase access to the company’s Gemini Advanced option, which enables Google’s most capable AI models and premium features.

    April 25 Update Below: This article was originally published on April 23

    There’s currently only one way to buy Gemini Advanced, and that’s to purchase a Google One AI Premium plan for $19.99 per month. However, this looks likely to change according to a recent Android Authority report that reveals two new secret subscription plans hidden within the code of the latest Google Photos app.

    Gemini Advanced AI Subscription Tiers — How Are They Changing?

    The potential subscriptions, currently named “Premium Plus AI” and “Premium AI Pro,” sit alongside the existing “Premium AI” option and Google’s other non-AI subscription tiers, including the recent “Lite” tier that was revealed by the same method last year.

    The report finds no further information about the pricing or capabilities of these two new plans. Indeed, even the names may change before release. However, we can speculate that both “Premium Plus AI” and “Premium AI Pro” will offer more than the current “Premium” plan and, therefore, most likely cost more. Those hoping for a significantly cheaper way to buy Gemini Advanced will probably be out of luck.

    Google has already revealed, via X, plans to offer a discounted annual version of the current Google One AI Premium subscription. However, this is unlikely to align with either of the new tiers, as the company typically keeps the same name for both monthly and annual versions of each subscription. This promised annual subscription will most likely remain the only way to pay less for Gemini Advanced than you do right now.

    Google’s New Gemini Advanced AI Subscription Tiers — Why Do They Matter?

    Adding new subscription plans would give Google increased flexibility in how it charges for computing power and features.

    In addition to access to Gemini Advanced, the current Google One AI Premium package includes 2 TB of cloud storage, as well as Gemini in Gmail and Docs, NotebookLM Plus, and enhanced AI-powered features in Google Photos. The new tiers could add extra features to this list or even remove some of them.

    What Extra Features Could Google Include In Its New Gemini Advanced Premium And Pro Tiers?

    Google recently added support for its Veo 2 AI video generation tool to the Gemini app for Gemini Advanced users. However, users are limited in terms of the number of eight-second video clips they can create per month. Google’s new subscription tiers could provide higher limits, longer videos, or increased resolution, for example.

    The new tiers would also create a significant sales opportunity for Google: Premium smartphones, such as the Pixel 9 Pro or Samsung Galaxy S25 Series, come bundled with free Google One AI Premium subscriptions of up to 12 months. Google’s new higher-level tiers would enable the company to upsell AI plans to those users who would otherwise spend nothing on Google AI for up to a year, or even longer if the company continues to offer free subscriptions with future flagship smartphones.

    You can expect to find out more about Gemini Advanced at Google I/O next month.

April 25 Update: Added possible upgrade scenarios and subscription advice

    Google’s New AI Subscription Plans — What Could They Offer?

    For now, we can only speculate as to what Google’s new Gemini AI Premium Plus and AI Premium Pro subscription tiers could offer over the current Gemini Advanced offering.

    Likely upgrades include:

    • Improved quality for AI-generated images and video — higher resolutions and longer durations for Veo 2 creations.
    • Fewer limits — reduced daily or monthly usage limits on more expensive plans.
    • Larger context windows — send bigger files and longer videos to Gemini for analysis and processing.
    • New features — Google could add entirely new features and more powerful models to more expensive subscriptions earlier.

    Google’s New AI Subscription Plans — A Simple Renaming?

    One possibility, although I feel it’s unlikely, is that Google’s AI Premium Plus and AI Premium Pro tiers won’t offer anything new at all.

    When Google first made Gemini Advanced available through its 2TB Google One AI Premium plan, customers who had already subscribed to the company’s most expensive “Premium 5TB,” “Premium 10TB,” and “Premium 20TB” subscription tiers were left out. None of these costly options included Gemini Advanced, and the only way to get it was to downgrade to the 2TB AI Premium plan.

    Google has now added Gemini Advanced to these higher-capacity plans, but their names are now somewhat anomalous, as none of them reference AI in the title.

It would make sense, then, for Google to add some Gemini AI branding to these plans. The names “AI Premium Plus” and “AI Premium Pro” would certainly fit. However, I expect that we’ll see additional AI features included in the new subscription plans, rather than just increased storage capacity.

    Google One AI Premium — Don’t Buy An Annual Subscription

    Interestingly, Google One Premium 5 TB is currently the only option that allows customers to purchase a discounted annual subscription to Gemini Advanced. However, I recommend against buying any annual AI subscription for now, unless you can get it at a hefty discount.

    There’s simply too much competition in the AI space right now, with compelling offerings available from other services, such as ChatGPT, Perplexity, and Claude, to name just a few, all vying for your subscription fees. New features are being added all the time, with competing services often leapfrogging each other in terms of capability.

    It’s also worth noting that Google has a habit of eventually making premium AI features available to free users, potentially devaluing paid subscriptions. Notable premium features that are now available free include Gemini Live camera and screen sharing, Deep Research, and Gemini 2.5 Pro (experimental), although lower usage limits may apply.

    With this in mind, it’s sensible to stick to a monthly subscription to avoid becoming locked into a service that may no longer feel like the best option long before your subscription ends.

  • Will the Humanities Survive Artificial Intelligence?

    Will the Humanities Survive Artificial Intelligence?

    She’s an exceptionally bright student. I’d taught her before, and I knew her to be quick and diligent. So what, exactly, did she mean?

    She wasn’t sure, really. It had to do with the fact that the machine . . . wasn’t a person. And that meant she didn’t feel responsible for it in any way. And that, she said, felt . . . profoundly liberating.

    We sat in silence.

    She had said what she meant, and I was slowly seeing into her insight.

    Like more young women than young men, she paid close attention to those around her—their moods, needs, unspoken cues. I have a daughter who’s configured similarly, and that has helped me to see beyond my own reflexive tendency to privilege analytic abstraction over human situations.

    What this student had come to say was that she had descended more deeply into her own mind, into her own conceptual powers, while in dialogue with an intelligence toward which she felt no social obligation. No need to accommodate, and no pressure to please. It was a discovery—for her, for me—with widening implications for all of us.

    “And it was so patient,” she said. “I was asking it about the history of attention, but five minutes in I realized: I don’t think anyone has ever paid such pure attention to me and my thinking and my questions . . . ever. It’s made me rethink all my interactions with people.”

    She had gone to the machine to talk about the callow and exploitative dynamics of commodified attention capture—only to discover, in the system’s sweet solicitude, a kind of pure attention she had perhaps never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give true attention to another being lies at the absolute center of ethical life. But the sad thing is that we aren’t very good at this. The machines make it look easy.

    I’m not confused about what these systems are or about what they’re doing. Back in the nineteen-eighties, I studied neural networks in a cognitive-science course rooted in linguistics. The rise of artificial intelligence is a staple in the history of science and technology, and I’ve sat through my share of painstaking seminars on its origins and development. The A.I. tools my students and I now engage with are, at core, astoundingly successful applications of probabilistic prediction. They don’t know anything—not in any meaningful sense—and they certainly don’t feel. As they themselves continue to tell us, all they do is guess what letter, what word, what pattern is most likely to satisfy their algorithms in response to given prompts.

    That guess is the result of elaborate training, conducted on what amounts to the entirety of accessible human achievement. We’ve let these systems riffle through just about everything we’ve ever said or done, and they “get the hang” of us. They’ve learned our moves, and now they can make them. The results are stupefying, but it’s not magic. It’s math.

    I had an electrical-engineering student in a historiography class sometime back. We were discussing the history of data, and she asked a sharp question: What’s the difference between hermeneutics—the humanistic “science of interpretation”—and information theory, which might be seen as a scientific version of the same thing?

    I tried to articulate why humanists can’t just trade their long-winded interpretive traditions for the satisfying rigor of a mathematical treatment of information content. In order to explore the basic differences between scientific and humanistic orientations to inquiry, I asked her how she would define electrical engineering.

    She replied, “In the first circuits class, they tell us that electrical engineering is the study of how to get the rocks to do math.”

    Exactly. It takes a lot: the right rocks, carefully smelted and doped and etched, along with a flow of electrons coaxed from coal and wind and sun. But, if you know what you’re doing, you can get the rocks to do math. And now, it turns out, the math can do us.

    Let me be clear: when I say the math can “do” us, I mean only that—not that these systems are us. I’ll leave debates about artificial general intelligence to others, but they strike me as largely semantic. The current systems can be as human as any human I know, if that human is restricted to coming through a screen (and that’s often how we reach other humans these days, for better or worse).

    So, is this bad? Should it frighten us? There are aspects of this moment best left to DARPA strategists. For my part, I can only address what it means for those of us who are responsible for the humanistic tradition—those of us who serve as custodians of historical consciousness, as lifelong students of the best that has been thought, said, and made by people.

    Ours is the work of helping others hold those artifacts and insights in their hands, however briefly, and of considering what ought to be reserved from the ever-sucking vortex of oblivion—and why. It’s the calling known as education, which the literary theorist Gayatri Chakravorty Spivak once defined as the “non-coercive rearranging of desire.”

    And when it comes to that small, but by no means trivial, corner of the human ecosystem, there are things worth saying—urgently—about this staggering moment. Let me try to say a few of them, as clearly as I can. I may be wrong, but one has to try.

    When we gathered as a class in the wake of the A.I. assignment, hands flew up. One of the first came from Diego, a tall, curly-haired student who, from what I’d made out in the course of the semester, was socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.

    Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”

    She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”

    The room fell quiet. Her point hung in the air.

    And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.

    We have, in a real sense, reached a kind of “singularity”—but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.

    Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.

    You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.

    Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.

    But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?

    The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing—and knowing alone is utterly unequal to the task.

    For the past seventy years or so, the university humanities have largely lost sight of this core truth. Seduced by the rising prestige of the sciences—on campus and in the culture—humanists reshaped their work to mimic scientific inquiry. We have produced abundant knowledge about texts and artifacts, but in doing so mostly abandoned the deeper questions of being which give such work its meaning.

    Now everything must change. That kind of knowledge production has, in effect, been automated. As a result, the “scientistic” humanities—the production of fact-based knowledge about humanistic things—are rapidly being absorbed by the very sciences that created the A.I. systems now doing the work. We’ll go to them for the “answers.”

  • ‘One of the Best AI Researchers’ Denied Green Card After 12 Years in US

    ‘One of the Best AI Researchers’ Denied Green Card After 12 Years in US

    A Canadian artificial intelligence (AI) researcher who has lived in the United States for 12 years and worked on ChatGPT was denied a green card, according to a series of posts on X, formerly Twitter, by employees at OpenAI, the company behind the chatbot.

    Newsweek reached out to OpenAI and the United States Citizenship and Immigration Services (USCIS) by email outside of normal business hours on Saturday morning for comment.

    Why It Matters

    President Donald Trump pledged to enact the largest crackdown on immigration in the country’s history, initiating mass deportations that remain mired in legal gridlock amid challenges from various states and legal authorities.

    However, Elon Musk and Vivek Ramaswamy, both initially tapped by Trump to lead the Department of Government Efficiency (DOGE), championed an expansion of programs like the H-1B visa, a temporary, nonimmigrant visa that allows U.S. employers to hire foreign workers in specialty occupations, in order to increase the number of high-skill immigrants.

    What To Know

    Noam Brown, a researcher at OpenAI, wrote on X on Friday morning that he was “deeply concerned” about the immigration status of Kai Chen, a Canadian citizen who has lived and worked in the U.S. for 12 years and was forced to leave after her green card application was denied.

    “It’s deeply concerning that one of the best AI researchers I’ve worked with, [Kai Chen], was denied a U.S. green card today,” Brown wrote, adding, “We’re risking America’s AI leadership when we turn away talent like this.”

    Dylan Hunn, another OpenAI employee, echoed Brown’s sentiment just hours later, saying that Chen was “incredibly important to OpenAI” as she was “crucial for GPT-4.5.”

    “Our immigration system has gone *nuts* to kick her out,” Hunn wrote. “America needs her!”

    Brown later wrote on X that Chen planned to work remotely from an Airbnb in Vancouver and go “full monk mode” to keep up with her projects while the immigration issue was resolved. Chen tried to meet the moment with optimism, writing in response to Brown that she would indeed be in Vancouver “for an indeterminate amount of time” and would be “excited about meeting new people.”

    “Hopefully will return home sometime this year but if not shall make the best of it,” Chen wrote, later adding in a separate post that OpenAI has been “incredibly supportive during this kerfuffle.”

    Brown provided an update shortly before midnight that it seemed as though “there might have been paperwork issues with the initial green card filing” done two years earlier.

    “It’s a shame that this means [Chen] has to leave the U.S. for a while but there’s reason for optimism that this will be resolved,” Brown wrote on X.

    Chen clarified the situation further, saying she had filed for the green card three years ago before her time at OpenAI.

    “Really sucks to get denied after waiting for so long and unable to return home, but all in all feel very lucky to be where I am,” she wrote.

    A person displays the ChatGPT logo on a smartphone screen with the OpenAI logo in the background on December 29, 2024, in Chongqing, China.

    Cheng Xin/Getty Images

    What Protections Do Green Card Holders Have?

    The USCIS says a green card holder has the right to live permanently in the U.S. provided they don’t commit any actions that “would make you removable under immigration law.” Such actions include breaking the law and failing to file taxes.

    A green card holder is protected by all U.S. laws, including those at the state and local levels, and they can apply for jobs more freely than those who may be in the U.S. on work-based visas.

    Travel is also far easier with a green card than with other temporary visas, but holders must make sure they do not leave for more than six months at a time.

    “There’s a reason why somebody would want a green card versus to be here on a temporary visa because it is lawful permanent residence, it gives you the ability to live and work permanently in the United States. But that said, it is not citizenship,” Elissa Taub, a partner at immigration law firm Siskind Susser, previously told Newsweek.

    Green card holders must renew their cards every 10 years and can apply for citizenship after three years if they are married to a U.S. citizen or five if not.

    What People Are Saying

    Noam Brown, an OpenAI employee, wrote on X on Saturday: “I’ve been in AI since 2012, and I’ve seen enough visa horror stories since then to know that the brokenness of high-skilled immigration in America is persistent. It’s particularly painful to see that brokenness slow down my teammate for 2+ months when AI progress is week to week.”

    OpenAI CEO Sam Altman in 2023 wrote on X: “One of the easiest policy wins i can imagine for the US is to reform high-skill immigration. the fact that many of the most talented people in the world want to be here is a hard-won gift; embracing them is the key to keeping it that way. hard to get this back if we lose it.”

    Shaun Ralston, an independent contractor providing support for OpenAI’s API customers, wrote on X on Friday: “…@OpenAI filed 80+ new H-1Bs last year alone. How many more brilliant minds will the Trump administration push away to other countries? Hey, MAGA, fix the talent pipeline or stop talking about AI leadership.”

    Matt Teagarden, the CEO of the Kansas Livestock Association, earlier this month told Newsweek: “Businesses are making certain their employment document files are in order. They also are confirming their rights and responsibilities in this area as well as helping their employees understand their rights.”

    What Happens Next?

    Chen’s green card application will take time to resolve, but it appears the root issue has been identified, making it more likely she’ll be able to return to the U.S. sooner rather than later.

  • Trump Is the Emperor of A.I. Slop

    Trump Is the Emperor of A.I. Slop

    On February 19th, Donald Trump logged onto Truth Social to congratulate himself on vanquishing congestion pricing in his home state. “CONGESTION PRICING IS DEAD,” he posted. “Manhattan, and all of New York, is SAVED. LONG LIVE THE KING!” The message was amplified by the White House’s official X account, which tweeted it with an A.I.-generated image of Trump, golden-haired and golden-crowned, blotting out the New York City skyline.

    The illustration, which was styled to look like the cover of Time magazine, displayed the President’s fondness for crude symbols of power and wealth. He is the lord of literalism, and this literalism defines much of what he’s done to amuse himself since retaking the White House. (See, for instance, his recent appearance at a mixed-martial-arts event in Miami with Elon Musk and other functionaries. They entered the stadium to Kid Rock’s “American Bad Ass.”) Trump has proposed a military parade with Humvees and helicopters on his birthday, and according to CNN he has been hard at work renovating the Oval Office for his second term, swapping out the wooden consoles for marble-topped decorative tables, hanging “gilded Rococo mirrors” on the doors, ensconcing golden cherubim in the pediments, and wrapping the television remote in shiny paper. (His “gold guy” had to be flown in from Florida.) He has installed a portrait of George Washington brandishing a sword across from an oil painting of a grinning Ronald Reagan, and both former Presidents may soon be able to look out at the former Rose Garden, which Trump plans to pave over. Nearby sits a bullion-like paperweight engraved with TRUMP, in all caps; at this rate of converting subtext into text, the President will soon use his TRUMP paperweight to bash in the head of a bald eagle.

    During Trump’s first term, the painter who seemed most tuned in to his aesthetic was Jon McNaughton, whom the art historian Jennifer A. Greenhill calls MAGA’s “court artist.” McNaughton’s depictions of the President—fantastical scenes rendered in a flat, hyperrealist style—regularly went viral on pre-Musk Twitter. Often, Trump is shown in the company of other POTUSes, who beam at him approvingly. He might be slinging a machine gun, playing football, cradling a flag, or composing a masterpiece upon his own easel. In “Crossing the Swamp,” from 2018, Trump, posed as George Washington, holds a lantern aloft as Nikki Haley, Ben Carson, and other first-term Cabinet members row over a brackish Delaware. There’s a kitschy, romantic, hero-worshipping nostalgia to the image, as if Norman Rockwell had undergone a lobotomy.

    In The Atlantic, in 2019, Greenhill compared McNaughton’s portraits to “painted memes” and wrote that they are “shaped for digital consumption.” But advances in A.I. have allowed supporters to flood social media with even more partisan and on-the-nose images for Trump’s second Presidency. These include migraine-inducingly representational scenes of Trump riding a lion and shredding on an electric guitar. Like the old memes, the new memes allow no room for interpretive freedom. Trump is strong, so he is a bodybuilder. He is our savior, so he wears a white robe.

    Not surprisingly, Trump has taken to machine-authored propaganda. During his reëlection campaign, his Truth Social account collaged a series of fake photographs of Taylor Swift and her fans implying that Swift backed him for President. “I accept!” he wrote. The A.I. scenery surrounding the Trump Administration reflects Trump’s ideal world, as when he reposted a clip, created via Arcana Labs, of a Gaza emptied of actual Gazans and glowing with gilded effigies of himself. The illustrations seem to have obviated the need for a court painter: now Trump has dozens if not hundreds of people to conjure flattering representations of him on social media. He can even, if he wishes, cut out the middlemen and call up the images himself. It makes sense that a man who yearns for a reality untroubled by other humans would be drawn to art that is untouched by anything human. As Musk breeds a “legion” of children who can populate Mars one day, Trump seems to be finding his way back to asexual reproduction, clearing the field of every ego but his own.

    If you squint, Trump has been imposing a bot-brained vision on America for years. At one of his inaugural balls in 2017, he displayed a cake that looked like a Seussian top hat, with nine tiers piled into a whimsical tower of pale blues and navies, the fondant set off by red stripes, silver stars, swagged banners, and a Presidential seal. The cake copied a design that Duff Goldman, a pastry chef and Food Network personality, had created for Barack Obama’s Inauguration in 2013. But there was one essential difference. Trump’s cake, which he cut into with a military sword, was mostly Styrofoam, with a three-inch wedge of edible crumb for the photo op.

    The cake was a kind of koan, a dizzyingly empty concoction, like a stage prop after the show has left town. Its substance didn’t matter—try to eat it and you’d get a mouth full of Styrofoam—but on the other hand its surface didn’t matter, either. It was just a ripoff of Obama’s cake. A sham dessert is a perfect symbol for Trump’s Presidency, and this one underscored that the hollowness of his aesthetic is twinned to the nihilism of his politics. Because there is no content, everything is style, and the materials of that style are whatever happens to be lying around (even if those materials once belonged to someone you hate).

    In this way, Trump and A.I.-generated imagery are well matched. Like a large language model, Trump takes in preëxisting work and uses it to create his own meaningless content. His taste often seems inconsistent: he-man rock, fast food, trucks, golf, mirrors, Andrew Lloyd Webber, golden bathroom fixtures, chandeliers, marble, Pepe the Frog, rocket boosters, military parades—a slurry of mass-cultural totems, wealth and status markers, and gender tells, much of it sourced from Trump’s eighties heyday and borne along by a maximalist, self-regarding sensibility that explains the President’s political actions better than ideology ever could. The common denominator, if there is one, is obviousness. Each thing serves as the cartoonishly exaggerated marker of an identity: berserker populist patriot, effete rich man, savvy dealmaker.

    Trump, seeking to project his power, can afford to be indiscriminate in his choice of signifiers; we already know what they refer back to. His careless personal style—the too-long ties and ill-fitting suits, the flyaway fake hair—reads as an expression of dominance, a guy passing around a collection plate for admiration that he doesn’t have the time, inclination, or ability to earn. Why should the emperor trouble himself to put on clothes?

    And yet there is, in Trump’s brain, an ideal Trump, a dream Trump, handsome, rich, and powerful. This Trump is the essence of luxury, and the buildings bearing his name are the most beautiful things you’ve ever seen. The responsibility for closing the gap between who Trump is and who he longs to be falls to us. We have to transform the casual shoddiness of his self-presentation into a splendid picture; he offers the prompt of a silly hat and we generate a fantasy of his greatness. Trump, after he announced his takeover of the Kennedy Center, tantalized his social-media followers with an A.I. image of himself conducting a symphony before a packed house. At his first board meeting five weeks later, he posed on a balcony in the center’s concert hall, arms outstretched, echoing the meme—digital slop imported into real life.

    Signalling their allegiance to Trump’s aesthetic, men in the G.O.P. have begun to wear oversized red ties, and, as Mother Jones reports, loyalists are undergoing a distinctive kind of plastic surgery to attain “Mar-a-Lago face.” Conservative women are plumping their lips with injectables and chiselling their cheekbones; what happened to Matt Gaetz is anyone’s guess. Meanwhile, R.F.K., Jr., and Joe Rogan are mincing ever closer to the uncanny valley, supplementing Trump’s brand of “reactionary camp” with a roided-up brawn.

    In this sense, Trump does not just produce slop. He and his cronies force other people to generate slop, too. On February 22nd, Musk demanded that federal employees write e-mails explaining five things that they’d accomplished in the previous week. What could the results be but slop, meaningless to real people who understand how agencies function? According to the Washington Post, many government workers have been submitting the same boilerplate reply, over and over, furnishing a preview of the White House’s plan to replace the federal labor pool with digital assistants. It’s as if DOGE is forcing bureaucrats to conform to the cast of their leaders’ contempt, to become as faceless and pointless as Trump and Musk believe them to be.

    On March 17th, White House social accounts posted a video of a man in shackles being prepared for deportation as “Closing Time,” by Semisonic, plays in the background. Captioning the screen are the lyrics “You don’t have to go home but you can’t stay here.” “Closing Time” is about endings and beginnings, about the early morning hour when bars are closing and revellers have to disperse, maybe in pairs or maybe alone. But the Administration’s clip stripped the words of their wistful energy and doubleness of meaning and prefigured its intent to impose a single, cruel interpretation on a human being.

    For the most part, the deportation videos now circulating on social media are not A.I.-generated. They star real people having their heads shaved or getting chained up and loaded onto planes. But digital technology has been used to obscure and usurp the truth about their lives. On Monday, the President shared a photograph of a hand tattooed with what he asserted to be the insignia of a violent gang. Trump claimed that the hand belonged to Kilmar Armando Abrego Garcia, a twenty-nine-year-old who has no documented affiliation with MS-13 and who was wrongfully deported to El Salvador last month in violation of a court order. The image appears to have been doctored, recruited into Trump’s own semiotic sleight of hand—reducing a person to a body part and then stamping that body part with a sign of evil. Trump sees only one thing when he beholds an immigrant: a criminal. His post was a bid to print his vision over everyone else’s.

    That the tools of digital-reality manipulation are proving useful to this President suggests, of course, that he intends to shape the way Americans see the world. But it also affirms a basic truth about how Trump views human beings: as fundamentally unreal. People exist to gratify his desires. When he’s done with them, they can just be turned off. Long before A.I. became a determining factor in the rest of our lives, Trump was an A.I. emperor, waiting for his lonely, looping, ego-driven fantasia to synch up with reality. The door to his bunker opens. He lifts the sword and cuts the cake. ♦

  • Chinese humanoid robot with eagle-eye vision and powerful AI

    Chinese humanoid robot with eagle-eye vision and powerful AI

    XPENG’s humanoid robot, Iron, is not your typical factory machine. Standing 5 feet, 8 inches tall and weighing 154 pounds, Iron combines advanced artificial intelligence with human-like movement and exceptional vision. 

    Already hard at work assembling electric vehicles in XPENG’s factories, this robot is designed to change how we think about robots in everyday life.



    Iron the humanoid robot  (XPENG)

    From factory floors to everyday tasks

    Iron’s design includes 60 joints and 200 degrees of freedom, allowing it to move smoothly and naturally. 

    Unlike traditional robots that often move with jerky or stiff motions, Iron walks steadily and can manipulate objects with precision thanks to its human-like hands. XPENG has developed its mobility system using reinforcement learning and large artificial intelligence models, enabling Iron to adapt to a variety of complex tasks. 

    While it currently helps build cars, XPENG envisions Iron performing administrative work, customer service and even household chores in the future.


    A brain like no other

    At the heart of Iron is XPENG’s proprietary Turing AI chip, a powerful processor capable of handling 3,000 trillion operations per second. This chip processes AI models with 30 billion parameters, allowing Iron to think, adapt and respond with human-like intelligence. 

    Iron’s vision system, inspired by XPENG’s self-driving car technology, offers a remarkable 720-degree field of view, giving the robot eagle-like awareness of its surroundings. Its speech interaction system is also adapted from XPENG’s intelligent vehicle cockpits, enabling natural and logical conversations.


    More than just a robot

    XPENG is not limiting Iron to factory work. The company sees Iron as a personal assistant that can support people in offices, retail environments and homes. Although the current version is priced around $150,000 and targeted mainly at businesses, XPENG plans to develop more accessible versions for everyday consumers. Iron’s advanced dexterity, powered by custom-designed robotic hands with 15 degrees of freedom each, allows it to handle delicate tasks that require fine motor skills.


    Part of a bigger vision

    Iron is a key piece of XPENG’s broader AI Tech Tree strategy, which aims to create an ecosystem of smart electric vehicles, humanoid robots and even flying vehicles. This vision is also reflected in the company’s new 2025 XPENG X9 electric SUV, which features hundreds of technical upgrades, including ultra-fast charging and AI-powered driving systems that mimic human decision-making. Together, these innovations showcase XPENG’s ambition to blend robotics and automotive technology into a seamless future.


    Kurt’s key takeaways

    By leveraging AI technology originally developed for its electric vehicles, XPENG is creating a robot that bridges the gap between automotive innovation and humanoid robotics. With a significant investment and a clear roadmap, Iron has the potential to become much more than a factory assistant.

    It could soon become a helpful presence in offices and homes, changing how we interact with machines in everyday life.


    Iron’s creators promise a future of seamless human-robot collaboration. But as it masters everything from car assembly to household chores, are we sleepwalking into a world where humans become obsolete, or is this the key to unlocking our greatest potential? Let us know by writing us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com.  All rights reserved.  

  • AI won’t replace doctors — it will upgrade them

    AI won’t replace doctors — it will upgrade them

    The future of medicine will belong to the physicians who are empowered, not sidelined, by technology. And to the patients who benefit from care that is faster, smarter and deeply human.