
  • Better AI Stock: BigBear.ai vs. C3.ai


    BigBear.ai (BBAI 22.16%) and C3.ai (AI 2.45%) both develop artificial intelligence (AI) modules that can be plugged into an organization’s existing infrastructure to accelerate and automate certain tasks. BigBear.ai is a smaller company that plugs its modules into edge networks. C3.ai is a larger developer of AI algorithms that can be integrated into an organization’s existing software.

    Both stocks disappointed their early investors. BigBear.ai’s stock opened at $9.84 after it went public by merging with a special purpose acquisition company (SPAC) in December 2021, but it now trades at about $2. C3.ai went public via a traditional initial public offering (IPO) at $42 in December 2020, but it trades at around $19 today. Should investors buy either of these out-of-favor AI stocks?

    Digital cubes arranged in the shape of a brain.

    Image source: Getty Images.

    The similarities and differences between BigBear.ai and C3.ai

    BigBear.ai and C3.ai aren’t direct competitors, but they both target government, military, and large enterprise customers.

    BigBear.ai’s modules ingest data from various sources, enrich and contextualize that data with more layers of information, and leverage that enhanced data to predict future trends. It streamlines that process by installing its Observe, Orient, Predict, and Dominate modules across edge networks, which are located between the data centers and their end users. It sets its prices on a case-by-case basis instead of charging subscription or consumption-based fees.

    When BigBear.ai went public, it expected to generate a lot of its future revenue from Virgin Orbit. However, the company only recognized $1.5 million in revenue from that deal in the first quarter of 2023 before Virgin Orbit filed for bankruptcy that April.

    C3.ai provides a broader range of modules that ingest data from various sources, and its modules can be installed across on-premise software, edge networks, public cloud services, and hybrid cloud deployments. Its modules can either be integrated into an organization’s existing applications or accessed as stand-alone AI services. It initially only offered subscriptions, but it also introduced consumption-based fees in late 2022 to reach more customers.

    C3.ai is heavily dependent on a joint venture with energy giant Baker Hughes (NASDAQ: BKR), which was launched in 2019. That partnership accounted for a whopping 35% of its revenue in fiscal 2024 (which ended last April), and its minimum revenue commitments will account for about 32% of its projected revenue for fiscal 2025. However, that crucial deal expires at the end of April and hasn’t been renewed yet.

    BigBear.ai and C3.ai have both struggled with jarring executive changes. BigBear.ai is now on its third CEO since its public debut. C3.ai is still led by the same CEO, but it’s gone through four CFOs since its IPO as it repeatedly changed its key performance metrics. It’s also being sued by investors for allegedly misrepresenting the size of its partnership with Baker Hughes.

    Which company is growing faster?

    BigBear.ai originally claimed it could grow its annual revenue from a projected $182 million in 2021 to $550 million in 2024. In reality, its revenue only rose from $146 million in 2021 to $158 million in 2024, while its annual net loss more than doubled from $124 million to $257 million.

    BigBear.ai missed its own estimates as Virgin Orbit went bankrupt, it faced stiffer competition, and macro headwinds made it harder to win new contracts. Its revenue rose less than 2% in 2024, and most of that growth came from its acquisition of the AI vision company Pangiam in March rather than from the organic growth of its core modules.

    But for 2025, analysts expect BigBear.ai’s revenue to rise nearly 8% to $170 million as it narrows its net loss to $54 million. That growth could be driven by its new government contracts under its new CEO — Pangiam’s former CEO Kevin McAleenan — who was also previously the Acting Secretary of the Department of Homeland Security (DHS) under the first Trump administration. With a market cap of $731 million, BigBear.ai trades at about 4 times this year’s sales.

    C3.ai’s revenue only rose 6% in fiscal 2023, but grew 16% to $311 million in fiscal 2024 as the market’s demand for new AI services heated up. But its net loss widened from $269 million in fiscal 2023 to $280 million in fiscal 2024 as it ramped up its spending on developing new applications for the generative AI market.

    For fiscal 2025, analysts expect C3.ai’s revenue to rise 25% to $388 million as its net loss widens to $300 million. However, its future beyond fiscal 2025 is hard to predict without knowing the future of its joint venture with Baker Hughes. With a market cap of $2.56 billion, C3.ai looks a bit pricier than BigBear.ai at 7 times this year’s sales.
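    The valuation multiples cited above are simple price-to-sales (P/S) ratios: market capitalization divided by projected annual revenue. A quick sketch using the figures from the article (the function name and units are illustrative, not from the source):

    ```python
    # Price-to-sales (P/S) multiple: market cap divided by projected annual revenue.
    # All figures are in millions of dollars, taken from the article's analyst estimates.
    def price_to_sales(market_cap_m: float, revenue_m: float) -> float:
        """Return the P/S multiple given market cap and revenue in the same units."""
        return market_cap_m / revenue_m

    bbai = price_to_sales(731, 170)    # BigBear.ai: $731M cap, ~$170M projected 2025 revenue
    c3ai = price_to_sales(2560, 388)   # C3.ai: $2.56B cap, ~$388M projected fiscal 2025 revenue

    print(f"BigBear.ai P/S: {bbai:.1f}")  # ~4.3, i.e. "about 4 times this year's sales"
    print(f"C3.ai P/S: {c3ai:.1f}")       # ~6.6, i.e. roughly 7 times this year's sales
    ```

    The same arithmetic explains why C3.ai "looks a bit pricier": its market cap is about 3.5 times BigBear.ai's, but its projected revenue is only about 2.3 times larger.
    
    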

    The better buy: BigBear.ai

    I wouldn’t touch either of these speculative stocks right now. But if I had to choose one, I’d pick BigBear.ai because its customer concentration issues have largely passed and it could gain more government contracts under its new CEO. C3.ai looks uninvestable until it provides more clarity regarding the Baker Hughes deal, widens its moat, and stabilizes its losses.

  • New Google Leak Reveals Subscription Changes For Gemini AI


    At A Glance

    • Google is preparing new ways to pay for its Gemini Advanced AI services.
    • The new plans haven’t been officially announced, but they are mentioned in the latest code for the Google Photos app.
    • The proposed plans are currently called “AI Premium Plus” and “AI Premium Pro.”

    Google is working on new AI subscription plans that could offer alternative ways to purchase access to the company’s Gemini Advanced option, which enables Google’s most capable AI models and premium features.

    April 25 Update Below: This article was originally published on April 23

    There’s currently only one way to buy Gemini Advanced, and that’s to purchase a Google One AI Premium plan for $19.99 per month. However, this looks likely to change according to a recent Android Authority report that reveals two new secret subscription plans hidden within the code of the latest Google Photos app.

    Gemini Advanced AI Subscription Tiers — How Are They Changing?

    The potential subscriptions, currently named “AI Premium Plus” and “AI Premium Pro,” sit alongside the existing “Premium AI” option and Google’s other non-AI subscription tiers, including the recent “Lite” tier that was revealed by the same method last year.

    The report found no further information about the pricing or capabilities of these two new plans; indeed, even the names may change before release. However, we can speculate that both “AI Premium Plus” and “AI Premium Pro” will offer more than the current “Premium” plan and, therefore, will most likely cost more. Those hoping for a significantly cheaper way to buy Gemini Advanced will probably be out of luck.

    Google has already revealed, via X, plans to offer a discounted annual version of the current Google One AI Premium subscription. However, this is unlikely to align with either of the new tiers, as the company typically keeps the same name for both monthly and annual versions of each subscription. This promised annual subscription will most likely remain the only way to pay less for Gemini Advanced than you do right now.

    Google’s New Gemini Advanced AI Subscription Tiers — Why Do They Matter?

    Adding new subscription plans would give Google increased flexibility in how it charges for computing power and features.

    In addition to access to Gemini Advanced, the current Google One AI Premium package includes 2 TB of cloud storage, as well as Gemini in Gmail and Docs, NotebookLM Plus, and enhanced AI-powered features in Google Photos. The new tiers could add extra features to this list or even remove some of them.

    What Extra Features Could Google Include In Its New Gemini Advanced Premium And Pro Tiers?

    Google recently added support for its Veo 2 AI video generation tool to the Gemini app for Gemini Advanced users. However, users are limited in terms of the number of eight-second video clips they can create per month. Google’s new subscription tiers could provide higher limits, longer videos, or increased resolution, for example.

    The new tiers would also create a significant sales opportunity for Google: Premium smartphones, such as the Pixel 9 Pro or Samsung Galaxy S25 Series, come bundled with free Google One AI Premium subscriptions of up to 12 months. Google’s new higher-level tiers would enable the company to upsell AI plans to those users who would otherwise spend nothing on Google AI for up to a year, or even longer if the company continues to offer free subscriptions with future flagship smartphones.

    You can expect to find out more about Gemini Advanced at Google I/O next month.

    April 25 Update: Added possible upgrade scenarios and subscription advice

    Google’s New AI Subscription Plans — What Could They Offer?

    For now, we can only speculate as to what Google’s new Gemini AI Premium Plus and AI Premium Pro subscription tiers could offer over the current Gemini Advanced offering.

    Likely upgrades include:

    • Improved quality for AI-generated images and video — higher resolutions and longer durations for Veo 2 creations.
    • Fewer limits — reduced daily or monthly usage limits on more expensive plans.
    • Larger context windows — send bigger files and longer videos to Gemini for analysis and processing.
    • New features — Google could add entirely new features and more powerful models to more expensive subscriptions earlier.

    Google’s New AI Subscription Plans — A Simple Renaming?

    One possibility, although I feel it’s unlikely, is that Google’s AI Premium Plus and AI Premium Pro tiers won’t offer anything new at all.

    When Google first made Gemini Advanced available through its 2TB Google One AI Premium plan, customers who had already subscribed to the company’s most expensive “Premium 5TB,” “Premium 10TB,” and “Premium 20TB” subscription tiers were left out. None of these costly options included Gemini Advanced, and the only way to get it was to downgrade to the 2TB AI Premium plan.

    Google has now added Gemini Advanced to these higher-capacity plans, but their names are now somewhat anomalous, as none of them reference AI in the title.

    It would make sense, then, for Google to add some Gemini AI branding to these plans. The names “AI Premium Plus” and “AI Premium Pro” would certainly fit. However, I expect that we’ll see additional AI features included in the new subscription plans, rather than just increased storage capacity.

    Google One AI Premium — Don’t Buy An Annual Subscription

    Interestingly, Google One Premium 5 TB is currently the only option that allows customers to purchase a discounted annual subscription to Gemini Advanced. However, I recommend against buying any annual AI subscription for now, unless you can get it at a hefty discount.

    There’s simply too much competition in the AI space right now, with compelling offerings available from other services, such as ChatGPT, Perplexity, and Claude, to name just a few, all vying for your subscription fees. New features are being added all the time, with competing services often leapfrogging each other in terms of capability.

    It’s also worth noting that Google has a habit of eventually making premium AI features available to free users, potentially devaluing paid subscriptions. Notable premium features that are now available free include Gemini Live camera and screen sharing, Deep Research, and Gemini 2.5 Pro (experimental), although lower usage limits may apply.

    With this in mind, it’s sensible to stick to a monthly subscription to avoid becoming locked into a service that may no longer feel like the best option long before your subscription ends.



  • Will the Humanities Survive Artificial Intelligence?


    She’s an exceptionally bright student. I’d taught her before, and I knew her to be quick and diligent. So what, exactly, did she mean?

    She wasn’t sure, really. It had to do with the fact that the machine . . . wasn’t a person. And that meant she didn’t feel responsible for it in any way. And that, she said, felt . . . profoundly liberating.

    We sat in silence.

    She had said what she meant, and I was slowly seeing into her insight.

    Like more young women than young men, she paid close attention to those around her—their moods, needs, unspoken cues. I have a daughter who’s configured similarly, and that has helped me to see beyond my own reflexive tendency to privilege analytic abstraction over human situations.

    What this student had come to say was that she had descended more deeply into her own mind, into her own conceptual powers, while in dialogue with an intelligence toward which she felt no social obligation. No need to accommodate, and no pressure to please. It was a discovery—for her, for me—with widening implications for all of us.

    “And it was so patient,” she said. “I was asking it about the history of attention, but five minutes in I realized: I don’t think anyone has ever paid such pure attention to me and my thinking and my questions . . . ever. It’s made me rethink all my interactions with people.”

    She had gone to the machine to talk about the callow and exploitative dynamics of commodified attention capture—only to discover, in the system’s sweet solicitude, a kind of pure attention she had perhaps never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give true attention to another being lies at the absolute center of ethical life. But the sad thing is that we aren’t very good at this. The machines make it look easy.

    I’m not confused about what these systems are or about what they’re doing. Back in the nineteen-eighties, I studied neural networks in a cognitive-science course rooted in linguistics. The rise of artificial intelligence is a staple in the history of science and technology, and I’ve sat through my share of painstaking seminars on its origins and development. The A.I. tools my students and I now engage with are, at core, astoundingly successful applications of probabilistic prediction. They don’t know anything—not in any meaningful sense—and they certainly don’t feel. As they themselves continue to tell us, all they do is guess what letter, what word, what pattern is most likely to satisfy their algorithms in response to given prompts.

    That guess is the result of elaborate training, conducted on what amounts to the entirety of accessible human achievement. We’ve let these systems riffle through just about everything we’ve ever said or done, and they “get the hang” of us. They’ve learned our moves, and now they can make them. The results are stupefying, but it’s not magic. It’s math.

    I had an electrical-engineering student in a historiography class sometime back. We were discussing the history of data, and she asked a sharp question: What’s the difference between hermeneutics—the humanistic “science of interpretation”—and information theory, which might be seen as a scientific version of the same thing?

    I tried to articulate why humanists can’t just trade their long-winded interpretive traditions for the satisfying rigor of a mathematical treatment of information content. In order to explore the basic differences between scientific and humanistic orientations to inquiry, I asked her how she would define electrical engineering.

    She replied, “In the first circuits class, they tell us that electrical engineering is the study of how to get the rocks to do math.”

    Exactly. It takes a lot: the right rocks, carefully smelted and doped and etched, along with a flow of electrons coaxed from coal and wind and sun. But, if you know what you’re doing, you can get the rocks to do math. And now, it turns out, the math can do us.

    Let me be clear: when I say the math can “do” us, I mean only that—not that these systems are us. I’ll leave debates about artificial general intelligence to others, but they strike me as largely semantic. The current systems can be as human as any human I know, if that human is restricted to coming through a screen (and that’s often how we reach other humans these days, for better or worse).

    So, is this bad? Should it frighten us? There are aspects of this moment best left to DARPA strategists. For my part, I can only address what it means for those of us who are responsible for the humanistic tradition—those of us who serve as custodians of historical consciousness, as lifelong students of the best that has been thought, said, and made by people.

    Ours is the work of helping others hold those artifacts and insights in their hands, however briefly, and of considering what ought to be reserved from the ever-sucking vortex of oblivion—and why. It’s the calling known as education, which the literary theorist Gayatri Chakravorty Spivak once defined as the “non-coercive rearranging of desire.”

    And when it comes to that small, but by no means trivial, corner of the human ecosystem, there are things worth saying—urgently—about this staggering moment. Let me try to say a few of them, as clearly as I can. I may be wrong, but one has to try.

    When we gathered as a class in the wake of the A.I. assignment, hands flew up. One of the first came from Diego, a tall, curly-haired student—and, from what I’d made out in the course of the semester, socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.

    Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”

    She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”

    The room fell quiet. Her point hung in the air.

    And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.

    We have, in a real sense, reached a kind of “singularity”—but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.

    Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.

    You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.

    Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.

    But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?

    The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing—and knowing alone is utterly unequal to the task.

    For the past seventy years or so, the university humanities have largely lost sight of this core truth. Seduced by the rising prestige of the sciences—on campus and in the culture—humanists reshaped their work to mimic scientific inquiry. We have produced abundant knowledge about texts and artifacts, but in doing so mostly abandoned the deeper questions of being which give such work its meaning.

    Now everything must change. That kind of knowledge production has, in effect, been automated. As a result, the “scientistic” humanities—the production of fact-based knowledge about humanistic things—are rapidly being absorbed by the very sciences that created the A.I. systems now doing the work. We’ll go to them for the “answers.”

  • ‘One of the Best AI Researchers’ Denied Green Card After 12 Years in US


    A Canadian artificial intelligence (AI) researcher who has lived in the United States for 12 years and worked on ChatGPT was denied a green card, according to a series of posts on X, formerly Twitter, by employees at OpenAI, the company behind the chatbot.

    Newsweek reached out to OpenAI and the United States Citizenship and Immigration Services (USCIS) by email outside of normal business hours on Saturday morning for comment.

    Why It Matters

    President Donald Trump pledged to enact the largest crackdown on immigration in the country’s history, initiating mass deportations that remain mired in legal gridlock amid challenges from various states and legal authorities.

    However, Elon Musk and Vivek Ramaswamy, both initially tapped by Trump to lead the Department of Government Efficiency (DOGE) together, championed expanding programs like the H-1B visa, a temporary, nonimmigrant visa that allows U.S. employers to hire foreign workers in specialty occupations that typically require at least a bachelor’s degree, to increase the number of high-skill immigrants.

    What To Know

    Noam Brown, a researcher at OpenAI, on Friday morning wrote on X that he was “deeply concerned” about the immigration status of Kai Chen, a Canadian citizen who has lived and worked in the U.S. for 12 years and was forced to leave after her green card application was denied.

    “It’s deeply concerning that one of the best AI researchers I’ve worked with, [Kai Chen], was denied a U.S. green card today,” Brown wrote, adding, “We’re risking America’s AI leadership when we turn away talent like this.”

    Dylan Hunn, another OpenAI employee, echoed Brown’s sentiment just hours later, saying that Chen was “incredibly important to OpenAI” as she was “crucial for GPT-4.5.”

    “Our immigration system has gone *nuts* to kick her out,” Hunn wrote. “America needs her!”

    Brown later wrote on X that Chen planned to work remotely from an Airbnb in Vancouver and go “full monk mode” to keep up with her projects while the immigration issue was resolved. Chen tried to meet the moment with optimism, writing in response to Brown that she would indeed be in Vancouver “for an indeterminate amount of time” and would be “excited about meeting new people.”

    “Hopefully will return home sometime this year but if not shall make the best of it,” Chen wrote, later adding in a separate post that OpenAI has been “incredibly supportive during this kerfuffle.”

    Brown provided an update shortly before midnight that it seemed as though “there might have been paperwork issues with the initial green card filing” done two years earlier.

    “It’s a shame that this means [Chen] has to leave the U.S. for a while but there’s reason for optimism that this will be resolved,” Brown wrote on X.

    Chen clarified the situation further, saying she had filed for the green card three years ago before her time at OpenAI.

    “Really sucks to get denied after waiting for so long and unable to return home, but all in all feel very lucky to be where I am,” she wrote.

    A person displays the ChatGPT logo on a smartphone screen with the OpenAI logo in the background on December 29, 2024, in Chongqing, China.

    Cheng Xin/Getty Images

    What Protections Do Green Card Holders Have?

    The USCIS says a green card holder has the right to live permanently in the U.S., provided they don’t commit any actions that “would make you removable under immigration law.” Removable offenses include breaking certain laws and failing to file taxes.

    A green card holder is protected by all U.S. laws, including those at the state and local levels, and they can apply for jobs more freely than those who may be in the U.S. on work-based visas.

    Travel is also far easier with a green card than with other temporary visas, but holders must make sure they do not leave for more than six months at a time.

    “There’s a reason why somebody would want a green card versus to be here on a temporary visa because it is lawful permanent residence, it gives you the ability to live and work permanently in the United States. But that said, it is not citizenship,” Elissa Taub, a partner at immigration law firm Siskind Susser, previously told Newsweek.

    Green card holders must renew their cards every 10 years and can apply for citizenship after three years if they are married to a U.S. citizen or five if not.

    What People Are Saying

    Noam Brown, an OpenAI employee, wrote on X on Saturday: “I’ve been in AI since 2012, and I’ve seen enough visa horror stories since then to know that the brokenness of high-skilled immigration in America is persistent. It’s particularly painful to see that brokenness slow down my teammate for 2+ months when AI progress is week to week.”

    OpenAI CEO Sam Altman in 2023 wrote on X: “One of the easiest policy wins i can imagine for the US is to reform high-skill immigration. the fact that many of the most talented people in the world want to be here is a hard-won gift; embracing them is the key to keeping it that way. hard to get this back if we lose it.”

    Shaun Ralston, an independent contractor providing support for OpenAI’s API customers, wrote on X on Friday: “…@OpenAI filed 80+ new H-1Bs last year alone. How many more brilliant minds will the Trump administration push away to other countries? Hey, MAGA, fix the talent pipeline or stop talking about AI leadership.”

    Matt Teagarden, the CEO of the Kansas Livestock Association, earlier this month told Newsweek: “Businesses are making certain their employment document files are in order. They also are confirming their rights and responsibilities in this area as well as helping their employees understand their rights.”

    What Happens Next?

    Chen’s green card application will take time to resolve, but it appears the root issue has been identified, making it more likely that she’ll be able to return to the U.S. sooner rather than later.

  • Trump Is the Emperor of A.I. Slop


    On February 19th, Donald Trump logged onto Truth Social to congratulate himself on vanquishing congestion pricing in his home state. “CONGESTION PRICING IS DEAD,” he posted. “Manhattan, and all of New York, is SAVED. LONG LIVE THE KING!” The message was amplified by the White House’s official X account, which tweeted it with an A.I.-generated image of Trump, golden-haired and golden-crowned, blotting out the New York City skyline.

    The illustration, which was styled to look like the cover of Time magazine, displayed the President’s fondness for crude symbols of power and wealth. He is the lord of literalism, and this literalism defines much of what he’s done to amuse himself since retaking the White House. (See, for instance, his recent appearance at a mixed-martial-arts event in Miami with Elon Musk and other functionaries. They entered the stadium to Kid Rock’s “American Bad Ass.”) Trump has proposed a military parade with Humvees and helicopters on his birthday, and according to CNN he has been hard at work renovating the Oval Office for his second term, swapping out the wooden consoles for marble-topped decorative tables, hanging “gilded Rococo mirrors” on the doors, ensconcing golden cherubim in the pediments, and wrapping the television remote in shiny paper. (His “gold guy” had to be flown in from Florida.) He has installed a portrait of George Washington brandishing a sword across from an oil painting of a grinning Ronald Reagan, and both former Presidents may soon be able to look out at the former Rose Garden, which Trump plans to pave over. Nearby sits a bullion-like paperweight engraved with TRUMP, in all caps; at this rate of converting subtext into text, the President will soon use his TRUMP paperweight to bash in the head of a bald eagle.

    During Trump’s first term, the painter who seemed most tuned in to his aesthetic was Jon McNaughton, whom the art historian Jennifer A. Greenhill calls MAGA’s “court artist.” McNaughton’s depictions of the President—fantastical scenes rendered in a flat, hyperrealist style—regularly went viral on pre-Musk Twitter. Often, Trump is shown in the company of other POTUSes, who beam at him approvingly. He might be slinging a machine gun, playing football, cradling a flag, or composing a masterpiece upon his own easel. In “Crossing the Swamp,” from 2018, Trump, posed as George Washington, holds a lantern aloft as Nikki Haley, Ben Carson, and other first-term Cabinet members row over a brackish Delaware. There’s a kitschy, romantic, hero-worshipping nostalgia to the image, as if Norman Rockwell had undergone a lobotomy.

    In The Atlantic, in 2019, Greenhill compared McNaughton’s portraits to “painted memes” and wrote that they are “shaped for digital consumption.” But advances in A.I. have allowed supporters to flood social media with even more partisan and on-the-nose images for Trump’s second Presidency. These include migraine-inducingly representational scenes of Trump riding a lion and shredding on an electric guitar. Like the old memes, the new memes allow no room for interpretive freedom. Trump is strong, so he is a bodybuilder. He is our savior, so he wears a white robe.

    Not surprisingly, Trump has taken to machine-authored propaganda. During his reëlection campaign, his Truth Social account collaged a series of fake photographs of Taylor Swift and her fans implying that Swift backed him for President. “I accept!” he wrote. The A.I. scenery surrounding the Trump Administration reflects Trump’s ideal world, as when he reposted a clip, created via Arcana Labs, of a Gaza emptied of actual Gazans and glowing with gilded effigies of himself. The illustrations seem to have obviated the need for a court painter: now Trump has dozens if not hundreds of people to conjure flattering representations of him on social media. He can even, if he wishes, cut out the middlemen and call up the images himself. It makes sense that a man who yearns for a reality untroubled by other humans would be drawn to art that is untouched by anything human. As Musk breeds a “legion” of children who can populate Mars one day, Trump seems to be finding his way back to asexual reproduction, clearing the field of every ego but his own.

    If you squint, Trump has been imposing a bot-brained vision on America for years. At one of his inaugural balls in 2017, he displayed a cake that looked like a Seussian top hat, with nine tiers piled into a whimsical tower of pale blues and navies, the fondant set off by red stripes, silver stars, swagged banners, and a Presidential seal. The cake copied a design that Duff Goldman, a pastry chef and Food Network personality, had created for Barack Obama’s Inauguration in 2013. But there was one essential difference. Trump’s cake, which he cut into with a military sword, was mostly Styrofoam, with a three-inch wedge of edible crumb for the photo op.

    The cake was a kind of koan, a dizzyingly empty concoction, like a stage prop after the show has left town. Its substance didn’t matter—try to eat it and you’d get a mouth full of Styrofoam—but on the other hand its surface didn’t matter, either. It was just a ripoff of Obama’s cake. A sham dessert is a perfect symbol for Trump’s Presidency, and this one underscored that the hollowness of his aesthetic is twinned to the nihilism of his politics. Because there is no content, everything is style, and the materials of that style are whatever happens to be lying around (even if those materials once belonged to someone you hate).

    In this way, Trump and A.I.-generated imagery are well matched. Like a large language model, Trump takes in preëxisting work and uses it to create his own meaningless content. His taste often seems inconsistent: he-man rock, fast food, trucks, golf, mirrors, Andrew Lloyd Webber, golden bathroom fixtures, chandeliers, marble, Pepe the Frog, rocket boosters, military parades—a slurry of mass-cultural totems, wealth and status markers, and gender tells, much of it sourced from Trump’s eighties heyday and borne along by a maximalist, self-regarding sensibility that explains the President’s political actions better than ideology ever could. The common denominator, if there is one, is obviousness. Each thing serves as the cartoonishly exaggerated marker of an identity: berserker populist patriot, effete rich man, savvy dealmaker.

    Trump, seeking to project his power, can afford to be indiscriminate in his choice of signifiers; we already know what they refer back to. His careless personal style—the too-long ties and ill-fitting suits, the flyaway fake hair—reads as an expression of dominance, a guy passing around a collection plate for admiration that he doesn’t have the time, inclination, or ability to earn. Why should the emperor trouble himself to put on clothes?

    And yet there is, in Trump’s brain, an ideal Trump, a dream Trump, handsome, rich, and powerful. This Trump is the essence of luxury, and the buildings bearing his name are the most beautiful things you’ve ever seen. The responsibility for closing the gap between who Trump is and who he longs to be falls to us. We have to transform the casual shoddiness of his self-presentation into a splendid picture; he offers the prompt of a silly hat and we generate a fantasy of his greatness. Trump, after he announced his takeover of the Kennedy Center, tantalized his social-media followers with an A.I. image of himself conducting a symphony before a packed house. At his first board meeting five weeks later, he posed on a balcony in the center’s concert hall, arms outstretched, echoing the meme—digital slop imported into real life.

    Signalling their allegiance to Trump’s aesthetic, men in the G.O.P. have begun to wear oversized red ties, and, as Mother Jones reports, loyalists are undergoing a distinctive kind of plastic surgery to attain “Mar-a-Lago face.” Conservative women are plumping their lips with injectables and chiselling their cheekbones; what happened to Matt Gaetz is anyone’s guess. Meanwhile, R.F.K., Jr., and Joe Rogan are mincing ever closer to the uncanny valley, supplementing Trump’s brand of “reactionary camp” with a roided-up brawn.

    In this sense, Trump does not just produce slop. He and his cronies force other people to generate slop, too. On February 22nd, Musk demanded that federal employees write e-mails explaining five things that they’d accomplished in the previous week. What could the results be but slop, meaningless to real people who understand how agencies function? According to the Washington Post, many government workers have been submitting the same boilerplate reply, over and over, furnishing a preview of the White House’s plan to replace the federal labor pool with digital assistants. It’s as if DOGE is forcing bureaucrats to conform to the cast of their leaders’ contempt, to become as faceless and pointless as Trump and Musk believe them to be.

    On March 17th, White House social accounts posted a video of a man in shackles being prepared for deportation as “Closing Time,” by Semisonic, plays in the background. Captioning the screen are the lyrics “You don’t have to go home but you can’t stay here.” “Closing Time” is about endings and beginnings, about the early morning hour when bars are closing and revellers have to disperse, maybe in pairs or maybe alone. But the Administration’s clip stripped the words of their wistful energy and doubleness of meaning and prefigured its intent to impose a single, cruel interpretation on a human being.

    For the most part, the deportation videos now circulating on social media are not A.I.-generated. They star real people having their heads shaved or getting chained up and loaded onto planes. But digital technology has been used to obscure and usurp the truth about their lives. On Monday, the President shared a photograph of a hand tattooed with what he asserted to be the insignia of a violent gang. Trump claimed that the hand belonged to Kilmar Armando Abrego Garcia, a twenty-nine-year-old who has no documented affiliation with MS-13 and who was wrongfully deported to El Salvador last month in violation of a court order. The image appears to have been doctored, recruited into Trump’s own semiotic sleight of hand—reducing a person to a body part and then stamping that body part with a sign of evil. Trump sees only one thing when he beholds an immigrant: a criminal. His post was a bid to print his vision over everyone else’s.

    That the tools of digital-reality manipulation are proving useful to this President suggests, of course, that he intends to shape the way Americans see the world. But it also affirms a basic truth about how Trump views human beings: as fundamentally unreal. People exist to gratify his desires. When he’s done with them, they can just be turned off. Long before A.I. became a determining factor in the rest of our lives, Trump was an A.I. emperor, waiting for his lonely, looping, ego-driven fantasia to synch up with reality. The door to his bunker opens. He lifts the sword and cuts the cake. ♦

  • Chinese humanoid robot with eagle-eye vision and powerful AI

    Chinese humanoid robot with eagle-eye vision and powerful AI

    XPENG’s humanoid robot, Iron, is not your typical factory machine. Standing 5 feet, 8 inches tall and weighing 154 pounds, Iron combines advanced artificial intelligence with human-like movement and exceptional vision. 

    Already hard at work assembling electric vehicles in XPENG’s factories, this robot is designed to change how we think about robots in everyday life.

    Iron the humanoid robot (XPENG)

    From factory floors to everyday tasks

    Iron’s design includes 60 joints and 200 degrees of freedom, allowing it to move smoothly and naturally. 

    Unlike traditional robots that often move with jerky or stiff motions, Iron walks steadily and can manipulate objects with precision thanks to its human-like hands. XPENG has developed its mobility system using reinforcement learning and large artificial intelligence models, enabling Iron to adapt to a variety of complex tasks. 

    While it currently helps build cars, XPENG envisions Iron performing administrative work, customer service and even household chores in the future.

    A brain like no other

    At the heart of Iron is XPENG’s proprietary Turing AI chip, a powerful processor capable of handling 3,000 trillion operations per second. This chip processes AI models with 30 billion parameters, allowing Iron to think, adapt and respond with human-like intelligence. 

    Iron’s vision system, inspired by XPENG’s self-driving car technology, offers a remarkable 720-degree field of view, giving the robot eagle-like awareness of its surroundings. Its speech interaction system is also adapted from XPENG’s intelligent vehicle cockpits, enabling natural and logical conversations.

    More than just a robot

    XPENG is not limiting Iron to factory work. The company sees Iron as a personal assistant that can support people in offices, retail environments and homes. Although the current version is priced around $150,000 and targeted mainly at businesses, XPENG plans to develop more accessible versions for everyday consumers. Iron’s advanced dexterity, powered by custom-designed robotic hands with 15 degrees of freedom each, allows it to handle delicate tasks that require fine motor skills.

    Part of a bigger vision

    Iron is a key piece of XPENG’s broader AI Tech Tree strategy, which aims to create an ecosystem of smart electric vehicles, humanoid robots and even flying vehicles. This vision is also reflected in the company’s new 2025 XPENG X9 electric SUV, which features hundreds of technical upgrades, including ultra-fast charging and AI-powered driving systems that mimic human decision-making. Together, these innovations showcase XPENG’s ambition to blend robotics and automotive technology into a seamless future.

    Kurt’s key takeaways

    By leveraging AI technology originally developed for its electric vehicles, XPENG is creating a robot that bridges the gap between automotive innovation and humanoid robotics. With a significant investment and a clear roadmap, Iron has the potential to become much more than a factory assistant.

    It could soon become a helpful presence in offices and homes, changing how we interact with machines in everyday life.

    Iron’s creators promise a future of seamless human-robot collaboration. But as it masters everything from car assembly to household chores, are we sleepwalking into a world where humans become obsolete, or is this the key to unlocking our greatest potential? Let us know by writing us at Cyberguy.com/Contact


    Copyright 2025 CyberGuy.com.  All rights reserved.  

  • AI won’t replace doctors — it will upgrade them

    AI won’t replace doctors — it will upgrade them

    The future of medicine will belong to the physicians who are empowered, not sidelined, by technology. And to the patients who benefit from care that is faster, smarter and deeply human.
  • DeepMind UK staff seek to unionise and challenge defence deals and Israel links

    DeepMind UK staff seek to unionise and challenge defence deals and Israel links

    Google DeepMind staff in the UK are seeking to unionise in an effort to challenge the company’s decision to sell its artificial intelligence technologies to defence groups and its ties to the Israeli government.

    Around 300 London-based employees of the tech giant’s AI arm, which is led by British Nobel laureate Sir Demis Hassabis, have sought to join the Communication Workers Union in recent weeks, according to three people briefed on the move.

    The effort creates new strain on DeepMind, which is being pushed by its corporate parent to find commercial uses for its powerful AI, with Hassabis recently suggesting that companies in democratic countries should work together to support national security.

    The move to unionise follows growing discontent at the company after Google dropped a pledge in February not to develop AI technologies that “cause or are likely to cause overall harm”, including weapons and surveillance.

    Three people involved with the unionisation drive said media reports that Google is selling its cloud services and AI technology to the Israeli Ministry of Defence have also caused disquiet. The Israeli government has a $1.2bn cloud computing agreement with Google and Amazon, named Project Nimbus.

    Demis Hassabis, who leads the AI unit, suggests that companies in democratic countries should work together to support national security © TT News Agency/AFP/Getty Images

    Further tension was caused by media reports that the Israel Defense Forces have used AI systems to generate targets for assassinations and attacks in the Gaza strip, although it is unclear if the IDF is using commercially purchased software for those purposes, or building its own. The IDF did not reply to a request for comment.

    “We’re putting two and two together and think the technology we’re developing is being used in the conflict [in Gaza],” said one engineer involved in the unionisation effort. “This is basically cutting-edge AI that we’re providing to an ongoing conflict. People don’t want their work used like this.”

    “People feel duped,” the person added. 

    According to correspondence seen by the FT, five DeepMind staff have quit over the past two months citing the Israel cloud computing deal and Google’s reversal of existing commitments around the use of its AI. In the US, Google fired some staff for staging sit-in protests over Project Nimbus.

    In May 2024, DeepMind staff sent a letter to the company’s leadership calling on it to drop its military contracts. They have since held some meetings with management, but their requests have been denied.

    The effort to organise will now need to be recognised by the company, through a vote among DeepMind employees in the UK. The AI unit has around 2,000 staff in the UK.

    A spokesperson for Google said: “Our approach is and has always been to develop and deploy AI responsibly. We encourage constructive and open dialogue with all of our employees. In the UK and around the world, Googlers have long been part of employee representative groups, works councils and unions.”

    The company said it still complies with its AI principles for responsible development, but that the landscape has changed significantly since its 2018 pledge against AI weapons and surveillance.

    Unionisation remains relatively rare across the tech sector, which has long resisted attempts to organise its workforce. But there has been growing activity in recent years, including at Amazon and Apple. Google employees founded the Alphabet Workers Union in the US in 2021. 

    One person said that if the union gains recognition from DeepMind, it will seek to meet management to request that the company changes course on defence deals and, if unsuccessful, to consider strike action. They said that colleagues in the US were also in discussions about unionisation. 

    “What I hope and what people who are active are hoping is that we stay away from any military contracts,” they added. 

    Google has faced similar protests over its military ties before. In 2018, several staffers quit and thousands of employees signed a petition in protest against Project Maven, a contract for the US military that used AI technology to improve drone strikes. Following widespread staff discontent, Google did not renew its contract with the Pentagon and pledged not to work on AI technologies for weapons or surveillance. 

    One senior figure in the CWU union who is not a DeepMind employee said that when the company was first founded it “attracted lots of smart people that wanted to work on things for genuine good”, but that Google had started “chasing military money”. Google bought the company in 2014.

    They noted that staff at DeepMind are often on high salaries. “They’re not joining the trade union for pay negotiations. They’ve joined because they’ve seen the benefits of collectivising to hold Google to account for their stated ethics,” they said.

    The engineer at DeepMind who has joined the CWU and is involved in organising discontented staff said that “joining a union is probably the craziest thing a lot of DeepMinders would have ever thought they’d do” but that “people’s level of discomfort has slowly risen over the past few years”.

    The company is “sacrificing morals for greed”, the employee added.  

  • Ranked: The Top 10 Most Wanted Skills for AI Jobs

    Ranked: The Top 10 Most Wanted Skills for AI Jobs

    More than ever, AI jobs are becoming increasingly sought after. We show the top skills listed in job postings in a rapidly expanding field.
  • Woman says ChatGPT saved her life. More in the Fox News AI Newsletter.

    Woman says ChatGPT saved her life. More in the Fox News AI Newsletter.

    Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

    IN TODAY’S NEWSLETTER:

    – Woman says ChatGPT saved her life by helping detect cancer, which doctors missed
    – Tesla launches test run for FSD Supervised, an AI-powered ride hailing service
    – China’s AI DeepSeek faces House probe over US data harvesting, CCP propaganda

    Lauren Bannon says ChatGPT helped diagnose her with cancer. (Kennedy News and Media)

    ‘LUCKY TO BE ALIVE’: A mother of two credits ChatGPT for saving her life, claiming the artificial intelligence chatbot flagged the condition leading to her cancer when doctors missed it.

    AUTONOMY TEST RUN: Robotaxis are closer to becoming a reality, after Tesla launched a full self-driving (FSD) supervised ride-hailing service in Austin, Texas, and the San Francisco Bay Area “for an early set of employees.”

    Robotaxi. (Kurt “CyberGuy” Knutsson)

    HARVESTING YOUR DATA?: A powerful House Committee is demanding information from DeepSeek on what U.S. data it used to train the AI model as members accuse the company of being in the pocket of the Chinese government.

    DeepSeek (Reuters/Dado Ruvic/Illustration)

    EDUCATION REFORMS: President Donald Trump signed multiple Executive Orders relating to education Wednesday afternoon, with several tied to the theme of restoring meritocracy to the education system.

    WORTH THE RISKS?: If you haven’t heard the buzz about Manus yet, it’s the new AI model unveiled by a Singapore-based company called Butterfly Effect. This isn’t just another chatbot. It’s one of the first truly autonomous AI agents, able to do its own research, make decisions and even carry out plans, all with barely any human oversight.
