Blog

  • OpenAI Hires Instacart C.E.O. to Run Business and Operations


    OpenAI said late Wednesday that it hired Fidji Simo, the chief executive of Instacart, to take on a new role running the artificial intelligence company’s business and operations teams.

    In a blog post, Sam Altman, OpenAI’s chief executive, said he would remain in charge as the head of the company. But Ms. Simo’s appointment as chief executive of applications would free him up to focus on other parts of the organization, including research, computing and safety systems, he said.

    “We have become a global product company serving hundreds of millions of users worldwide and growing very quickly,” Mr. Altman said in the blog post. He added that OpenAI had also become an “infrastructure company” that delivered artificial intelligence tools at scale.

    “Each of these is a massive effort that could be its own large company,” he wrote. “Bringing on exceptional leaders is a key part of doing that well.”

    Ms. Simo, a member of OpenAI’s board, will oversee sales, marketing and finance. She will report to Mr. Altman.

    OpenAI, which ignited a frenzy over A.I. with its ChatGPT chatbot, has grown rapidly and juggled multiple initiatives — sometimes unsuccessfully. The San Francisco company has steadily released new A.I. models and products, including systems that can “reason.” In March, it completed a $40 billion fund-raising deal, led by the Japanese conglomerate SoftBank, that valued it at $300 billion and made it one of the most valuable private companies in the world.

    But OpenAI, which was set up as a nonprofit, has struggled to adopt a new corporate structure. As the commercial appeal of artificial intelligence has grown, the company tried to remove itself from the nonprofit’s control. That attracted scrutiny from critics such as Elon Musk, an OpenAI founder who sued the company and accused it of putting profit ahead of A.I. safety. The attorneys general of California and Delaware also scrutinized the restructuring.

    On Monday, OpenAI backtracked on the plan and said it would allow the nonprofit to retain its grip on the company.

    (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

    In a statement late Wednesday, Ms. Simo said that OpenAI “has the potential of accelerating human potential at a pace never seen before and I am deeply committed to shaping these applications toward the public good.”

    She added in a memo to Instacart employees that she had a “passion for A.I. and in particular for the potential it has to cure diseases” and that “the ability to lead such an important part of our collective future was a hard opportunity to pass up.”

    Ms. Simo will remain at Instacart for the next few months as the company names a successor, a role she said would be filled by a member of Instacart’s management team. She will also remain on the company’s board as its chairperson.

    “Today’s announcement is not a reflection of any changes in our business or operations,” Instacart said in a statement.

  • Most women have yet to form an opinion about breast imaging AI


    In a nationwide survey of 3,500 patients, those with higher electronic health literacy or educational attainment, as well as younger respondents, were “significantly” likelier to see AI as beneficial.
  • Agatha Christie, Who Died in 1976, Will See You in Class


    Agatha Christie is dead. But Agatha Christie also just started teaching a writing class.

    “I must confess,” she says, in a cut-glass English accent, “that this is all rather new to me.”

    The literary legend, who died in 1976, has been tapped to teach a course with BBC Maestro, an online lecture series similar to MasterClass. Christie, alongside dozens of other experts, is there for any aspiring writer with 79 pounds (about $105) to spare.

    She has been reanimated with the help of a team of academic researchers — who wrote a script using her writings and archival interviews — and a “digital prosthetic” made with artificial intelligence and then fitted over a real actor’s performance.

    “We are not trying to pretend, in any way, that this is Agatha somehow brought to life,” Michael Levine, the chief executive of BBC Maestro, said in a phone interview. “This is just a representation of Agatha to teach her own craft.”

    The course’s release coincides with a heated debate about the ethics of artificial intelligence. In Britain, a potential change to copyright law has frightened artists who fear it will allow their work to be used to train A.I. models without their consent. In this case, however, there is no copyright issue: Christie’s family, who manage her estate, are fully on board.

    “We just had the red line that it had to be her words,” said James Prichard, her great-grandson and the chief executive of Agatha Christie Ltd. “And the image and the voice had to be like her.”

    Christie is hardly the only person to have been resurrected with A.I.: Using the technology to talk to the dead has become something of a cottage industry for wealthy nostalgics.

    She’s not the first dead artist to be turned into an avatar, either.

    In 2021, A.I. was used to generate Anthony Bourdain’s voice reading out his own words. The actor Peter Cushing has been resurrected to act in movies. Last year a Polish radio station used A.I. to “interview” a dead luminary, leading many to worry that it had put words in her mouth.

    For Christie, A.I. was used only to create her likeness, not to build the course or write the script.

    That’s part of why Mr. Levine rejects the idea that this is an Agatha Christie deepfake. “The implication of the word ‘fake’ suggests that there is something about this which is sort of passing off,” he said. “And I don’t think that’s the case.”

    Mr. Prichard said his family would never have agreed to a project that invented Christie’s views. And they are proud of the course.

    “We’re not speaking for her,” he said. “We are collecting what she said and putting it out in a digestible and shareable format.”

    A team of academics combined or paraphrased statements from Christie’s archive to distill her advice about the writing process. They took care to preserve what they believed to be her intended meaning, with the aim of helping more of her fans interact with her work, and with fiction writing in general.

    “We didn’t make anything up in terms of things like her suggestions and what she did,” said Mark Aldridge, who led the academic team.

    That, for Carissa Véliz, a professor of philosophy at the Institute for Ethics in A.I. at Oxford University, is still “extremely problematic.”

    Even if the author’s family consented, Christie herself did not, and cannot, agree to the course. That is complex with any sort of historical re-enactment or animation, but Dr. Véliz noted that writers spend hours finding the right word, or the right rhythm.

    “Agatha Christie never said those words,” Dr. Véliz said in a phone interview. “She’s not sitting there. And therefore, yes it’s a deepfake.”

    “When you see someone who looks like Agatha Christie and talks like Agatha Christie, I think it’s easy for the boundaries to be blurred,” she said, adding, “What do we gain? Other than it being gimmicky?”

    But Felix M. Simon, a research fellow in A.I. and News at the Reuters Institute at Oxford University, noted that this Christie was meant to entertain and also educate — which the author did when she was alive.

    And the representation draws from something “close to her actual writings and her actual words — and therefore, by extension, to some degree, her thinking,” Dr. Simon said.

    “There’s also very little risk of this harming, posthumously, her dignity or her reputation,” he argued. “I think that makes these cases so complicated because you can’t apply a hard and fast rule for every single one of them and say: ‘This is generally good or generally bad.’”

    Perhaps this sort of fact-fiction-futurism mélange is just the way things are going in an age when A.I. can be used to finish sentences, replace jobs and, perhaps, even try to resurrect the dead.

    Either way, the creators think Christie — a brave and creative adventurer — would have liked it. “Can we definitively know that this is something she would be approving of?” said Mr. Levine, of BBC Maestro. “We hope. But we don’t definitively know, because she’s not here.”

  • Steelers minority owner compares Aaron Rodgers’ situation to AI, says it’s ‘more complex than artificial intelligence’


    Steelers minority owner Thomas Tull joked that Aaron Rodgers’ situation is more complex than artificial intelligence, sparking buzz among NFL fans.

    The Pittsburgh Steelers are waiting in anticipation, as the possibility of landing veteran quarterback Aaron Rodgers remains the biggest question mark in their offseason plans. With each passing day, signals from within the organization suggest that the team is confident Rodgers will eventually call Pittsburgh home — but the clock keeps ticking, and the ink has yet to meet the paper.

    Aaron Rodgers’ bizarre NFL journey mocked as more confusing than AI by Steelers part-owner Thomas Tull

    Back in March, principal owner Art Rooney II made headlines when he confidently stated, “he does want to come here” — a sentiment that seemed to all but confirm Aaron Rodgers’ arrival. Yet, weeks later, Steelers fans are still left wondering if this much-discussed union will finally materialize. Adding to the intrigue, Thomas Tull, one of the team’s minority owners, offered his own take during an appearance on CNBC. When asked about the situation, Tull quipped, “I’m here to talk about AI, and that’s a more complex issue than artificial intelligence.”

    The remark drew laughter, but also underlined a serious truth. Rodgers’ future is as complicated and unpredictable as the quarterback himself — a player known for his introspection, cryptic messaging and methodical decision-making. As Tull’s comment subtly implied, even seasoned executives within the Steelers’ hierarchy find themselves perplexed by the enigmatic signal-caller.

    Rodgers, for his part, hasn’t ruled out Pittsburgh. During an appearance on The Pat McAfee Show, he acknowledged ongoing conversations with the Steelers and expressed admiration for head coach Mike Tomlin. “I’ve been upfront with them,” Rodgers said. “I’ve said, listen, if you need to move on, by all means. … I am trying to be open to everything and not specifically attached to anything … I’m not holding anybody hostage.”

    The statement reflects both transparency and indecision. It’s clear that Rodgers appreciates the franchise’s legacy and leadership, but his own personal challenges have kept him from making a definitive move. Given his thoughtful nature, it wouldn’t be surprising if he waits until the NFL’s full 2025 schedule is released — expected next Wednesday — before making any commitments. As sources suggest, Rodgers may be watching to see how the Steelers fare in terms of prime-time games and competitive positioning, especially compared to a scenario where Mason Rudolph remains the starter.

    Despite the uncertainty, the Steelers’ behavior paints a picture of optimism. Rooney’s repeated hints and the team’s patience in negotiations all point to one thing: they believe Rodgers is coming. And they’re willing to wait — even if it’s uncomfortable — for the payoff.

    In the grand chess game of NFL quarterback movement, Aaron Rodgers remains one of the few true kings left on the board. Whether Pittsburgh becomes his final destination or just another conversation in a long offseason saga, one thing is certain: until his signature is on a contract, the story is far from over.

  • Heartland Gen Zers Feel Unprepared to Use AI at Work


    WASHINGTON, D.C. — As artificial intelligence continues to reshape the day-to-day workplace experience, about one-third of Gen Z adult workers living in America’s Heartland feel at least somewhat prepared to integrate artificial intelligence into their current jobs. Meanwhile, four in 10 Gen Z 5th- to 12th-grade students in the Heartland feel prepared to use AI in their future jobs.

    Fewer than one in 10 Heartland Gen Z employees (9%) say they feel “extremely” prepared to use artificial intelligence in their current jobs, while 25% say they are “somewhat” prepared.


    When asked about their ability to use AI in their future roles, Gen Z adults who are no longer in secondary school are only slightly more optimistic: 11% feel extremely prepared, while 32% feel somewhat prepared. Meanwhile, just 3% of Gen Z middle and high school students feel extremely prepared to use AI in their future jobs, with 37% feeling somewhat prepared.

    These findings are from a new survey conducted by the Walton Family Foundation and Gallup spanning 20 states in the Midwest and noncoastal South of the United States in partnership with Heartland Forward, a nonprofit organization committed to studying economic and wellbeing trends in the middle of the country.

    The online survey — the latest in the Voices of Gen Z study — was conducted March 6-13, 2025, using the Gallup Panel. The results are based on responses from 1,474 13- to 28-year-old Gen Z children and adults living in the 20 Heartland states.

    Industry and Workplace Policies Are Linked to Employee AI Preparedness

    Gen Z employees’ confidence in their ability to use artificial intelligence in their work is closely related to the type of industry they are employed in. More than six in 10 (61%) Gen Zers who work in a science, technology, engineering or math (STEM) role feel at least somewhat prepared to use AI in their jobs. Meanwhile, workers in education (43%), other white-collar industries (32%), blue-collar and service jobs (30%), and healthcare (22%) are 18 to 39 percentage points less likely to feel prepared to use AI at work.


    Notably, nearly half of healthcare (48%) and blue-collar and service workers (47%) say artificial intelligence does not exist for their jobs.

    For employers looking to increase their workers’ comfort with artificial intelligence, their AI use policies may be an important factor. Nearly six in 10 workers (59%) whose employers permit AI use feel prepared to use AI at work, compared with about one in four workers (26%) whose employers do not permit its use or do not have clear AI policies.

    However, only 36% of Gen Z workers say their employer allows them to use artificial intelligence for their work, while 10% say it is not permitted, 21% are unsure about whether their workplace allows its use, and 33% do not have jobs that can use AI.


    Gen Z workers in some fields are more likely than others to say their employer allows them to use AI for their work. About six in 10 STEM (61%) and education workers (59%) say their employer permits artificial intelligence use, far higher than the 10% of healthcare workers and 17% of blue-collar and service workers who say the same. Nearly half of white-collar workers not employed in STEM or education (45%) are allowed to use AI at work.

    Gen Z Students’ Schools Are Not Preparing Them to Use AI After Graduation

    Gen Z middle and high school students are less likely than Gen Z employees to say they are allowed to use AI in their schoolwork. A narrow majority of students (53%) say their school has not implemented a clear AI use policy, while 26% say it is permitted in at least some class-related activities and 20% report their school has banned AI for use in schoolwork.

    Students living in counties with a median household income that is less than $60,000 per year, as well as those in nonmetro (rural) areas, are least likely to say their school allows them to use artificial intelligence and are especially likely to say their school has not established rules regarding AI use.


    The effects of schools’ limited engagement with artificial intelligence are reflected in students’ postgraduation employment outlook. Just over half of Gen Z middle and high schoolers in schools that permit AI use (56%) feel at least somewhat prepared to use this technology in their future jobs, compared with 34% of students in schools that ban AI or do not have a policy. As students in rural and lower-income areas are less likely than their peers to say their school permits artificial intelligence use, this may leave these students uniquely unprepared to enter the workforce with needed artificial intelligence knowledge and skills.

    Implications

    As artificial intelligence continues to change the way Americans work, Gen Zers will increasingly need to know how to leverage this technology in their current and future jobs. However, two-thirds of Gen Z workers do not feel prepared to use AI at work or do not believe AI could assist them in their roles, while 60% of Gen Z students do not feel prepared to use it after graduation.

    The extent to which schools and workplaces have clear policies permitting artificial intelligence use influences students’ and employees’ confidence in their AI skills; however, most Gen Z students say their school does not allow AI use or that it does not have a clear AI policy, while more than four in 10 Gen Z workers say AI use is disallowed or they don’t know whether it’s allowed in their workplace. Workplaces that will rely on their employees’ ability to leverage artificial intelligence, as well as schools seeking to prepare students for postgraduation success, should consider whether their rules regarding artificial intelligence use are clear and facilitate students’ and workers’ development of those skills.


  • Singapore’s Vision for AI Safety Bridges the US-China Divide


    The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

    “Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

    The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

    The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

    The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

    Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan and Korea also participated.

    “In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

    The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

    The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.

  • Steelers’ courtship of Aaron Rodgers is more ‘complex’ than artificial intelligence, part-owner says


     


    The calendar has turned to May, and Aaron Rodgers is still a free agent.

    Rodgers has been linked to the Steelers for a couple of months, but Thomas Tull, a part-owner of the Steelers, said the courtship of Rodgers is more “complex” than artificial intelligence. 

    “I’m here to talk about AI, and that’s a more complex issue than artificial intelligence,” Tull said when asked about Rodgers in an interview on CNBC’s “Power Lunch.”


    The team has three quarterbacks on its roster — Mason Rudolph, Skylar Thompson and sixth-round draft pick Will Howard. 

    Since Russell Wilson departed Pittsburgh and signed with the New York Giants, the Steelers have been mentioned as a possible landing spot for the four-time MVP, because most teams seem to have their starting quarterbacks for the 2025 season in place.

    And Rodgers has not closed the door on retirement. 

    “I’m open to anything and attached to nothing. Retirement could still be a possibility, but right now my focus is and has been and will continue to be on my personal life. … There’s still conversations that are being had,” Rodgers said on “The Pat McAfee Show” in April. 



    “I’m in a different phase of my life. I’m 41 years old, I’m in a serious relationship. I have off-the-field stuff that requires my attention. I have personal commitments I’ve made not knowing what my future was going to look like after last year that are important to me. And I have a couple of people in my inner, inner circle who are really battling some difficult stuff. So, I have a lot of things that are taking my attention — and have, beginning really in January — away from football.”

    It remains to be seen whether Rodgers decides to play football and sign with the Steelers or if he will decide to call it a career after 20 seasons. 



    With the New York Jets last season, Rodgers threw for 3,987 yards, 28 touchdowns and 11 interceptions in 17 games. 

    Regardless of who ends up starting for the Steelers in Week 1, they will not have George Pickens as a receiver. The team traded Pickens to the Dallas Cowboys on Wednesday. 


  • President Trump signs executive order to implement AI in K-12 schools


    WEST MONROE, La. (KNOE) – President Trump signed an executive order to bring artificial intelligence to K-12 classrooms.

    On April 23, the White House published a statement explaining how the executive order will promote AI literacy and proficiency among Americans by adding AI to education, providing comprehensive training for educators and more.

    Richard Raue, a cybersecurity professional and the chief executive of HiTech, says the use of AI can be good or bad. He says it is a tool that can advance education if used properly.

    He says, “All phones are pretty much AI capable, so simply asking Google a question or Microsoft or anyone, all those tools are backed with AI and all the questions and answers. Think of all the children that have access to phones, they can do whatever they want to with it to a certain extent but what we’ve done over the last 30 years, just like we will do with AI, we’ve made errors and learn from them, and put guardrails in place, just as we will with AI to ensure our technology is safe and proficient.”

    Charles Longino, band director at Riser Middle School, says he uses AI in the classroom to critique students’ performance skills; the program also offers suggestions for music that targets certain performance areas. He says it could be helpful for other subjects and lessen the pressure on teachers.

    He says, “It could ask them specific questions about missing steps and the student would have to resubmit that and then it could teach them, saying they missed a step and help them by showing them how to rework the problem properly.”

    Raue says that with AI likely to be implemented in schools, the next focus should be evaluating funding for its use and creating safety barriers for students.

  • Artificial intelligence programs powering into education for better or worse


    BISMARCK, N.D. (KFYR) – From Snapchat AI to ChatGPT and AI-powered translation apps, artificial intelligence programs that were once viewed with suspicion are now being embraced by schools, but with a healthy dose of skepticism.

    These programs can offer advantages for teachers and students, but they can also cause problems.

    Artificial intelligence programs are here to stay.

    Educators like Legacy High School’s Haleigh Harter believe it’s important to adapt and evolve.

    She said AI has created a more level playing field.

    “There’s a difference between equality and fairness, and because I work with students who are in reading strategies and may need a little extra help in reading comprehension, there are ways to use ChatGPT to make texts more accessible to them,” said Harter.

    Harter also said AI programs like ChatGPT can format her lesson plans and act as a study partner for students.

    For ESL teacher Christina Kitzman, AI and translation apps have only benefited her and her students.

    “My students practice speaking. And so, they talk into a program, and it gives them feedback on how they’re doing with their English speaking, so that’s really cool. We also have conversations with AI as well, so they can practice real-world situations,” said Kitzman.

    These programs help students like Modou make strides in and out of the classroom.

    “It helped me out with spelling, reading, everything, really, everything the teachers are teaching me. It helped me out at home as well,” said Modou.

    But these intuitive programs also have pitfalls that can limit learning progress, like writing an entire paper from a single prompt or citing sources that either don’t exist or are attributed to authors who never wrote them.

    “One of the things we do talk about is how ChatGPT hallucinates—that’s kind of the term that they use for it. What I tell my students is ChatGPT is lazy too,” said Harter.

    Simply put, when it comes to teaching and learning, AI programs are here to guide students and educators, not to help them take the easy way out.

    Teachers have programs to monitor their students’ AI use and to help protect their students’ personal information.