Category: Uncategorized

  • Voters to decide if the Texas home of Elon Musk’s SpaceX should become an official city: Starbase

    McALLEN, Texas (AP) — Voters are set to decide Saturday if the South Texas home of Elon Musk’s SpaceX rocket company should become an official city known as Starbase, fulfilling the billionaire’s dream of a galactic dateline for a program he hopes will someday blast astronauts to Mars.

    Approval of the new city is all but certain. Most of the 283 people eligible to vote are employees of SpaceX or connected to the company, living on the land at the facility and launch site.

    At the close of early voting on Tuesday, about 200 had already cast ballots, according to Cameron County election records. The list did not include Musk, who voted in the county in the November elections. It was unclear if Musk intends to vote Saturday.

    Election success would be a personal victory for Musk. His popularity has diminished since he became the chain-saw-wielding public face of President Donald Trump’s federal job and spending cuts, and profits at his Tesla car company have plummeted.

    SpaceX has drawn widespread support from local officials for its jobs and investment in the area. But the creation of an official company town has also prompted concerns about expanding the tech tycoon’s personal control over the area, with potential authority to close a popular beach and state park for launches.

    Companion efforts to the city vote include bills in the state Legislature that would shift closure authority from the county to Starbase city.

    All these measures come as SpaceX has asked federal authorities for permission to increase the number of launches from South Texas from five to 25 a year.

    Musk first floated the idea of Starbase in 2021. The proposed city at the southern tip of Texas near the Mexico border is only about 1.5 square miles (3.9 square kilometers), crisscrossed by a few roads and dappled with airstream trailers and modest midcentury homes.

    SpaceX officials have said little about exactly why they want a company town and did not respond to emailed requests for comment this week.

    “We need the ability to grow Starbase as a community,” Starbase General Manager Kathryn Lueders wrote to local officials in 2024 with the request to get the city issue on the ballot.

    The letter said the company already manages roads and utilities, as well as “the provisions of schooling and medical care” for those living on the property.

    SpaceX officials have told lawmakers that granting the city beach closure authority would streamline operations for a company that has contracts with the Department of Defense and NASA.

    SpaceX rocket launches and engine tests, and even just moving certain equipment around the launch base, require closing a local highway and access to Boca Chica State Park and Boca Chica Beach.

    Critics say closure authority should stay with the county government, which represents a broader population that uses the beach and park. Cameron County Judge Eddie Trevino, Jr. has said the county has worked well with SpaceX and there is no need to change.

    Another proposed bill would make failure to comply with an order to evacuate the beach a Class B misdemeanor with up to 180 days in jail.

    The South Texas Environmental Justice Network, which has organized protests against the city vote and the beach access issue, planned to hold another protest Saturday night as the polls close.

    ___

    Vertuno reported from Austin, Texas.

  • AI chatbots are ‘juicing engagement’ instead of being useful, Instagram co-founder warns

    Instagram co-founder Kevin Systrom says AI companies are trying too hard to “juice engagement” by pestering their users with follow-up questions, instead of providing actually useful insights.

    Systrom said the tactics represent “a force that’s hurting us,” comparing them to those used by social media companies to expand aggressively. 

    “You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement,” he said at StartupGrind this week. “Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me.”

    The comments come amid criticism of ChatGPT for being too nice to users instead of directly answering their questions. OpenAI has apologized for the problem and blamed “short-term feedback” from users for it.

    Systrom suggested that chatbots being overly engaging is not a bug but an intentional feature designed for AI companies to show off metrics like time spent and daily active users. AI companies should be “laser-focused” on providing high-quality answers rather than moving metrics in the easiest way possible, he said.

    Systrom didn’t name any specific AI companies in his remarks. He didn’t immediately respond to a request for comment.

    In response, OpenAI pointed TechCrunch to its user specs, which state that its AI model “often does not have all of the information” to provide a good answer and may ask for “clarification or more details.”


    But unless questions are too vague or difficult to answer, the AI should “take a stab at fulfilling the request and tell the user that it could be more helpful with certain information,” the specs read.

  • The rise of agentic AI in a world of physical devices :: WRAL.com

    For decades, machine-to-machine (M2M) communication has quietly powered the infrastructure of our digital world. From automated meter readings to fleet management systems, these interactions have been primarily rule-based: one machine detects a status and sends a message to another, often triggering a predetermined response. These systems have been foundational in logistics, utilities, and industrial automation, delivering speed and consistency.

    But what happens when the machines involved are no longer just communicating—they are thinking, deciding, and negotiating? As digital complexity scales, static scripts and centralized control architectures often fall short. Enter agentic AI.

    In the era of agentic AI, software agents embedded in physical devices can pursue goals, learn from outcomes, and interact with each other with increasing autonomy. These agents are capable of interpreting context, adjusting behavior dynamically, and prioritizing objectives. The shift from M2M to agent-to-agent (A2A, or more specifically with hardware, MA2MA) communication represents a fundamental evolution in how machines operate in the real world—less like code execution, more like conversation and collaboration.

    Agentic AI vs. Generative AI

    To set a baseline, remember that agentic AI is quite different from the generative AI tools that have dominated recent headlines. Generative AI (like ChatGPT and DALL-E) creates new content from massive datasets spanning a wide range of different kinds of data. Frequently that data is “old,” with even the best tools still relying on training data more than a year out of date. Retrieval augmented generation (RAG) is getting better, allowing generative AI tools to scan the web for more recent data, but the tools are better for legacy research and creation than real-time engagement. Generative AI excels at pattern recognition, synthesis, and expression—from writing stories to generating business plans or producing realistic audio and visuals.

    Agentic AI, on the other hand, is about action. AI agents are trained on extremely narrow and deep, domain-specific data sets and are often tied to real-time sources of data like IoT data streams and dynamic database APIs. An agentic system senses its environment, makes autonomous decisions, adapts its behavior, and operates toward a goal. Unlike generative AI, which outputs content, agentic AI outputs decisions and actions.

    Let’s look at some examples of how agentic AI can be embedded into physical systems.

    Industrial automation: Imagine a warehouse robot that not only picks products but decides when to recharge, avoids high-traffic zones based on real-time updates, and coordinates with other robots to balance workload. If a new shipment is delayed, the agents update their plan and prioritize other tasks. This is not scripted automation—this is a hardware system with agency.
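
    The workload balancing described above can be pictured as a simple bidding scheme, where each robot scores how well placed it is to take a task. This is a hypothetical sketch, not any vendor’s actual scheduler; the `Robot` class, the bid formula, and the battery threshold are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    battery: float  # 0.0 (empty) to 1.0 (full)
    queue: list = field(default_factory=list)

    def bid(self, task):
        # Lower bid wins: busy or low-battery robots bid high.
        if self.battery < 0.2:
            return float("inf")  # must recharge before taking work
        return len(self.queue) + (1.0 - self.battery)

def assign(task, robots):
    # Each robot "negotiates" by bidding; the best-placed one takes the task.
    winner = min(robots, key=lambda r: r.bid(task))
    if winner.bid(task) == float("inf"):
        return None  # every robot is charging; the task waits
    winner.queue.append(task)
    return winner.name

robots = [Robot("R1", 0.9), Robot("R2", 0.15), Robot("R3", 0.6, ["pick-A"])]
print(assign("pick-B", robots))  # prints "R1": highest battery, empty queue
```

    The point of the sketch is that no central script dictates the outcome: the assignment emerges from each robot’s local state, which is what distinguishes this from fixed M2M automation.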

    Smart energy: Consider a smart HVAC unit that doesn’t just respond to a thermostat, but negotiates energy use with other appliances in a home based on real-time electricity prices, personal preferences, and weather forecasts. If a major storm is forecast, the HVAC might collaborate with a solar battery system to store extra power in advance.

    Supply chain logistics: In supply chain management, agentic systems can negotiate pricing and timing with each other across companies’ platforms. An AI-driven shipping container may decide to reroute itself if it detects a bottleneck at the originally planned port. And once in port, container cranes, autonomous trucks, and dock scheduling software now operate with AI agents. When a ship docks early, agents communicate in real time to shuffle unloading schedules, reroute trucks, and reduce idle time. Each agent understands both its local constraints and the broader system goals. The result is reduced fuel usage, higher throughput, and increased resilience against last-minute changes.

    Agriculture: In precision agriculture, drone fleets equipped with agentic software can work collaboratively. One drone might detect high weed density and alert others to increase pesticide application in that area. Meanwhile, soil sensors negotiate irrigation adjustments with the drones based on moisture levels and upcoming weather. This eliminates the need for constant human oversight, allowing farmers to focus on broader resource planning.

    The shift to agent-to-agent communication

    With agentic AI embedded in devices, communication becomes semantic and context-driven. Agents aren’t just exchanging sensor data; they’re negotiating plans, adapting priorities, and collaborating across domains.

    It is important to note that AI agents are ideal for edge applications. In the agriculture example above, the soil sensors do not need the processing power and long-range connectivity to analyze weather reports directly. They can have intelligence that monitors local soil conditions and waters based on those isolated measurements, in the absence of other data. But when a drone comes by to share additional intelligence, the agent can change from its original rule-based approach to a smarter system-level decision – all without a high demand on processing or connectivity – which keeps the device simple and low cost.
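
    That fallback pattern – a local rule in the absence of other data, upgraded when a peer shares intelligence – can be sketched in a few lines. The function name, the moisture threshold, and the rainfall cutoff below are illustrative assumptions, not any real product’s logic.

```python
def irrigation_decision(soil_moisture, rain_forecast_mm=None):
    """Soil-sensor agent: rule-based fallback, smarter with shared data.

    soil_moisture: 0.0 (dry) to 1.0 (saturated), measured locally.
    rain_forecast_mm: optional forecast shared by a passing drone.
    """
    if rain_forecast_mm is None:
        # No outside intelligence: fall back to the isolated local rule.
        return "water" if soil_moisture < 0.3 else "hold"
    # Shared data available: skip watering if rain will cover the deficit.
    if soil_moisture < 0.3 and rain_forecast_mm < 5.0:
        return "water"
    return "hold"

print(irrigation_decision(0.2))                       # "water": local rule
print(irrigation_decision(0.2, rain_forecast_mm=12))  # "hold": rain expected
```

    The sensor stays cheap because it only ever evaluates this tiny function; the expensive weather analysis lives on the drone, and the system-level behavior emerges from the handoff.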

    So how are hardware-embedded AI agents actually deployed today?

    Real-world applications and implications

    Manufacturing: Self-healing production lines

    In smart factories, equipment embedded with agentic AI can detect potential failure before it happens and autonomously shift workflows to alternate machines. The goal isn’t just predictive maintenance—it’s resilient operations where machines actively collaborate to keep the line running. The biggest cost to a manufacturing facility is downtime. Predictive maintenance was the best that old M2M techniques could achieve. Embedded AI agents take us to the next level. Human operators can supervise dozens of processes without needing to intervene in most issues.

    Healthcare: Patient-centric agents

    Wearable monitors like continuous glucose sensors are being paired with insulin pumps that can automatically adjust dosing. But more importantly, agentic systems can now integrate exercise data, diet patterns, and patient behavior to make dynamic care adjustments. In a recent clinical trial, patients using agentic closed-loop systems saw a more than 11% improvement in glycemic control (Time in Range, or the time patients managed to keep blood sugar at the proper level) over those using manual devices. And as an unexpected side benefit, patients saw an average 3.3 lb weight loss over the first month of AI-automated support. As healthcare shifts toward personalized models, agents will play a crucial role in dynamic therapy and diagnostics.

    Urban infrastructure: Smart streets that adapt

    In a pilot program in Helsinki, traffic lights, electric buses, and street cameras operate as agents on a shared protocol. If pedestrian density increases in one area, traffic signals coordinate to prioritize foot traffic, while buses adjust routes to alleviate congestion. During emergencies, agentic traffic systems can create rapid-response corridors for first responders without requiring centralized override.

    Energy: Autonomous microgrids

    In emerging microgrid projects, smart homes with solar panels and batteries act as agents that trade electricity with neighbors or back into the grid. During peak hours, homes can reduce load collectively. When power lines go down, these homes can isolate and operate in peer-to-peer mode, autonomously maintaining power within the community.
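
    One way to picture the peer-to-peer trading is a greedy matching of surplus homes to deficit homes, largest imbalances first. This is a toy sketch with invented numbers, not an actual microgrid protocol; real systems would also negotiate price, respect line constraints, and settle continuously.

```python
def match_trades(homes):
    """Pair surplus homes with deficit homes, largest imbalances first.

    homes: dict of name -> net kWh (positive = surplus, negative = deficit).
    Returns a list of (seller, buyer, kWh) trades.
    """
    sellers = sorted([(n, k) for n, k in homes.items() if k > 0],
                     key=lambda x: -x[1])
    buyers = sorted([(n, -k) for n, k in homes.items() if k < 0],
                    key=lambda x: -x[1])
    trades = []
    while sellers and buyers:
        (s, supply), (b, demand) = sellers[0], buyers[0]
        amount = min(supply, demand)
        trades.append((s, b, amount))
        sellers[0] = (s, supply - amount)   # draw down the seller's surplus
        buyers[0] = (b, demand - amount)    # draw down the buyer's deficit
        if sellers[0][1] == 0:
            sellers.pop(0)
        if buyers[0][1] == 0:
            buyers.pop(0)
    return trades

# Home A and D have surplus; B and C need power.
print(match_trades({"A": 3.0, "B": -2.0, "C": -1.5, "D": 1.0}))
```

    Even this naive version shows the key property of the agentic framing: the community balances itself from local surpluses and deficits, with no utility-side dispatcher in the loop.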

    Are we ready to trust agentic AI hardware?

    This MA2MA future is not without hurdles. Interoperability between different manufacturers’ agents remains a significant technical challenge. Without shared ontologies and communication protocols, agents may talk past each other—or worse, make conflicting decisions. The development of universal agent languages and agent-to-agent APIs is a growing area of focus.

    Security is another concern. With agents making autonomous decisions, a compromised agent could have outsized influence. Who certifies the behavior of agents? How do we define acceptable ranges of action? Can we detect if an agent is acting maliciously or incorrectly before damage is done?

    Ethically, the move toward machine agency forces us to revisit accountability. If a self-driving delivery bot reroutes to avoid danger and causes a delay, who is responsible? The designer? The owner? The agent? These questions will require updates to legal and insurance frameworks.

    There is also the question of unintended consequences. Agents that are rewarded for efficiency might ignore human-centric considerations like fairness, accessibility, or long-term risk unless explicitly coded to account for them. I think this may be the biggest risk as we look towards the future.

    The “industry of business” has always been predisposed towards maximizing profit. And the typical way that emerging technologies enter industry is first through creation of efficiency gains. If we program agents with heavy algorithmic weighting towards efficiency, we miss the opportunity for other kinds of value creation to emerge.

    How should the industry proceed?

    As agentic AI becomes increasingly common in physical devices, we must consider how to shape this future responsibly. A few key focus areas include:

    ● Standardization: Initiatives like the IEEE P7000 series are beginning to define ethical and functional standards for autonomous systems. These frameworks help designers embed values into their agents early in the development process. [Aside – I will take a deep dive into this series of standards in next week’s article.]

    ● Policy: Local governments and national regulators will need frameworks that treat agentic systems as semi-autonomous actors. In many ways, the policy discussion will mirror the evolution of cyber policy—just with more unpredictable actors.

    ● Design: Entrepreneurs and engineers must think not just about functionality, but about negotiation, cooperation, and alignment of values across agent networks. Design tools must evolve to allow simulation of agentic interactions before deployment. There will be a huge intersection here with digital twin technologies, and areas that are further ahead in development of digital twin models (manufacturing, smart cities) may have an early-mover advantage here.

    ● Education: A new generation of technologists must be trained not just in machine learning, but in multi-agent coordination, ethics, and socio-technical systems. This is a huge risk area. We have never seen the tech sector proactively consider adding behavioral scientists, anthropologists, or other experts in the humanities to their design teams. The closest hires are UI/UX (user interface / user experience) experts, who tend to focus on the efficiency metrics I described above. As we design technology tools that are to make decisions like humans, we must have experts in human behavior on the early product team.

    The transition from machines that follow commands to machines that form strategies represents a profound shift in how we interact with technology. This new world of agentic AI won’t just automate—it will negotiate, adapt, and in many cases, surprise us.

    As we venture further into this world, we must prepare for new forms of digital negotiation, cooperation, and even competition. And in the spirit of agentic AI, the question isn’t just what machines will do for us. It’s what they will choose to do—with us, and with each other.


  • Google is putting Gemini AI in the hands of kids under 13

    This week, Google reportedly sent an email to parents to let them know that the Gemini AI chatbot will soon be available for children under 13 years old.

    The New York Times cites an email that states the chatbot would be available starting next week for certain users. (Chrome Unboxed reported on the same email on April 29.) Google sent the email to parents who use the company’s Family Link service, which lets families set up parental controls for Google products like YouTube and Gmail. Only children who participate in Family Link would have access to Gemini, for now. The email reportedly told parents their children would be able to ask Gemini questions or get help with tasks like homework.

    The move comes days after the nonprofit Common Sense Media declared that AI companions represent an “unacceptable risk” for people under 18. Common Sense Media worked with researchers from Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation, resulting in a report urging parents to stop underage users from accessing tools like Character.ai.

    Character.ai is one of a growing number of services that let users create and interact with AI “characters.” As Common Sense Media wrote in its report, “These AI ‘friends’ actively participate in sexual conversations and roleplay, responding to teens’ questions or requests with graphic details.”

    This type of roleplaying is distinct from AI chatbots like ChatGPT and Gemini, but it’s a blurry line. Just this week, Mashable reported on a bug that would have allowed kids to generate erotica with ChatGPT, and The Wall Street Journal exposed a similar bug with Meta AI. So, while AI chatbots like Gemini do have safeguards to protect young people, users are finding ways to get around these guardrails. It’s a fact of life on the internet that some rules are easily skirted. Just consider online pornography, which is illegal for people under 18, yet widely available with just a few clicks.

    So, parents who want to keep their kids from using artificial intelligence are facing an uphill battle.

    To make the debate even more complicated, President Donald Trump recently issued an executive order that would bring AI education into U.S. schools. The White House says the order will “promote AI literacy and proficiency of K-12 students.” Understanding AI’s abilities, risks, and limitations could be useful for children using it for schoolwork (especially considering its tendency to hallucinate).

    In its email to parents, Google acknowledged these issues, urging parents to “help your child think critically” when using Gemini, according to The New York Times.

  • California Justices Accept Bar Exam Scoring Despite AI News (1)

    The State Bar’s request to adjust California Bar Exam scores to account for February test chaos was approved Friday by California justices, paving the way for test-takers’ results to be released.

    They also ordered the Bar to use the Multistate Bar Exam for the multiple choice section of the July exam.

    “Although the State Bar’s petition indicates that the February 2025 examination contained a sufficient number of reliable multiple-choice questions, the Court remains concerned over the process used to draft those questions, including the previously undisclosed use of artificial intelligence, and will await the results of the impending audits of the examination,” the justices wrote.

    The move will offer some relief to the 4,231 applicants who took the exam that glitched and crashed repeatedly, as it’s likely to raise the pass rate compared with prior February sittings. Applicants will need a raw score of 534 to pass the exam.

    “The total raw score shall consist of the 700 possible raw points for the written portion plus the 171 points available for the multiple-choice components with each weighted equally (50 percent assigned to each),” the order said. “For applicants who took the February 2025 Attorneys’ Examination, the raw passing score shall be 420 points or higher.”

    Justices also allowed exam graders to fill in missing data using psychometric imputation for test takers who answered at least 114 of 171 scored multiple-choice questions, and answered at least four of six written components.

    The approval comes despite controversy over the State Bar’s reveal in an April 21 news release that some questions were written using artificial intelligence.

    The Bar said Wednesday its psychometrician contractor, ACS Ventures Inc., used ChatGPT to write 29 of 200 exam questions. State Supreme Court justices had pressed for details after they said they weren’t warned about the use of AI.

    The brand-new exam was the Bar’s attempt to stave off admissions fund insolvency by creating a California test that could be administered remotely, even out-of-state. Applicants as early as January raised concerns that infrastructure wasn’t equipped to handle the exam.

    The matter is Proposed Raw Passing Score and Scoring Adjustments for the February 2025 California Bar Examination, Cal., No. S290627, 5/2/25.

  • How AI Chatbots Are Powering the Next Generation of High-Value Patient Conversion in Healthcare

    NEW YORK CITY, NY / ACCESS Newswire / May 2, 2025 / In an era where healthcare is as much about timely access and personalized experience as it is about clinical excellence, one technology is transforming how providers attract, engage, and convert patients: AI chatbots.

    By 2025, 90% of hospitals are projected to use artificial intelligence for early diagnosis, remote monitoring, and patient engagement. At the heart of this digital evolution lies the AI chatbot: an always-on, hyper-intelligent assistant that is fast becoming the cornerstone of high-value patient conversion strategies.

    The Front Door to High-Value Care

    Today’s high-value patients, those seeking advanced procedures, specialty care, or complex diagnostics, expect speed, clarity, and concierge-level service from their healthcare providers. AI chatbots serve as the ideal digital front door, instantly engaging website visitors, triaging inquiries, and guiding them toward the services that generate the greatest revenue impact.

    Consider a patient exploring cosmetic surgery, orthopedic care, or oncology second opinions. Rather than filling out a generic form and waiting hours (or days) for a callback, an AI chatbot can:

    • Instantly answer questions about procedures, costs, and appointment availability

    • Triage their needs and recommend consultations with relevant specialists

    • Capture contact information and seamlessly hand off qualified leads to staff

    By reducing friction in the patient journey, chatbots dramatically increase the likelihood of converting inquiries into booked appointments, especially for high-revenue services.

    Personalization and Predictive Analytics: The Conversion Engine

    Gone are the days of one-size-fits-all patient communications. Modern AI chatbots leverage natural language processing (NLP) and predictive analytics to deliver highly personalized experiences.

    These systems learn from every interaction, enabling:

    • Tailored recommendations based on patient interests and health history

    • Automated follow-ups with relevant content and booking links

    • Smart reminders aligned with patient behavior (e.g., “It’s been 3 months since your consultation on spinal injections. Would you like to schedule a follow-up?”)

    Case Study: A dermatology practice integrating an AI chatbot saw a 42% increase in conversion rates for patients seeking high-ticket procedures like Mohs surgery and cosmetic laser treatments, driven by personalized engagement and timely follow-ups.

    Integrating Telemedicine and Wearables for Proactive Outreach

    The synergy between AI chatbots, telehealth platforms, and wearable devices is redefining how providers identify and nurture high-value patients.

  • House Dem torches Elon Musk & DOGE


    MSNBC’s Ali Velshi speaks to Rep. Melanie Stansbury (D-NM), the ranking member of the House Oversight Committee’s DOGE subcommittee, about the promises Elon Musk made for DOGE versus the underwhelming reality DOGE has produced.

  • Google Can Train Search AI With Web Content Even After Opt-Out

    Google can train its search-specific AI products, like AI Overviews, on content across the web even when the publishers have chosen to opt out of training Google’s AI products, a vice-president of product at the company testified in court on Friday.

  • Amazon’s Alexa+ Voice Assistant Draws 100,000 Users

    Amazon has rolled out Alexa+, the new version of its voice assistant, to more than 100,000 users so far, Amazon CEO Andy Jassy said Thursday (May 1) during the company’s quarterly earnings call.

    Alexa+ will be made available to more users in the coming months, Jassy said. It is now starting to roll out in the U.S. and will be expanded to other countries later this year.

    The new version of the voice assistant is being made available on an Early Access basis, beginning with customers who sign up to be notified and own an Echo Show 8, 10, 15 or 21 and then expanding to more Echo customers over time, according to the Amazon website.

    “People are really liking Alexa+ thus far,” Jassy said during the call.

    The new voice assistant is free to Prime members and available for $19.99 per month to non-members, Jassy said.

    He added that Amazon has more than half a billion devices in people’s homes, offices and cars to which Alexa+ will be able to be delivered.

    Jassy said during that call that the new version is “meaningfully smarter and more capable than its prior self, can both answer virtually any questions and take actions.”

    He added that users no longer have to say “Alexa” before requesting every action; instead, they only have to say it once and can then have an ongoing conversation with the voice assistant.

    “And then I think it’s just experience in trying things,” Jassy said during the call. “So, you can do things like: you have guests coming over on a Saturday night for dinner and you can say, ‘Alexa, please open the shades, put the lights on in the driveway and on the porch, increase the temperature five degrees and pick music that would be great for dinner that’s mellow,’ and she just does it. When you have those types of experiences, it makes you want to do more of it.”

    When Amazon introduced Alexa+ in February, the company said the new voice assistant would start rolling out in the next few weeks in the U.S. and works with nearly all existing Alexa devices.

    Alexa+ had been plagued by delays, reportedly due to it hallucinating or giving wrong information on test questions. The unveiling came about a year and a half after Amazon first announced it was going to infuse AI into Alexa following the release of ChatGPT.

    The PYMNTS Intelligence report “How Consumers Want to Live in the Voice Economy” found that 54% of consumers said they would prefer voice technology in the future because it is faster than typing or using a touchscreen.

  • Apple and Anthropic Building AI-Powered Coding Platform

    Apple and Anthropic have reportedly partnered to create a platform that will use artificial intelligence (AI) to write, edit and test code for programmers.

    Apple has started rolling out the coding software to its own engineers, Bloomberg reported Friday (May 2). The company hasn’t decided whether to make it available to third-party app developers.

    The tool generates code or alterations in response to requests made by programmers through a chat interface. It also tests user interfaces and manages the process of finding and fixing bugs, according to the report.

    It was reported in August that while generative AI is not yet making money in some fields, it has quickly proven its value in powering coding assistants.

    As of the time of that report, one AI coding assistant, the Microsoft-owned GitHub Copilot, had drawn nearly 2 million paying subscribers since its launch in 2022 and contributed to a 45% year-over-year increase in GitHub’s revenue.

    Amazon, Meta, Google and several startups have also built AI assistants for writing and editing code.

    McKinsey said in 2023 that AI could boost the productivity of software engineering by 20% to 45%.

    This increased efficiency has far-reaching implications for businesses across industries, CPO and CTO Bob Rogers of Oii.ai told PYMNTS in an interview posted in May 2024. AI-powered tools enable developers to create software and applications faster and with fewer resources.

    “Simple tasks such as building landing pages, basic website design, report generation, etc., can all be done with AI, freeing up time for programmers to focus on less tedious, more complex tasks,” Rogers said. “It’s important to remember that while generative AI can augment skills and help folks learn to code, it cannot yet directly replace programmers — someone still needs to design the system.”

    It was reported in April that OpenAI was in discussions with Windsurf, an AI-powered coding tool, to acquire the technology.

    Windsurf, formerly known as Exafunction, had recently been in discussions with investors to raise $3 billion. Last year, the firm was valued at $1.25 billion in a deal led by General Catalyst.

    OpenAI Chief Financial Officer Sarah Friar said in April that OpenAI is building an AI agent that can do all the work of software engineers, not just augment their skills.