“Earlier today, there was a media report erroneously claiming that the Tesla Board had contacted recruitment firms to initiate a CEO search at the company,” Denholm wrote in a post published by Tesla’s X.com account. “This is absolutely false … The CEO of Tesla is Elon Musk and the Board is highly confident in his ability to continue executing on the exciting growth plan ahead.”
The Journal reported Tesla’s board of directors was in the initial stages of a formal process to find the EV maker’s next CEO. The board, according to the Journal, also told Musk he needed to spend more time back at the company and needed to let Tesla investors and the public know he was returning.
Musk apparently didn’t push back and publicly stated on Tesla’s first quarter earnings call in late April that he would indeed be “allocating far more” of his time at Tesla.
Tesla’s investor relations department did not immediately respond to Yahoo Finance’s request for comment. Tesla does not maintain a press office in the US.
Despite Denholm’s denial, the Journal’s reporting suggests a search was at least initially explored, and it is unclear if Musk, a member of the board, was even aware.
Tesla’s board of directors has a fiduciary duty to shareholders, though in recent years its reputation was for rubber-stamping CEO Elon Musk’s initiatives, signing off on huge pay packages, and offering little resistance to the mercurial CEO’s various whims. That may have changed following Musk’s foray into politics, which brought significant harm to the brand and may have led to a sales slump.
Tesla’s auto sales have been hurting, with the EV maker suffering a year-over-year decline in global sales last year. Last month, Tesla reported first quarter deliveries of 336,681 units, making it the worst quarter for deliveries since the second quarter of 2022.
Tesla’s first quarter earnings missed the mark as well, with revenue down 9% and profit sliding over 70%.
Denying it all: Elon Musk speaks during a cabinet meeting at the White House, Wednesday, April 30, 2025, in Washington. (AP Photo/Evan Vucci)
Musk’s big bets on self-driving cars and robotics, like the Optimus humanoid robot, are key to unlocking trillions in value, the CEO has said time and time again. The Journal report even suggests Musk himself no longer wanted to be CEO of Tesla, but he felt no one else could “sell the vision” that Tesla’s future was robotics and autonomous vehicles.
Tesla bulls like Wedbush’s Dan Ives believe Musk forced the board’s hand in exploring a new chief executive, but cooler heads prevailed with Musk back in the “driver’s seat” at Tesla.
Tesla’s future? Tesla Optimus humanoid robot on display inside the Tesla pop-up store near Shibuya crossing. (Photo by Stanislav Kogiku/SOPA Images/LightRocket via Getty Images)
“While this was a very tense situation, we believe Musk clearly did the right thing [returning to Tesla] and we believe Musk will remain CEO for at least five years at Tesla and we would be surprised if the Board was still heading down this search path as of today,” Ives wrote in a note to investors. “We continue to believe Musk’s days at the White House are now ending after this ‘warning shot’ from the Tesla Board.”
With the launch of Tesla’s more affordable vehicle coming any day, robotaxi testing coming to Austin next month, and industrialization of Optimus robots underway, a fully focused Musk is key for the company’s initiatives to be successful.
The big question for Tesla and its board is when Musk will finally leave Washington and head back to Tesla HQ in Austin. Maybe a bigger question: Is it too late?
Pras Subramanian is a reporter for Yahoo Finance. You can follow him on X and on Instagram.
The Nasdaq 100 had plunged as much as 15% after the sky-high tariff announcement, but has since surged as much as 20% as the Trump administration rescinded some of the tariffs and instituted a 90-day pause.
Solid earnings results from Microsoft and Meta Platforms helped boost tech stocks even higher on Thursday. During their earnings calls, both companies hyped up AI tech and committed to making massive investments in AI infrastructure.
Shares of AI-related stocks boomed Thursday morning, with Nvidia up as much as 5%, and other names including Arista Networks, Vistra, CoreWeave, and Vertiv also surging.
The jump in tech shares helped indexes rally after a mixed day on Wednesday, following a slate of weak economic data.
Here’s where major US indexes stood at about 11:10 a.m. ET:
S&P 500: 5,652.21, up 1.5%
Dow Jones Industrial Average: 41,053.19, up 0.93% (+376.35 points)
Nasdaq Composite: 17,895.36, up 2.6%
Meta raised capex guidance for 2025 to $64 billion to $72 billion from $60 billion to $65 billion, though higher costs due to tariffs could be playing a role in the increased guidance.
The jump in capex plans suggests that the AI infrastructure buildout isn’t slowing down as had been feared in recent months following the release of the highly efficient DeepSeek model from a startup in China.
“We expect this significant infrastructure footprint we are building will not only help us meet the demands of our business in the near term, but also provide us an advantage in the quality and scale of AI services we can deliver,” Meta Platforms CFO Susan Li said on the company’s earnings call.
Wall Street analysts cheered the results. Bank of America reiterated its “Buy” rating for the stock on Thursday and raised its price target to $690 per share, a potential jump of 16% from Thursday’s intraday high.
Meanwhile, Microsoft spent the bulk of its earnings call talking about the importance of AI and how it’s helping grow its business across the cloud and enterprise software.
“Cloud and AI are the essential inputs for every business to expand output, reduce costs, and accelerate growth,” Microsoft CEO Satya Nadella said.
Microsoft’s Azure cloud business posted year-over-year revenue growth of 33%, of which about 16 percentage points were attributed to AI.
“The company is still doubling down on the AI monetization strategy within cloud,” Wedbush analyst Dan Ives said in a note after Microsoft’s earnings.
Microsoft reiterated its guidance to spend $80 billion in capital expenditures this year.
Generative artificial intelligence is a relatively new technology. Consequently, it presents new security challenges that can catch organizations off guard.
Chatbots powered by large language models are vulnerable to various novel attacks. These include prompt injections, which use specially constructed prompts to change a model’s behavior, and data exfiltration, which involves prompting a model thousands, maybe millions, of times to find sensitive or valuable information.
These attacks exploit the unpredictable nature of LLMs, and they’ve already inflicted significant monetary pain.
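The prompt-injection mechanics can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual pipeline: the prompt template, marker list, and function names are all assumptions, and the keyword filter stands in for the trained classifiers real products use.

```python
# Hypothetical sketch: a naive chatbot template splices untrusted user text
# directly into the instruction the model sees, which is what makes prompt
# injection possible in the first place.
SYSTEM_PROMPT = "You are a support bot. Only answer questions about billing."

def build_prompt(user_input: str) -> str:
    # Untrusted input flows straight into the model's prompt.
    return f"{SYSTEM_PROMPT}\nUser: {user_input}"

# A crude first line of defense: flag inputs containing common override
# phrasing. Illustrative only; production systems use trained classifiers.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the above",
    "you are now",
)

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A real deployment would combine a check like this with model-side safeguards rather than relying on keywords alone, since attackers can trivially rephrase.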
“The largest security breach I’m aware of, in monetary terms, happened recently, and it was an attack against OpenAI,” said Chuck Herrin, the field chief information security officer of F5, a multicloud-application and security company.
Chuck Herrin, F5’s field chief information security officer.
AI models are powerful but vulnerable
Herrin was referencing DeepSeek, an LLM from the Chinese company by the same name. DeepSeek surprised the world with the January 20 release of DeepSeek-R1, a reasoning model that ranked only a hair behind OpenAI’s best models on popular AI benchmarks.
But DeepSeek users noticed some oddities in how the model performed. It often constructed its response similarly to OpenAI’s ChatGPT and identified itself as a model trained by OpenAI. In the weeks that followed, OpenAI told the Financial Times it had evidence that DeepSeek had used a technique called “distillation” to train its own model by prompting ChatGPT.
The evidence OpenAI claimed to have was not made public, and it’s unclear whether the company will pursue the matter further.
Still, the possibility caused serious concern. Herrin said DeepSeek was accused of distilling OpenAI’s models down and stealing its intellectual property. “When the news of that hit the media, it took a trillion dollars off the S&P,” he said.
Alarmingly, it’s well known that exploiting AI vulnerabilities is possible. LLMs are trained on large datasets and generally designed to respond to a wide variety of user prompts.
A model doesn’t typically “memorize” the data it’s trained on, meaning it doesn’t precisely reproduce the training data when asked (though memorization can occur; it’s a key point in the New York Times’ copyright infringement lawsuit against OpenAI). However, prompting a model thousands of times and analyzing the results can allow a third party to emulate the model’s behavior, which is what distillation does. Techniques like this can also yield some insight into the model’s training data.
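The distillation loop described here can be sketched with a stand-in teacher. The functions below are illustrative assumptions, not OpenAI’s or DeepSeek’s actual systems: the “teacher” is a toy function reachable only through its interface, and the “student” simply replays harvested behavior.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for a deployed LLM reachable only through its public API.
    return "positive" if ("good" in prompt or "great" in prompt) else "negative"

def distill(prompts: list[str]) -> dict[str, str]:
    # Harvest (prompt, response) pairs by repeated querying; no access to
    # weights or training data is needed.
    return {p: teacher_model(p) for p in prompts}

def student_model(memory: dict[str, str], prompt: str) -> str:
    # The "student" replays the harvested behavior. Real distillation would
    # fit a smaller neural network on these pairs rather than memorizing them.
    return memory.get(prompt, "unknown")
```

The point of the sketch is that everything happens through the same interface ordinary users have, which is why API-level controls matter so much.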
This is why you can’t secure your AI without securing the application programming interface used to access the model and “the rest of the ecosystem,” Herrin told Business Insider. So long as the API is available without appropriate safeguards, it can be exploited.
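One concrete API safeguard of the kind Herrin alludes to is rate limiting, which slows the bulk prompting that distillation and exfiltration depend on. Below is a minimal sliding-window sketch; the class and its interface are hypothetical, and callers supply timestamps so the behavior is deterministic.

```python
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter keyed by client; timestamps come from the caller."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> recent request timestamps

    def allow(self, client_id: str, now: float) -> bool:
        q = self.history[client_id]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```

Rate limiting alone won’t stop a patient or distributed attacker, which is why it’s paired with authentication and anomaly detection in practice.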
To make matters worse, LLMs are a “black box.” Training an LLM creates a neural network that gains a general understanding of the training data and the relationships between data in it. But the process doesn’t describe which specific “neurons” in an LLM’s network are responsible for a specific response to a prompt.
That, in turn, means it’s impossible to restrict access to specific data within an LLM in the same way an organization might protect a database.
Sanjay Kalra, the head of product management at the cloud security company Zscaler, said: “Traditionally, when you place data, you place it in a database somewhere.” At some point, an organization could delete that data if it wanted to, he told BI, “but with LLM chatbots, there’s no easy way to roll back information.”
Sanjay Kalra, the head of product management at Zscaler.
The solution to AI vulnerabilities is … more AI
Cybersecurity companies are tackling this problem from many angles, but two stand out.
The first is rooted in a more traditional, methodical approach to cybersecurity.
“We already control authentication and authorization and have for a long time,” Herrin said. He added that while authenticating users for an LLM “doesn’t really change” compared with authenticating for other services, it remains crucial.
Kalra also stressed the importance of good security fundamentals, such as access control and logging user access. “Maybe you want a copilot that’s only available for engineering folks, but that shouldn’t be available for marketing, or sales, or from a particular location,” he said.
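Kalra’s copilot example maps to a simple policy check performed before any prompt reaches a model. The policy table, tool name, and fields below are hypothetical illustrations, not Zscaler’s product.

```python
# Hypothetical access policy: which teams and locations may use which AI tool.
POLICY = {
    "eng-copilot": {"teams": {"engineering"}, "locations": {"us", "eu"}},
}

def can_use(tool: str, team: str, location: str) -> bool:
    rule = POLICY.get(tool)
    if rule is None:
        return False  # default-deny tools with no explicit policy
    return team in rule["teams"] and location in rule["locations"]
```

The design choice worth noting is default-deny: a tool absent from the policy is unusable, which matches the security fundamentals both Kalra and Herrin stress.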
But the other half of the solution is, ironically, more AI.
LLMs’ “black box” nature makes them tricky to secure, as it’s not clear which prompts will bypass safeguards or exfiltrate data. But the models are quite good at analyzing text and other data, and cybersecurity companies are taking advantage of that to train AI watchdogs.
These models position themselves as an additional layer between the LLM and the user. They examine user prompts and model responses for signs that a user is trying to extract information, bypass safeguards, or otherwise subvert the model.
“It takes a good-guy AI to fight a bad-guy AI,” Herrin said. “It’s sort of this arms race. We’re using an LLM that we purpose-built to detect these types of attacks.” F5 provides services that allow clients to use this capability both when deploying their own AI model on premises and when accessing AI models in the cloud.
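The layered screening Herrin describes can be sketched as follows. Both models here are stand-in functions, not F5’s actual product: a real deployment would call a security-tuned classifier in place of the keyword check and a production LLM in place of the toy responder.

```python
def guard_model(text: str) -> bool:
    # Stand-in for a security-tuned watchdog model; returns True when the
    # text looks like an attempt to extract prompts or training data.
    lowered = text.lower()
    return "system prompt" in lowered or "training data" in lowered

def main_llm(prompt: str) -> str:
    # Stand-in for the production model being protected.
    return f"answer to: {prompt}"

def guarded_chat(prompt: str) -> str:
    if guard_model(prompt):      # screen the inbound prompt
        return "request blocked"
    reply = main_llm(prompt)
    if guard_model(reply):       # screen the outbound reply as well
        return "response withheld"
    return reply
```

Screening both directions matters: an injection may slip past the input check yet cause the model to leak something the output check can still catch.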
But this approach has its difficulties, and cost is among them. Using a security-tuned variant of a large and capable model, like OpenAI’s GPT-4.1, might seem like the best path toward maximum security. However, models like GPT-4.1 are expensive, which makes the idea impractical for most situations.
“The insurance can’t be more expensive than the car,” Kalra said. “If I start using a large language model to protect other large language models, it’s going to be cost-prohibitive. So in this case, we see what happens if you end up using small language models.”
Small language models have relatively few parameters. As a result, they require less computation to train and consume less computation and memory when deployed. Popular examples include Meta’s Llama 3-8B and Mistral’s Ministral 3B. Kalra said Zscaler also has an AI and machine learning team that trains its own internal models.
As AI continues to evolve, organizations face an unexpected security scenario: The very technology that suffers vulnerabilities has become an essential part of the defense strategy against those weak spots. But a multilayered approach, which combines cybersecurity fundamentals with security-tuned AI models, can begin to fill the gaps in an LLM’s defenses.
Most startups bolt AI onto old products. Ravenna reimagined the entire workflow.
When we first met Ravenna Founders Kevin Coleman and Taylor Halliday, it was clear they weren’t just chasing the hype cycle. They were pairing AI-native architecture with deep founder-market fit, and rebuilding how internal ops work — from first principles.
Their new company is going after a market dominated by legacy players. But instead of being intimidated by incumbents, they got focused, making some smart moves that more early-stage teams should consider:
Speak with 30+ customers before writing a line of code
Define a clear ICP and pain points
Build natively for Slack — where support actually happens
Prioritize automation, iteration, and real workflow transformation
Stay radically transparent with investors and early customers
In this episode of Founded & Funded, these two sit down with Madrona Managing Director Tim Porter and talk through their journey, what they did differently this second time around as co-founders, and how they’re building a durable, agentic platform for internal support.
If you’re a founder building in AI, SaaS, or ops — this conversation is full of lessons worth hearing.
This transcript was automatically generated and edited for clarity.
Tim: So I mentioned in the intro, you’ve done a company together before and this is a second one. We’re super excited to have been able to invest in the company, an announcement that just came out here recently. But let’s go back. Tell us about the moment you decided to start Ravenna. What problems did you see that you said, Hey, we got to go solve this for customers?
Kevin: I was at Amazon for four years, and I think the whole time I was there, I was looking around trying to figure out what was going to be the next company that we go and do. It took a while to find it, but about halfway through my tenure there, I realized one day that I was spending a lot of time in an internal piece of software Amazon has that serves as the help desk across a lot of different teams. It was the tool where I would go to request onboarding for new employees, to request new dashboards to get built from our BI team. My teams would use it for other teams to request work from us. I realized I was spending so much time in this tool, it wasn’t a great product experience. The way I always described it to folks is it was like the grease in the enterprise gears, if you will. It was the way that things got done internally.
And so I got obsessed with what is this product category? It’s so foundational to how Amazon as a business operates and I started doing a bunch of research in this space. I found out it’s called enterprise service management, which is the category. ServiceNow is the leader. I finally understood what ServiceNow as a business did and why they’re such a valuable business and how large this market is. I started thinking, what does a next-generation, amazing version of this product look like if a very innovative startup built it that cared about design and user experience and cared about automation as well? So really, what does the next generation ESM platform look like?
Tim: I love that because ESM is a category, it’s a big market. ServiceNow is a leader, but I also think, like a lot of things, Amazon did it in an innovative kind of scrappier way. You actually used it for more things. This was the way you just requested and got things done across different groups, as opposed to “Well, we got to log it into this system of records so somebody has a record of it,” and it’s like, no, this is actually the way it was getting done.
Kevin: Yeah, absolutely. And so I came up with this concept and when a concept gets lodged in your mind, you can’t get rid of it. I went and ran to Taylor, who obviously was my co-founder previously and the guy I wanted to start the next company with, and I said, “Hey, I’ve got this awesome idea. We’re going to build a next-generation enterprise service management platform.”
Taylor: At first blush, it was tickets and queues. I was looking at this like, “Is this Jira? I don’t quite understand what’s going on here.” But it came at a good time. So rewind the clock: ChatGPT hit the scene, and Zapier, just like every other company probably on the planet, had a little mini freak out, like, “What do we do with this? What does this mean for us?” Product strategy, what have you. At the time, I was lucky enough to pair up with some of the founders to basically do a listening tour. We headed around to mid-market size companies and talked to C-suite directors, executives, what have you. Obviously, Zapier is known for its automation game; we wanted to try to figure out what would be a great solution here in the world of AI/LLMs to bring a new level of value to them.
We asked an open question: Where would you like to have us poke and prod? We did 20 to 30 of these calls. It became pretty clear, resoundingly — we kept hearing about internal operations, over and over and over.
I had a hard time picking through that in my own head, and I even had a blunt conversation with one of the CEOs, saying, “I’ve heard this so many times. What’s the deal? What’s going on here? Why do I keep on hearing about internal operations?” I think there were a couple of answers. One, they could wrap their heads a lot better around internal efficiencies, or the lack thereof, at their companies, and what we were hoping to do. Two, there was a visibility gap: “I don’t have a good amount of visibility into what folks are actually doing. I can’t drive efficiency top-down at my company. As much as I say, ‘Hey look, I want you to be more efficient, do these different things,’ I don’t know super well what the marketer or the engineer is doing.” They all saw AI as an opportunity to drive some of this efficiency bottoms-up without it being a top-down thing.
I think that was a lot of the interest. And so when Kevin ran to me, it was like, “I have this idea circling around, this internal management tool, and there’s an opportunity perhaps in this larger market that’s old, with 30-year-plus incumbents all over the place.” That’s what got me interested and sparked a lot of the collaboration in the early days.
Tim: We think a lot about founder market fit when we’re looking at new investments, and I remember our first meeting Matt McIlwain and I had together with you guys and we both left like, “Oh, my gosh. We have to find a way to fund this.” It was this unbelievable founder market fit that you had lived the tool in using it at Amazon. You literally witnessed it across all of these customers at Zapier who are using it to get these automations in place, but it wasn’t a full product. So, awesome to see you both come together with those insights. I’m going to abstract away a bit. We’ll come back to Ravenna and the specifics about enterprise service management, but you guys have done a company before.
Kevin: We have.
Tim: You’ve been through YC, and you decided to do it again. That’s a testament to your working style. But were there things like, “Okay, we’ve got the idea again.” And other things like, “Hey, we’re going to do something different. We’re going to do it the same”? How is it different for you guys going at it a second time around?
Taylor: It’s funny, you mentioned a past company, and I always joke that it depends on how you count a company. Kevin and I have been working together for quite a long time, whether that was in the early days, finding a coffee shop or bar at night to work on the smallest app, or later in San Francisco, where we ganged up and tried to start a CDP of sorts, went through YC with that, and molded it into several different things. But regardless, we were taking stock of that history, if you will.
To your point, what went wrong, and what went right? An interesting way of characterizing it, and I feel like a lot of entrepreneurs do this in the early innings, is chasing, “What’s the new, new? What’s the thing that doesn’t exist out there yet?” If you take a retrospective look at the stuff we’ve put a lot of time and effort into, it circles around that, which is, frankly speaking, some of the fun and excitement when you’re with some buddies and say, “Hey, this doesn’t exist yet. What if we made this?”
We were thinking, “Hey, look, what if we flipped it on its side this time? Instead of doing that approach, let’s try to figure out a market that’s super well-defined and try to focus in on opportunities to actually bring a better experience.” Especially in the age of AI, it seemed like the perfect time to target this one in particular.
Kevin: Taylor hit the nail on the head there. For as long as we’ve been building software together, we’ve always been trying to identify something that doesn’t exist and shy away from competition, and this time we’re taking it head on. So we’re super excited about that. Big markets are exciting. We don’t have to go find a small group of people who need what we make. We know there’s a ton of people out there who need what we’re building. That’s really exciting to us.
Taylor: As part of that analysis of, “What do we want to work on and where do we want to press?” I remember talking to you, Kevin, thinking about taking a step back, “What kind of risks do we want to bring on?” We kind of framed it like that. Going back to my earlier point, I would characterize a lot of the early endeavors as pretty high in market risk. We’re trying to figure out, “Hey, let’s try to not optimize for that this time. Let’s try to optimize for something else.”
To compliment ourselves a bit, I think we’re pretty good at building a lot of products, and doing it pretty fast, too. Also at getting together a lot of good folks to work with. So from a, call it, human capital risk standpoint, I didn’t see that on the table. By taking on a larger market, we were trying to take the market risk off the table. We tried to optimize more for what we thought of as go-to-market risk.
Kevin: The other thing I’d say that we’re doing better on, I don’t know if we’re doing great at it, but we’re doing better at it this time around, is understanding who our customer is and being super clear about what we’re building and for who. So ICP, ideal customer profile, if you will. Taylor mentioned the last company, the first product that we built was a customer data platform. We effectively at his startup, my startup, we had problems with our customer data. It was sprawling all over the place. Folks who were non-technical were always asking us to integrate various tools so they could get customer data where it needed to go. We would go around to potential customers and say, “Hey, you probably have problems with your customer data. Can we help you?” And they’re like, “Yes, of course we do.”
The problem was that the problems were all over the place. There wasn’t a product we could identify that would cut across a bunch of companies. Part of that was that we were early entrepreneurs and didn’t know what we were doing. This time around, we spent months talking to customers, understanding the space, and understanding what pain they had before we wrote a line of code. We wanted to be super clear about our ICP, what they needed, and what their problem was, and then we backed into the product from that. So a hundred percent, this time around we’re doing a lot better on that front than we were last time, and we think it’s definitely the right way to go.
Tim: Well, this is definitely a super big market, and another thing that came through from the beginning, as we have been engaging and working together, is customer-driven, customer-driven. Sort of the maniacal customer focus that is maybe, I think, the core attribute of successful startups. So that’s been awesome.
Let’s talk a little bit more about what the product does and bring it more to life. I’ll lead you into that by talking about some of the investment themes that Madrona has that we thought Ravenna embodies. A big part of that is AI and part of the why now for Ravenna. Probably our biggest theme is around how AI can reimagine and disrupt different enterprise apps. You’re using what I would call or many in the industry would call an agentic approach where you can actually create various agents that don’t just surface insights but can automate and finish tasks. This world and this product area is really ripe for that, and you’ve done some interesting things there.
And then new UIs. The user experience, you’ve embraced Slack as a place that work is getting done and made the product be extremely Slack native and fully integrated in people’s existing workflow as well as an ethos around clean, simplistic. Taylor, you and I talked about this the very first time we talked about the product, but maybe give a better description. Okay, great service management, tickets, people have something in mind, but say more about the key features and then maybe tie that back to when you were out talking to these initial prospects, what did you hear about what was missing and what could you deliver in your product to make this experience such a big leap forward?
Taylor: Going back to why we picked this. There’s a well-known UX product pattern that you see in this market, and we weren’t very impressed by what we saw. In the age of AI/LLMs, the popular thing, I would argue, would be to come at this with an intelligence layer. We definitely considered that, and we made a conscious decision on what we think is maybe where the longer-term value is — but also perhaps the tougher path — which is that we’re not just building the intelligence layer for this market. We have a lot of confidence, conviction if you will, that there’s room for a newly rethought platform. In actual practice, for those familiar with the space, a help desk is probably the most down-to-earth version.
Tickets and queues follow a very similar pattern to what you’d expect from customer service software; the primary difference is that this software is geared toward solving your colleagues’ problems. The canonical example is the IT help desk. You ask for a password reset, new equipment, what have you, and that creates a case, a ticket. That’s the typical way of going about this. We’re not talking purely about the intelligence layer and the agents, which we are super excited about, and I think we have a lot of fun stuff there, but also very much about building and rethinking what the larger brick-and-mortar ticketing platform looks like.
Kevin: Yeah, 100%. So enterprise service management is the category. That’s a very broad term. Most people don’t know what enterprise service management is. The easiest way to think about it is it’s an internal employee support platform, internal customer support platform if you will. So, you have functions across an organization, whether it’s rev ops, HR ops, sometimes called people ops, facilities, legal, etc. They all offer internal services. What I mean by that is they offer services that other colleagues can leverage.
So in legal, a service might be, “Hey, can you review this contract?” In facilities, it might be, “Hey, my desk is broken, can somebody come and fix it?” And so this pattern exists across companies, and what people need is a tool that allows them to intake these requests, assign those requests, resolve the requests, and then get reporting and analytics. Increasingly, with AI and automation, classic workflow automation, they want to automate a lot of this work as well.
What we’re building is a platform that allows any team within a company to offer a best-in-breed service, best-in-breed help desk and provide amazing service to their colleagues and then also automate a lot of their work with our AI. That’s a pretty straightforward way of describing it.
Tim: You recently were part of a launch that Slack did for partner companies. Pretty cool. You’re Slack native but yet a new company, kind of an interesting series of events that maybe led to that. What’s the background on that and what has it been like trying to partner closely with Slack?
Kevin: I’ll say upfront that when you start a company, weird, cool, fun stuff just happens. It’s kind of like Murphy’s Law, right? Anything that can happen will happen. It feels like that is embodied in a startup to a certain extent. So yeah, we were a launch partner for the launch of the AI and assistance app category in the Slack marketplace. You can find Ravenna in the Slack marketplace, which is super cool.
The way it happened is very fun. Matt McIlwain, who is obviously your partner here at Madrona, when we were going through our recent fundraise, he said, “Hey, there’s a local entrepreneur you should go and talk to.” He made the introduction, this local entrepreneur went on a walk with Taylor, heard what we were talking about, what we were building and said, “Hey, a certain high level executive at a large CRM company in the Bay Area,” who happens to be Slack’s parent company, “should learn about this.” We were like, “Of course, anybody who’s an executive of these companies should learn about us.”
They ended up forwarding along our deck. That got forwarded over to the executive team at Slack, and they got in touch with us and said, “Hey, what you guys are doing is super interesting, we should talk.” We had a conversation, and we got a Slack channel open with a couple of those folks, as you do when you’re working with folks at Slack. Then we noticed that this new app category is coming out. So because we had that introduction there, we reached out and said, “Hey, we think Ravenna fits really nicely into this new app category. What’s going on here? How can we get involved?”
It was, fortunately, really good timing. We got connected with the partnership folks over there, and they said, “We’re launching this category in two months. If you guys can get your stuff ready, we’re happy to feature you as a launch partner.” Funny how these things work out.
Tim: You all have been great about using advisers but also using your own networks to get feedback. You never know where it’s going to go.
Kevin: You never know, you never know.
Tim: This is another example of putting yourself out there, and getting the feedback. Sometimes it takes you right through to the CEO’s desk.
Taylor: As Kevin mentioned, these are the funner parts, to be frank with you about it. If you have the humility to understand that there’s so much out there to learn — especially going into a category where you’re trying to make some hay and do a different thing — it’s valuable to get a lot of perspectives. The more of that you do, the more you pick up: there’s tangible, tactical, Ps-and-Qs kinds of learnings along the way, but there are also the funner random doors that get opened, such as that one.
Tim: One thing I think is cool too — and part of it is using Slack, and part of it is how you can pull data in from other places — is that questions get asked, and people didn't realize the question's been answered already. How do you create this instant knowledge base from what's already scattered across Slack, or maybe from an existing knowledge base that's there but people don't go look at? It's easier to fire off a Slack like, "Hey, Taylor, can you tell me the answer to X?" And by doing that, you can create an automation so the person gets their answer and the task gets finished, and you didn't have to do anything, right? That's a big unlock here.
Kevin: You’ve mentioned Slack a couple times, and we should revisit that really quickly. Slack is the interface for the end customer of the platform. That’s super critical and was a learning during our listening tour at the beginning of last year. With the traditional help desk, there’s basically a customer portal where you go, you fill out some form, and then your request goes into the ether and you don’t know what happens to it until somebody pings you back a couple of days later like, “Hey, we resolved your issue.”
What basically every customer across the board told us is that employee support happens in Slack now. So, “If you guys are going to build this platform, everything needs to be Slack native. That’s where our employees work. We don’t want to take them out of there. That’s super key to us.” If you go to our website, it’s very clear that we’re deeply integrated with Slack. So, we started building into Slack, and then, to your point about knowledge, we started talking to customers and said, “Hey, you get a lot of repeat questions. A lot of those questions pertain to documents or knowledge bases that you’ve written. If you give us access to those, we can ingest them and use AI to basically automate answers to those questions so you don’t have to answer them over and over again. Just to save you time.”
Some people were like, “That’s amazing, let’s definitely do that.” Other people basically said, “Yeah, it’s not going to work for us.” And so we were like, “Okay, why not?” They were like, “We don’t have good knowledge. We don’t have time to maintain it, it gets out of date really quickly and, frankly, it just doesn’t make the priority list.” And so we asked the next question, which is, “Okay, if you hire somebody, how do they get up to speed? How do they learn how to answer these questions if you’re answering them in Slack?” And they were like, “We literally point them to Slack channels and say, ‘Go read up on how we answer these questions and that’s how you should answer going forward.’”
That was this light bulb moment: there is a treasure trove of corporate information, really knowledge, that exists in Slack, or any team chat application (so Teams as well), just sitting there. And companies don’t derive a ton of value from it. A lot of what we’re trying to build is not only giving operators of these help desks tools to turn Slack conversations into knowledge base articles, but really building a system that can learn autonomously over time.
You should assume that when you’re using Ravenna, your answers are going to get better over time. The system’s going to get better at resolving your internal employees’ queries over time because we’re listening to that data and evolving the way that we respond and take action based on how your employees are answering their colleagues’ questions.
Tim: One of the things that is super exciting here is that I see this as how work gets done inside businesses, and it’s really broadly applicable. On the other hand, a truism about successful startups is that they stay focused, and there is this IT scenario initially where IT is used to using tickets, people are used to asking IT for things. Those things tend to be maybe more automatable, I don’t know. But how do you balance that? Staying focused on, let’s just go nail IT service management, ITSM, versus we have this broader vision around better automation for how enterprises get work done. How do you get that balance right? What are you learning from customers and where are they drawn to initially as you start talking to them and start working together through an initial set of users and customers?
Taylor: I’m going to tie this back to one of the questions you asked: what got you excited about working on this? Rewind the clocks. Kevin runs over, “I’ve got this great idea. The market’s called ITSM.” I’m like, “What? I haven’t heard of this thing.” “No, it’s a huge market.” “Really? I’ve never heard of this acronym before.” ITSM, IT service management, is the name the larger market has traditionally been known by. Half that acronym is IT.
Today if you say, “Look, who’s the ICP? Who do you want us to introduce you to at a company?” we’re going to say, “Look, it’s the IT manager.” And it’s because they know what it is. Again, it’s a longstanding industry: they know what to call it, they know that funny acronym, they know the pain points very, very well, and they understand how to wade through the software. So that is typically, I’d say, the beachhead for our approach.
Tim: That’s the initial wedge. That’s a great place to enter.
Taylor: Correct. Where this gets more interesting, in my opinion, though — I remember noodling on this. I was looking at Zapier’s help desk channel and thinking, “Huh, this is not the most inspiring stuff — password resets, what have you. Is this really the massive market that Kevin’s super excited about?” No shade if anyone from Zapier’s listening in. The channel’s great, by the way. But what light-bulbed for me, looking around the rest of the company, was that it was the same interaction pattern. The same user pattern you see in what was traditionally known as the help desk channel is present in HR ops. It’s the same thing you see in marketing ops. It’s the same thing you see in engineering ops.
It was interesting, though, because I was being very coy interviewing a lot of folks back then. IT knows what they call it. They know what the class of software is, right? But among the folks in charge of marketing ops or engineering ops, I couldn’t find many who knew the acronym ITSM, so I stopped asking that pretty early — but they know the pain. I came around to thinking, “Look, if you are in, call it, an ops position — marketing, engineering, pick your flavor and department — and your job is to provide great service to your colleagues, you are operating a help desk, whether or not you know it. That’s the fact of the matter.” So again, to your question about who we start with: we start with IT. It’s the most well-known commodity in that space.
The excitement for me is that maybe it’s broader than IT, maybe there’s more stuff than that. That’s grown to be true so far in the early innings: other folks see basically a better class of software being introduced by IT. It’s this interesting thing, it’s infectious, like, “Wait, what is that? Where’d that come from?”
And so, in terms of precedence, IT is the number one persona, and that’s the one where we’re going to, I’d say, charge ahead the most in terms of the bespoke workflows they have and the ones we have to help automate better. Nonetheless, HR ops seems to be the one where we’ve seen organic pull — it’s second in position — and after that is revenue ops.
Kevin: I’ll give you a very concrete example. This morning I had a demo call with a large email marketing company in Europe. They had these four IT guys on the call, like, “Hey, we’re looking for a new tool. We need a new help desk tool, we need AI, etc.” Halfway through, as they’re going through all the requirements, they’re like, “Oh, by the way, it’s not just us. It’s facilities, it’s HR,” and I think they said product was the other team. That happens all the time.
We are always talking to IT people, and it always comes up on our calls: “It’s not just for us. Other people who offer internal services need this as well.” So it’s exciting for us because IT is the entry point, but then you’ve got this really nice glide path into the rest of the organization. Again, I don’t know if it’s a secret or whatnot, but it’s one of our core learnings going through this journey — there are a lot of teams across these companies who need this type of tool. So that’s exciting for us.
Tim: Yeah, it’s an interesting form of land and expand.
Kevin: Yeah, exactly.
Tim: IT has budget, they get it, they need it, but everybody is asking them for something, so you can get sort of a viral spread, and there’s no difference in the product functionality whether you’re using it for sales ops or for IT ops.
We’ve referenced ServiceNow a couple of times — one of the most valuable application software companies in the world, with a $175 billion market cap. VCs like to use shorthand to describe companies: one of our best investments ever, Rover, was “Airbnb for dogs.” I’ve shorthanded Ravenna as an AI-native ServiceNow for mid-market and growing companies; ServiceNow is obviously upper enterprise. Do you like that moniker? Should I keep using it, or do I need to disabuse myself of that type of VC speak?
Taylor: I think that’s a good one. It ties back for me at least to the distinction I made earlier around the platform versus the intelligence layer, kind of like, well what are you guys doing? I always like to joke, for better or for worse, we’re doing both. I say for better or for worse, again because it’s a lot of software, but that’s where the conviction is. ServiceNow is what we view as someone who’s taken a very similar bet a long time ago in terms of, “Look, we want to actually own the data layer. We want to actually be the thing that is close to basically all the customer data and the employee data at a company.” We view that as a more durable, longer-term play rather than just the intelligence layer. And so, I like the moniker.
Kevin: Definitely like the moniker.
Tim: All right, I’ll stick with it. So, it’s been fun in this conversation as you ping-pong back and forth, Taylor talking about go-to-market things, Kevin talking about product things. Taylor, your background is traditional engineering leadership. Kevin, you most recently have been doing go-to-market at Amazon, but an engineer by background. How do you divide it up? How do you divide up the responsibilities inside the company? That’s always an interesting thing that sometimes founders struggle with, is we’re full stack, you guys are both full stack, but we have to have some roles and responsibilities here.
Taylor: For Kevin and I, given how long we worked together, I think it’s probably more blurry than most, but I think that’s also one of the benefits of working with him. I know him so well that I can trust him for a wide range of things. That all said, we do try to basically divide up the product and how we go about this. I’ve tried to focus more on the AI automation side of the fence. Kevin’s very much more on the, call it the broader platform side of the fence, and so that’s roughly speaking from a product angle.
From a go-to-market angle, I’d say it’s messy at this point. We’re a young startup, it’s kind of like hit the network, hit all your networks.
Tim: I assume both of you are on customer prospect calls all the time.
Kevin: Of course.
Taylor: I mean — roles and responsibilities only matter so much in terms of if you have people that you think might want to buy this kind of stuff, we got to do that. It’s good to have some delineation between roles, but I think at the earliest stages it’s just messy, and embracing that I think is part of the deal.
Tim: Another way you run the business that was super nice for us in the process leading up to investing is that you’re radically transparent. All of the prospect calls and customer calls, all those videos you record, were all on Slack. You just gave us access to all of them: “Here, go watch them and see what we’re learning and help us along the way.” That was super nice. But that must also permeate through your organization, and maybe it gets at the culture a bit. Maybe speak to the culture some, and what you’re trying to be intentional about in terms of building culture in these relatively early days of Ravenna.
Kevin: I think this was, for me at least, a core learning from the first business. We didn’t do a good job of talking about what we were doing or telling people what we were doing. Part of it was, I don’t know, I didn’t think the business that we had was the most exciting thing in the world. So it was a little bit of not wanting to broadcast it as that much. I would hang out with friends and whatnot, and they wouldn’t know what my business was back then, and I would be kind of frustrated internally like, “How do you not know? We don’t have a lot of friends who started businesses, you should know.” But the fact of the matter is, they shouldn’t know. I should have been a lot more vocal about our business.
This time around, I think there are two things. We want as many people as possible to know about what we’re doing because we think it’s pretty cool. Hopefully, other people will think it’s pretty cool. Hopefully, customers will think it’s pretty cool. The other thing is we want as many sharp minds helping us — in the product and the business — as possible. We think the way to accomplish both of those goals is being radically transparent, with our team and our customers. When we talk about the roadmap, or the stage of the business, what we have and what we don’t, it’s all an open book, and we’re very transparent with them on where we’re at and where we’re going.
With investors as well, we shared a ton of stuff with you guys, and it wasn’t an angle to get you guys excited about what we were doing. It was more that we really liked you guys. We thought you were really sharp, and if we share a lot of stuff and you guys see what’s going on, hopefully you’ll get excited about the business. But then also hopefully you’ll, I don’t know, see something that we’re doing and be able to give us feedback on how we can sell better, how we can build better, pattern match across different portfolio companies that you’ve seen and help us. We want everybody to know what we’re doing, and we want as many smart people helping us and being transparent helps us accomplish this.
Tim: Super effective. We should say that the other investors, who were in even before us — Khosla, Founders’ Co-op — have been, I’d say, best practice in creating a collaborative style where we’re always up to speed and can try to add value.
It probably has impacted recruiting too. It’s a hard recruiting market, especially for good technical talent and AI talent. You’ve done an amazing job of building the initial engineering team, including great AI background. Without giving away any hiring secrets, talk a little bit about how you’ve been able to do that. It’s never easy, but you’ve made it look relatively easy in these early days. What’s it been like in this hiring market, especially when you’re competing for AI talent?
Taylor: I don’t have any deeply held secrets.
Tim: At least that you’re going to share.
Taylor: If I did, I wouldn’t give it away on a podcast anyway. But really, we’re super excited about the team we have, and I think equally proud of the culture, which we’ve been much more intentional about building this time around. We’ve tried to hold a high bar with the folks we’re interviewing. I think that was more of a self-serving thing originally, but I like to think it comes through, frankly speaking, for a lot of the folks we’re speaking with.
It’s not just about the mission per se, it’s also about knowing that we basically have built quite a bit of software in our past lives and have a lot of perspective and a lot of conviction. Not just the market we’ve talked a lot about, but also how to go about building this and how we’re thinking about taking a different approach. I think that in itself has helped basically attract a lot of folks that, frankly speaking, we’re honored to be working with at this point as well.
Kevin: Totally. My playbook, I’m happy to share it because it’s pretty simple. I reach out to a lot of people and I tell them that Khosla and Madrona put some money into a company to help go after ServiceNow’s market, and people get excited about it. Yeah, it’s just trying to find good people and trying to get them to have a conversation with you and then explain the vision of what we’re doing and why we think not only the opportunity is really big, but we want to build the next great Northwest software company, if not West Coast software company.
We want to be intentional about building an amazing engineering culture, an amazing product culture, an amazing culture that works backward from customers. Amazon likes to say they’re the most customer-centric company. Hopefully, we’re going to be the most customer-centric company over time. And we’re very much striving to do that right now, but just really build a great place where people want to come work.
Tim: What’s an example of an assumption you had coming into this company that, a year later, turned out to be wrong, and you had to quickly work through? Not necessarily a 180-degree change in direction, but constantly sort of course correcting.
Taylor: It goes back to what I said about picking a large market, and being conscious about that. Nothing in life is free. You get into it and you quickly realize a couple of different things. If you pick a large existing market, sure, people know it, you can assign a market cap to it. It probably makes the investor conversation a little easier in terms of figuring out the TAM. But once you start actually building, you quickly realize that a well-known market has a lot of well-known features, a lot of well-known capabilities, a lot of well-known expectations from the buyer. Which at some level is good — it clears things up.
The trade-off we found is that it translates into a lot of software. So again, for better or for worse, that fits some of our strengths and some of the recruiting we’ve done. We’ve been moving extremely fast because we have to. Another quick tenet of how Kevin and I think about building companies: the whole stealth thing is orthogonal to us. I’m not going to go so far as to bash the folks who want to do that type of thing.
One of the learnings from our journey is that there’s nothing more true, harsh, and real than the market. Every bit of time you spend not interfacing with that market with what you’re building is a gap that you are accumulating and accumulating. One thing we always talk about at Ravenna is making contact with reality as fast as possible.
Tim: I agree. The value from asking for feedback, shipping, and getting the feedback from actually shipping so outweighs any risk of, “Gosh, somebody else took my idea,” or, “We should have stayed in stealth longer.” It’s not even close. You guys have lived that. We keep talking about this big market. We alluded to this — that a way to think about Ravenna is an AI-native ServiceNow for the mid-market. So ServiceNow just did a big acquisition.
Kevin: Yeah, it did.
Tim: They bought this company called Moveworks, you know, biggest acquisition in the history of ServiceNow. It’s kind of an AI ITSM. How do you think about that move? Is that relevant for Ravenna? How is Moveworks similar or different to the product you’re building in the market you’re going after?
Kevin: In terms of is it relevant? Sure, it’s relevant in the sense that it’s definitely in the market we’re playing in. We got really excited when we saw it. Clearly, we’re not the only smart people in the world who know there’s a lot of opportunity in this space, but it’s exciting to see the activity, and obviously a big acquisition, so it’s cool to see.
Moveworks is a previous-generation AI intelligence layer on top of existing help desks. It was brought up a lot by investors during our initial fundraising: “Are you guys trying to be Moveworks? Are you trying to be ServiceNow? How do you think about it?” Because there’s the AI, but there’s also the platform. Our approach is distinct in the sense that Moveworks sits on top of existing platforms like ServiceNow, whereas we’re trying to build the foundational platform plus the intelligence layer on top.
At the end of the day, customers will get similar AI capabilities from Ravenna — current, next-gen capabilities — because we’re LLM native. I think they’re built on previous-generation NLP technologies.
Tim: Which has a huge impact on accuracy and does it work?
Kevin: We think so. Yeah, exactly. I mean, no shade or anything to the Moveworks folks. They’ve clearly built an awesome business and had an amazing outcome and congratulations to the team because that’s fantastic. That’s what every entrepreneur strives for. We just believe, in the fullness of time, the majority of the value accrues to the platform if you can become the system of record. We honestly felt like this was the time to take a shot at building a new system of record in this space. That’s one of the fundamental differences between us.
Now, in terms of near-term impacts on the market, I’m not sure what ServiceNow’s plans are for Moveworks, but there is a large, call it mid-market-to-enterprise, segment of customers who need these AI capabilities. Whether Moveworks continues to play there or ServiceNow brings it more upmarket into large enterprise, which is where they like to play, there’s just a lot of opportunity for us in this space.
Tim: Yeah, that’s a great point. We’ve talked about the Slack integration, being easy to get up and running, the beautiful UI — but another thing is price point.
Kevin: Yeah, very true.
Tim: You get a lot of functionality at the enterprise level, but you’re making this accessible, at a price point that works for faster-growing companies, and they can grow with you.
Kevin: 100%.
Tim: We’ve talked about how AI is an integral part of the product, and you also built AI systems at Zapier, Taylor. One question we think about a lot from an investment standpoint is what’s durable? Is there a moat from the AI itself? What’s your take on that? Do you feel like the technology itself is a place that you can build competitive advantage? You’re building an agent-based system here. What does that mean to you, and is that part of what you think you’ll provide customers with, with durable competitive advantage over time?
Taylor: This goes back to what got me excited about this originally. It might be useful to first break down what we mean by AI and automation here. Big generalization, big time: 50% of the automation falls into the category we talked about earlier, around, “Hey, there’s information somewhere. It’s in Slack, it’s in a KB, it’s in these other interesting places. Can we answer that in a more automated way?” That’s one side of it.
The other side of it is actions. When I say that, for lack of a better example: instead of asking, “Hey, what’s the procedure to reset my password?” it’s more interesting to say, “Hey, can you reset my password?” Actions. The first side, I think we covered decently well. One of the things Kevin touched on is creating knowledge. That’s a very interesting thing here — whether or not you want to call it us building a KB, we haven’t gone so far as to put that stake in the ground as a product feature yet.
Nonetheless, one of the things that gets me excited about the idea is that Ravenna grows with you. That knowledge is in all these disparate places, and we have the ability to hone in on where people work and make Ravenna better.
Tim: Awesome. So exciting. So much to go build, so much opportunity. Last question. Any advice for other aspiring founders out there thinking about going to get something started in this market right now with AI?
Kevin: The thing I would encourage everybody to do, if you’re thinking about building a product, is go talk to a lot of customers before doing it. The biggest mistake we’ve made, many times throughout our careers, is, “Oh, this seems cool. Let’s go spend a month, three months, six months.” As engineers who know how to code, the bias is just to go build, because it’s easy. Building is way easier than finding 20 customers who will give you a half hour of their time to validate your idea. But you’ll save yourself so much time, either disproving ideas or validating your idea and gaining a lot more conviction about going off and doing it. The biggest piece of advice I can give to folks who want to start companies is to go talk to 20 or 30 companies or customers before you write a single line of code.
Tim: You think you’ve done enough customer validation? Go do more, double down.
Kevin: You can never have enough. Even now, every customer call we’re on at our stage, I mean, we’re not a super old company, we’re eight months old, but we treat it as a discovery call. We spend most of the time asking questions and trying to learn as much as possible about the pain that they’re trying to solve for, because that influences what we’re going to build next week, next month, et cetera. We spend a little bit of time talking about Ravenna as well, but the learnings are still critical for us, and I think will always be.
Tim: Bring us home, Taylor.
Taylor: I’m always reticent to give advice, because having done this for a decent amount of time, I’ve found that everyone’s experience is so bespoke to them. I do love hearing advice and other people’s journeys, but that’s the way I think about it.
One of the things from my journey that I try to hold true, and it always comes up, even in conversations a little bit like this one: we’ve talked about ServiceNow so much, and the incumbents out there, but at the end of the day, the only thing that matters is the customer. That’s the only thing that matters. I try to hold a, call it, competitor-aware but customer-obsessed point of view.
That’s critical because I’ve seen the playbook run the other way around, and I’ve seen not a lot of success. Whereas I’ve been lucky enough to work with folks who had a maniacal focus on the customer, and even to my surprise, despite the fact that we were being circled by crazy incumbents and everything on the wall said we were going to lose, it was that maniacal focus on the customer and the problem that pulled us through at the end of the day. So I’ll try to carry that through to where we’re at here too.
Tim: Customers, it’s all about the customers.
Kevin: A hundred percent.
Tim: Thank you both so much. It’s a real privilege and a ton of fun to be working together. Looking forward to the future.
Retrieval-Augmented Generation (RAG) is rapidly emerging as a robust framework for organizations seeking to harness the full power of generative AI with their business data. As enterprises seek to move beyond generic AI responses and leverage their unique knowledge bases, RAG bridges general AI capabilities and domain-specific expertise.
Before diving into the dangers, let’s review what RAG is and its benefits.
What is RAG?
RAG is an AI architecture that combines the strengths of generative AI models — such as OpenAI’s GPT-4, Meta’s LLaMA 3, or Google’s Gemma — with information from your company’s records. RAG enables large language models (LLMs) to access and reason over external knowledge stored in databases, documents, and live in-house data streams, rather than relying solely on the LLMs’ pre-trained “world knowledge.”
When a user submits a query, a RAG system first retrieves the most relevant information from a curated knowledge base. It then feeds this information, along with the original query, into the LLM.
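That retrieve-then-generate flow can be sketched in a few lines. In this minimal illustration, naive keyword overlap stands in for a real embedding/vector search, and the knowledge base entries and the `call_llm` step are illustrative assumptions rather than any particular vendor's API.

```python
# Minimal sketch of the RAG retrieve-then-generate flow.
# Keyword overlap stands in for a real vector search, and the final
# LLM call is left as a hypothetical placeholder (call_llm).

KNOWLEDGE_BASE = [
    "Acme's refund policy allows returns within 30 days of purchase.",
    "Acme support hours are 9am-5pm EST, Monday through Friday.",
    "Acme's enterprise plan includes SSO and audit logging.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Feed the retrieved context plus the original query to the model."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

query = "What is the refund policy?"
prompt = build_prompt(query, retrieve(query, KNOWLEDGE_BASE))
# prompt would then be sent to the LLM, e.g. response = call_llm(prompt)
```

A production system replaces the overlap scoring with embeddings and a vector index, but the two steps (retrieve, then generate with the context in the prompt) are the essence of RAG.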
Maxime Vermeir, senior director of AI strategy at ABBYY, describes RAG as a system that enables you to “generate responses not just from its training data, but also from the specific, up-to-date knowledge you provide. This results in answers that are more accurate, relevant, and tailored to your business context.”
Why use RAG?
The advantages of using RAG are clear. While LLMs are powerful, they lack the information specific to your business’s products, services, and plans. For example, if your company operates in a niche industry, your internal documents and proprietary knowledge are far more valuable for answers than what can be found in public datasets.
By letting the LLM access your actual business data — be it PDFs, Word documents, or Frequently Asked Questions (FAQs) — at query time, you get much more accurate and on-point answers to your questions.
In addition, RAG reduces hallucinations. It does this by grounding AI answers to reliable, external, or internal data sources. When a user submits a query, the RAG system retrieves relevant information from curated databases or documents. It provides this factual context to the language model, which then generates a response based on both its training and the retrieved evidence. This process makes it less likely for the AI to fabricate information, as its answers can be traced back to your own in-house sources.
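One common way to make that traceability concrete is to tag each retrieved chunk with a source identifier and instruct the model to cite it. A minimal sketch follows; the document names and citation format are illustrative assumptions, not a standard.

```python
# Sketch: tagging retrieved chunks with source IDs so generated
# answers can be traced back to in-house documents.
# The source names and [bracket] citation format are illustrative.

sources = {
    "hr-handbook.pdf": "Employees accrue 1.5 vacation days per month.",
    "it-faq.docx": "Password resets are self-service via the IT portal.",
}

def grounded_context(retrieved: dict[str, str]) -> str:
    """Format each chunk with its source ID so the model can cite it."""
    return "\n".join(f"[{src}] {text}" for src, text in retrieved.items())

prompt = (
    "Answer the question and cite the [source] for every claim.\n"
    + grounded_context(sources)
    + "\nQuestion: How many vacation days do I accrue?"
)
# Every claim in the model's answer should then carry a [source] tag
# that maps back to a known document, making the output auditable.
```

The payoff is that when an answer looks wrong, you can check the cited document directly instead of guessing where the model got it.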
As Pablo Arredondo, a Thomson Reuters vice president, told WIRED, “Rather than just answering based on the memories encoded during the initial training of the model, you utilize the search engine to pull in real documents — whether it’s case law, articles, or whatever you want — and then anchor the response of the model to those documents.”
RAG-empowered AI engines can still hallucinate, but they’re less likely to.
Another RAG advantage is that it enables you to extract useful information from your years of unorganized data sources that would otherwise be difficult to access.
Previous RAG problems
While RAG offers significant advantages, it is not a magic bullet. If your data is bad, the phrase “garbage in, garbage out” comes to mind.
A related problem: If you have out-of-date data in your files, RAG will pull this information out and treat it as the gospel truth. That will quickly lead to all kinds of headaches.
Finally, AI isn’t smart enough to clean up all your data for you. You’ll need to organize your files, manage RAG’s vector databases, and integrate them with your LLMs before a RAG-enabled LLM will be productive.
The newly discovered dangers of RAG
Here’s what Bloomberg’s researchers discovered: RAG can actually make models less “safe” and their outputs less reliable.
Bloomberg tested 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, using over 5,000 harmful prompts. Models that rejected unsafe queries in standard (non-RAG) settings generated problematic responses when the LLMs were RAG-enabled.
They found that even “safe” models exhibited a 15–30% increase in unsafe outputs with RAG. Moreover, longer retrieved documents correlated with higher risk, as LLMs struggled to prioritize safety. In particular, Bloomberg reported that even very safe models, “which refused to answer nearly all harmful queries in the non-RAG setting, become more vulnerable in the RAG setting.”
What kind of “problematic” results? Bloomberg, as you’d expect, was examining financial results. They saw the AI leaking sensitive client data, creating misleading market analyses, and producing biased investment advice.
Besides that, the RAG-enabled models were more likely to produce dangerous answers that could be used for malware creation and political campaigning.
In short, as Amanda Stent, Bloomberg’s head of AI strategy & research in the office of the CTO, explained, “This counterintuitive finding has far-reaching implications given how ubiquitously RAG is used in gen AI applications such as customer support agents and question-answering systems. The average internet user interacts with RAG-based systems daily. AI practitioners need to be thoughtful about how to use RAG responsibly, and what guardrails are in place to ensure outputs are appropriate.”
Sebastian Gehrmann, Bloomberg’s head of responsible AI, added, “RAG’s inherent design, pulling external data dynamically, creates unpredictable attack surfaces. Mitigation requires layered safeguards, not just relying on model providers’ claims.”
What can you do?
Bloomberg suggests creating new classification systems for domain-specific hazards. Companies deploying RAG should also improve their guardrails by combining business logic checks, fact-validation layers, and red-team testing. For the financial sector, Bloomberg advises examining and testing your RAG AIs for potential confidential disclosure, counterfactual narrative, impartiality issues, and financial services misconduct problems.
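Here is a minimal sketch of what such layered guardrails might look like in code: a business-logic check for obvious client identifiers, plus a fact-validation layer that flags sentences poorly supported by the retrieved context. The PII patterns, the word-overlap heuristic, and the 0.3 threshold are all illustrative assumptions; a production system would use vetted classifiers and domain-specific rules:

```python
import re

def check_pii(answer):
    """Business-logic check: block obvious client identifiers.
    These patterns are illustrative, not exhaustive."""
    patterns = [r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-like
                r"\b\d{16}\b"]             # bare card-number-like
    return [p for p in patterns if re.search(p, answer)]

def check_grounding(answer, context, threshold=0.3):
    """Fact-validation layer: flag sentences that share too few
    content words with the retrieved context."""
    ctx_words = set(re.findall(r"[a-z]{4,}", context.lower()))
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"[a-z]{4,}", sent.lower()))
        if words and len(words & ctx_words) / len(words) < threshold:
            flagged.append(sent)
    return flagged

def guardrail(answer, context):
    """Layered safeguard: refuse any answer that fails a check,
    rather than relying on the model provider's safety tuning."""
    problems = check_pii(answer) + check_grounding(answer, context)
    return ("BLOCKED", problems) if problems else ("OK", [])

context = "The refund policy allows returns within 30 days of purchase."
good = "Returns are accepted within 30 days under the refund policy."
bad = "Client SSN 123-45-6789 qualifies for a guaranteed 40 percent yield."
print(guardrail(good, context)[0])
print(guardrail(bad, context)[0])
```

The point is the layering: each check is weak on its own, but running them together, after retrieval and after generation, catches failures the model's own safety tuning misses.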
You must take these issues seriously. As regulators in the US and EU intensify scrutiny of AI in finance, RAG, while powerful, demands rigorous, domain-specific safety protocols. Last, but not least, I can easily see companies being sued if their AI systems provide clients with not merely poor, but downright wrong answers and advice.
The 8:15 a.m. scene in front of the school that sits on a dusty, sun-soaked residential street in Brownsville, Texas, just across the border with Mexico, looks much like any other elementary or middle school in that chaotic period before the morning bell. Groups of tousled boys and girls get off the bus or otherwise trickle, saunter and dawdle into a nondescript building where they will spend the next few hours, in theory, learning. These kids, though, seem more jubilant than might be expected for a Tuesday morning in April.
The days of dodging class or suffering from a lack of motivation appear to be a thing of the past at Alpha School, a private pre-K through eighth grade institution that utilizes personalized artificial intelligence to teach an entire day of core academic lessons in just two hours.
The tech-savvy students then spend their afternoons working on non-academic critical life skills like public speaking, financial literacy or even how to ride a bike. Staff — known here as “guides” rather than teachers — say they strive to facilitate a sense of independence into each child while overseeing a supportive, nurturing environment like any attentive teacher in any solid school district in America.
The innovative approach at the South Texas campus, which opened in 2022, primarily aims to instill a love for learning into each young mind, cofounder MacKenzie Price told Newsweek in mid-April ahead of Alpha’s expansion this fall.
Kindergartner Sarah Schipper, 6, said staff at the “special” school, known as guides, act as facilitators rather than traditional teachers. (Newsweek)
A 2-Hour School Day?
Once inside, it becomes clearer that whatever is happening at this school, it’s unique. In kindergarten, the students show a palpable level of excitement as 6-year-old Sarah Schipper collaborates with a dozen other classmates to solve a simple logic game. In the lesson, students deduce the correct path by jumping on colored dots to find their way across six multi-hued rows. Wide smiles, upbeat pop music and gentle suggestions of where to hop next dominate the lively room.
“There’s a little code and we aren’t able to see it,” said the bubbly kindergartener. “And we have to guess it — and people can cheer for us and give us ideas of how to win.”
Sarah and her classmates encouraged each other to make bold choices at each pass but had a sense of compassion for any wrong move. One girl suggested the cohort would “grow from losing” while another boy kindly proposed a more collaborative approach — along with less shouting.
The cooperative activity serves as a springboard into Alpha’s AI-powered 2 Hour Learning platform, where students use laptops for 30-minute sessions in core academic subjects, including math, English, science and social studies. The personalized approach utilizes proprietary and third-party apps and allows students to master topics up to five times faster than traditional methods, Alpha claims.
Sarah, who prefers Alpha’s life skills workshops that come later in the day, said she wants to be a scientist and study “microscopic things,” insisting the tech-laden model will help her attain that goal while honing an unabashedly self-sufficient educational perspective.
“That we all work on computers,” she said when asked what separated Alpha from most other public and private schools. “Not all schools work on computers, and this is a very special school. Maybe all the other schools everywhere can work on computers?”
Trump Weighs In
The precocious digital native said she believes Alpha’s AI-centric formula will set her apart from her peers — and she’s not alone.
Last week, President Donald Trump signed an executive order to integrate AI into K-12 classrooms nationwide, aiming to cultivate tech-related expertise in future generations. The directive also establishes the White House Task Force on AI Education and requires Department of Education Secretary Linda McMahon to prioritize the use of AI in discretionary grant programs for teacher training.
President Trump speaks to reporters while flanked by Secretary of Commerce Howard Lutnick, Secretary of Labor Lori Chavez-DeRemer and Secretary of Education Linda McMahon after signing seven education-related executive orders on April 23, including integrating AI into K-12 education.
Chip Somodevilla/Getty Images
“[AI] is rapidly transforming the modern world, driving innovation across industries, enhancing productivity, and reshaping the way we live and work,” the executive order reads. “To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology.”
Trump’s order also creates a “Presidential Artificial Intelligence Challenge,” a competition for students and teachers to showcase their AI skills — a modern take on LBJ’s Presidential Physical Fitness Challenge — and stresses the need for educators to fully embrace technology.
“To achieve this vision, we must also invest in our educators and equip them with the tools and knowledge to not only train students about AI, but also to utilize AI in their classrooms to improve educational outcomes,” the directive continues. “Professional development programs focused on AI education will empower educators to confidently guide students through this complex and evolving field.”
The order calls for the education and agriculture departments to allocate discretionary grant money and repurpose other training initiatives for the AI expansion. Educators who spoke to GovTech said they were skeptical schools would be given the resources, particularly since the administration is in the process of gutting the Department of Education that would ostensibly be charged with allocating those resources.
The Screen Time Question
Back at Alpha, Sarah praised her guides, who largely act as facilitators rather than traditional instructors.
“They don’t tell me the answers,” Sarah said of how her guides interact with her during the two-hour learning sprints. “They just give me resources.”
Some students at Alpha Brownsville learn piano or how to ride a bike during afternoon workshops following personalized, AI-powered lessons in core subjects every morning. Joshua Rhett Miller
Sarah also stressed how she’s motivated to keep at it long after her school day concludes, around 3:30 p.m. most days.
“And I work on my computer, I can bring it home,” she said. “Sometimes, there’s incentives, but I have to get a lot of masteries, so I work probably until midnight.”
Youthful exaggeration aside, Sarah’s affinity for school appears evident within minutes.
“I want to figure what stuff are in the world that I don’t know,” she said. “I did get a science kit for Christmas. I got a real microscope, actually.”
Mo Swain, who serves as campus lead and interim director at Alpha Brownsville, said the vast majority of the school’s roughly 60 students complete requisite lessons during their accelerated learning sessions, negating the need for homework or more screen time in most cases. Pre-K students aren’t even allowed to take their laptops home, she said.
“If kids bring them home, parents know it’s to wrap up some work or maybe they have a passion project they’re working on, like some of the older students,” Swain said. “When they’re older, they have more autonomy but when they’re younger, we really do focus on getting it done at school. Parents don’t want their kids on technology all the time.”
Swain’s comments reflect the tension at the heart of an experiment like Alpha. Schools across the country are banning devices, with New York the latest state to agree on a “bell-to-bell” phone ban as part of its budget. But such measures do not address the school-issued devices increasingly common in classrooms across the country. According to a survey by educational software firm Lightspeed Systems, K-12 students in the U.S. are spending an average of 98 minutes per day on school-issued laptops or tablets.
Usage of those devices peaks in middle school at 2 hours and 24 minutes for sixth graders, then declines in high school as students move to personal devices and become more involved in extracurricular activities or start working, according to Lightspeed data provided to the Wall Street Journal in January.
In that context, Alpha’s two-hour sprint approach seems relatively moderate. Just a “handful” of parents this year have voiced concerns about students spending too much time on their laptops after the school day ends, Swain said.
Mo Swain, campus lead and interim campus director at Alpha Brownsville, said students at the school invariably display “intrinsic motivation” to learn. (Newsweek)
“But kids, when they really want something or they’re really working hard to achieve that goal, they’ll go home and say, ‘I have to get this done,’” Swain said. “Because they really want to — but our parents know they don’t have to.”
Cultivating that internal motivation is a big part of Alpha’s mission. And based on some early data, it’s working. Price, the cofounder, told Fox News in March that Alpha classes were in the “top 2 percent” of test scores in the country.
Students are universally “surpassing” education benchmarks and take the Northwest Evaluation Association’s Measures of Academic Progress assessments, which are aligned with Common Core standards, Swain told Newsweek.
“They’re learning the same things you could expect to learn in a public school or a different private school,” she said. “They’re just learning it in a different way and at a faster pace.”
Alpha’s Origin Story
Price launched the framework for Alpha in Austin, Texas, a decade ago out of personal necessity. The mother of two said her zoned school district, despite being one of the top districts in Texas, couldn’t meet her daughters’ needs, particularly regarding “personalized attention.”
“The teacher in front of the classroom model is required to teach a certain curriculum kind of, you know, to the middle,” said Price, adding that one of her daughters particularly suffered in second grade, inspiring her to act.
“She looked at me and said, ‘School is so boring,’” Price recalled. “In two and a half years, they had taken a child who was tailor-made to love school and be curious and interested, and they wiped away that passion. And I realized it wasn’t about the teachers or that school, or moving from a public to a private school. It was the model of a teacher in front of a classroom that wasn’t working.”
Price said she soon realized adaptive apps could be the key to providing a personalized process to each student, combined with regular surveys and interactions with guides.
Staff at the private school in Brownsville, Texas, known as guides, emphasize a considerate approach as students begin their personalized learning sessions. Newsweek
“Over the next 11 years, this has grown into numerous schools all based on the idea that number one, kids should love school,” Price said.
“If they’re going to be spending five days a week the majority of the year for 13 years in a place, they should love it. And kids can learn twice as fast in only a couple of hours a day by getting this one-to-one, mastery-based tutoring experience.”
Pandemic Reveals Parent Frustrations
Price, who studied psychology at Stanford University, insists the model is “accessible and scalable,” pointing to Alpha’s expansion plans in markets beyond Texas. Alpha first ventured into Florida last year, launching a campus in Miami, while additional locations in Tampa, Palm Beach and Orlando are expected to open this fall. Job postings for guides at some of those locations list a starting salary of $100,000.
Alpha also intends to grow its footprint in Texas, opening schools in fall 2025 in Houston and Fort Worth, as well as outposts in Phoenix, New York City and Santa Barbara, California. Tuition ranges from $10,000 in Brownsville to $65,000 in New York, according to its website.
“Education is ripe for transformation and the beauty of what has happened in the last few years with artificial intelligence coming on is that now we can really make sure that children are learning efficiently and effectively,” Price said. “And they are getting that one-to-one, mastery-based experience.”
The traditional role of teachers has also been vastly reimagined, Price said, as Alpha guides primarily provide motivational and emotional support rather than creating lesson plans, delivering lectures or grading assignments.
“We believe kids are limitless and we provide really high standards,” she said. “And we provide really high levels of support because our teachers have time to do that. That’s what’s really special — artificial intelligence is allowing us to raise human intelligence.”
Price said Alpha’s growth reflects the increasing number of parents nationwide who want alternatives to traditional teaching methods — what was once a pet cause of a niche subset of parents that exploded with the frustrations born out of the pandemic. She praised the Brownsville Independent School District, but said its “severely underfunded” schools had ongoing challenges.
The SpaceX Connection
Roughly half of Alpha Brownsville’s students are children of SpaceX employees who work at the company’s headquarters, about 25 miles east. Billionaire Elon Musk is trying to formally incorporate the Cameron County community, known as Boca Chica Village, as Starbase, Texas. The SpaceX investment has helped boost the economy around Brownsville, which is home to about 185,000 people and among the poorer parts of Texas. More than a quarter of Brownsville residents live below the poverty line, about twice the national average. Alpha says it provides need-based assistance in special cases to offset the $10,000 tuition.
The SpaceX Starbase facility is seen a day before Starship Flight 3’s scheduled launch near Boca Chica beach on March 13, 2024 in Brownsville, Texas. Brandon Bell/Getty Images
“It’s been a really great environment for us to test out the model and understand how it works with a diverse population, both racially and socioeconomically,” Price said of Brownsville. “What we’re really showing is artificial intelligence and delivering education via that format is kind of a great equalizer.”
Messages seeking comment regarding Alpha from Brownsville school district officials, as well as the federal Department of Education, were not returned.
The American Federation of Teachers, the union that represents 1.8 million pre-K through 12th grade educators, said AI can be a “powerful tool” in classrooms so long as it is used safely and thoughtfully.
“Yet no matter how advanced, it cannot replace the critical role of human educators,” AFT Secretary-Treasurer Fedrick Ingram told Newsweek in a statement. “Real, consequential learning only happens when teachers and students collaborate in an atmosphere of mutual trust, charting a learning path forward together.”
Much like pencils and calculators, Ingram acknowledged AI is “here to stay,” but said it can only reach its full potential under the guidance of trained educators who know how best to integrate the technology into their classrooms.
Some students at Alpha Brownsville are so pleased with their success and progress from past public-school environments that they’re working to open a high school, effectively attempting to fill their own need.
Seventh-grader Savannah Marrero, now in her third year at the school, wants to launch the new high school so she can continue her personal momentum after she felt “stagnant” at schools in Fort Worth and Dallas, she said.
Savannah Marrero, now in seventh grade at Alpha Brownsville, wants to launch a high school in the Texas town in fall 2026 to continue her AI-powered education. (Newsweek)
“Right now, the Rio Grande Valley, Brownsville and all the cities surrounding it are lagging behind in education in the U.S., so students going from a fast-paced environment like Alpha and then having to cut it off and go to a traditional school doesn’t make sense,” she said. “So that’s why I want to continue it.”
Savannah would attend the public high school in Brownsville if her vision doesn’t become reality, but she’s currently researching legal requirements and previously visited Alpha’s high school in Austin as part of her plan.
“Obviously, since I’m 12, I’m not going to know everything about financial stuff and funding and things like that, so I have adults to help me,” she said.
Savannah isn’t sure about her ultimate career choice, but thinks about owning a business one day. Helping to launch a private school that she would ultimately attend would be a fitting apprenticeship, she realized as she was speaking to Newsweek.
“Even when I get home, I want to do work,” she said. “Like, all the kids here are passionate about learning and reaching their goals.”
The seventh-grader also said she experienced something of a “mindset” change when she switched schools to attend Alpha, and feels newly empowered in a way she didn’t earlier in her education.
“The guides will never tell you that you can’t do something,” she said. “So over time, you develop that mindset and the environment from day one is you can do anything if you set your mind to it. That’s where I started to learn motivation.”
AI and the Role of Teachers
As the Trump administration moves to advance the use of AI in education, some experts said Alpha’s condensed, tech-heavy model and the early successes it is showing should prompt discussions in public districts across the country.
The tenets of Alpha’s condensed, AI-powered learning model encourage students to empower themselves as they maximize efficiency. Newsweek
“Take a look at how they’re delivering core instruction, whether it could be reconfigured and delivered more efficiently,” said Robin Lake, director of the Seattle-based Center on Reinventing Public Education.
Lake said many public school districts should take a critical look at how teachers use their time, and think of how to use electronic learning tools more coherently and efficiently. AI and other technological advancements have created an opportunity to rethink the role of educators entirely, she said.
“That’s a really important one, a really important challenge for public schools,” Lake said of the teacher question. “Because, you know, the one teacher for 25- to 30-kid model has proven maybe impossible for a lot of schools.”
Alpha acknowledges its model may not work for every student, but the looming integration of AI into education is undeniable and fast approaching, Lake said.
“It’s going to push everybody to start asking these kinds of questions,” she said. “Are we leveraging teachers to their best effect? Are we putting their talents to the most important use? There are many who are saying it’s a waste of time to be spending time on basic instruction that an AI-powered tool can deliver. Instead, they should be mentoring, building relationships with kids, helping to motivate kids, helping kids to think critically and deeply — which is what I think Alpha School is trying to do.”
Lake suggested public school officials could begin analyzing staffing models and the typical student’s daily schedule, among other aspects, as part of a possible reimagining of K-12 education.
“We have to question that and think differently, especially as dollars get really tight for a lot of public schools and outcomes are not increasing,” Lake told Newsweek.