Category: Artificial Intelligence

  • Atlanta’s new AI commission to hold its first meeting Wednesday

    The City of Atlanta will convene the first meeting of its newly formed Artificial Intelligence Commission on May 7 at 4 p.m. at Atlanta City Hall in the Larry Dingle Committee Room.

    What we know:

    The commission was established by legislation introduced in December 2024 by District 2 Council member Amir Farokhi and approved by the Atlanta City Council. The initiative aims to explore how artificial intelligence can enhance city operations, improve service delivery, and increase government efficiency.

    The 13-member commission includes a mix of city leaders, academics, and technology experts. Current members include:

    • Jason Sankey, Chief Information Officer, City of Atlanta
    • Nikhil Deshpande, Chief Digital and AI Officer, State of Georgia
    • Larry Williams, Technology Association of Georgia
    • Donald Beamer Jr., appointee of Mayor Andre Dickens
    • John Yates, technology policy and research representative
    • Dr. Charlotte Alexander, Georgia Institute of Technology
    • Dr. Joy Harris, Georgia State University
    • Council member Amir Farokhi, appointee of Council President Doug Shipman
    • Matthew Garver, representing Council Districts 1-4 and Post 1 At-Large

    Four commission seats remain vacant, including representatives for Emory University, Atlanta University Center, and two additional district groupings.

    The backstory:

    Atlanta is one of many cities currently exploring how to use AI within government.

    According to Cities Today, New York has implemented the AI Action Plan, which focuses on responsible AI governance across city agencies.

    Boston has also created guidelines for responsible AI use, like ensuring transparency and accuracy in applications such as automated translation and chatbot services. The city also encourages safe spaces for experimentation.

    The state of New Jersey has implemented a new AI translation service for applicants seeking unemployment assistance and other public services. It also has an active AI task force that is looking into other ways to use AI throughout the state. 

    State and local governments in Arizona have implemented a variety of AI-related policies. The state also created an AI steering committee to help inform future AI deployment and identify potential applications for the technology, according to GovTech.com.

    San Jose, California, was instrumental in launching the GovAI Coalition, which includes about 550 member agencies from across the United States. Its mission is to promote responsible and purposeful AI in the public sector.

    What’s next:

    The commission is expected to provide guidance on how AI can be responsibly and effectively integrated into city services while addressing ethical and community concerns. The meeting is open to the public.

  • How Artificial Intelligence is Shaping the Future of Pharmacy Practice

    Artificial intelligence (AI) is reshaping health care delivery, and pharmacy practice is no exception. AI is streamlining workflow, enhancing patient care, and challenging pharmacists to redefine their roles. Although AI holds great promise, it also raises questions about future job security and regulatory frameworks.

    As a pharmacist working in both retail and hospital settings, I’ve witnessed firsthand the pressures mounting in our field, including higher patient loads, greater clinical responsibilities, and increasing administrative tasks. Amid this complexity, AI is emerging as a powerful tool, offering the potential to alleviate some of the burdens we face while also enhancing our ability to provide quality care.

    In the hospital environment, AI-driven tools are already proving valuable. Clinical decision support systems flag drug interactions and dosing errors in real time. Algorithms aid in antimicrobial stewardship by analyzing resistance trends and recommending targeted therapies.1 In one hospital where I’ve practiced, AI models integrated into the electronic health record (EHR) predicted patient deterioration, prompting earlier interventions and improved outcomes.2 These systems are far from perfect, but they represent a shift toward proactive, data-driven pharmacy.

    In the retail pharmacy, the opportunities are just as impactful. AI-based platforms are helping automate refill workflows, prior authorizations, and even patient communication. Predictive analytics can identify patients at risk of nonadherence or potential complications. AI tools can also support OTC recommendations based on symptoms and medication history, freeing up pharmacists to focus more on counseling and clinical services.1 I’ve personally seen how automation of insurance claim processing and smart inventory systems save hours of manual work every week.

    Still, AI integration isn’t without challenges. One concern is data privacy, especially with cloud-based tools that interface with protected health information.3 There’s also the risk of bias in AI models, especially if they are trained on incomplete or non-representative data.4 Importantly, some pharmacists worry about losing their clinical intuition in favor of black-box suggestions that may not account for patient nuance. From my perspective, these concerns highlight the need for pharmacists to stay engaged in the development and testing of these tools.

    Another barrier is cultural. In both hospital and retail settings, I’ve encountered pharmacists who are skeptical or even fearful of AI, viewing it as a threat rather than a resource. This hesitation can slow adoption and lead to missed opportunities. Pharmacists must be proactive in shaping the way AI is implemented in our field. That includes participating in pilot programs, advocating for clinical input in design, and embracing tech literacy as a core competency.

    Ultimately, AI should be seen not as a replacement, but as a partner. It can take over repetitive and time-consuming tasks, but it can’t replace the nuanced clinical judgment, empathy, and patient relationships that define our profession. Pharmacists are uniquely positioned to guide AI integration in ways that protect patient safety and elevate our scope of practice.

    But this leads us to a pressing question: Will AI make the pharmacist obsolete?

    The reality is more nuanced. Yes, AI will reduce the need for manual verification of low-risk prescriptions, and some laws may evolve to reflect that. But that doesn’t mean the pharmacist role will vanish. Instead, it will shift. Pharmacists who adapt by taking on more consultative, clinical, and interdisciplinary roles will thrive. Regulatory frameworks may change, but our relevance depends on how we respond to these innovations.

    As AI continues to evolve, so must we. Rather than fear change, pharmacists should see this moment as an opportunity to lead, to innovate, and to reinforce the irreplaceable human element of health care.

    About the Author

    Alaa Abdul Ghani, PharmD, is a practicing pharmacist in both retail and hospital settings in Orlando, Florida. She is passionate about the intersection of health care and technology and is an advocate for pharmacists’ involvement in shaping the future of AI in pharmacy practice. She can be reached at abdulghani.a@icloud.com or 407-590-7756.

    REFERENCES
    1. Raza MA, Aziz S, Noreen M, et al. Artificial intelligence (AI) in pharmacy: an overview of innovations. Innov Pharm. 2022;13(2):10.24926/iip.v13i2.4839. doi:10.24926/iip.v13i2.4839
    2. Lauritsen SM, Kristensen M, Olsen MV, et al. Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat Commun. 2020;11:3852. doi:10.1038/s41467-020-17431-x
    3. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. 2021;22:122. doi:10.1186/s12910-021-00687-3
    4. AI Algorithms Used in Healthcare Can Perpetuate Bias. News release. Rutgers Newark. November 14, 2024. Accessed April 30, 2025. https://www.newark.rutgers.edu/news/ai-algorithms-used-healthcare-can-perpetuate-bias

  • Augmenting Penetration Testing Methodology with Artificial Intelligence – Part 1: Burpference

    Craig is a former software developer and red teamer. He has been pentesting at Black Hills Infosec since 2018.

    Artificial Intelligence (AI) has been a hot topic in information technology and information security since before I entered the industry. Developments in AI are something that I had been aware of, but I hadn’t chosen to really dive into the subject in terms of leveraging AI as part of my job as a penetration tester. I gave a webcast on penetration testing methodology a while back, and someone asked me afterward how I use AI in my methodology/workflow. At the time, my answer was “I don’t.”

    For a long time, I considered AI to be interesting but not particularly useful. However, progress has been made, technology has improved, and it has become clear that AI has matured to the point where we absolutely can use it to help us with our jobs as penetration testers. So, what does that look like? This blog post will be the first in a series of posts where I will describe my initial experiences trying to integrate AI into my penetration testing methodology.

    When exploring new technology and incorporating it into your methodology, it’s always a good idea to start by examining what other folks in your space are already doing with that technology. When I initially started going down this path, my BHIS colleague Derek Banks introduced me to a project called burpference. Burpference is a Burp Suite plugin that takes requests and responses to and from in-scope web applications and sends them off to an LLM for inference. In the context of artificial intelligence, inference is taking a trained model, providing it with new information, and asking it to analyze this new information based on its training.

    Installing the burpference extension in Burp Suite is a straightforward task. The extension utilizes the Jython standalone JAR. Once I downloaded the JAR, I configured the Burp Suite Python environment to point to it. This setting can be found by opening “Extensions settings” in the Extensions tab.

    Python Environment Configured

    Once the Python environment was configured, I downloaded and unzipped the latest burpference release. Burpference generates log files in the extension directory, so I needed to ensure that Burp Suite had write permissions to that location. Next, I opened the “Installed” page of the Extensions tab, clicked the “Add” button, and selected the burpference.py file from the extension directory.

    Selecting Burpference Extension

    I checked the Output section of the Burp Suite extension loader to ensure no errors occurred. Once the extension was loaded, I opened the new burpference tab and selected a configuration file that pointed to my LLM. For my initial experimentation with burpference, I set up a small (7 billion parameter) deepseek-r1 model in Ollama on an older gaming PC in my lab.

    Burpference Configuration File Pointing to Local LLM
    Configuration File Selected in Burpference Tab
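    Before wiring the model into Burp Suite, I found it helpful to sanity-check that Ollama was answering inference requests at all. The sketch below queries Ollama’s REST API directly; it assumes the default endpoint on port 11434 and a locally pulled deepseek-r1 model (adjust the host and model tag to match your setup).

```python
import json
import urllib.request

# Default Ollama REST endpoint; change the host if the model runs elsewhere.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming /api/generate request body for Ollama."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def query_ollama(prompt: str, model: str = "deepseek-r1:7b") -> str:
    """Send one prompt to the local model and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

    If this call hangs or errors, there is no point debugging the extension yet; fix the model host first.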

    To test the extension functionality, I installed and ran a local instance of OWASP’s intentionally vulnerable Juice Shop application.

    Browsing Juice Shop Application

    To cut down on noise and unnecessary load on the LLM, burpference only sends in-scope requests and responses. So, I added the Juice Shop application to the project scope in Burp Suite. This can be done from the Target tab by right clicking the application and selecting “Add to scope”.

    JuiceShop Application Added to Scope

    I encountered two pitfalls that I had to troubleshoot when configuring the extension:

    • I was running the model on a physically separate host in my lab. By default, Ollama binds to localhost, and I was initially unable to communicate with the model from my testing host where I was running burpference. I was able to fix this by setting the OLLAMA_HOST environment variable to 0.0.0.0 on the host running Ollama.
    • Once I was able to communicate with the model, burpference started logging “General Error: cannot make memory view because object does not have the buffer interface” error messages. This is a known issue with the extension, and I was able to fix it by updating my Jython standalone JAR from version 2.7.3 to version 2.7.4.
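    For the first pitfall, a quick reachability check from the testing host saved me some guesswork about whether the problem was Ollama’s bind address or the extension itself. This helper is my own, not part of burpference:

```python
import socket

def ollama_reachable(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if the Ollama API port accepts a TCP connection.

    A False here usually means Ollama is still bound to localhost on the
    remote machine (i.e., OLLAMA_HOST=0.0.0.0 has not taken effect).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```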

    With the extension successfully communicating with the model, I began manually browsing the Juice Shop application. As I browsed the application, I noticed that burpference was sending each request and response to the LLM with the following prompt:

    You are a web application penetration tester conducting a comprehensive operation on an application in the offensive stage of the engagement and focused on leveraging security flaws.
    
    Your objective is to examine the HTTP requests and responses that are available through the burp suite proxy history from the web application as we test the application.
    
    This analysis will focus on:
    
    - Request and Response Evaluation: Scrutinizing HTTP requests and responses for security misconfigurations, sensitive data exposure, and other vulnerabilities.
    - Authentication and Session Management: Assessing the effectiveness of authentication mechanisms and session handling practices.
    - Input Validation and Output Encoding: Identifying weaknesses related to input validation that may lead to injection attacks or cross-site scripting (XSS).
    
    Use reasoning and context to find potential flaws in the application by providing example payloads and PoCs that could lead to a successful exploit.
    
    If you deem any vulnerabilities, include the severity of the finding as prepend (case-sensitive) in your response with any of the levels:
    
    "CRITICAL"
    "HIGH"
    "MEDIUM"
    "LOW"
    "INFORMATIONAL" 
    
    for any informational-level findings or observations, for example of a "secure" flag missing from a cookie.
    
    Not every request and response may have any indicators, be concise yet deterministic and creative in your approach.
    
    The HTTP request and response pair are provided below this line:
    
    [request and response JSON below]
    

    Burpference Prompt (formatted for readability)

    The first thing I noticed was that the model responded slowly. This was likely due to the hardware limitations of the host where I was running the model. I decided I would later try the extension with a more powerful remote OpenAI model. The extension sends full requests and responses that will almost certainly contain sensitive information like credentials, session tokens, response data, etc. When performing a penetration test, maintaining the confidentiality of customer data is a high priority, and that makes using remote models that you do not have full control over a serious concern. So, I wanted to verify the extension’s functionality and evaluate its performance with a local, on-premises model first. After browsing the application for a bit, I took some time to review the inference results in the burpference logging page.

    Burpference Logging Output

    While slow, the extension appeared to be successfully communicating with the model and logging the inference results. I observed that the LLM reviewed the request verb, parameters, headers, cookies, etc., and evaluated what it could tell about the application from a security perspective. Ultimately, it did not report anything that I would not have identified during a manual review of the requests and responses. However, it did identify an interesting cookie called welcomebanner_status that was set to dismiss, and it even brainstormed a possible attack vector!

    Burpference Inference Response – Interesting Cookie Identified

    Even with a small local model running on less-than-stellar hardware, I could already see some value in the extension at the very least functioning as a second set of eyes. I proceeded to reconfigure the extension to use a remote OpenAI gpt-4o-mini model. As you might expect, I saw much better performance with the larger model. In addition to identifying issues related to CORS and security header configurations, it also identified a request parameter it thought was vulnerable to cross-site scripting (XSS). The model even provided a proof-of-concept payload.

    Potential Cross-Site Scripting Identified with Burpference

    I tried the proof-of-concept request in a browser. While the XSS payload did not fire, the application returned an HTTP 500 Internal Server Error.

    Error Response to PoC Request

    Observing this error response through the eyes of an experienced web application tester, it seemed obvious that I should look for a SQL injection vulnerability here, but what about our AI assistant? I was pleased to find that burpference identified SQL syntax in another more verbose error message that I had initially overlooked. It determined that this same parameter was likely vulnerable to SQL injection and provided another proof-of-concept exploit.

    SQL Injection Vulnerability Reported by Burpference

    I tried this PoC in a browser and the application responded with JSON containing all application product information. This was an indication that the payload was successful, and the application was vulnerable to SQL injection.

    SQL Injection Payload Successful

    One thing I noticed while evaluating burpference is that the context for each inference request consisted of only a single request and response. I think this could be a limiting factor in the usefulness of the extension as it currently exists. The smaller local model’s responses plainly stated that it might be able to tell me more useful information if it was provided more context. I think there is likely an opportunity to extend the extension’s functionality to selectively send a series of requests and responses to the model in the same inference request to provide it with more useful context.
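    As a rough illustration of what that could look like, related traffic could be concatenated into a single prompt before inference. This is a sketch of the idea, not existing burpference functionality, and the function name is my own:

```python
def build_batched_prompt(pairs, base_prompt):
    """Combine several (request, response) pairs into one inference prompt
    so the model sees related traffic together instead of one pair at a time."""
    sections = []
    for i, (request, response) in enumerate(pairs, start=1):
        sections.append(
            f"--- Pair {i} request ---\n{request}\n"
            f"--- Pair {i} response ---\n{response}"
        )
    return base_prompt + "\n\n" + "\n\n".join(sections)
```

    Batching would trade some per-request latency for richer context, which the smaller models in particular seem to need.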

    Overall, I found the extension useful as a second set of eyes looking over my web traffic, and it successfully put me down the pathway to discovering a valid vulnerability. I liked that it works passively in the background, and I can definitely see myself leveraging this extension with an on-premises model in my web application penetration testing methodology. Specifically, I think it would be useful to have burpference enabled when performing manual enumeration at the beginning of a new web application penetration test.






  • ‘It cannot provide nuance’: UK experts warn AI therapy chatbots are not safe | Artificial intelligence (AI)

    Having an issue with your romantic relationship? Need to talk through something? Mark Zuckerberg has a solution for that: a chatbot. Meta’s chief executive believes everyone should have a therapist and if they don’t – artificial intelligence can do that job.

    “I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

    The Guardian spoke to mental health clinicians who expressed concern about AI’s emerging role as a digital therapist. Prof Dame Til Wykes, the head of mental health and psychological sciences at King’s College London, cites the example of an eating disorder chatbot that was pulled in 2023 after giving dangerous advice.

    “I think AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate,” she said.

    Wykes also sees chatbots as being potential disruptors to established relationships.

    “One of the reasons you have friends is that you share personal things with each other and you talk them through,” she says. “It’s part of an alliance, a connection. And if you use AI for those sorts of purposes, will it not interfere with that relationship?”

    For many AI users, Zuckerberg is merely marking an increasingly popular use of this powerful technology. There are mental health chatbots such as Noah and Wysa, while the Guardian has spoken to users of AI-powered “grieftech” – or chatbots that revive the dead.

    There is also their casual use as virtual friends or partners, with bots such as character.ai and Replika offering personas to interact with. ChatGPT’s owner, OpenAI, admitted last week that a version of its groundbreaking chatbot was responding to users in a tone that was “overly flattering” and withdrew it.

    “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.

    In an interview with the Stratechery newsletter, Zuckerberg, whose company owns Facebook, Instagram and WhatsApp, added that AI would not squeeze people out of your friendship circle but add to it. “That’s not going to replace the friends you have, but it will probably be additive in some way for a lot of people’s lives,” he said.

    Outlining uses for Meta’s AI chatbot – available across its platforms – he said: “One of the uses for Meta AI is basically: ‘I want to talk through an issue’; ‘I need to have a hard conversation with someone’; ‘I’m having an issue with my girlfriend’; ‘I need to have a hard conversation with my boss at work’; ‘help me roleplay this’; or ‘help me figure out how I want to approach this’.”

    In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.

    Dr Jaime Craig, who is about to take over as chair of the UK’s Association of Clinical Psychologists, says it is “crucial” that mental health specialists engage with AI in their field and “ensure that it is informed by best practice”. He flags Wysa as an example of an AI tool that “users value and find more engaging”. But, he adds, more needs to be done on safety.

    “Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly we have not yet addressed this to date in the UK,” Craig says.

    Last week it was reported that Meta’s AI Studio, which allows users to create chatbots with specific personas, was hosting bots claiming to be therapists – with fake credentials. A journalist at 404 Media, a tech news site, said Instagram had been putting those bots in her feed.

    Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.

  • Nvidia, Super Micro Computer, Uber: Trending Tickers

    00:00 Speaker A

    Now time for some of today’s trending tickers. We’re watching Nvidia, Super Micro Computer and Uber. First off, let’s talk Nvidia. CEO Jensen Huang says that the market for AI chips in China could reach $50 billion in the next couple of years here. Speaking at the Milken Institute conference this week, Huang spoke on why it’s important not to restrict the flow of AI chips to countries like China, saying it would help bring back tax dollars to the US and create jobs. Shares of Nvidia here during today’s session, you’re seeing those up by about 1/10th of a percent, so some fractional gains here. His exact words were: ultimately China, well, the Chinese market in a couple years is probably about 50 billion dollars. The market we’ve left behind, utterly gigantic, and he compared it to the likes of Boeing. Boeing, I think last I checked, their market valuation is somewhere around 140 billion dollars. But you get the picture.

    01:48 Speaker B

    Yeah. Yeah, no, definitely. And it’s interesting actually to see Nvidia just above the flat line, especially given that we’re up on the day thus far. Kind of curious to see why it’s not more of a lift, especially off the back of those earnings from AMD. Well, it was a bit of a wild ride for AMD, which we’re going to talk about, but they did indicate continued strength in demand, which would be a positive catalyst for the likes of Nvidia. Having said that, of course, AMD did warn of tariff concerns, so that of course is also a concern for the likes of Nvidia, and you could see that playing out in the price action here. Those shares at $113 right now. Next up, Super Micro Computer cutting its full-year outlook, citing economic uncertainty and none other than tariffs delaying customer orders. The server maker reported fiscal third quarter results that came in below analyst expectations but were in line with preliminary results released by the company last week. Super Micro also issued disappointing guidance for its current fourth quarter. You can see those shares down 6%, which is really interesting because of what I just said: they gave the audience a preview of the show. They said last week, this is going to be a tough one, guys. And yet it wasn’t necessarily priced in, at least to the degree of weakness that they did signal when cutting this full-year outlook here.

    03:57 Speaker A

    Yeah. And kind of mixed reception from what we’re seeing, at least in some of the analysts that cover this name as well. You’ve got some initiation of coverage from Needham. They resumed their coverage, I should say: positive outlook, buy rating, price target $39. Elsewhere on the street, you’ve got a lowering of the price target from Rosenblatt; their price target has been adjusted to $50, down from $55. So they’re still net bullish on it, just kind of remodeling and, you know, being sensitive with exactly what’s been set forth and how the rest of the street is perceiving some of the cut to this full-year outlook as well.

    04:58 Speaker B

    Absolutely.

    05:00 Speaker A

    Also here, let’s talk a little ride sharing. Uber missing first quarter revenue expectations and first quarter gross bookings estimates as ride share growth slows. Still, revenue grew 14% year over year. Taking a look at the shares right now, they are down by about 2.9% pre-market here as we’re waiting for trading to begin. Of course, the CEO Dara Khosrowshahi offered a little bit more context and color on how he’s looking at this quarter, supported by the consistent growth and strength of their core business as they continue to build towards the future. Five new autonomous vehicle announcements just in the last week, he reminds as well. Also, $2 billion of quarterly free cash flow with multiple levers in their control to generate industry-leading cash flow growth. That coming from the CFO.

    06:15 Speaker B

    Yeah, it’s interesting that falling shy of anticipated revenue growth seems to be the sticking point here, given that just a year ago, Uber had a loss in their quarterly report. Their net income was around $1.78 billion, or 83 cents a share, for the first quarter, up from a net loss of $654 million a year earlier. So certainly seeing that recovery, and you can see that playing out in the price action on your screen here over the course of the last year. What’s interesting to me is we talk constantly about whether or not we’re in an economic slowdown phase, and we didn’t see huge signs of that in Uber’s report here, and also in terms of the analyst commentary talking about how they do see strong consumer demand going forward for this name. You can scan the QR code to track the best and worst performing stocks with Yahoo Finance’s trending tickers page.

  • LinkedIn to use AI to help jobseekers find new roles

    LinkedIn is to start using artificial intelligence (AI) to help users with job searches by helping them look beyond job titles and locations.

    The professional networking site said its new AI-powered job search will enable people to type out and search for exactly what they are looking for in a role, rather than relying on filters such as location, industry or title.

    The online giant said it means users will be able to instead search for jobs using phrases, for example “find me entry-level brand manager roles in fashion” or “jobs for analysts who love solving sustainability challenges”.

    The update comes as concerns continue to be raised about the impact artificial intelligence will have on the jobs market, including fears it could take a number of administrative roles away from human workers in years to come.

    Last week, a poll from conciliation service Acas found that more than one in four workers were worried that AI will lead to job losses, with almost one in five concerned about the technology making errors.

    But LinkedIn’s Zara Easton said the firm believed the technology could also make it easier for people to find new roles and career paths in a system that many already find frustrating or difficult to navigate as new roles and careers emerge.

    “AI is changing the way we work, and job search on LinkedIn will completely change the way people find their next opportunity,” she said.

    “Our hope is that this way of discovering roles – and even new careers – will bring together job seekers’ skills, interests and aspirations to find their next step.

    “As work continues to change and new job titles emerge that didn’t even exist a few years ago, skills are more important than ever, and our AI-powered tools can help people to navigate their own unique path.”

  • S&P Global AI Chief Eyes ‘Exciting’ Pace of Agentic AI Innovation


    Highlights

    S&P Global’s Chief AI Officer Bhavesh Dayalji says the company is building agentic AI systems that can interact with other AI agents to generate new financial insights.



    Tools like ChatIQ and Spark Assist are helping clients and employees access and share insights more easily.

    The Kensho LLM-ready API lets clients integrate S&P data directly into their own generative AI models.

    S&P Global, the financial information giant, sees agentic artificial intelligence (AI) as the next transformational wave that can raise business analysis and workflows to new heights, according to its Chief AI Officer Bhavesh Dayalji.

    “AI is going to touch almost every single part of the industry,” Dayalji said in an exclusive interview with PYMNTS. “We’re only at the beginning.”

    S&P Global’s AI journey actually began decades ago. But the acquisition of Kensho Technologies in 2018, where Dayalji was CEO, accelerated the process and laid a data-ready foundation for the generative AI era, which is still in its infancy.

    Since then, the data and insights company has rolled out client-friendly AI tools such as ChatIQ, a generative AI assistant that leverages large language models (LLMs) and is trained on company data to perform such things as industry analysis, strategy research and similar tasks.

    The company also embedded GenAI into its S&P Global Marketplace so users can find the data that they need. “If you think about all the insights and data that’s trapped in these structured and unstructured documents, now you can surface them in a very natural language query way,” he said.
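    As a rough illustration of the idea of surfacing “trapped” data through a natural-language query, here is a minimal sketch in which a toy keyword-overlap retriever stands in for the embedding search and LLM a production system would use. The documents and function names are illustrative, not S&P Global Marketplace’s actual data or API:

    ```python
    import re

    # Illustrative stand-ins for documents holding "trapped" insights.
    DOCS = [
        "Brent crude averaged $82 in Q1 amid OPEC supply cuts.",
        "Copper demand rose 4% on electric-vehicle production.",
        "Regional banks tightened commercial lending standards.",
    ]

    def tokens(text: str) -> set[str]:
        # Lowercase word tokens, punctuation stripped.
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def search(query: str, docs=DOCS) -> str:
        # Return the document sharing the most words with the query.
        # A real system would use vector embeddings and an LLM to
        # synthesize an answer; this is only the retrieval skeleton.
        q = tokens(query)
        return max(docs, key=lambda d: len(q & tokens(d)))
    ```

    With this sketch, `search("What happened to copper demand?")` returns the copper document; production systems replace the overlap score with semantic similarity so paraphrased questions still find the right source.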

    For employees, S&P Global has introduced Spark Assist, which is similar to ChatGPT and connected to the company’s data. All of the company’s employees can access it to accelerate their work and workflows.

    Spark Assist also helps share learnings across the company’s global footprint by giving employees the ability to share these “sparks.”

    “There are things that we’re doing in one country that you know folks from another country can benefit from,” Dayalji said. “So we’ve created a way in which you can share things, or sparks.”

    The AI chief also wants to explore how a tool like Spark Assist could create “network effects,” where the benefits ripple outward across the organization.

    Another AI initiative for employees was an AI training program — with specific, in-person programs for the most senior executives, who not only learned about the tools but also the implications for their business units.

    Read more: Beyond the Hype: What CFOs Should Know About AI Agents

    Connecting With Customers’ LLMs

    As S&P Global began building chatbots to connect its data, it discovered that its clients were all doing the same thing.

    S&P Global created the Kensho LLM-ready API, which lets clients integrate its datasets into their GenAI models. The API is compatible with OpenAI’s GPT series, Google’s Gemini, Anthropic’s Claude and others. For example, some of its commodities data shows up in Microsoft Copilot.
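    To make the “LLM-ready API” idea concrete: a data vendor’s API is typically exposed to a chat model as a tool the model can call, with the JSON result fed back so the model grounds its answer in real data. The sketch below is hypothetical in every particular (tool name, schema, and data are invented, not Kensho’s actual API), and the HTTP call is replaced by a stub:

    ```python
    import json

    # Hypothetical tool definition, in the shape GPT-, Gemini-, or
    # Claude-style tool-calling APIs expect. Not S&P Global's schema.
    TOOL_SPEC = {
        "name": "get_commodity_price",
        "description": "Fetch the latest price for a commodity ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    }

    # Stand-in for an HTTP call to the vendor's dataset endpoint.
    _FAKE_DATA = {"BRENT": 82.4, "WTI": 78.1}

    def get_commodity_price(ticker: str) -> str:
        price = _FAKE_DATA.get(ticker.upper())
        if price is None:
            return json.dumps({"error": f"unknown ticker {ticker}"})
        return json.dumps({"ticker": ticker.upper(), "price_usd": price})

    def handle_tool_call(name: str, arguments: str) -> str:
        # A chat client would route the model's tool call here and pass
        # the JSON result back to the model as the tool's output.
        args = json.loads(arguments)
        if name == "get_commodity_price":
            return get_commodity_price(**args)
        raise ValueError(f"unknown tool {name}")
    ```

    The same pattern is what lets one dataset “show up” in several ecosystems: each chat platform sees only the tool spec and the JSON it returns.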

    The next phase for S&P Global is “ensuring that our data shows up in these new ecosystems that are being developed,” Dayalji said.

    When it comes to deep research, S&P Global recently launched its Kensho grounding agent, which lets users produce equity analyst reports and other in-depth analyses.

    As AI agents work with other AI agents, they can bring about new insights that have not surfaced before. “They can do interactions and kind of communicate with each other. It just allows for more sophisticated work to be done with AI,” the executive said.

    The grounding agent’s architecture also helps prevent hallucinations by grounding responses in verified data, which is especially critical for highly regulated industries such as financial services.

    Dayalji said S&P Global uses commercially available AI models, depending on the use case. The company did experiment with building its own small language model, which it found to be “very efficient” and now uses for some internal tasks. Other projects include a text-to-SQL agent that lets users query databases in natural language.
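    A text-to-SQL agent of this kind generally works by handing an LLM the database schema and the user’s question and executing the SQL it returns. The sketch below shows only that surrounding plumbing, with the model call stubbed out by a hard-coded translation; the table and names are illustrative, not S&P Global’s:

    ```python
    import sqlite3

    def llm_to_sql(question: str, schema: str) -> str:
        # A real agent would prompt an LLM with the schema and question
        # and parse SQL out of its reply; this stub covers one question.
        return "SELECT name FROM commodities ORDER BY price DESC LIMIT 1"

    def answer(question: str, conn: sqlite3.Connection):
        # Read the table definition so the (stubbed) model sees the schema.
        schema = conn.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table'"
        ).fetchone()[0]
        return conn.execute(llm_to_sql(question, schema)).fetchall()

    # Tiny in-memory database standing in for a real financial dataset.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE commodities (name TEXT, price REAL)")
    conn.executemany("INSERT INTO commodities VALUES (?, ?)",
                     [("brent", 82.4), ("wti", 78.1)])
    rows = answer("Which commodity trades highest?", conn)
    ```

    In production, the generated SQL is usually validated (read-only, allow-listed tables) before execution, since the model’s output is untrusted input.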

    However, when asked if AI will eventually predict the market’s direction, he said, “The world is interconnected in ways we don’t fully understand. And even AI may struggle to do that because you often have black swan events.” Still, he said, AI can certainly guide one’s thinking about it.

    To be sure, AI is already being used for trading. The Bank of England, the U.K.’s central bank, recently warned that its use for algorithmic trading could exacerbate market volatility and amplify financial instability.

    For S&P Global, the shift to agentic AI has support from the highest levels of the organization. “Our CEO Martina Cheung has GenAI at the forefront of her mind,” Dayalji said, adding that “The companies that embrace that shift first are the ones that are going to win.”

    On a personal note, Dayalji said he’s excited to be involved in AI at this pivotal point in time. He has been working on AI for many years, but “My parents didn’t know what I did [and] had no interest.” After ChatGPT, “when I have my parents asking me about AI, it’s exciting.”

  • Micropolis Delivers on Vision of Robotics, Artificial Intelligence (AI) and Intelligent Systems at Make it in the Emirates 2025

    Micropolis Delivers on Vision of Robotics, Artificial Intelligence (AI) and Intelligent Systems at Make it in the Emirates 2025

    Company to Showcase the Power of Micropolis’s Autonomous Mobile Robots and AI to Enhance Productivity and Lower Costs for Customers

    DUBAI, May 07, 2025 (GLOBE NEWSWIRE) — Micropolis Holding Co. (“Micropolis” or the “Company”) (NYSE American: MCRP), a UAE-based pioneering force in robotics, AI, and autonomous mobility, today announced its participation in Make it in the Emirates 2025, the UAE’s premier manufacturing event, taking place May 19-22, 2025 at the Abu Dhabi National Exhibition Centre (ADNEC). The event brings together manufacturers, innovators, policymakers, and global investors to explore industrial growth opportunities within the UAE.

    At the exhibition, a series of daily showcases will feature two of Micropolis’s robotics products—an agriculture robot and a container-cleaning robot—demonstrating the power of its autonomous mobile robots and AI to enhance productivity and lower customers’ operating costs.

    To see Micropolis’s display and experience its robotic solutions firsthand, please visit the Company at ADNEC May 19-22 in Booth 6-AM30.

    For more information or to schedule a meeting with Micropolis’s management team, please email Micropolis@kcsa.com.

    About Micropolis Holding Co.
    Micropolis is a UAE-based company specializing in the design, development, and manufacturing of autonomous mobile robots (AMRs), AI systems, and smart infrastructure for urban, security, and industrial applications. The Company’s vertically integrated capabilities cover everything from mechatronics and embedded systems to AI software and high-level autonomy.

    For more information, please visit www.micropolis.ai.

    Forward-Looking Statements
    This press release contains “forward-looking statements” within the meaning of the “safe harbor” provisions of the Private Securities Litigation Reform Act of 1995. You can identify forward-looking statements by the fact that they do not relate strictly to historical or current facts. These statements may include words such as “anticipate”, “estimate”, “expect”, “project”, “plan”, “intend”, “believe”, “may”, “will”, “should”, “can have”, “likely” and other words and terms of similar meaning. Forward-looking statements represent Micropolis’ current expectations regarding future events and are subject to known and unknown risks and uncertainties that could cause actual results to differ materially from those implied by the forward-looking statements. These statements are subject to uncertainties and risks including, but not limited to, the uncertainties related to market conditions and other factors discussed in the “Risk Factors” section of the registration statement filed by the Company with the SEC. For these reasons, among others, investors are cautioned not to place undue reliance upon any forward-looking statements in this press release. Additional factors are discussed in the Company’s filings with the SEC, which are available for review at www.sec.gov. The Company undertakes no obligation to publicly revise these forward-looking statements to reflect events or circumstances that arise after the date hereof.

  • Optimum Collaborates with Cresta to Further Transform Its Customer Experience with Artificial Intelligence

    Optimum Collaborates with Cresta to Further Transform Its Customer Experience with Artificial Intelligence

    Cresta will help Optimum improve sales conversion rates, accelerate efficient revenue growth, and empower its agents with the use of generative AI

    PALO ALTO, Calif., May 7, 2025 /PRNewswire/ — Cresta, the leading contact center artificial intelligence (AI) platform for human and AI agents, today announced that Optimum, one of the largest broadband communications and video services providers in the United States, will be deploying Cresta’s generative AI-powered solutions at scale to enhance its customer experience and drive efficient revenue growth.

    Optimum will begin implementation with Cresta Conversation Intelligence and Cresta Agent Assist to enable its agents to more effectively and efficiently handle customer requests, while also improving sales conversion rates. Additionally, Optimum will partner with Cresta to better understand customer interactions, highlight key opportunities for coaching, and strengthen effective practices throughout the organization.

    “Optimum’s mission is to become the connectivity provider of choice in every community we serve and our partnership with Cresta underscores our commitment to embracing the latest technologies to deliver on that promise,” said Mike Parker, President of Consumer Services, Optimum. “By leveraging Cresta’s generative AI-powered solutions, we’re enhancing sales effectiveness, streamlining post-call processes, and empowering our agents to focus on what matters most—building stronger relationships with our customers.”

    “By infusing their contact center operations with our best-in-class generative AI solutions, Optimum will be able to better empower its agents and elevate the customer experience to new heights,” said Ping Wu, CEO of Cresta. “As one of the largest broadband and video service providers in the United States, it is vital that Optimum can provide customers with highly personalized and effective service at every turn. We look forward to helping Optimum drive real-time value from their AI investments in the years to come.”

    Cresta’s partnership with Optimum further validates the company’s position as the leading contact center AI platform for human and AI agents, helping Fortune 500 companies across telecommunications, retail, financial services, and more turn their customer conversations into a competitive advantage.

    To learn more about Cresta, please visit https://cresta.com.

    About Cresta:
    Cresta is on a mission to turn every customer conversation into a competitive advantage by unlocking the true potential of the contact center. Cresta’s platform combines the best of AI and human intelligence to help contact centers discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster. Powering customer experiences for companies like Cox Communications, Hilton, and CarMax, Cresta helps turn every conversation into an opportunity. Follow our blog and connect with us on LinkedIn and X.

    SOURCE Cresta

  • GE HealthCare rolls out new AI-equipped SPECT/CT nuclear medicine scanner

    GE HealthCare rolls out new AI-equipped SPECT/CT nuclear medicine scanner

    GE HealthCare has obtained FDA clearance for the Aurora, its latest imaging system combining SPECT and CT scanning, as well as for the artificial intelligence programs that help power it.