Category: Artificial Intelligence

  • It’s Been a Minute : NPR

    An AI-generated image of murder victim Chris Pelkey. (YouTube/Annotation by NPR)

    Should AI give you a voice? Even when you’ve been murdered?

    An AI avatar of a murder victim addressed his killer in court last week, in what may have been the first time an AI-generated victim impact statement was admitted in a US court. Chris Pelkey, who was shot in a road rage incident in 2021, was recreated in a video made by his sister to offer forgiveness to his killer. This could mark the start of a new relationship between AI and the law, but will it change the relationship between us and the law? And what are the broader impacts we might see on our culture?

    Brittany sits down with NPR digital news reporter Juliana Kim and Brandon Blankenship, assistant professor and director of the pre-law program at the University of Alabama at Birmingham, to find out.

    This episode was reported by Juliana Kim. It was produced by Liam McBain. It was edited by Neena Pathak. We had engineering support from David Greenburg. Our Supervising Producer is Barton Girdwood. Our Executive Producer is Veralyn Williams. Our VP of Programming is Yolanda Sangweni.

  • 3 US Artificial Intelligence Growth Stocks That Can Power Your Portfolio Higher

    Artificial intelligence, or AI, has been a game-changer.

    AI has made our lives easier by automating tasks, improving search, and allowing for more intuitive, natural responses to queries.

    This technology has an impact not just on individuals: many corporations have harnessed it to improve their analytics capabilities and enhance their cloud platforms.

    Generative AI, which relies on large language models (LLMs), has also changed the way we interact and communicate.

    More businesses are now deploying AI agents, also known as agentic AI, to help sift through mountains of data and personalise queries and preferences for their customers.

    With AI proving to be a formidable driving force, here are three growth stocks that can allow you to ride on this trend.

    Palantir (NASDAQ: PLTR)

    Palantir is a software company utilising AI to power big data analytics.

    The company operates a platform that allows organisations to integrate, analyse, and make sense of vast amounts of data to draw insights.

    Palantir reported a robust performance for the first quarter of 2025 (1Q 2025).

    Revenue climbed 39.3% year on year to US$883.9 million and operating profit more than doubled year on year from US$80.9 million to US$176 million.

    Net profit stood at US$213 million, up 103% year on year.

    The business also churned out free cash flow of US$304.1 million, up nearly 140% year on year.

    Management is optimistic about 2025 and raised the company’s revenue guidance to between US$3.89 billion and US$3.92 billion (previous forecast: US$3.74 billion to US$3.757 billion).

    Free cash flow for this year is projected to come in between US$1.6 billion and US$1.8 billion.

    The company closed 139 deals of at least US$1 million during the quarter, with 31 deals exceeding US$10 million each.

    Palantir’s total customer count grew by 39% year on year to 769 for 1Q 2025, and its total commercial customer count shot up 46% year on year to 622.

    Innodata (NASDAQ: INOD)

    Innodata is a data engineering company helping to drive the adoption of generative AI and spearhead AI innovation.

    The company provides a range of solutions, platforms, and services for builders and adopters of generative AI.

    Innodata reported an impressive 1Q 2025 with revenue leaping 120.1% year on year to US$58.3 million.

    Net profit soared more than sevenfold year on year from US$989,000 to US$7.8 million.

    The business also generated a positive free cash flow of US$17.5 million for the quarter, more than triple the US$5.4 million churned out a year ago.

    Innodata is expanding its relationships with existing customers, which could result in the award of more than US$30 million in contracts in the near term.

    Management is also onboarding a number of potentially significant customers and plans to invest in targeted technologies to support current and prospective customers in their AI journeys.

    The company expects to chalk up revenue growth of 40% or more for 2025.

    In March, Innodata announced the beta launch of its generative AI test and evaluation platform powered by Nvidia’s (NASDAQ: NVDA) advanced inference technology.

    Management believes that generative AI is set to become the next-generation computation platform, and that the addressable market opportunity will grow by 42% per annum from just US$217 billion in 2025 to US$1.3 trillion by 2032.
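
    For readers who want to sanity-check a projection like this, the implied compound annual growth rate (CAGR) between two endpoints follows from the standard formula. A quick worked version with the figures above (in US$ billions, over the seven years from 2025 to 2032):

    ```latex
    \mathrm{CAGR} = \left(\frac{V_{2032}}{V_{2025}}\right)^{1/7} - 1
                  = \left(\frac{1300}{217}\right)^{1/7} - 1 \approx 0.29
    ```

    On those endpoints alone, the implied rate is roughly 29% per annum, so the quoted 42% presumably rests on the forecaster’s own baseline and assumptions.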

    Verint Systems (NASDAQ: VRNT)

    Verint Systems harnesses AI to provide actionable solutions and services, focusing on customer experience (CX) automation, cyber intelligence, and business analytics.

    The company reported a commendable set of earnings for its fiscal 2025 (FY2025) ended 31 January 2025.

    Revenue stayed flat year on year at US$909.2 million but operating profit surged 56% year on year to US$106.4 million.

    Net profit more than tripled year on year to US$65 million.

    The business also generated a positive free cash flow of US$129.9 million for FY2025, up 4% year on year.

    Healthy AI momentum drove record bookings for Verint as it saw software-as-a-service annual contract value (ACV) bookings from new deals climb 30% year on year.

    Subscription annual recurring revenue (ARR) for FY2026 is expected to grow around 8% year on year.

    Verint is seeing increasing AI consumption from its customers, with Woolworths (ASX: WOW) expanding its Verint bots from one to six business units.

    For AXA Insurance, the business has expanded its existing bots from 100 to 500 licences and plans to grow this to 1,700 licences by the first half of this year.

    Management believes that Verint’s long-term growth opportunity remains significant.

    Management estimates that US$2 trillion will be spent shifting manual labour to AI to help automate CX workflows.

    The market is in the early stages of this shift; the company believes it is well-positioned and is targeting long-term double-digit ARR growth.


    Disclosure: Royston Yang does not own shares in any of the companies mentioned.


  • Inside the FDA’s plans to embrace AI agencywide

    OpenAI has been connecting with various companies in an effort to incorporate its generative artificial intelligence technology, and federal agencies are rumored to be part of that effort. While a contract with OpenAI has yet to be confirmed, the Food and Drug Administration is suggesting that AI will be a part of its operations sooner rather than later.

    FDA seeking ‘aggressive’ adoption of AI

  • Police fabricated legal clause with AI, judge finds – Israel news

    The Hadera Magistrate’s Court ruled last week that the police had fabricated a legal clause using artificial intelligence.

    During an investigation carried out by Lahav 433, police confiscated a suspect’s mobile phone.

    The suspect subsequently opposed the move and requested, through attorneys Tamir Calderon and Rami Zoabi from the law firm Doron Tikotzky & Co., to have the device returned to him. 

    The police objected, citing a legal clause that does not exist and was instead generated with the help of artificial intelligence.

    ‘Doesn’t exist in anyone’s imagination’

    “The law does not exist in the statute books of the State of Israel, nor does it exist in anyone’s imagination—it was created by artificial intelligence,” said Judge Ohad Kaplan.


    “If I thought I’d seen it all in the 30 years I’ve been on the bench, I was apparently mistaken,” he added.

    According to Ynet, the police representative admitted the error at the start of the hearing: “We retract our claim. What was quoted is incorrect. The person who wrote it did so in good faith, by mistake. We acknowledge that an error was made.”



  • CRCF Will Host Seminar on Artificial Intelligence May 20

    The Cattaraugus Region Community Foundation’s next Link and Learn Seminar, “Artificial Intelligence for Nonprofits,” is scheduled for May 20 from noon to 1 p.m.
  • What’s Next in Artificial Intelligence?


    Guests:

    Nitasha Tiku, tech culture reporter, Washington Post

    Jeff Horwitz, tech reporter, The Wall Street Journal

    Kylie Robison, reporter, Wired; Robison covers the business of AI

  • Lawmakers make no revisions to artificial intelligence law | Colorado

    Editor’s note: This story was updated late Monday afternoon with a statement from Gov. Jared Polis’ office.

    (The Center Square) – The Colorado General Assembly wrapped up its legislative session last week without any revisions to its artificial intelligence law, which takes effect early next year.

    Colorado lawmakers passed landmark AI regulation legislation in 2024. Supporters said the law will protect consumers, but critics argue it will be bad for business and innovation.

    Senate Bill 24-205 put in place requirements for AI developers to protect consumers against “algorithmic discrimination,” adding risk management, impact assessment and review obligations.

    Gov. Jared Polis last year signed the bill, which takes effect on Feb. 1, 2026, but expressed concern “about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”

    Senate Bill 25-318 represented a last-minute effort to amend the law during the legislative session that ended last week, but the bill died on the calendar despite pressure from the governor and other officials to delay SB 24-205’s implementation. The General Assembly adjourned Wednesday.

    “The stakeholder collaboration that took place over many months leading up to and during the 2025 legislative session brought many ideas, concerns, and priorities to the table from a wide range of communities,” said a letter sent to lawmakers one week ago. “However, with just hours remaining in the 2025 legislative session, it is clear that more time is needed to continue important stakeholder work to ensure that Colorado’s artificial intelligence regulatory law is effective and implementable.”

    The letter was signed by Polis, Denver Mayor Mike Johnston, Attorney General Phil Weiser, U.S. Sen. Michael Bennet, and U.S. Reps. Joe Neguse and Brittany Pettersen.

    “Together, we implore leadership and members of the Colorado General Assembly to take action now to delay implementation of SB 24-205 until January 2027,” the letter added. “Colorado communities in every corner of our state deserve the benefit of well-crafted artificial intelligence consumer protection law that more time for stakeholder engagement and policy development work will bring.”

    Democratic Senate Majority Leader Robert Rodriguez, the sponsor of SB 24-205 and SB 25-318, told reporters after the session that he will work with AI stakeholders in the coming weeks, KUNC reported.

    “We will get working, and whether we go into special session or go into next year, we’ll be in a much better place with the policy and have more consensus,” Rodriguez said, according to KUNC.

    Shelby Wieman, Polis’ press secretary, told The Center Square late Monday afternoon, “The Governor has been clear – both in the letter he signed with Majority Leader Rodriguez and Attorney General Phil Weiser, and throughout this session – that key changes needed to be made to the law created by SB24-205.

    “There is broad support for ensuring that Colorado continues to lead in technology innovation while still pioneering critical consumer protections,” Wieman said in an emailed statement. “Unfortunately, the legislature failed to take meaningful action this session to address the shared principles articulated before the session, nor did they delay implementation to allow more time to plan and work on this, despite strong support from small businesses, school districts, institutions of higher education, hospitals, and other key stakeholders.

    “That’s why before the bill died, the Governor joined with AG Weiser, Mayor Johnston, and members of the Congressional Delegation to call for the legislature to take action in the final hours of session,” Wieman said. “This will need to be addressed, and a special session is one such venue where it could be addressed.”

    In Colorado, special sessions are convened by the governor.

  • When the Government Should Say ‘No’ to an AI Use Case

    States across the nation are creating “sandboxes” and otherwise encouraging experimentation with AI that enables more effective and efficient operations. Call it, perhaps, AI with a purpose. But advancing innovation in government comes with risk.

    In Colorado, CIO David Edinger said his office has so far reviewed about 120 ideas for potential uses of AI in state government. Below, he explains how his office vets agency proposals to use AI. Of the ideas classified as “high” risk under the NIST framework, most of those rejected have something in common: data practices that don’t meet the state’s data privacy requirements.

    Colorado is not alone in keeping the data practices of potential AI partners at the forefront of its decision-making.

    In a conversation with Government Technology at last month’s National Association of State Chief Information Officers (NASCIO) Midyear Conference, California Chief Technology Officer Jonathan Porat explained that the state evaluates prospective use cases of artificial intelligence on three main components. Aside from the appropriateness of the use case for state government, officials consider the technology’s track record. Third, they dig into the data involved in the proposal.

    “Are the data that we’re using appropriate for a GenAI system?” Porat said. “Are they properly being governed and secure?”

    Video transcript: I would say we’ve reviewed maybe 120 proposals so far across every agency for all possible uses and we follow the NIST framework for that. So it’s medium, high or prohibited. If it’s prohibited, we prohibit it. If it’s medium, we just deploy it. If it’s high, we evaluate it more thoroughly. And when we do evaluate it and we say no, it’s almost always not because of how it was intended to be used, but because of data sharing and what data we’re then sharing with whoever that provider is per their standard contract that we can’t usually by state law share. So it’s PII or HIPAA or CJIS or something like that and we have to say it’s not because of how you want to use the tool, it’s because you’re giving away the data in a way that we can’t accept. And that’s really the crux of it and that was another surprise was it’s not how people are trying to use it. It’s what’s going on with the privacy of the data.
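
    A minimal sketch of the triage logic Edinger describes might look like the following Python; the function, tier labels, and data categories are hypothetical stand-ins for illustration, not Colorado’s actual process or tooling.

    ```python
    # Hypothetical sketch of the NIST-style triage described in the transcript.
    # Tier names, data categories, and the contract-review step are illustrative
    # stand-ins, not Colorado's actual tooling.

    RESTRICTED = {"PII", "HIPAA", "CJIS"}  # data classes state law bars from sharing

    def review_proposal(risk_tier: str, data_shared: set) -> str:
        """Triage an AI use-case proposal: risk tier first, then data-sharing review."""
        if risk_tier == "prohibited":
            return "reject: prohibited use"
        if risk_tier == "medium":
            return "approve: deploy"
        # High-risk proposals get the deeper review; per the transcript, most
        # rejections hinge on what data the vendor's standard contract would share.
        leaked = data_shared & RESTRICTED
        if leaked:
            return "reject: contract shares restricted data (" + ", ".join(sorted(leaked)) + ")"
        return "approve: passed high-risk review"

    print(review_proposal("high", {"PII", "usage logs"}))  # reject: ... (PII)
    print(review_proposal("medium", set()))                # approve: deploy
    ```

    The point of the transcript is captured in the final branch: a high-risk proposal usually fails not because of its intended use, but because the vendor’s standard contract would share data the state cannot legally share.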

    Noelle Knell is the executive editor for e.Republic, responsible for setting the overall direction for e.Republic’s editorial platforms, including Government Technology, Governing, Industry Insider, Emergency Management and the Center for Digital Education. She has been with e.Republic since 2011, and has decades of writing, editing and leadership experience. A California native, Noelle has worked in both state and local government, and is a graduate of the University of California, Davis, with majors in political science and American history.

    Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.

  • With trust in AI flagging, senators want Commerce to lead education campaign

    As the internet becomes overrun with AI slop and public trust in artificial intelligence plummets, a bipartisan group of senators wants to enlist the Commerce Department in an education operation about the emerging technology.

    The Artificial Intelligence Public Awareness and Education Campaign Act would require the Commerce secretary to oversee an initiative to provide Americans with information about the benefits of AI in their daily lives, as well as the risks the technology presents.

    “With the rapid increase of AI in our society, it is important that individuals can both clearly recognize the technology and understand how to maximize the use of it in their daily lives,” Sen. Todd Young, R-Ind., a co-sponsor of the bill, said in a statement. “The Artificial Intelligence Public Awareness and Education Campaign Act is an important step in ensuring all Americans can benefit from the opportunities created by AI.”

    The campaign would detail the ubiquity of AI in everyday life and highlight its benefits, including for small business owners and in workforce opportunities with the federal government. It would also note the different ways in which various regions, economies and subpopulations may interact with the technology, while making clear “the rights of an individual under law with respect” to AI.

    “America has the opportunity to embrace artificial intelligence and all of the benefits it can bring to numerous industries — health care, business and national security to name a few,” Sen. Mike Rounds, R-S.D., another co-sponsor, said in a statement. “Consumer literacy and education is a critical piece of keeping the United States ahead of the curve on artificial intelligence development and adoption.”

    Another co-sponsor, Sen. Brian Schatz, D-Hawaii, said the legislation is “essential” for helping the public understand the risks and benefits associated with AI. The lawmakers call for the campaign to include best practices for “detecting and differentiating AI-generated media,” including deepfakes and content produced by chatbots.

    “Our bill will direct the Commerce Department to educate the public about how best to take advantage of these tools while staying vigilant to AI-enabled scams and fraud,” Schatz said in a statement.

    On AI, House GOP wants more money for Congress, less say for states

    The introduction of the legislation last week came days before House Energy and Commerce Committee Republicans unveiled a reconciliation bill Sunday night that would provide the Commerce Department with $500 million for an artificial intelligence and information technology modernization initiative.

    Those funds, per the bill, would be available to Commerce until Sept. 30, 2035, to “modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated systems.” 

    Also included in Republicans’ proposal is a provision that would ban state laws or other regulations on AI models, systems or related automated systems. Grace Gedye, policy analyst for AI issues at Consumer Reports, said in a statement that Congress has “long abdicated” its responsibilities on AI regulation, and barring states from “taking actions to protect their residents” is not the answer.

    “This incredibly broad preemption would prevent states from taking action to deal with all sorts of harms,” Gedye said, “from non-consensual intimate AI images, audio, and video, to AI-driven threats to critical infrastructure or market manipulation, to protecting AI whistleblowers, to assessing high-risk AI decision-making systems for bias or other errors, to simply requiring AI chatbots to disclose that they aren’t human.”

    AI regulations have been passed into law in several states over the past decade, sparking criticism from major AI companies for what they say is a patchwork system that stifles innovation. Americans for Responsible Innovation President Brad Carson said in a statement that “tying the hands” of state lawmakers on AI could have “catastrophic consequences” for the public and small businesses.

    “Lawmakers stalled on social media safeguards for a decade and we are still dealing with the fallout. Now apply those same harms to technology moving as fast as AI,” Carson said. “Without first passing significant federal rules for AI, banning state lawmakers from taking action just doesn’t make sense. Ultimately, the move to ban AI safeguards is a giveaway to Big Tech that will come back to bite us.”

    Written by Matt Bracken

    Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity.

    Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star.

    You can reach him at matt.bracken@scoopnewsgroup.com.

  • Pope Leo XIV’s name choice and facing the world of artificial intelligence

    Among the reasons Pope Leo XIV gave for selecting the name he did was his desire to address the pressing human questions raised by artificial intelligence—just as his namesake and forerunner Pope Leo XIII courageously and profoundly addressed the challenges of the Industrial Revolution. One such question is what A.I. does to our ability to communicate. As we rush to acquire a host of personal A.I. assistants that are designed to speak for us, we might pause and consider: What makes it so difficult for us to speak?

    It turns out that the one thing we need help with is the one thing these assistants, by their very design, cannot help us with.

    It may seem obvious that in human perception we simply look out at the world and report what we see, but there may be more to it than meets the eye. The philosopher Martin Heidegger said something peculiar about language: “We do not say what we see, but rather the reverse, we see what one says about the matter.”

    So, for example, one says, “Majoring in philosophy is impractical,” or, along the same lines, “Poetry is useless.” By virtue of these routine phrases, we are primed to see only the supposed lack of utility. We might notice that philosophy majors do not qualify for high-paying jobs as petroleum engineers, but we simply cannot register the fact that philosophy majors end up making more money over the long term than such practical majors as biology and business.

    Who is the “one” that scripts our speech for us? The one is each of us when we speak habitually without due consideration. We express what the philosopher Edmund Husserl called “sedimented” judgments, ones that are carried to us downstream from past occasions of thought but without having the full force of present insight behind them. Like the children’s game of telephone, much can be lost in the transmission.

    Coding what one says

    ChatGPT, we can see, gives voice to what one says. It generates new speech exclusively on the basis of having digitally analyzed 45 terabytes of text concerning what human beings have said previously. The averaged-out result is the voice of the one. The computer scientist Stephen Wolfram writes that “What ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”

    If we look under the hood of ChatGPT, we don’t find any texts or knowledge of language. Instead, we find scores and scores of numbers, gleaned from these terabytes of data, that can be used by the system’s neural net to generate, word by word, what we speakers of language can recognize as intelligible prose.
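
    To make those scores concrete, here is a toy sketch of the word-by-word loop in Python. The vocabulary and the numbers are invented for illustration; a real system computes its scores with a neural net trained on terabytes of text rather than reading them from a lookup table.

    ```python
    import math
    import random

    # Toy stand-in for a language model's "scores and scores of numbers":
    # hand-invented scores (logits) for which word tends to follow which.
    SCORES = {
        "poetry": {"is": 2.0, "was": 0.5},
        "is": {"useless": 1.5, "useful": 1.0, "beautiful": 0.2},
        "was": {"useless": 0.5, "beautiful": 1.0},
    }

    def next_word(word):
        """Turn the scores for the current word into probabilities (softmax), then sample."""
        options = SCORES[word]
        total = sum(math.exp(s) for s in options.values())
        weights = [math.exp(s) / total for s in options.values()]
        return random.choices(list(options), weights=weights)[0]

    text = ["poetry"]
    while text[-1] in SCORES:  # continue until a word with no learned continuation
        text.append(next_word(text[-1]))
    print(" ".join(text))  # e.g. "poetry is useless"
    ```

    However large the real model, the principle is the same: each new word is sampled from a probability distribution derived from numerical scores, with no text or knowledge of language stored anywhere.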

    The computers are blind. The speech they generate is not nourished by fresh experience or thoughtfulness. Instead, their prose is parasitic on the experience and intelligence that people have brought to speech. And they have been trained by human teachers to refine their algorithm so that instead of regurgitating nonsense echoed from the human conversation, they are able to regurgitate something that sounds sensible and appropriate.

    Hence, these systems originate in a version of what one says, namely a mathematical representation of what has previously been said, and the output is measured by our native sense of what one might say.

    Saying what we see

    Though we often thoughtlessly repeat whatever one says and notice only whatever one happens to notice, we humans are free to do otherwise. There remains the possibility that experience will nudge us to grasp a little more of the truth, and thereby to bring to speech a little more of what can and should be said. There remains the possibility that we might invest ourselves a little more carefully and thoughtfully in considering the topic at hand.

    In Phenomenology of the Human Person, Robert Sokolowski calls attention to how we use speech to take ownership of what is said. When we say, with meaning, “I think that such and such is the case,” we put our own credibility on the line and indicate that we have taken all reasonable precautions to ascertain the truth. When a chatbot uses the pronoun “I,” by contrast, there is no self, no responsible person coming through. There is no “agent of truth,” to use Sokolowski’s term.

    We might rattle off “Poetry is useless” as what one says about poetry, but we should pause before saying, “I think that poetry is useless.” That phrase carries with it the burden of having actually considered the matter, at least for a moment, and with that consideration comes the possibility of disconfirming what one says. “Hmm. Consider ‘The Iliad’ and the Psalms, and how the spiritedness of Achilles and the jubilation of David have buoyed the spirits of countless people. Poetry may be among the most beneficial of things for human life.”

    The fact that A.I. systems are so good at generating text that sounds like human speech can lead us to believe that we are dealing with an individual who is responsible for what is said. But in fact these machines are expert only at echoing back to us what others typically say. The question of truth makes this deficient character plain. The machines, we say, “hallucinate” when they start fabricating truth. But there is no normative dimension to their processes. They follow rules mechanically. They cannot care for truth as we do; they are indifferent to the distinction between truth and falsity except insofar as the terms affect the way we measure their outcomes.

    They cannot turn and face the truth or culpably fail to face the facts, not for the trivial reason that they do not have faces, but for the profound reason that they can only do what they in fact do; there is no striving to reach a measure they can fail to reach. They don’t hunger for truth or hanker after the good. We are dealing with creatures that simulate our behaviors rather than duplicating our powers of intellect and of will.

    Five ways to live with A.I.

    In The Language Animal, Charles Taylor details the central role of speech in human life: We are the animal that speaks and harkens to the voices of others. Yet today, our natural habitat is threatened. The dialogical character of speech is being replaced by an ever-louder monologue in which we are cast in the role of mere auditors for what “one says.”

    Now, the switch from what one says to what I think is not automatic but requires effort. Instead of just following the ruts in the wagon trail, the way of least resistance, we have to goad ourselves to blaze a better path if necessary. And it is precisely that additional effort that might induce us to look for shortcuts provided by our growing legion of digital assistants.

    In light of this situation, Rainer Maria Rilke’s advice to a young writer is particularly germane: “Go into yourself. Examine the reason that bids you to write; check whether it reaches its roots into the deepest regions of your heart, admit to yourself whether you would die if it should be denied you to write. … Then, approach nature. Then try, like the first human being, to say what you see and experience and love and lose.”

    Following Rilke, here are five tips to help each of us realize our human vocation to say what we see instead of what one says.

    First, humanize but anonymize the machine. The voice of a digital assistant is nothing more than an algorithmically averaged presentation of the millions of writers that composed the texts on which the assistant was trained. Its voice is at once that of many and of none. Each of us must therefore challenge its authority by returning its anonymous judgments to the truth of the matter.

    Second, sink your words down into the soil of living experience. Talk face to face with others whenever possible, making a point to look into the eyes of your interlocutor and then to look with your interlocutor towards the things you are talking about, so that your speech might take its bearings from your joint experience of the things themselves. Be ready and willing to tarry with the real; impatience breeds superficiality.

    Third, be mindful of lips and hands. Speech, whether spoken or written, is always the work of one’s own bodily agency. Handwritten notes express one’s thoughts so much more authentically and personally; the philosopher Ludwig Wittgenstein even said he thought with his hands rather than his head.

    Fourth, speak off script. Say unexpected things and go out of your way to call attention to what is important and insightful rather than what is expected or typical. Don’t let algorithms write your text, even your pleasantries. Dare to be idiosyncratic and to forge and fashion the human conversation in new ways.

    Fifth, rediscover the wellspring of speech. Poetry is the art of saying things beautifully, and philosophy the art of saying things truthfully. Commit several choice poems to memory, and try your hand at writing some yourself. Read some philosophical prose, and dare to bring to light essential truths in your own voice.

    We endangered language animals know not only what to say but, more importantly, why. Although it takes a modicum of effort, this is nonetheless our birthright and great joy: to articulate truths freshly and compellingly, to invest our plain words with the substance of our intentions and to say all the words that we know really matter.