The next time you're due for a medical exam, you may get a call from someone like Ana: a friendly voice that can help you prepare for your appointment and answer any pressing questions you might have.
With her calm, warm demeanor, Ana is trained to put patients at ease — like many nurses across the U.S. Unlike them, she is available to chat 24-7 in multiple languages, from Hindi to Haitian Creole.
That's because Ana isn't human. She's an artificial intelligence program created by Hippocratic AI, one of a number of new companies offering ways to automate time-consuming tasks usually performed by nurses and medical assistants.
Hundreds of hospitals use AI to monitor patients' vital signs, flag emergency situations and trigger step-by-step action plans for care — jobs previously handled by nurses and other health professionals.
Hospitals say AI is helping their nurses work more efficiently while addressing burnout and understaffing.
However, nursing unions argue that this poorly understood technology is overriding nurses' expertise and degrading the quality of care.
"Hospitals have been waiting for the moment when they have something that appears to have enough legitimacy to replace nurses," said Michelle Mahon of National Nurses United. "The entire ecosystem is designed to automate, de-skill and ultimately replace caregivers."

Nurses hold a rally April 22 in San Francisco to highlight safety concerns about using artificial intelligence in health care.
Mahon's group, the largest nursing union in the U.S., helped organize more than 20 demonstrations at hospitals across the country, pushing for the right to have a say in how AI can be used — and for protection from discipline if nurses decide to disregard automated advice.
The group raised new alarms in January when Robert F. Kennedy Jr., the incoming health secretary, suggested AI nurses "as good as any doctor" could help deliver care in rural areas. Earlier this month, Dr. Mehmet Oz, nominated to oversee Medicare and Medicaid, said he believes AI can "liberate doctors and nurses from all the paperwork."
Hippocratic AI initially promoted a rate of $9 an hour for its AI assistants, compared with about $40 an hour for a registered nurse. It has since dropped that language, instead touting its services and seeking to assure customers that its AI assistants have been carefully tested. The company did not grant requests for an interview.
Hospitals experimented for years with technology designed to improve care and streamline costs, including sensors, microphones and motion-sensing cameras. Now that data is being linked with electronic medical records and analyzed in an effort to predict medical problems and direct nurses' care — sometimes before they've evaluated the patient themselves.

The website of artificial intelligence company Xoltar shows two demonstration avatars for conducting video calls with patients.
Adam Hart was working in the emergency room at Dignity Health in Henderson, Nevada, when the hospital's computer system flagged a newly arrived patient for sepsis, a life-threatening reaction to infection. Under the hospital's protocol, he was supposed to immediately administer a large dose of IV fluids.
Hart determined he was treating a dialysis patient, or someone with kidney failure. Such patients have to be carefully managed to avoid overloading their kidneys with fluid. He raised his concern with the supervising nurse but was told to follow the standard protocol. Only after a nearby physician intervened did the patient instead begin to receive a slow infusion of IV fluids.
"You need to keep your thinking cap on— that's why you're being paid as a nurse," Hart said. "Turning over our thought processes to these devices is reckless and dangerous."
Hart and other nurses say they understand the goal of AI: to make it easier for nurses to monitor multiple patients and quickly respond to problems. The reality is often a barrage of false alarms, sometimes erroneously flagging basic bodily functions as an emergency.
"You're trying to focus on your work but then you're getting all these distracting alerts that may or may not mean something," said Melissa Beebe, a cancer nurse at UC Davis Medical Center in Sacramento. "It's hard to even tell when it's accurate and when it's not because there are so many false alarms."

The website of artificial intelligence company Xoltar shows a demo of an avatar for conducting video calls with patients.
Even the most sophisticated technology will miss signs nurses routinely pick up on, such as facial expressions and odors, notes Michelle Collins, dean of Loyola University's College of Nursing. But people aren't perfect either.
"It would be foolish to turn our back on this completely," Collins said. "We should embrace what it can do to augment our care, but we should also be careful it doesn't replace the human element."
More than 100,000 nurses left the workforce during the COVID-19 pandemic, according to one estimate, the biggest staffing drop in 40 years. As the U.S. population ages and nurses retire, the U.S. government estimates there will be more than 190,000 new openings for nurses every year through 2032.
Faced with this trend, hospital administrators see AI filling a vital role: not taking over care, but helping nurses and doctors gather information and communicate with patients.
Qventus' AI assistant contacts patients and health providers, sends and receives medical records and summarizes their contents for human staffers. Qventus says 115 hospitals use its technology, which aims to boost hospital earnings through quicker surgical turnarounds, fewer cancellations and reduced burnout.
Israeli startup Xoltar specializes in humanlike avatars that conduct video calls with patients. The company is working with the Mayo Clinic on an AI assistant that teaches patients cognitive techniques for managing chronic pain. It is also developing an avatar to help smokers quit. In early testing, patients spend about 14 minutes talking to the program, which can pick up on facial expressions, body language and other cues, according to Xoltar.
Not everyone is keen on artificial intelligence. Here's why some businesses are skeptical.

When a California doctor asked state Assemblymember Rebecca Bauer-Kahan to sign a form allowing AI to transcribe her child's medical visit, she refused. Her concern? Intimate medical conversations are being shared with for-profit tech companies, according to an October 2024 Associated Press investigation. Her hesitation reflects a growing tension across industries as artificial intelligence transforms business operations.
While AI adoption has nearly doubled in the past year, with 65% of organizations regularly using the technology, according to a May 2024 McKinsey survey, significant sectors of the economy have reservations. Health care providers are concerned about AI "hallucinating" medical information, small businesses lack resources for AI implementation, and creative industries are fighting copyright infringement. These hesitations stem from a complex mix of technical limitations, ethical concerns, and regulatory uncertainties that could widen the gap between AI adopters and holdouts.
Hesitations often run deeper than technical challenges. "The real barrier here is that companies realize that there needs to be an actual cultural change within the company, and they need to ask very concrete and very rigorous questions," Kimon Drakopoulos, director of the AI for business degree program at the University of Southern California's Marshall School of Business, told Stacker. Those questions, he said, must be specific.
"Can you define what you're trying to achieve? What are the metrics that you're trying to achieve? And if there is a conflict within these metrics, for example, accuracy versus fairness, which one do you value more?"
Different industries, a variety of concerns

When University of Michigan researchers examined an AI transcription tool used in 40 health systems nationwide, they discovered "hallucinations"—outputs from an AI tool that are inaccurate or illogical—in four out of five transcriptions reviewed, according to the AP investigation.
"Nobody wants a misdiagnosis," Alondra Nelson, former head of the White House Office of Science and Technology Policy, told the AP. The concerns are widespread—3 out of 5 Americans say they would feel uncomfortable if their health care provider and recommend treatments, according to a Pew Research Center poll conducted in December 2022.
Similar hesitation emerges in creative industries, where copyright concerns loom large. More than a dozen copyright lawsuits were filed against AI companies in 2023 alone, according to a 2024 article published in Issues in Science and Technology. "For many artists and designers, this feels like an existential threat," the authors noted. "Their work is being used to train AI systems, which can then create images and texts that replicate their artistic style."
The scale of potential infringement alarms many creators. One popular AI image generator, Stable Diffusion, was trained on images "scraped from the internet indiscriminately," including substantial copyrighted material, according to an April 2023 Harvard Business Review analysis. Companies using AI-generated content could face damages up to $150,000 per instance of willful copyright infringement.
Small businesses face different challenges, particularly around expertise and resources. A 2024 report by the U.S. Chamber of Commerce Technology Engagement Center and Teneo Research found that among small businesses not using AI, nearly a third said one reason was that they didn't know enough about the technology, while a quarter said they had compliance concerns.
"The first question you need to ask yourself as a small or medium enterprise business is, do you have a healthy data ecosystem," Drakopoulos said. "Do you keep track of the data? Are you tracking everything from your inventory or which customers showed up or didn't show up for the appointment?"
The talent and resource gap between large and small organizations continues to widen. While major financial firms employ hundreds of staff for AI development, smaller institutions "do not have the IT resources or expertise to develop their own AI models," according to a 2024 U.S. Treasury Department report. Securities and Exchange Commission chair Gary Gensler warned in February 2024 about the financial industry's growing reliance on a handful of AI models, noting this concentration could lead to "herding and network interconnectedness," problems that typically trigger financial crises.
The National Institute of Standards and Technology has said that AI systems are vulnerable to cyberattacks, which could compromise sensitive business data. These security risks are especially challenging for organizations lacking sufficient IT resources, which are frequently smaller ones.
The Treasury Department noted that these smaller institutions often rely on outside vendors for AI tools. Larger institutions that develop tools in-house have more control over building, testing, and monitoring their systems to make sure they meet specific needs and work reliably.
Cautious optimism when it comes to AI

History offers stark warnings about hesitating to adopt transformative technologies.
"The railway industry was driving like 6% of the total national income at the time, and then by the '60s, they had pretty much declined," Drakopoulos said. He pointed to more recent examples like Blockbuster's downfall, adding, "Things change, and I think the AI revolution is going to be much faster than the timeline of 20 years that we have been seeing until now."
A Department of State report warned that AI systems could amplify existing inequities when organizations lack resources for proper oversight. The report suggested that without coordination among various regulators, these technologies risk worsening "discrimination and existing socio-economic inequalities."
It also noted that AI systems can be applied in ways that unintentionally infringe on human rights, such as through biased or inaccurate outputs from AI models. The report found that AI's misuse can have detrimental impacts across sectors, including finance, welfare, health care, and education, potentially reinforcing bias and discrimination.
Despite these challenges, some see room for cautious optimism. The Chamber of Commerce and Teneo study revealed that 91% of small businesses surveyed are optimistic AI will help their business grow, with 89% saying AI has helped them enjoy managing their business and 86% saying it has made their business operations more efficient.
Drakopoulos advocated starting with basics before pursuing advanced AI applications. "I am a huge proponent of investing in simpler solutions," Drakopoulos said. "First, build your data, ask the right questions, and focus on simpler tools and simpler applications of data science."
As AI adoption accelerates, with McKinsey reporting in May 2024 that most organizations expect to increase their AI investments over the next three years, the divide between early adopters and hesitant organizations will reshape entire industries.
Story editing by Carren Jao. Additional editing by Kelly Glass. Copy editing by Kristen Wegrzyn. Photo selection by Clarese Moller.
This story was produced and distributed in partnership with Stacker Studio.
5 ways companies are incorporating AI ethics

As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
A KPMG survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions—particularly tech companies and the federal government—will ethically develop and implement AI, according to KPMG.
Within the tech industry, ethical initiatives have been set back by a wave of layoffs that thinned AI ethics teams, according to an article presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon's streaming platform Twitch, Microsoft, Google, and X, hit hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies between industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared to less than a third who trust it for investment advice and self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to be amplified as more people may potentially be negatively affected—making ethical frameworks for approaching AI all the more essential.
That puts the onus to set ethical guardrails upon companies and lawmakers. In May 2024, Colorado became the first state to enact comprehensive artificial intelligence legislation, with provisions for consumer protection and accountability from companies and developers introducing AI systems used in education, financial services, and other critical, high-risk industries.
As other states evaluate similar legislation for consumer and employee protections, companies especially possess the in-the-weeds insight to address high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the KPMG report also found that organizations can take concrete steps to garner and maintain public trust—education, clear communication and human oversight to catch errors, biases, or ethical concerns.
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
What follows is an analysis of current events identifying five ways companies are ethically incorporating artificial intelligence in the workplace.
Actively supporting a culture of ethical decision-making

AI initiatives within the financial services industry can speed up innovation, but companies need to take care in protecting the financial system and customer information from criminals. To that end, JPMorgan Chase has invested in responsible AI governance, including an ethics team to work on the company's AI initiatives. The company ranks top on the Evident AI Index, which looks at banks' AI readiness, including a top ranking for transparency in the responsible use of AI.
Development of risk assessment frameworks

The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports companies in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action—it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Catholic college in Silicon Valley, to produce a handbook for companies to navigate AI technologies ethically.
Specialized training in responsible AI usage

Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. A YouTube series produced by AWS Machine Learning University serves as an introductory course that covers fairness criteria and methods for mitigating bias, and the Amazon SageMaker Clarify tool helps developers detect bias in AI model predictions.
Communication of AI mission and values

Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other company stakeholders. Examples include the published AI principles of Dell Technologies and IBM, which clarify each company's approach to AI application development and implementation, publicly setting guiding principles such as "respecting cultural norms, furthering social equality, and ensuring environmental sustainability."
Implementing an AI ethics board

Companies can create AI ethics boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained with biased or discriminatory data. Software company SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has established an AI ethics advisory board to work with companies that prefer not to create their own.
Story editing by Jeff Inglis. Additional editing by Alizah Salario. Copy editing by Paris Close. Photo selection by Clarese Moller.