A few times a week, Jonathan Freidin, a medical malpractice attorney in Miami, notices that someone has filled out his firm’s client contact sheet with text littered with emojis and headings, a telltale sign that they copied and pasted from ChatGPT. Other clients will say they’ve “done a lot of research” on their potential case using AI. “We’re seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways,” Freidin tells me. “While that may be true, it doesn’t necessarily translate into a viable case.”
People are increasingly turning to generative AI chatbots to research everything from dinner recipes to complex legal and medical problems. In a December 2025 survey from the legal software company Clio, 57% of consumers said they have used or would use AI to answer a legal question. A 2025 Zocdoc survey found that one in three Americans uses generative AI tools to get health advice each week, and one in ten uses them daily. Zocdoc CEO Oliver Kharraz predicted in the report that “AI will become the go-to tool for pre-care needs like symptom checking, triage, and navigation, as well as for routine tasks like refills and screenings.” But he also cautioned that he believes “patients will recognize that it is no substitute for the vast majority of healthcare interactions, especially those that require human judgment, empathy, or complex decision-making.” If he’s wrong, Zocdoc and its competitors have a problem.
Doctors and lawyers are now sifting through AI-generated emails and working to convince laypeople that they have the expertise to understand the nuances of how each local judge acts or how a patient’s medical history plays into their condition. Generative AI has democratized access to information that was often elusive and expensive to obtain, but it has also shifted how legal and medical professionals talk to people, and what people expect of them.
ChatGPT is the new WebMD and LegalZoom, turning the average person into an armchair expert with just a few prompts. And it’s driving the real experts crazy.
“We have to dispel the information that they were able to obtain versus what is actually going on in their case and kind of work backwards,” says Jamie Berger, a family law attorney in New Jersey. Until recently, Berger says, most people knew little to nothing about the legal proceedings of divorce and would come to attorneys seeking information. Now they might arrive armed with a step-by-step game plan, but it’s generic and likely not the best fit for their situation. Berger says she’ll notice when a client’s tone suddenly changes over email, a sign that they might be using AI to write out lengthy legal strategies or questions. Then she has to explain, “it’s not necessarily your factual circumstance,” and address their various points. “You have to rebuild or build the attorney-client relationship in a way that didn’t used to exist,” says Berger. “They don’t realize that there’s so many offshoots along the way that it’s not a linear line from A to Z.”
Like a real expert, generative AI chatbots speak with authority. That can be far more persuasive than reading a blog post on a legal issue or summaries of medical conditions on a forum. In a 2025 survey from SurveyMonkey and the financial services company Express Legal Funding, a third of Americans said yes when asked, “Would you ever trust ChatGPT more than a human expert?” Respondents were less likely to use it for medical and legal advice, and more likely to consult it for educational and financial advice.
Chatbots also have an infinite amount of doctors’ most precious resource: time. While doctors are pressured to evaluate and consult with patients quickly, chatbots are always available and designed to respond in a way that affirms users. People have more health data than ever from wearables like smartwatches and Oura rings, and as they’ve become accustomed to on-demand information and services, getting answers from doctors has remained one of the few things behind a wall: people often wait months for appointments with specialists or spend hours fighting with insurance companies to get costs covered. “They really love that tempo of being able to know that ChatGPT never goes away, never goes to sleep, never says no, never says, ‘sorry, your list is too long,'” says Hannah Allen, chief medical officer at Heidi, an AI medical scribe tool.
AI also acts as a second opinion without the wait. Heidi Schrumpf, director of clinical services at the teletherapy platform Marvin Behavioral Health, says patients have returned after a counseling session and told her that they took her input to ChatGPT or another AI bot, and that they trust her because the bot confirmed what she said. But Schrumpf isn’t offended by being double-checked. “It’s great that they have the access to a quick second opinion, and then, if it doesn’t agree with me, that allows them to ask me better questions.”
A 2024 poll tracking health misinformation from health policy research group KFF found that 17% of US adults said they consult AI chatbots at least once a month, but 56% of those people were not confident that the info from the AI chatbots was accurate. Still, people are turning to ChatGPT in growing numbers. “That type of technology does want to encourage patients to continue to interact with them,” Allen says. “Ultimately, you do need a human in there to understand the nuances of the communication and the softer communication skills, and the unspoken communication skills, and the entire medical picture and the history.”
Without detailed information, the chatbots will likely give generic advice. But supplying too many personal details is also a risk. People are handing over their entire medical histories to ChatGPT, but HIPAA, the federal law that protects confidential health information, doesn’t apply to consumer AI products. Putting too much specific information about a case into a chatbot can also risk voiding the protections of attorney-client privilege, says Beth McCormack, dean of Vermont Law School. And people likely still need an attorney to understand the implications of AI’s legal advice. “There’s so much nuance to the law,” McCormack says. “It’s so fact dependent.”
An OpenAI spokesperson declined to comment on the record for this story but told me that ChatGPT is not meant to substitute for legal or medical advice; rather, it’s meant to act as a complementary resource to help people understand medical and legal information. The spokesperson also said the company is trying to improve the responses of its models and that it takes steps to protect personal data in the event of legal inquiries. OpenAI changed its policies last fall, specifying that users cannot turn to ChatGPT for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional,” but the chatbot still answers health- and law-related questions.
Professionals aren’t totally against their patients and clients consulting gen AI. There are shortages of doctors, and legal cases often require upfront money for an attorney that people don’t have. While the information AI spits out isn’t always perfect, it largely makes previously gatekept legal and medical advice accessible, breaking it down without jargon. For people who can’t afford upfront legal costs, turning to AI can be helpful in some cases, says Golnoush Goharzad, a personal injury and employment lawyer in California. People are using ChatGPT to represent themselves in court and to act as a stand-in therapist, nutritionist, or physical therapist, and AI tools have helped people who can’t afford lawyers win cases involving issues like eviction and small claims. But Goharzad says she’s had conversations, sometimes with friends, where they think they have cases to sue landlords or others. She asks, “Why? That doesn’t even make any sense, and they’re like, well ChatGPT thinks it makes sense.”
The chatbot floodgates have opened, and it’s too late for professionals to resist. People are going to keep doing their own research. Rather than fight the tools, experts say, there’s room to recognize them and advise people on the best ways to use them. “We need to keep as clinicians in the back of our mind that this might be a tool that is being used, and it can be very helpful, especially with some guidance and integrating it into our treatment plans,” Schrumpf says. “But it could go sideways if we’re not paying attention.” For experts, the time has come to assume that AI is also working on the case.
Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

