Some amazing ways AI/LLMs can help with your health conditions

You may be surprised (or maybe not!) to hear that many folks are using AI/LLM (artificial intelligence/large language model) applications for their healthcare issues. They’re using these tools to help figure out whether their symptoms might be due to COVID or Long COVID, to suggest procedures and lab tests that can help rule in or rule out various illnesses, and to suggest wording to use with healthcare providers.

In this post I’ll generally refer only to ChatGPT, since that’s the one most commonly used. Several other AI/LLMs exist, including Gemini and Claude. They may differ in the language they use in their answers and in how they present data, so check out a few of them to see which one you like best.

You should be aware that chats with any AI/LLM are not private and may be searchable on a web search engine. Also, be advised that no AI/LLM is a health professional. You should not consider anything it tells you to be medical advice. Any suggested healthcare options should be discussed with your provider.

One of the primary ways some Long COVID patients are using ChatGPT is to keep track of symptoms and test results. Patients may be seeing several specialists and may need to get lab tests run frequently. They may also be dealing with several symptoms from their chronic health conditions that they want to track. To the extent that they’re comfortable sharing their information, they can tell ChatGPT their vital statistics and health background, then periodically record their symptoms and test results. Some folks with Long COVID have memory issues and have found tremendous benefit in using ChatGPT as a kind of external brain. Because ChatGPT can remember past conversations, they can ask it whether their symptoms and test results are getting better or worse.
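
Purely as an illustration (the details here are made up), a tracking entry might look something like: “Today is March 3. My fatigue was a 6 out of 10, I had brain fog most of the afternoon, and my resting heart rate averaged 85. Please add this to my symptom log and tell me whether this week looks better or worse than last month.” The idea is simply to give the AI/LLM dated, specific entries that it can compare over time.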

Another interesting use is wording questions and descriptions of symptoms more effectively for doctors. Many Long COVID patients have learned that doctors often listen for specific words that trigger the right conversations. Patients are also using ChatGPT to draft notes for their doctors asking for certain tests or procedures, or asking them to consider a diagnosis. ChatGPT can word those notes so they’re geared toward a specific audience, such as a general practitioner versus a specialist.
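
For example (again, just an illustration, not a recommendation for your situation), you might ask: “Please draft a short, polite note to my cardiologist describing the racing heartbeat I get when I stand up, and asking whether a tilt table test or an evaluation for dysautonomia would be appropriate.” You can then edit the draft so it sounds like you before bringing it to your appointment.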

To get the best results from an AI/LLM, you should give it as much context as possible. Creating good prompts is a bit of an art form, and you should feel free to keep refining your question. One good example of a set of steps to use comes from Google AI: Task, Context, References, Evaluate, and Iterate (https://www.youtube.com/watch?v=lp-Ft3Ex5_k). Using this model, you would tell the AI/LLM the following (an example prompt follows the list):

  • what the task is
  • what the context, background, and intended audience are
  • what the references are, with links or text if you have them
  • then enter your prompt and, after getting your answer, evaluate it
  • then iterate by refining your question
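
Putting those steps together, a first prompt might look something like this (the specifics are invented for illustration): “Task: help me prepare for an appointment with my pulmonologist. Context: I’m a Long COVID patient whose main symptoms are shortness of breath and fatigue, and the audience is a specialist. References: here are my last two pulmonary function test results. Please list the questions I should ask, in plain language, with citations where possible.” Then read the answer, evaluate it, and iterate by refining your question.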

Most AI/LLMs will let you engage in a conversation, where you can say something like “give me all the possible diseases with these symptoms, here are some possible references to use, and arrange the answers in a list with citations”. When you get your answer, you may want it to be more technical or less technical, or to consider other circumstances. You can then tell the AI/LLM, “in addition to the previous question, please consider this additional information”. 

AI/LLMs can be very powerful, but it takes practice to get the best answers. And again, be aware that the conversations are not private, and this is not medical advice. Always talk to your provider when you have questions about your healthcare. You may find that AI/LLMs can be a great educational and tracking tool that helps both you and your provider. We’d love to hear whether you’re using an AI/LLM with your symptoms and conditions and how that’s working for you.

Check out board member Phillip Alvelda’s brief explanation of how to use an LLM for health issues:

You can find the entire meeting, with a longer discussion of the utility of and cautions around LLMs, here: https://www.youtube.com/watch?v=q5rh078FzvA