
NOVEMBER 21, 2024

Discovering the AI Capabilities Introduced by ChatGPT

A Clinical Exploration


Originally published by our sister publication Anesthesiology News

D. John Doyle, MD, PhD, DPhil
Emeritus Professor of Anesthesiology
Case Western Reserve University
Consultant Anesthesiologist
Department of General Anesthesiology
Cleveland Clinic
Doyle reported no relevant financial disclosures.

As we continue to harness the power of these remarkable technologies, it is imperative to strike a balance between their extraordinary capabilities and ethical considerations to ensure that they are applied responsibly.




The field of artificial intelligence continues to expand at an unprecedented pace, pushing the boundaries of what machines can achieve.1 In November 2022, OpenAI, a prominent AI research and development organization, introduced two groundbreaking AI products: a “chatbot” known as ChatGPT (release 3.5) and an AI image generator known as DALL•E 2.2 In May 2024, OpenAI introduced a combined successor to these products known as GPT-4o. This landmark product and its many recent AI cousins have garnered significant attention for their remarkable capabilities and potential applications in education, commerce, medicine and industry.2

ChatGPT represents a significant leap forward in conversational AI. The GPT part stands for “generative pre-trained transformer,” and this software technology enables natural language interactions with machines in truly remarkable ways. ChatGPT allows human users to engage in meaningful conversations with an AI companion that offers well-crafted, intelligent responses free of grammatical errors and misspelled words. ChatGPT has demonstrated its proficiency in various applications, including customer support, video content creation, language translation and creative writing.

GPT-4o has also brought about a revolution in the realm of visual AI. By understanding the semantics and context of the provided prompts, GPT-4o also can synthesize intricate and imaginative visual content, as illustrated in the original artwork examples provided later in this article.

This article investigates some potential applications of such AI chatbots, with discussions concerning their promise to transform patient care, medical research and clinical decision making. It is expected that ChatGPT and other chatbot implementations will soon be able to facilitate doctor–patient interactions, streamline administrative tasks and support medical professionals in making informed decisions. Although the clinically focused exploration of AI offered in this article concentrates on the new capabilities introduced by ChatGPT, many of the considerations apply equally well to many recently launched competing AI products, such as pi.ai, an especially friendly chatbot, or You.com.


We have now entered a new technological era where AI has the power to augment human intelligence, enable new discoveries and redefine the boundaries of what is possible in medicine. By responsibly embracing these advances, we can pave the way for a future where AI seamlessly collaborates with healthcare workers, hopefully bringing about improved patient outcomes and a more efficient healthcare ecosystem—that’s the goal. The reality is likely to be more nuanced, with profound questions to be addressed as AI applications continue to evolve.

Getting Started

To get started with ChatGPT, first obtain access to these services by creating an account with OpenAI; details are available at OpenAI.com. Once your account is set up, follow the supplied documentation on how best to format requests (“prompts”). Experiment by trying out different requests and observing the responses. With GPT-4o, you can generate text based on a given prompt, as well as generate images based on textual descriptions. It’s worth noting that both ChatGPT and its AI cousins are powerful tools, but they should be used responsibly. (OpenAI provides guidelines on its website for ethical AI use.) To better illustrate these AI tools, I have included several AI-authored examples. Use cases illustrated include a discussion of the dangers of AI (Figure 1) and of the limitations of using ChatGPT for medical advice (Figure 2). Other examples concern the construction of quality questions (prompts), illustrated in Figure 3 and discussed further below, while Figure 4 shows ChatGPT providing completely wrong information, sometimes called a “hallucination.” In another example, I asked ChatGPT to prepare a computer program to calculate body mass index using the Python programming language (Figure 5).

Concerns about the potential risks associated with the proliferation of artificial intelligence (AI) have been raised by various thinkers and experts. While AI has the potential to bring numerous benefits and advancements to society, there are valid concerns about its impact if not properly developed, deployed, and regulated.

One of the primary concerns is the concept of superintelligence, where AI systems surpass human intelligence in virtually all areas. If such a superintelligent AI were to act in ways that are not aligned with human values or if its goals were misaligned with the well-being of humanity, it could lead to unintended consequences and potentially pose a threat to human existence.

Additionally, there are worries about the misuse or unintended consequences of AI technologies. For example, if AI systems are used for malicious purposes, such as autonomous weapons, it could lead to an escalation of conflicts or the loss of human control over critical decision-making processes.

Another concern is the potential for AI to exacerbate existing societal inequalities. If AI technologies are not developed with fairness, transparency, and inclusivity in mind, they could perpetuate or even amplify biases present in the data they are trained on, leading to discriminatory outcomes in areas such as hiring, lending, or criminal justice.

To address these concerns, many researchers and organizations advocate for responsible AI development and the adoption of robust ethical frameworks. These frameworks include principles like transparency, accountability, fairness, and human oversight. Additionally, there are ongoing discussions about the need for AI safety research, international collaboration, and governance mechanisms to ensure the safe and beneficial development of AI technologies.

Ultimately, while AI has enormous potential, it is crucial to proceed with caution, foresight, and careful consideration of the potential risks and ethical implications involved to mitigate any negative impact on humanity.

Figure 1. ChatGPT commentary on the potential risks associated with the proliferation of artificial intelligence.

Prompt: What are the limitations of using ChatGPT for medical advice?

CHATGPT Response: As an AI language model, ChatGPT has several limitations when it comes to providing medical advice. Some of these limitations include:

  • Lack of Personalized Diagnosis: ChatGPT does not have access to your medical history or physical examination. Therefore, it cannot provide personalized diagnosis and treatment options based on your individual health condition.
  • Limited Information: ChatGPT can only provide information based on the data it has been trained on, which may not be comprehensive or up-to-date. It may also lack context or details that are critical for accurate medical advice.
  • No Substitute for Medical Professionals: ChatGPT is not a substitute for medical professionals such as doctors, nurses, and pharmacists. In case of a medical emergency, it is essential to seek professional medical help immediately.
  • Liability Issues: Providing medical advice without proper credentials or licensing can result in legal liability issues. Therefore, it is essential to seek advice from qualified medical professionals.

In summary, while ChatGPT can provide general information and suggestions related to medical conditions, it is not a substitute for professional medical advice, diagnosis, or treatment. It is always advisable to seek advice from a qualified healthcare professional for any specific health concerns.

Figure 2. ChatGPT commentary: What are the limitations of using ChatGPT for medical advice?

Prompt Engineering

The most important “secret” to getting useful commentary from AI chatbots like ChatGPT is asking the right question. The best responses are obtained when the prompt question is carefully crafted, as a lawyer would do in a courtroom cross-examination. The technical term for this art is “prompt engineering,” and understanding its principles can help enormously in obtaining well-crafted, well-organized responses. In other words, prompt engineering is a technique used in language models like ChatGPT to generate specific outputs or responses to user inputs, and this is done by creating specialized prompts that guide the model toward producing the desired output.

The process of prompt engineering thus involves carefully constructing prompts that include relevant keywords, phrases or other cues that help guide the model toward generating the desired output. As crafted prompts are designed to elicit a specific response or output from the model, the choice of words and phrases used in the prompt can have a significant impact on the quality and accuracy of the generated responses.

Prompt engineering can be used to improve the accuracy and relevance of the responses generated by the AI language model. For example, a prompt can be designed to provide specific information or context that can help the model generate a more accurate and relevant response. Similarly, prompts can be used to guide the model toward generating responses that match a particular tone, style or level of formality. Figure 3 gives examples provided by ChatGPT.

Example of Good Prompt Engineering:

Prompt: “Can you recommend some popular restaurants in New York City for Italian cuisine? I’m looking for options in the midtown area. Please provide me with at least three suggestions and include the average price range if possible.”

In this example, the prompt clearly states the user’s intent, specifies the location and cuisine preference, and provides additional details such as the desired area and request for multiple recommendations with pricing information. The prompt is specific, well structured and includes relevant instructions, which can help the chatbot generate accurate and useful responses.

Example of Bad Prompt Engineering:

Prompt: “Restaurants NYC Italian food?”

In this example, the prompt is vague and lacks clarity. It fails to provide essential information such as the user’s preferences, location or any specific requirements. The prompt is poorly formatted and lacks explicit instructions. The chatbot may struggle to understand the user’s intent and provide relevant recommendations, resulting in inaccurate or incomplete responses.

Figure 3. Examples of good and bad prompt engineering offered by ChatGPT.
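For readers who script their interactions with a chatbot, the same principle can be expressed in code. The following minimal Python sketch (the function name and template are my own illustration, not an OpenAI convention) assembles a specific, well-structured prompt from explicit components, in contrast to a vague one-liner:

```python
def build_prompt(cuisine: str, location: str, count: int, extras: str = "") -> str:
    """Assemble a specific, well-structured restaurant prompt from explicit
    components: cuisine, location, number of suggestions, and any extra
    instructions (e.g., price range)."""
    prompt = (
        f"Can you recommend some popular restaurants in {location} "
        f"for {cuisine} cuisine? Please provide at least {count} suggestions"
    )
    if extras:
        prompt += f" and {extras}"
    return prompt + "."
```

A vague query such as “Restaurants NYC Italian food?” leaves every one of these slots unfilled; forcing yourself to populate them is, in effect, prompt engineering by checklist.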
Figure 4. Example of a completely fictitious literature citation in a ChatGPT response to a query about obtaining an mRNA review article.
It is expected that future AI chatbots aimed at scientific research will support real-time queries to PubMed and other resources to address this problem. The problem ultimately arises from the language model design itself, which is probabilistic in nature.
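A real-time citation check of the kind envisioned above could lean on NCBI’s public E-utilities service. As a sketch (the service and its query parameters are real; the helper name and the choice to search by article title are my own assumptions), the query URL can be built like this:

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(citation_title: str) -> str:
    """Build an E-utilities query that searches PubMed for an article title.

    A chatbot back end could fetch this URL and treat an empty result set
    as a signal that the cited article may be a hallucination.
    """
    params = {
        "db": "pubmed",               # search the PubMed database
        "term": f"{citation_title}[Title]",
        "retmode": "json",            # machine-readable response
    }
    return f"{EUTILS_BASE}?{urlencode(params)}"
```

Fetching the URL and checking the returned hit count against zero is left out here, since it requires a live network connection.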
Figure 5. ChatGPT-generated computer program to calculate body mass index using the Python programming language.
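For readers without access to the figure, a representative sketch of such a program (the function names, input checking, and WHO-style category cutoffs are my own illustrative choices, not a reproduction of the ChatGPT output) might look like this:

```python
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Return body mass index: weight (kg) divided by height (m) squared."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("Weight and height must be positive.")
    return weight_kg / height_m ** 2

def classify_bmi(bmi: float) -> str:
    """Map a BMI value onto the standard adult categories."""
    if bmi < 18.5:
        return "Underweight"
    if bmi < 25:
        return "Normal weight"
    if bmi < 30:
        return "Overweight"
    return "Obese"
```

For example, a 70-kg patient who is 1.75 m tall has a BMI of about 22.9, which falls in the normal-weight range.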

Medical Applications of AI

Medical AI is changing the delivery of clinical care in numerous ways, such as assisting in drug discovery, advising on clinical drug selection or interpreting radiographic images.3-8 The importance of these developments is highlighted by the launching of several medical journals that focus on AI in medicine, such as Artificial Intelligence in Medicine, Journal of Medical Artificial Intelligence, IEEE Journal of Biomedical and Health Informatics, and NEJM AI.

One particularly useful AI application in anesthesiology would be an advanced tool that reviews the preoperative patient’s medical record and generates a detailed synopsis. Although some electronic health records offer basic versions of this feature, future AI enhancements would go further by recognizing specific circumstances relevant to anesthesiologists. For instance, AI could automatically search for conditions like pseudocholinesterase deficiency, review previous surgical histories to anticipate potential complications and tailor postoperative pain management plans based on past effectiveness and the risk of opioid dependence. Such an AI application would also intelligently disregard outdated concerns, such as a cardiac echocardiogram from two years ago showing an ejection fraction of 17% in a patient with a recent heart transplant, or an old report of severe aortic stenosis in a patient with a new aortic valve.
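The flagging step described above can be caricatured in a few lines of code. The sketch below (the condition list and function are purely illustrative, and a real system would use far more sophisticated natural language processing than substring matching) scans a chart note for anesthesia-relevant conditions:

```python
# Illustrative watch list; a production system would be far more extensive.
ANESTHESIA_FLAGS = [
    "pseudocholinesterase deficiency",
    "malignant hyperthermia",
    "difficult airway",
    "opioid dependence",
]

def flag_record(record_text: str) -> list[str]:
    """Return the anesthesia-relevant conditions mentioned in a chart note."""
    text = record_text.lower()
    return [condition for condition in ANESTHESIA_FLAGS if condition in text]
```

The harder task, as the transplant and valve-replacement examples show, is teaching the system which historical findings to ignore, not just which to flag.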

Additional general medical capabilities might include conversations about how best to work up a patient. To illustrate this, Figure 6 provides a sample ChatGPT commentary with suggestions on working up a hypothetical anemic woman, whereas Figure 7 shows the result when ChatGPT is asked to generate the text portion for an instructional slideshow for a talk on medical ethics.

Prompt: I have an otherwise healthy 28-year-old woman with heavy periods. Hb only 7.6. What should I do next?

CHATGPT Response: A hemoglobin level of 7.6 in a 28-year-old woman with heavy periods is concerning and requires prompt evaluation and management. Here are the steps you can take next:

  • Assess the severity of bleeding: Ask the patient about the duration, frequency, and amount of bleeding, including the use of sanitary products. You can also use validated bleeding assessment tools like Pictorial Blood Assessment Chart (PBAC) to quantify the amount of bleeding.
  • Evaluate for underlying causes: Heavy menstrual bleeding may be caused by various conditions such as uterine fibroids, endometrial hyperplasia or cancer, adenomyosis, pelvic inflammatory disease, and bleeding disorders like von Willebrand disease or platelet function disorders. Take a detailed medical history, perform a physical examination, and consider ordering further tests, such as a pelvic ultrasound, Pap smear, and coagulation studies, to determine the underlying cause.
  • Address the anemia: A hemoglobin level of 7.6 indicates severe anemia and requires prompt treatment to prevent complications. You may consider blood transfusion or iron supplementation, depending on the severity of anemia and the patient’s symptoms. Ensure to counsel the patient about dietary changes, such as increasing iron-rich foods, and follow-up appointments to monitor the response to treatment.
  • Manage the heavy menstrual bleeding: Depending on the underlying cause, various treatment options are available, such as hormonal therapy, nonsteroidal anti-inflammatory drugs (NSAIDs), and surgical interventions like endometrial ablation or hysterectomy. Discuss the treatment options with the patient, taking into account the patient’s preferences, future fertility goals, and potential side effects.
  • Follow-up: Schedule a follow-up appointment to monitor the patient’s response to treatment and to repeat blood tests to assess the hemoglobin levels.

In summary, heavy menstrual bleeding and severe anemia in a 28-year-old woman require a prompt and thorough evaluation, addressing the underlying cause and managing the anemia and heavy bleeding.

Figure 6. Sample ChatGPT commentary with suggestions on working up a hypothetical anemic woman.

Prompt: Can you please generate the text portion for an instructional slide set (e.g., PowerPoint) for a talk on medical ethics?

CHATGPT Response: Sure! Here’s an outline for an instructional slide set on medical ethics:

  • Slide 1: Introduction Title: “Introduction to Medical Ethics” Overview of the importance of medical ethics in healthcare.
  • Slide 2: Definitions. Definitions of key terms related to medical ethics, such as autonomy, beneficence, non-maleficence, justice, and informed consent.
  • Slide 3: Principles of Medical Ethics. Overview of the four principles of medical ethics: autonomy, beneficence, non-maleficence, and justice. Discussion of how these principles apply to various aspects of medical practice.
  • Slide 4: Ethical Issues in Clinical Practice. Discussion of common ethical issues that arise in clinical practice, such as end-of-life care, resource allocation, and conflicts of interest. Case examples to illustrate these ethical issues.
  • Slide 5: Informed Consent. Overview of the concept of informed consent, including the information that should be provided to patients and the process for obtaining consent. Discussion of how to handle situations where a patient is unable to provide consent.
  • Slide 6: Cultural Competence. Importance of cultural competence in healthcare, and how it relates to medical ethics. Strategies for providing culturally sensitive care.
  • Slide 7: Confidentiality and Privacy. Overview of the concepts of confidentiality and privacy in healthcare, including HIPAA regulations. Discussion of how to balance the need for confidentiality and privacy with the need to share information for patient care.
  • Slide 8: Research Ethics. Overview of the ethical principles that guide research, such as respect for persons, beneficence, and justice. Discussion of common ethical issues in research, such as informed consent, confidentiality, and conflict of interest.
  • Slide 9: Conclusion. Summary of the key points covered in the presentation. Reminder of the importance of medical ethics in healthcare, and the role that healthcare providers play in upholding ethical principles.
  • Slide 10: References. List of resources for further reading on medical ethics, including relevant articles, books, and organizations.
Figure 7. Sample result when ChatGPT was asked to generate the text portion for an instructional slideshow for a talk on medical ethics.

ChatGPT can even contribute to the medical humanities. Figure 8 shows ChatGPT as a playwright contributing to developing a theatrical production centering on a dying woman. Figure 9 shows that ChatGPT is not a bad poet either.

Prompt: Write an empathic movie script where a doctor must explain to her patient that her test results have come back, and that she has only a few months left to live since her cancer has spread to her brain, bones and lung.

CHATGPT Response:

Title: Life’s Only Promise

FADE IN: INT. DOCTOR’S OFFICE - DAY

DR. MEGAN (late 30s) sits behind her desk, looking through a pile of test results, while JEN (early 40s) nervously fidgets in her chair. Dr. Megan takes a deep breath, then looks up to face Jen.

DR. MEGAN: I’m sorry to have kept you waiting, Jen. Your test results are back.

Jen’s anxiety-filled eyes meet the doctor’s.

DR. MEGAN: Jen, I’m afraid I have some difficult news. Your cancer has spread to your brain, bones, and lung.

Jen’s hands start to shake, and she begins to cry.

JEN: (pause) How long do I have left?

DR. MEGAN: I’m sorry, Jen. Given the extent of the cancer, I estimate that you have only a few months left.

Jen sobs uncontrollably, and Dr. Megan stands up to console her. They embrace as Jen tries to make sense of the news.

DR. MEGAN: I understand this news is devastating. But, Jen, you have to keep hope.

JEN: (tearfully) Hope for what? I’m dying!

DR. MEGAN: (pause) You’re not dying today, Jen. We have a few months, and during that time, we’ll make sure that you’re comfortable and that we do everything we can to manage your symptoms.

Jen nods, still crying.

DR. MEGAN: (pause) I know this isn’t the news you wanted to hear, but I promise that we’ll be with you every step of the way. We’ll make sure that your remaining time is filled with love, happiness, and comfort.

Jen takes a deep breath and tries to compose herself.

JEN: (pause) Thank you, Dr. Megan.

DR. MEGAN: (pause) We’ll start with palliative care right away. We’ll also discuss treatment options and make a plan that’s best for you.

Jen nods, and Dr. Megan hands her a tissue.

DR. MEGAN: (pause) We can fight this together, Jen. And, no matter what, we’ll make sure that your life’s only promise is filled with love and happiness.

FADE OUT.

Figure 8. Sample result with ChatGPT in a playwright role contributing to developing a theatrical production centering on a dying woman.

Prompt: Write a poem about pain, suffering and opium.

CHATGPT Response:

Pain, like a raging fire,
Sweeps through the body and soul,
Suffering, an unrelenting choir,
Sings its mournful toll.

Opium, like a soothing balm,
Calms the searing pain,
Bringing peace with its healing calm,
As the sufferer’s tears wane.

A plant with ancient power,
A medicine for the mind,
In the depths of pain’s dark hour,
Opium’s relief we find.

But in the midst of its embrace,
Lies a treacherous path,
As addiction takes its place,
Like a thief in the aftermath.

Opium, a double-edged sword,
A gift and curse combined,
A comfort that cannot be ignored,
But with caution, we must find.

For in the end, it is not the opium,
But the spirit that must prevail,
To overcome the pain and fears,
And the suffering to curtail.

So let us be mindful and aware,
As we seek relief from pain,
With compassion and love to share,
And hope in our hearts remain.

Figure 9. Sample result with ChatGPT in a poetic role.

AI-based medical apps that will certainly be developed for wide use include an app to manage type 1 and type 2 diabetes (with different algorithms for each) and an app to select antibiotics for hospital use. Such apps may be free, like the popular Linux operating system, or paid, like most Microsoft products. The FDA has issued guidance regarding medical software products used clinically.9 Some apps will be sponsored by Big Pharma, just like the books, videos and brochures the industry distributed to doctors decades ago. Another app many clinicians might welcome is a perioperative pain management system based on specific consensus guidelines; in many cases, published clinical guidelines can be readily adapted for such purposes. Note that many or most of these apps will draw on rule-based AI.

Applications of AI to Anesthesiology

One of the earliest AI tools for anesthesiology was a program called ATTENDING developed in 1983 at Yale University by Perry Miller.10 This was a computer system to critique anesthetic plans by evaluating a patient’s medical problems, planned surgical procedures and proposed anesthetic methods, providing a risk–benefit analysis and alternative approaches. Over the four decades since that landmark effort, many more AI developments related to anesthesiology have been described or proposed. What follows is a sampling of these developments.

First, AI can potentially be used to help with the planning of anesthesia by identifying optimal protocols based on patient characteristics, medical history and procedural requirements. By developing personalized anesthesia plans, AI will help ensure quality anesthesia delivery while minimizing side effects. For instance, an AI planning advisor might list necessary equipment and drugs for upcoming cases or recommend double treatment options for patients at high risk for postoperative nausea and vomiting. Such systems can be rule-based, like using procedural sedation guidelines or awake intubation management flowcharts.

Second, AI models are potentially useful in predicting clinical outcomes, including the likelihood of postoperative complications and need for extended recovery times in the PACU or ICU.11-14 For example, the online ACS NSQIP (American College of Surgeons National Surgical Quality Improvement Program) Surgical Risk Calculator estimates the risk for death or adverse outcomes after surgery, although it does not consider factors such as the surgeon’s experience.15

Third, AI-powered tools can potentially help with real-time clinical documentation, reducing the cognitive load on anesthesia providers and improving the quality of anesthetic records. This will likely ensure more accurate and comprehensive documentation, freeing up providers to focus on patient care.

Fourth, AI tools will be developed to help with clinical workflow and quality matters, potentially improving anesthesia safety practices. For instance, existing rule-based AI can remind providers to administer the next dose of antibiotics or alert users when a drug infusion is nearing completion.
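Rule-based reminders of this kind require no machine learning at all. A minimal sketch (the drug names and intervals are illustrative placeholders, not clinical guidance):

```python
# Illustrative redosing intervals in minutes; NOT clinical guidance.
REDOSE_INTERVAL_MIN = {"cefazolin": 240, "ampicillin": 120}

def redose_due(drug: str, minutes_since_last_dose: int) -> bool:
    """Rule-based check: has the redosing interval for this drug elapsed?"""
    interval = REDOSE_INTERVAL_MIN.get(drug.lower())
    if interval is None:
        raise KeyError(f"No redosing rule configured for {drug!r}")
    return minutes_since_last_dose >= interval
```

An anesthesia information management system could evaluate such rules continuously against the record’s timestamps and surface an alert when one fires.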

Fifth, AI algorithms will be able to recommend personalized drug dosages based on patient data such as age, weight and medical history. AI-based closed-loop anesthesia delivery systems, incorporating EEG monitoring, also will adjust drug infusion rates in real time, potentially optimizing drug delivery and minimizing suboptimal dosing.16
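The closed-loop idea can be caricatured as a simple proportional controller that nudges the infusion rate toward a processed-EEG target; real systems are far more elaborate, and the gain, target, and pump limits below are arbitrary illustrative numbers, not clinical values:

```python
def adjust_infusion(rate_ml_h: float, eeg_index: float,
                    target: float = 50.0, gain: float = 0.1,
                    min_rate: float = 0.0, max_rate: float = 20.0) -> float:
    """Proportional control step: increase the infusion rate when the
    processed-EEG index is above target (patient too 'light'), decrease
    it when below, clamped to the pump's allowed range."""
    error = eeg_index - target
    new_rate = rate_ml_h + gain * error
    return max(min_rate, min(max_rate, new_rate))
```

Run at each monitoring interval, this step drives the index toward the target; clamping the output keeps a misbehaving controller within hard pump limits.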

AI will also eventually assist in the real-time monitoring of physiologic parameters, providing early detection of complications or deviations from the anesthesia plan. Automated alerts will prompt anesthesiologists to take corrective actions in response to specific patient conditions, such as hypotension and bradycardia after spinal anesthesia or the onset of sepsis.17

Finally, other applications include AI-assisted identification of sonoanatomy for ultrasound-guided regional anesthesia,18 identification of acid–base disorders from blood gas data19 and automated cardiac rhythm analysis.20

Is ChatGPT Trustworthy?

ChatGPT is like a genius friend who is super eager to help you and happy to answer any question you might have on almost any topic: from summarizing Plato’s “The Allegory of the Cave” to helping you write a computer program (Figure 5). ChatGPT can assist you instantly at no cost (a free version is currently available). ChatGPT has an astonishing intellect and is wonderfully polite. There is almost nothing ChatGPT cannot comment on intelligently, eloquently and with grace.

But there is a problem: Your genius friend is not fully trustworthy. ChatGPT sometimes provides incorrect information, rather like an honors student at an oral exam who narrowly misses an A: the answers are mostly good, but misunderstandings surface during the discussion. The bottom line is this: Verify everything, especially literature citations (Figure 4).

Exploring the ‘Dark Side’

Technology has a dark side, such as spam emails and cyberbullying. As we enter the rapidly evolving “Age of AI,” we should evaluate how new AI developments will affect our world and plan for possible negative consequences. Some thinkers are concerned that AI proliferating without limits could end humanity as we know it.

An individual asking ChatGPT for technical details on the various synthesis pathways for making sarin, carfentanil or other dangerous molecules is an obvious “red flag” that warrants potential action by authorities to ensure the safety of the population. But which action? Should OpenAI automatically forward (via an AI filter, of course) worrisome ChatGPT queries to the FBI and other agencies concerned with national security? And what constitutes “worrisome”? What about the author researching bioweapons in preparation for writing a spy novel where James Bond saves the world from the deployment of deadly terrorist toxins? These issues illustrate the special challenges of developing corporate and government policies regarding AI deployment.

As one policy example, some controversial thinkers suggest that a tax on AI and robotic services will be needed to help support individuals who have become unemployed as a result of these very services. Such policy suggestions are sometimes made in conjunction with the idea of providing all adult citizens with a universal basic income.

Another difficult issue concerns essays submitted by high school and university students. As you can see from the examples provided herein, AI chatbots can produce commentaries of sufficient quality to be genuinely indistinguishable from human-generated material, generating truly original text that can pass plagiarism detection software.

Another concern with AI chatbots is “hallucinations.”21 Users of AI chatbots like ChatGPT more than occasionally find that the responses contain “careless errors” (to use a term favored by one of my teachers). An example is when you and ChatGPT are conversing on a particular scientific or clinical matter and you ask for specific citations to the scientific literature. Sometimes the citations provided turn out to be duds when entered into PubMed.gov; they were a “hallucination” with no real-world existence (Figure 4). Other hallucinated errors include mistaking the nationality of a prominent person or inventing information outright, such as getting a national capital wrong.

Finally, some experts are concerned that AI might be harmful to human health in some situations. For example, Federspiel et al22 worry that this might happen due to “the control and manipulation of people,” as exemplified by China’s Social Credit System. Other concerns they identify are the effects of AI on unemployment and the use of lethal autonomous weapons systems driven by AI.

Giving Credit

When an AI chatbot has contributed substantially to a formal talk or scientific manuscript, this fact should be acknowledged, in some manner, much as one acknowledges colleagues who comment critically on early drafts of a manuscript. “This slide made with ChatGPT assistance” is a little textbox I occasionally add to some of my PowerPoint slides. And, of course, I used ChatGPT to help with producing the text for this article, mostly to get suggestions to see how the text might be improved.

Conclusion

The above exploration of the remarkable capabilities introduced by AI chatbots offers truly transformative potential in clinical applications. These advanced AI tools have demonstrated their prowess in enhancing learning, problem-solving and creative content generation. As we continue to harness the power of these new technologies, it is imperative to strike a balance between their extraordinary capabilities and ethical considerations to ensure they are applied responsibly.

References

  1. Buttazzo G. Front Artif Intell. 2023;6:1226990.
  2. Dave T, Athaluri SA, Singh S. Front Artif Intell. 2023;6:1169595.
  3. Liu J, Wang C, Liu S. J Med Internet Res. 2023;25:e48568.
  4. Fatani B. Cureus. 2023;15(4):e37285.
  5. Huang J, Tan M. Am J Cancer Res. 2023;13(4):1148-1154.
  6. Eysenbach G. JMIR Med Educ. 2023;9:e46885.
  7. Bhargava DC, Jadav D, Meshram VP, et al. Med Leg J. 2023;91(4):223-225.
  8. Sedaghat S. Clin Med (Lond). 2023;23(3):278-279.
  9. FDA. Guidances With Digital Health Content. November 3, 2023. Accessed August 1, 2024. https://www.fda.gov/medical-devices/digital-health-center-excellence/guidances-digital-health-content
  10. Miller PL. Anesthesiology. 1983;58(4):362-369.
  11. Tully JL, Zhong W, Simpson S, et al. J Med Syst. 2023;47(1):71.
  12. Bera K, Braman N, Gupta A, et al. Nat Rev Clin Oncol. 2022;19(2):132-146.
  13. Dong J, Feng T, Thapa-Chhetry B, et al. Crit Care. 2021;25(1):288.
  14. Frize M, Ibrahim D, Seker H, et al. Conf Proc IEEE Eng Med Biol Soc. 2004;2004:3202-3205.
  15. Chudgar N, Yan S, Hsu M, et al. J Thorac Cardiovasc Surg. 2022;163(4):1509-1516.e1.
  16. Wingert T, Lee C, Cannesson M. Anesthesiol Clin. 2021;39(3):565-581.
  17. Mollura M, Lehman LH, Mark RG, et al. Philos Trans A Math Phys Eng Sci. 2021;379(2212):20200252.
  18. Lloyd J, Morse R, Taylor A, et al. Adv Exp Med Biol. 2022;1356:117-140.
  19. Kacki E, Malolepszy A. Stud Health Technol Inform. 2005;116:235-240.
  20. Siontis KC, Friedman PA. Card Electrophysiol Clin. 2021;13(3):543-554.
  21. Hatem R, Simmons B, Thornton JE. Cureus. 2023;15(9):e44720.
  22. Federspiel F, Mitchell R, Asokan A, et al. BMJ Glob Health. 2023;8:e010435.

Copyright © 2024 McMahon Publishing, 545 West 45th Street, New York, NY 10036. Printed in the USA. All rights reserved, including the right of reproduction, in whole or in part, in any form.
