Document Type: Research Paper

Author

Department of Industrial and Systems Engineering, Northern Illinois University, United States.

Abstract

This research paper presents a comprehensive SWOT analysis of ChatGPT in healthcare, examining its strengths, weaknesses, opportunities, and threats. It highlights potential benefits such as improved patient engagement and support for medical education, as well as limitations, including the risk of inaccurate information and the inability to summarize non-text reports. The analysis identifies opportunities such as enabling personalized healthcare delivery and supporting remote patient monitoring, alongside threats such as self-treatment among patients and the risk of an AI-driven infodemic. The significance of this work lies in the insights it offers into the ethical and safe use of ChatGPT in healthcare, giving healthcare professionals and policymakers important considerations for its adoption. The SWOT analysis also serves as a framework for future research and development of ChatGPT and other large language models in healthcare, contributing to the ongoing discussion of their potential impact on patient care and public health.

Keywords

Main Subjects
