Addressing Governance and Regulatory Issues of Generative AI and Large Language Models in Medical Education

References

  1. IBM. What is Generative AI? [cited 10 Sep 2024]; Available from: IBM Research (2024).
  2. IBM. What are Large Language Models (LLMs)? [cited 10 Sep 2024]; Available from: IBM Topics (2024).
  3. Masters, K. Artificial intelligence in medical education. Med. Teach. 41, 976–980 (2019).
  4. Hanycz, S. A. & Antiperovitch, P. A practical review of generative AI in cardiac electrophysiology medical education. J. Electrocardiol. 90, 153903 (2025).
  5. Lu, M.Y. et al. A Multimodal generative AI Copilot for human pathology. Nature (2024).
  6. Wang, X. et al. Foundation model for predicting prognosis and adjuvant therapy benefit from digital pathology in GI cancers. J. Clin. Oncol. JCO2401501 (2025).
  7. Eynon, R. & Young, E. Methodology, legend, and rhetoric: the constructions of AI by academia, industry, and policy groups for lifelong learning. Sci., Technol., Hum. Values 46, 166–191 (2020).
  8. Bell, G. et al. Rapid Response Information Report: Generative AI—Language Models (LLMs) and Multimodal Foundation Models, (Australian Council of Learned Academies, 2023).
  9. Masters, K. Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med. Teach. 45, 574–584 (2023).
  10. Tolsgaard, M. G. et al. The fundamentals of Artificial Intelligence in medical education research: AMEE Guide No. 156. Med. Teach. 45, 565–573 (2023).
  11. Reznick, R. et al. Task force report on artificial intelligence and emerging digital technologies. Royal College of Physicians and Surgeons of Canada (2020).
  12. Gordon, M. et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med. Teach. 46, 446–470 (2024).
  13. Lee, J. et al. Artificial intelligence in undergraduate medical education: a scoping review. Acad. Med. 96, S62–S70 (2021).
  14. Alam, F. et al. Integrating AI in medical education: embracing ethical usage and critical understanding. Front. Med. 10, 1279707 (2023).
  15. AAIN Generative AI Working Group. AAIN Generative Artificial Intelligence Guidelines (Australian Academic Integrity Network, 2023).
  16. Valiant, L. The Importance of Being Educable: A New Theory of Human Uniqueness (Princeton University Press, 2024).
  17. Foltynek, T. et al. ENAI recommendations on the ethical use of artificial intelligence in education. Int. J. Educ. Integr. 19 (2023).
  18. Preiksaitis, C. & Rose, C. Opportunities, challenges, and future directions of generative artificial intelligence in medical education: scoping review. JMIR Med. Educ. 9, e48785 (2023).
  19. Dong, H. et al. Some learning theories for medical educators. Med. Sci. Educ. 31, 1157–1172 (2021).
  20. Kolb, D. A. Experiential Learning: Experience as the Source of Learning and Development (Prentice-Hall, 1984).
  21. Ericsson, K. A. & Staszewski, J. J. Skilled memory and expertise: mechanisms of exceptional performance. In Klahr, D. & Kotovsky, K. (eds) Complex Information Processing, 235–267 (Lawrence Erlbaum Associates, 1989).
  22. Wang, S. et al. Artificial intelligence in education: a systematic literature review. Expert Syst. Appl. 252 (2024).
  23. Macnamara, B. N. et al. Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers’ awareness? Cogn. Res. Princ. Implic. 9, 46 (2024).
  24. Tran, M. et al. Generative artificial intelligence: the ‘more knowledgeable other’ in a social constructivist framework of medical education. npj Digit. Med. Under Review (2025).
  25. Safranek, C. W. et al. The role of large language models in medical education: applications and implications. JMIR Med. Educ. 9, e50945 (2023).
  26. Linardatos, P. et al. Explainable AI: a review of machine learning interpretability methods. Entropy 23 (2020).
  27. Bearman, M. & Ajjawi, R. Learning to work with the black box: Pedagogy for a world with artificial intelligence. Br. J. Educ. Technol. 54, 1160–1173 (2023).
  28. Clusmann, J. et al. The future landscape of large language models in medicine. Commun. Med. 3, 141 (2023).
  29. van der Niet, A. G. & Bleakley, A. Where medical education meets artificial intelligence: ‘Does technology care?’ Med. Educ. 55, 30–36 (2021).
  30. Reddy, S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement. Sci. 19, 27 (2024).
  31. Mykhailov, D. Philosophical dimension of today’s educational technologies: framing ethical landscape of the smart education domain. NaUKMA Res. Pap. Philo. Relig. Stud. 68–75 (2023).
  32. Moritz, S. et al. Generative AI (gAI) in medical education: Chat-GPT and co. GMS J. Med. Educ. 40, Doc54 (2023).
  33. Brin, D. et al. Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Sci. Rep. 13, 16492 (2023).
  34. Alfertshofer, M. et al. Sailing the seven seas: a multinational comparison of ChatGPT’s performance on medical licensing examinations. Ann. Biomed. Eng. 52, 1542–1545 (2024).
  35. Lucas, H. C. et al. A systematic review of large language models and their implications in medical education. Med. Educ. (2024).
  36. Zack, T. et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit. Health 6, e12–e22 (2024).
  37. Li, Z. et al. Large language models and medical education: a paradigm shift in educator roles. Smart Learning Environ. 11 (2024).
  38. Chan, K. S. & Zary, N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med. Educ. 5, e13930 (2019).
  39. Tolsgaard, M. G. et al. The role of data science and machine learning in Health Professions Education: practical applications, theoretical contributions, and epistemic beliefs. Adv. Health Sci. Educ. Theory Pract. 25, 1057–1086 (2020).
  40. Balasooriya, C. et al. Learning, teaching and assessment in health professional education and scholarship in the next 50 years. FoHPE 25 (2024).
  41. Abd-Alrazaq, A. et al. Large language models in medical education: opportunities, challenges, and future directions. JMIR Med. Educ. 9, e48291 (2023).
  42. Lomis, K. et al. Artificial Intelligence for Health Professions Educators. NAM Perspect. 2021 (2021).
  43. Nagi, F. et al. Applications of artificial intelligence (AI) in medical education: a scoping review. Stud. Health Technol. Inf. 305, 648–651 (2023).
  44. Boscardin, C. K. et al. ChatGPT and Generative artificial intelligence for medical education: potential impact and opportunity. Acad. Med. 99, 22–27 (2024).
  45. Artsi, Y. et al. Large language models for generating medical examinations: systematic review. BMC Med. Educ. 24, 354 (2024).
  46. Laupichler, M. C. et al. Large language models in medical education: comparing ChatGPT- to human-generated exam questions. Acad. Med. 99, 508–512 (2024).
  47. Scott, K. & Hart, J. Digital technologies in health: implications for health professional education. FoHPE. 25 (2024).
  48. Pearce, J. & Chiavaroli, N. Rethinking assessment in response to generative artificial intelligence. Med. Educ. 57, 889–891 (2023).
  49. Fawns, T. & Schuwirth, L. Rethinking the value proposition of assessment at a time of rapid development in generative artificial intelligence. Med. Educ. 58, 14–16 (2024).
  50. Rampton, V. et al. Implications of artificial intelligence for medical education. Lancet Digit. Health 2, e111–e112 (2020).
  51. Jackson, P. et al. Artificial intelligence in medical education – perception among medical students. BMC Med. Educ. 24, 804 (2024).
  52. Australian Academy of Technological Sciences and Engineering (ATSE) & Australian Institute for Machine Learning (AIML). Responsible AI: Your Questions Answered (Canberra/Adelaide, 2023).
  53. Moy, S. et al. Patient perspectives on the use of artificial intelligence in health care: a scoping review. J. Patient Cent. Res. Rev. 11, 51–62 (2024).
  54. Mikkelsen, J. G. et al. Patient perspectives on data sharing regarding implementing and using artificial intelligence in general practice – a qualitative study. BMC Health Serv. Res. 23, 335 (2023).
  55. Khullar, D. et al. Perspectives of patients about artificial intelligence in health care. JAMA Netw. Open 5, e2210309 (2022).
  56. Kudina, O. & de Boer, B. Co-designing diagnosis: Towards a responsible integration of Machine Learning decision-support systems in medical diagnostics. J. Eval. Clin. Pract. 27, 529–536 (2021).
  57. Mykhailov, D. A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Hum. Aff. 31, 149–164 (2021).
  58. Juravle, G. et al. Trust in artificial intelligence for medical diagnoses. Prog. Brain Res. 253, 263–282 (2020).
  59. Candelon, F. et al. AI regulation is coming. Harvard Bus. Rev. (2021).
  60. Longoni, C. et al. Resistance to medical artificial intelligence. J. Consum. Res. 46, 629–650 (2019).
  61. Sauerbrei, A. et al. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med. Inf. Decis. Mak. 23, 73 (2023).
  62. de Boer, B. & Kudina, O. What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms. Theor. Med. Bioeth. 45, 245–266 (2021).
  63. Kingsford, P. A. & Ambrose, J. A. Artificial intelligence and the doctor-patient relationship. Am. J. Med. 137, 381–382 (2024).
  64. Lorenzini, G. et al. Artificial intelligence and the doctor-patient relationship: expanding the paradigm of shared decision making. Bioethics 37, 424–429 (2023).
  65. Mittelstadt, B. The Impact of Artificial Intelligence on the Doctor-Patient Relationship (Council of Europe 2021).
  66. Mittermaier, M. et al. Collaborative strategies for deploying AI-based physician decision support systems: challenges and deployment approaches. NPJ Digit. Med. 6, 137 (2023).
  67. U.S. Department of Health and Human Services. Artificial Intelligence (AI) at HHS. [cited 10 Sep 2024]; Available from: HHS (2024).
  68. Jonnagaddala, J. & Wong, Z. S. Privacy preserving strategies for electronic health records in the era of large language models. NPJ Digit. Med. 8, 34 (2025).
  69. Productivity Commission, Australian Government. Making the most of the AI opportunity: The challenges of regulating AI (Canberra, 2024).
  70. Nikolic, S. et al. ChatGPT, Copilot, Gemini, SciSpace and Wolfram versus higher education assessments: an updated multi-institutional study of the academic integrity impacts of Generative Artificial Intelligence (GenAI) on assessment, teaching, and learning in engineering. Austr. J. Eng. Educ. 29, 1–28 (2024).
  71. Schuwirth, L. The need for national licensing examinations. Med. Educ. 41, 1022–1023 (2007).
  72. Schuwirth, L. W. & Van der Vleuten, C. P. Programmatic assessment: from assessment of learning to assessment for learning. Med. Teach. 33, 478–485 (2011).
  73. Bhanji, F. et al. Competence by design: the role of high-stakes examinations in a competence based medical education system. Perspect. Med. Educ. 13, 68–74 (2024).
  74. Shumailov, I. et al. AI models collapse when trained on recursively generated data. Nature 631, 755–759 (2024).
  75. De Angelis, L. et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front. Public Health 11, 1166120 (2023).
  76. Xu, X. et al. Opportunities, challenges, and future directions of large language models, including ChatGPT, in medical education: a systematic scoping review. J. Educ. Eval. Health Prof. 21, 6 (2024).
  77. Ngo, B. et al. The cases for and against artificial intelligence in the medical school curriculum. Radio. Artif. Intell. 4, e220074 (2022).
  78. Franco D’Souza, R. et al. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Med. Educ. Online 29, 2330250 (2024).
  79. Fleisher, L. A. & Economou-Zavlanos, N. J. Artificial Intelligence can be regulated using current patient safety procedures and infrastructure in hospitals. JAMA Health Forum 5, e241369 (2024).
  80. Chouffani El Fassi, S. et al. Not all AI health tools with regulatory authorization are clinically validated. Nat. Med. 30, 2718–2720 (2024).
  81. Australian Human Rights Commission. Australia Needs AI Regulation [cited 3 Dec 2024]; Available from: Human Rights (2023).
  82. Mesko, B. & Topol, E. J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit. Med. 6, 120 (2023).
  83. Gibson, D. et al. Learning theories for artificial intelligence promoting learning processes. Br. J. Educ. Technol. 54, 1125–1146 (2023).
  84. Yu, H. & Guo, Y. Generative artificial intelligence empowers educational reform: current status, issues, and prospects. Front. Educ. 8 (2023).
  85. Gniel, H. AI: A Regulatory Perspective (Australian Government Tertiary Education Quality and Standards Agency, 2023).
  86. The Royal Australian College of General Practitioners. Artificial intelligence in primary care [cited 20 Sep 2024]; Available from: RACGP (2024).
  87. Knopp, M. I. et al. AI-enabled medical education: threads of change, promising futures, and risky realities across four potential future worlds. JMIR Med. Educ. 9, e50373 (2023).
  88. Australian Health Practitioner Regulation Agency. Meeting your professional obligations when using Artificial Intelligence in healthcare [cited 3 Dec 2024]; Available from: AHPRA (2024).
  89. Australian Government Digital Transformation Agency. Policy for the Responsible Use of AI in Government (Commonwealth of Australia, 2024).
  90. The White House. Fact Sheet: Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence [cited 13 Dec 2024]; Available from: White House (2023).
  91. Council of Europe, Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe Treaty Series No. 225 (2024).
  92. Department for Science, Innovation & Technology. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023 (2023).
  93. United Nations AI Advisory Body, Governing AI for Humanity (2024).
  94. Australian Government Department of Industry, Science and Resources. Safe and responsible AI in Australia consultation: Australian Government’s interim response (Commonwealth of Australia, 2024).
  95. Wellner, G. A postphenomenological guide to AI regulation. J. Hum.-Technol. Relat. 2, 1–18 (2024).
  96. The White House. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People [cited 20 Sep 2024]; Available from: AI Bill of Rights (2024).
  97. Government of the United Kingdom, National AI Strategy, HM Government (2021).
  98. Digital Policy Office: The Government of the Hong Kong Special Administrative Region of the People’s Republic of China, Ethical Artificial Intelligence Framework (2024).
  99. Ministry of Education, Culture, Sports, Science and Technology – Japan. White Paper on Science, Technology, and Innovation: How AI will Transform Science, Technology and Innovation (2024).
  100. Australian Government Department of Industry, Science and Resources. Voluntary AI Safety Standard (Commonwealth of Australia, 2024).

Exploring Generative AI: Transforming Education and Healthcare

In recent years, generative artificial intelligence (AI) has emerged as a groundbreaking technology, reshaping how we interact with information and each other. This blog post delves into the essential aspects of generative AI, its applications in various fields—including healthcare and education—and the implications for the future.

What is Generative AI?

Generative AI refers to algorithms that can generate new content, whether it be text, images, music, or even complex code, by learning from existing data. IBM’s research blog on generative AI outlines its capacity to create original responses and solutions based on learned patterns, making it a powerful tool in many domains (IBM, 2024).
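
To make this concrete, here is a minimal sketch of text generation with an open model. It assumes the Hugging Face transformers library and the small gpt2 checkpoint, neither of which is named in the IBM material; any causal language model would serve the same illustrative purpose.

    # Minimal sketch: producing new text from learned patterns.
    # Assumes the Hugging Face transformers library and the small open
    # gpt2 checkpoint; both are illustrative choices, not recommendations.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Generative AI can support medical education by"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    # The continuation is newly generated rather than retrieved verbatim.
    print(outputs[0]["generated_text"])

The gpt2 model is tiny by modern standards, so its continuations are rough; the point is only that the output is generated from learned patterns, not looked up.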

Key Features of Generative AI:

  • Deep Learning: Utilizes extensive datasets to understand and replicate patterns.
  • Content Creation: Capable of producing various forms of media, enhancing creativity while reducing manual effort.
  • Personalization: Offers tailored solutions and responses, improving user experience.

Large Language Models (LLMs)

At the heart of the generative AI phenomenon are large language models (LLMs). These models are trained on vast amounts of text data to understand and generate human-like text. IBM describes LLMs as large neural networks capable of sophisticated language tasks, from composing essays to answering questions intelligibly (IBM, 2024).

Applications of LLMs:

  • Chatbots: Providing customer service and support.
  • Educational Tools: Assisting in tutoring and personalized learning experiences.
  • Research Assistance: Summarizing literature and providing insights quickly (see the sketch below).
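
As an illustration of the research-assistance use case, the sketch below condenses a short passage. It again assumes the Hugging Face transformers library; facebook/bart-large-cnn is simply one publicly available summarization checkpoint and is not prescribed by any of the sources cited here.

    # Minimal sketch: summarizing a passage (e.g. an abstract).
    # Assumes the Hugging Face transformers library; the checkpoint named
    # below is one publicly available option, used purely for illustration.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    abstract = (
        "Generative artificial intelligence and large language models are being "
        "adopted across medical education, offering interactive tutoring, automated "
        "content generation, and rapid synthesis of literature, while raising "
        "questions about ethics, data privacy, bias, and regulatory oversight."
    )

    summary = summarizer(abstract, max_length=40, min_length=10, do_sample=False)
    print(summary[0]["summary_text"])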

Generative AI in Medical Education

The integration of generative AI into medical education has been met with enthusiasm and cautious optimism. A recent practical review highlights its potential to enhance learning and teaching in medical education (Hanycz & Antiperovitch, 2025).

Key Benefits:

  1. Interactive Learning: Students can engage with AI systems for real-time feedback and assistance.
  2. Content Generation: AI can create simulated patient scenarios, valuable for learning clinical skills (see the sketch after this list).
  3. Research Support: It aids in generating literature reviews and summarizing findings, thereby streamlining the research process.
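
To make the content-generation point concrete, here is a hedged sketch of prompting a hosted LLM to draft a simulated patient scenario. It assumes the openai Python client (v1 or later) with an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative, not recommendations.

    # Illustrative sketch: drafting a simulated patient scenario for teaching.
    # Assumes the openai Python client (v1+) and OPENAI_API_KEY set in the
    # environment; the model name and prompt are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    scenario_request = (
        "Write a brief simulated patient scenario for a third-year medical student: "
        "a 58-year-old presenting with chest pain. Include history, vital signs, "
        "and three questions the student should ask. Do not reveal the diagnosis."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative placeholder for any chat-capable model
        messages=[
            {"role": "system", "content": "You are a medical education content designer."},
            {"role": "user", "content": scenario_request},
        ],
    )

    print(response.choices[0].message.content)
    # Generated scenarios should be reviewed by a clinician before classroom use.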

Relevant Studies:

  • Tolsgaard et al. (2023) noted that AI could augment educational practices in health professions through innovative teaching techniques and assessments.
  • Masters (2023), in AMEE Guide No. 158 on the ethical use of AI in health professions education, underscores the importance of incorporating ethical considerations into AI usage to ensure responsible deployment.

Challenges and Considerations

While the potential of generative AI is vast, its implementation is not without challenges. Concerns include:

  • Ethical Usage: Ensuring that AI systems operate within ethical guidelines, especially in healthcare.
  • Data Privacy: Safeguarding sensitive information while utilizing AI technologies (a simple de-identification sketch follows this list).
  • Bias and Inequity: Mitigating the risks of AI perpetuating existing biases found in training datasets.
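
One common, if partial, safeguard for the data-privacy concern above is de-identifying text before it reaches any external model. The regex-based sketch below is deliberately simplistic and purely illustrative; it is no substitute for validated de-identification tools or institutional governance requirements.

    # Deliberately simplistic sketch: redacting obvious identifiers before a
    # prompt is sent to an external LLM. Real de-identification needs validated
    # tooling and institutional approval; this only illustrates the idea.
    import re

    REDACTIONS = [
        (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),           # dates like 04/07/1962
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
        (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),  # simple phone patterns
        (re.compile(r"\bMRN[: ]*\d+\b", re.IGNORECASE), "[MRN]"),   # medical record numbers
    ]

    def redact(text: str) -> str:
        """Replace obvious identifiers with placeholder tokens."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    note = "Patient John Doe, MRN 483920, DOB 04/07/1962, contact jdoe@example.com."
    print(redact(note))
    # -> Patient John Doe, [MRN], DOB [DATE], contact [EMAIL].
    # Note the name is untouched: real pipelines must also handle names and addresses.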

Conclusion

As generative AI continues to evolve, it can drive significant advances in fields such as education and healthcare. Embracing this technology requires a careful balance of innovation, ethical scrutiny, and regulatory compliance to harness its full potential.

For further reading and insights on generative AI and its implications, refer to:

  • IBM’s overview of generative AI (IBM, 2024).
  • IBM’s detailed examination of large language models (IBM, 2024).

As we navigate this transformative era, ongoing discussion, research, and collaboration will be crucial in shaping a future where generative AI responsibly enhances our capabilities.
