| Title | Description |
|---|---|
| Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases | To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. |
| Latest developments of generative artificial intelligence and applications in ophthalmology | Generative AI is transforming ophthalmology by improving efficiency, accuracy, and innovation in clinical practice and research through data processing, medical documentation, decision support, and patient communication. This review explores its integration into clinical workflows, highlights potential benefits and risks, and emphasizes the need for standardized evaluation frameworks and responsible adoption. |
| Cybersecurity in the generative artificial intelligence era | Generative AI (GenAI) can create original content and is rapidly transforming healthcare through clinical, operational, and research applications. This review examines its benefits, cybersecurity risks—such as data privacy issues, bias, and hallucinations—and proposes strategies to mitigate these risks while safely maximizing its potential. |
| Comparison of the Quality of Discharge Letters Written by Large Language Models and Junior Clinicians: Single-Blinded Study | The aim of this study was to assess the performance of GPT-4 in generating discharge letters written from urology specialist outpatient clinics to primary care providers and to compare their quality against letters written by junior clinicians. |
| AI-Based Diabetic Macular Edema Screening for Improved Care in Low-Resource Settings | An AI model incorporating privacy preservation, improved generalizability, and uncertainty estimation shows potential for enhancing diabetic macular edema screening, especially in low-resource settings. Future improvements may include multimodal imaging integration and detection of vision-threatening center-involved disease. |
| Digital tools in heart failure: addressing unmet needs | This paper reviews the role of digital tools in heart failure care, including screening, diagnosis, treatment optimization, and patient monitoring, as well as their impact on research. It highlights the need for clinical frameworks and supportive systems to effectively implement these technologies globally, especially in regions with limited healthcare resources. |
| Implementation of Artificial Intelligence in Retinopathy of Prematurity Care: Challenges and Opportunities | Retinopathy of prematurity (ROP) diagnosis is well suited for artificial intelligence due to its image-based nature, offering potential to improve screening as cases rise globally. This review examines existing AI and imaging systems, highlights implementation challenges such as infrastructure and regulation, and emphasizes the need for validated, scalable, and cost-effective screening programs to prevent childhood blindness. |
| Re-training and fine-tuning a deep learning artificial intelligence model for the detection of referable diabetic retinopathy in children and young people with diabetes | SELENA+ achieved clinically acceptable performance for the detection of referable diabetic retinopathy (DR) in this cohort of children and young people (CYP), even though the AI model was trained on an adult population. |
| From generalist to specialist: Adapting vision‑language models via task‑specific visual instruction tuning | VITask is a framework designed to improve the task-specific performance of vision–language models (VLMs) by integrating task-specific models and aligning response distributions. Tested across multiple medical imaging datasets, it demonstrates improved diagnostic accuracy and flexibility for specialized medical tasks. |
| Artificial Intelligence-Based Disease Activity Monitoring to Personalized Neovascular Age-Related Macular Degeneration Treatment: A Feasibility Study | These results demonstrate the potential of an AI-based disease activity (DA) model to optimize treatment decisions in the clinical setting and to detect and monitor DA in patients with neovascular age-related macular degeneration (nAMD). |
| A Competition for the Diagnosis of Myopic Maculopathy by Artificial Intelligence Algorithms | To evaluate deep learning (DL) algorithms for myopic maculopathy (MM) classification and segmentation, and to compare their performance with that of ophthalmologists. |
| Diagnostic performance of deep learning for infectious keratitis: a systematic review and meta-analysis | Infectious keratitis (IK) is the leading cause of corneal blindness globally. Deep learning (DL) is an emerging tool for medical diagnosis, though its value in IK is unclear. |
| National use of artificial intelligence for eye screening in Singapore | This case study describes the development and national implementation of an AI-based software (SELENA+) for diabetic retinopathy screening in Singapore through the Singapore Integrated Diabetic Retinopathy Program. The system demonstrated high sensitivity in detecting referable and vision-threatening diabetic retinopathy, highlighting the potential of AI to expand screening capacity and support large-scale healthcare programs. |
| Federated Learning in Healthcare: A Benchmark Comparison of Engineering and Statistical Approaches for Structured Data Analysis | This study underscores the relative strengths and weaknesses of engineering and statistical federated learning approaches, providing recommendations for selecting between them based on distinct study characteristics. |
| Ethical and regulatory challenges of large language models in medicine | This viewpoint discusses key ethical concerns surrounding the use of large language models (LLMs) in medicine, including issues related to data privacy, intellectual property, and data provenance. It emphasizes the need for comprehensive frameworks and mitigation strategies to ensure the responsible and ethical integration of LLMs into medical practice. |
| Medical ethics of large language models in medicine | This paper proposes a bioethics-based framework for the responsible use of large language models in medicine, guided by the principles of autonomy, beneficence, nonmaleficence, and justice. It emphasizes the shared responsibility of patients, clinicians, and governing systems to ensure ethical, equitable, and effective application of LLMs in healthcare. |
| Med‑Pal: A lightweight large language model for medication enquiry | This study introduces Med-Pal, a lightweight domain-specific large language model chatbot designed to answer medication-related patient inquiries. Fine-tuned with expert-curated data, Med-Pal demonstrated improved accuracy, safety, and clarity in patient communication compared with other lightweight LLMs, highlighting its potential for digital health support. |
| A proposed S.C.O.R.E. evaluation framework for large language models: Safety, consensus, objectivity, reproducibility, and explainability | This paper proposes a comprehensive qualitative evaluation framework for large language models (LLMs) in healthcare that expands beyond the traditional accuracy and quantitative metrics currently in use. |
| Effect of childhood atropine treatment on adult choroidal thickness using sequential deep learning–enabled segmentation | This study demonstrated that short-term (2-4 years) atropine treatment during childhood was associated with an increase in choroidal thickness of 20-40 μm in adulthood (10-20 years later), after adjusting for age, sex, and axial length. |
| Generative artificial intelligence and ethical considerations in health care: A scoping review and ethics checklist | The widespread use of Chat Generative Pre-trained Transformer (ChatGPT) and other emerging technologies powered by generative artificial intelligence (GenAI) has drawn attention to the potential ethical issues they can cause, especially in high-stakes applications such as health care. However, these ethical discussions have not yet been translated into operationalisable solutions. |
| Generative AI and large language models in reducing medication‑related harm and adverse drug events: A scoping review | Medication-related harm has a significant impact on global healthcare costs and patient outcomes, contributing to death in 4.3 per 1000 patients. Generative artificial intelligence (GenAI) has emerged as a promising tool for mitigating the risks of medication-related harm. |
| An ethics assessment tool for artificial intelligence implementation in healthcare: CARE-AI | This paper highlights ethical challenges in implementing AI prediction models in healthcare, particularly concerning fairness, bias, and transparency. It emphasizes the need for a bioethics-focused toolkit to guide the responsible and equitable deployment of AI in clinical practice. |
| Mitigating Cognitive Biases in Clinical Decision-Making Through Multi-Agent Conversations Using Large Language Models: Simulation Study | This study aimed to explore the role of large language models (LLMs) in mitigating cognitive biases in clinical decision-making through a multi-agent conversation framework. |
| Variable importance analysis with interpretable machine learning for fair risk prediction | This study introduces Shapley Variable Importance Cloud (ShapleyVIC), a method that improves the robustness and interpretability of variable importance analysis in machine learning models. By using an ensemble of regression models and estimating uncertainty, it helps identify significant predictors and supports fairer clinical risk prediction. |
| Clinical domain knowledge–derived template improves post hoc AI explanations in pneumothorax classification | Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed for automating the diagnostic process of pneumothorax. |
| Ascle—A Python natural language processing toolkit for medical text generation: Development and evaluation study | This study describes the development and preliminary evaluation of Ascle, a toolkit tailored for biomedical researchers and clinical staff: an easy-to-use, all-in-one solution that requires minimal programming expertise. |
| FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare | The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. This paper introduces FAIM, a fairness-aware interpretable modeling framework for trustworthy machine learning in healthcare. |