
3 results for "Practice guideline"


Reviews

Patient values and preferences in guideline development
Su-Hyun Kim
J Evid-Based Pract 2026;2(1):8-15.   Published online March 30, 2026
DOI: https://doi.org/10.63528/jebp.2026.00001
Clinical practice guidelines (CPGs) are critical for translating research into clinical practice; however, high-quality evidence alone does not ensure optimal care. The integration of patient values and preferences is essential for developing recommendations that are both relevant and applicable, yet many guidelines continue to underrepresent patient perspectives and lack transparent incorporation of preference research. This review delineates the distinction between values and preferences, examines their influence on preference-sensitive decisions, and evaluates methods for eliciting patient input, such as utility-based measurements, discrete-choice experiments, and qualitative studies. Systematic integration of this evidence through guideline development enhances both credibility and patient-centeredness. Persistent challenges include issues of representativeness, methodological uncertainty, and cultural barriers. Implementing practical strategies to address these challenges will improve transparency, relevance, and acceptance of clinical practice guidelines.
This review explores the current landscape of artificial intelligence (AI)-assisted semi-automation tools used in systematic reviews and guideline development. With the exponential growth of medical literature, these tools have emerged to improve efficiency and reduce the workload involved in evidence synthesis. Platforms such as Covidence, EPPI-Reviewer, DistillerSR, and Laser AI exemplify how machine learning and, more recently, large language models (LLMs) are being integrated into key stages of the systematic review process—ranging from literature screening to data extraction. Evidence suggests that these tools can save considerable time, with some achieving average reductions of over 180 hours per review. However, challenges remain in transparency, reproducibility, and validation of AI performance. In response, international initiatives such as the Responsible AI in Evidence Synthesis (RAISE) project and the Guideline International Network (GIN) have proposed frameworks to ensure the ethical, trustworthy, and effective use of AI in health research. These include principles like transparency, accountability, preplanning, and continuous evaluation. This review highlights both the opportunities and limitations of adopting AI in evidence synthesis and underscores the importance of human oversight and rigorous validation to ensure that such tools enhance, rather than compromise, the integrity of systematic reviews and guideline development.

Original Article

Development of the clinical practice guideline protocol registration program and its pilot application in Korea
Hyun Jung Kim, You Kyoung Lee, Soo Young Kim, Kyu Chang Wang, Ho Sin Gwak, Yeol Kim
J Evid-Based Pract 2025;1(1):24-29.   Published online March 31, 2025
DOI: https://doi.org/10.63528/jebp.2025.00004
Background
The prospective registration of clinical practice guideline (CPG) protocols has been proposed several times; however, such registration is not yet widely practiced. The objective of this study was to summarize the experience of the CPG protocol registration program in Korea.
Methods
This study was performed in the following order: 1) formation of a methodological expert group; 2) development of a CPG protocol template; 3) CPG protocol preparation and expert review; and 4) exploration of the knowledge and attitudes of guideline developers toward the CPG protocol.
Results
The final version of the CPG protocol template consists of four parts (planning, development, finalization, and timetable). Protocols for 18 cancers were submitted by 14 medical societies. Conflicts of interest (n = 14, 77.8%), the guideline development group (GDG; n = 9, 50%), the scope of the CPG (n = 9, 50%), and key questions (n = 8, 44.4%) were the under-reported areas in the submitted protocols. The GDG section (n = 13, 72.7%) was the most frequently misreported area of the protocol. CPG developers generally agreed on the advantages of protocol registration but responded that it was difficult to understand the concepts in the protocol and to fill them in with appropriate content. The areas where CPG developers reported difficulty were the recommendation grade (n = 9, 75.0%), GDG composition (n = 7, 58.3%), and determining key questions (n = 7, 58.3%).
Conclusions
The CPG protocol registration program was planned and piloted in Korea, and the pilot demonstrated its feasibility. The developed CPGs should later be evaluated to determine whether protocol registration affects CPG quality, using indices such as the transparency and clarity of the CPG.