Clinical practice guidelines (CPGs) are critical for translating research into clinical practice; however, high-quality evidence alone does not ensure optimal care. The integration of patient values and preferences is essential for developing recommendations that are both relevant and applicable, yet many guidelines continue to underrepresent patient perspectives and lack transparent incorporation of preference research. This review delineates the distinction between values and preferences, examines their influence on preference-sensitive decisions, and evaluates methods for eliciting patient input, such as utility-based measurements, discrete-choice experiments, and qualitative studies. Systematic integration of this evidence throughout the guideline development process enhances both credibility and patient-centeredness. Persistent challenges include issues of representativeness, methodological uncertainty, and cultural barriers. Implementing practical strategies to address these challenges will improve the transparency, relevance, and acceptance of clinical practice guidelines.
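
To illustrate how utility-based preference data of this kind could feed into a preference-sensitive recommendation, the following minimal Python sketch combines patient-elicited utilities with outcome probabilities for two treatment options. All outcome names, probabilities, and utility values are hypothetical assumptions for illustration, not findings of the review.

    # Minimal sketch: weighting outcome probabilities by a patient's elicited
    # utilities to compare two options in a preference-sensitive decision.
    # All numbers below are illustrative only.

    # Utilities elicited from a patient (e.g., via time trade-off or standard
    # gamble), scaled so 1.0 = full health and 0.0 = death.
    patient_utilities = {
        "symptom_free": 0.95,
        "chronic_side_effects": 0.70,
        "major_complication": 0.40,
    }

    # Hypothetical outcome probabilities for each option (each sums to 1.0).
    options = {
        "surgery": {
            "symptom_free": 0.80,
            "chronic_side_effects": 0.05,
            "major_complication": 0.15,
        },
        "watchful_waiting": {
            "symptom_free": 0.60,
            "chronic_side_effects": 0.35,
            "major_complication": 0.05,
        },
    }

    def expected_utility(outcome_probs, utilities):
        """Probability-weighted average of the patient's utilities."""
        return sum(p * utilities[outcome] for outcome, p in outcome_probs.items())

    for name, probs in options.items():
        print(f"{name}: expected utility = {expected_utility(probs, patient_utilities):.3f}")

The same structure applies whether the utilities come from direct elicitation or from preference weights estimated in a discrete-choice experiment; the point of the sketch is only that elicited values enter the comparison explicitly and transparently.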
This review explores the current landscape of artificial intelligence (AI)-assisted semi-automation tools used in systematic reviews and guideline development. With the exponential growth of medical literature, these tools have emerged to improve efficiency and reduce the workload involved in evidence synthesis. Platforms such as Covidence, EPPI-Reviewer, DistillerSR, and Laser AI exemplify how machine learning and, more recently, large language models (LLMs) are being integrated into key stages of the systematic review process, from literature screening to data extraction. Evidence suggests that these tools can save considerable time, with some achieving average reductions of more than 180 hours per review. However, challenges remain in the transparency, reproducibility, and validation of AI performance. In response, international initiatives such as the Responsible AI in Evidence Synthesis (RAISE) project and the Guidelines International Network (GIN) have proposed frameworks to ensure the ethical, trustworthy, and effective use of AI in health research. These frameworks emphasize principles such as transparency, accountability, preplanning, and continuous evaluation. This review highlights both the opportunities and limitations of adopting AI in evidence synthesis and underscores the importance of human oversight and rigorous validation to ensure that such tools enhance, rather than compromise, the integrity of systematic reviews and guideline development.
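
As a concrete illustration of the kind of LLM-assisted title and abstract screening step described above, the following minimal Python sketch submits one record to a language model together with explicit inclusion criteria. It assumes the OpenAI Python client is installed and an API key is configured; the model name, criteria, and record are invented for the example, and any suggested label would still need to be verified by a human reviewer before being acted on.

    # Minimal sketch of LLM-assisted title/abstract screening for a systematic
    # review. The model name, criteria, and record are illustrative only, and
    # every decision is treated as a suggestion pending human verification.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    criteria = (
        "Include randomized controlled trials in adults evaluating telehealth "
        "interventions for type 2 diabetes; exclude protocols, editorials, and "
        "animal studies."
    )

    record = {
        "title": "Telehealth coaching for glycaemic control: a randomized trial",
        "abstract": "We randomized 240 adults with type 2 diabetes to ...",
    }

    prompt = (
        f"Screening criteria: {criteria}\n\n"
        f"Title: {record['title']}\n"
        f"Abstract: {record['abstract']}\n\n"
        "Answer with INCLUDE, EXCLUDE, or UNSURE, followed by one sentence of justification."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic-style output to support reproducibility
    )

    # Record the suggested label with its justification for the audit trail,
    # then route it to a human reviewer for confirmation.
    print(response.choices[0].message.content)

Logging the prompt, model version, and the model's justification alongside the reviewer's final decision is one practical way to address the transparency and reproducibility concerns raised above.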