Background and Aims: Effective non-invasive risk-stratification tools are essential for the early detection of individuals at high risk for cirrhosis, enabling timely intervention. We conducted a prospective, head-to-head comparison of fibrosis-based and outcome-driven routine blood-based risk scores for predicting cirrhosis-related morbidity in a large community-based cohort.

Approach and Results: We first performed a systematic review to identify risk scores derived from routine liver blood tests, and then evaluated them in the UK Biobank. Severe cirrhosis-related morbidity was defined using International Classification of Diseases, Tenth Revision codes. Discrimination and clinical utility were assessed using the Wolbers C-index, time-dependent area under the receiver operating characteristic curve, area under the precision-recall curve (AUPRC), and cumulative incidence accounting for competing risks. The review identified 12 eligible risk scores (10 novel models plus APRI and FIB-4). Among 385,738 participants, the 10-year cumulative incidence of severe cirrhosis-related morbidity was 0.39% (1498 events). Most novel scores outperformed APRI and FIB-4. LiverRisk showed the highest discrimination at 5 years (C-index 0.847) and 10 years (C-index 0.812), closely followed by CORE (5-year C-index 0.839; 10-year C-index 0.811). In contrast, CORE achieved better enrichment of high-risk individuals, with an AUPRC of 0.088 compared with 0.063 for LiverRisk. At low referral proportions, increasing the CORE threshold yielded greater net benefit than a sequential CORE-LiverRisk referral strategy.

Conclusions: CORE and LiverRisk are the most discriminative routine blood-based tools for predicting long-term cirrhosis-related morbidity in the community. When referrals are limited, a higher-threshold CORE-only strategy may outperform a sequential CORE-LiverRisk approach.
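For reference, the two benchmark scores named above, APRI and FIB-4, are computed from routine liver blood tests using their standard published formulas. The sketch below illustrates those two formulas only (the novel scores such as CORE and LiverRisk are not reproduced here); the function names, the default AST upper limit of normal (40 U/L), and the example values are illustrative assumptions, not part of this study.

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_1e9_l: float) -> float:
    """FIB-4 index = (age x AST) / (platelets x sqrt(ALT)).

    AST and ALT in U/L; platelet count in 10^9/L.
    """
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

def apri(ast_u_l: float, platelets_1e9_l: float,
         ast_uln: float = 40.0) -> float:
    """APRI = (AST / AST upper limit of normal) x 100 / platelets.

    ast_uln defaults to 40 U/L, a commonly used upper limit of normal
    (an assumption; labs vary).
    """
    return (ast_u_l / ast_uln) * 100.0 / platelets_1e9_l

# Illustrative values, not data from the cohort:
print(round(fib4(61, 40, 30, 220), 2))
print(round(apri(80, 200), 2))
```

Both scores rise with AST and fall with platelet count, reflecting the portal hypertension-related thrombocytopenia that accompanies advancing fibrosis; FIB-4 additionally weights by age.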