Technological innovations, including risk-stratification algorithms and large databases of longitudinal population health and genetic data, are allowing us to develop a deeper understanding of how individual behaviours, characteristics, and genetics are related to health risk. The clinical implementation of risk-stratified screening programmes that utilise risk scores to allocate patients into tiers of health risk is foreseeable in the future.
1. Introduction
“All Screening Programmes Do Harm; Some Do Good as Well, and of These, Some Do More Good than Harm at Reasonable Cost”
[1].
By 2025, 130 million genomes are expected to be sequenced, of which 83 million will be cancer genomes
[2]. Whole genome sequencing (WGS) results, when combined with other real-world clinical and socio-demographic data, will allow for ongoing risk stratification and re-classification with prognostic value not only for prevention but also for resource allocation via targeted screening within given populations. This shift to a more dynamic, algorithmic approach will improve, if not radically alter, population screening programmes from both scientific and social perspectives.
In 1968, the WHO, in its Principles and Practice of Screening for Disease, set out the criteria for modern population screening programmes
[3]. Around the world, the general criteria for establishing a screening programme still include the classical components of the importance of the condition, an acceptable and suitable test (clinical utility), the availability and acceptability of treatment, a policy on whom to treat, the availability of screening facilities, and continuity
[4]. The WHO criteria have since been refined to also include the importance of an acceptable balance between benefit and harm, integrated monitoring and evaluation (cost-effectiveness), equity, and informed choices
[5]. With the exception of newborn screening programmes that seek to find asymptomatic, at-risk newborns and that are generally considered to be in the best interests of all children, participation in population screening remains voluntary.
Screening finds apparently well persons in a given population who may have a disease and who are then individually tested. Screening can also identify persons with an increased susceptibility to a genetic disease, but screening is not a diagnostic test. Screening is applied across populations for diseases for which early detection and treatment can prevent, or at least ameliorate, the consequences. Screening can also be applied to sub-populations, such as women over 50 years of age, who are routinely offered breast cancer screening. Pap smear and mammography screening initiatives date back to the 1950s and 1960s.
The intention of any cancer screening programme is not merely to detect cancers but also to treat them so as to reduce deaths. Not all individuals involved in screening benefit, and in fact, some can be harmed. The trade-offs include missing a cancer diagnosis or subjecting an individual to unnecessary investigation, overdiagnosis, and overtreatment. To date, several studies in breast and prostate cancer have reported that tailoring screening to an individual’s risk level could improve the efficiency of the screening programme and reduce its adverse consequences
[6][7][8].
At present, mammographic breast screening programmes for the general population use age as the only entry criterion. The starting and stopping ages (varying from 40 to 74 years) and the frequency of screens (yearly to triennially) differ between countries
[9].
The risk of developing breast cancer varies among women. There are different subtypes of breast cancer, and the growth rate of breast cancers, even of the same subtype, varies widely, from being almost static to fast-growing
The age-based, “one-size-fits-all” approach, however, does not take into account the heterogeneity of breast cancer subtypes, their biological behaviour, or the distribution of risk in the population.
Fast-growing cancers quickly lead to symptoms and death, and uniform screening at fixed intervals is likely to miss them, as they can arise and progress between screening rounds. At the other extreme, some tumours grow slowly enough that the individual would die from other causes before the cancer ever manifests symptoms. As mentioned, detecting and treating these tumours does not necessarily benefit the person and can harm them. However, it is not yet possible to determine whether an individual cancer has been overdiagnosed. Identifying subgroups of individuals likely to have progressive tumours and targeting screening to them, or tailoring the screening frequency and age according to their risk score, could reduce the adverse consequences of screening.
The implementation of this approach must be carefully examined, including its socio-ethical and legal implications. Indeed, some would argue that the notion that risk scores will offer equivalent utility population-wide by providing informative risk stratification across multiple diseases is misleading: “raising unrealistic expectations and implementing programmes without careful evaluation risks compromising the application of risk scores for specific niches, and indeed, of genomic medicine as a whole”
[11].
Another challenge inherent in implementing population health screening programmes is establishing their benefit–harm balance
[12][13][14]. Evidence of an appropriate benefit–harm balance can be difficult to adduce for screening programmes, as the earlier detection of disease can increase post-detection survival without decreasing cancer-specific mortality (a phenomenon known as lead-time bias)
[15][16][17]. Randomised controlled trials (RCTs) are considered to be the gold standard for demonstrating the effectiveness of screening programmes. However, performing RCTs of population screening programmes is an onerous undertaking, as such trials are costly to implement and must follow a stable cohort over many years
[14]. To alleviate the difficulties of performing RCTs, evidence of a screening programme’s benefit–harm balance could instead be adduced from data derived from the long-term evaluation of its functioning once implemented
[14].
Furthermore, the practical implementation of population screening creates additional potential obstacles to establishing that its benefits outweigh its harms, as improper implementation can negate the anticipated benefits
[18]. Economic factors can also be relevant to the implementation of a population screening programme. The use of health-sector resources to implement and maintain such a programme must be justified relative to other potential uses of available resources (e.g., administrative resources, funding, labour, technological infrastructure)
[7][14].
Assessing the benefit–harm balance of a population screening programme also requires reaching consensus on a number of social policy issues. Such policy issues include determining the most desirable balance between sensitivity and specificity (i.e., the trade-off between false negatives and false positives) and establishing appropriate metrics for assessing cost-effectiveness (e.g., increased screening could prevent more deaths but could also increase false findings, overdiagnosis, and use of resources). For example, the earlier detection of the concerned health condition through screening can lead to an improved prognosis, an improved quality of life, or a less invasive course of treatment. Conversely, a false positive screening test could lead to unnecessary clinical interventions and to stress and anguish for affected persons
[19]. It is also critical to address issues of equitable access and equitable outcomes in programme implementation.
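As a rough illustration of the sensitivity–specificity trade-off noted above, the following sketch (in Python, using entirely hypothetical prevalence, sensitivity, and specificity values rather than figures from any cited programme) estimates the expected true and false findings from a single screening round; at low disease prevalence, even a highly specific test can generate many more false positives than true positives.

```python
# Illustrative only: prevalence, sensitivity, and specificity values are hypothetical.
def expected_screening_outcomes(n_screened, prevalence, sensitivity, specificity):
    """Expected true/false positives and negatives for one screening round."""
    n_diseased = n_screened * prevalence
    n_healthy = n_screened - n_diseased
    true_pos = n_diseased * sensitivity        # cases correctly flagged for follow-up
    false_neg = n_diseased - true_pos          # cases missed by the test
    false_pos = n_healthy * (1 - specificity)  # healthy people recalled unnecessarily
    true_neg = n_healthy - false_pos
    return {"TP": true_pos, "FN": false_neg, "FP": false_pos, "TN": true_neg}

# Hypothetical round: 100,000 people screened, 0.5% prevalence,
# 90% sensitivity, 95% specificity.
print(expected_screening_outcomes(100_000, 0.005, 0.90, 0.95))
# -> roughly 450 true positives but about 4,975 false positives
```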
Further, the active surveillance of identified low-risk cancers is increasingly used as an alternative to surgical intervention in certain cancer treatment contexts, suggesting that accurate risk stratification is of growing relevance to clinical decision-making (e.g., for prostate cancers or thyroid cancers)
[20][21][22][23][24]. Both Canada and England have successfully implemented risk-stratified approaches to the follow-up care of cancer survivors (i.e., determining the magnitude of follow-up care and whether oncologists or primary care physicians perform such follow-up)
[25]. In sum, there is evidence that risk stratification could improve the cost–benefit and risk–benefit profiles of cancer interventions, including cancer screening, whose cost–benefit and risk–benefit propositions remain ambiguous today
[26][27][28]. Presently, individual characteristics, such as age and family history, are used to determine whether screening or preventive mastectomy is liable to produce better or worse outcomes for patients
[29].
2. PART I: Risk Stratification: Socio-Ethical Implications
Risk stratification is a proposed method to improve the benefit–harm balance of screening programmes and other health interventions (e.g., preventive surgeries, lifestyle modification)
[30][31]. The rationale is to identify high-risk individuals within a chosen population for targeted health interventions rather than to perform such interventions across the entire population. This can improve the balance of risks and benefits and the cost-effectiveness of the concerned interventions (e.g., by reducing the number of false positives and the extent of overdiagnosis in screening programmes)
[32]. By ensuring that health interventions are provided to the individuals who stand to benefit from them the most (i.e., through stratification), the potential negative externalities of such interventions can be minimised and the potential benefits maximised. The targeted provision of screening and other health interventions could also help to ensure that greater benefit is obtained from such initiatives relative to their costs. It must be acknowledged, however, that certain elements of cost–benefit analysis remain inherently subjective. Competing values are engaged, including accessibility, equity, and benefit maximisation for both individuals and subpopulations
[33].
Cancer care stands to benefit from risk-stratified screening and other risk-stratified health interventions. Certain cancers exhibit a much worse prognosis for high-risk individuals than for low-risk individuals, which implies that accurate risk stratification for the purposes of targeted intervention may be more apt to improve clinical outcomes than simply to increase cancer detection
Screening methodologies that are effective in cancer care are often associated with high costs or limited availability (e.g., genetic testing for BRCA1 and BRCA2)
[35]. Preventive surgeries often impose significant burdens on patients (e.g., inherent risks or associated adverse effects)
[36][37]. Information such as age, gender, family history of cancer, select biomarkers, and membership in select populations that exhibit heightened risk (e.g., persons of Ashkenazi Jewish ancestry)
[38] is already relied on to personalise cancer care to anticipated risk in such subpopulations, partly in recognition of these imperatives
[39][40].
In developing policies and assessment methodologies for the provision of risk-stratified healthcare, it is necessary to define the concerned population and to categorise population members according to their individual health risk, in accordance with a defined risk-stratification methodology. Methodologies for assessing individual health risk include algorithmic methodologies, which entail the calculation of a risk score from input data, and human-initiated methodologies, which rely on clinical judgment
[41]. In practice, the most common are hybrid approaches, which involve the application of human interpretation to algorithm-derived scores
Consequently, the ethical, legal, and policy issues considered here relate both to the development of algorithmic risk-stratification methodologies and to their application by clinicians (i.e., clinical implementation).
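To make the distinction between algorithmic and hybrid approaches more concrete, the following is a minimal sketch in Python; the risk model, weights, thresholds, and tier definitions are entirely hypothetical and are not drawn from any validated instrument.

```python
# Illustrative only: the risk model, weights, thresholds, and tiers are hypothetical.
from dataclasses import dataclass

@dataclass
class Individual:
    age: int
    family_history: bool
    polygenic_score: float  # e.g., a standardised PRS

def risk_score(person: Individual) -> float:
    """Toy additive score combining non-genetic and genetic inputs."""
    score = 0.02 * max(person.age - 40, 0)          # age contribution
    score += 0.5 if person.family_history else 0.0  # family history contribution
    score += 0.3 * person.polygenic_score           # genetic contribution
    return score

def allocate_tier(score: float) -> str:
    """Map a continuous risk score onto discrete screening tiers."""
    if score >= 1.0:
        return "high risk: earlier, more frequent screening"
    if score >= 0.5:
        return "average risk: standard screening interval"
    return "low risk: less frequent screening"

person = Individual(age=55, family_history=True, polygenic_score=1.2)
print(allocate_tier(risk_score(person)))  # -> "high risk: earlier, more frequent screening"
```

In a hybrid approach, a clinician would review the algorithm-derived score and tier in light of the individual’s broader clinical context before any change to screening is recommended.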
Other related issues include obtaining access to the rich health data that are required to perform risk stratification, ensuring that risk stratification achieves comparable performance across sub-populations and across human genetic diversity, ensuring that individuals in different healthcare contexts obtain equitable access to risk-stratified care, and ensuring that individuals and healthcare practitioners understand their respective responsibilities in obtaining appropriate follow-up care after an individual’s risk level has been assessed.
3. PART II: Polygenic Risk Scores: Regulatory Implications
Genome-wide association studies (GWAS) have uncovered the relevance of inherited variants to common complex diseases, furthering the integration of genetic data within risk score algorithms
[43][44]. Most non-communicable disorders have a genetic component that comprises hundreds or thousands of genetic variants, each of which has a small effect on the disease risk
[44]. While genetic testing is widely used to diagnose monogenic diseases determined by mutations in a single gene, polygenic disorders are caused by many genetic variants located throughout the whole genome, as well as by environmental and lifestyle factors
[45]. Each of these variables plays a role in the pathway of the disorder, but on its own it is not informative for assessing the overall disease risk
[43][44]. A polygenic risk score (PRS) is a weighted sum of several of the risk variants for a particular disease
[44][45]. It provides an estimate of an individual’s genetic vulnerability to a trait or disease
[43]. In other words, PRSs are the tool by which knowledge of these common variants can be used to improve healthcare: they provide a point of reference that could place an individual in, for example, a lower-than-average, average, or above-average risk category and thereby potentially improve screening, advance preventive medicine, and support more personalised treatment
[12][44]. The estimate provided by PRSs is calculated based on the individual’s genotype profile in comparison with the relevant GWAS data
[43]. The ever-growing availability of large quantities of genomic and health-related data from which such associations can be discovered and refined, the recognised advantages of preventive medicine, and the increasing personalisation of medicine have all underscored the use and potential usefulness of PRSs, alone or combined with other risk factors, in risk stratification and in screening practices and programmes
[46].
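To illustrate the weighted-sum calculation described above, the following is a minimal sketch in Python; the variant identifiers, effect sizes, and genotypes are invented for the example. In practice, the weights are derived from GWAS summary statistics, and the resulting score is interpreted against a reference population distribution (e.g., to place an individual in a lower-than-average, average, or above-average risk category).

```python
# Illustrative only: variant IDs, effect sizes, and genotypes are invented for this example.
def polygenic_risk_score(genotypes, weights):
    """Compute a PRS as the weighted sum of risk-allele counts.

    genotypes: variant ID -> number of risk alleles carried (0, 1, or 2)
    weights:   variant ID -> GWAS-derived effect size for that variant
    """
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

weights = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": -0.03}  # hypothetical effect sizes
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}          # one individual's genotype

print(polygenic_risk_score(genotypes, weights))  # -> approximately 0.29 for this individual
```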
Considering the role that PRSs play, and will continue to play, in screening and stratification programmes and practices involving genomic data, the implementation, adoption, and development issues related to PRSs are of paramount importance to any discussion of such programmes and practices. One of the most crucial systemic implementation issues is the regulatory framework applicable to a PRS, whether as a non-device clinical decision support tool or as a medical device. Regulatory frameworks differ across jurisdictions in this respect.
It is important to create and promote clear and harmonised regulatory frameworks that enable both the scientific advancement and the safety of risk assessment and stratification tools, such as PRSs and the risk prediction models that underpin such stratification and screening programmes.