1. Effects of Intensification on the Nutritional Composition of Cereals and Grain Legumes
1.1. Wheat Genetics
One of the main breeding approaches used to increase the harvest index (the proportion of grain in the total biomass) and grain yields in cereals was the introduction of semi-dwarfing genes. This not only increased grain yields but also reduced straw length and thereby the risk of lodging, which had increased when farmers started to use high mineral N fertilizer inputs, as recommended during the “green revolution”
[1][2]. Modern varieties of cereals (in particular wheat and rice) therefore have (i) shorter stems/straw and (ii) a higher maximum yield potential with high mineral fertilizer and pesticide input regimes typical for intensive conventional farming systems (see Section 1.2 Agronomic Practices).
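For reference, the harvest index (HI) mentioned above is conventionally calculated as the ratio of grain to total above-ground biomass at maturity; the expression below simply restates this standard definition and is not taken from the cited studies:

\[ \mathrm{HI} = \frac{\text{grain dry matter yield}}{\text{total above-ground dry matter (grain + straw)}} \]

Semi-dwarfing genes therefore raise HI both by increasing grain yield and by reducing the straw fraction of the biomass.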
A range of studies compared the composition of contrasting hexaploid wheat (
Triticum aestivum) varieties and reported (i) lower concentrations of protein and nutritionally desirable mineral micronutrients (e.g., Cu, Fe, Se, and/or Zn) and/or phytochemicals (e.g., phenolics) and/or (ii) higher concentrations of carbohydrates and the toxic metal cadmium (Cd) in modern short-straw varieties, when compared with older (released pre-1960s), traditional, and/or longer-straw/stem varieties
[2][3][4][5][6][7][8][9][10][11]. The magnitude of the reduction in modern wheat varieties differed between minerals, and the most notable reductions were reported for Mg, Fe, Se, and Zn
[3][4].
Considerable variation in mineral nutrient concentrations between wheat species and varieties was reported in an extensive field experimental study by Zhao et al.
[11], who compared 150 lines of common wheat (
T. aestivum var.
aestivum) with 10 lines of durum (
T. turgidum var.
durum), 5 lines of spelt (
T. aestivum var.
spelta), 5 lines of einkorn (
T. monococcum var.
monococcum), and 5 lines of emmer (
T. turgidum var.
dicoccum) wheat species. They found that the spelt, einkorn, and emmer lines had higher Se concentrations compared with common and durum wheats. When comparing the common wheat lines, they found that (i) grain Zn but not Fe concentrations correlated negatively with grain yield and (ii) that there was a “
decreasing trend in grain Zn concentration with the date of variety release, suggesting that genetic improvement in yield has resulted in a dilution of Zn concentration in grain”, which is consistent with results from other studies
[3][4][6][7].
Also, the results from a recent retail flour survey, in which samples of all brands of common wheat (
n = 112) and spelt (
n = 55) wheat available in German and UK supermarkets were analysed over two consecutive years, suggest that flour from currently used spelt wheat varieties (
T. spelta: a hulled wheat species which was an important staple food in Europe between the Bronze Age and medieval times but is now a minor cereal) has significantly higher mineral micronutrient (Cu, Mg, Zn) and phenolic concentrations and higher antioxidant activity (TEAC) compared with flour produced from modern common wheat (
T. aestivum) varieties
[12][13]. Compared with common wheat, there has been very limited breeding/selection effort for spelt wheat over the last 60 years, and the currently used spelt wheat varieties have a lower maximum yield potential and longer straw/stems but are more robust and require lower mineral NPK fertilizer and pesticide inputs to achieve their yield potential
[8][12][13][14].
It is interesting to note that conventionally produced wholegrain spelt flour also had lower pesticide residues compared with common wheat wholegrain flour in the same retail flour survey
[13]. In addition to differences in agronomic practices used for spelt and common wheat (see Section 1.2 below), this is thought to be partially due to the hull protecting the grain against becoming contaminated by pesticides applied late in the growing season
[2][13].
Apart from the positive correlations of grain micronutrient concentrations, and the negative correlation of grain cadmium concentrations, with straw length, there is limited information about the morphological and physiological traits responsible for the differences in nutritional composition between old/traditional and modern wheat species or varieties. One potential explanation is that the higher yields of modern varieties result in a “dilution effect”, although this does not explain the higher Cd concentrations in grain from modern wheat varieties.
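The “dilution effect” invoked here can be made explicit with a simple mass balance; the figures below are purely illustrative and are not taken from the studies cited above:

\[ C_{\text{grain}} = \frac{U}{Y}, \qquad U = 40~\text{g Zn ha}^{-1}:\quad Y = 4~\text{t ha}^{-1} \Rightarrow C_{\text{grain}} = 10~\text{mg kg}^{-1}, \qquad Y = 8~\text{t ha}^{-1} \Rightarrow C_{\text{grain}} = 5~\text{mg kg}^{-1} \]

where U is the total mineral uptake into grain per hectare and Y the grain yield. If uptake stays roughly constant while yield doubles, the grain concentration halves; by the same logic, dilution alone cannot account for the higher Cd concentrations in modern varieties, which would require greater Cd uptake or altered partitioning to the grain.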
It has been suggested that the introduction of semi-dwarfing genes (which affect plant growth regulator concentrations in plants) has not only reduced stem length but also resulted in changes to root morphology, physiology, and distribution pattern in soil and that this may have had knock-on effects on mineral micronutrient uptake and/or redistribution
[2][10]. There is no strong experimental evidence to support this hypothesis. However, the results of a mineral analysis of archived wheat grain and soil samples from the Broadbalk Experiment at Rothamsted Research Station in the UK provide some circumstantial evidence
[15]. This unique long-term experiment allowed trends in grain mineral composition to be assessed in relation to soil mineral levels, wheat cultivar, yield, and harvest index
The concentrations of Cu, Fe, Mg, and Zn were found to remain stable between 1845 and the mid-1960s but decreased thereafter, which coincided with the introduction of semi-dwarf high-yielding cultivars in the UK from the mid-1960s onwards. Decreasing grain concentrations were recorded in crops grown (i) without fertilizer inputs, (ii) with mineral fertilizers, or (iii) with manure inputs, while soil concentrations of the same mineral micronutrients either increased or remained stable between the mid-1960s and the early 2000s. This further supports the hypothesis that (i) changes in crop physiology associated with the introduction of semi-dwarfing genes and/or (ii) a dilution effect associated with increasing grain yields were the main drivers for the decrease in Cu, Fe, Mg, and Zn observed during the period of agricultural intensification. It is important to note that multiple regression analysis identified both increasing yield and harvest index as highly significant factors that explained the decrease in grain mineral concentration
[15].
The strong selection for yield against the background of high mineral fertilizer (and in particular P) inputs may have also co-selected against the capacity of modern varieties to establish mycorrhizal associations, which are known to facilitate the uptake of P and nutritionally relevant mineral micronutrients such as Zn
[16][17][18][19]. There has been limited research into the impact of wheat breeding on mycorrhizal competence and associated micronutrient uptake capacity in modern compared with older/traditional varieties. However, it is interesting to note that a study which compared the relative dependence on mycorrhizas of modern wheat varieties, landraces, and ancestral wheat genotypes reported a trend for greater reliance on the symbiosis for yield in older cultivated wheat varieties
Also, phytohormone concentrations in cereals, which were affected by the introduction of semi-dwarfing genes, have been identified as important regulators of arbuscular mycorrhizal (AM) symbiosis
[17]. Furthermore, there is evidence for complex interactions between (i) wheat genetics (old versus modern varieties), (ii) agronomic practices (e.g., the use of water-soluble P fertilizers, which are known to inhibit mycorrhizal development), and (iii) the mycorrhizal populations developing in soils under different management regimes (e.g., organic versus conventional)
[18].
In contrast to other mineral micronutrients (e.g., Cu, Fe, Zn), plant roots can actively take up Se as selenate (SeO₄²⁻, which is chemically similar to sulphate, SO₄²⁻) via high-affinity sulphate transporters in the root cortex, root tip, and lateral roots. The large differences in Se concentrations found between modern and older, long-straw wheat varieties reported by Murphy et al.
[3] in the USA, where soils have relatively high Se concentrations, may therefore also have been due to breeding/selection having affected the density, activity, or distribution of sulphate transporters in wheat roots
[3]. It is interesting to note that in the Broadbalk Experiment, the trends for Se grain concentrations over time differed from those observed for other mineral micronutrients
[20]. This experiment was carried out in the UK, where soil Se concentrations are known to be low, and showed that (i) the introduction of semi-dwarf, high-yielding wheat varieties did not coincide with a decrease in grain Se levels, (ii) soil Se levels increased over time, (iii) grain Se concentrations were lower in non-fertilized control plots and not significantly affected by fertilizer type, and (iv) grain Se was significantly negatively correlated with SO₂ emissions in the UK
[15]. In contrast, results from long-term factorial field experiments carried out at Newcastle University’s Nafferton Experimental Farm in the UK
[10] suggest that longer-straw spelt varieties had higher grain Se concentrations than modern short-straw common wheat varieties and that the use of mineral NPK resulted in lower grain Se concentrations compared with manure applied at the same N input level (see also Section 1.2 below). In these unique experiments, (i) rotation design, (ii) fertilization regime, (iii) crop protection protocol, and/or (iv) crop species/variety were used as factors, which allowed both main effects and interactions between these agronomic parameters to be identified and quantified (see Rempelos et al.
[9][21][22] and Daud et al.
[10] for detailed descriptions of the experimental designs).
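As a purely schematic illustration of how such factorial designs separate main effects from interactions (the model form below is assumed for exposition and is not reproduced from the cited studies; for brevity, only three of the factors are shown), each measured grain trait y can be analysed with a linear model of the form

\[ y_{ijkl} = \mu + F_i + C_j + V_k + (FC)_{ij} + (FV)_{ik} + (CV)_{jk} + (FCV)_{ijk} + \varepsilon_{ijkl} \]

where F, C, and V denote the fertilization regime, crop protection protocol, and crop species/variety, respectively, μ is the overall mean, and ε the residual error. It is the interaction terms that allow statements such as “the effect of mineral NPK on grain Se depends on the variety grown” to be tested formally rather than assumed.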
In contrast to mineral micronutrients, concentrations of the toxic metal Cd were reported to be significantly higher in modern compared with older wheat cultivars when grown without fungicide applications
[7]. The underlying mechanisms are unclear, although it should be noted that one reduced-height gene (Rht8) has been associated with grain Cd
[7]. However, grain Cd accumulation is known to be a complex trait and was also reported not to be clearly associated with straw/stem length
[7][23].
There is also some evidence that breeding and selection for high yields against the background of conventional agronomic protocols (with high mineral NPK and pesticide inputs) has co-selected for a greater need for the application of synthetic chemical pesticides in cereal production
[1][2][9]. For example, there is evidence that the short-straw modern wheat varieties are (i) less competitive against weeds and (ii) more susceptible to
Fusarium grain infection (and associated mycotoxin contamination) from fungal inoculum on crop residues on the soil surface
[1][2][24]. Results from the long-term Nafferton Factorial Systems Comparison (NFSC) trials and other factorial field experiments carried out between 2000 and 2017 at Nafferton Farm in the UK also provided evidence that organic common wheat breeding programmes with selection under low-input organic background conditions may have generated varieties with (i) longer straw/stems, (ii) greater resistance to biotrophic diseases such as powdery mildew and rust, and (iii) higher leaf phenolic concentrations
[1][2][9].
It is important to note that positive correlations between straw length and mineral micronutrient concentrations were reported in all studies with common wheat, while one factorial field experiment (with variety, irrigation, and fertilizer type as factors) carried out in Crete, Greece, which compared two traditional long-straw spelt varieties with one modern short-straw spelt variety, reported significantly higher phenolic concentrations in the short-straw variety
[8]. This may be explained by the differences in agronomic protocols used between common and spelt wheat and differences in genetic × agronomy interactions. Specifically, spelt wheat is produced with substantially lower nitrogen and pesticide inputs (both of which have been shown to affect phenolic concentrations in crops) compared with common wheat (see Section 1.2 for a description of the effects of N inputs and crop protection on grain phenolic concentrations).
1.2. Agronomic Practices
The evidence for changes in agronomic practices having affected the nutritional composition of cereals comes primarily from studies which (i) investigated the effect of specific inputs/practices (e.g., mineral NPK fertilizers, synthetic chemical pesticides, short rotations/cereal monoculture, minimum/no tillage) introduced during the green revolution and (ii) compared the nutritional composition of conventional and organic cereal grains/products
[1]. Organic cereal production resembles in many respects the type of agronomic protocols used prior to the green revolution because it uses (i) no synthetic chemical pesticides and no mineral N, P, or KCl fertilizers, (ii) more diverse rotations which include legume crops, and, in most organic cereal production protocols, (iii) mechanical weed control and traditional inversion ploughing-based tillage and (iv) legume fertility-building crops, animal manures, and/or organic waste-based composts to maintain soil fertility
[1].
Phenolic concentrations in the leaves and grain of common wheat (
T. aestivum) were shown to (i) decrease with increasing mineral N fertilizer inputs and (ii) be lower with mineral N fertilizer when compared with cattle farmyard manure applied at the same total N input level
[1][9][23][25][26]. Phenolics are part of the plant's constitutive and inducible resistance response to disease attack, and it has been demonstrated that increasing N availability to plants significantly reduces the concentrations of phenolic compounds and resistance against biotrophic diseases in wheat and other crops in a dose-dependent manner
[23][27][28][29][30]. However, results from the Nafferton Factorial Systems Comparison (NFSC) trials also showed that the use of conventional crop protection regimes has a significant negative effect on phenolic levels in cereals but not field vegetables (see Section 2 below) and that there are significant interactions between crop protection and fertilization for phenolic concentrations in both wheat and barley
[2][9].
Se concentrations in grain were recently shown to be significantly lower in spelt wheat crops fertilized with mineral NPK (NPK) or maize biogas digestate (MBD) compared with crops fertilized with composted cattle farmyard manure (FYM) applied at the same total N input level
[10]. In contrast, yields were highest with MBD, intermediate with NPK, and lowest with FYM as fertilizer. This and the finding of significantly higher Se concentrations in FYM- compared with MBD-fertilized crops suggest that the higher Se inputs with FYM compared with MBD and NPK (which contains virtually no Se) were the main driver for the higher Se concentrations in FYM-fertilized crops. Lower Se concentrations in NPK- compared with FYM-fertilized crops were also found in factorial field experiments with common wheat
[10].
Cd inadvertently accumulates in wheat grains, and wheat is recognized as a primary source of dietary Cd intake
[7]. Cd concentrations in wheat grain and other cereals are well known to increase with increasing mineral P fertilizer inputs, and in intensive conventional cereal production systems/regions (e.g., rice/maize or wheat/maize double cropping rotations in China) which use very high mineral P fertilizer inputs (~600 kg P/ha), Cd concentrations in grain can be above the thresholds set by the WHO
[1][7][22][31][32]. Also, long-term factorial field trials with common wheat have shown that NPK-fertilized common wheat crops have higher Cd and/or Ni concentrations compared with crops fertilized with FYM at the same total N input level
[33]. In contrast, in spelt (which is produced with much lower mineral fertilizer inputs), no significant difference in Cd concentrations could be detected between manure and NPK applied at the same total N input level
[8].
Pesticide residues in cereals and other grain crops have also increased since synthetic chemical pesticides were first made widely available to and used by farmers in the 1960s
[34][35][36][37]. However, data from regulatory pesticide monitoring in Europe
[34][35][36][37] also show that the profile of pesticides used has changed over time, and total pesticide inputs to wheat and other cereals have decreased since the 2000s in some regions (e.g., the EU), although this decrease may have been due to older pesticides (e.g., S-fungicides and insecticides) being replaced by new pesticide products that have activity at much lower application rates
[34][35][36][37].
Systematic reviews and meta-analyses of data from studies which compared the pesticide residues/profiles in organic and conventional cereals and other crops reported that conventional cereal grains/products (which are produced with crop protection protocols based on multiple pesticide applications) have substantially higher pesticide residue levels compared with organic cereal grains (which are produced without applications of synthetic chemical pesticides)
[38][39]. Similar results were recently found in an extensive wheat flour survey conducted in the UK and Germany, which detected a larger number of different pesticide residues and higher pesticide residue concentrations in conventional compared with organic cereals
[13].
It should be noted that the evidence for lower pesticide residues in organic compared to conventional food crops is now widely accepted, while there is still controversy about the evidence for other food quality, safety, and security benefits of organic food production and consumption
[40][41]. In cereals, the use of (i) the plant growth regulator chlormequat (which is used to reduce stem length and thereby prevent the lodging of cereals) and (ii) applications of the herbicide glyphosate close to harvest to desiccate cereals are currently of particular concern. This is mainly because (i) these practices result in high residue levels of chlormequat, which is a very persistent chemical linked to endocrine-disrupting activity that has been banned for use in fruit production but is still permitted for use in wheat, and (ii) glyphosate has been classified as a probable carcinogen
[13][35][36][37][42][43]. There is also evidence that applications of glyphosate (i) as a pre-emergence treatment in non-glyphosate-resistant soybean and (ii) during the growing season in glyphosate-resistant soybean crops impair plant micronutrient uptake and thereby reduce concentrations of some micronutrients in soybean leaves and seed
[44][45][46].
Fusarium mycotoxin contamination is also affected by many of the agronomic practices introduced during the intensification of conventional cereal production over the last 60 years, and this has recently been reviewed in detail by Bernhoft et al.
[47]. Briefly, there is evidence that high mineral N inputs, the use of chlormequat to shorten stems, increased stem density, the use of certain fungicides (e.g., strobilurins), no and minimum tillage, and cereal monoculture/non-diverse arable rotations (and, in particular, growing wheat after maize) increase the risk of
Fusarium mycotoxin contamination in cereals
[47]. It is important to note that mycotoxin levels in wheat grain in the 27 large well-designed farm surveys reviewed by Bernhoft et al.
[47] were substantially higher than the concentrations found in both wholegrain and white wheat flour in the most recent retail survey conducted by Wang et al.
This difference is thought to be primarily due to the introduction of quality assurance protocols by grain processors that involve mycotoxin testing of all batches of cereals destined for human consumption in order to comply with maximum contamination levels (MCLs) set by the EU
[24][47]. It also, at least partially, explains why most farm surveys reviewed by Bernhoft et al.
[47] found significantly higher concentrations of
Fusarium mycotoxins in conventional cereal grain, while Wang et al.
[24] found similar concentrations of both (i)
Fusarium mycotoxins and (ii) ochratoxin A in organic and conventional wholegrain wheat flour. Since cereals with mycotoxin levels above the MCLs for humans are widely used as animal feed, the effects of agricultural intensification on mycotoxin levels in cereals are more likely to have an impact on livestock than on human health.
It is important to point out that recent factorial experimental and survey-based studies with wheat have also identified significant (i) genetics × agronomy, (ii) environment × genetics × agronomy, and (iii) genetics × agronomy × processing interactions for a range of nutritionally relevant compounds (including phenolics, mineral micronutrients, pesticide residues, and mycotoxins)
[1][2][8][9][10][11][12][13][14][24][47][48]. While explaining the large variation and sometimes inconsistent results from studies carried out in different countries, seasons, and pedoclimatic environments identified in systematic reviews/meta-analyses of comparative crop composition data
[38][49], this also highlights the risk of bias when conclusions about the effects of intensification on food quality are based on evidence from individual or only a small number of studies/environments.
1.3. Grain Processing and Post-Harvest Quality Assurance Protocols
In the past, a large proportion of cereal-based foods consumed as part of a MedDiet were made from wholegrain, which is widely recognized to have significant positive nutritional (higher fibre, antioxidant, and mineral micronutrient intake) and associated health impacts
[12][50][51][52][53][54]. However, as in Northern and Central Europe and North America, wholegrain consumption in many Mediterranean regions has declined over the last 60 years and been replaced by products made from refined grains/flour
[53][54].
Refining substantially reduces the concentrations of a range of mineral micronutrients (e.g., Ca, Mg, Fe, Cu, Zn), phenolics, and other antioxidants that are mainly located in the outer bran and germ layers of the cereal grain
[12][52]. In contrast, Se and Cd are known to be more uniformly distributed across the grain and found in similar concentrations in (i) the bran, germ, and endosperm and (ii) wholegrain and refined cereal flour
[7][10][12][52].
However, refining is also increasingly recognized to have some nutritional benefits, especially when used for conventionally produced cereal grains
[50]. Most importantly, refining is known to reduce the concentrations of pesticides, especially non-systemic pesticides and those applied late in the growing season
Also, a recent wheat flour survey reported that refined flour contained substantially lower concentrations of the mycotoxins T-2/HT-2, which are known to be produced by
Fusarium species that colonize the outer surface of the grain, which is removed during refining
[24].
It is important to note that some of the negative aspects of refining are addressed by the grain processing and efficient quality assurance protocols used in developed countries (e.g., compulsory fortification of refined cereals with minerals such as Ca or Fe and testing of all cereal batches for mycotoxins)
[12][24][50].
2. Effects of Intensification on the Nutritional Composition of Fruit and Vegetables
2.1. Evidence for Historical Changes in Nutrient Composition
Several studies have investigated historical changes in the mineral nutrient composition of fruit, vegetables, and/or nuts by comparing published food composition data from the 1960s or before (pre-agricultural intensification) with composition data obtained between the 1980s and 2010s (post-agricultural intensification)
[55][56][57][58][59]. Although there is considerable variation between (i) studies, (ii) the countries in which samples were collected, and (iii) fruit and vegetable species, most studies have reported similar trends.
Similar results for Ca and Fe were also reported in another US study
[59] that also compared concentrations of macronutrients (carbohydrate, fat, and protein) and selected nutritionally desirable phytochemicals in 43 garden crops (mainly vegetables). The study detected significant declines in protein, Ca, P, Fe, riboflavin, and ascorbic acid content but did not find a significant decline for five other nutrients (fat, carbohydrate, vitamin A, thiamine, and niacin)
[59].
Since (i) increasing marketable yield has also been the dominant breeding target in most vegetable and fruit crops and (ii) variety trials have consistently shown negative correlations between yield and concentrations of minerals and protein, it is tempting to assume that the historical decline in some minerals and phytochemicals is primarily due to a “dilution effect”
[60]. However, there is evidence that changes in agronomic practices have also contributed to the declines in mineral and phytochemical concentrations (see Section 2.2 below).
It is also important to consider that, in contrast to cereals, breeding/selection in a number of fruit and vegetable crops has more recently also focused on improving certain nutritional quality parameters (e.g., phenolic and antioxidant content)
[61][62]. For these species, (i) the decline in certain nutritionally desirable compounds and (ii) the negative correlations with yield may have already started to reverse, although this has not yet been documented.
2.2. Agronomic Practices
Recent factorial field experiments and retail surveys have provided evidence that, similar to cereals, agricultural intensification (especially the use of high mineral NPK fertilizer and pesticide inputs to increase yields) can reduce the nutritional quality of fruit and vegetables
[1][21][25][26][28][29][30][35][36][37][39][40][41][42][43][63][64][65]. There is also now some evidence that excessive irrigation can have negative effects on both crop yield and quality
[66].
Phenolic concentrations were found to decrease with increasing mineral N inputs in a range of fruit (e.g., grapes, apple) and vegetable crops (e.g., tomato, zucchini, potato), and in several crop species, this coincided with a reduction in disease resistance
[28][29][30]. Also, recent farm and retail surveys with grapes reported that, overall (across all grape varieties assessed), organically produced grapes had higher phenolic concentrations and antioxidant activity than conventionally produced grapes, although it is important to note that variety had a substantially larger effect on grape composition than agronomic protocols
[63][64].
These findings are consistent with the results of several systematic reviews and meta-analyses, which found lower (poly)phenolic concentrations and antioxidant activity in fruit and vegetables from conventional production systems (which use pesticides for crop protection and rely nearly exclusively on mineral NPK fertilizers for crop nutrition) compared with produce from organic production (which is produced without synthetic chemical pesticides, mineral N, water-soluble P, and KCl fertilizers)
[38][41][49][67][68].
Results from the long-term Nafferton Factorial Systems Comparison (NFSC) trials at Newcastle University (Newcastle Upon Tyne, UK) suggest that the differences in phytochemical composition between organic and conventional vegetables are relatively small and primarily due to the contrasting fertilization regimes used in organic and conventional production. Specifically, the NFSC trials showed that the use of mineral NPK fertilizer (NPK) results in higher N availability but significantly lower total phenolic levels in potato, cabbage, and lettuce, but not onions, compared with crops grown with cattle farmyard manure (FYM) applied at the same N input level
It is important to consider that (i) only ~50% of the total N applied with FYM is known to become plant available through mineralization in the first growing season (see the illustrative calculation below), (ii) crop yields were significantly lower with FYM compared with NPK in potato, cabbage, and lettuce but not onion crops, and (iii) the use of NPK also resulted in significantly lower glucosinolate and carotenoid concentrations in cabbage, vitamin C concentrations in potato and cabbage, and vitamin B9 concentrations in potato and lettuce compared with FYM
[21]. This may indicate that both (i) the down-regulation of phytochemical synthesis by the higher N availability from NPK and (ii) a dilution effect resulting from higher yields contributed to the lower phenolic, carotenoid, and vitamin concentrations in NPK-fertilized potato, cabbage, and lettuce
[21].
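To put point (i) above into perspective, a simple illustrative calculation (the input figure is hypothetical and not taken from the NFSC trials): at a total input of 200 kg N ha⁻¹, roughly 200 kg ha⁻¹ would be plant available from mineral NPK in the first season, but only about 200 × 0.5 ≈ 100 kg ha⁻¹ from FYM. This is consistent with the lower yields, and hence the weaker dilution of phytochemicals, observed in the FYM-fertilized potato, cabbage, and lettuce crops.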
In contrast, (i) the preceding crop (spring beans versus winter barley) only had a significant effect on the vitamin C content in cabbage (higher in crops grown after winter barley), and (ii) the crop protection regime (pesticide-based conventional versus mechanical weed control and insect-proof crop cover-based organic) only affected the glucosinolate and vitamin B9 content in cabbage (both lower with organic crop protection)
[21]. The study concluded that these effects were most likely due to (i) the higher N availability following a legume pre-crop and (ii) the reduction in solar irradiation under insect-proof crop covers, which reduced the synthesis of phytochemicals in organically produced cabbages
[21].
Similar to cereals, recent studies
[42][69] confirmed that (i) conventionally produced fruit and vegetables have substantially higher pesticide residues than organic produce, and (ii) contamination levels are higher in conventional fruit than in conventional vegetables.
However, based on the still very limited evidence available, it is not yet possible to identify consistent overall trends for the effects of (i) production systems (e.g., organic vs. conventional) and (ii) specific agronomic parameters (e.g., fertilizer type/input levels, crop protection protocols, rotation, tillage) on mineral micronutrient concentrations in fruit and vegetable crops
[1][21][40][41][67].
The consumption of olive oil is thought to be a particularly important driver for monounsaturated fatty acid (oleic acid) intake and the associated health benefits of the MedDiet
[70][71][72][73]. Increasing olive oil/oleic acid consumption was linked to a reduction in plasma cholesterol, LDL cholesterol, and triglycerides, an improvement in immune function, and protection against atherosclerosis in mice and, potentially, anti-cancer properties
[74][75][76]. Recent farm surveys and experimental studies in Greece that compared the performance of organic and conventional olive production systems reported no significant differences in nutritional and sensory composition parameters (including oleic acid and phenolic concentrations and acidity) between organic and conventionally produced table olives and olive oil, except for higher pesticide residues in conventional olive oil and fruit
It is important to note that (i) olive fruit and oil yields were not significantly different between organic and conventional systems and were numerically slightly (~10%) higher in organic production
[65], and (ii) this finding is consistent with the results of a recent meta-analysis of yield data from mainly perennial fruit crops, which also reported that there is no significant difference in yield between conventional and organic perennial fruit crops
[80].
The decline in mineral concentrations over time
[55][56][57][58] may be explained by changes in crop genetics (e.g., higher yields of modern varieties/hybrids resulting in a dilution effect), and this should be further investigated in the future, for example, by including traditional and modern varieties in factorial field experiments.
In contrast to wheat and bovine milk, there is still insufficient published data to reliably detect and quantify the effects of intensification for most individual vegetable and fruit species through meta-analyses. Conclusions on the effects of intensification therefore still rely on synthesizing composition data across crop species, using information from studies that compared (i) samples from organic and conventional production or (ii) food composition data published before and after agricultural intensification. However, synthesizing composition data from different crop species does not take into account differences in the consumption of specific fruit and vegetables as part of a MedDiet. It is, therefore, also currently impossible to estimate differences in total intakes of desirable and undesirable/toxic compounds with fruit and vegetables (i) before and after agricultural intensification and (ii) from organic and conventional production.
3. Effects of Intensification on the Nutritional Quality of Meat and Dairy Products
Meat and dairy products are major dietary sources for omega-3 fatty acids (
n-3), including the very-long-chain omega-3 fatty acids eicosapentaenoic acid (EPA, c5c8c11c14c17 C20:5), docosapentaenoic acid (DPA, c7c10c13c16c19 C22:5), and docosahexaenoic acid (DHA, c4c7c10c13c16c19 C22:6), which were linked to a reduced risk of cardiovascular disease, cancer, and/or obesity, improved body composition, bone density, and foetal development, and enhanced anti-inflammatory, immune, neurological, and cognitive functions
[65][74][75][76][77][78][79][80][81][82][83][84][85][86]. Western diets are thought to be deficient in EPA/DPA/DHA, and there are recommendations to significantly increase intakes, especially for pregnant women
[82]. Milk and dairy products also contain significant amounts of nutritionally desirable carotenoids, vitamin E, oleic acid, and the mineral micronutrient iodine
[75][76][81][82][83][84][85].
The evidence for nutritionally relevant changes in meat and dairy product composition from agricultural intensification comes primarily from studies which compared meat and dairy products produced with contrasting feeding regimes, livestock genotypes, and management systems (organic versus conventional, outdoor versus indoor production). The main changes in livestock production introduced during the period of agricultural intensification over the last 50 years were (i) the increased use of indoor production and concentrate feeds made from cereals and grain legumes (often described as feedlot systems), (ii) reductions in the amount of dry matter intake from grazing/foraging on pasture, (iii) breeding for and/or the use of breeds/hybrids selected for high feed-use efficiency and milk or meat yield per animal from concentrate-feed-based diets, (iv) the use of hormones and antibiotics as growth promoters (which is still permitted in North America but not the EU), and (v) an increased use of veterinary medicine inputs, partially to address negative health impacts resulting from the change to more intensive management practices
[40][81][82][83][84][85][87][88][89].
It is now well documented that (i) livestock feeding regimes are the main factor affecting the nutritional and sensory quality of meat and dairy products and (ii) that replacing outdoor-grazing-based high forage intake diets with diets based on concentrate feed made from cereals and grain legumes has a substantial negative effect on the nutritional composition of meat and dairy products, with the largest declines having been reported for concentrations of omega-3 fatty acids (including EPA, DPA, and DHA)
[81][82][83][84][85][87][88][89][90].
For example, in beef cattle, traditional forage/grass-based diets resulted in more than three times higher concentrations of total omega-3 and very-long-chain (EPA+DPA+DHA) omega-3 fatty acids in meat of the
longissimus muscle of bulls from two different cattle breeds (German Holstein and German Simmental bulls), compared with feedlot-type concentrate-based diets
[89].
Similarly, the meat of the
longissimus thoracis et lumborum muscle of Suffolk × ‘Mule’ ram lambs was found to contain more than two times higher total and very-long-chain omega-3 fatty acid concentrations when the animals were finished on forage compared with concentrate-based diets
[87].
Differences in omega-3 fatty acid concentrations between pork from pigs reared outdoors with access to pasture and pigs raised indoors on concentrate-only diets were reported to be smaller than those found between meat from grazing-only and concentrate-only ruminant production systems
[88]. Specifically, total omega-3, DPA, and DHA concentrations were ~10–20% lower in indoor-reared pigs on concentrate-only diets compared with outdoor-reared pigs on concentrate-based diets with some grazing-based fresh forage intake
[88].
Pork from indoor systems also contained significantly (~25%) lower α-tocopherol concentrations
[88].
A systematic literature review/meta-analysis of composition differences between organic and conventional meat published in 2016 estimated that organic meat has ~50% higher total omega-3 fatty acid concentrations compared with conventional meat when data from all livestock species were pooled
[90].
Studies that compared the fatty acid profiles of milk from production systems which used (i) grazing-based, traditional, and (ii) intensified (higher concentrate and/or conserved forage) feeding regimes reported similar trends to those found for ruminant meat in both Europe and the USA; specifically, both total and very-long-chain omega-3 fatty acids decreased significantly with increasing use of concentrate in the dairy diet
[81][82][83][84][85][91][92][93][94][95][96]. Also, a systematic literature review/meta-analysis of organic and conventional bovine milk published in 2016 estimated that organic milk has ~60% higher concentrations of very-long-chain omega-3 fatty acids
[82].
In dairy cattle, high concentrate diets were also shown to reduce the concentrations of conjugated linoleic acid (CLA), carotenoids, and vitamin E in milk
[82][83]. In contrast, Se and I concentrations in bovine milk were found to be significantly lower in production systems with high grazing-based forage intake, compared with high-concentrate diets
[82][83]. This is thought to be mainly because concentrate feeds are routinely fortified with mineral supplements and concentrate feed use is higher in conventional systems. The lower I levels in organic milk have resulted in concerns that this may lead to I deficiency in countries (e.g., the UK) which do not follow the WHO recommendations to fortify salt with I and where milk/dairy products are the main dietary source of I intake
[82].
It is important to note that a range of other practices introduced as part of the intensification of livestock production were also linked to small but significant effects on meat and milk quality, and these include the following:
- The use of Swiss Brown dairy cattle genotypes selected for high milk yield from concentrate feed (reduction in omega-3 concentrations in milk) [81];
- The use of pigs with the RN genotype (higher concentrations of total omega-3 fatty acids in pork) [88];
- The robotic milking of dairy cows (reduced concentrations of β-lactoglobulin and total polyunsaturated fatty acids and increased concentrations of the saturated fatty acids lauric acid/C12:0 and myristic acid/C14:0 in milk) [94];
- The use of mineral N fertilizer to increase forage production in grass-clover swards used for grazing dairy cattle (reduction in omega-3 fatty acid concentrations in milk) [83].
For dairy cattle, significant interactions between breed and feeding regimes were also reported for both milk yield per cow and the omega-3 concentrations in milk
[81]. Compared with traditional Swiss Brown genotypes bred for/selected in alpine outdoor-grazing-based systems (T-SB) in Switzerland, Swiss Brown genotypes that had been selected for high milk yield (HY-SB) from high-concentrate diets in the US produced significantly higher yields on concentrate/conserved-forage-based winter diets but similar yields on grazing-based fresh forage summer diets. In contrast, HY-SB cows produced milk with significantly lower omega-3 fatty acid (ALA, EPA, and DPA) concentrations with both winter and summer feeding regimes, although both (i) omega-3 concentrations and (ii) the relative difference between HY-SB and T-SB genotypes were greater with grazing-based summer feeding regimes
[81].
For meat and milk, the evidence for agricultural intensification having resulted in a decline in the nutritional quality has grown substantially since the publication of the comprehensive systematic literature reviews/meta-analyses of composition differences between organic and conventional meat and dairy products
[82][90]. Most importantly, the additional evidence described above
[84][85][86][87][88][89] confirmed that the use of concentrate feeds (which increased substantially during agricultural intensification) results in substantially lower concentrations of the very-long-chain omega-3 fatty acids (VLC n-3) EPA, DPA, and DHA in meat and milk from both bovine and small ruminant production.
Since VLC
n-3 are recognized to be deficient in Western diets, it has been recommended that dietary intakes of VLC omega-3 fatty acids should be doubled, especially during pregnancy
[75][76][82][86][90]. Although meat consumption in traditional MedDiets is lower than in Western diets, MedDiets are known to result in higher VLC n-3 intakes, primarily due to higher fish consumption
[73].
Fish, meat, dairy products, eggs, and omega-3 supplements are the only dietary sources of VLC n-3, and there are well-founded concerns that high concentrations of Cd, declining marine fish stocks, and the sustainability of fish farming may limit the scope for increasing VLC n-3 intake via fish consumption in the future
[75][82][90]. However, the evidence presented in this section suggests that changing to the consumption of meat and dairy products from organic and other extensive outdoor-grazing-based livestock production systems allows for the following:
- A substantial increase in VLC n-3 intake as part of MedDiets (e.g., during pregnancy) without an increase in meat or fish consumption;
- Consumers of typical Western diets to follow nutritional advice to substantially reduce meat consumption without reducing their VLC n-3 intake or increasing fish consumption.
4. Effects of Agricultural Intensification on the Nutritional Quality of Fish
Fish is known to be a good dietary source for omega-3 fatty acids and in particular EPA and DHA, and frequent fish consumption and associated omega-3 fatty acid intake is thought to be a major driver for the health benefits of traditional MedDiets
[70][71][72][73][97]. However, in many areas of the Mediterranean and globally, industrial and agricultural pollution has resulted in an increase in toxic compounds in fish and other seafood. Fish consumption is recognized as a major dietary source of toxic metals such as Cd and Pb
[98][99][100] and also a major source of endocrine-disrupting chemicals
[99][100][101]. Also, high levels of maternal fish consumption during pregnancy (>3 portions per week) were linked to an increased risk of rapid growth in infancy and of childhood obesity
[101]. This has led to recommendations to limit fish consumption to one portion per week, especially during pregnancy
[99][100][101].
Traditionally, MedDiets were based on the consumption of wild fish caught in the Mediterranean Sea and, to a lesser extent, freshwater lakes. However, over the last 60 years, the consumption of farmed fish (i) produced in the Mediterranean (e.g., sea bream, sea bass, dorado) and/or (ii) imported from outside the Mediterranean (e.g., salmon) has increased substantially
[102][103][104][105][106][107][108]. Although European consumers prefer wild over farmed fish
[103][104] and there is growing concern about the sustainability and environmental impact of fish farming in Europe
[105], farmed fish now accounts for more than 50% of fish produced and consumed in Mediterranean countries
[102][106][107][108].
Except for salmon (which is produced in Northern Europe and imported by Mediterranean countries), there are no studies which compare the nutritional composition of wild and farmed fish
[109][110][111][112][113][114]. For salmon, survey-based studies in the US (which compared farmed Atlantic and Pacific wild salmon) and Europe (which compared farmed and wild Atlantic salmon) reported contrasting results for toxic contaminants and omega-3 PUFAs
For example, a study in the US reported that farmed Atlantic salmon has higher concentrations of (i) nutritionally desirable omega-3 PUFAs, including EPA and DHA, but also (ii) nutritionally undesirable toxins such as dioxins, polychlorinated biphenyls, polybrominated diphenyl ethers, and pesticides used in fish farming. As a result, the authors recommended that “young children, women of child-bearing age, pregnant women, and nursing mothers not at significant risk for sudden cardiac death associated with CHD but concerned with health impairments such as reduction in IQ and other cognitive and behavioral effects, can minimize contaminant exposure by choosing the least contaminated wild salmon or by selecting other sources of (n-3) fatty acids”
[109]. In contrast, a recent study from Norway
[110] found that concentrations of dioxins, dioxin-like PCBs, mercury, and arsenic were three times higher in wild compared to farmed salmon but well below EU-uniform maximum levels for contaminants in food in both farmed and wild fish. Concentrations of the six ICES (International Council for the Exploration of the Sea) PCBs were also higher in wild salmon (5.09 ± 0.83 ng/g) compared with farmed fish (3.34 ± 0.46 ng/g). However, the fat content was substantially (up to three times) higher in farmed salmon, while the proportion of very-long-chain omega-3 PUFAs (EPA and DHA) in fish fat was substantially (>2 times) higher in wild fish, and the authors concluded that “
Both farmed and wild Atlantic salmon are still valuable sources of eicosapentaenoic acid and docosahexaenoic acid.” because “
One 150 g portion per week will contribute more (2.1 g and 1.8 g) than the recommended weekly intake for adults”
[110]. However, it should also be noted that the development and introduction of more sustainable feeding regimes (especially a reduction in the use of fish oil and meal) in fish farming is expected to reduce the current levels of very-long-chain omega-3 PUFAs (EPA and DHA) in farmed salmon
[111][112].