
LIFE: A metric for mapping the impact of land-cover change on global extinctions | Philosophical Transactions of the Royal Society B: Biological Sciences


1. Introduction

Biodiversity loss is one of the greatest environmental challenges of our age, with declines associated with significant negative impacts on human wellbeing [1]. Tracking and mitigating these losses requires robust, spatially explicit biodiversity metrics for monitoring overall trends, identifying where conservation actions might be most effective, and measuring progress towards local-to-global biodiversity targets [2,3]. According to the IUCN Red List, agriculture and logging, the two main activities that drive land-use change, threaten 70% and 46% of terrestrial vertebrate species, respectively [4]. Moreover, land-use change looks set to remain the largest single threat for at least the next few decades [5–8] and is likely to interact with other threats to biodiversity such as climate change, underscoring the importance of biodiversity metrics that are linked directly with trends in land use [9].

While biodiversity is complex and multifaceted, metrics used in conservation arguably represent two primary motivations, as reflected in the targets of the Global Biodiversity Framework [10]: preventing species loss [11] and maintaining the integrity of ecosystems and their contributions to people [1]. In broad terms, such metrics typically comprise measures of spatial and temporal variation in extinction risk of species and intactness of ecosystems, respectively. Measures of extinction risk commonly incorporate features such as the number of species present in an area, their range sizes (and hence how important the area is for their persistence globally) and descriptions of how population sizes or ranges have changed or might do so. Intactness, on the other hand, describes anthropogenic impacts on biological communities, with declines in intactness taken to indicate biodiversity loss and reductions in ecosystem functions and associated ecosystem services.

To track variation in extinction risk or ecosystem intactness in relation to targets and to identify the likely positive or negative impacts of anthropogenic actions in different places, we suggest that metrics should:

  1. Strive to be representative—geographically, taxonomically and in terms of habitat types. Given marked differences in the availability of data for different regions, taxonomic groups and habitat types, unrepresentativeness is a significant limitation of several metrics. For example, the Living Planet Index [12] uses data only for vertebrates, much of it from Europe and North America, though in this case substantial efforts are made to adjust statistically for differences in the coverage of different classes and regions [13]. The IUCN’s Species Threat Abatement and Restoration metric (STAR) [5] currently covers amphibians, birds and mammals. However, as STAR considers species of Least Concern to have zero extinction risk, it is currently unable to quantify the impact of land-use changes on those species and instead focuses on threatened and near-threatened species. Some other metrics, such as the Biodiversity Intactness Index (BII) [14], are based on data that are more representative taxonomically and by threat—in this case, measures of relative abundance of nearly 60 000 non-threatened as well as threatened plant and animal species.

  2. Be comparable across space and direction of biodiversity change (that is, across gains and losses). Spatial comparability means that a given score for the metric in one location is equivalent in terms of the broad outcome of interest (extinction risk or ecosystem intactness) to the same value in any other location and that an area with twice that value is twice as important. Spatial comparability is essential when comparing actions in different locations—and so is of particular relevance in setting spatial priorities, in assessing the impacts of actors (e.g. international NGOs or corporations) who operate in different countries or in understanding the contribution of national activities towards global targets [2]. Metrics that treat all pristine habitats as of equal value—such as Mean Species Abundance (MSA) and the BII [14,15]—make it difficult to compare the impact of actions on habitats that differ markedly in the communities or ecosystem services they support. Directional comparability—where a score of x is equal and opposite to a score of −x—is essential when there is interest in identifying opportunities to mitigate damaging operations through remedial actions elsewhere—although, of course, where those actions involve habitat restoration, additional safeguards are necessary because of time lags and uncertainties in habitat recovery. Although the IUCN’s STAR metric identifies gains in biodiversity that could result from habitat restoration and threat abatement, it does not currently consider the impacts of continued habitat loss [5].

  3. Be amenable to aggregation and disaggregation according to species, ecosystems and other factors. This can be useful where stakeholders are interested only in certain taxonomic groups, charismatic species or biomes and can allow for analyses of the impacts of particular threatening processes, as well as the sensitivity of observed patterns to unrepresentativeness of the underlying data. Furthermore, biodiversity metrics play a key role in halting biodiversity loss through raising awareness with the public and policymakers. Given this, it is important that such metrics are easy to understand and interpret [16]. Globally aggregated metrics are often difficult to understand or relate to, but disaggregation can allow stakeholders to understand policy targets relating to both national and international commitments. Many metrics, however—such as MSA and the Sustainable Ecology and Economic Development framework (SEED) [15,17]—are not readily disaggregated, as species identity is not retained through computation.

  4. Finally, to be useful in guiding real-world actions that vary in area, it is important that biodiversity metrics provide information that is scalable without the need for extensive additional analysis. If an action is larger than the grid size at which the metric is presented, can its impact be reliably estimated by using the scores for component grid cells—and likewise, does the score for a grid cell reliably indicate the value of action smaller than one grid cell? To what degree can published maps of metric scores be used, without rerunning the underlying algorithms, to assess the biodiversity impact of restoration or conversion actions that are much larger or smaller than the grid cells used to derive the maps? The importance of this is highlighted by the inclusion in the SMART targets (specific, measurable, achievable, relevant and time-bound) paradigm of ‘Measurable’, defined as ‘being able to assess progress towards the target using data already available or feasible to mobilize’ [18].

2. Conceptual basis of the Land-cover change Impacts on Future Extinctions (LIFE) metric

In this paper, we present a new global metric, termed LIFE, which attempts to map for the first time the numbers of extinctions resulting from marginal losses and gains in the extent of natural habitats. LIFE is a global-scale progression of Durán et al.’s [19] persistence score approach and builds on a series of earlier foundational papers [20–24]. It is based on the following five fundamental assumptions and assertions around mapping anthropogenic extinction risks:

  1. That it is useful to focus on quantifying likely human-caused extinctions. Extinctions of course arise naturally, but given that current extinction rates are roughly three orders of magnitude above background [25], here we look at changes in risks of extinction relative to extinctions in the absence of people. As all species eventually go extinct (or evolve into new species), it is important to note that LIFE is specifically concerned with human-driven extinctions arising from present-day actions that manifest over 100 years—a timescale that aligns with faunal relaxation times following anthropogenic habitat loss and with the IUCN Red List criteria [26,27].

  2. That a species’ change in extinction risk as a result of human action depends on its current or future population size relative to that in the absence of people (hereafter its ‘original population’), rather than on its absolute population size. Absolutely small populations are of course at greater risk of extinction due to chance events [28], but we suggest that species that have had small population sizes through their evolutionary history are likely to have been selected to be more resilient to extinction at that size than other species [29]. Hence, reducing a species to half its original population size will have roughly the same effect on its extinction risk regardless of whether it was naturally abundant or scarce. LIFE does not concern itself with species that are nowadays more abundant than in their evolutionary past.

  3. That a species’ risk of extinction within any specified timescale scales non-linearly with its current population size relative to its original population, with a given marginal decline having a small effect when a population is close to its original size but a much greater effect when a population has already been greatly reduced. The exact shape of this curve is not known and will presumably vary with a species’ life history, demography and ecology. Importantly, non-linearity means that metrics that instead assume linearity will tend to underestimate the impacts of population declines in already severely impacted species, and also that estimating contemporary impacts requires present-day population sizes to be expressed relative to original population sizes.

  4. That in the first instance it is reasonable to focus on anthropogenic changes in habitat extent and quality, because these constitute the greatest current and future source of threat to terrestrial biodiversity [7,30,31]. Other threats, such as overexploitation [32] and invasive alien species [33], are also extremely important and will determine how far a species is able to occupy an area of suitable habitat, but they are poorly mapped at global scale [34], so their incorporation into worldwide area-based metrics is problematic (but see [5]).

  5. That although species’ occupation of suitable habitats will vary with their ecology, with habitat condition, fragmentation, connectivity and so on, until these effects can be estimated separately for very many species, as a first step, it is useful to estimate land cover-mediated changes in relative extinction risk using changes in species’ Area of Habitat (AOH), again estimated relative to that in the absence of people. While AOH—defined conceptually as the habitat available to a species and in practice mapped as the intersection between a species’ range and its environmental preferences [35]—is of course an imperfect surrogate, it is for now the only measure of species’ distributions that is available for tens of thousands of species.

3. Developing the LIFE metric

LIFE takes as its starting point Durán et al.’s [19] persistence score. This uses species-specific distribution and habitat suitability information to estimate the consequences of marginal changes in land cover for the modelled probability that species will persist (i.e. avoid extinction), relative to their probability of persisting in the absence of anthropogenic habitat change (see §2). Durán et al. did not specify a time period. As LIFE is concerned with human-driven extinctions, we therefore assess the probability of extinction over 100 years—a timescale at which human-driven habitat loss occurs and biodiversity impacts have stabilized. Changes can be gains or losses of suitable habitat, with negative scores equal and opposite to positive ones. Uniquely, the persistence score also accounts explicitly for the likely non-linear relationship between habitat loss and changes in species’ probability of persistence, and considers the cumulative impact of habitat loss over the long term, rather than just recent changes. Both of these issues are overlooked in other metrics of extinction risk. Other approaches instead typically assume extinction risk only depends on contemporary change in AOH: that a 100 km2 loss of habitat for a species currently occupying 1000 km2 has the same effect regardless of whether in the absence of people it would occupy 1000 or 1 million km2 [36–38]. However, there is substantial evidence that the impacts of habitat loss on species extinction risk are typically cumulative and non-linear, with the effect of losing a given quantity of habitat increasing as the remaining habitat diminishes and hence also dependent on habitat changes in the more distant past [39,40]. Because different regions have been subject to anthropogenic pressures at different times [41], estimating the impact of contemporary changes in AOH thus necessitates information on each species’ likely AOH in the hypothetical absence of people (hereafter its ‘original’ AOH) [42].

The Durán et al. [19] method integrates original habitat extent and the non-linear impact of habitat loss on a species’ probability of persistence, assuming a power-law relationship between a species’ remaining AOH and probability of persistence (i.e. of avoiding extinction) [11]. The probability of persistence is expressed as a function of the proportion of a species’ AOH remaining relative to its original AOH and so has a maximum value of 1 when a species occupies its original AOH (or indeed any larger area). Figure 1 illustrates the shape of this curve (assuming, for illustration, an exponent of 0.25) and the resultant change in probability of persistence of two hypothetical species when converting natural habitat in one cell. The shape of the curve means that if a species currently occupies a smaller fraction of its original AOH, the same absolute loss of AOH causes a greater reduction in persistence (ΔP; compare species A and B).
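Expressed in symbols (a restatement of the relationship above, using A for AOH and z for the exponent, with z = 0.25 for illustration):

```latex
% Probability of persistence under the power-law relationship, capped at 1
% when current AOH meets or exceeds the original AOH:
P \;=\; \min\!\left[\,1,\ \left(\frac{A_{\text{current}}}{A_{\text{original}}}\right)^{z}\,\right]

% Marginal change in persistence when a land-cover change shifts AOH from
% A_current to A_new (negative for habitat losses, positive for gains):
\Delta P \;=\; \min\!\left[\,1,\ \left(\frac{A_{\text{new}}}{A_{\text{original}}}\right)^{z}\,\right]
        \;-\; \min\!\left[\,1,\ \left(\frac{A_{\text{current}}}{A_{\text{original}}}\right)^{z}\,\right]
```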

The Durán et al. [19] method can be used to estimate the change in probability of persistence resulting from retaining or restoring natural habitat in any area for each species whose global range and ecological requirements are known, with these changes then summed across all species whose original AOH overlaps the area. Initial analyses applied this method to 1368 amphibian, bird and mammal species as well as 641 plants in the Brazilian Cerrado [19,43]. Because the metric is comparable across space, it was possible to estimate the changes in probability of persistence for all species as a consequence of sourcing soy from different parts of the Cerrado. Because results can be disaggregated, impacts on the probability of persistence of individual charismatic species (such as giant anteaters and jaguars) could also be derived. However, despite these scores having several desirable properties, their derivation for large numbers of species is computationally demanding, and so they have had limited uptake at global scale [24].

In this paper, we develop Durán et al.’s [19] method into the LIFE metric by bringing AOH data for >30 000 terrestrial vertebrates together with high-performance computing to generate global, downloadable maps that summarize at 1 arc-min resolution the impact on the expected number of extinctions (either increases or decreases) of two archetypal land-cover changes: (i) converting natural habitat and pasture to arable, and (ii) restoration of current pasture and arable to their natural state. To align with the broad policy and societal focus on extinctions, we express the metric in terms of changes in probability of extinction (rather than persistence), but of course a change in extinction probability of a species is simply equal and opposite to that in its probability of persistence. Conversion to arable land was chosen because food and farming are responsible for more biodiversity loss than any other sector [5–7], and so maps of where agricultural impacts will be most acute are useful in guiding conservation and other decisions. We focused on restoration because of its high profile in international policy [44], including within the United Nations’ Decade on Ecosystem Restoration, and because mapping its potential impact provides information on where actions to reverse past habitat losses would be most effective. To better understand what the LIFE metric represents, for each of these mapped layers, we investigate how our scores vary with an area’s species richness, endemism and degree of habitat loss to date. Because the LIFE metric explicitly assumes non-linear relationships between habitat loss and extinction risk, we also examine its scalability—the extent to which scores derived for grid cells can be relied upon when actions are smaller or larger than those cells. Finally, we explore the sensitivity of our findings to different assumptions about how the probability of persistence responds to losses or gains of suitable habitat (i.e. to the shape of the persistence–AOH curve) and how far our results differ across major taxonomic groups. We begin, though, by explaining in detail how LIFE scores are derived.

4. Generating current and original areas of habitat

To derive global maps of the LIFE score for future land-cover changes, we first calculated current and estimated original AOH for all terrestrial vertebrate groups (amphibians, reptiles, birds and mammals) [4]. We did not include species with missing data, those that inhabit caves or subterranean habitats or those where mismatches between range maps, habitat maps and habitat preferences result in no measurable AOH either currently or in the past. We also excluded species that are listed as ‘marine’, ‘terrestrial + marine’, ‘freshwater’ or ‘freshwater + marine’ in the IUCN ‘systems’ field [4], which removes just over 500 species—largely penguins, marine mammals and sea snakes. This left us with 30 875 species (7188 amphibians, 8760 reptiles, 9447 birds and 5480 mammals). Current and original AOHs were generated for each species following Brooks et al. [35,45,46] (see Data accessibility). For current AOH, we used a map of the estimated distribution of habitats [47] in 2016. For the original AOH, we used a map of potential natural vegetation (PNV) [48], which estimates the distribution of habitat types in the absence of human impacts. The current layer maps IUCN level 1 and 2 habitats, but habitats in the PNV layer are mapped only at IUCN level 1, so to estimate species’ proportion of original AOH now remaining, we could only use natural habitats mapped at level 1 and artificial habitats at level 2. We overlaid these two habitat surfaces with species’ range maps from IUCN and BirdLife International and a Digital Elevation Model [49,50] and estimated AOH for each species’ range as those parts of its range that are (or were) suitable based on its elevation and habitat preferences (from IUCN) [4]. IUCN codes species’ range polygons based on species’ presence, origin and seasonality. We included those parts of a species’ range where its presence is ‘extant’ or ‘possibly extinct’, its origin is ‘native’, ‘reintroduced’ or ‘uncertain’ and the seasonal occurrence is ‘resident’, ‘breeding’, ‘non-breeding’ or ‘unknown’. When generating original AOH maps, we also included range polygons coded as ‘extinct’, acknowledging that these data are incomplete, particularly for amphibians. For species that exhibit seasonal habitat preferences, AOH was calculated separately for the breeding and non-breeding seasons.
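As an illustration of the overlay logic described above (a minimal sketch, not the pipeline used in this study; the habitat codes, grid values and elevation limits below are hypothetical placeholders), the per-pixel AOH test reduces to intersecting a species' range with its suitable habitats and elevational limits:

```python
import numpy as np

def area_of_habitat(range_mask, habitat_map, elevation, suitable_habitats,
                    elev_min, elev_max, pixel_area_km2):
    """Conceptual AOH calculation: pixels inside the species' range whose
    mapped habitat class is listed as suitable and whose elevation falls
    within the species' elevational limits. All inputs are 2-D arrays on
    the same grid; codes and limits would come from IUCN data."""
    suitable = np.isin(habitat_map, list(suitable_habitats))
    in_elev = (elevation >= elev_min) & (elevation <= elev_max)
    aoh_mask = range_mask & suitable & in_elev
    return aoh_mask, aoh_mask.sum() * pixel_area_km2

# Toy example on a 3x3 grid (all values hypothetical; 1 = forest, 14 = arable).
range_mask = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1]], dtype=bool)
habitat    = np.array([[1, 14, 1], [1, 1, 14], [1, 1, 1]])
elevation  = np.array([[200, 300, 250], [900, 400, 350], [100, 150, 2600]])
mask, aoh_km2 = area_of_habitat(range_mask, habitat, elevation,
                                suitable_habitats={1}, elev_min=0, elev_max=2000,
                                pixel_area_km2=0.01)
print(aoh_km2)  # current AOH in km2 for this toy species
```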

For each species, we then calculated the extant proportion of its original AOH as the ratio of its current to original AOH (figure 2). This analysis indicates that 14.3% of species have a larger estimated AOH currently than in the absence of people, implying that human-mediated land-use change has enabled these species to expand their potential distributions, sometimes very substantially. Across all species, the geometric mean proportion of AOH remaining is 0.80. However, this figure is strongly influenced by very marked AOH expansions among some of those species that have apparently benefitted from human activity. Focusing instead on the 85.7% of species with smaller AOHs now than those estimated in the absence of people, their geometric mean proportion of AOH remaining is 0.62.
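The geometric means quoted above are simply exponentials of mean log proportions; the following sketch (with made-up proportions, not our data) shows the calculation for all species and for the subset with net AOH losses:

```python
import numpy as np

def geometric_mean(proportions):
    """Geometric mean of per-species proportions of original AOH remaining."""
    p = np.asarray(proportions, dtype=float)
    return float(np.exp(np.mean(np.log(p))))

# Hypothetical proportions for five species (current AOH / original AOH);
# values greater than 1 indicate net AOH gains under human land use.
props = [0.9, 0.5, 0.7, 1.4, 0.3]
print(geometric_mean(props))                         # all species
print(geometric_mean([p for p in props if p < 1]))   # only species with AOH losses
```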

5. Using current and original areas of habitat to estimate marginal changes in probability of extinction

Following Durán et al. [19], for each species, we then used our estimates of its current and original AOH to estimate the marginal impact of two contrasting sets of land-cover changes: the conversion of remaining natural habitats and non-urban artificial lands to arable land (our ‘conversion to arable scenario’) and the restoration of non-natural habitats (the ‘reversion to natural scenario’). In the conversion to arable scenario, all terrestrial habitats currently mapped as non-urban were converted to arable land. In the reversion to natural scenario, all areas classified as arable or pasture were restored to their PNV (as mapped by [48]). In effect, here we are treating pasture as a semi-natural habitat that, despite often being actively managed, can still harbour substantial levels of biodiversity and therefore sits between natural and arable land. In both scenarios, land currently classified as urban was left unmodified because it is highly unlikely that either farmland expansion or restoration will encroach into existing urban areas. We estimated the effect on each species’ AOH of any conversion or reversion occurring in its current range polygon, even if that fell outside its current AOH—so under conversion, a species tolerant of arable could expand its AOH into previously unoccupied parts of its range, while under reversion, a species intolerant of cropland could expand back into restored natural habitat. Scenario-specific changes in AOH were calculated at the scale of 100 m pixels and then aggregated into 1 arc-min grid cells (approximately 1.86 × 1.86 km (3.4 km2 in area) at the equator) to facilitate downstream computation while still providing results at a fine enough scale to inform real-world decision-making.
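The two scenarios can be sketched as a simple reclassification followed by block aggregation (illustrative Python only; the land-cover codes and the block size used to go from 100 m pixels to 1 arc-min cells are placeholder assumptions, not those of our pipeline):

```python
import numpy as np

# Illustrative land-cover codes (hypothetical, not the codes used in the paper).
NATURAL, PASTURE, ARABLE, URBAN = 1, 2, 3, 4

def apply_scenario(landcover, pnv, scenario):
    """Return a modified land-cover map under one of the two archetypal changes.
    'conversion': every non-urban pixel becomes arable.
    'reversion' : arable and pasture revert to potential natural vegetation (pnv).
    Urban pixels are left untouched in both cases."""
    out = landcover.copy()
    if scenario == "conversion":
        out[landcover != URBAN] = ARABLE
    elif scenario == "reversion":
        farmed = np.isin(landcover, [ARABLE, PASTURE])
        out[farmed] = pnv[farmed]
    return out

def aggregate_change(delta_aoh_100m, block=20):
    """Sum fine-scale (e.g. 100 m) per-pixel AOH changes into coarser grid
    cells by summing non-overlapping blocks (the block size is illustrative)."""
    h, w = delta_aoh_100m.shape
    h, w = h - h % block, w - w % block
    d = delta_aoh_100m[:h, :w]
    return d.reshape(h // block, block, w // block, block).sum(axis=(1, 3))
```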

Next, for each species, we translated the scenario-driven change in its AOH into a cell-specific change in its global probability of persistence and subsequently extinction risk over an appropriate relaxation period (following the approach summarized in §3 and in figure 1). The ‘true’ form of the persistence–habitat loss curve over a fixed period of time is of course not known and is likely to vary across taxa. To take a conservative approach and to avoid conjecture, we have based this iteration of LIFE on established literature, with a view to implementing improved persistence–AOH curves in the future. We followed previous studies using this approach by assuming a power function with an exponent of 0.25 [19,20,24,43], but we also tested the sensitivity of our broad findings to this assumption by using several alternative curve shapes (see §9). Because we were not concerned with those species with greater current than original population sizes (see §2), where the current or scenario estimate of a species’ AOH exceeded its original AOH, we capped its probability of persistence at 1 [20]. Following maps made by the IUCN for threatened species, we account for species occupying novel regions within the limits of their native range but not the colonization of areas beyond it, but note that new iterations of LIFE maps could be adjusted to include natural range expansions and assisted colonizations. For migratory species, probability of persistence in any scenario was derived separately for the species’ breeding and non-breeding ranges, with the overall change in persistence for a given set of habitat changes then calculated as the difference between the geometric means of their breeding and non-breeding probabilities of persistence before and after the changes (based on equation 3 of Durán et al. [19]; see electronic supplementary material, section S4).
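For migratory species, the geometric-mean combination described above amounts to the following (a sketch of the logic with hypothetical persistence values; see Durán et al. [19], equation 3, for the full formulation):

```python
from math import sqrt

def combined_persistence(p_breeding, p_non_breeding):
    """Overall persistence for a migratory species: geometric mean of its
    breeding- and non-breeding-season probabilities of persistence."""
    return sqrt(p_breeding * p_non_breeding)

def delta_persistence_migratory(p_b_before, p_nb_before, p_b_after, p_nb_after):
    """Change in persistence under a set of habitat changes: the difference
    between the geometric means after and before those changes."""
    return combined_persistence(p_b_after, p_nb_after) - \
           combined_persistence(p_b_before, p_nb_before)

# Hypothetical example: habitat loss in the non-breeding range only.
print(delta_persistence_migratory(0.95, 0.90, 0.95, 0.80))
```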

In the last stage, we summed the change in probability of persistence for all the species found in the cell. Significantly, this summed value of change in probability of persistence across species in a grid cell is numerically equal to the expected number of extinctions caused or avoided by conversion or reversion of that grid cell (for proof, see electronic supplementary material, section S3). To align with the broad policy focus on extinctions, we then multiply our persistence score values by −1 to convert them to changes in extinction risk. Finally, because the area undergoing change varies widely across cells, we divided the summed change in extinction risk scores by the area (in km2) of the cell restored or converted under that scenario to obtain an overall LIFE score describing the likely impact on the expected number of extinctions of converting or restoring 1 km2 of land. The scaling error associated with summing and then averaging 100 m pixel changes in this way is explored under §8 (Scalability).
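Putting these steps together, the per-cell calculation can be sketched as follows (illustrative code with invented AOH values, not the published pipeline; it assumes the z = 0.25 power-law curve described above):

```python
def persistence(aoh, aoh_original, z=0.25):
    """Power-law persistence, capped at 1 when AOH exceeds its original extent."""
    return min(1.0, aoh / aoh_original) ** z

def life_score_for_cell(species_records, area_changed_km2, z=0.25):
    """Illustrative per-cell LIFE score. species_records holds
    (aoh_current, aoh_scenario, aoh_original) in km2 for each species whose
    AOH in this cell differs between current and scenario land cover. The
    summed change in persistence equals the expected number of extinctions
    avoided; multiplying by -1 converts to a change in extinction risk, and
    dividing by the area changed gives the per-km2 score."""
    delta_p = sum(
        persistence(aoh_scen, aoh_orig, z) - persistence(aoh_cur, aoh_orig, z)
        for aoh_cur, aoh_scen, aoh_orig in species_records
    )
    delta_extinctions = -delta_p          # extinction risk is opposite in sign
    return delta_extinctions / area_changed_km2

# Hypothetical cell: two species lose habitat under conversion to arable.
records = [(1_000.0, 996.6, 10_000.0),    # widespread species
           (500.0,   496.6,    600.0)]    # already heavily reduced species
print(life_score_for_cell(records, area_changed_km2=3.4))
```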

6. Global maps of the LIFE score

The LIFE score maps for our conversion and reversion scenarios (figure 3) prompt two overarching observations. First, while the per-km2 impacts on extinction of converting remaining habitats and pasture to arable land are very largely positive (indicating an increase in extinction risk) and those of restoring natural habitats very largely negative (indicating a decrease in extinction risk), the increases in extinction risk from conversion to arable tend to be both greater and more widely distributed than the decreases in extinction risk resulting from habitat reversion to natural. The relatively lower and patchier gains from reversion to natural arise because many grid cells currently have relatively little area under farming, and because to date there has been no conversion (at 100 m resolution) in some 1 arc-min grid cells of exceptional importance for vertebrate biodiversity. Comparing the two maps thus suggests that at the global scale we have far more to gain through habitat retention than through restoration. The importance of retaining existing natural habitats is underscored by the delayed and, in many cases, lower impacts of real-world habitat restoration compared with conversion [51,52]: any benefits plotted in our reversion surface are less clearcut and would likely take far longer to materialize than the increases in extinction risk shown in our conversion map.

A second observation is that for both scenarios, LIFE scores are highly skewed, with the majority of regions having relatively low values and a few regions scoring very highly. The conversion map highlights areas with high levels of vertebrate endemism, including several species-rich regions—such as the Guiana Shield, Cameroon, New Guinea and northern Australia—where to date clearance for agriculture has been relatively limited. Under reversion to natural, by contrast, the highest LIFE scores correspond to areas known to have large numbers of relatively narrowly distributed vertebrates that have already undergone extensive conversion to agriculture—including much of Brazil’s Atlantic Forest, eastern Madagascar, the highlands of Ethiopia and the Philippines. In the next section, we set out a more formal exploration of these spatial patterns.

7. Dissecting spatial variation in LIFE scores

To check our understanding of what LIFE scores represent, we investigated how well their spatial variation is predicted by three key components of the importance of land-cover change for global extinctions: species richness, the degree of endemism of the species present and the extent to which the species have already lost suitable habitat anywhere in their ranges. Because LIFE scores are summed across species, we anticipated that absolute values would covary positively with species richness. Because a unit area of land-cover change should have a greater impact on the probability of extinction of species with smaller global ranges, we expected absolute LIFE scores to be higher in grid cells whose species are on average more narrowly endemic. And because we consider that any given loss of AOH impacts more heavily those species that have lost more habitat already (figure 1), we expected positive associations between absolute LIFE scores and the average proportional loss of AOH to date of those species present.

To test these predictions, we calculated: richness as the number of species whose ranges overlapped a grid cell, endemism as the mean proportion of each species’ current total AOH made up by the cell and habitat loss to date as the mean, across the species present, of the proportion of their original AOH that is no longer suitable for them (electronic supplementary material, figure S5). For the two scenarios (conversion to arable and reversion to natural), we focused on LIFE values with the predominant effect (i.e. positive LIFE scores associated with conversion and negative LIFE scores associated with reversion). The absolute value was taken for reversion values. We then modelled our log10-transformed LIFE scores in relation to these three predictor variables, including a spatial smoothing function for geographic location, by randomly sampling 170 000 cells (0.32 and 0.96% of the data for conversion and reversion, respectively) without replacement and calculating mean standardized effect sizes across 200 independent runs. Conversion and reversion impacts were modelled separately, only considering losses and gains, respectively, in each.
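For concreteness, the three cell-level predictors defined above can be sketched as follows (an illustrative calculation from per-species AOH values, not the code used for the analysis):

```python
import numpy as np

def cell_predictors(cell_aoh_km2, total_current_aoh_km2, original_aoh_km2):
    """Illustrative predictors for one grid cell, following the definitions
    in the text. The arrays cover the species whose ranges overlap the cell:
    cell_aoh_km2[i]          - species i's current AOH inside this cell
    total_current_aoh_km2[i] - species i's current global AOH
    original_aoh_km2[i]      - species i's estimated original global AOH"""
    cell = np.asarray(cell_aoh_km2, float)
    cur = np.asarray(total_current_aoh_km2, float)
    orig = np.asarray(original_aoh_km2, float)

    richness = len(cell)                                   # number of species
    endemism = np.mean(cell / cur)                         # mean share of AOH in cell
    habitat_loss = np.mean(np.clip(1 - cur / orig, 0, 1))  # mean proportion of
                                                           # original AOH already lost
    return richness, endemism, habitat_loss

# Hypothetical three-species cell.
print(cell_predictors([2.0, 3.4, 1.0], [400.0, 25.0, 9000.0], [500.0, 80.0, 9000.0]))
```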

These analyses confirmed our understanding of what is captured in LIFE scores (table 1). Absolute values associated with conversion and reversion were greater for grid cells with higher species richness of terrestrial vertebrates, cells whose species on average exhibit greater endemism and cells whose species have already lost more of their original AOH. Standardized effect sizes were greatest for endemism, but all had relatively narrow confidence intervals across independent model runs. The modelled deviance explained ranged across runs from 79.4 to 89.6% and 69.1 to 76.4% for conversion and reversion, respectively.

Table 1. Mean standardized effect sizes (with 2.5% and 97.5% limits across model runs) of the three predictors of LIFE scores under each land-cover change.

land-cover change           predictor              mean    2.5%    97.5%
conversion to agriculture   endemism               1.106   0.809   1.287
conversion to agriculture   habitat loss to date   0.188   0.078   0.302
conversion to agriculture   species richness       0.705   0.548   0.845
reversion to natural        endemism               0.603   0.453   0.763
reversion to natural        habitat loss to date   0.448   0.346   0.554
reversion to natural        species richness       0.113   0.033   0.264

8. Scalability of LIFE scores

A central conceptual premise of the LIFE framework is that the relationship between a species’ remaining AOH and its probability of persistence is non-linear. This means that the per km2 impact on extinction risk of an action that is larger than the grid cell size at which an impact is computed is not exactly the same as the average across all affected grid cells and that of a smaller action is not the same as that of the entire grid cell that overlays it. However, running bespoke extinction risk calculations at the scale of any specific action would be impractical for most end-users, so instead we ran two sets of simulations to examine how far ‘true’ LIFE scores derived at exactly the scale of an action deviate from those estimated simply from using our existing 1 arc-min results. This deviation will depend on each species’ proportion of AOH remaining, the shape of the persistence–habitat loss curve and the size of the action.

Spatially modelling hundreds of actions was computationally prohibitive, so to test the scalability of our maps, we opted for a non-spatial statistical-modelling approach focused on the following five regions: South America, sub-Saharan Africa, south-eastern Asia, western Europe and northern Asia (Russia and Mongolia). For each region, we calculated the proportion of AOH remaining for each species present. To examine the scalability of our mapped LIFE scores for actions larger than our grid cells, we modelled 1000 actions across geometrically distributed sizes, ranging from the native resolution (3.4 km2 at the equator) to 10 million km2. For each action, the probability that a species was affected was governed by the portion of its AOH overlapping the region and the area of the action. The appropriate number of grid cells for the action size was then iteratively scattered across the region without replacement, with each having a chance to hit a given species. This procedure is essentially equivalent to assuming a homogeneous random distribution of species within the region. Then, for each species, we calculated both the ‘true’ impact of the simulated land-cover change and the impact derived from the grid cell values, expressed the deviation between the two as a fraction of the ‘true’ value and summed these relative deviations across all species. We repeated the process for a total of 100 actions of each size.
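The source of the scaling error can be illustrated non-spatially: for a single species, summing many independently evaluated per-cell impacts underestimates the impact of removing the same total area in one action, because the persistence curve steepens as AOH shrinks. A minimal sketch (with invented AOH values and a simplified constant cell size, not our simulation code):

```python
import numpy as np

def persistence(aoh_km2, aoh_original_km2, z=0.25):
    """Power-law persistence, capped at 1 and floored at zero AOH."""
    frac = np.clip(aoh_km2 / aoh_original_km2, 0.0, 1.0)
    return frac ** z

def scaling_deviation(aoh_current, aoh_original, loss_km2, n_cells, z=0.25):
    """Relative deviation between the 'true' impact of one large loss and the
    estimate obtained by summing n_cells per-cell impacts, each evaluated at
    the species' current AOH (the quantity the simulations examine)."""
    p_now = persistence(aoh_current, aoh_original, z)
    true_dp = persistence(aoh_current - loss_km2, aoh_original, z) - p_now
    per_cell_dp = persistence(aoh_current - loss_km2 / n_cells, aoh_original, z) - p_now
    estimate = n_cells * per_cell_dp
    return (estimate - true_dp) / abs(true_dp)

# Hypothetical species retaining 40% of an original 100,000 km2 of habitat.
for loss_km2 in (10, 1_000, 30_000):
    n_cells = max(1, round(loss_km2 / 3.4))          # ~1 arc-min cells affected
    dev = scaling_deviation(40_000, 100_000, loss_km2, n_cells)
    print(f"{loss_km2:>7} km2 action: relative deviation {dev:+.1%}")
```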

These simulation exercises suggested that our mapped surfaces can be used to impute the approximate per-km2 impact on extinctions of actions ranging up to 1000 km2 in size. Figure 4a shows how the summed relative deviation between the true and grid cell-derived values varies with action size. In western Europe and northern Asia, the incurred error remains <10% for actions up to 30 000 and 40 000 km2, respectively. South-east Asia, South America and sub-Saharan Africa reach 10% mean deviation at just under 1000 km2. These regional differences reflect the fact that species at lower latitudes have on average lost a greater proportion of their AOH already.

Adopting a similar modelling approach to test the validity of using mapped LIFE values for actions that are smaller than our mapped grid cells, we used the same sets of species and modelled 100 actions ranging in size from 0.05 to 1 arc-min on the side (0.17–3.4 km2 at the equator). When calculating the ‘true’ value of the simulated land-cover change, the area that the action alters within the grid cell is known and is added or removed from each species’ current AOH as appropriate. The fractional value, on the other hand, is calculated by multiplying the average LIFE score per unit area in the cell (which assumes land-use changes across the entire cell) by the area of the action. Figure 4b shows the results of this process. The mean summed deviation between the ‘true’ and grid cell-derived value remains low right down to 0.05 arc-min actions (where it reaches ~7% in SE Asia and less elsewhere). However, here the uncertainty in this deviation is 25%, so we advise caution when using grid cell values for very small actions.
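The small-action case can be sketched in the same way: the imputed value multiplies the cell's per-km2 score by the action's area, whereas the 'true' value recomputes persistence for the exact area changed (invented numbers, for illustration only):

```python
def persistence(aoh_km2, aoh_original_km2, z=0.25):
    """Power-law persistence, capped at 1 and floored at zero AOH."""
    return min(1.0, max(aoh_km2, 0.0) / aoh_original_km2) ** z

def small_action_comparison(aoh_current, aoh_original, cell_aoh_loss_km2,
                            action_km2, z=0.25):
    """Return the 'true' change in persistence for an action of action_km2
    and the value imputed from the cell's per-km2 score (which assumes the
    species loses cell_aoh_loss_km2, i.e. land cover changes across the
    whole cell)."""
    p_now = persistence(aoh_current, aoh_original, z)
    cell_dp = persistence(aoh_current - cell_aoh_loss_km2, aoh_original, z) - p_now
    imputed = (cell_dp / cell_aoh_loss_km2) * action_km2
    true = persistence(aoh_current - action_km2, aoh_original, z) - p_now
    return true, imputed

# Hypothetical species with 5,000 of an original 20,000 km2 of AOH remaining;
# the action converts 0.2 km2 within a cell whose full conversion would
# remove 3.4 km2 of this species' habitat.
true_dp, imputed_dp = small_action_comparison(5_000, 20_000, 3.4, 0.2)
print(true_dp, imputed_dp)   # the two values should agree closely
```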

With these results in mind, we are confident that the LIFE surfaces presented in figure 3 can be used to evaluate changes in the statistically expected number of extinctions driven by land-cover changes of up to about 1000 km2. This does not preclude the use of LIFE as a means to assess larger changes, but doing so will incur a higher level of uncertainty or else require bespoke calculations of LIFE, tailored to specific interventions (see Data accessibility). Of course, LIFE metric values are representative of a snapshot of the current state of global land cover, which is subject to change. This is also true of all other biodiversity metrics that consider land cover. Therefore, to minimize the risk of inaccuracies in the LIFE metric, it will be important to make use of the best and most recent land-cover data as and when it becomes available, especially in regions undergoing large-scale, rapid land-cover change.

9. Sensitivity analyses

We tested the sensitivity of spatial variation in LIFE scores to (i) the assumed shape of the relationship between a species’ probability of persistence and its loss of AOH, and (ii) what groups of species are included in the analysis.

(a) Sensitivity to changing the persistence–habitat loss curve

The relationship between incremental losses of a species’ habitat and its risk of extinction is unknown and likely to vary widely across species: modes and rates of reproduction and dispersal, evolutionary history and vulnerability to other threats may each shape how species’ populations respond to anthropogenic habitat loss [53]. For our main analyses, we followed other studies [19,20,24,43,54] in assuming all species exhibit a power-law persistence–habitat loss curve with an exponent of 0.25, but we also explored how our two LIFE score surfaces differed using curves with exponents set to 0.1, 0.5 and 1.0 (the latter indicating a linear response to habitat loss) and assuming probability of persistence changes according to a modified Gompertz curve (which allows for the disproportionate impact of stochasticity on persistence at low AOH values; see electronic supplementary material, figure S2 for curve shapes).
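For reference, the candidate curve families can be written down as follows. The power-law forms follow directly from the text; the modified Gompertz shown here is only an illustrative normalized parameterization that reproduces the qualitative shape, not the exact form used in the electronic supplementary material:

```python
import numpy as np

def power_law(frac_aoh, z):
    """P = (proportion of original AOH remaining)^z, capped at 1.
    z = 1.0 corresponds to a linear response to habitat loss."""
    return np.minimum(1.0, frac_aoh) ** z

def modified_gompertz(frac_aoh, a=2.5, b=5.0):
    """Illustrative sigmoidal alternative, normalized so that P(0) = 0 and
    P(1) = 1; persistence collapses disproportionately at low AOH. The
    parameters a and b are arbitrary choices for this sketch."""
    f = np.minimum(1.0, frac_aoh)
    raw = np.exp(-a * np.exp(-b * f))
    lo, hi = np.exp(-a), np.exp(-a * np.exp(-b))
    return (raw - lo) / (hi - lo)

fracs = np.array([0.05, 0.1, 0.25, 0.5, 0.75, 1.0])
for z in (0.1, 0.25, 0.5, 1.0):
    print(f"z = {z:>4}:", np.round(power_law(fracs, z), 3))
print("gompertz:", np.round(modified_gompertz(fracs), 3))
```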

The LIFE score maps generally pick out the same broad regions of the world as being important for conversion and reversion regardless of curve specification—typically species-rich parts of the tropics and subtropics (electronic supplementary material, figure S6). However, comparison of the maps also shows, as might be expected, that assuming higher exponents tends to increase the homogeneity of LIFE scores: the impact of a unit AOH conversion (or reversion) becomes less sensitive to how much habitat conversion has already taken place. At the extreme, if persistence responds linearly to reductions in AOH (i.e. z = 1.0), the loss of a given AOH has the same impact regardless of how much of a species’ AOH has already been cleared. If the assumption of a linear fit is biologically inappropriate [39], it thus risks underestimating the impact of losing (or restoring) the last remaining areas of habitat in highly converted regions while overestimating the impact of changes elsewhere. Conversely, maps generated with z set to 0.1 and especially those assuming a modified Gompertz relationship show greater spatial variation in LIFE scores and suggest the impacts of restoration or conversion would be relatively greater in regions that have already undergone extensive habitat clearance. We also conducted a simple variance analysis of the curve exponents and the Gompertz curve by comparing the variance of the scores within each pixel when different curve shapes were used. Most cells (77%) had a variance of less than 1% from the z = 0.25 curve. There was a higher level of variance in areas with a greater number of species, up to approximately 60% of the z = 0.25 value. This variance did not strongly correlate with the score itself (r = 0.0001).

(b) Variation across taxonomic groups

Disaggregating LIFE scores by major taxonomic group (amphibians, reptiles, birds and mammals) again suggested our metric is broadly robust at a coarse scale (see electronic supplementary material, figure S7): for each of our taxonomic groups, the same regions would generally experience marked (and others, negligible) changes in species extinction risk following habitat conversion or reversion. However, there are some interesting differences when amphibians or reptiles are considered in isolation. Compared with all terrestrial vertebrates combined, for amphibians, land-cover changes in eastern North America and southern Europe are more impactful, while for reptiles, changes in some arid regions (such as the Sahara and central Australia) appear more important and those in higher latitude regions less important. These observations underscore the importance, in subsequent work, of expanding the LIFE metric to include additional taxa, most obviously any sizeable plant or invertebrate groups for which range maps and habitat preferences become available for a large proportion of the world’s species.

10. Overview, limitations and applications

By combining data on ranges and habitat preferences for 30 875 species of terrestrial vertebrates together with maps of the current and estimated original extent of habitat types, we generated two global, 1 arc-min resolution LIFE surfaces describing the present-day impacts on probable number of extinctions of converting or restoring natural habitats worldwide. Assuming species’ probability of persistence responds to changing AOH according to a power law (with a z-value of 0.25; figure 1), habitat restoration would be particularly valuable per unit area in endemic-rich regions that have undergone extensive habitat clearance already (such as the Atlantic Forest, eastern Madagascar and the Ethiopian Highlands). Habitat retention, on the other hand, would have most impact in mitigating extinction in these regions too, but also in endemic-rich as well as species-rich regions where there has been less marked conversion to date (such as the Guiana Shield, southeast Amazon Basin, Cameroon, eastern Congo, Greater Sundas and northern Australia). Statistical modelling of spatial variation in LIFE scores confirms these patterns, with impacts from conversion and restoration both co-varying positively with endemism, with the extent to which species have already lost AOH, and (especially for conversion scores) with species richness. We note that statistical exploration of variation in biodiversity metrics is unusual, but suggest similar formal interrogation of spatial patterns would be helpful in interpreting other global metrics as well.

In terms of the desirable characteristics of biodiversity metrics outlined above, LIFE scores have been devised to be directly comparable across space, such that a unit increase (or decrease) in summed probability of extinction reflects the same impact on the expected number of extinctions regardless of where it occurs. Our investigation of the scalability of LIFE scores suggests in addition that, despite our premise that habitat loss impacts species’ persistence in a non-linear way, the values presented in our 1 arc-min resolution surface can provide reasonably reliable estimates of impacts on extinction risk of land-cover changes ranging from 0.5 to 1000 km2. Our breakdown of findings by taxon illustrates that LIFE scores can be readily disaggregated according to the interests of the user. However, the resulting differences in LIFE score maps among major taxa make clear our vertebrate-only surface is not representative of terrestrial biodiversity as a whole, and so underline the importance of adding data on other groups as these become available. To be usable, such information needs to include the range and habitat preferences of all species in a taxon (or life form, such as trees)—across the entire area of interest. In the absence of such data, LIFE scores should be treated cautiously, especially in regions (such as Mediterranean biomes and the Cerrado) with higher relative richness and endemism among non-vertebrate groups [55].

The LIFE framework has several other limitations. Here we discuss five, the first two of which are linked to its underlying assumptions. First, as with any metric relating land-cover change to extinction risk, we lack a robust understanding of how species’ probability of persistence decreases as their AOH is reduced. Clearly, more work is needed to establish plausible curve shapes and explore how they are likely to vary across and within different groups of species. Reassuringly, we found broadly similar geographical variation in LIFE scores for power-law curves using z-values varying from 0.1 to 0.5, but a modified Gompertz curve resulted in markedly sharper geographical variation. The observation that a z-value of 1.0 produces somewhat more muted differences in apparent impacts suggests that assuming—as many metrics implicitly do—that extinction scales linearly with habitat loss [5,36,38,56] risks underestimating the potentially grave impacts of continued habitat conversion in already heavily converted regions.

Second, at present, the LIFE method treats all habitats listed as suitable by the IUCN as being of equal value to a species—ignoring differences in whether a habitat type is suitable or marginal for a species (because this information is currently only reported for 11% of species and because we lack information on what this difference in suitability means for species in terms of occupancy). This simplification assumes that population density is equal across different habitats within the species’ AOH, potentially overestimating the importance of marginal populations at range limits (and vice versa). Likewise, to date, LIFE also ignores effects of habitat patch size, fragmentation, connectivity, degradation and, critically, the impacts of other threatening processes (such as overexploitation or invasive species) that may limit a species’ ability to make use of otherwise suitable habitat [57]. These oversimplifications mean our scores overestimate the relative impact of habitat loss or restoration for species and places that are particularly affected by such processes. Likewise, for those species able to live in agricultural land, our extinction risk scores take no account of differences in how that land is managed and hence underestimate the benefit of restoring areas currently subject to particularly damaging practices. Conversely, we take a conservative approach and, in line with the IUCN, do not allow species to colonize newly suitable areas outside of their current ranges, potentially underestimating the value of restoration [5]. We hope to address each of these simplifications in how LIFE deals with habitat suitability in future work.

Third, although our results on proportional losses of AOH (figure 2) align with a recent assessment that only around one-half of the area of ice-free biomes is still in areas of low human impact [58], our results are clearly only as reliable as the underlying data on species’ ranges, habitat preferences and habitat maps. Information is poorer for certain taxa and regions [59,60], and the natural habitat preferences for some species (including some nowadays exclusively associated with anthropogenic land uses) are entirely unknown. Estimates of species’ distributions in the absence of people are poor for many taxa, and this means that where we underestimate them and hence species’ habitat loss to date, LIFE scores will underestimate the effects both of further conversion and of restoration (because species are in reality further along the habitat loss trajectory than assumed). Work is in progress to ensure that the pipeline for calculating LIFE scores is readily updatable as new data on species and land-cover distributions become available (see Data accessibility). This is also important because the LIFE surfaces represent a snapshot of extinction risks today and should be updated periodically to reflect the changing availability of habitats, especially in regions of rapid conversion.

Fourth, we do not incorporate time lags between habitat change and biodiversity impact. In the case of conversion, we ignore extinction debts, and in the case of restoration, we do not consider delays or indeed uncertainties in species’ colonization and recovery. Caution is thus needed when comparing LIFE scores between our two maps. Values for restoration should certainly not be viewed as equivalent to those for conversion, and efforts to make comparisons—for instance, to inform offsetting activities for mitigating habitat damage—should employ explicit and conservative adjustment ratios to account for the much slower and less certain course of habitat restoration (see [5,52,61]); we suggest these ratios should be habitat- and region-specific.

Last, biodiversity metrics can be important in raising awareness of environmental change among the public and policymakers. Given this, it is important that metrics are easy to interpret [16]. Although the concept of extinction risk is relatively easy to communicate, the LIFE scores presented here are numerically small and not readily interpretable. Future developments should consider how to make these numbers more easily communicated—for example, by standardizing values relative to a chosen ‘average’ or ‘outstanding’ place in the world.

11. Conclusions

These significant caveats notwithstanding, we believe that the explicit consideration of the non-linear impacts of habitat loss and of long-term anthropogenic conversion, the transparent assumptions in the underlying method and the use of best-available data on more than 30 000 species mean the LIFE score is among the most powerful tools to date for quantifying the likely impacts on extinction of spatially explicit land-cover change. The LIFE layers are publicly available and can be easily combined with other data sources to assess the impact of land-cover changes across a broad range of actions, scales and geographies. For example, in terms of damaging activities, they can be linked in near-real time with remotely derived imagery to estimate and potentially attribute the extinction impacts of clearance events or wildfires. Combined with consumption and trade data, they can help assess the extinction footprint of specific products or businesses, the consequences of national trading decisions and even the impacts of individuals’ diets [43]. In terms of conservation actions, the LIFE layers can be used to estimate the effects of retaining or restoring particular areas of habitat and linked with cost data to help inform systematic conservation planning [62]. And at very large scale, they could be used to estimate the likely beneficial impacts of global-scale initiatives such as the recent international commitment to conserve 30% of land area by 2030—as well as (in combination with trade and economic data) to explore the likely negative effects such actions will have through displacing commodity production to other parts of the world. We welcome any such applications, as well as advice on how to improve the LIFE metric to make it more useful, accurate and representative.

Ethics

This work did not require ethical approval from a human subject or animal welfare committee.

Data accessibility

The LIFE surface data are provided in GeoTIFF format via [63] under the terms and conditions of the underlying species' elevation and habitat preference data and distribution polygons as laid out by the IUCN Red List (https://www.iucnredlist.org/). Digital elevation maps are available from the USGS (https://earthexplorer.usgs.gov), and potential natural vegetation [48] and present land-cover maps [64] from their original sources. The LIFE pipeline source code is available under an open-source license (https://github.com/quantifyearth/life), allowing tailored impact analyses to be easily generated. Different land-cover maps and land-cover change scenarios may be examined, additional data may be used (such as better information on species occupancy) or users can focus on particular species sets of interest. The LIFE data can be easily regenerated as the underlying datasets are updated. We envisage our digital pipeline being regularly updated as upstream sources become available. All final statistical analyses for this manuscript were performed using the mgcv package (v1.9-0) in R (v4.3.2). IUCN data were processed using IUCN-modlib [65].
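As an example of how the published surfaces might be used (a hedged sketch, not an official tool: the filename is hypothetical, the constant cell area ignores variation with latitude, and the rasterio library is assumed to be available), per-km2 scores within a bounding box can be summed to approximate the expected-extinctions impact of converting that area, subject to the scalability limits discussed above:

```python
import rasterio
from rasterio.windows import from_bounds

# Hypothetical filename; the actual layers are distributed via the archive [63].
LIFE_CONVERSION_TIF = "life_conversion_to_arable.tif"

def expected_extinctions(path, bounds, area_km2_per_cell=3.4):
    """Rough estimate of the change in expected extinctions from converting
    all land inside a lat/long bounding box (left, bottom, right, top),
    obtained by summing per-km2 LIFE scores over the cells it covers and
    multiplying by an approximate cell area."""
    with rasterio.open(path) as src:
        window = from_bounds(*bounds, transform=src.transform)
        window = window.round_offsets().round_lengths()
        scores = src.read(1, window=window, masked=True)   # per-km2 scores
    return float(scores.sum()) * area_km2_per_cell

# Example call (commented out because the file path above is a placeholder):
# print(expected_extinctions(LIFE_CONVERSION_TIF, (-45.2, -23.1, -45.0, -22.9)))
```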

Supplementary material is available online [66].

Declaration of AI use

We have not used AI-assisted technologies in creating this article.

Authors’ contributions

A.E.: conceptualization, formal analysis, investigation, methodology, project administration, visualization, writing—original draft, writing—review and editing; T.S.B.: formal analysis, investigation, methodology, software, visualization, writing—review and editing; M.D.: formal analysis, investigation, resources, software, writing—review and editing; T.S.: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, visualization, writing—review and editing; A.A.: conceptualization, investigation, writing—review and editing; D.B.: conceptualization, investigation, methodology, software, writing—review and editing; A.P.D.: conceptualization, investigation, methodology, writing—review and editing; J.M.H.G.: conceptualization, investigation, methodology, writing—review and editing; R.E.G.: conceptualization, investigation, writing—review and editing; A.M.: funding acquisition, investigation, project administration, resources, software, supervision, writing—review and editing; A.B.: conceptualization, investigation, methodology, project administration, resources, supervision, visualization, writing—original draft, writing—review and editing.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interests

We declare we have no competing interests.

Funding

A.E., M.D. and T.S. were supported through grants from the Tezos Foundation and Tarides to the Cambridge Centre for Carbon Credits (grant code NRAG/719). T.S.B. was funded by UK Research and Innovation's BBSRC through the Mandala Consortium (grant no. BB/V004832/1). A.P.D. and J.G. were supported by UK Research and Innovation's Global Challenges Research Fund (UKRI GCRF) through the Trade, Development and the Environment Hub project (project no. ES/S008160/1). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.

Acknowledgements

The authors thank Leon Bennun, Miranda Black, Diana Bowler, Tom Brooks, Graeme Buchanan, Neil Burgess, Stu Butchart, Mike Harfoot, Frank Hawkins, Miranda Lam, Nicholas Macfarlane, Chess Ridley and Thomas Starnes for comments on the LIFE metric, Sarah Blakeman for help with the extinction risk proof and Richard Gregory and an anonymous referee for constructive comments on the MS.


Ask Ethan: Can a lumpy Universe explain dark energy? - Big Think



It’s now been more than 25 years since astronomers discovered “most of the Universe” in an incredibly surprising way. In terms of energy, the most dominant species in our Universe isn’t light, it isn’t normal matter, it isn’t neutrinos, and it isn’t even dark matter. Instead, a mysterious form of energy — dark energy — makes up about ⅔ of the total cosmic energy budget. As revealed by supernovae, baryon acoustic oscillations, the cosmic microwave background, and other key probes of the Universe, dark energy dominates the Universe and has for around 6 billion years, causing our Universe to not only expand, but for that expansion to accelerate, causing distant galaxies to recede from us with greater and greater speeds as time goes on.

But could all of this be based on an erroneous assumption? Could dark energy not exist at all, and could a lumpy, highly inhomogeneous Universe be the culprit, as one recent study has claimed? That’s what many of you, including Dirk Van Tatenhove, Michael Wigner, and Patreon supporter RicL want to know, inquiring things such as:

“Is the timescape model of cosmic expansion a serious threat to the existence of dark energy? Do you find the timescape hypothesis of cosmic expansion to be credible? If so, would that create a problem with observations that the geometry of the universe is flat on the average?

The model suggests that a clock in the Milky Way would be about 35 percent slower than the same one at an average position in large cosmic voids, meaning billions more years would have passed in voids… 35% sounds an awful lot to me.”

Although this might be based on a relatively new study, the idea is quite old. It turns out it runs into colossal problems when confronted not just with supernova data, but with what’s already known about the large-scale structure of the Universe. Let’s take a look for ourselves.

The first thing you have to understand is that despite how it looks locally, where we have a few objects that are extremely dense compared to the cosmic average (like planets, stars, and galaxies) while most of space is devoid of such objects (interplanetary, interstellar, or intergalactic space) altogether, on large cosmic scales, the Universe is very, very uniform. If you were to take a “dipper” that was the size of a kitchen ladle and “dipped” it into the interior of a star or planet, it would pull out matter with roughly the density of water: 1 gram per cubic centimeter.

But if instead your dipper were enormous, like “10 billion light-years per side” levels of enormous, you’d find that whether you dipped your dipper into:

  • an ultra-dense galaxy cluster,
  • an ultra-sparse cosmic void,
  • or anything in between,

that the average density of what you pulled out would be nearly identical: with about one proton’s worth of total energy per cubic meter of space. Even though the difference between underdense and overdense regions (what the pros call “density contrast”) is enormous on small cosmic scales, with typical values approaching a factor of ~10³⁰, on the largest of cosmic scales, those density differences are on the order of ~0.01%, or less than 1-part-in-10,000.
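
Those two numbers are easy to sanity-check with round figures. Here is a minimal back-of-the-envelope sketch in Python; the water density and the “one proton per cubic meter” cosmic mean are assumed stand-ins for “star or planet interior” and “cosmic average,” not values taken from any particular survey:

```python
# Back-of-the-envelope check of the density contrasts quoted above.
# Assumed round numbers: water at ~1 g/cm^3, cosmic mean of ~1 proton per m^3.

WATER_DENSITY_KG_M3 = 1.0e3            # 1 g/cm^3 expressed in kg/m^3
PROTON_MASS_KG = 1.67e-27              # mass of a single proton
COSMIC_MEAN_DENSITY_KG_M3 = PROTON_MASS_KG  # ~1 proton's mass per cubic meter

# Density contrast: delta = (rho - rho_bar) / rho_bar
small_scale_contrast = WATER_DENSITY_KG_M3 / COSMIC_MEAN_DENSITY_KG_M3 - 1.0
print(f"star/planet interior vs. cosmic mean: ~{small_scale_contrast:.0e}")  # ~6e+29, i.e. ~10^30

large_scale_contrast = 1.0e-4  # the ~0.01% quoted for 10-billion-light-year scales
print(f"largest cosmic scales: ~{large_scale_contrast:.0e}")
```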

This isn’t something that you can measure very well by looking at isolated, random “points” that you might sample in the Universe. You can’t look at:

  • the brightest, most massive galaxies within the largest galaxy clusters,
  • the distribution of gamma-ray bursts,
  • the distributions of quasars,
  • or the distributions of cataclysms, like individual type Ia supernovae,

and expect that you’re going to get a “fair sample” of the Universe. If you insist on using those objects, which are non-representative of the Universe as a whole, you’re succumbing to the fallacy of using a biased tracer, which can lead you to believe in the existence of objects, forces, or structures that you would easily see don’t exist if you used a better, more comprehensive indicator.

Instead, we have two main tools for measuring how homogeneous (i.e., uniform) or inhomogeneous (i.e., clumpy or lumpy) the Universe actually is.

  1. We can start here, where we are, and measure how galaxies — large and small, high mass and low mass, luminous and faint, etc. — are distributed across space on all cosmic scales. Using this, we can construct a “mass/density map” of the Universe, not just nearby, but at all points throughout cosmic history.
  2. Or, we can start at the beginning — with the seed fluctuations produced by inflation — and evolve that forward in time until we reach the cosmic microwave background, and then compare our inhomogeneity map from that time (which we observe) with those theoretical predictions.

It should come as no surprise that we’ve done precisely that with both of these methods. For the second option, we got our best data back in the 2010s from the Planck satellite, and found that the “average density fluctuation” in the early Universe was roughly the same on all scales, large and small, and was at just the 1-part-in-30,000 level. Moreover, we’ve also accomplished this with the first option, finding a value that’s consistent with the other method and showing how structure grows and clumps over cosmic time: in perfect agreement on practically all scales with what simulations and theory predict.

Additionally, many near-future missions (the Vera Rubin observatory, the Nancy Roman telescope, and the SPHEREx mission) will measure cosmic structure more exquisitely than ever, cementing what was first assumed and then observed to be true: that the Universe, on the largest cosmic scales, is incredibly homogeneous and uniform.

It’s these facts that justify our longstanding cosmological models: where the Universe is roughly the same everywhere (homogeneous) and in all directions (isotropic), with only small, quantifiable imperfections superimposed atop this uniform background. The Universe was born uniform, then clumped and clustered, and despite all that’s transpired, remains relatively uniform on the largest of cosmic scales.

If we work with a Universe that has these properties, then the only way to “match” what we see with what must exist is to invoke two ingredients that go beyond what’s directly known to exist and make up the Universe. In addition to “normal matter” (which includes the familiar protons, neutrons, and electrons), to light (radiation in the form of photons), and to neutrinos (which are part of the Standard Model of known particles), there must also be a large amount of dark matter that outmasses normal matter by a factor of about 5-to-1, and there must also be dark energy, which accounts for about double the energy density of all other forms of mass/energy (including dark matter) combined.

That’s our standard model of cosmology, and it has withstood countless challenges throughout the 21st century.

Nevertheless, it’s important to keep on challenging the status quo and to explore alternatives, as the idea of attempting to knock down even your most well-established theories and hypotheses is a key component of the enterprise of science. One such alternative to consider that made a lot of noise at the very end of 2024 (and continues now, at the start of 2025) is known as the timescape cosmology, developed by David Wiltshire of New Zealand. In a new paper (and accompanying press release), the claim is that dark energy doesn’t need to exist, and that huge differences in energy density between regions of space create a “lumpy” Universe that exhibits wildly different expansion rates and cosmic ages across these various regions of space.

If this framework were correct, it would imply many new phenomena.

  • The Universe would need to be very inhomogeneous, with the relatively “clumpy” and “empty” regions of space that we find differing from one another in density not by ~0.01%, but by more like ~100% from region to region.
  • That instead of gravitational time dilation altering the age of one region versus another by up to hundreds or thousands of years compared to the 13.8 billion year age of the overall Universe, those age differences would be in the billions of years.
  • And that instead of dark energy causing the Universe to accelerate in its expansion, these large-magnitude inhomogeneities alter the local expansion rate severely, creating regions where the expansion rate is either much larger or much smaller than the cosmic average overall.

As many have noted — including astrophysicists I respect such as Brian Koberlein and David Kipping — this falls into the “profound, if true” category.

But is it true?

As the authors argue, if you use type Ia supernovae as the testing ground, you find that both the standard model of cosmology (what we sometimes call ΛCDM, or the dark matter and dark energy-rich but mostly uniform Universe that we know) and the timescape cosmology model work pretty well, and that future studies with many more type Ia supernovae will be able to distinguish between the two.

Unfortunately, however, for the authors and also for anyone buying into their claims, that’s not the best testing ground we can muster. The best testing ground for this scenario is to instead look at the structure that’s formed in the Universe on all scales, and to test-and-measure how homogeneous vs. inhomogeneous it actually is.

Then, based on that observed level of inhomogeneity, we can simulate a variety of things, including:

  • how significantly these cosmic inhomogeneities contribute to the overall energy density,
  • what type of effects this “inhomogeneity energy” actually has on the expanding Universe (i.e., whether it behaves as radiation, matter, curvature, dark energy, etc.),
  • and how that energy evolves over time, to see whether it can possibly emulate or mimic the effects of dark energy.

Fortunately for all of us, this is not “future work” where the answer is unknown, but work that was done by a large portion of the astrophysics community — including by me, personally — some 20 years ago.

Back in 2005, a team of astrophysicists (Rocky Kolb, Tony Riotto, Sabino Matarrese, and Alessio Notari) suggested a version of this very idea: that dark energy doesn’t exist, and that the effects of inhomogeneity energy on the Universe are instead tricking us into seeing an expansion rate that differs from our predictions. Relatively swiftly, the astrophysics community concluded that this could not be the case. Here’s how we knew.

To work out how much energy those inhomogeneities actually contribute, there are both gravitational potential terms (because of gravitational collapse/contraction) and kinetic terms (because the matter is in motion), and both of those play a role and must be calculated. After performing those calculations — not just with a first-order or second-order approximation, but taking into account fully nonlinear inhomogeneities — a number of lessons emerge.

  • It turns out that inhomogeneities, as a function of energy density, always remain small: no greater than about ~0.1% (or 1-part-in-1000) of the total energy density at any time, even many billions of years into the future.
  • It also turns out that there’s a “key scale” where the greatest contributions arise: scales between roughly a few hundred thousand and ten million light-years. Both larger and smaller cosmic scales, even including super-horizon scales, contribute less.
  • And finally, it turns out that the inhomogeneities never behave as dark energy behaves; they have an equation of state that always contributes further to a decelerating universe, not an accelerating one (see the note just below).
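
For readers who want the “equation of state” point spelled out, the standard Friedmann acceleration equation makes it precise (this is a textbook result, not something specific to those papers): accelerated expansion requires an effective w below -1/3, which dark energy (w ≈ -1) satisfies and which, per the calculations described above, the backreaction of inhomogeneities never does.

```latex
\frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr)
                   \;=\; -\frac{4\pi G}{3}\,\rho\,\bigl(1 + 3w\bigr),
\qquad p = w\rho
\;\;\Longrightarrow\;\; \ddot{a} > 0 \iff w < -\tfrac{1}{3}.
```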

The ending sentences from my 2005 paper, now a full 20 years old, remain tremendously timely, especially with regard to Wiltshire’s work and the attention it’s been getting. In particular:

“The possibility that a known component of the universe may be responsible for the accelerated expansion remains intriguing. However, we conclude that sub-horizon perturbations are not a viable candidate for explaining the accelerated expansion of the universe.”

There’s also something worth pointing out to those of you who aren’t experts, but are merely interested onlookers: David Wiltshire, who has been the leading proponent of the timescape cosmology, has been investigating exactly this type of “alternative to dark energy” ever since that idea was first proposed (and debunked) back in 2005. Some example papers include:

  • a 2005 paper using type Ia supernovae to suggest there’s no dark energy,
  • a 2007 paper suggesting that gravitational energy differences lead to the illusion of cosmic acceleration,
  • a 2011 paper again suggesting that gravitational energy mimics dark energy and leads to only an apparent acceleration,
  • a 2011 paper arguing against a homogeneous universe and in favor of this new “timescape cosmology,”
  • a 2017 paper seeking to prove that cosmic acceleration is only an apparent phenomenon from type Ia supernovae,
  • and three recent papers arguing the same concept: that dark energy isn’t real, and only appears as an apparent effect due to the backreaction of cosmic inhomogeneities.

Despite the fact that we have better type Ia supernovae data today than ever before, this “new research” is just a continuation of a longstanding research program that explores, but in no way proves or validates, an alternative idea to the mainstream. These ideas are important, but the consensus — at least for now — is that our understanding of large-scale structure precludes this from being physically relevant for our own Universe.

To put it all together: yes, our Universe is not perfectly homogeneous and smooth, but instead is indeed lumpy and clumpy. It was born with small imperfections and inhomogeneities in it, and over time, those imperfections grew into the vast cosmic web, with galaxies, stars, planets, white dwarfs, neutron stars, and black holes all throughout it. Some regions really are of enormous density; others really are of a very low density.

But the Universe is not so lumpy or clumpy that our foundational assumptions about it — that it’s isotropic and homogeneous on the largest scales — should be thrown out. The evidence for these properties of the Universe is very strong, as is evidence for the Universe being the same age and having (roughly) the same observed expansion rate in all directions and at all locations, save for the “evolution” that comes along with one simple fact: looking far away in space implies looking farther back in time.

I expect timescape cosmology to remain an area of interest for a few select researchers, but not to gain a broader following based on this research. It’s exciting that a cosmological test has been concocted, but the truth is that dark energy’s existence is now based on a wide, robust suite of evidence that’s so comprehensive that even if we ignored all of the type Ia supernova data entirely, we would still be compelled to conclude that dark energy exists. It’s important to keep your mind open to new ideas, but to always let reality itself rein you back in. Like many new ideas, the timescape cosmology simply withers when faced with the full suite of cosmological evidence.

Send in your Ask Ethan questions to startswithabang at gmail dot com!

The Secret to a Better City Is a Two-Wheeler – Mother Jones

Luchia Brown used to bomb around Denver in her Subaru. She had places to be. Brown, 57, works part time helping to run her husband’s engineering firm while managing a rental apartment above their garage and an Airbnb out of a section of the couple’s three-story brick house. She volunteers for nonprofits, sometimes offering input to city committees, often on transportation policy. “I’m a professional good troublemaker,” she jokes when we meet in her sun-soaked backyard one fine spring day.

She’s also an environmentally conscious type who likes the idea of driving less. Brown bought a regular bike years ago, but mainly used it just for neighborhood jaunts. “I’m not uber-fit,” she says. “I’m not a slug, but I’m not one of the warriors in Lycra, and I don’t really want to arrive in a sweat.”

Then, a couple of years ago, she heard Denver was offering $400 vouchers to help residents purchase an e-bike—or up to $900 toward a hefty “cargo” model that can haul heavier loads, including children. She’d considered an e-bike, but the city’s offer provided “an extra kick in the derriere to make me do it.”

She opens her garage door to show off her purchase: a bright blue Pedego Boomerang. It’s a pricey model—$2,600 after the voucher—but “it changed my life!” she says. Nowadays, Brown thinks nothing of zipping halfway across town, her long dark-gray hair flying out behind her helmet. Hills do not faze her. Parking is hassle-free. And she can carry groceries in a crate strapped to the rear rack. She’d just ridden 4 miles to a doctor’s appointment for a checkup on a recent hip replacement. She rides so often—and at such speeds—that her husband bought his own e-bike to keep up: “I’m like, ‘Look, when you’re riding with me, it’s not about exercise. It’s about getting somewhere.’”

She ended up gifting the Subaru to her son, who works for SpaceX in Texas. The only car left is her husband’s work truck, which she uses sparingly. She prefers the weirdly intoxicating delight of navigating on human-and-battery power: “It’s joy.”

Many Denverites would agree. Over the two years the voucher program—pioneering in scale and scope—has been in effect, more than 9,000 people have bought subsidized e-bikes. Of those, more than one-third were “income qualified” (making less than $86,900 a year) and thus eligible for a more generous subsidy. People making less than $52,140 got the most: $1,200 to $1,400. The goal is to get people out of their cars, which city planners hope will deliver a bouquet of good things: less traffic, less pollution, healthier citizens.

Research commissioned by the city in 2022 found that voucher recipients rode 26 miles a week on average, and many were using their e-bikes year-round. If even half of those miles are miles not driven, it means—conservatively, based on total e-bikes redeemed to date—the program will have eliminated more than 6.1 million automobile miles a year. That’s the equivalent of taking up to 478 gas-powered vehicles off the road, which would cut annual CO2 emissions by roughly 2,400 metric tons.
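
The arithmetic behind those figures is easy to reproduce. A minimal sketch follows; the annual-miles-per-vehicle and grams-of-CO2-per-mile values are assumptions roughly in line with commonly cited US averages, not numbers reported by the city:

```python
# Rough reproduction of the Denver voucher-program arithmetic (assumed inputs noted).

ebikes = 9_000                   # "more than 9,000" vouchers redeemed
miles_per_week = 26              # average from the 2022 survey
weeks_per_year = 52
replacement_share = 0.5          # assume half of e-bike miles displace car miles

car_miles_avoided = ebikes * miles_per_week * weeks_per_year * replacement_share
print(f"car miles avoided per year: {car_miles_avoided:,.0f}")    # ~6.1 million

miles_per_car_per_year = 12_700  # assumed average annual mileage per US vehicle
cars_equivalent = car_miles_avoided / miles_per_car_per_year
print(f"equivalent vehicles removed: {cars_equivalent:.0f}")       # ~480

grams_co2_per_mile = 400         # assumed emissions for an average gas-powered car
tonnes_co2 = car_miles_avoided * grams_co2_per_mile / 1e6
print(f"CO2 avoided per year: {tonnes_co2:,.0f} metric tons")      # ~2,400
```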

Subsidizing electric vehicles isn’t a new concept, at least when those vehicles are cars. President Barack Obama’s 2009 American Recovery and Reinvestment Act offered up to $7,500 to anyone who bought an electric car or light truck, capped at 200,000 per automaker. In 2022, President Joe Biden’s Inflation Reduction Act created new and similar rebates without the caps. The US government has spent more than $2 billion to date subsidizing EV purchases, with some states and cities kicking in more. Weaning transportation off fossil fuels is crucial to decarbonizing the economy, and EVs on average have much lower life-cycle CO2 emissions than comparable gas vehicles—as little as 20 percent, by some estimates. In states like California, where more than 54 percent of the electricity is generated by renewables and other non–fossil fuel sources, the benefits are even more remarkable.

Now, politicians around the country have begun to realize that e-bikes could be even more transformative than EVs. At least 30 states and dozens of cities—from Ann Arbor, Michigan, to Raleigh, North Carolina—have proposed or launched subsidy programs. It’s much cheaper than subsidizing electric cars, and though e-bikes can’t do everything cars can, they do, as Brown discovered, greatly expand the boundaries within which people work, shop, and play without driving. Emissions plummet: An analysis by the nonprofit Walk Bike Berkeley suggests that a typical commuter e-bike with pedal assist emits 21 times less CO2 per mile than a typical electric car (based on California’s power mix) and 141 times less than a gas-powered car. And e-bikes are far less resource- and energy-intensive to manufacture and distribute.
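
To make those ratios concrete, here is a hedged back-solve. The ~400 g/mile gas-car baseline is the same assumption used in the sketch above; the e-bike and electric-car figures are simply the quoted ratios applied to that baseline, not numbers taken from the Walk Bike Berkeley analysis itself:

```python
# Implied per-mile CO2 if an average gas car is taken as ~400 g/mile (assumption).
gas_car_g_per_mile = 400

ebike_g_per_mile = gas_car_g_per_mile / 141   # e-bike: 141 times less than a gas car
ev_g_per_mile = ebike_g_per_mile * 21         # electric car: 21 times an e-bike (CA grid)

print(f"e-bike:       ~{ebike_g_per_mile:.1f} g CO2/mile")   # ~2.8
print(f"electric car: ~{ev_g_per_mile:.0f} g CO2/mile")      # ~60
print(f"gas car:      ~{gas_car_g_per_mile} g CO2/mile")
```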

Cities also are coming to see e-bikes as a potential lifeline for their low-income communities, a healthy alternative to often unreliable public transit for families who can’t afford a car. And that electric boost gives some people who would never have considered bike commuting an incentive to try, thus helping facilitate a shift from car dependency to a more bikeable, walkable, livable culture.

In short, if policymakers truly want to disrupt transportation—and reimagine cities—e-bikes might well be their secret weapon.

I’m an avid urban cyclist who rides long distances for fun, but I don’t ride an electric. So when I landed in Denver in April, I rented a Pedego e-bike to see how battery power would affect my own experience of getting around a city.

Reader: It was delightful. Denver is flat-ish, but it’s got brisk winds and deceptively long slopes as you go crosstown. There are occasional gut-busting hills, too, including one leading up to Sunnyside, the neighborhood where I was staying. Riding a regular bike would have been doable for an experienced cyclist like me, but the battery assist made longer schleps a breeze: I rode 65 miles one day while visiting four far-flung neighborhoods. On roads without traffic, I could cruise along at a speedy 18 miles an hour. The Cherry Creek bike trail, which bisects Denver in a southeast slash, was piercingly gorgeous as I pedaled past frothing waterfalls, families of ducks, and the occasional tent pitched next to striking pop art on the creekside walls. My Apple watch clocked a decent workout, but it was never difficult. 

I did a lunch ride another day with Mike Salisbury, then the city’s transportation energy lead overseeing the voucher program. Tall and lanky, with a thick mop of straight brown hair, Salisbury wears a slim North Face fleece and sports a beige REI e-bike dusted with dried mud. He’s a lifelong cyclist, but the e-bike, which he’d purchased about two years earlier, has become his go-to ride. “I play tennis on Fridays, and it’s like 6 miles away,” he says, and he always used to drive. “It would never, ever have crossed my mind to do it on my acoustic bike.” 

E-bikes technically date back to 1895, when the US inventor Ogden Bolton Jr. slapped an electric motor on his rear wheel. But for more than a century, they were niche novelties. The batteries of yore were brutally heavy, with a range of barely 10 miles. It wasn’t until the lithium-ion battery, relatively lightweight and energy-dense, began plunging in price 30 years ago that e-bikes grew lighter and cheaper. Some models now boast a range of more than 75 miles per charge, even when using significant power assist.

All of this piqued Denver’s interest. In 2020, the city had passed a ballot measure that raised, through sales taxes, $40 million a year for environmental projects. A task force was set up to figure out how to spend it. Recreational cycling has long been a pastime in outdoorsy Colorado, and bike commuting boomed on account of the pandemic, when Covid left people skittish about ridesharing and public transit. E-bikes, the task force decided, would be a powerful way to encourage low-emissions mobility. “We were thinking, ‘What is going to reduce VMT?”—vehicle miles traveled—Salisbury recalls. His team looked at e-bike programs in British Columbia and Austin, Texas, asked dealers for advice, and eventually settled on a process: Residents would get a voucher code through a city website and bring it to a local dealer for an instant rebate. The city would repay the retailer within a few weeks.

A program was launched in April 2022 with $300,000, enough for at least 600 vouchers. They were snapped up in barely 10 minutes, “like Taylor Swift fans flooding Ticketmaster,” Salisbury wrote in a progress report. His team then secured another $4.7 million to expand the program. “It was like the scene in Jaws,” he told me: “We’re gonna need a bigger boat.” Every few months, the city would release more vouchers, and its website would get hammered. Within a year, the program had handed out more than 4,700 vouchers, two-thirds to income-qualified riders.

Denver enlisted Ride Report, an Oregon-based data firm, to assess the program’s impact: Its survey found that 65 percent of the e-bikers rode every day and 90 percent rode at least weekly. The average distance was 3.3 miles. Salisbury was thrilled.

The state followed suit later that year, issuing e-bike rebates to 5,000 low-income workers (people making up to 80 percent of their county’s median income). This past April, state legislators approved a $450 tax credit for residents who buy an e-bike. Will Toor, executive director of the Colorado Energy Office, told me he found it very pleasant, and highly unusual, to oversee a program that literally leaves people grinning: “People love it. There’s nothing we’ve done that has gotten as much positive feedback.” 

I witnessed the good cheer firsthand talking to Denverites who’d taken advantage of the programs. They ranged from newbies to dedicated cyclists. Most said it was the subsidy that convinced them to pull the trigger. All seemed fairly besotted with their e-bikes and said they’d replaced lots of car trips. Software engineer Tom Carden chose a cargo model for heavy-duty hauling—he’d recently lugged 10 gallons of paint (about 110 pounds) in one go, he told me—and shuttling his two kids to and from elementary school.

Child-hauling is sort of the ideal application for cargo bikes. I arrange a ride one afternoon with Ted Rosenbaum, whose sturdy gray cargo e-bike has a toddler seat in back and a huge square basket in front. I wait outside a local day care as Rosenbaum, a tall fellow clad in T-shirt and khakis, emerges with his pigtailed 18-month-old daughter. He straps her in and secures her helmet for their 2.5-mile trek home. “It’s right in that sweet spot where driving is 10 to 15 minutes, but riding my bike is always 14,” Rosenbaum says as we glide away. “I think she likes this more than the car, too—better views.”

The toddler grips her seatposts gently, head swiveling as she takes in the sights. Rosenbaum rides slowly but confidently; I’d wondered how drivers would behave around a child on a cargo bike, and today, at least, they’re pretty solicitous. A white SUV trails us for two long blocks, almost comically hesitant to pass, until I give it a wave and the driver creeps by cautiously. At the next stoplight, Rosenbaum’s daughter breaks her silence with a loud, excited yelp: There’s a huge, fluffy dog walking by.

E-bikes stir up heated opposition, too. Sure, riders love them. But some pedestrians, drivers, dog walkers, and “acoustic” bikers are affronted, even enraged, by the new kid on the block.

This is particularly so in dense cities, like my own, where e-bikes have proliferated. By one estimate, New York City has up to 65,000 food delivery workers on e-bikes. Citi Bike operates another 20,000 pay-as-you-go e-bikes, and thousands of residents own one. When I told my NYC friends about this story, probably half, including regular cyclists, blurted out something along the lines of, “I hate those things.” They hate when e-bikers zoom past them on bike paths at 20 mph, dangerously close, or ride the wrong direction down bike lanes on one-way streets. And they hate sharing crowded bikeways with tourists and inexperienced riders.

In September 2023 near Chinatown, a Citi Bike customer ran into 69-year-old Priscilla Loke, who died two days later. After another Citi Biker rammed a Harlem pedestrian, Sarah Pratt, from behind, Pratt said company officials insisted they weren’t responsible. Incensed, a local woman named Janet Schroeder co-founded the NYC E-Vehicle Safety Alliance, which lobbies the city for stricter regulations. E-bikes should be registered, she told me, and she supports legislation that requires riders to display a visible license plate and buy insurance, as drivers do. This, Schroeder says, would at least make them more accountable. “We are in an e-bike crisis,” she says. “We have older people, blind people, people with disabilities who tell me they’re scared to go out because of the way e-bikes behave.”

Dedicated e-bikers acknowledge the problem, but the ones I spoke with also felt that e-bikes are taking excessive flak due to their novelty. Cars, they point out, remain a far graver threat to health and safety. In 2023, automobiles killed an estimated 244 pedestrians and injured 8,620 in New York City, while cyclists (of all types) killed eight pedestrians and injured 340. Schroeder concedes the point, but notes that drivers at least are licensed and insured—and are thus on the hook for casualties they cause.

Underlying the urban-transportation culture wars is the wretched state of bike infrastructure. American cities were famously built for cars; planners typically left precious little room for bikes and pedestrians, to say nothing of e-bikes, hoverboards, scooters, skaters, and parents with jogging strollers. Cars hog the roadways while everyone else fights for the scraps. Most bike lanes in the United States are uncomfortably narrow, don’t allow for safe passing, and are rarely physically separated from cars—some cyclists call them “car door lanes.” The paths winding through Denver’s parks are multimodal, meaning pedestrians and riders of all stripes share the same strip, despite their very different speeds.

Even in this relatively bike-friendly city, which has 196 miles of dedicated on-road bike lanes, riding sometimes requires the nerves of a daredevil. I set out one afternoon with 34-year-old Ana Ilic, who obtained her bright blue e-bike through the city’s voucher program. She used to drive the 10 miles to her job in a Denver suburb, but now she mostly cycles. She figures she clocks 70 miles a week by e-bike, driving only 10.

Her evening commute demonstrates the patchiness of Denver’s cycling network. Much of our journey is pleasant, on quieter roads, some with painted bike lanes. But toward the end, the only choice is a four-lane route with no bike lanes. Cars whip past us, just inches away. It’s as if we’d stumbled into a suburban NASCAR event. “This is the worst part,” she says apologetically.

The fear of getting hit stops lots of people from jumping into the saddle. But officials in many cities still look at local roadways and conclude there aren’t enough cyclists to justify the cost of more bike lanes. It’s the chicken-egg paradox. “You have to build it,” insists Peter Piccolo, executive director of the lobby Bicycle Colorado. “If we’re going to wait for the majority of the population to let go of car dependency, we’re never going to get here.” 

Advocates say the true solution is to embrace the “new urbanist” movement, which seeks to make cities around the world more human-scaled and less car-dependent. The movement contends that planners need to take space back from cars—particularly curbside parking, where vehicles sit unused 95 percent of the time, as scholar Donald Shoup has documented. That frees up room, potentially, for wider bike lanes that allow for safe passing. (New York and Paris are among the cities now embracing this approach.) You can also throw in “traffic calming” measures such as speed bumps and roads that narrow at intersections. One by-product of discouraging driving is that buses move faster, making them a more attractive commute option, too. 

Cities worldwide are proving that this vision is achievable: In 2020, the mayor of Bogota added 17 permanent miles of bike lanes to the existing 342 and has plans for another 157. (Bogota and several other Colombian cities also close entire highways and streets on Sundays and holidays to encourage cycling.) Paris, which has rolled out more than 500 miles of bike lanes since 2001, saw a remarkable doubling in the number of city cyclists from 2022 to 2023—a recent GPS survey found that more people now commute to downtown from the inner suburbs by bicycle than by car. In New York City, where bike lane miles have quintupled over the past decade, the number of cyclists—electric and otherwise—has also nearly doubled.

Colorado has made some progress, too, says Toor, the Energy Office director. For decades, state road funds could only be used to accommodate cars, but in 2021, legislators passed a bill to spend $5.4 billion over 10 years on walking, biking, and transit infrastructure—“because it’s reducing demand” on roadways, he explains. The transportation department also requires cities to meet greenhouse gas reduction targets, which is why Denver ditched a long-planned $900 million highway expansion in favor of bus rapid transit and safer streets.

One critique of e-bike programs, ironically, involves the climate return on investment. Research on Swedish voucher programs found that an e-bike typically reduces its owner’s CO2 emissions by about 1.3 metric tons per year—the equivalent of driving a gas-powered vehicle about 3,250 miles. Not bad, but some researchers say a government can get more climate bang for the subsidy buck by, for example, helping people swap fossil fuel furnaces for heat pumps, or gas stoves for electric. E-bike subsidies are “a pretty expensive way” to decarbonize, says economist Luke Jones, who co-authored a recent paper on the topic. That’s because e-bikes, in most cases, only replace relatively short car trips. To really slash vehicular CO2, you’d need to supplant longer commutes. Which is clearly possible—behold all those Parisians commuting from the inner suburbs, distances of up to 12 miles. It’s been a tougher sell in Denver, where, as that 2022 survey found, only 5 percent of trips taken by voucher recipients exceeded 9 miles.
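
That conversion checks out with the same assumed ~400 g of CO2 per mile for an average gas-powered car:

```python
# 1.3 metric tons of CO2 per year, at an assumed ~400 g CO2/mile for a gas car:
print(1.3e6 / 400)   # => 3250.0 miles of driving avoided per year
```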

But the value of e-bikes lies not only, and perhaps not even principally, in cutting emissions. Cycling also eases traffic congestion and improves health by keeping people active. It reduces the need for parking, which dovetails neatly with another new urbanist policy: reducing or eliminating mandatory parking requirements for new homes and businesses, which saves space and makes housing cheaper and easier to build. And biking has other civic benefits that are hard to quantify, but quite real, Salisbury insists. “It has this really nice community aspect,” he says. “When you’re out riding, you see people, you wave, you stop to chat—you notice what’s going on in the neighborhoods around you. You don’t do that so much in a car. It kind of improves your mood.”

That sounds gauzy, but studies have found that people who ride to work do, in fact, arrive in markedly better spirits than those who drive or take transit. Their wellbeing is fueled by fresh air and a feeling of control over the commute—no traffic jams, transit delays, or hunting for parking. “It’s basically flow state,” says Kirsty Wild, a senior research fellow of population health at the University of Auckland. Nobody has ascribed a dollar value to these benefits, but it’s got to be worth something for a city to have residents who are less pissed off.

What would really make e-bikes take off, though, is a federal subsidy. The Inflation Reduction Act initially included a $4.1 billion program that could have put nearly 4.5 million e-bikes on the road for $900 a pop, but Democratic policymakers yanked it. Subsequent bills to roll out an e-bike tax credit have not made it out of committee.

E-bike sharing companies are sometimes seen as gentrifiers, but Denver’s experience shows that e-bikes can be more than just toys for the affluent. Take June Churchill. She was feeling pretty stressed before she got her e-bike. She’d come to Denver for college, but after graduating had found herself unemployed, couchsurfing, and strapped for cash. Having gender-transitioned, she was estranged from her conservative parents. “I was poor as shit,” she told me. But then she heard about the voucher program and discovered that she qualified for the generous low-income discount. Her new e-bike allowed her to expand her job search to a wider area—she landed a position managing mass mailings for Democratic campaigns—and made it way easier to look around for an affordable place to live. “That bike was totally crucial to getting and keeping my job,” she says.

It’s true that e-bikes and bikeshare systems were initially tilted toward the well-off; the bikes can be expensive, and bikeshares have typically rolled out first in gentrified areas. Denver’s answer was to set aside fully half of its subsidies for low-income residents.

Churchill’s experience suggests that an e-bike can bolster not only physical mobility, but economic mobility, too. Denver’s low-income neighborhoods have notoriously spotty public transit and community services, and, as the program’s leaders maintain, helping people get around improves access to education, employment, and health care. To that point, Denver’s income-qualified riders cover an average of 10 miles more per week than other voucher recipients—a spot of evidence Congress might contemplate.

But there are still some people whom cities will have to try harder to reach. I ride one morning to Denver’s far east side, where staffers from Hope Communities, a nonprofit that runs several large affordable-housing units, are hosting a biweekly food distribution event. Most Hope residents are immigrants and refugees from Afghanistan, Myanmar, and other Asian and African nations. I watch as a procession of smiling women in colorful wraps and sandals collect oranges, eggs, potatoes, and broccoli, and health workers offer blood-pressure readings. There’s chatter in a variety of languages.

Jessica McFadden, a cheery program administrator in brown aviators, tells me that as far as her staff can tell, only one Hope resident, a retiree in his 70s named Tom, has snagged an e-bike voucher. The problem is digital literacy, she says. Not only do these people need to know the program exists, but they also have to know when the next batch of vouchers will drop—and pounce. But Hope residents can’t normally afford laptops or home wifi—most rely on low-end smartphones with strict data caps. Add in language barriers, and they’re generally flummoxed by online-first government programs.

Tom was able to get his e-bike, McFadden figures, because he’s American, is fluent in English, and has family locally. He’s more plugged in than most. She loves the idea of the voucher program. She just thinks the city needs to do better on outreach. Scholars who’ve studied e-bike programs, like John MacArthur at Portland State University, recommend that cities set up lending libraries in low-income areas so people can try an e-bike, and put more bike lanes in those neighborhoods, which are often last in line for such improvements.

In Massachusetts, the nonprofit organizers of a state-funded e-bike program operating in places like Worcester, whose median income falls well below the national average, found that it’s crucial to also offer people racks, pannier bags, and maintenance vouchers.

As I chat with McFadden, Tom himself suddenly appears, pushing a stroller full of oranges from the food distro. I ask him about his e-bike. He uses it pretty frequently, he says. “Mostly to shop and visit my sister; she’s over in Sloan Lake”—a hefty 15 miles away. Then he ambles off.

McFadden recalls how, just a few weeks earlier, she’d seen him cruising past on his e-bike with his oxygen tank strapped to the back, the little plastic air tubes in his nose. “Tom, are you sure you should be doing that?” she’d called out.

Tom just waved and peeled away. He had places to be.

Green MLA Valeriote says oil and gas shares an 'oversight' - Coast Reporter

“I appreciate The Tyee for bringing this to my attention. This was an oversight which I am taking action to resolve,” Valeriote said in a statement to Pique. “The $172.08 I have in PrairieSky shares is a leftover dividend from past retirement investments in fossil fuels, which I have been actively divesting from over the years. While this highlights the systemic challenges of transitioning pensions away from oil and gas investments, I am committed to leading by example as the representative for West Vancouver-Sea to Sky and will be removing this investment immediately.”

Valeriote’s historic campaign centred around his fierce opposition to the Woodfibre LNG facility being constructed on Howe Sound, which helped make him the Greens’ first-ever candidate elected on the B.C. mainland. Last month, he told The Squamish Chief it was “tremendously disappointing” the NDP wouldn’t agree to cancel the controversial project as part of its power-sharing agreement with the Greens.

The Greens’ 2024 election platform promised no new LNG projects, no permits for new fracking wells or pipelines, and vowed to set a date to begin phasing out gas production in the province.

Valeriote’s disclosure lists other investments, including shares in companies dedicated to sustainability and reducing carbon emissions. They include wind power company Innergex, electric and hybrid vehicle producer Azure Dynamics, and Foremost Lithium Resource and Technology.

The MLA also invests in the telecommunications giant, Telus Corp.; plane and snowmobile manufacturer, Bombardier; cybersecurity and software provider, BlackBerry; Toronto-Dominion Bank; water management consultants Paradigm Environmental Technologies; drug developer Arbutus Biopharma; and IM Cannabis Corp., among others.

Sharer’s comment (sarcozona): It kills me that this $172 made more news than the $50 million of real estate investments the conservative who ran on fixing housing prices has.

How to spur the invention of more cancer screening tests | STAT

Toni Roberts was 58 when she began to experience gastrointestinal issues. She modified her diet and tried over-the-counter remedies, but her symptoms did not improve. She finally got a CT scan, which led to an urgent visit with her doctor. When he told her that she had ovarian cancer, she thought he had confused her with someone else.

Surgery and chemotherapy followed. Toni died four years after her diagnosis, leaving behind two heartbroken sons.

Every year, nearly 20,000 American women like Toni Roberts are diagnosed with ovarian cancer and about 13,000 die from the disease. Symptoms like bloating often go unaddressed for months. Most women are diagnosed with advanced disease, and 70% of these women will die within five years. Survival rates for Black and Hispanic women are even worse.
