
2030 Renewables in Australia forecast – 2024 Update | Evcricket's Energy

1 Comment

The end to another year means we have another chance to see how my ridiculous 2030 Forecast for Renewables In Australia is tracking.

I forgot to do this for a couple of years, mostly because the battery data doesn’t come out until March or so, by which time I’ve lost the drive to write blog posts. So this year I’m doing it over the New Year and fudging the solar and battery numbers for the last three months of the year.

Incredibly, reality is tracking ahead of my absurd compound interest forecast. The original forecast is shown as a solid line and the actual data from each year as a dot.

Just eyeballing it, the solar rate might be slowing, and I’d say wind has definitely slowed over the last two years. I think this is broadly due to connection issues slowing down projects and causing a few plans to change. I expect the rate to recover in the next few years, driven mostly by the decreasing cost of batteries.

In general, developers know that they can build renewables that make electricity cheaper than grid power, if the site is simply considered as a system block. But then that site will have various costs to connect and supply power to the grid or potential customers. Developers will then compare sites on the basis of the cost and certainty of the associated connection options. They might, for example, have two sites in mind, with good landlord agreements and known construction costs. One might be waiting for one of the big transmission projects to commit before building, and the other can currently only get half the connection size they want. As battery prices fall, those second projects will become viable as solar-and-battery hybrid sites. Increasingly, I think grid-scale projects will make their electricity and store it on site, looking to sell into the market when a confluence of factors (local congestion, market prices, weather events) presents the most favourable opportunity to sell electricity.

As they say in the classics, time will tell.

Read the whole story
sarcozona
7 hours ago
Gooooooo !!!!
Epiphyte City

Turnout inequality in UK elections close to tipping point, report warns | Voter apathy | The Guardian

1 Share

UK elections are “close to a tipping point” where they lose legitimacy because of plummeting voter turnout among renters and non-graduates, an influential thinktank has said.

Analysis by the Institute for Public Policy Research (IPPR) found that the gap in turnout between those with and without university degrees grew to 11 percentage points in the 2024 general election – double that of 2019.

The turnout gap between homeowners and renters grew by nearly a quarter between the 2017 and 2024 elections, to 19 percentage points.

The findings suggest a growing disillusionment with politics among certain social groups, which is leading to increasingly unequal elections.

Parth Patel, an associate director of democracy and politics at IPPR, said: “We are close to the tipping point at which elections begin to lose legitimacy because the majority do not take part. That should be ringing more alarm bells than it is.”

Turnout inequality in the 2024 election was 11 percentage points between the top and bottom third of earners, and between people in working-class and middle-class jobs, a figure largely unchanged since 2015.

The turnout gap between 18- to 24-year-olds and over-60s was 21 percentage points, a measure that has also remained stable, according to the analysis.

The data is likely to provoke concern among Labour strategists. Morgan McSweeney, Keir Starmer’s chief of staff and most influential adviser, built his 2024 election strategy around winning over voters without university degrees.

Party figures have turned their attention to combating the rise of Nigel Farage’s Reform UK, which came second in 89 Labour-held seats, including in many of the party’s former industrial heartlands in the north of England.

Labour committed to several franchise-boosting measures in its manifesto, including making voter registration easier, lowering the voting age to 16 and tightening the rules around political donations. The party has declined to unpick voter ID rules introduced by the Conservatives but has said it will increase the types of ID accepted at polling stations.

It has not pledged to introduce automatic voter registration but the Guardian reported in June that it was drawing up plans to do so. All these policies would form part of a democracy bill, which is in the early stages of development.

The IPPR called for bolder measures including a new civic duty to staff polling stations, which would be akin to jury service. The thinktank said that without action to improve participation in democracy, populist movements would continue to gain more traction even if the economy was doing well.

Only one in two adults living in the UK voted in the July 2024 general election, the thinktank’s analysis showed, which was the lowest share of the population to vote since universal suffrage. Among registered voters, only three in every five cast a ballot.

The IPPR recommended four policies that it said would be effective in raising turnout, narrowing inequality and could feasibly be implemented in this parliament. These were:

  • Lowering the voting age to 16.

  • Implementing automatic voter registration.

  • Introducing a £100,000 annual cap on donations to political parties.

  • Creating an “election day service”.

The latter would involve recruiting poll workers by lot from the general population, similar to the way citizens are selected for jury service.

Other policy suggestions included moving polling day to a weekend or making election days a public holiday, and scrapping the voter ID requirements introduced by the Conservatives.

The IPPR also said the government should consider enfranchising the 5 million long-term tax-paying residents who are not citizens of the UK, Ireland or Commonwealth countries.

Another suggested change is redrawing constituency boundaries based on the entire adult population of an area, not just registered voters.

Finally, the thinktank recommended bolstering the Electoral Commission’s powers to investigate candidate rule-breaking and impose sanctions, including fines of up to £500,000 or 4% of campaign spending.

Last month the Guardian reported that ministers were considering proposals from Vijay Rangarajan, the chief executive of the elections watchdog, to strengthen the rules around political donations to protect the electoral system from foreign interference.

Rangarajan said linking donations to political parties to the UK profits of companies owned by foreigners was one of the urgent changes needed to retain the trust of voters.

The move, which the government is looking at, could cap the amount that Elon Musk could donate through the British arm of his social media company X (formerly Twitter). There has been speculation that Musk, who is an ally of Donald Trump and a fierce critic of Starmer, could make a $100m (£80m) donation to Reform.

Ryan Swift, an IPPR research fellow who co-authored the report, said: “The widening turnout gaps between renters and homeowners, and graduates and non-graduates, highlight a glaring blind spot in tackling political inequality.

“To rebuild trust and strengthen democracy, we need bold reforms like votes at 16, automatic registration and fairer electoral rules.”

Read the whole story
sarcozona
18 hours ago
Epiphyte City

MAHA movement and GOP agree on downplaying health insurance | STAT

1 Comment

WASHINGTON — Robert F. Kennedy Jr.’s Make America Healthy Again movement calls for many things — avoiding chronic disease, making our food healthier, eliminating environmental risks, and rooting out corporate influence. Insurance isn’t one of them.

RFK Jr. is Trump’s pick for the nation’s top health role, running the Department of Health and Human Services. His movement’s goal of keeping people healthy might seem to be at odds with anticipated Republican policies that would result in higher uninsured rates.

Some Republicans, on the other hand, say higher rates of health insurance have not made people healthier, and people wouldn’t need insurance as often if they were to adopt healthier lifestyles.

To read the rest of this story, subscribe to STAT+.
Read the whole story
sarcozona
20 hours ago
What an absolute perversion of public health
Epiphyte City

Authorization Milestone for First NextGen Covid Vaccine in Europe and More News (Update 24) - Absolutely Maybe

1 Comment

Read the whole story
sarcozona
21 hours ago
Happy new year!
Epiphyte City

Russia cuts off gas to Moldovan separatists, risking humanitarian crisis – POLITICO

1 Share
Read the whole story
sarcozona
21 hours ago
Epiphyte City

Calibration “resolves” epistemic uncertainty by giving predictions that are indistinguishable from the true probabilities. Why is this still unsatisfying?

1 Share

This is Jessica. The last day of the year seems like a good time for finishing things up, so I figured it’s time for one last post wrapping up some thoughts on calibration.

As my previous posts got into, calibrated prediction uncertainty is the goal of the various posthoc calibration algorithms discussed in machine learning research, which use held-out data to learn transformations of model-predicted probabilities so that the transformed predictions are calibrated on that data. I’ve reflected a bit on what calibration can and can’t give us in terms of assurances for decision-making. Namely, it makes predictions trustworthy for decisions in the restricted sense that a decision-maker who chooses their action purely based on the prediction can’t do better than treating the calibrated predictions as the true probabilities.
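As a concrete illustration of this kind of posthoc procedure, here is a minimal temperature-scaling sketch, one of the simplest such algorithms. The held-out logits and labels below are invented stand-ins, and the grid-search fitting is just one of several ways to learn the temperature:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels):
    """Learn a single temperature T on held-out data by minimizing NLL."""
    def nll(T):
        p = softmax(logits / T)
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    grid = np.linspace(0.1, 10.0, 200)
    return grid[np.argmin([nll(T) for T in grid])]

# Toy held-out set: deliberately overconfident logits for a 3-class problem
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3)) * 5.0  # inflated scale -> overconfidence
logits[np.arange(500), labels] += 2.0     # some signal toward the true class

T = fit_temperature(logits, labels)
calibrated = softmax(logits / T)
print("learned temperature:", round(T, 2))  # T > 1 shrinks overconfident probabilities
```

Because the toy logits are much more spread out than the model’s actual accuracy warrants, the fitted temperature comes out above 1, flattening the predicted probabilities toward better calibration on the held-out set.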

But something I’ve had trouble articulating as clearly as I’d like involves what’s missing (and why) when it comes to what calibration gives us versus a more complete representation of the limits of our knowledge in making some predictions. 

The distinction involves how we express higher order uncertainty. Let’s say we are doing multiclass classification, and fit a model fhat to some labeled data. Our “level 0” prediction fhat(x) contains no uncertainty representation at all; we check it against the ground truth y. Our “level 1” prediction phat(.|x) predicts the conditional distribution over classes; we check it against the empirical distribution that gives a probability p(y|x) for each possible y. Our “level 2” prediction tries to predict the distribution of the conditional distribution over classes, p(p(.|x)), e.g. a Dirichlet distribution that assigns probability to each distribution p(.|x), which we can distinguish using some parameters theta.
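To make the three levels concrete, here is a toy sketch using a Dirichlet as the level 2 object over 3-class distributions; the concentration parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Level 2: a Dirichlet over possible conditional distributions p(.|x),
# parameterized by hypothetical concentration parameters theta
alpha = np.array([8.0, 3.0, 1.0])

# Level 1: collapsing the Dirichlet to its mean gives one distribution over classes
level1 = alpha / alpha.sum()

# Level 0: collapsing further to a point prediction, e.g. the most probable class
level0 = int(np.argmax(level1))

# Sampling from the level 2 distribution shows the spread of plausible p(.|x)
samples = rng.dirichlet(alpha, size=5)
print("level 1 prediction:", level1.round(3))
print("level 0 prediction:", level0)
print("five draws of p(.|x):\n", samples.round(3))
```

Each row of `samples` is itself a valid class distribution; the level 2 object expresses how unsure we are about which of these is the right level 1 prediction.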

From a Bayesian modeling perspective, it’s natural to think about distributions of distributions. A prior distribution over model parameters implies a distribution over possible data-generating distributions. Upon fitting a model, the posterior predictive distribution summarizes both “aleatoric” uncertainty due to inherent randomness in the generating process and “epistemic” uncertainty stemming from our lack of knowledge of the true parameter values. 
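One common way to make this decomposition explicit is via entropy: with posterior draws (or an ensemble), total predictive entropy splits into the expected entropy of individual draws (aleatoric) plus a mutual-information term capturing disagreement between draws (epistemic). A sketch with made-up posterior samples:

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats, with a small constant to avoid log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

# Hypothetical posterior draws of p(.|x): each row is one plausible predictor's output
members = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

mean_pred = members.mean(axis=0)      # posterior predictive distribution
total = entropy(mean_pred)            # total predictive uncertainty
aleatoric = entropy(members).mean()   # expected entropy of individual draws
epistemic = total - aleatoric         # mutual information: disagreement across draws

print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

By the concavity of entropy, the epistemic term is always nonnegative, and it shrinks to zero when all posterior draws agree.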

In some sense calibration “resolves” epistemic uncertainty by providing point predictions that are indistinguishable from the true probabilities. But if you’re hoping to get a faithful summary of the current state of knowledge, it can seem like something is still missing. In the Bayesian framework, we can collapse our posterior prediction of the outcome y for any particular input x to a point estimate, but we don’t have to. 

Part of the difficulty is that whenever we evaluate performance as loss over some data-generating distribution, having more than a point estimate is not necessary. This is true even without considering second order uncertainty. If we train a level 0 prediction of the outcome y using the standard loss minimization framework with 0/1 loss, then it will learn to predict the mode. And so to the extent that it’s hard to argue one’s way out of loss minimization as a standard for evaluating decisions, it’s hard to motivate faithful expression of epistemic uncertainty.
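The point about 0/1 loss can be checked directly: among all point predictions, expected 0/1 loss is minimized by the mode of p(y|x). A small check with an invented conditional distribution:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])  # hypothetical true conditional distribution p(y|x)

# Predicting class k is wrong whenever y != k, so expected 0/1 loss is 1 - p[k]
expected_loss = 1.0 - p
best = int(np.argmin(expected_loss))

print("expected 0/1 loss per prediction:", expected_loss)
print("optimal point prediction:", best)  # the mode, class 0
```

So a level 0 predictor trained under 0/1 loss is pushed toward the mode, and nothing in the objective rewards it for reporting the rest of the distribution.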

For second order uncertainty, the added complication is there is no ground truth. We might believe there is some intrinsic value in being able to model uncertainty about the best predictor, but how do we formalize this given that there’s no ground truth against which to check our second order predictions? We can’t learn by drawing samples from the distribution that assigns probability to different first order distributions p(.|x) because technically there is no such distribution beyond our conception of it. 

Daniel Lakeland previously provided an example I found helpful of putting Bayesian probability on a predicted frequency, where there’s no sense in which we can check the calibration of the second order prediction.

Related to this, I recently came across a few papers by Viktor Bengs et al that formalize some of this in an ML context. Essentially, they show that there is no well-defined loss function that can be used in the typical ML learning pipeline to incentivize the learner to make correct predictions that are also faithful as expressions of the epistemic uncertainty. This can be expressed in terms of trying to find a proper scoring rule. In the case of first order predictions, as long as we use a proper scoring rule as the loss function, we can expect accurate predictions, because a proper scoring rule is one for which one cannot score higher by deviating from reporting our true beliefs. But there is no loss function that incentivizes a second-order learner to faithfully represent its epistemic uncertainty like a proper scoring rule does for a first order learner. 
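For first order predictions, propriety can be verified numerically: with the log score, the expected loss under the true distribution p is minimized by reporting p itself (Gibbs’ inequality). A sketch with an invented true distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

p_true = np.array([0.6, 0.3, 0.1])  # hypothetical true distribution over 3 outcomes

def expected_log_loss(q, p):
    # Expected negative log score of reporting q when outcomes follow p
    return -np.sum(p * np.log(q + 1e-12))

honest = expected_log_loss(p_true, p_true)

# Random alternative reports never do better in expectation
for _ in range(1000):
    q = rng.dirichlet(np.ones(3))
    assert expected_log_loss(q, p_true) >= honest - 1e-9

print("honest expected log loss:", round(honest, 4))
```

The result of Bengs et al is that no analogous check exists one level up: there is no loss on second order predictions whose expectation is minimized by faithfully reporting one’s epistemic uncertainty, because there is no distribution of first order distributions to take the expectation over.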

This may seem obvious, especially if you’re coming from a Bayesian tradition, considering that there is no ground truth against which to score second order predictions. And yet, various loss functions have been proposed for estimating level 2 predictors in the ML literature, such as minimizing the empirical loss of the level 1 prediction averaged over possible parameter values. These results make clear that one needs to be careful interpreting the predictors they give, because, e.g., they can actually incentivize predictors that appear to be certain about the first order distribution. 

I guess a question that remains is how to talk about incentives for second order uncertainty at all in a context where minimizing loss from predictions is the primary goal. I don’t think the right conclusion is that it doesn’t matter since we can’t integrate it into a loss minimization framework. Having the ability to decompose predictions by different sources of uncertainty and be explicit about what our higher order uncertainty looks like going in (i.e., by defining a prior) has scientific value in less direct ways, like communicating beliefs and debugging when things go wrong. 

Read the whole story
sarcozona
21 hours ago
Epiphyte City