Abolish the Electoral Collage.
That ain’t gonna happen.
That said, we can make it irrelevant with the National Popular Vote Interstate Compact. It’s 77% of the way there.
https://en.wikipedia.org/wiki/National_Popular_Vote_Interstate_Compact
I love the concept of it, but the thing about the NPVIC is that it’s 0% of the way there until it’s 100% of the way there. So while 77% seems like we’re close, and there is legislation pending that could get us to 95%, the only reason it seems to be going forward steadily is that it does nothing unless you go all the way.
The moment there is the prospect of legislation in a state that would get that last 5%, not only will that legislation be fought tooth and nail, but every state that has already entered the compact will have to fight like hell to keep it in place, not once but constantly forever. Because if you’re just over the threshold then almost any state backing out of the compact will nullify the whole thing again.
It seems too fragile to be a workable solution. But I guess I don’t see anything wrong with trying!
Many states will be incentivized to keep the compact in place, because it means the election stops focusing on a handful of swing states.
Every presidential campaign will have to adopt a 50 state strategy, meaning a lot of states will receive political attention they never get because they aren't swing states.
The legislation has to be all-or-nothing precisely because of the effect on political attention. If a state awarded its electors by national popular vote before the magic 270 was reached, politicians could win that state by maximizing their votes elsewhere, so a candidate expecting to win the popular vote would be incentivised to put more focus on the states that aren't signed up, reducing the political attention paid to signatories.
When the 270 mark is passed, it has the effect of making every vote equal everywhere.
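To make the threshold mechanics concrete, here's a minimal sketch (Python; the 209-electoral-vote member total is the commonly cited figure as of 2024 and is an assumption here, not something from this thread):

```python
# A minimal sketch of the all-or-nothing trigger described above.
# The compact stays dormant until its members control at least the
# 270 electoral votes needed to win; only then are members bound to
# award their electors to the national popular vote winner.

THRESHOLD = 270  # electoral votes needed to elect a president

def compact_is_active(member_ev_total: int) -> bool:
    """The compact binds all of its members or none of them."""
    return member_ev_total >= THRESHOLD

member_ev_total = 209  # commonly cited member total; illustrative only
print(member_ev_total / THRESHOLD)         # ~0.77 -> the "77%" above
print(compact_is_active(member_ev_total))  # False: 0% effective until 100%
```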
Right, and this is bad for the Republican Party, so they will do everything in their power to stop it.
Very interesting, I had never heard of this.
CGP Grey has a great video on it.
Sssh, Top Sneaky.
Or Electoral College even.
I would like to see what an Electoral Collage looks like.
You don't have one in your Democracy Scrapbook?
Mom went authoritarian on me when I cut up her Electoral magazines.
It's what Trump tried to make with his fake idea.
I thought that's what the Republicans were trying to make with all those weird gerrymandered districts.
Singular prints only
Just a reminder to not be complacent.
Here's hoping Trump pulls a Biden tomorrow.
Or a James Earl Jones. I'm not picky.
Oh god, this is how I find out!!!!!
You really maxed out the comedic value of the shenanigan though, so there’s that!
Too soon.
For once, it might actually be too soon.
Too Soon! (I just read about JEJ)
Sad about JEJ. But maybe a rule of 3s that takes out Trump wouldn't be the worst outcome.
Dang, I wonder how Henry Kissinger escaped the claw for so long
Kissinger was a lich. We got lucky and destroyed the phylactery.
The Behind The Bastards episode on him is shocking and hilarious
I wish BTB and The Dollop would do more crossover episodes.
I’m almost certain JEJ would be happy to take the bastard out with him
The problem was that Biden was actually trying to say something complicated and he got tripped up. Trump has always spoken at a kindergarten level because he knows he has nothing to say.
Sure. He did get tripped up though, and he ended up looking like an idiot. That said, it was for the best. Harris is an infinitely better candidate.
The question was if Trump was going to do the same.
Who is this guy and how serious should we take this information? This is by far the highest number I've seen for Trump so far.
He's quite a well-known pollster. Up until recently he was responsible for FiveThirtyEight, but it got sold and he left.
He got the 2016 election wrong (71 Hillary, 28 Trump)
He got the 2020 election right (89 Biden, 10 Trump)
Right and wrong are the incorrect terms here, but you get what I mean.
He didn’t get it wrong. He said the Clinton-Trump election was a tight horse race, and that Trump had one side of a four-sided die.
The state by state data wasn’t far off.
Problem is, people don’t understand statistics.
If someone said Trump had over a 50% probability of winning in 2016, would that be wrong?
In statistical modeling you don’t really have right or wrong. You have a level of confidence in a model, a level of confidence in your data, and a statistical probability that an event will occur.
So if my model says RFK has a 98% probability of winning, then it is no more right or wrong than Silver's model?
If so, then probability would be useless. But it isn't useless. Probability is useful because it can make predictions that can be tested against reality.
In 2016, Silver's model predicted that Clinton would win. Which was wrong. He knew his model was wrong, because he adjusted his model after 2016. Why change something that is working properly?
You're conflating things.
Your model itself can be wrong, absolutely.
But for the person above to say Silver got something wrong because a lower probability event happened is a little silly. It'd be like flipping a coin heads side up twice in a row and saying you've disproved statistics because heads twice in a row should only happen 1/4 times.
Silver made a prediction. That's the deliverable. The prediction was wrong.
Nobody is saying that statistical theory was disproved. But it's impossible to tell whether Silver applied theory correctly, and it doesn't even matter. When a Boeing airplane loses a door, that doesn't disprove physics but it does mean that Boeing got something wrong.
but it does mean that Boeing got something wrong.
Comparing it to Boeing shows you still misunderstand probability. Suppose his model predicts 4 separate elections where each underdog candidate has a 1 in 4 chance of winning. If only 1 of those underdog candidates wins, then the model is likely working. But when that candidate wins everyone will say "but he said it was only a 1 in 4 chance!". It's as dumb as people being surprised by rain when it says 25% chance of rain. As long as you only get rain 1/4 of the time with that prediction, the model is working. Presidential elections are tricky because there are so few of them, so forecasters test their models against past data to verify they are working. But it's just probability: it's not saying this WILL happen, it's saying these are the odds at this snapshot in time.
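To put a number on the "1 in 4 chance" point, here's a quick back-of-envelope sketch (the four elections are hypothetical and assumed independent):

```python
# If four independent elections each give the underdog a 1 in 4 chance,
# an upset somewhere is actually more likely than not. (Hypothetical
# numbers, purely to illustrate the comment above.)
p_underdog = 0.25
n_elections = 4

p_no_upsets = (1 - p_underdog) ** n_elections  # every favorite wins
print(f"P(no upsets at all)   = {p_no_upsets:.3f}")      # ~0.316
print(f"P(at least one upset) = {1 - p_no_upsets:.3f}")  # ~0.684
```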
Presidential elections are tricky because there is only one prediction.
Suppose your model says Trump has a 28% chance of winning in 2024, and mine says Trump has a 72% chance of winning in 2024.
There will only be one 2024 election. And suppose Trump loses it.
If that outcome doesn't tell us anything about the relative strength of our models, then what's the point of using a model at all? You might as well write a single line of code that spits out "50% Trump", it is equally useful.
The point of a model is to make a testable prediction. When the TV predicts a 25% chance of rain, that means that it will rain on one fourth of the days that they make such a prediction. It doesn't have to rain every time.
But Silver only makes a 2016 prediction once, and then he makes a new model for the next election. So he has exactly one chance to get it right.
His model has always been closer state to state, election to election than anyone else's, which is why people use his models. He is basically using the same model and tweaking it each time; you make it sound like he's starting over from scratch. When Trump won, none of the prediction models were predicting he would win, but his at least showed a fairly reasonable chance he could. His competitors were forecasting a much more likely Hillary win while he was showing that Trump would win basically 3 out of 10 times. In terms of probability that's not a blowout prediction. His model was working better than competitors'. Additionally, he basically predicted the battleground states within half a percentage point, IIRC, and that happened to be the difference between a win/loss in some states.
So he has exactly one chance to get it right.
You're saying it hitting one of those 3 of 10 is "getting it wrong", that's the problem with your understanding of probability. By saying that you're showing that you don't actually internalize the purpose of a predictive model forecast. It's not a magic wand, it's just a predictive tool. That tool is useful if you understand what it's really saying, instead of extrapolating something it absolutely is not saying. If something says something will happen 3 of 10 times, it happening is not evidence of an issue with the model. A flawless model with ideal inputs can still show a 3 of 10 chance and should hit in 30% of scenarios. Certainly because we have a limited number of elections it's hard to prove the model, but considering he has come closer than competitors, it certainly seems he knows what he is doing.
First, we need to distinguish Silver's state-by-state prediction from his "win probability". The former was pretty unremarkable in 2016, and I think we can agree that like everyone else he incorrectly predicted WI, MI, and PA.
However, his win probability is a different algorithm. It considers alternate scenarios, eg Trump wins Pennsylvania but loses Michigan. It somehow finds the probability of each scenario, and somehow calculates a total probability of winning. This does not correspond to one specific set of states that Silver thinks Trump will win. In 2016, it came up with a 28% probability of Trump winning.
You say that's not "getting it wrong". In that case, what would count as "getting it wrong"? Are we just supposed to have blind faith that Silver's probability calculation, and all its underlying assumptions, are correct? Because when the candidate with a higher win probability wins, that validates Silver's model. And when that candidate loses, that "is not evidence of an issue with the model". Heads I win, tails don't count.
If I built a model with different assumptions and came up with a 72% probability of Trump winning in 2016, that differs from Silver's result. Does that mean that I "got it wrong"? If neither of us got it wrong, what does it mean that Trump's probability of winning is simultaneously 28% and 72%?
And if there is no way for us to tell, even in retrospect, whether 28% is wrong or 72% is wrong or both are wrong, if both are equally compatible with the reality of Trump winning, then why pay any attention to those numbers at all?
I see what you're not getting! You are confusing giving the odds with making a prediction and those are very different.
Let's go back to the coin flips, maybe it'll make things more clear.
I or Silver might point out there's a 75% chance anything besides two heads in a row happening (which is accurate.) If, as will happen 1/4 times, two heads in a row does happen, does that somehow mean the odds I gave were wrong?
Same with Silver and the 2016 election.
I or Silver might point out there's a 75% chance anything besides two heads in a row happening (which is accurate.)
Is it?
Suppose I gave you two coins, which may or may not be weighted. You think they aren't, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.
We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?
In the real world, your odds will depend on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.
But suppose we only have one chance to toss them, after which they shatter. In that case, the model we use for the coins, weighted vs unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but the internal workings of a one-shot model - including odds - are unfalsifiable. Same with Silver and the 2016 election.
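For what it's worth, the coin version of this is quantifiable: a likelihood ratio measures how strongly an observation favors one model over the other. A minimal sketch, using the 2:1-weighted vs fair models from the comment above (the 100-toss figures are invented):

```python
# How strongly does the evidence favor "weighted 2:1 toward heads"
# (P(H) = 2/3) over "fair" (P(H) = 1/2)? One toss barely discriminates;
# repetition would settle it, which is exactly why the one-shot case
# feels unfalsifiable.

def likelihood(p_heads: float, heads: int, tails: int) -> float:
    return p_heads**heads * (1 - p_heads)**tails

# The one-shot case from the comment: two tosses, two heads.
lr_once = likelihood(2/3, 2, 0) / likelihood(1/2, 2, 0)
print(f"after HH: ratio {lr_once:.2f}")  # ~1.78 - weak evidence either way

# If the coins didn't shatter: 67 heads in 100 tosses (invented data)
# is near-conclusive.
lr_many = likelihood(2/3, 67, 33) / likelihood(1/2, 67, 33)
print(f"after 67/100 heads: ratio {lr_many:.0f}")  # ~360 - strong evidence
```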
The thing is, Nate Silver did not make a prediction about the 2016 race.
He said that Hillary had a higher chance of winning. He didn't say Hillary was going to win.
How can you falsify the claim "Clinton has a higher chance of winning"?
Alternately:
Silver said "Clinton has a higher chance of winning in 2016" whereas Michael Moore said "Trump has a higher chance of winning in 2016".
In hindsight, is one of these claims more valid than the other? Because if two contradictory claims are equally valid, then they are both meaningless.
Silver made a prediction. That's the deliverable. The prediction was wrong.
Would you mind restating the prediction?
He predicted Clinton would win. That's the only reasonable prediction if her win probability was over 50%.
If I say a roll of a 6-sided die has a >50% chance of landing on a number above 2, and after a single roll it lands on 2, was I wrong?
If anything, the problem is in the unfalsifiability of the claim.
Admittedly, 538 was pretty good about showing their work afterwards. While individual events suffer from the unfalsifiability issue, 538, when Silver was around, did pretty good "how did we do for individual races/states" retrospectives and compared their given odds to the actual results.
If you predict that a particular die will land on a 3-6 and it lands on a 2, then you were wrong. Predictions are occasionally wrong, that's unavoidable in the real world. Maybe the die wasn't fair and you should adjust your priors.
On the other hand, if you refuse to make a prediction but simply say a particular die has a >50% chance of landing above 2, then your claim is non-falsifiable. I could roll a hundred 1's in a row, and you could say that your probability is correct and I was just unlucky. That's why non-falsifiable claims are ultimately worthless.
Finally, if you claim that a theoretically fair die has a 2/3 probability of landing on 3-6 then you are correct, but that does not necessarily have anything to do with the real world of dice.
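Worth noting: the "hundred 1's" scenario wouldn't actually leave the claim unfalsifiable in practice, because we can compute how surprising that outcome is under the stated model. A tiny sketch:

```python
# Under the claimed model (a fair die, so P(rolling a 1) = 1/6), a
# hundred 1's in a row has probability (1/6)^100. No one calls that
# "just unlucky"; any sensible test rejects the model. Repeated trials
# are what make probabilistic claims testable - the problem with a
# presidential election is that we never get the repetitions.
p = (1 / 6) ** 100
print(f"P(100 ones | fair die) = {p:.2e}")  # ~1.5e-78
```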
It's forecasting, not a prediction. If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained would you say the forecast was wrong? You could say that if you want, but the point isn't to give a definitive prediction of the outcome (because that's not possible) it's to give you an idea of what to expect.
If there's a 28% chance of rain, it doesn't mean it's not going to rain, it actually means you might want to consider taking an umbrella with you because there's a significant probability it will rain. If a batter with a .280 batting average comes to the plate with 2 outs at the bottom of the ninth, that doesn't mean the game is over. If a politician has a 28% probability of winning an election, it's not a statement that the politician will definitely lose the election.
If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained would you say the forecast was wrong?
Is it possible for the forecast to be wrong?
I think so. If you look at all the times the forecast predicts a 28% chance of rain, then it should rain on 28% of those days. If it rained, say, on half the days that the forecast gave a 28% chance of rain then the forecast would be wrong.
With Silver, the same principle applies. Clinton should win at least 50% of the 2016 elections where she has at least a 50% chance of winning. She didn't.
If Silver kept the same model over multiple elections, then we could look at his probabilities in finer detail. But he doesn't.
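The rain-forecast test described above is usually called a calibration check. A minimal sketch with invented forecasts and outcomes:

```python
# Bucket forecasts by their stated probability, then compare the stated
# rate to the observed rate. All numbers here are made up.
from collections import defaultdict

forecasts = [0.28, 0.28, 0.28, 0.28, 0.70, 0.70, 0.70, 0.70, 0.70, 0.70]
outcomes  = [0,    1,    0,    0,    1,    1,    0,    1,    1,    0]  # 1 = rain

buckets = defaultdict(list)
for stated, happened in zip(forecasts, outcomes):
    buckets[stated].append(happened)

for stated, hits in sorted(buckets.items()):
    observed = sum(hits) / len(hits)
    print(f"stated {stated:.0%}: rained {observed:.0%} of {len(hits)} days")
# A calibrated forecaster's observed rates track the stated ones, but
# only across many forecasts - a single rainy day proves nothing.
```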
How about this:
Two people give the odds for the result of a flip of a non-weighted coin.
Person A: Heads = 50%, Tails = 50%
Person B: Heads = 75%, Tails = 25%
The result of the coin flip ends up being Heads. Which person had the more accurate model? Did Person A get something wrong?
Person B's predicted outcome was closer to the truth.
Perhaps person A's prediction would improve if multiple trials were allowed. Perhaps their underlying assumptions are wrong (i.e., the coins are actually weighted).
Perhaps person A's prediction would improve
But in this hypothetical scenario of explicitly unweighted coins, Person A was entirely correct in the odds they gave. There's nothing to improve.
We are talking about testing a model in the real world. When you evaluate a model, you also evaluate the assumptions made by the model.
Let's consider a similar example. You are at a carnival. You hand a coin to a carny. He offers to pay you $100 if he flips heads. If he flips tails then you owe him $1.
You: The coin I gave him was unweighted so the odds are 50-50. This bet will pay off.
Your spouse: He's a carny. You're going to lose every time.
The coin is flipped, and it's tails. Who had the better prediction?
You maintain you had the better prediction because you know you gave him an unweighted coin. So you hand him a dollar to repeat the trial. You end up losing $50 without winning once.
You finally reconsider your assumptions. Perhaps the carny switched the coin. Perhaps the carny knows how to control the coin in the air. If it turns out that your assumptions were violated, then your spouse's original prediction was better than yours: you're going to lose every time.
Likewise, in order to evaluate Silver's model we need to consider the possibility that his model's many assumptions may contain flaws. Especially if his prediction, like yours in this example, differs sharply from real-world outcomes. If the assumptions are flawed, then the prediction could well be flawed too.
Probability is useful because it can make predictions that can be tested against reality.
Yes. But you'd have to run the test repeatedly and see if the outcome, i.e. Clinton winning, happens as often as the model predicts.
But we only get to run an election once. And there is no guarantee that the most likely outcome will happen on the first try.
If you can only run an election once, then how do you determine which of these two results is better (given that Trump won in 2016):
Clinton has a 72% probability of winning in 2016
Trump has a 72% probability of winning in 2016
You do it by comparing the state voting results to pre-election polling. If the pre-election polling said D+2 and your final result was R+1, then you have to look at your polls and individual polling firms and determine whether some bias is showing up in the results.
Is there selection bias or response bias? You might find that a set of polls is randomly wrong, or you might find that they're consistently wrong, adding 2 or 3 points in the direction of one party but generally tracking with results across time or geography. In that case, you determine a "house effect," in that either the people that firm is calling or the people who will talk to them lean 2 to 3 points more Democratic than the electorate.
All of this is explained on the website and it's kind of a pain to type out on a cellphone while on the toilet.
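Here's a minimal sketch of the house-effect arithmetic described above (firms and margins are invented):

```python
# For each firm, average the gap between its polled margin and the
# actual result. Margins are D-minus-R in points; numbers are made up.
from statistics import mean

actual_margin = -1.0  # the election came in R+1

polls_by_firm = {
    "Firm A": [2.0, 3.0, 1.5],    # consistently D-leaning sample
    "Firm B": [-0.5, -1.5, 0.0],  # roughly tracks the result
}

for firm, margins in polls_by_firm.items():
    house_effect = mean(m - actual_margin for m in margins)
    print(f"{firm}: house effect {house_effect:+.1f} pts")
# Firm A comes out around +3.2: its respondents lean ~3 points more
# Democratic than the electorate, so an aggregator can discount that
# before averaging.
```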
You are describing how to evaluate polling methods. And I agree: you do this by comparing an actual election outcome (eg statewide vote totals) to the results of your polling method.
But I am not talking about polling methods, I am talking about Silver's win probability. This is some proprietary method that takes other people's polls as input (Silver is not a pollster) and outputs a number, like 28%. There are many possible ways to combine the poll results, giving different win probabilities. How do we evaluate Silver's method, separately from the polls?
I think the answer is basically the same: we compare it to an actual election outcome. Silver said Trump had a 28% win probability in 2016, which means he should win 28% of the time. The actual election outcome is that Trump won 100% of his 2016 elections. So as best as we can tell, Silver's win probability was quite inaccurate.
Now, if we could rerun the 2016 election maybe his estimate would look better over multiple trials. But we can't do that, all we can ever do is compare 28% to 100%.
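For what it's worth, forecasters do get scored on single events using proper scoring rules; the Brier score is the standard one. A minimal worked example on the numbers in dispute (the 72% figure is the thread's hypothetical, not a real forecast):

```python
# Brier score: (forecast - outcome)^2, lower is better. Scoring the
# 28% forecast and the hypothetical 72% forecast discussed above
# against the actual 2016 outcome (Trump won, so outcome = 1):
def brier(forecast_p: float, outcome: int) -> float:
    return (forecast_p - outcome) ** 2

print(brier(0.28, 1))  # 0.5184 - worse than a know-nothing 50/50...
print(brier(0.50, 1))  # 0.2500
print(brier(0.72, 1))  # 0.0784 - ...while 72% scores best.
# A single event is a noisy test, but a proper scoring rule at least
# makes 28% vs 72% comparable in retrospect instead of unfalsifiable.
```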
Just for other people reading this thread, the following comments are an excellent case study in how an individual (the above poster) can be so confidently mistaken, even when other posters try to patiently correct them.
May we all be more respectful of our own ignorance.
But what if there's a 28% chance said poster is right?
He works for Peter Thiel now, so I take everything he says with a huge grain of salt.
Because of Polymarket? Not everything is a conspiracy.
Because Peter Thiel is a right-wing billionaire piece of shit whose little bitch boy is J. D. Vance.
Okay. That's not in dispute. But partial ownership of a company doesn't make its employees your slaves. Especially when the company has nothing to do with ideological stuff.
Polling guru Nate Silver and his election prediction model gave Donald Trump a 63.8% chance of winning the electoral college in an update to his latest election forecast on Sunday, after a NYT-Siena College poll found Donald Trump leading Vice President Kamala Harris by 1 percentage point.
He's just a guy analizing the polls. The source is Fox News. He mentions in the article that tomorrow's debate could make that poll not matter.
Should you trust Nate or polls? They're fun but... Who is answering these polls? Who wants to answer them before even October?
So yeah take it seriously that a poll found that a lot of support for Trump exists. But it's just a moment of time for whoever they polled. Tomorrow's response will be a much better indication of any momentum.
analizing
I have shamed my family
Whi, is it not completeli spelled correctli?
It just seems strange because I don't think that many people are on the fence. Perhaps I'm crazy, but I feel most people know exactly who they're voting for already. Makes me wonder how valid this cross-section was that was used as the sample set. If it accurately represents the US, including undecided voters, then... 😮
but I feel most people know exactly who they’re voting for already
The cross-section of people you know is more politically off the fence than the nation as a whole. Those who aren't online at all are also more undecided and less likely to interact with you.
I listen to those news things that interview people on the street and I'm amazed at how many are uninformed and can go either way.
There's a Trump undercount in polling: Trump voters don't trust "MSM" and therefore don't answer calls from pollsters, or are embarrassed to admit they will vote for him.
Same goes for asking random people on the street.
There's also an undercount of young people who don't answer the phone.
I don’t know many people (boomers and younger) who answer the phone for numbers they do not recognize. I would like to imagine that the people who do answer strange numbers tend to be out of touch. Does that bias the polls toward fools, or toward the lucky few who aren't being spammed?
And an undercount of women who are telling their husbands and anyone else who asks that they'll be voting Trump, but will actually vote for Harris when the time comes. And an undercount of bro-skis who claim to support Harris, but secretly hate the fact that they can't get a 'female' who will cater to their every whim, and will vote Trump because he'll increase oppression of women. And an undercount of cat ladies.... etc. These over- and undercounts definitely skew results, which is why most "high quality" models at least attempt to mitigate them, and why poll aggregators are important: they help to eliminate the biases of individual polling types. There's really only ONE poll that matters. VOTE! BRING YOUR FRIENDS!
Pollsters are compensating for that undercount of unlikely voters. In 2016 they were low; in 2020 still low but pretty close. They will have scaled it up to be more accurate this time around.
Except there are a few snags there. In between the 2020 election and now, there was an insurrection, Roe v. Wade was overturned, and Trump was convicted of crimes and indicted for many more. These are things that a statistical process can't really account for when putting weight on how likely a respondent is to actually vote.
Trump lost in 2020. Do all of these events mean more people will turn out for him this time than in the last election? Or will fewer people turn out for him?
Every time something unprecedented happens, it hurts the ability of a scientific statistical process to predict the outcome. Science can't predict things there's no model for, and you can't have a model for something you haven't seen before. And a hell of a lot of unprecedented shit has happened. Maybe next time a convicted felon who tried to overthrow democracy runs in an election there can be accurate polling, but it's not going to be the case in this election.
There really is no way to know what will happen on election day. So there's nothing to do other than maximum effort until election day.
The issue isn't really people on the fence for Trump or Harris but mainly with generating turnout. After Biden's poor debate performance, people didn't change their mind and decide to vote for Trump, they became apathetic and maybe wouldn't show up to vote.
Harris doesn't need to persuade people to abandon Trump, she needs to get people excited to show up to vote.
He's not polling, he is aggregating all of the polls into a prediction model. Either way it is just a snapshot in time.
The key to doing statistics well is to make sure you aren't changing the results with any bias. This means enough samples, a good selection of samples, and weighing the outcome correctly. Even honest pre-election polling is hard to get right, and because of that it's easy to make things lean toward certain results if you want them, or are being paid to get them.
There's only one poll that matters, and that poll should include as large of a sample as possible, and be counted correctly. Even though some will try to prevent that from happening.
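A minimal sketch of the sample-weighting idea mentioned above ("a good selection of samples, and weighing the outcome correctly"), with invented groups and numbers:

```python
# Reweight respondents so the sample matches the electorate's
# composition - a simplified version of what pollsters call weighting
# or post-stratification. All figures are made up.
sample = {
    # group: (share_of_sample, share_of_electorate, support_for_candidate)
    "18-34": (0.10, 0.25, 0.60),
    "35-64": (0.50, 0.50, 0.50),
    "65+":   (0.40, 0.25, 0.40),
}

raw      = sum(s * p for s, _, p in sample.values())
weighted = sum(e * p for _, e, p in sample.values())
print(f"raw support:      {raw:.1%}")       # 47.0% - seniors overweighted
print(f"weighted support: {weighted:.1%}")  # 50.0% - matches electorate mix
```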
It's a chance of winning, not a poll, so 64% is high but not insane. Silver is serious and it's a decent model. Knowing the model there's a pretty good chance this is a high point for Trump but it's not like he's pulling this out of nowhere, he has had similar models every election cycle since like 2008.
If it's overstating Trump, it's because his model is interpreting the data incorrectly because of the weirdness of this election cycle. I personally think that is likely the case here.
That used to be true, but in recent years he has gotten a lot more conservative, so I personally take his predictions with a huge grain of salt.
Yes, I kinda agree. Let's see his model's brier score in November :)
This isn't a poll. That's why the number is so high. His model is also automatically depressing Harris' numbers because of the convention right now. (It did the same thing to Trump after his convention)
Nate has been upfront in his newsletters about the factor dropping off the model after today, but then it's also the debate. Things are likely to be far more clear going into the weekend because we'll have post debate polling being published and no more convention adjustments.
You shouldn't take it seriously. The 24-hour news cycle depends on data like this. It just doesn't tell us anything.
Their models have been really accurate for the last several election cycles. They’re part of fivethirtyeight.com
No, Nate is not part of 538 anymore. He now works for a crypto betting website partly owned by Peter Thiel.
I'll let you decide how neutral that makes him.
Peter Thiel, the same guy who sold Republicans on JD the couch fucker Vance
While that is also my pet name for JD, keep in mind it is aspirational, not historic.
He's a degen gambler who admits in his book he was gambling up to $10k a day while running 538... It never made him go "huh maybe I fucked my employees because I'm a degen gambler."
Boo, what does this mean for 538?
Nate is not with 538 anymore. Disney didn’t renew his contract. However, he got to keep the model that he developed and publishes it for his newsletter subscribers. 538 had to rebuild their model from scratch this year with G. Elliott Morris.
Now Nate hosts the podcast Risky Business with Maria Konnikova, the psychologist who became a professional poker player while researching a book. It’s pretty good.
Who is this guy and how serious should we take this information?
Well, he did predict Clinton would win in 2016 so there's that.
He's renowned for being wrong about several previous elections
All prediction models only give you odds, not flawless accuracy. He has been closer in every election than most everyone else in the prediction market.
Who is Nate Silver? Really?
Hey man, there is a mountain of people who don't know things and are scared to ask. Learning is always a good thing.
Social media isn't a search engine. If an article is referring to someone by name in the title, they almost certainly have a Wikipedia page the questioner could read rather than requesting random strangers on a message board provide answers for them (in the form of multiple answers of varying bias and accuracy).
Wanting to learn isn't the problem, it's not spending the tiniest bit of personal effort before requesting service from other people.
Or he could have a conversation on a conversation forum.
Perish the thought!
Yeah. I think we take our easy navigation for granted sometimes. Like... I can get most information pretty quickly and not have a lot of trouble discerning what I need to do to get that information.
But not everyone is as "natural" at surfing. Maybe they have trouble putting things in perspective, they don't know how to use a tool like Wikipedia, or even - maybe they just don't like researching.
I'm so glad we have people that are great at keeping up with everything. But we have to remember that presenting and teaching information accurately and helpfully is a skill that we need desperately.
Whether it’s 55/45 or 65/35, we’re still basically talking about the same thing. This race is neck and neck, and whoever gets the turnout edge will win. We’re talking about fractions of percents that are at play, which is why these odds are a coin toss.
Edit: it looks like 538’s model is new, and Silver doesn’t like it or the guy behind it.
Different model, same website. Silver got to keep his model and took it elsewhere after departing from 538.
TIL. I thought they forked it. I didn’t realize 538’s was all new.
Wait, he's not 538 now?
Nope. I don't know exactly what happened there, but after ABC bought it, Nate was gradually phased out. He found alternative funding.
to be fair, nate silver is an idiot funded by peter thiel
He's not an idiot. He is funded by Thiel. He has been politically captured by authoritarian capitalism, so I'd be wary of any models he produced that aren't independently audited for bias.
I think polls are useful, and the Monte Carlo simulation approach for turning them into an electoral vote probability is good, but there's "too much" magic sauce left over for me to trust the outputs from Silver or 538.
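For the curious, the Monte Carlo approach mentioned above looks roughly like this in toy form (states, margins, and error sizes are invented; real models also correlate errors between similar states, and the "magic sauce" lives in choices like these):

```python
# Draw a shared national polling error, perturb each state's polled
# margin, tally electoral votes, repeat many times. The win probability
# is just the fraction of simulated elections a candidate wins.
import random

states = {  # state: (electoral votes, polled D-minus-R margin in points)
    "PA": (19, 0.5), "GA": (16, -0.8), "MI": (15, 1.2),
    "SafeD": (240, 15.0), "SafeR": (248, -15.0),
}
TO_WIN = 270

def dem_wins_once() -> bool:
    national_err = random.gauss(0, 2.0)  # polling miss shared by all states
    dem_ev = 0
    for ev, margin in states.values():
        state_err = random.gauss(0, 3.0)  # additional state-level noise
        if margin + national_err + state_err > 0:
            dem_ev += ev
    return dem_ev >= TO_WIN

trials = 20_000
p_dem = sum(dem_wins_once() for _ in range(trials)) / trials
print(f"P(Dem win) ~ {p_dem:.0%}")  # the headline "win probability"
```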
I doubt even the usefulness of polls. Who answers polls anymore? We've been polled and surveyed to death. Nobody has time for it anymore.
I do or at least I have in the past. If the caller will answer my Google call screening, I'll answer or call back.
There's always been an issue of the sample not matching the population, and a variety of methods to correct for that. But I will admit there are some limitations to that, and I don't quite understand where the limitations are. (I love math, but I failed statistics once, and barely passed the second time. I prefer symbols and proofs and closed forms...)
I hate my country
Important to note, these forecasts are absolutely subject to change. This is not Nostradamus. It is merely reading the polls and factors as they stand. If Harris obliterates Trump tomorrow then this flips. If everyone donates enough money this week and the DNC gets more ground network for their get out the vote efforts, then this flips
All the model guys are very clear about this.
What's driving this current Trump run in the models is the lack of a convention bump for Harris. Models automatically tune a candidate's chances down by about 10 percent after their convention because it's usually a bit of a honeymoon period. It's been pointed out though that she may have had her honeymoon period after taking over from Biden. In which case the odds are more like 46/54.
The takeaway from this is that this election is incredibly close right now. Even at 36/64 it is very close. Both candidates need to run near perfect campaigns to have a chance of winning.
What the fuck? How can this "race" even be close? How brain-dead emotional are the voters? There are two candidates, you choose the person whose ideals and direction you believe in? How is the election process surprisingly similar to an ADHD kindergarten with a nominated side whose campaign is metaphorical shit slinging??
My parents believe whatever slop is thrown at them, so it doesn't surprise me.
Mine too. They want to believe, so they do.
There are still people that distrust government as a general principle AND still believe the GOP is the party of "small government" so they will vote for whatever name is next to the R.
There's a lot of Gen X and Millennials who were raised to automatically sit between the parties and ignore all the noise about each party being evil. To try and make an active decision, rather than just being a fan. From 1960 to 2000, that wasn't horrible advice for the average person. But now it's led them into considering Trump and Harris as equals because they've ignored all "the noise" about Trump.
That's my opinion anyways. It's what I've encountered in many places.
Nate Silver also predicted Hillary would win against Trump.
He predicted she had a 70% chance to win. He didn't predict that she would win.
So... About the same as this
I suspect Harris got her "convention bounce" (as defined by the model) right when she became the nominee. This made the model think she was overperforming pre-convention, and now the bounce is fading "early," when the model thinks she should still have it, so it seems like she's underperformed.
If this theory is right, then given how close the swing states are, and thus how swingy this number can be, it most likely goes back to maybe 55/45 Trump.
Isn't he also shilling PolyMarket as well?
He's a paid something or other for them, why?
But still no getting rid of the EC or passing RCV by incumbents.
Gaza is calling
Gaza is calling for Trump to be elected?
Or what else did you mean?
The misinformation team from Russia just wants to destroy us from the inside. Gaza is not going to be in the debate at all. Trump will certainly do whatever it is that makes him more followers or money. And I’m betting that letting Muslims die is pretty high on the MAGA wish list.
This is going to sound bad but as much as I empathize with Palestine we have massive problems here too that need to be solved
Harris's lack of action on Gaza is the reason she's sinking in the forecasts.
She should help Gaza to help herself, if for no other reason.
I doubt that. Most people outside of here don’t care, or don’t care about it enough to allow a Trump win, because that affects them more, and we all know Americans are selfish.
Arab-Americans in Michigan care, and it's going to cost Harris Michigan.
harris isn't prime minister of israel nor president of the us
No, but she does seem to be the president of Israel’s cheer squad.
"Forecasts" is a very strange thing to call polls.
The forecast isn't a poll. 538 analyzes data from polls to create their forecast.
If you don't want to read the article, you could at least read the headline, which is also the title of this lemmy post that you're commenting on right now. Look at the top of your screen. It says "forecast".
Abolish the Electoral Collage.
That ain’t gonna happen.
That said, we can make it irrelevant with The National Popular Vote Interstate Compact. It’s 77% the way there.
https://en.wikipedia.org/wiki/National_Popular_Vote_Interstate_Compact
I love the concept of it, but the thing about the NPVIC is that it’s 0% of the way there until it’s 100% of the way there. So while 77% seems like we’re close, and there is legislation pending that could get us to 95%, the only reason it seems to be going forward steadily is that it does nothing unless you go all the way.
The moment there is the prospect of legislation in a state that would get that last 5%, not only will that legislation be fought tooth and nail, but every state that has already entered the compact will have to fight like hell to keep it in place, not once but constantly forever. Because if you’re just over the threshold then almost any state backing out of the compact will nullify the whole thing again.
It seems too fragile to be a workable solution. But I guess I don’t see anything wrong with trying!
Many states will be incentivized to keep the compact passes because it means the election stops focusing on a handful of swing states.
Every presidential campaign will have to adopt a 50 state strategy, meaning a lot of states will receive political attention they never get because they aren't swing states.
The legislation has to all-or-nothing precisely because of the effect on political attention. If a state awarded its delegates by national popular vote before the magic 270 was reached then politicians can win that state by maximizing their votes in other states so they would be incentivised to put more focus on the states that aren't signed up if they expect to win the popular vote, reducing the political attention paid to signatories.
When the 270 mark is passed, it has the effect of making every vote equal everywhere.
Right, and this is bad for the Republican Party, so they will do everything in their power to stop it.
Very interesting, I had never heard of this.
CGP Grey has a great video on it.
Sssh, Top Sneaky.
Or Electoral College even.
I would like to see what an Electoral Collage looks like.
You don't have one in your Democracy Scrapbook?
Mom went authoritarian on me when I cut up her Electoral magazines.
It's what Trump tried to make with his fake idea.
I thought that's what the Republicans were trying to make with all those weird gerrymandered districts.
Singular prints only
Just a reminder to not be complacent.
Here's hoping Trump pulls a Biden tomorrow.
Or a James Earl Jones. I'm not picky.
Oh god, this is how I find out!!!!!
You really maxed out the comedic value of the shenanigan though, so there’s that!
Too soon.
For once, it might actually be too soon.
Too Soon! (I just read about JEJ)
Sad about JEJ. But maybe a rule of 3s that takes out Trump wouldn't be the worst outcome.
Dang, I wonder how Henry Kissinger escaped the claw for so long
Kissinger was a liche. We got a lucky and destroyed the phylactery.
The Behind The Bastards episode on him is shocking and hilarious
I wish BTB and The Dollop would do more crossover episodes.
I’m almost certain JEJ would be happy to take the bastard out with him
The problem was that Biden was actually trying to say something complicated and he got tripped up. Trump has always spoken at a kindergarten level because he knows he has nothing to say.
Sure. He did get tripped up though, and he end Ed up looking like an idiot. That said, it was for the best. Harris is an infinitely better candidate.
The question was if Trump was going to do the same.
Who is this guy and how serious should we take this information? This is by far the highest number I've seen for Trump so far.
He's quite a well known pollster. Up until recently he was responsible for Five Thirty Eight, but it got sold and he left.
He got the 2016 election wrong (71 Hilary, 28 trump) He got the 2020 election right (89 Biden, 10 Trump)
Right and wrong are the incorrect terms here, but you get what I mean.
He didn’t get it wrong. He said the Clinton Trump election was a tight horse race, and Trump had one side of a four sided die.
The state by state data wasn’t far off.
Problem is, people don’t understand statistics.
If someone said Trump had over a 50% probability of winning in 2016, would that be wrong?
In statistical modeling you don’t really have right or wrong. You have a level of confidence in a model, a level of confidence in your data, and a statistical probability that an event will occur.
So if my model says RFK has a 98% probability of winning, then it is no more right or wrong than Silver's model?
If so, then probability would be useless. But it isn't useless. Probability is useful because it can make predictions that can be tested against reality.
In 2016, Silver's model predicted that Clinton would win. Which was wrong. He knew his model was wrong, because he adjusted his model after 2016. Why change something that is working properly?
You're conflating things.
Your model itself can be wrong, absolutely.
But for the person above to say Silver got something wrong because a lower probability event happened is a little silly. It'd be like flipping a coin heads side up twice in a row and saying you've disproved statistics because heads twice in a row should only happen 1/4 times.
Silver made a prediction. That's the deliverable. The prediction was wrong.
Nobody is saying that statistical theory was disproved. But it's impossible to tell whether Silver applied theory correctly, and it doesn't even matter. When a Boeing airplane loses a door, that doesn't disprove physics but it does mean that Boeing got something wrong.
Comparing it to Boeing shows you still misunderstand probability. If his model predicts 4 separate elections where each underdog candidate had a 1 in 4 chance of winning. If only 1 of those underdog candidates wins, then the model is likely working. But when that candidate wins everyone will say "but he said it was only a 1 in 4 chance!". It's as dumb as people being surprised by rain when it says 25% chance of rain. As long as you only get rain 1/4 of the time with that prediction, then the model is working. Presidential elections are tricky because there are so few of them, they test their models against past data to verify they are working. But it's just probability, it's not saying this WILL happen, it's saying these are the odds at this snapshot in time.
Presidential elections are tricky because there is only one prediction.
Suppose your model says Trump has a 28% chance of winning in 2024, and mine says Trump has a 72% chance of winning in 2024.
There will only be one 2024 election. And suppose Trump loses it.
If that outcome doesn't tell us anything about the relative strength of our models, then what's the point of using a model at all? You might as well write a single line of code that spits out "50% Trump", it is equally useful.
The point of a model is to make a testable prediction. When the TV predicts a 25% chance of rain, that means that it will rain on one fourth of the days that they make such a prediction. It doesn't have to rain every time.
But Silver only makes a 2016 prediction once, and then he makes a new model for the next election. So he has exactly one chance to get it right.
His model has always been closer state to state, election to election than anyone else's, which is why people use his models. He is basically using the same model and tweaking it each time, you make it sound like he's starting over from scratch. When Trump won, none of the prediction models were predicting he would win, but his at least showed a fairly reasonable chance he could. His competitors were forecasting a much more likely Hillary win while he was showing that trump would win basically 3 out of 10 times. In terms of probability that's not a blowout prediction. His model was working better than competitors. Additionally, he basically predicted the battleground states within a half percentage iirc, that happened to be the difference between a win/loss in some states.
You're saying it hitting one of those 3 of 10 is "getting it wrong", that's the problem with your understanding of probability. By saying that you're showing that you don't actually internalize the purpose of a predictive model forecast. It's not a magic wand, it's just a predictive tool. That tool is useful if you understand what it's really saying, instead of extrapolating something it absolutely is not saying. If something says something will happen 3 of 10 times, it happening is not evidence of an issue with the model. A flawless model with ideal inputs can still show a 3 of 10 chance and should hit in 30% of scenarios. Certainly because we have a limited number of elections it's hard to prove the model, but considering he has come closer than competitors, it certainly seems he knows what he is doing.
First, we need to distinguish Silver's state-by-state prediction with his "win probability". The former was pretty unremarkable in 2016, and I think we can agree that like everyone else he incorrectly predicted WI, MI, and PA.
However, his win probability is a different algorithm. It considers alternate scenarios, eg Trump wins Pennsylvania but loses Michigan. It somehow finds the probability of each scenario, and somehow calculates a total probability of winning. This does not correspond to one specific set of states that Silver thinks Trump will win. In 2016, it came up with a 28% probability of Trump winning.
You say that's not "getting it wrong". In that case, what would count as "getting it wrong"? Are we just supposed to have blind faith that Silver's probability calculation, and all its underlying assumptions, are correct? Because when the candidate with a higher win probability wins, that validates Silver's model. And when that candidate loses, that "is not evidence of an issue with the model". Heads I win, tails don't count.
If I built a model with different assumptions and came up with a 72% probability of Trump winning in 2016, that differs from Silver's result. Does that mean that I "got it wrong"? If neither of us got it wrong, what does it mean that Trump's probability of winning is simultaneously 28% and 72%?
And if there is no way for us to tell, even in retrospect, whether 28% is wrong or 72% is wrong or both are wrong, if both are equally compatible with the reality of Trump winning, then why pay any attention to those numbers at all?
I see what you're not getting! You are confusing giving the odds with making a prediction and those are very different.
Let's go back to the coin flips, maybe it'll make things more clear.
I or Silver might point out there's a 75% chance anything besides two heads in a row happening (which is accurate.) If, as will happen 1/4 times, two heads in a row does happen, does that somehow mean the odds I gave were wrong?
Same with Silver and the 2016 election.
Is it?
Suppose I gave you two coins, which may or may not be weighted. You think they aren't, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.
We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?
In the real world, your odds will depends on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.
But suppose we only have one chance to toss them, and after which they shatter. In that case, the model we use for the coins, weighted vs unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but the internal workings of a one-shot model - including odds - are unfalsifiable. Same with Silver and the 2016 election.
The thing is, Nate Silver did not make a prediction about the 2016 race.
He said that Hilary had a higher chance of winning. He didn't say Hilary was going to win.
How can you falsify the claim "Clinton has a higher chance of winning"?
Alternately:
Silver said "Clinton has a higher chance of winning in 2016" whereas Michael Moore said "Trump has a higher chance of winning in 2016".
In hindsight, is one of these claims more valid than the other? Because if two contradictory claims are equally valid, then they are both meaningless.
Would you mind restating the prediction?
He predicted Clinton would win. That's the only reasonable prediction if her win probability was over 50%
If I say a roll of a 6-sided die has a >50% chance of landing on a number above 2, and after a single roll it lands on 2, was I wrong?
If anything, the problem is in the unfalsifiability of the claim.
Admittedly, 538 was pretty good about showing their work after. While individual events suffer from the unfalsifiability issue, 538 when Silver was around, did pretty good "how did we do for individual races/states" and compared their given odds to the actual results.
If you predict that a particular die will land on a 3-6 and it lands on a 2, then you were wrong. Predictions are occasionally wrong, that's unavoidable in the real world. Maybe the die wasn't fair and you should adjust your priors.
On the other hand, if you refuse to make a prediction but simply say a particular die has a >50% chance of landing above 2, then your claim is non-falsifiable. I could roll a hundred 1's in a row, and you could say that your probability is correct and I was just unlucky. That's why non-falsifiable claims are ultimately worthless.
Finally, if you claim that a theoretically fair die has a 2/3 probability of landing on 3-6 then you are correct, but that does not necessarily have anything to do with the real world of dice.
It's forecasting, not a prediction. If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained would you say the forecast was wrong? You could say that if you want, but the point isn't to give a definitive prediction of the outcome (because that's not possible) it's to give you an idea of what to expect.
If there's a 28% chance of rain, it doesn't mean it's not going to rain, it actually means you might want to consider taking an umbrella with you because there's a significant probability it will rain. If a batter with a .280 batting average comes to the plate with 2 outs at the bottom of the ninth, that doesn't mean the game is over. If a politician has a 28% probability of winning an election, it's not a statement that the politician will definitely lose the election.
Is it possible for the forecast to be wrong?
I think so. If you look at all the times the forecast predicts a 28% chance of rain, then it should rain on 28% of those days. If it rained, say, on half the days that the forecast gave a 28% chance of rain then the forecast would be wrong.
With Silver, the same principle applies. Clinton should win at least 50% of the 2016 elections where she has at least a 50% chance of winning. She didn't.
If Silver kept the same model over multiple elections, then we could look at his probabilities in finer detail. But he doesn't.
How about this:
Two people give the odds for the result of a coin flip of non-weighted coins.
Person A: Heads = 50%, Tails = 50%
Person B: Heads = 75%, Tails = 25%
The result of the coin flip ends up being Heads. Which person had the more accurate model? Did Person A get something wrong?
Person B's predicted outcome was closer to the truth.
Perhaps person A's prediction would improve if multiple trials were allowed. Perhaps their underlying assumptions are wrong (ie the coins are not unweighted).
But in this hypothetical scenario of explicitly unweighted coins, Person A was entirely correct in the odds they gave. There's nothing to improve.
We are talking about testing a model in the real world. When you evaluate a model, you also evaluate the assumptions made by the model.
Let's consider a similar example. You are at a carnival. You hand a coin to a carny. He offers to pay you $100 if he flips heads. If he flips tails then you owe him $1.
You: The coin I gave him was unweighted so the odds are 50-50. This bet will pay off.
Your spouse: He's a carny. You're going to lose every time.
The coin is flipped, and it's tails. Who had the better prediction?
You maintain you had the better prediction because you know you gave him an unweighted coin. So you hand him a dollar to repeat the trial. You end up losing $50 without winning once.
You finally reconsider your assumptions. Perhaps the carny switched the coin. Perhaps the carny knows how to control the coin in the air. If it turns out that your assumptions were violated, then your spouse's original prediction was better than yours: you're going to lose every time.
Likewise, in order to evaluate Silver's model we need to consider the possibility that his model's many assumptions may contain flaws. Especially if his prediction, like yours in this example, differs sharply from real-world outcomes. If the assumptions are flawed, then the prediction could well be flawed too.
Yes. But you'd have to run the test repeatedly and see if the outcome, i.e. Clinton winning, happens as often as the model predicts.
But we only get to run an election once. And there is no guarantee that the most likely outcome will happen on the first try.
If you can only run an election once, then how do you determine which of these two results is better (given than Trump won in 2016):
You do it by comparing the state voting results to pre-election polling. If the pre-election polling said D+2 and your final result was R+1, then you have to look at your polls and individual polling firms and determine whether some bias is showing up in the results.
Is there selection bias or response bias? You might find that a set of polls is randomly wrong, or you might find that they're consistently wrong, adding 2 or 3 points in the direction of one party but generally tracking with results across time or geography. In that case, you determine a "house effect," in that either the people that firm is calling or the people who will talk to them lean 2 to 3 points more Democratic than the electorate.
All of this is explained on the website and it's kind of a pain to type out on a cellphone while on the toilet.
You are describing how to evaluate polling methods. And I agree: you do this by comparing an actual election outcome (eg statewide vote totals) to the results of your polling method.
But I am not talking about polling methods, I am talking about Silver's win probability. This is some proprietary method takes other people's polls as input (Silver is not a pollster) and outputs a number, like 28%. There are many possible ways to combine the poll results, giving different win probabilities. How do we evaluate Silver's method, separately from the polls?
I think the answer is basically the same: we compare it to an actual election outcome. Silver said Trump had a 28% win probability in 2016, which means he should win 28% of the time. The actual election outcome is that Trump won 100% of his 2016 elections. So as best as we can tell, Silver's win probability was quite inaccurate.
Now, if we could rerun the 2016 election maybe his estimate would look better over multiple trials. But we can't do that, all we can ever do is compare 28% to 100%.
Just for other people reading this thread, the following comments are an excellent case study in how an individual (the above poster) can be so confidently mistaken, even when other posters try to patiently correct them.
May we all be more respectful of our own ignorance.
But what if there's a 28% chance said poster is right?
He works for Peter Theil now, so I take everything he says with a huge grain of salt.
Because of Polymarket? Not everything is a conspiracy.
Because Peter Thiel is a right-wing billionaire piece of shit whose little bitch boy is J. D. Vance.
Okay. That's not in dispute. But partial ownership of a company doesn't make its employees your slaves. Especially when the company has nothing to do with ideological stuff.
He's just a guy analizing the polls. The source is Fox News. He mentions in the article that tomorrow's debate could make that poll not matter.
Should you trust Nate or polls? They're fun but... Who is answering these polls? Who wants to answer them before even October?
So yeah take it seriously that a poll found that a lot of support for Trump exists. But it's just a moment of time for whoever they polled. Tomorrow's response will be a much better indication of any momentum.
I have shamed my family
Whi, is it not completeli spelled correctli?
It just seems strange because I don't think that many people are on the fence. Perhaps I'm crazy, but I feel most people know exactly who they're voting for already. Makes me wonder how valid this cross-section was that was used as the sample set. If it accurately represents the US, including undecided voters, then... 😮
The cross-section of people you know are more politically off the fence than the entire nation. Those that aren't online at all are also more undecided and less likely to interact with you.
I listen to those news things that interview people on the street and I'm amazed at how many are uninformed and can go either way.
There's a Trump undercount in polling: Trump voters don't trust "MSM" and therefore don't answer calls from pollsters, or are embarrassed to admit they will vote for him.
Same goes for asking random people on the street.
There's also an undercount of young people who don't answer the phone.
I don't know many people (boomers and younger) who answer the phone for numbers they don't recognize. I'd like to imagine that the people who do answer strange numbers tend to be out of touch. Does that bias the polls toward fools, or toward the lucky few who aren't getting spammed?
And an undercount of women who are telling their husbands and anyone else who asks that they'll be voting Trump, but will actually vote for Harris when the time comes. And an undercount of bro-skis who claim to support Harris, but secretly hate the fact that they can't get a "female" who will cater to their every whim, and will vote Trump because he'll increase oppression of women. And an undercount of cat ladies... etc. Most "high quality" models at least attempt to mitigate these over- and undercounts, which definitely skews results, and is why poll aggregators are important: they help to even out biases across polling types. There's really only ONE poll that matters. VOTE! BRING YOUR FRIENDS!
Pollsters are compensating for that undercount of unlikely voters. In 2016 they were low; in 2020 still low, but pretty close. They will have scaled it up to be more accurate this go-around.
Except there are a few snags there. In between the 2020 election and now, there was an insurrection, Roe v. Wade was overturned, and Trump was convicted of crimes and indicted for many more. These are things that a statistical process can't really account for when putting weight on how likely a respondent is to actually vote.
Trump lost in 2020. Do all of these events mean more people will turn out for him this time than in the last election? Or will fewer people turn out for him?
Every time something unprecedented happens, it hurts the ability of a scientific statistical process to predict the outcome. Science can't predict things there's no model for, and you can't have a model for something you haven't seen before. And a hell of a lot of unprecedented shit has happened. Maybe next time a convicted felon who tried to overthrow democracy runs in an election there can be accurate polling, but it's not going to be the case in this election.
There really is no way to know what will happen on election day. So there's nothing to do other than maximum effort until election day.
The issue isn't really people on the fence for Trump or Harris but mainly with generating turnout. After Biden's poor debate performance, people didn't change their mind and decide to vote for Trump, they became apathetic and maybe wouldn't show up to vote.
Harris doesn't need to persuade people to abandon Trump, she needs to get people excited to show up to vote.
He's not polling, he is aggregating all of the polls into a prediction model. Either way it is just a snapshot in time.
The key to doing statistics well is to make sure you aren't changing the results with any bias. This means enough samples, a good selection of samples, and weighting the outcomes correctly (a sketch of what that weighting looks like follows below). Even honest pre-election polling is hard to get right, and because of that it's easy to make things lean a certain way if you want certain results, or are getting paid to get those results.
There's only one poll that matters, and that poll should include as large a sample as possible and be counted correctly. Even though some will try to prevent that from happening.
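(For anyone wondering what that weighting looks like in practice, here's a minimal post-stratification sketch in Python. The age groups, shares, and responses are all invented, not any pollster's real scheme.)

```python
from collections import Counter

# Minimal sketch: reweight respondents so each group's share of the
# sample matches its known share of the population. Numbers invented.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
# (age group, supports candidate A) for each respondent
sample = [("18-34", 1), ("35-64", 0), ("65+", 1), ("65+", 1), ("65+", 0)]

counts = Counter(group for group, _ in sample)
sample_share = {g: c / len(sample) for g, c in counts.items()}

# Each respondent's weight = population share / sample share of their group
weights = [population_share[g] / sample_share[g] for g, _ in sample]
support = sum(w * y for w, (_, y) in zip(weights, sample)) / sum(weights)
print(f"raw: {sum(y for _, y in sample) / len(sample):.0%}, weighted: {support:.0%}")
```

The oversampled 65+ group gets downweighted, pulling raw 60% support down to about 43%. Which groups you weight on, and by how much, is exactly where honest and dishonest polling diverge.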
It's a chance of winning, not a poll, so 64% is high but not insane. Silver is serious and it's a decent model. Knowing the model, there's a pretty good chance this is a high point for Trump, but it's not like he's pulling this out of nowhere; he has had similar models every election cycle since like 2008.
If it's overstating Trump, it's because his model is interpreting the data incorrectly because of the weirdness of this election cycle. I personally think that is likely the case here.
This quote sums it up:
Chase (@chsrdn)
"In the future we won't elect presidents. We'll have a primary, then Nate Silver will go into a spice trance and pick the winner."
3:41 AM · Nov 7, 2012
That used to be true, but in recent years he has gotten a lot more conservative, so I personally take his predictions with a huge grain of salt.
Yes, I kinda agree. Let's see his model's Brier score in November :)
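(For anyone unfamiliar: the Brier score is just the mean squared error between forecast probabilities and the 0/1 outcomes, so 0 is perfect and always saying 50% scores 0.25. A minimal sketch with made-up numbers; scored across many states and cycles, it sidesteps the "one election, one outcome" problem argued about above.)

```python
# Brier score: mean squared error between probabilities and 0/1 outcomes.
# Lower is better; forecasting 0.5 everywhere scores 0.25.
def brier(forecasts, outcomes):
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# e.g. scoring three per-state win probabilities once results are known:
print(brier([0.9, 0.7, 0.3], [1, 1, 0]))  # ~0.063
```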
This isn't a poll. That's why the number is so high. His model is also automatically depressing Harris' numbers because of the convention right now. (It did the same thing to Trump after his convention)
Nate has been upfront in his newsletters about the factor dropping off the model after today, but then there's also the debate. Things are likely to be far more clear going into the weekend, because we'll have post-debate polling being published and no more convention adjustments.
You shouldn't take it seriously. The 24-hour news cycle depends on data like this. It just doesn't tell us anything.
Their models have been really accurate for the last several election cycles. They’re part of fivethirtyeight.com
No, Nate is not part of 538 anymore. He now works for a crypto betting website partly owned by Peter Thiel.
I'll let you decide how neutral that makes him.
Peter Thiel, the same guy who sold Republicans on JD the couch fucker Vance
While that is also my pet name for JD, keep in mind it is aspirational, not historic.
He's a degen gambler who admits in his book he was gambling up to $10k a day while running 538... It never made him go "huh, maybe I fucked over my employees because I'm a degen gambler."
Boo, what does this mean for 538?
Nate is not with 538 anymore. Disney didn't renew his contract. However, he got to keep the model that he developed, and he publishes it for his newsletter subscribers. 538 had to rebuild their model from scratch this year with G. Elliott Morris.
Now Nate hosts the podcast Risky Business with Maria Konnikova, the psychologist who became a professional poker player while researching a book. It's pretty good.
Well, he did predict Clinton would win in 2016 so there's that.
He's renowned for being wrong about several previous elections
All prediction models only give you odds, not flawless accuracy. He has been closer in every election than most everyone else in the prediction market.
Who is Nate Silver? Really?
Hey man, there is a mountain of people who don't know things and are scared to ask. Learning is always a good thing
XKCD 1053
Social media isn't a search engine. If an article is referring to someone by name in the title, they almost certainly have a Wikipedia page the questioner could read rather than requesting random strangers on a message board provide answers for them (in the form of multiple answers of varying bias and accuracy).
Wanting to learn isn't the problem, it's not spending the tiniest bit of personal effort before requesting service from other people.
Or he could have a conversation on a conversation forum.
Perish the thought!
Yeah. I think we take our easy navigation for granted sometimes. Like... I can get most information pretty quickly and not have a lot of trouble discerning what I need to do to get that information.
But not everyone is as "natural" at surfing. Maybe they have trouble putting things in perspective, they don't know how to use a tool like Wikipedia, or even - maybe they just don't like researching.
I'm so glad we have people that are great at keeping up with everything. But we have to remember that presenting and teaching information accurately and helpfully is a skill that we need desperately.
Ignore headlines
JUST VOTE
His older model at 538 has things tighter, with the coin toss slightly weighted toward Harris: https://projects.fivethirtyeight.com/trump-harris-2024-election-map/
Whether it's 55/45 or 65/35, we're still basically talking about the same thing. This race is neck and neck, and whoever gets the turnout edge will win. We're talking about fractions of percents that are at play, which is why these odds are a coin toss.
Edit: it looks like 538’s model is new, and Silver doesn’t like it or the guy behind it.
https://www.natesilver.net/p/why-i-dont-buy-538s-new-election
Different model, same website. Silver got to keep his model and took it elsewhere after departing from 538.
TIL. I thought they forked it. I didn’t realize 538’s was all new.
Wait, he's not 538 now?
Nope. I don't know exactly what happened there, but after ABC bought it, Nate was gradually phased out. He found alternative funding.
To be fair, Nate Silver is an idiot funded by Peter Thiel
He's not an idiot. He is funded by Thiel. He has been politically captured by authoritarian capitalism, so I'd be wary of any models he produced that aren't independently audited for bias.
I think polls are useful, and the Monte Carlo simulation approach for turning them into an electoral vote probability is good, but there's too much "magic sauce" left over for me to trust the outputs from Silver or 538.
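(The Monte Carlo part itself is simple; the magic sauce is in what you feed it. A minimal Python sketch below, with invented state probabilities, and with states drawn independently; real models correlate polling errors across states, which is a large part of that sauce.)

```python
import random

# Minimal sketch: turn per-state win probabilities into a chance of
# reaching 270 electoral votes. All states/probabilities are invented,
# and state outcomes are drawn independently, which real models avoid.
states = {  # state: (electoral_votes, p_candidate_wins)
    "PA": (19, 0.52), "GA": (16, 0.48), "MI": (15, 0.55), "NC": (16, 0.45),
    "AZ": (11, 0.47), "WI": (10, 0.53), "NV": (6, 0.50),
}
SAFE_EV = 226  # assumed electoral votes from non-competitive states
TRIALS = 100_000

wins = sum(
    SAFE_EV + sum(ev for ev, p in states.values() if random.random() < p) >= 270
    for _ in range(TRIALS)
)
print(f"win probability: {wins / TRIALS:.1%}")
```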
I doubt even the usefulness of polls. Who answers polls anymore? We've been polled and surveyed to death. Nobody has time for it anymore.
I do or at least I have in the past. If the caller will answer my Google call screening, I'll answer or call back.
There's always been an issue of the sample not matching the population, and a variety of methods to correct for that. But I will admit there are some limitations to that, and I don't quite understand where the limitations are. (I love math, but I failed statistics once and barely passed the second time. I prefer symbols and proofs and closed forms...)
I hate my country
Important to note: these forecasts are absolutely subject to change. This is not Nostradamus; it is merely reading the polls and factors as they stand. If Harris obliterates Trump tomorrow, then this flips. If everyone donates enough money this week and the DNC builds more of a ground network for their get-out-the-vote efforts, then this flips.
All the model guys are very clear about this.
What's driving this current Trump run in the models is the lack of a convention bump for Harris. Models automatically tune a candidate's chances down by about 10 percent after their convention because it's usually a bit of a honeymoon period. It's been pointed out though that she may have had her honeymoon period after taking over from Biden. In which case the odds are more like 46/54.
The takeaway from this is that this election is incredibly close right now. Even at 36/64 it is very close. Both candidates need to run near perfect campaigns to have a chance of winning.
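(Roughly, the convention adjustment described above works like a temporary penalty that decays to zero in the weeks after the convention. A minimal sketch of that idea; the bounce size and decay window here are invented, not Silver's actual parameters.)

```python
# Minimal sketch: discount a candidate's polling margin while their
# post-convention honeymoon is assumed to still be inflating it.
# Bounce size and decay window are invented illustration numbers.
def adjusted_margin(raw_margin, days_since_convention,
                    bounce_pts=2.5, decay_days=14):
    if not 0 <= days_since_convention < decay_days:
        return raw_margin  # outside the bounce window: no adjustment
    remaining = 1 - days_since_convention / decay_days
    return raw_margin - bounce_pts * remaining

print(adjusted_margin(3.0, 2))   # ~0.86: most of the bounce discounted
print(adjusted_margin(3.0, 20))  # 3.0: the adjustment has dropped off
```

If the candidate never actually had a bounce, this kind of discount makes her look like she's slipping until the window closes.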
What the fuck? How can this "race" even be close? How brain-dead emotional are the voters? There are two candidates; you choose the person whose ideals and direction you believe in. How is the election process so similar to an ADHD kindergarten, with one nominated side whose campaign is metaphorical shit slinging??
My parents believe whatever slop is thrown at them, so it doesn't surprise me.
Mine too. They want to believe, so they do.
There are still people that distrust government as a general principle AND still believe the GOP is the party of "small government" so they will vote for whatever name is next to the R.
There's a lot of Gen X and Millennials who were raised to automatically sit between the parties and ignore all the noise about each party being evil. To try and make an active decision, rather than just being a fan. From 1960 to 2000, that wasn't horrible advice for the average person. But now it's led them into considering Trump and Harris as equals because they've ignored all "the noise" about Trump.
That's my opinion anyways. It's what I've encountered in many places.
Nate Silver also predicted Hillary would win against Trump.
He predicted she had a 70% chance to win. He didn't predict her to win.
So... About the same as this
I suspect Harris got her "convention bounce" (as defined by the model) right when she became the nominee. That made the model think she was overperforming pre-convention, and now the bounce is fading "early," while the model thinks she should still have it, so it looks like she's underperforming.
If that theory is right, then knowing how close the swing states are, and thus how swingy the model can be, this number most likely goes back to maybe 55/45 Trump.
Isn't he also shilling Polymarket?
He's a paid something or other for them, why?
But incumbents still won't get rid of the EC or pass RCV.
Gaza is calling
Gaza is calling for Trump to be elected?
Or what else did you mean?
The misinformation team from Russia just wants to destroy us from the inside. Gaza is not going to be in the debate at all. Trump will certainly do whatever it is that makes him more followers or money. And I’m betting that letting Muslims die is pretty high on the MAGA wish list.
This is going to sound bad, but as much as I empathize with Palestine, we have massive problems here too that need to be solved
Harris's lack of action on Gaza is the reason she's sinking in the forecasts.
She should help Gaza to help herself, if for no other reason.
I doubt that. Most people outside of here don't care, or don't care about it enough to allow a Trump win, because that affects them more, and we all know Americans are selfish.
Arab-Americans in Michigan care, and it's going to cost Harris Michigan.
Harris isn't prime minister of Israel, nor president of the US
No, but she does seem to be the president of Israel’s cheer squad.
"Forecasts" is a very strange thing to call polls.
The forecast isn't a poll. 538 analyzes data from polls to create their forecast.
If you don't want to read the article, you could at least read the headline, which is also the title of this lemmy post that you're commenting on right now. Look at the top of your screen. It says "forecast".