Arguing vs. Testing
Post by DDHvi   » Fri Jul 10, 2015 10:56 pm

"Is so!" "Is Not!" "Is so!" "Is Not!" "Is so!" "Is Not!" "Is so!" "Is Not!"

Anyone who has dealt with grade schools has heard that pattern at one time or another.

Some people, supposedly adults, claim there is no such thing as reality - we each make our own universe. That view rules out any rational testing of our ideas, leaving us arguing like little kids.

In at least one of his columns, Jerry Pournelle pointed out that most social studies courses only expose students to statistics, instead of giving them solid training in using that branch of math. Perhaps this is why so many political snake-oil salesmen exist?

For :ugeek: some :ugeek: people :ugeek:, the book "Expert Political Judgment" by Philip E. Tetlock would be interesting reading. For the rest of us, a not-quite-so-scholarly summary will be better.

It reports on a fourteen-year research project on expert political prediction, built around two questions: 1) How good is it? and 2) How can we know?

The research team used statistical analysis to compare experts' pre-event opinions on the resolution of various political situations with the actual events. Experts do better than mindless guessing or briefly briefed undergraduates, but three statistical methods do better than the experts. Interestingly, ideology has little effect on accuracy; it just produces different errors. What does have an effect is the mode of thinking a given expert uses. Surprise, surprise - the experts with the least accuracy are also the ones the media prefers to talk with and quote. Who would have thunk it???
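
For a concrete feel of how forecasts can be scored against reality, here is a minimal sketch (in Python, with made-up forecasters and numbers - not Tetlock's actual data or methods) of one common scoring rule for probability forecasts, the Brier score, pitted against a simple base-rate baseline:

[code]
# Minimal sketch: scoring probabilistic forecasts with the Brier score.
# All forecasters and numbers below are invented for illustration only.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and what happened.
    Lower is better: 0.0 is perfect; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# 1 = the event occurred, 0 = it did not (hypothetical record).
outcomes = [1, 0, 0, 1, 0, 1, 0, 0]

expert   = [0.9, 0.8, 0.3, 0.4, 0.7, 0.5, 0.2, 0.6]          # confident, often off
baseline = [sum(outcomes) / len(outcomes)] * len(outcomes)    # just predict the base rate

# Here the mechanical base-rate forecast edges out the (made-up) expert.
print("expert  ", round(brier_score(expert, outcomes), 3))
print("baseline", round(brier_score(baseline, outcomes), 3))
[/code]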

Those who liked the statistics-based psychohistory of Isaac Asimov's Foundation books are likely to be interested in knowing that three statistical methods beat the experts! There is both a methodological appendix and a technical appendix for those who want to dig deep.

One science fiction story I read had a society in which anyone could propose a policy, or vote on one that was proposed, but each proposal or vote had to be accompanied by two clear, testable, short essays: one about what the author/voter thought would happen if the policy were implemented, and the other about what would happen if it were not. These were archived. Anyone who predicted well had their voting weight increased above the average, while those who predicted poorly had their voting weight decreased. A rough sketch of how such a scheme might work appears below.
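
Here is a hypothetical sketch of that vote-weighting idea - the starting weight, adjustment step, and floor are all invented for illustration, not taken from the story:

[code]
# Hypothetical sketch of the vote-weighting scheme described above:
# every voter files a testable prediction with each vote; once the outcome
# is known, good predictors gain voting weight and poor ones lose it.

class Voter:
    def __init__(self, name):
        self.name = name
        self.weight = 1.0  # everyone starts at the average

    def record_prediction(self, predicted_outcome, actual_outcome, step=0.1):
        """Compare the archived prediction with what actually happened."""
        if predicted_outcome == actual_outcome:
            self.weight += step
        else:
            self.weight = max(0.1, self.weight - step)  # floor so no one drops to zero

def tally(votes):
    """votes: list of (Voter, choice) pairs; returns weighted totals per choice."""
    totals = {}
    for voter, choice in votes:
        totals[choice] = totals.get(choice, 0.0) + voter.weight
    return totals

# Tiny usage example with invented voters and outcomes.
alice, bob = Voter("Alice"), Voter("Bob")
alice.record_prediction("policy works", "policy works")   # Alice predicted well
bob.record_prediction("policy works", "policy fails")     # Bob predicted poorly
print(tally([(alice, "yes"), (bob, "no")]))  # {'yes': 1.1, 'no': 0.9}
[/code]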

It would be nice to have a means of testing against reality that didn't depend on having disasters from poor policies!!!

Note that the author only analyzed the accuracy of predictions and the methods used to test it. That does include an analysis of the kinds of excuses poor predictors come up with after the event! Some of the excuses are really imaginative! It does not include the question of how to find out which proposals are best. That is partly because of the difficulty of defining "best": some people think the best society is one where they are in charge and the rest of us are obedient. Others prefer an inclusive society.

It would be nice to have a set of solid statistical tests of various historical policies. It should be possible to hold a post-event examination of what worked and what did not. The idea of being accountable to reality is not likely to be popular with the experts, or with the non-expert politicians.

Tetlock is not a Hari Seldon, and the book is not as pleasant to read as the Foundation books, but for :ugeek: thoughtful :ugeek: people :ugeek: it is likely to be worth it.
Douglas Hvistendahl
Retired technical nerd
ddhviste@drtel.net

Dumb mistakes are very irritating.
Smart mistakes go on forever
Unless you test your assumptions!