8 Comments
titotal:

Great post!

I think there are wider problems that extend to non-money-based systems: as soon as people start making decisions based on prediction markets, people will start deliberately distorting them to achieve desired goals. For example, if you want AI safety funding to increase, why not deliberately post an overoptimistic forecast for AGI arrival on Metaculus? That question is weighted the same as far easier ones in your forecasting record, so it won't even hurt your top-forecaster rating much.

Bob Jacobs:

Yes, though these can be mitigated somewhat by more sophisticated scoring systems. The simplest example is to also measure how "controversial" a prediction is: if people largely predicted something wouldn't happen while you successfully predicted it would, you'd get more points (some platforms already do a version of this). This can be extended with a tagging system that records in which *domains* you have a good track record, which would prevent people from building up a "good track record" by, say, successfully predicting every day that the sun will rise.
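A minimal sketch of what such controversy-weighted scoring could look like, in Python (purely illustrative: the surprisal-style bonus is my own assumption, not any platform's actual formula):

```python
import math

def controversy_weighted_score(your_prob: float, crowd_prob: float, outcome: bool) -> float:
    """Score a binary forecast, rewarding correct contrarian calls.

    Illustrative only: a standard log score plus a bonus equal to the
    crowd's surprisal at the actual outcome, so beating the consensus
    pays more than echoing it. Not any real platform's formula.
    """
    p_you = your_prob if outcome else 1 - your_prob      # prob you gave the realized outcome
    p_crowd = crowd_prob if outcome else 1 - crowd_prob  # prob the crowd gave it
    base = math.log(p_you)      # log score: 0 is perfect, more negative is worse
    bonus = -math.log(p_crowd)  # large when the crowd thought this outcome unlikely
    return base + bonus

# "The sun will rise" (you and the crowd both at 99.9%) earns ~nothing:
print(controversy_weighted_score(0.999, 0.999, True))  # ~0.0
# Correctly calling an event the crowd put at only 10% earns a real bonus:
print(controversy_weighted_score(0.9, 0.1, True))      # ~2.2
```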

Although we might still expect some group bias here, so perhaps we also need something akin to the "birdsong" algorithm.

What I'm not arguing for is "futarchy" (governments relying solely on prediction markets/prediction platforms). I envision a more pluralistic future, using a combination of academic surveys, prediction platforms, hired researchers, and so on. They will all have their biases, but by taking those into account and comparing them, they will collectively be less biased.

In any case, prediction platforms will be better than prediction markets since there's less chance people will spread misinformation, act on harmful incentives, or gamble away their life savings.

titotal:

Yeah, I'd be happy with a system where prediction platforms are used as an additional tool among many to evaluate claims, as long as they aren't used as a replacement for those other methods.

Part of doing this successfully would be a clear-headed understanding of their limitations, so I'm glad you wrote this up!

John of Orange:

I worked for Metaculus briefly and disastrously. They were quite obsessed with this idea of allocating decisions and funding based on Metaculus questions and simply would not understand the obvious point you are making here. I think it ultimately stemmed from Anthony lacking the time to devote to "his" project while being deeply ignorant of how to properly hand it off to someone. So we ended up in a situation where the out-of-touch founder would have these ideas he was so sure about, and all the crazy then-CEO cared about was placating the founder and similar outside stakeholders, so there was no clear process for reversing these stupid decisions.

I have to tell you that prior to working at Metaculus, I was non-EA but for the most part favorably disposed to them, or to who I thought they were. After seeing what it was actually like, I have come around to the conviction that the EA community is a massive disgrace and all of their supposed beliefs are hypocritical half-beliefs in the manner of a religion. When you give money to an EA organization it is likely to go to sending people to fake conferences, or hiring their buddy's project for services they would never otherwise have hired, and so on. The whole thing is functionally just a competition for places, most of which will never be obtained, pretending to be some kind of philosophical social movement. This is actually pretty obvious as soon as you get any kind of serious handle on these places, and the only reason EAs don't know it is that to be an EA is almost definitionally to be an inexperienced and impractical kid.

John of Orange:

"hiring their buddy's project for services they would never otherwise have hired" is a direct reference btw to how Metaculus paid money to Verity, a worse version of Ground News, for useless services. Verity is the project of Max Tegmark, who is Anthony's longtime friend. Now that the Future of Life Institute has all that meme coin money, Tegmark has been able to hire his wife to supposedly run it. It's likely legal but it's all so blatantly and disgustingly corrupt.

Bob Jacobs:

Thanks for the insights! Always interesting to hear from an insider. I agree that EA tends to be very inner-circle-y, in fact I made a whole post about it: https://bobjacobs.substack.com/p/how-democratic-is-effective-altruism

Though I will say that the "global development" and, to a lesser extent, the "animal welfare" charities that EAs promote are still pretty good. It is, in my opinion, mostly the "meta-EA" and "longtermism" charities that people should be wary of.

John of Orange:

Right, GiveWell was always my model of what EA was before I knew what EA really was, and back when I liked EA. In my view it's basically an indulgence-buying project to try and rub off some perceived virtue on the weirdos who want to fill the lightcone with shrimp or whatever it is that they're doing.

Jack Whitcomb:

Fascinating post. I didn't know how often outcomes are deliberately influenced to profit from these markets. Still, it seems like these problems are solvable. Betting markets themselves would profit from banning people who can directly influence an outcome from participating, or at least from forcing those people to make all of their trades without anonymity.
