Discussion about this post

titotal:

Great post!

I think there are wider problems that extend to non-money-based systems: as soon as people start making decisions based on prediction markets, people will start deliberately distorting them to achieve desired goals. For example, if you want AI safety funding to increase, why not deliberately post an overoptimistic forecast of AGI arrival on Metaculus? Since that question is weighted the same as much easier ones in your overall record, the distortion won't even hurt your top-forecaster rating by much.
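The dilution effect described here can be sketched numerically. This is a minimal illustration with made-up numbers, using the standard Brier score as the accuracy metric (Metaculus's actual scoring rules differ in detail):

```python
# Hypothetical sketch: how little one deliberately distorted forecast
# moves an average Brier score. All numbers here are illustrative assumptions.

def brier(forecast: float, outcome: int) -> float:
    """Brier score for one binary question (lower is better)."""
    return (forecast - outcome) ** 2

# Suppose a forecaster answers 50 questions, each honestly at p=0.8
# on events that do occur (score 0.04 per question).
honest_scores = [brier(0.8, 1) for _ in range(50)]

# Now swap one honest forecast for a deliberate distortion:
# p=0.95 on "AGI by year X" when the event does not occur (score 0.9025).
distorted_scores = honest_scores[:-1] + [brier(0.95, 0)]

honest_avg = sum(honest_scores) / len(honest_scores)
distorted_avg = sum(distorted_scores) / len(distorted_scores)
print(f"honest average:    {honest_avg:.4f}")
print(f"distorted average: {distorted_avg:.4f}")
print(f"penalty:           {distorted_avg - honest_avg:.4f}")
```

Even a maximally wrong answer on one question shifts the 50-question average by under 0.02, which is the dilution the comment points at.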

2 more comments...
