13 Comments

Hmmmm as the person who created the "Tax Policy" tag - it might just be a tagging problem rather than the posts not existing. I probably had to put in an hour or so finding relevant posts and writing up the wiki.

It sounds like you might be a good person to do the same for the social science tags you suggested?

Also, on increasing the number of women in EA: I created the London EA Women and NBs group chat and ran monthly meet-ups for about a year. I don't think direct "recruitment" of women would work; it's more about encouraging people (especially women) to lead and build the EA they want to see.

Related: https://forum.effectivealtruism.org/posts/Pz7RdMRouZ5N5w5eE/ea-should-taboo-ea-should

author

I didn't suggest "direct recruitment of women" (nor did I say "EA should"), I suggested "reading and recruiting more from other disciplines too."

I've had incredibly bad experiences with the tag system. When I removed Elon Musk and SBF from the EA people page with a comment explaining why (*after* the FTX scandal), it got downvoted, and I had to ask a friend to upvote it. When I added a tag for "abuse" and wrote a whole wiki-page with psychology literature and citations, it got downvoted and removed with the excuse that EA didn't deal with that topic. I had listed numerous articles that the tag could be added to, and even now when you search "abuse" on the EA forum you'll get 544 post results. I personally suspect the real reason it got deleted was that among the posts I wanted to add the tag to was the post "An Exploration of Sexual Violence Reduction for Effective Altruism Potential" by Kathy Forth, an EA member they'd prefer people forget about (because of the assault and subsequent suicide).

I also shouldn't be the one to make the tag, since I left EA after the Manifest scandal made it clear the EA community would continue to deplatform leftists while platforming fraudsters and race "scientists" and the like. I really should have left after my years of unpaid and unacknowledged labor for EA got rewarded with a high-profile EA publicly, indirectly (and falsely) accusing me of doxxing people, without him facing any repercussions, but I kept going until the day the Manifest people said they would invite some of those "scientific racists" to speak again next year, which was the straw that broke the camel's back.


Hmmm I don't think this is entirely fair. I haven't met anyone in EA (other than young students) who doesn't read outside of EA, and idk, I think you are suggesting that more funding / volunteer time should go to speculatively finding people in other fields.

Tbh I would agree with their editorial decision on a tag called "Abuse": from looking at the 544 articles, it's too wide-ranging and could include Animal Welfare, Domestic Violence, Corruption, Political suppression of women in misogynistic countries, Criticisms of EA culture, Reducing risks from malicious/abusive actors, etc. I doubt they'd have an issue with a Sociology tag, based on their Wiki FAQ: https://forum.effectivealtruism.org/topics/ea-wiki-faq

I'm not sure I'd agree that Manifest is EA - maybe EA-adjacent.

Who are you expecting recognition from? Idk, but I'd say being able to donate to effective charities and work directly on pressing problems is a massive privilege, and there's a responsibility for privileged people to use it to improve the world.

I think the EA forum is a useful resource for working out the best ways to do that, so I want to contribute since I found it helpful, but it honestly doesn't owe me more than that :/


Independent of our discussion, I found this comment from Will MacAskill, which is relevant. https://forum.effectivealtruism.org/posts/jSPGFxLmzJTYSZTK3/reality-is-often-underpowered?commentId=KvYM2XPG7mD7aFvv6

author

I don't think we should update too hard on this. He states his viewpoint that EA does a lot of qualitative research when relevant, and then links one case study by an EA as an example to back it up. Which in this context is totally fine and normal; it's a comment, not a post or study... but it's still only one example.


Some thoughts. If it's relevant: I have a maths background, am male, and am of South Asian ethnicity.

- I believe there is a strong selection effect here. The type of people who like or are comfortable with the reasoning that is core to EA and/or rationality are also far more likely to do subjects like econ / comp sci / philosophy. Types of reasoning include decoupling or quantifying costs/benefits.

- It is plausible that there are no easy insights to be gained from the social sciences. Here is a pretty damning perspective on social science by a person who reviewed 2,000+ papers for a replication project: https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/

- I would be curious to know what the stats are like in EA student groups. Which kinds of people are initially attracted at a student fair, or make the first engagements? If it skews one way, what are the reasons for this?

- Have you tried reading / recruiting from non-standard backgrounds? If yes, I would be curious to read about your experiences!

author

Interesting post. I've skimmed over some sections, but I think I'll be reading it again soon. Thanks for sharing. However, I did notice it mostly doesn't apply to what I wrote. Let's assume his premises and methodology are all correct (I haven't checked, it's not peer reviewed, and he's not an academic, so that's a bit bold, but why not). He draws the conclusion that economics, sociology, education, and "other" are good at replicating, and that social psychology, political science, criminology, psychology, evolutionary psychology, cognitive psychology, management and marketing are bad at replicating.

Now let's look at the disciplines I mentioned: economics, sociology, anthropology, gender studies, geography, political science, and history. As you can see, I didn't mention psychology, partly because I'm well aware of the problems in that field (and in fact, my post shows that psychology is worse at citing other sciences than economics), and partly because I only consider it a semi-social science. I didn't mention marketing because I know the problems it has, and I didn't mention management because I didn't even think of it (but wouldn't have included it if I had thought of it). That leaves us with economics and sociology, which he says are good, and political science, which he says is bad. Okay, what about the others I mentioned: anthropology, gender studies, geography, and history? They're not part of this paper, or if they are, they're part of "Other". If it's "Other," it's mostly good, and if it isn't, then it mostly doesn't seem to apply to what I wrote. And importantly, these are disciplines that are much better at addressing his seventh point; they do create more theory.

Also, this is not how almost anyone actually reads about other disciplines. Mostly, we read books about a topic that only show a couple of the best conclusions the field has reached in the past decades, aka mostly stuff that has been replicated. A given criminology paper has a low chance of being replicated. So what? Focus on reading the papers that were replicated; that's what I would recommend anyway.

author

Thanks Lovkush, I shall read that paper.

Thanks for adding your demographics, but for future commenters, it's not necessary. Having data on demographics is important for large groups since it informs dynamics and biases, but that information is much much less useful when trying to draw conclusions about individual people.

I study "moral science" myself, which is 50% philosophy and 50% various social sciences including economics, psychology, sociology, etc. I used to run EA Ghent until today, actually. I recruited mostly philosophy and moral science students, since I knew them best. But using 'quant people are more interested in EA, therefore it's more relevant for EA' as an argument is a bit strange, since it assumes the conclusion (not that you were making that argument; that's just one interpretation). Perhaps the methodology baked into EA is suboptimal, and EA has mistakenly become unattractive to people who would be high impact.

One problem I notice across these different research areas and researchers is the use of different methodologies. I used to be pretty much die-hard pro-quantifying costs/benefits too, but have become less so as I learn more about other disciplines. For example, my latest blog post is about how we managed to enshrine animal welfare in the Belgian constitution. How do you quantify that? Well, you don't. The effects are so broad, multifaceted and indirect that they're impossible to quantify. With things like medical interventions, we can run an RCT (which the EA framework loves), but the same cannot be done with constitutional changes, since we don't have a "control Belgium". RCTs are great, but they also have drawbacks: they are expensive and measure narrow, direct, continuous effects, while they're impractical for broad, indirect, or discontinuous effects. See, e.g., "the problem of marginalism": https://eprints.gla.ac.uk/289530/1/289530.pdf

But obviously, changing the constitution has a big impact; just because we can't quantify it doesn't mean it doesn't exist. And that's where other methodologies come in. E.g. the historical method will tell you that yes, changing a constitution has indeed had big impacts on the history of countries. It can tell us about big trends, while RCTs can tell us about tiny, discrete trends (which, combined with the cost, does mean that RCT-based interventions privilege the status quo more). Focusing so much on quantification can lead us towards the McNamara fallacy: https://en.wikipedia.org/wiki/McNamara_fallacy


Thanks for the detailed responses, Bob!

- I agree with you that the fantasticanachronism post does not challenge the underlying methodologies, tools or reasoning styles, or the strongest parts of the fields.

- "Perhaps the methodology baked into EA is suboptimal, and EA has mistakenly become unattractive to people who would be high impact." Always interesting to consider what the blindspots are of EA! Do you already know styles of reasoning that are missing in EA?

- "How do you quantify that? Well, you don't." "E.g. the historical method will tell you that changing a constitution has had big impacts." I use 'quantify' in a broad sense to include things like "what percentage of historical constitutional changes had big impact" (which is impossible to answer literally as stated, but is at least the kind of thing I'd like to get a sense of), rather than 'quantify = RCT'.

- I don't think constitutional changes are 'obviously good'. In your case, my instinct is that it is good (I am between vegetarian and vegan, so yay for animal welfare progress!). But policy/legal changes could have unexpected negative consequences - e.g. imagine this makes it even harder to do any building or construction because there is extra bureaucracy around the effect on insects living in the ground being built on.

- Contrary to the tone in your comments, EA is open to many kinds of reasoning / quantification other than RCTs. One example: there are no RCTs to measure the impact of x-risk work. A second example: if you listen to EA (adjacent) podcasts (like 80k or ClearerThinking), you will be exposed to a large range of ideas. E.g. a big one I recently listened to on 80k is with Mushtaq Khan - highly recommended if you're interested in policy!

- I re-read your OP and my comments have already veered off track (but hopefully in a way that is enjoyable!). Tackling your OP directly: it is not clear exactly what your critique of EA is, nor precisely which problems from economics you think EA has inherited. E.g. you have stats about how economists undervalue multidisciplinary research, but I don't think EA suffers from that problem. My reading is you have two main claims. First, EA undervalues certain fields of social science - I think the best argument for this would be to find examples of reasoning/methodology from those fields that are being missed by EAs. Second, EA has bad recruitment strategies, which leads to under-representation of those fields and of various demographic groups. I think this is likely, but for me what would be interesting to know is how EA could improve its recruitment.

author

> Always interesting to consider what the blindspots are of EA! Do you already know styles of reasoning that are missing in EA?

Well, part of the problem is that I've been steeped in EA culture too, so I'm still in the process of figuring this out myself. The main thing I would say is to not use one methodology for everything; switch methodologies depending on what is appropriate for the subject. E.g. when trying to understand whether films have become more inclusive, we could grab EA's trusty regression analysis, and that will tell us something, but why not grab narrative analysis, since it's specifically designed for that kind of thing?

> I use 'quantify' in a broad sense to include things like "what percentage of historical constitutional changes had big impact" (which is impossible to answer literally as stated, but is at least the kind of thing I'd like to get a sense of), rather than 'quantify = rct'.

Yes, an RCT is the most extreme example, but the broader point is that a lot of historical analysis is not quantitative; it's qualitative.

> Contrary to the tone in your comments, EA is open to many kinds reasoning / quantification other than RCTs. One example is there are no RCTs to measure impact of x-risk work. Second example is that if you listen to EA (adjacent) podcasts (like 80k or ClearerThinking), you will be exposed to large range of ideas. E.g. big one I recently listened to on 80k is with Mushtaq Khan - highly recommended if you're interested in policy!

The bulk relies on certain methods/disciplines. The fact that you said 'quantification' already points in that direction. What about qualitative research, like what most historians are doing? If you like black humor, read about the origins of the term 'McNamara fallacy' for the types of things that can go wrong with over-relying on quantification.

> I dont think constitutional changes are 'obviously good'.

I don't know where that quotation comes from, but it's not from my comment. I said it obviously has a big impact, not which direction that impact will be.

> "My reading is you have two main claims. First, EA undervalues certain fields of social science - I think the best arguments for this would be to find examples of reasoning/methodology from those that are being missed by EAs."

Is the beginning of my post not evidence for it?:

> When you scroll through the Effective Altruism Forum you’ll find lots of economic analyses.
>
> There is a tag for Economics with 161 posts, as well as one for Economic growth (130 posts), Economic inequality (10 posts), Economics of artificial intelligence (28 posts), Welfare economics (17 posts), Tax Policy (21 posts), Markets for altruism (33 posts) and many other subcategories.
>
> Now I really like economics and I love economic analyses, but I think it’s very surprising how little presence the other social sciences have. The Social science tag only has 27 posts and there is no tag for sociology, anthropology, gender studies, geography, political science, or many other social sciences.
>
> When you look at the People page of the EA forum, you’ll find lots of economists (and entrepreneurs) but almost no other social scientists. You will also notice that the vast majority of the people on this page are white men. This has been this way for a long time; this is what the page looked like a year ago:


> "a lot of historical analysis is not quantitative; it's qualitative"

Can you give an example of a qualitative research tool that EAs are under-using? Making it concrete will make it easier for me to understand.

> "Is the beginning of my post not evidence for [EA undervaluing certain fields of social science]?"

It is some evidence, but not strong evidence. The same reasoning could be used to show that EA undervalues every possible field of study, which is not what you are arguing for (e.g. you mentioned previously you would not include management in your list). My question is why you believe that EA is undervaluing "sociology, anthropology,..." in particular, and a good way to show this is to point to specific reasoning or tools in those fields and how EAs could use them to further their goals.

To emphasise, I am not saying they don't exist, but it would be useful to have concrete examples. I'm also fine if you do not know a good example off the top of your head, but I'd also like to know if that is the case.

> "I said it obviously has a big impact, not which direction that impact will be."

First, apologies for misunderstanding. In many contexts 'impact' implicitly means 'positive impact' (e.g. 'I would like to have an impactful career'), so it was a simple misunderstanding on my part. Second, why did you say that historical analysis can show constitutional change can have large impacts (but not the direction of impact)? Why is this useful?

author

> Can you give an example of a qualitative research tool that EAs are under-using? Making it concrete will make it easier for me to understand.

Sure! I already mentioned the historical method and narrative analysis, which are two big ones, but a specific one I'm currently looking at is action research, since it would allow us to do (prospective) pro-social actions while we research. (And again, I'm not advocating that we replace everything with them, just that we add them to our toolbelt)

> why did you say that historical analysis can show constitutional change can have large impacts but not the direction of impact?

I was merely careful in my post not to prematurely claim that the constitutional change we implemented will be a positive one. The historical method can show us the direction of impact, but only when paired with a moral framework. So, for example, the historical method can show us which policies lead to better lives for animals, but whether that's a good direction depends on whether you use a moral framework that considers animals moral patients.

> My question is why you believe that EA is undervaluing "sociology, anthropology,..." in particular, and a good way to do this is to point to specific reasoning or tools in those fields and how EAs could use them to further their goals. To emphasise, I am not saying they don't exist, but it would be useful to have concrete examples. I'm also fine if you do not know a good example off the top of your head, but I'd also like to know if that is the case.

I'll do you one better: I'll give you "explorable explanations" on social science topics I think you will find interesting. These are brief, fun, interactive explanations of the topic:

Here's one on a topic in the field of computational sociology: https://ncase.me/polygons/

Here are two on topics in the field of election science: https://ncase.me/ballot/ and http://polytrope.com/district/

Here's one on a topic in the field of network science: https://ncase.me/crowds/

Jun 20 · edited Jun 20 · Liked by Bob Jacobs

This is great! Thanks very much. Don't have time now to look into details but appreciate the back and forth. I hope someday you'll be comfortable identifying as EA again. Seems like you have a valuable perspective to add.
