
Re: Adam Grant on Sex Differences and the MBTI

Adam Grant is an organizational psychologist and Wharton professor. In 2013, he wrote a piece debunking the Myers-Briggs personality test, which spawned numerous other articles criticizing the Myers-Briggs.
With regard to sex differences: In 2017, the Google software engineer James Damore wrote a piece arguing that the reason there are more men than women in tech jobs is not simply culture and discrimination, but that biological sex differences play some role as well. Grant also wrote a reply to that piece, which went viral after it was shared by the Facebook executive Sheryl Sandberg.
This video will examine Grant’s arguments and methodology across both of these pieces. He is a credentialed psychologist and professor, so his arguments must be pretty well-researched, right?
First, his criticism of the Myers-Briggs test. In his piece debunking the Myers-Briggs, which, we repeat, has been cited far and wide by rank-and-file journalists, Grant does several eyebrow-raising things:
• Grant faults the MBTI for not being able to predict job performance. But the MBTI itself says that it doesn’t measure job performance.
• Grant seems to think that ‘Feeling’ in the Myers-Briggs is about emotions and that MBTI preferences predict aptitude in given fields. Neither of these things is claimed by the Myers-Briggs, and anyone who takes but a cursory look at the instrument will know this, since its publishers are very careful to point it out. How can a man who is putting his professor and psychologist credentials behind his claims not know these basic things about the instrument he is criticizing?
• But more alarmingly, Grant cites several scientific studies in his piece. From these, he cherry-picks parts of sentences so as to make it look as if the studies conclude that the Myers-Briggs is meaningless, which is what he wants them to say. But if you actually read the studies, they say pretty much the opposite. For example, Grant says that the Myers-Briggs is meaningless, but the studies he refers to say that “…the four MBTI indices did measure aspects of four of the five major dimensions of normal personality…” and that “even critical reviewers … see promise in the instrument.” Again, how can a professor writing an article within his field of expertise misrepresent the research so gravely? If a student did that, he would get an F on his assignment, or worse, be expelled from his university for scientific dishonesty.
• Grant also compares the Myers-Briggs to palm readings and horoscopes. But while scientific studies have repeatedly found that there is no validity to these practices, every study ever conducted on the Myers-Briggs has found it to have an okay-to-good level of validity. Again, it should raise an eyebrow or two that a professor so blatantly misquotes and misconstrues the research within his field.
So much for Grant’s criticism of the MBTI. What about his take on sex differences?
The Google piece, written by software engineer James Damore, argues that the gender disparity in tech jobs is not just cultural but also influenced by biological factors. It generated quite the controversy. Four scientists have reviewed the piece and found that it gets the research it draws on mostly right. But Grant doesn’t think so. He cites a meta-analysis of sex differences which concludes that 78% of the differences between men and women are small or close to zero. But again, if you look into it, the study doesn’t back up his claim.
The study includes a lot of variables like “likelihood of smiling when not observed,” “leadership style” and so on. On these variables, men and women are pretty similar. But when we get down to the parts of the analysis that are relevant to whether sex differences may also account for part of the dearth of female software engineers – visuospatial skills, mechanical ability, computer skills, and so on – the study actually finds substantial differences between men and women.
What the meta-analysis that Grant is relying on does is average out all of the observed parameters and then conclude that men and women are 78% identical. But many of the parameters studied are unlikely to have an influence on a person’s propensity to become a software engineer. With the same methodology, I could prove that a random guy in the street is as good at chess as Kasparov or Magnus Carlsen: All I would have to do is measure them on irrelevant parameters like “likelihood of smiling when not observed,” “leadership style” and so on. If I include enough of these irrelevant parameters, the relevant ones like logical reasoning, visuospatial skills, and memory will eventually be averaged out as they are dragged towards the mean by the irrelevant ones. The parameters actually relevant to chess aptitude, and to the likelihood of being interested in chess, would be drowned out by noise, and I could confidently conclude that any guy in the street was more similar to Kasparov and Carlsen in his chess ability than he was different.
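To make the dilution effect concrete, here is a minimal sketch in Python. The effect sizes are purely hypothetical and not taken from the meta-analysis; the point is only to show how a few large differences on relevant parameters get swamped once enough near-zero differences on irrelevant parameters are thrown into the same average.

```python
# Hypothetical effect sizes (Cohen's d) illustrating the averaging problem.
# Three parameters relevant to chess aptitude show large differences...
relevant = [1.5, 1.4, 1.6]    # e.g. logical reasoning, visuospatial skill, memory
# ...while 27 irrelevant parameters show differences close to zero.
irrelevant = [0.05] * 27      # e.g. "likelihood of smiling when not observed", etc.

effects = relevant + irrelevant
mean_effect = sum(effects) / len(effects)
share_small = sum(1 for d in effects if abs(d) < 0.35) / len(effects)

print(f"Mean effect size across all parameters: {mean_effect:.2f}")      # ~0.20, i.e. "small"
print(f"Share of parameters with small differences: {share_small:.0%}")  # 90%
```

The average comes out "small" even though every single parameter that actually matters shows a large difference.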
With his criticism of the Myers-Briggs, Grant misquoted the scientific studies he was citing. He does the same here. He says that the sex difference in upper body strength between men and women is found to be “large” in that study, but actually, the difference is only found to be moderate. And again, he misleads his readers by misconstruing what the study says: He grants that the sex difference in upper body strength is significant, but he neglects to mention that the same study finds an even greater difference between men and women in terms of mechanical reasoning ability. Grant agrees that if one were to assemble a football team, the difference in upper body strength would be a significant parameter. By his own reasoning, if one were to assemble a team of software coders, the differences in mechanical reasoning and computer skills would similarly be significant, so the very study he is citing actually backs up what the Google engineer wrote in his piece, and not Grant’s debunking of it.
So to sum up, Grant has gravely misrepresented the studies he is citing across two different instances. I am not familiar with Grant’s books or other research, so I can’t say if this is something that’s characteristic of him in general. But whether it is or not, these two instances are serious enough in their own right. In several European countries, where scientific conduct is more closely monitored by control bodies and tribunals, these instances might have gotten Grant into hot water and prompted a general review of him as a scientist. And as said, if he were a student, using this methodology would almost certainly have gotten him flunked or expelled. He certainly isn’t someone whose pieces one should swallow wholesale simply because he is a professor.

Published in CT