Last March, the Massachusetts Institute of Technology announced that it would again require undergraduate applicants to submit scores on the SAT or ACT. Like many universities, MIT suspended this requirement in 2020 amid the exigencies of the pandemic. But other elite universities are staying test-optional: Yale and Princeton, for instance, will not require applicants this year to submit admissions test scores, and Harvard has even extended its test-optional policy through the cycle starting in autumn 2025.

Stu Schmill, MIT’s dean of admissions and a Class of 1986 alumnus, explained the decision in a blog post for the MIT admissions office. Schmill’s writing is admirably clear, and the policy he defends is welcome. But his post also reveals that even the most clear-eyed university officials still hold dubious ideas about the importance of subjective, nonacademic admissions criteria.

Schmill correctly acknowledges that admissions tests do a good job of indicating college readiness. They predict college grades about as well as high school records do; a combination of test scores and high school records does even better. Writes Schmill: “Our ability to accurately predict student academic success at MIT is significantly improved by considering standardized testing—especially in mathematics—alongside other factors” (emphasis his, as in all subsequent quotations). He notes that all MIT students must take courses in calculus and physics, making math ability vital: “there is no path through MIT that does not rest on a rigorous foundation in mathematics, and we need to be sure our students are ready for that as soon as they arrive.”

Schmill addresses concerns over the bias of the SAT and ACT in less depth. He ignores the question of racial bias on these tests, a bias that doesn’t exist: if anything, racial score differences slightly understate the underlying differences in academic skills. He addresses the charge of socioeconomic bias, but more diffidently than he could, for the allegation is tendentious. Correlations between students’ SAT scores and their parents’ socioeconomic status are far weaker than the correlation between SAT scores and college grades, and they are explained in substantial part by genetic confounding: intelligence and conscientiousness are under significant genetic influence and contribute both to students’ academic achievement and to their parents’ ability to obtain well-paying jobs. Nor do socioeconomic gaps stem from rich students’ better access to private test preparation, which yields only modest average improvements.

Schmill does raise a crucial point: even if the SAT and ACT exhibit socioeconomic bias, the alternatives are worse. Poorer schools, he writes, give students fewer chances to show their abilities through advanced work such as the Advanced Placement and International Baccalaureate courses common in more affluent schools. The SAT and ACT, whatever their faults, give all applicants a chance to prove themselves.

But Schmill doesn’t take his skepticism of alternatives to admissions tests far enough. He specifically defends the “holistic” admissions system that MIT shares with almost all selective colleges—a system that probably makes student selection less accurate and more biased, without any compensating improvements.

Holistic admissions processes consider test scores only to a limited extent, often just to make an initial list of candidates who clear a minimum bar, and base final decisions on softer considerations. “We do not prefer people with perfect scores,” writes Schmill; “indeed, despite what some people infer from our statistics, we do not consider an applicant’s scores at all beyond the point where preparedness has been established as part of a multifactor analysis.”

Indifference to perfect scores may be justified, if only because the SAT and ACT don’t distinguish well among top performers: adjustments to the SAT have made top-range scores far more common, especially in math. In 1984, only about 0.9 percent of SAT takers scored above 1,400 out of a possible 1,600; today, roughly 7 percent do, and about 2 percent earn a math score of 780 or above, out of 800.

But it does not follow that even a better test could do no more than assess whether applicants meet a minimum standard: highly selected groups still show stark differences in intellectual performance. This spread of abilities shows up, for instance, on difficult examinations such as the William Lowell Putnam Mathematical Competition, a twelve-question prize examination taken every December by several thousand math students from selective American and Canadian universities. In a typical year, the top few Putnam contestants earn near-perfect scores, several dozen earn at least half the possible points, and the majority fail to get full credit on even a single question.

Success in intellectual careers also correlates strongly with cognitive ability well into the top percentile, as shown by projects such as the Study of Mathematically Precocious Youth, which tracked into adulthood thousands of subjects who had scored in the top 1 percent of mathematical ability at age 13. Even within this rarefied cohort, the top-scoring quarter was 4.5 times as likely as the bottom quarter to have written a peer-reviewed publication at age 38 or later and 18.2 times as likely to have a STEM doctorate. Even elite colleges, therefore, could select better students with competitive academic criteria that distinguish among the best candidates.

Granted, admissions tests are not infallible, a fact often taken as grounds for trying to compensate for their imperfections by using subjective factors. Writes Schmill: “We can never be fully certain how any given applicant will do. . . . However, our research does help us establish bands of confidence that hold true in the aggregate, while allowing us, as admissions officers, to exercise individual contextual discretion in each case.” Schmill mentions internal research purportedly finding that successful MIT students not only need good test scores but “also need to do well in high school and have a strong match for MIT”—a quality, says the MIT Admissions website, that includes such traits as “alignment with MIT’s mission,” “collaborative and cooperative spirit,” and “risk-taking.” MIT, like other colleges, judges these traits subjectively, via extracurricular activities, recommendation letters, interviews, and application essays.

But there’s little evidence that these subjective considerations help colleges choose better students. In fact, almost no published research examines the predictive value of nonacademic admissions criteria outside medical education, likely the only field in which such criteria are common outside the United States. This research does find that nonacademic criteria may be useful if evaluated carefully and objectively. Evidence from the Netherlands shows that students whose participation in certain high school extracurricular activities earns higher marks on a strict rubric are less likely to drop out of medical school (in Europe, students begin medical school immediately after high school). Another review finds evidence that personality factors affect medical-school performance, and that quantified assessments of these factors, such as “situational judgment tests” and structured “multiple mini-interviews,” can improve student selection. But the review finds no value in softer measures of personality such as unstructured interviews, personal statements, and reference letters—all important aspects of American college admissions. In any case, selection methods that work for medical school may not work elsewhere: medical education is unique in its split between classroom and clinical learning and in its heavy demands on students’ memory and situational judgment.

Additional evidence comes from studies of personnel selection in industrial psychology, which usually examine hiring but also apply to college admissions. One review by the psychologist Scott Highhouse compiles evidence that people vastly overrate their ability to judge others, and that formulas using only objective data make better personnel selections than human judges do. For instance, though most corporate HR professionals know that direct tests of ability can predict job performance, they also believe that interviews are essential for assessing putative intangible factors, and they place much more weight on interviews than on paper-and-pencil tests. But unless they are rigidly structured, job interviews are highly inaccurate on their own and add little information to more objective measures.

Highhouse cites another study, from 1943, showing that the college performance of a sample of high school graduates was better predicted by a formula incorporating only their admissions test scores and high school records than by professional college counselors, who could interview the students and consult not only their academic records but also supplementary data such as personality tests. An essay by the hardware engineer Dan Luu likewise notes several costly biases in high-stakes competitive settings, such as the selection of professional athletes and highly paid engineers, that persisted for decades because managers trusted their intuition over objective measures and favored candidates based on irrelevant aspects of appearance and self-presentation.

Many admissions officers would respond that they can’t just choose the best scholars: colleges need students with good character who contribute to the campus community. This idea, as the sociologist Jerome Karabel details in The Chosen, his history of Ivy League admissions, has an ugly past, originating in the somewhat anti-intellectual ethos of the Gilded Age northeastern upper class, for whom good character meant public spirit, social grace, and, above all, athletic achievement. Nevertheless, the Ivy League admitted all students who passed its entrance examinations until the 1920s, when admissions officers began evaluating applicants’ character and requiring them to attend interviews. The reason, described explicitly in university officials’ writings to one another, was anti-Semitism: these subjective criteria could disguise a cap on admissions of Jewish students that most of the public, even in the 1920s, considered unfair and un-American. The definition of good character itself had racial connotations: WASPs disliked Jews chiefly for what they saw as a lack of social polish, and they espoused theories that Jews and southern Europeans were inferior to “Nordic” Anglo-Saxons in courage and physical constitution.

Karabel points out that the controversy over Asian-American discrimination has followed similar lines. As early as 1984, when Asian civil rights organizations were beginning to complain about admissions discrimination, an internal investigation at Brown University found that Asian applicants received far lower ratings on subjective personality factors, a state of affairs that the investigation attributed to “cultural bias and stereotypes which prevail in the Admissions Office” and promised to fix. Biased evaluations of these soft factors also form one of the main allegations in a high-profile lawsuit against Harvard.

A more fundamental objection to this notion, though, is that universities are educational institutions. Their purpose is to spread knowledge, on the understanding that educated citizens will be more useful and better able to address social challenges, not to build a utopian campus community or reward teenagers for virtue. They best serve this purpose by choosing the students who can benefit most from their scholastic offerings.

In any case, college applications offer a distorted look at applicants’ character: they include lists of extracurricular activities chosen to impress colleges and essays heavily edited, if not ghostwritten, by adults. Institutional efforts to discern applicants’ character from such sources show discouraging results. The Ivy League, for instance, weighs nonacademic factors much more heavily than MIT does, such that some Harvard students during my time there struggled with science general-education classes about as challenging as high school physics. Nevertheless, observers have long complained that the bulk of Ivy League students, whatever claims to high-mindedness they made when they applied, end up making conventional career choices, heading to graduate and professional schools, consulting and investment-banking firms, and large tech companies.

The same can be said of the Rhodes Scholarship, with its long list of character qualifications such as “truth, courage, devotion to duty” and “moral force of character and instincts to lead.” Rhodes selection committees evaluate these criteria through reference letters, interviews, and social events at which interviewers can ask anything they want—a sure recipe for unreliable, biased judgment. Adam Mastroianni, a psychology researcher and 2014 Rhodes Scholar, writes that the Rhodes Scholars who survive this gauntlet “are often charming conversationalists and sometimes bad people.” Consider Jonah Lehrer, a 2003 Rhodes Scholar whose career in science journalism was halted less than a decade later by discoveries that two of his books contained extensive plagiarism and fabricated quotations.

Nonacademic criteria can show cultural and class biases of their own. Insofar as Harvard’s dislike of Asian applicants’ personalities isn’t a mere smokescreen for racial preferences, for instance, it may reflect how those applicants’ parents steer them into academic pursuits and other serious activities, such as classical music, often in the mistaken belief that colleges value such pursuits. These are worthy endeavors, but not ones that admissions officers, who often remark among themselves on how much Asian applicants resemble one another, consider proof of a unique personality. But Asian applicants are not alone: the subjective aspects of college applications are largely a test of cultural capital.

Some relatively perceptive progressive critics have long made this argument. One recent paper points out that selective colleges claiming to place greater weight on subjective admissions criteria don’t enroll more low-income or “racially marginalized” students, and it lists possible sources of bias: students accustomed to speaking with well-to-do adults as relative equals, for instance, may come across as more polished in interviews. Cultural capital also includes knowing how to manipulate bureaucracies such as admissions offices. Such maneuvering ranges from the mildly distasteful (one bit of advice that circulated among parents at my high school was to find excuses to contact admissions offices and keep one’s name fresh in application reviewers’ memory, a practice that, if it actually works, rewards a willingness to waste others’ time) to the more corrupt, such as getting audiences with senior officials through backchannels.

The application essay also tests cultural capital. On the surface, essay prompts ask applicants to describe a special interest or a formative experience or influence, and admissions departments often assure applicants that essays are just a way to get a better sense of their lives. But James Warren, a professor of English at the University of Texas–Arlington, points out that those assurances, and the prompts themselves, obscure the essay’s true purpose: “Most prompts ask applicants for personal narratives, but the essays actually function as arguments that make a case for the applicant’s potential as a college student.” Application essays must also appeal to a specific audience with values that may not be the students’ own, requiring unaccustomed rhetorical sophistication from students who are used to writing impersonal academic essays and “perceive . . . writing as an attempt to produce ideal, error-free texts” judged without reference to an audience.

Wealthier students not only stand at less cultural distance from their essay readers (typically recent graduates of the college that employs them) but also can get help from adults who understand the unspoken goals of college essays, including guidance counselors whose students often apply to selective colleges and parents who attended selective colleges themselves. To show the benefits of explicit instruction in these unspoken goals, Warren ran an experiment with an English class at a low-performing high school in the Dallas–Fort Worth area, built around a unit on college application essays that assigned students to answer a prompt used by many colleges in Texas. Some sections of the class formed a control group and learned from handouts and “how-to guides” like those students might find on the Internet themselves. The experimental group had classes with Warren himself, who explained the situation-specific nature of rhetoric and the unspoken argumentative purpose of college essays, warned about the deceptiveness of much common advice, and helped students plan how to appeal to admissions officers’ values. Warren’s college students also gave the experimental group feedback on their essay drafts, focusing on the essays’ effectiveness as arguments. The result: two admissions officers at the University of Texas–Austin who evaluated the students’ essays gave the experimental group a mean score 0.45 points higher, on a four-point scale, than the control group’s. Their feedback on the low-scoring essays often praised the essays’ narrative skill in responding to the literal prompt but complained that they missed the unwritten goal of showing the authors’ college potential.

In short, Stu Schmill is right that admissions tests such as the SAT and the ACT are valuable tools for college admissions. But he doesn’t take this observation far enough. The point is not just that admissions tests are useful but that very little else is, beyond other academic measures such as high school records. Even an elite institution such as MIT could select a far more able incoming class, capable of handling a more rigorous curriculum and benefiting more from MIT’s resources, if it based admissions purely on competitive measures of academic performance.

Subjective considerations of nonacademic criteria make college admissions more inaccurate and biased, while providing little information about students except their ability to tell a few bourgeois adult strangers what they want to hear. College admissions would be both more efficient and more fair if they were based on academic criteria alone.
