This is an older article by Josh Gibbs, but the claim made here is pretty crucial to understand:
While “We teach students how to think, not what to think” sounds quite provocative, I don’t believe it is logically possible. Imagine a cooking school which claimed, “We teach students how to cook, not what to cook.” How would you teach students how to cook without cooking something in particular? If a chef taught his students how to cook a pot roast, he has taught them how to cook a pot roast, not a brick or an old boot. If he teaches his students how to cook an egg, he has— like it or not— showed them what to cook. The same is true of the chef who teaches his students how to cook potatoes, scallops, or peach pie. There is no platonic act of pure cooking which is entirely separated from all food and drink. The cook is always cooking something in particular. There is no how of cooking apart from a what.
To be sure, Gibbs is by no means claiming that the reverse is possible, either—he has no aspirations for teaching students what to think without teaching them how to think. But as we consider this analogy, do its claims hold up? Is teaching students “how to think, not what to think” a meaningful educational goal?
I assume Gibbs would concede that some people do indeed attempt to teach what to think without teaching how to think. When someone says, “I don’t know how to cook, I just follow the recipes,” we know immediately what he means: though he may be competent enough in following directions that were written out in advance, he has never attained the power of generalizing the techniques and principles of cooking that would enable him to apply them to a novel situation. Though he can follow a recipe, it would be far beyond his powers to originate one.
Education that proceeds according to this “just-follow-the-recipe” scheme is not difficult to find. We educate scads of “mathematicians” who are no more mathematicians than our hypothetical recipe-follower is a cook; they may have memorized some algorithms and been taught to apply them to problems of a familiar type, but they have never cultivated the cognitive faculties that would allow them to really think through a novel problem mathematically. They can follow an algorithm taught to them, but they could no more originate an algorithm than they could walk on the moon.
Many humanities classes follow exactly the same format: “read” something along with the teacher, paying careful attention to his interpretation. When essay time comes along, deliver his interpretation back to him—maybe with enough variation in expression that it doesn’t read as verbatim quotation—and you’re golden. We could characterize the problem here as placing the emphasis on what to think, or we might question whether any thought is occurring here at all. Certainly, following a humanities-interpretation-recipe implies no ability to interpret for yourself: reading and repeating is not interpreting.
I surmise that 99% of the time, what is meant when someone affirms teaching “how to think, not what to think” as an educational goal is just this: the intention to avoid teaching merely by recipe. Fair enough, Gibbs might say: while you may not want to teach by recipe, you’ll still never teach apart from recipes. Working through the algorithms may not be a sufficient condition for teaching mathematical thinking, but it is certainly a necessary one. The power to generalize will only come at the end of a process of mastering many algorithms, or many interpretations, just as good cooks start by learning from recipes and then proceed to generalize to the principles of cooking implicit in every recipe.
At this point, we might be reminded of the Platonic distinction between an art and a practice. An art is distinguished by having some account of the “why” behind the things done. Someone could be a practitioner of cooking without any knowledge of cooking as an art: he may be able to cook most anything you put in front of him, and do it very well (to that extent he has generalized beyond the recipes he learned, and has the power to create recipes as well as follow them), yet he cannot give an account of why he does this or that. He may be an excellent cook, but we could still say confidently that he is ignorant of the art of cooking. Someone who knows the art of cooking does not merely prepare the food properly, but can articulate the principle according to which he acts.
So we could consider cooking according to three levels of competence:
1. The recipe-follower (cooks by rote from recipes)
2. The practitioner of cooking (does all the right things, but cannot say why)
3. The cooking artist (has the correct practices, and also the ability to articulate those practices in an account)
The disdain for theoretical knowledge (a prejudice endemic to Americans) comes in part from the perception that it is possible to grasp the theory of level three without the core competence of level two. To take up yet another analogy, Americans are far more impressed by someone who can “play by ear” than by someone with a deep grasp of music theory who doesn’t play well. There is some truth to this, because theory occurs on the level of reason and skill on the level of habit. You can tell me perfectly well in theory how you would eat your morning cereal with your non-dominant hand, but, unless you are ambidextrous, you will fail in the execution because your hand hasn’t been trained to follow what your mind knows.
So perhaps we could restate the issue Gibbs is addressing this way: the drive to teach “how to think, not what to think” is the drive to prize level three knowledge over level two, habitual practice. While we certainly don’t wish to make the opposite error of teaching practice without teaching the why, luckily we need not choose between them: we can pursue an “incarnational” education which operates fully on levels two and three simultaneously.
While this sounds pretty reasonable, if we turn to the question of concrete practice we will realize it is not quite right. If it were true, we would attain habits of correct thinking by thinking correct things, the way that someone who plays by ear develops the habit of “feeling” which note comes next. But surely this isn’t how it works. Someone may learn to be courageous by doing courageous things, but nobody learns to think correctly by thinking correct things. If it were so, the people who spend the most time thinking correct thoughts would be the most reliably correct thinkers. Yet we all know that it’s possible to spend all day having correct thoughts about geometry or grammar or theology and then proceed to make a fool of yourself in some area outside your domain. Thinking the correct things in one area of life does not form a “correct-thinking habit” that then transfers to other domains. We can’t train ourselves to recognize truth the way that musicians train themselves to hear which notes are in the key; whatever cognitive faculty it is in us that recognizes truth, it does not operate as a merely trained habit.
At best, habitual thinking leads to heuristics, and heuristics are notoriously unreliable. Consider this talk from Daniel Kahneman:
Now, to express it in Kahneman’s terms, habitual thinking could only ever operate on the level of what he calls System 1—which, he says, is just the mind’s faculty for recognition. This faculty works extremely well for situations where consistency and regularity allow us to perceive patterns, but, as he demonstrates from numerous examples, it is pretty useless outside of its area of familiarity, and that includes, generally, all unfamiliar abstract truths. There is no faculty of the mind that perceives abstract truth as such—only a faculty that can get good at recognizing familiar patterns in consistent situations.
Perhaps, then, we can reformulate the claim about teaching how to think, not what to think, in this way: an education must first and foremost train students in the use of System 2, the kind of thinking by which we encounter novel situations and work out how to solve them (this is clearly related to the Platonic definition of art—we will be able to articulate our process because it takes place on the conscious, verbal level). No doubt some incidental training of System 1 is called for as well—certain skills, like reading and basic arithmetical operations, have to become automatic. But can we defend the strong claim that preeminence should go to our educational efforts in training students how to use System 2?
That depends on whether we think we are preparing students primarily for a world of consistency and certainty or one of ambiguity and unpredictability. If life mainly consists in the rote application of predigested formulae, we can focus on drilling those and leave System 2 to some small cadre of experts. If, on the other hand, we think life is generally characterized by uncertainty, we should focus on System 2 (apart from the incidental training of System 1 needed to support it), because this is the only kind of thinking that matters when encountering novel situations.
If the last three years have taught us anything, isn’t it our need to cope with novel situations? We cannot simply rely on our recognition of past patterns to deliver us from situations like a once-in-a-century pandemic—all our powers of recognition that are useful for judging more typical situations are bound to fail us here. The pace of technological change, moreover, virtually necessitates constant adaptation. Even without these two factors, there is every reason to believe that life is frequently characterized by unpredictability and change. If we peruse the biographies of premodern figures, we will consistently find that the apparent stability of their lives is merely a trick of historical perspective: human beings have always lived in times of churn and change, requiring novel solutions to unfamiliar problems. The success of the Greek phalanx versus the mighty Persian army, the flourishing of abstract Roman law over the patchwork of local customs fueled by favoritism, the flexibility of the Muslim empires against the moribund Christian kingdoms,1 the biblical savvy of the Protestant reformers against the sclerotic orthodoxy of the Medieval Roman church, the guerrilla tactics of the American colonists versus the gentlemanly rules of combat enforced by the British Army—history celebrates the achievements of those who were willing to adapt to change, instead of relying on instinctual reflexes—of people who knew how to think, instead of just relying on others to tell them what to think.
It is precisely the necessity of learning how to think that forces us to hold back on telling students what to think. Even when we know the truth, simply delivering the answer to students without work on their part gives them only an answer they may (or may not) recognize again. But if students go through the System 2 process of finding and verifying the truth, they have obtained something valuable—the truth—but also something far more valuable: practice in using that effortful part of their brain which deals with novel situations for which there is no formula.
Perhaps even after all this, Gibbs would reply that this is fine and well for questions of fact, but that normative questions require a different approach. To break new ground in science, technology, warfare—this may require System 2 thinking. But questions of right and wrong depend on immemorial truths, where no new discoveries can be made. Here we must drill the recognition of the norms we have received, without any wish to innovate.
Admittedly, there are some complex issues here that we can’t fully explore. Whether even the world of moral norms is totally immune from progress may, indeed, be doubted. Nevertheless, we have given a coherent account of how teaching students how to think, not what to think, is a meaningful goal. Thinking does not work quite like the analogy Gibbs gives us. The real analogy would be between teaching someone how to cook and feeding him. It is sometimes the duty of a teacher to feed his students—to give truth into their hands ready-made—for some things can be taught in no other way. But these things are incidental, on the way to teaching the important thing: the use of “System 2” abstract thinking applicable to novel situations, where even if we wanted someone to tell us what to think, nobody is available. Nobody can give us a recipe for the cure for cancer, or a handbook for colonizing Mars. The thing hasn’t been done, and so we will have to figure it out for ourselves, if we figure it out at all.
If we are to do it, we must have not merely habitual practices but the logos—the underlying principles of reason, beyond our habitual mental heuristics, which will enable us to arrive at a verifiable truth. Such things can be taught, but they are not taught merely by delivering truth into the hands of students and having them repeat it back—by teaching them what to think and not how to think.
Before we end, we might consider briefly just how such things can be taught. Dorothy Sayers’ intention was to teach the art of thought through the trivium. By learning the mechanics of how language works (grammar), the principles of abstract inference (logic), and finally the art of clear and eloquent expression (rhetoric), she thought students could be strengthened against the pernicious effects of propaganda and empowered to investigate the truth rationally and carefully.
Let’s call this the Trivium answer for how to teach the art of thought. The main alternative would be the Quadrivium answer—that careful thought is chiefly taught mathematically, and that its crowning glory is the principles of falsifiability and experimentation that form the foundation of the scientific method. Which method actually works—or are they complementary? Whether formal logic improves the ability to reason is debated, as is whether the formal study of grammar improves one’s ability to use language.2
Meanwhile, the quadrivium method certainly has its advocates—among them, apparently, the educators of the former Soviet Union:
One part of this video really resonates with me, where she describes the main difference between students who had been in their program for a long time, and those who were new to it. When confronted with a truly novel problem, she explains, the new students will work for a little while, throw up their hands, and say, “I don’t get this.” The veteran students in their program will work for a while, hit a snag and say, “Let me think about this.” That might be a good image for what an education in learning how to think should accomplish—not the expectation that someone will give you answers, but that, with thought, you might just find them for yourself.
1. It’s easy to forget that Muslims far surpassed the Christian West in technology and culture during the Middle Ages. They used glass cups and windows, had floors other than dirt, and practiced regular bathing, among numerous other advances that enabled them to rapidly conquer swaths of the formerly Christian strongholds of North Africa and the Middle East. See Henry Weaver, The Mainspring of Human Progress.
2. The lack of effectiveness of formal grammar study in actually improving writing has been a finding of many studies: “Mrs. Sokolowski is right that formal grammar instruction, like identifying parts of speech, doesn’t work well. In fact, research finds that students exposed to a glut of such instruction perform worse on writing assessments.” NY Times.