Sunday, November 15, 2015

Some Notes on Reductionism

There are two different meanings of reductionism to which I've been exposed, mainly by those who find themselves opposed to this particular 'ism' in education.

  1. On the one hand, reductionism is what Sir Peter Medawar calls "nothing-buttery" in his brutal review of Pierre Teilhard de Chardin's book The Phenomenon of Man. That is, tables are "nothing but" collections of atoms, mathematical understanding is "nothing but" a collection of memories about procedures, and so on.
  2. On the other hand, reductionism can simply mean talking too much about organizing learning—or using too many technical terms to do so. For instruction, it can refer to mere selectivity or filtering of information—it might be reductionist to say "To add two fractions, find common denominators, add the numerators, and then set the sum over the common denominator," because there is more to adding fractions than this.
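For what it's worth, the quoted procedure is trivially mechanizable, which is rather the point; here is a minimal Python sketch (the function and its name are mine, not the article's):

```python
from math import gcd

def add_fractions(a_num, a_den, b_num, b_den):
    """Add two fractions exactly as the quoted procedure says: find a
    common denominator, add the numerators, and set the sum over the
    common denominator (then reduce)."""
    common = a_den * b_den // gcd(a_den, b_den)  # a common denominator
    total = a_num * (common // a_den) + b_num * (common // b_den)
    g = gcd(total, common)
    return total // g, common // g

# e.g., 1/2 + 1/3 gives (5, 6)
```

Everything the code does not capture about what fraction addition means is precisely the "more" that this second sense of reductionism worries about.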

I mention these two meanings up front only to get them out of the way—to set them up as strawmen, between which (or outside which) we have to carve a path. The notion that only one level of analysis—one scale of interaction with the world—can apply to any topic (the "nothing-buttery" notion) is not held by any sensible person; nor is the notion that reductionism should be synonymous with scheduling or selectivity in instruction.

Turning Away from "Nothing-Buttery"

It seems to me that the author of this article in The Curriculum Journal occasionally makes the same general mistake that most everyone makes when arguing against reductionism in education—he steers us rightly away from the first strawman, only to run nearly headlong into the second. Here is how the first turn is made:

There are clearly considerable practical difficulties in converting the rich complexities of a discipline such as mathematics into a curriculum which can be accommodated within the artificial school experience of learning, where days are fragmented into discrete lessons of up to an hour or so. Yet mathematics teaching can become excessively fragmented beyond this. Ollerton (1994) condemns fragmented teaching where: "for one or two lessons children are responding to a set of short questions in an exercise, such as ‘solve the following equations’, and then the following day or week they are working on another skill such as adding fractions or working out areas of triangles." (Ollerton, 1994, p. 63)

Almost without fail, those who would oppose reductionism will use the word "artificial" to describe school. This always strikes me as bizarre, even though I am completely in touch with the sensibility to which this word appeals. But if school is "artificial" because it is an activity divided into discrete time chunks, then so is the road trip I took with my family this past summer, or a young couple's first date, or the most open and inclusive meeting of professional educators. Of course, we can choose to describe any of these scenarios as "nothing but" blocks of time filled with prescribed activities, but nothing makes them necessarily so outside of those descriptions. This applies even to those apparently awful, disconnected lessons full of short questions. A level of analysis consistent with painting a reductionist picture of school is chosen, and then we are invited to decry how reductionist it all seems.

And Into the Less-Than-Helpful

And what's the alternative to this 'artificiality'? Everyone has a limited amount of time, which must be taken up linearly in chunks. We do not regularly find ourselves in states of quantum superposition. Thus, having dodged the nothing-buttery strawman, here we at least graze the second one:

Working more holistically in the mathematics classroom means to some extent relinquishing teacher control (‘teacher lust’) over micromanaging every detail (Boole, 1931; Tyminski, 2010). It also entails a classroom focused on longer timescales. . . .

Features of working more holistically could include:
  • giving students richer, more complex mathematical problems with a deeper degree of challenge, so that solutions are not straightforward or obvious;
  • deliberately using problems which simultaneously call on a range of different areas of the curriculum, encouraging students to ‘see sideways’ and make connections;
  • using ‘open’ tasks, where students can exercise a significant degree of choice about how they define the task and how they approach it–importantly, the teacher does not have one fixed outcome in mind;
  • giving students sufficient time to explore different pathways without the pressure to arrive at ‘an answer’ quickly;
  • encouraging a view that being stuck or confused and not knowing what to do is normal and can be productive, that ambiguities can be beneficial for a time (Foster, 2011a), and that seeking not to ‘move students on’ too quickly can deepen their opportunities to learn (Dweck, 2000).

The second and fourth of these bullet points are good ideas for making teaching more 'holistic'. The first and last don't belong at all, and their appearance doesn't inspire confidence that the word 'holistic' actually means anything in the article. As for the rest of this quote—it seems to represent the mind-boggling (to me) notion that teachers, or teaching itself, are the cause of this distasteful reductionism; that to make a class or an experience 'holistic,' we would do well to get rid of or diminish the teacher's voice rather than raise its quality.

We can and should (and do) avoid the idea that stringing together "nothing but" pieces of content is sufficient to make 'holistic' understanding bubble up as an emergent property of student learning. But equally dubious, and equally without adherents, is the idea that learning can be transformed from fragmented to holistic by subtracting something from the experience.

The right reductionism and the right holism can work together. See this study, for example, summarized over here.

Foster, C. (2013). Resisting reductionism in mathematics pedagogy. Curriculum Journal, 24(4), 563–585. DOI: 10.1080/09585176.2013.828630

Friday, November 6, 2015

Concept Before Procedure? It Doesn't Matter

I was excited to see, in a very recent edition of Educational Psychology Review, researchers take on a handful of education myths, which the authors describe as common educational practices and "widely held intuitive beliefs about learning that turn out to be unsupported by empirical evidence."

In this article, Rittle-Johnson, Schneider, and Star investigate the belief that instruction must proceed in a conceptual-to-procedural direction:

Most recently, the National Council of Teachers of Mathematics (NCTM 2014) explicitly asserted a conceptual-to-procedural perspective in their principle that "procedural fluency follows and builds on a foundation of conceptual understanding" (p. 42). "Conceptual understanding (i.e., the comprehension and connection of concepts, operations, and relations) establishes the foundation, and is necessary, for developing procedural fluency (i.e., the meaningful and flexible use of procedures to solve problems)" (NCTM p. 7) . . . We confirmed that the language used was deliberate, reflecting the "strong belief" of the authors of the report that developing procedural fluency "should not come first" (J. Wanko, personal communication, September 24, 2014).

After setting up further evidence that this belief is indeed widespread, the authors use the remainder of the paper to discuss research that supports and runs counter to this claim. Here are their findings in a nutshell. This'll be quick.

A Bi-Directional Relationship

The authors report on findings from 8 studies that ran over a few days and at least 3 that ran over years, with samples ranging from preschool to middle school children. In each of these cases, procedural knowledge was as good a predictor of conceptual knowledge as conceptual knowledge was of procedural knowledge. In addition, several other studies that directly manipulated the procedural-conceptual order all reported that procedure instruction supported conceptual knowledge and concept instruction supported procedural knowledge.

Overall, both longitudinal and experimental studies indicate that procedural knowledge leads to improvements in conceptual knowledge, in addition to vice versa. The relations between the two types of knowledge are bidirectional. It is a myth that it is a "one-way street" from conceptual knowledge to procedural knowledge.

Rittle-Johnson, B., Schneider, M., & Star, J. (2015). Not a One-Way Street: Bidirectional Relations Between Procedural and Conceptual Knowledge of Mathematics. Educational Psychology Review. DOI: 10.1007/s10648-015-9302-x

Thursday, October 29, 2015

Conceptual Knowledge Is Important

A study by researchers at Harvard and the University of Minnesota has found that "conceptual fraction and proportion knowledge and procedural fraction and proportion knowledge play a major role in understanding individual differences in proportional word problem-solving performance."

The study involved 411 seventh graders, who were tested in January on their procedural and conceptual fraction knowledge using a 12-item assessment, their conceptual proportion knowledge using 3 short-answer items requiring students to explain their reasoning, and their procedural proportion knowledge using 2 missing-value proportion items. Two months later, students were given a 21-item multiple-choice assessment composed of proportion word problems adapted from items on TIMSS, NAEP, and state assessments.

Results indicated that conceptual fraction knowledge, procedural fraction knowledge, conceptual proportion knowledge, and procedural proportion knowledge were significant predictors of proportional word problem-solving performance. Together, these predictors explained 37% of the variance in proportional word problem solving [p < .001]. . . .

Scores of all four domain-specific tasks correlated significantly with the proportional word problem-solving score. Specifically, the correlations between proportional word problem solving and domain-specific knowledge variables ranged from .37 to .45, with conceptual fraction knowledge (r = .45) and procedural proportion knowledge (r = .43) having the strongest relationship with proportional word problem solving.

Knowledge Does Not Prevent Understanding

A talking point one often encounters, in one form or another, in education discussions is that procedural and conceptual knowledge hinder learning—even though these results provide strong evidence to the contrary. The talking point rests on an easy misreading of research results, and the possibility for such a misreading is evident in this passage from the authors' opening discussion of the relevant literature:

Student difficulties with proportional thinking may be explained by the acquisition of routine expertise (the ability to complete tasks "quickly and accurately without much understanding"; Hatano, 2003, p. xi) instead of an adaptive expertise ("the ability to apply meaningfully learned procedures flexibly and creatively"; Hatano, 2003, p. xi).

One can be forgiven (not indefinitely, though) for walking away from passages like this with the notion that the "acquisition of routine expertise" causes difficulties in proportional thinking. But this isn't the case, and it's not what is being said (and I think it is hardly ever what is being said). In fact, one can and should simply lop off the first part of the statement, because what is being reported here is, at best, a revelation that the lack of "adaptive expertise" causes trouble, not that "routine expertise" does. To make it even clearer, you can simply replace the first part with just about anything, and still have a believably true statement—so long as the second part is indeed true:

Student difficulties with proportional thinking may be explained by the acquisition of expertise with regard to the entire catalog of Simpsons episodes instead of an adaptive expertise [about proportions].

Adding to the possible confusion in this case is the fact that the Hatano source referenced above is the foreword to a book, not research, wherein Hatano has this to say about the concept of adaptive expertise: "The notion of adaptive experts, which I introduced in Hatano (1982), was a theoretical ideal rather than a model derived from a series of empirical studies."

So, Read Carefully

A fairly reasonable way forward, it seems to me—given that both common wisdom and academic obscurity can conspire against reasonable understanding—is to heed these words from Marcello Truzzi (and Carl Sagan): "Extraordinary claims require extraordinary evidence." The notion that children between the ages of 6 and 18 are stymied in their learning primarily by the knowledge that they have (often of the rote, automatic variety) rather than by knowledge they lack is, I think, a fairly extraordinary claim that requires some fairly extraordinary evidence to justify. What's more, the results of the study described in this post—along with a host of others—provide evidence in the opposite direction.

When the popular assumption (grounded in confusion about the research and the 'virality' of dumb ideas) is that something stands in the way of student learning, rather than that something is lacking, it makes sense to try to remove barriers rather than to supply what is missing. It makes sense to, for example, decrease telling, decrease structure, decrease authority, etc., rather than work to improve what we are already doing and what our students already have. So, our perspectives about learning do real work in the world. They inform practices and priorities and set boundaries and goals. If these perspectives are based on extraordinary claims without extraordinary evidence, we shouldn't be surprised to wind up with extraordinary failure.

Image mask credit: JD Hancock

Jitendra, A., Lein, A., Star, J., & Dupuis, D. (2013). The contribution of domain-specific knowledge in predicting students' proportional word problem-solving performance. Educational Research and Evaluation, 19(8), 700–716. DOI: 10.1080/13803611.2013.845107

Monday, October 19, 2015

'No Logic in the Knowledge'

Marilyn Burns left this really nice comment over at Dan's a while ago. It's one of the few times in my recent memory where I've seen an attempt to produce an understandable and functional rationale for a "no-telling" approach that doesn't mention agency or character. And it's balanced and sensitive to context to boot. Here's part of it:

Explicit instruction (teaching by telling?) is appropriate, even necessary, when the knowledge is based in a social convention. Then I feel that I need to "cover" the curriculum. We celebrate Thanksgiving on a Thursday, and that knowledge isn't something a person would have access to through reasoning without external input―from another person or a media source. There's no logic in the knowledge. But when we want students to develop understanding of mathematical relationships, then I feel I need to "uncover" the curriculum.

In a nutshell, some concepts are connected in such a way as to make it possible for students to derive one (or many) given another. There is 'logic in the knowledge,' and so explicit instruction is not strictly necessary to make connections between nodes in those situations; students can possibly do that themselves.

A good example of a convention that math teachers might think of is the order of operations. One might argue that there's no logic there; you just have to know the agreed-upon order, and so we have to teach it directly. By contrast, manipulating numerators and denominators when adding fractions has a logic behind it—if you're adding win-loss records represented as fractions to determine total wins to total losses, then adding across is just fine. But it's usually not fine, because the denominator often represents a whole rather than another part. Students do not necessarily have to be told this logic to get it. They can be led to discover that one third of a pizza plus two thirds of the pizza can't possibly mean three sixths, or one half, of the pizza. Then they can build models to pin down exactly what fraction addition does represent, along with the connections to the symbolic representations of those meanings.
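The pizza argument can be made concrete with Python's fractions module; a small sketch (the helper and its name are mine) contrasting the "win-loss record" move with actual fraction addition:

```python
from fractions import Fraction

def add_across(a, b):
    """The 'win-loss record' move: add numerators and denominators.
    Fine for tallying wins against losses; wrong for parts of a whole."""
    return Fraction(a.numerator + b.numerator,
                    a.denominator + b.denominator)

third, two_thirds = Fraction(1, 3), Fraction(2, 3)

wrong = add_across(third, two_thirds)  # 3/6, i.e., half the pizza
right = third + two_thirds             # the whole pizza
```

One third of a pizza plus two thirds of the pizza is the whole pizza; adding across yields half of it, which students can see can't be right.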

Okay, So That's a Good Start Anyway

Yet, while it seems reasonable to me to suggest that one must teach explicitly when there is 'no logic in the knowledge,' it's too strong, I think, to suggest that when the logic is there one must not teach that way. (And I note that Ms Burns does not go this far in her comment.) Explanations do not make it impossible, or even difficult, for students to traverse those conceptual nodes in 'logic-filled' knowledge—unless they are bad explanations, or they are foisted on students who already have a lot of background knowledge, or both.

Regardless, it seems like a pretty good start to say that when deciding where on the "telling" spectrum we can best situate ourselves, we can think about, among other things, how well connected a concept is to other concepts (for the students), the conceptual distance between two or more nodes (for the students)—there are a lot of good testable hypotheses we might generate by staying in the content weeds while boxing out distracting woo.

Ultimately, what I think is pleasing about Burns's comment is that it is the beginning of a good explanation. It provides a functional rationale—tied directly to content, but not ignoring students—for choosing one or another general teaching method. And it can be connected to other things we know about, such as the expertise-reversal effect and the generation effect. These are the kinds of explanations we should be producing and looking for in education in my humble opinion. It is not necessary to be a researcher or academic to generate or appreciate them.

Image credit: Mauro Entrialgo.

Thursday, October 1, 2015

Toward an Education Science

A group known as Deans for Impact recently released this document, called "The Science of Learning," as a very public beginning of an initiative to improve university teacher preparation. If you have a moment, take a look—it is an eminently brief and readable set of answers taken from cognitive science to questions about student learning.

The timing of that announcement helpfully coincided with my revisiting David Deutsch's The Beginning of Infinity, which readers will discover places near-fatal pressure on the common notion that the goodness of science is to be found ultimately in its oft-emphasized characteristics of testability, falsifiability, transparency, rejection of authority, openness to criticism, and empirical orientation. (By extension, this also places significant philosophical pressure on much of what is thought to be good about discovery learning.) Rather, as Deutsch persuasively argues, the desire for good explanations—those that are "hard to vary"—is the real foundation for all of these characteristics, and is what has fundamentally made Enlightenment science so effective at allowing us to both control and make sense of the universe.

Consider, for example, the ancient Greek myth for explaining the annual onset of winter. Long ago, Hades, god of the underworld, kidnapped and raped Persephone, goddess of spring. Then Persephone's mother, Demeter, goddess of the earth and agriculture, negotiated a contract for her daughter's release, which specified that Persephone would marry Hades and eat a magic seed that would compel her to visit him once a year thereafter. Whenever Persephone was away fulfilling this obligation, Demeter became sad and would command the world to become cold and bleak so that nothing could grow. . . .

Now consider the true explanation of seasons. It is that the Earth's axis of rotation is tilted relative to the plane of its orbit around the sun . . .

That is a good explanation—hard to vary, because all its details play a functional role. For instance, we know—and can test independently of our experience of seasons—that surfaces tilted away from radiant heat are heated less than when they are facing it, and that a spinning sphere in space points in a constant direction . . . Also, the same tilt appears in our explanation of where the sun appears relative to the horizon at different times of year. In the Persephone myth, in contrast, the coldness of the world is caused by Demeter's sadness—but people do not generally cool their surroundings when they are sad, and we have no way of knowing that Demeter is sad, or that she ever cools the world, other than the onset of winter itself.

What's the connection? Well, a somewhat out-of-focus constellation of legitimate worries appears whenever "science" gets said a little too often in relation to classroom teaching. And just one star in that constellation seems to be the worry that "science" doesn't know what it's talking about when it comes to teaching—that its methods ignore, among other things, the powerful effects of the relationship between teachers and students, and that the environments it sets up to test its hypotheses are far removed (environment and hypothesis both) from classroom realities.

And this worry has predictably resurfaced following the release of the "Science of Learning" document and announcement.

What Deutsch's argument can offer us in the face of this worry is the beginning of a convergence—away from feel-good unjustified assertions on the one hand and beating people over the head with stale research methods terminology on the other—toward a shared desire for good, hard-to-vary explanations: those that are functional (does it explain how it works?) and connected (does it help explain other things?).

A good, and necessary, first step toward a science of education is not to arrogantly demand that science heed the "values" of practitioners, nor to expect those practitioners to become classroom clinicians, but to hold one another and ourselves accountable for better and better explanations of effective teaching and learning.

Image mask credit: Siyavula Education.

Tuesday, September 15, 2015

Aristotle's Narrative Bias

I wondered at length in this post, implicitly to a degree, whether our narrative bias might affect how we think about teaching and understanding concepts, especially so-called threshold concepts.

Those thoughts occurred to me again as I read this paragraph from Peter Achinstein's The Nature of Explanation. Through a bit of terminological haze, the author walks right up to the edge of narrative bias, but doesn't look over:

Why . . . are causes so frequently sought when explanations are demanded? The answer of the illocutionary view is that events have causes. But nature and our own finite minds conspire to produce in us n-states [non-understanding states] with respect to the question of what caused many of those events. We so often cite causes when we explain because doing so will alleviate n-states with which we are frequently plagued.

And the reason we are "frequently plagued" by states of non-understanding when it comes to causality—or at least one possible reason in some contexts—is that we label our non-understanding this way, through our narrative bias, as a kind of "causality deficiency," which serves to make the remedy to this problem—supplying a narrative around the "event" we are studying—seem like the only choice, or at least the only natural choice, of treatments.

Although Aristotle's "Four Causes" are not intended to be, collectively, a theory of explanation, many philosophers, including Achinstein, regard them as such. And the Internet Encyclopedia of Philosophy is pretty blunt about it: "Most people, philosophers included, think of explanation in terms of causation."

So we are surrounded in time and space and thought by narrative. Yet some or many of the concepts we try to teach in mathematics not only don't fit a narrative, they don't seem to fit any narrative. And to the extent that we try to force a narrative structure to them, perhaps we weaken them—enough to be digestible and too much to be nutritious.

Given that 0 × 0 = 0 and 1 × 1 = 1, it follows that there are numbers that are their own squares. But then it follows in turn that there are numbers. In a single step of artless simplicity, we seem to have advanced from a piece of elementary arithmetic to a startling and highly controversial philosophical conclusion: that numbers exist. You would have thought that it should have been more difficult.

--A.W. Moore, quoted in "Mathematics: A Very Short Introduction," by Timothy Gowers

Image mask: ketrin1407.

Sunday, September 13, 2015

Concept Mapping vs. Retrieval Practice

I really want to start with this quotation from the beginning of the paper we'll look at in this post—just to make it seem as though the rest is going to completely refute it:

A widely held view in academic contexts is that learning activities which involve the active elaboration of information lead to better results than passive and rote learning activities.

But of course the rest of the paper does not refute this view. Yet, helpfully, it does carve a bit away from it, which can allow us to ask better questions and perhaps target our instructional work more efficiently and effectively.

The two approaches pitted against each other in this research are concept mapping and retrieval practice. A concept map is a diagram that visually represents thinking about a topic or group of topics. Concept mapping in the classroom is said to benefit students by promoting collaborative learning, deep learning, active learning, and better recall. In the other corner is retrieval practice, which is probably better known as the testing effect: "long-term memory is increased when some of the learning period is devoted to retrieving the to-be-remembered information." Both concept mapping and retrieval practice are thought to be beneficial because they promote elaborative processing.

Yet, in previous experiments, retrieval practice had greater beneficial effects on learning than did concept mapping. The authors briefly outline the results of one of these studies (emphasis mine):

Karpicke and Blunt (2011) compared long-term learning in four groups of college students who were asked to carry out different activities using the same text. One group read the text during a single 5-min study period (S condition), whereas another group participated in a further three 5-min study periods (SSSS condition). The third group read the text for 5 min and was then given 25 min to construct a concept map, being allowed to consult the text (S + CM condition). The fourth and final group repeated a study protocol twice, which included a 5-min reading period plus a 10-min retrieval period in the form of a free-recall test (STST condition). The results demonstrated that participants in the retrieval practice group (STST condition) recalled more information than the remaining groups in a concept learning test administered a week later, and which covered both direct and inferential questions about the text. In a second experiment, Karpicke and Blunt (2011) found that the advantage in the retrieval practice group outweighed that of the group involved in creating concept maps, even when learning was measured by requesting that all participants construct a concept map about the text's content.

So thoroughly did retrieval practice wallop concept mapping in this experiment that other researchers wondered whether participants in the concept mapping condition were simply not using the technique correctly or didn't know how. Thus, the experiment discussed here was designed to address this possible weakness.

Expert Concept Mappers Are Just As Good

In the present study, researchers divided 84 undergraduates into 4 conditions: (a) repeated study, (b) repeated retrieval, (c) concept-map trained, and (d) expert concept-mapping. The latter group was selected based on their answers to a survey about how often they used concept mapping when studying for tests (frequently or always). The concept-map trained group was given training on creating concept maps prior to the study.

Next, each group was asked to study a text for 5 minutes for an upcoming test. The repeated study group then simply studied the text for 15 more minutes (in 5-minute chunks). The concept-map trained and expert concept-mapping groups each used 25 minutes to create a concept map of what they read (with the text available). Finally, the repeated retrieval group wrote down everything they could remember about the text on a blank piece of paper for 10 minutes, gave the paper away, studied the text for 5 more minutes, and then used a final 10 minutes to again write down everything they remembered on another blank piece of paper.

Participants were asked how well they thought they would remember the text in a week. Then, after one week, all subjects were given a short-response test on the text material. The graph below shows results on this test:

The retrieval group outperformed the others on both question types, significantly so on verbatim questions. On the inference measure, however, its advantage was not significant, though it approached significance (p = 0.06) relative to the concept-map trained condition.

It is worth pointing out, though, that the groups did not spend equal amounts of time with the material. Compared with the retrieval group, who had the text available for a total of 10 minutes, the repeated study group spent twice as much time (20 minutes) with the text close at hand, and the concept mappers spent 3 times as long (30 minutes).
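A quick tally of the time-with-text arithmetic described in the protocol (condition labels are mine; recall periods are done without the text):

```python
# Minutes each group had the text available, per the protocol described
# above: study and mapping periods count; free-recall periods do not.
conditions = {
    "repeated study":       [5, 5, 5, 5],  # four 5-minute study periods
    "repeated retrieval":   [5, 5],        # two study periods; recall done blind
    "concept-map trained":  [5, 25],       # 5-min read + 25-min mapping with text
    "expert concept-map":   [5, 25],
}
exposure = {name: sum(mins) for name, mins in conditions.items()}
# retrieval: 10 min; repeated study: 20 min; mappers: 30 min
```

So the retrieval advantage holds despite half to a third of the text exposure.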

And Some Scraps

Another interesting finding in the present study—which is found in much of Bjork's work as well—is that participants' predictions about their performances were inverted compared with their actual relative performances. Participants in the repeated retrieval group underestimated, on average, how well they would do on the test, whereas every other group overestimated how well they would do.

Although some relevant differences were not significant in this study, the effects reported in the previous studies, along with the time differentials mentioned above, lead me to believe that, all other things being equal, retrieval practice is likely superior to concept mapping for learning (from texts).

Images credit: Laura Dahl

Lechuga, M., Ortega-Tudela, J., & Gómez-Ariza, C. (2015). Further evidence that concept mapping is not better than repeated retrieval as a tool for learning from texts. Learning and Instruction, 40, 61–68. DOI: 10.1016/j.learninstruc.2015.08.002

Saturday, August 29, 2015

Threshold Concepts

I first encountered the notion of threshold concepts in David Didau's marvelous book. He provides links there to this article, by Jan Meyer and Ray Land—who first wrote about threshold concepts—and this one, by Glynis Cousin, which provides a briefer introduction to the idea. Meyer and Land describe threshold concepts this way:

A threshold concept can be considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress. As a consequence of comprehending a threshold concept there may thus be a transformed internal view of subject matter, subject landscape, or even world view. This transformation may be sudden or it may be protracted over a considerable period of time, with the transition to understanding proving troublesome.

Reading this, I recall my freshman year in college, when I took a lot of 101 courses: Anthropology 101, Biology 101 (I think it was 150 actually), even Theology 101 (though I'm quite sure it wasn't called that). What I enjoyed about these courses—though even then I was certain that I was not going to be a biologist, anthropologist, or theologian—was that they delivered these "previously inaccessible way[s] of thinking about something."

Sure, I had some ideas about the world that fell within the purview of each of these fields—some ideas about how human societies function, how the human body functions, and some ideas about how we collectively think gods get their work done. But it was clearly not a goal in these courses to stretch my previous or intuitive understandings into something more mature and rigorous. What was intended was a welcoming into an academic community—a community that looked at the world in a specific set of ways that had served it well over the decades or centuries; one that had developed useful schemas and language for investigating the specific slice of the universe that interested it; one that had essentially constructed, at its boundaries and deeper within, threshold concepts which initiates had to come to terms with in order to navigate and contribute to the community; and one which—and this seems like a characteristic of academic communities that is often overlooked by K–12 education—continuously reinforces and overturns these concepts through free and transparent criticism and debate.

I highly recommend the more detailed exposition given at the links above.

The Cheese Stands Alone

When I think about the characteristics of these concepts, as outlined by Meyer and Land—they are transformative, often irreversible, integrative, bounded, and likely to be troublesome—it seems to become clear that threshold concepts are also, in notable ways, isolated. The authors use complex numbers as one example:

A complex number consists of a real part (\(\mathtt{x}\)), and a purely imaginary part (\(\mathtt{iy}\)). The idea of the imaginary part in this case is, in fact, absurd to many people and beyond their intellectual grasp as an abstract entity. But although complex numbers are apparently absurd intellectual artifacts they are the gateway to the conceptualization and solution of problems in the pure and applied sciences that could not otherwise be considered.
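To make the "gateway" concrete with my own example (not the authors'): the quadratic equation \(\mathtt{x^2 - 2x + 5 = 0}\) has no real solutions, since its discriminant is negative. But if we admit \(\mathtt{i = \sqrt{-1}}\), the quadratic formula runs to completion:

\[
x = \frac{2 \pm \sqrt{4 - 20}}{2} = \frac{2 \pm \sqrt{-16}}{2} = \frac{2 \pm 4i}{2} = 1 \pm 2i
\]

The solutions are "absurd" from inside the real numbers, yet once admitted they make whole families of previously unsolvable problems tractable.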

Notice the phrase "beyond . . . intellectual grasp." The authors themselves can't help but talk about a threshold concept as a node that lies cognitively above other ideas, such that one can reach up from "below" to grasp it intellectually (or not). But if we accept the authors' characteristics of threshold concepts, it seems we should reject this implicit picture. Rather, threshold concepts stand more or less alone and disconnected (from below, at least). A threshold concept cannot be transformative and irreversibly alter our perspective while also simply being the last domino in a chain of reasoning. We can and do make sense of threshold concepts as continuations or extensions of our thinking, but we do so by employing a few tricks and biases.

We Can Make Up Stories About Anything

In his book You Are Now Less Dumb, featuring captivating descriptions of 17 human fallacies and biases (and ways to try to overcome them), David McRaney alludes to one way in which we are practically hard-wired to misunderstand stand-alone concepts—the narrative bias:

When given the option, you prefer to give and receive information in narrative format. You prefer tales with the structure you’ve come to understand as the backbone of good storytelling. Three to five acts, an opening with the main character forced to face adversity, a turning point where that character chooses to embark on an adventure in an unfamiliar world, and a journey in which the character grows as a person and eventually wins against great odds thanks to that growth.

Thus, we need threshold concepts like complex numbers to be the middle part of some storyline. So we simply invent an educational universe in which we believe it is always possible to "motivate" this middle by writing some interesting beginning. In reality, this might be at best intellectually dishonest and at worst delusional. Given their characteristics, threshold concepts may only truly make sense as the beginnings of stories. Yet our very own narrative biases can actually cause these concepts to be troublesome, because we search for ways in which they follow from what we know when those ways don't in fact exist. To master a threshold concept, one may need to take a leap across a ravine, not a stroll over a bridge. Louis C.K.'s mom said it well:

My mother was a math teacher, and she taught me that moment where you go, "I don't know what this is!" when you panic, that means you're about to figure it out. That means you've let go of what you know, and you're about to grab onto a new thing that you didn't know yet. So, I'm there [for my kids] in those moments.

Another blockade that prevents us from embracing the very idea of threshold concepts is the implicit assumption that learning is continuous and linear, and that it always moves forward. You can see that this is in some way related to the narrative bias discussed above. I'll simply quote Didau on this, as he deals with it at length in his book:

Progress is, if anything, halting, frustrating and surprising. Learning is better seen as integrative, transformative and reconstitutive—the linear metaphor in terms of movement from A to B is unhelpful. The learner doesn’t go anywhere, but develops a different relationship with what they know. Progress is just a metaphor. It doesn’t really describe objective reality; it provides a comforting fiction to conceal the absurdity of our lives.

Dealing Honestly with Threshold Concepts in Mathematics Education

In education, I think we could stand to be more comfortable recognizing concepts that simply have more outgoing arrows of influence than incoming arrows. These kinds of concepts do their most powerful work in the world by shedding light on other ideas and problems. Thus, it may be a far better use of our time to treat them as the first acts of our stories, and a waste of time to treat them as objects of investigation in their own right. Lighthouses, after all, make a lot more sense when you look at what their light is shining on rather than directly at the light.

It can seem disquieting, to say the least, to think that it might be better to approach certain concepts by simply "living inside them" for a while—seeing out from the framework they provide rather than trying to "understand" them directly. But the point I'd like to press is that this discomfort may be a result of our bias to see 'understanding' from just one perspective—as a causal chain or story which has as one of its endings the concept we are interested in. Some understanding may not work that way. More importantly, there is no reason—other than bias or ideological blindness—to believe that understanding has to work that way.

Another reason threshold concepts may be so troublesome is that perhaps we misunderstand historical "progress" within a field of study in precisely the same way we misunderstand "progress" for students: as a linear, continuous, always-forward movement to higher planes. How, you might ask, can I expect students to simply come to terms with ideas when humanity certainly did not do this? But your confidence in how humanity "progressed" in any regard is likely informed by a narrative bias writ large. You simply wouldn't know if chance had a large role to play, because the historians involved in collating the story, along with all the major characters in said story, are biased against seeing a large role for chance. Marinating in ideas over time, making chaotic and insightful leaps here and there—such a description may be closer to the truth about human discovery than the tidy tales we are used to hearing.

Tuesday, August 25, 2015

We Can Do Persistence

Seven hundred twenty-three Finnish 13-year-olds from 17 different metropolitan public schools were each given, at the beginning of the school year, two self-report questionnaires designed to measure "students' beliefs about school mathematics, about themselves as mathematics learners, their affective responses towards mathematics and their behavioural patterns in math classes."

The items on the questionnaires, rated from "strongly disagree" (−5) to "strongly agree" (+5), were used to create nine separate affective scales:

 1. Self-efficacy
 2. Low self-esteem
 3. Enjoyment of maths
 4. Liking of maths
 5. Fear of mathematics
 6. Test anxiety
 7. Integration
 8. Persistence
 9. Preference for challenge

In addition, students were given a 26-problem math test "about numbers and different calculations, various spatial and word problems, and examination of patterns." This math test provided a tenth scale (a "performance" scale) on which students were measured. The results below show intercorrelations among the nine affective scales and the performance scale. (Researchers used a benchmark of 0.15 for significance.)

           2. LEST   3. ENJ   4. LIK   5. FEAR   6. TANX   7. INT   8. PERS   9. CHALL   10. TEST
 1. EFF       …        .41      .44     –.45      –.26       .16       …         …          .28
 2. LEST              –.37     –.39      .57       .49      –.02     –.32      –.43        –.42
 3. ENJ                         .78     –.65        …         …        …         …           …
 4. LIK                                 –.65        …         …        …         …           …
 5. FEAR                                           .47      –.11     –.46      –.64        –.31
 6. TANX                                                     .15     –.08      –.31        –.27
 7. INT                                                               .51       .13         .01
 8. PERS                                                                        .44         .20
 9. CHALL                                                                                   .30

(Ellipses mark values that are not shown.)

Few Strong Correlations for Affect and Initial Learning

The first row of the table shows that students' self-efficacy correlated positively with their self-reported enjoyment of math (Column 3, 0.41) and with their test scores (Column 10, 0.28). In contrast, and as expected, self-efficacy correlated negatively with subjects' fear of mathematics (Column 5, -0.45) and their test anxiety (Column 6, -0.26). Once you get your head around the table, you'll find that there's nothing really surprising there. Nearly all of the "positive" measures are correlated negatively with the "negative" measures, and vice versa.

But take a look at the correlation data for what is called "integration." These are the values in Column 7 and the values in Row 7. The authors describe integration as the "tendency to integrate new math knowledge with previous knowledge and experience." Out of the nine data points for integration, five show no significant correlation, using the researchers' own metric for statistical significance (0.15). And the correlation between integration and self-efficacy is just barely significant (0.16). The only other "insignificant" correlation shown is between test anxiety and persistence (-0.08).

So integration accounts for five of the six insignificant results in the study. Indeed, if one were to use the absolute values of the correlations in the table to find a mean correlation for each scale, integration would sit at the very bottom (0.174), while fear of mathematics would sit at the top (0.478). And integration was the only scale that did not interact in a statistically significant way with test scores.
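The mean-absolute-correlation comparison just described can be sketched in a few lines of Python. The toy table below uses only three scales with illustrative values (two pairs borrowed from the study's integration row), so the printed means demonstrate the computation rather than reproduce the 0.174 and 0.478 figures:

```python
# Sketch of the mean-absolute-correlation computation: for each scale,
# average the absolute values of its correlations with every other scale.
# This three-scale table is a hypothetical toy example, not the study's
# full nine-scale matrix.

def mean_abs_correlation(corr, scale):
    """Mean of |r| between `scale` and every other scale in the table."""
    rs = corr[scale].values()
    return sum(abs(r) for r in rs) / len(rs)

corr = {
    "integration": {"persistence": 0.51, "fear": -0.11},
    "persistence": {"integration": 0.51, "fear": -0.08},
    "fear":        {"integration": -0.11, "persistence": -0.08},
}

for scale in sorted(corr):
    print(f"{scale}: {mean_abs_correlation(corr, scale):.3f}")
```

A scale like integration that correlates weakly with almost everything else will land at the bottom of this ranking, which is exactly the pattern the study reports.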

Given that these data are correlational (and the raw data were taken only from self reports), it is impossible to draw reliable conclusions about causality. And generalizing from 13-year-old Finnish students to all learners would be irresponsible. Yet, it is interesting to note that the only factor that correlated moderately with integration in the study was persistence (0.51). Thus, if one could say anything, one might say that these results may provide yet another indication that integrating "new math knowledge with previous knowledge and experience" (some call this "learning") is not as interwoven with students' intrinsic personal/emotional qualities as we like to think—that it doesn't matter that they have low or high self-esteem or that they fear or do not fear mathematics or that they have or do not have test anxiety or that they like to challenge themselves or not.

What seems to matter more is that they show up and keep trying. Luckily, of all the affective traits mentioned in the study, persistence is the one that we might be able to design learning environments for without the need to pretend that we have degrees in counseling psychology.

Malmivuori, M. (2006). Affect and self-regulation. Educational Studies in Mathematics, 63(2), 149–164. DOI: 10.1007/s10649-006-9022-8

Sunday, August 16, 2015

Book Review: Read It Over and Over

I'll admit that when I saw that a book called What If Everything You Knew About Education Was Wrong? was coming out, I was committed to buying it based on the title alone.

There's a bias that helps explain my impulse. It's called confirmation bias, which is, as the book explains, "the tendency to seek out only that which confirms what we already believe and to ignore that which contradicts these beliefs."

Now, this isn’t necessarily a deliberate or partisan avoidance of contrary evidence; it’s just a state of mind to which it’s almost impossible not to fall victim. Let’s imagine, just for a moment, that you think maths is boring. If you’re told that learning maths is pointless, because most people get by using the calculators on their phones, you’re likely to accept it without question. If, on the other hand, you’re shown a report detailing the need for maths in high status jobs and calling for compulsory maths education until the age of 18, you’re likely to find yourself questioning the quality of the jobs, the accuracy of the report’s findings and the author’s motives.

Thus, since I tend to believe that we've got very little figured out about education—that we have much more pruning of bad ideas and learning from our mistakes left to do in this field—I was compelled, in part by confirmation bias, to read this book, as I suspected that it would validate those beliefs. Naturally, I was not disappointed.

But the fact that confirmation bias was at work in my decision to read this book (and David Didau's blog, Learning Spy)—and is no doubt at work in others' decisions to not read him—does not make any of our respective beliefs wrong. Nor does it make them right. This is a point to which Didau returns often and in subtle ways throughout Part 1 and the rest of the book. The existence of our cognitive fallibilities tells us that we should nurture our and others' skepticism and doubt (and sarcasm!), both self-directed and 'other'-directed, recognizing that we, along with all of our fellow travelers, are riddled with truth-blocking biases (many of which, such as the availability bias, the halo effect, and the overconfidence bias, Didau outlines in Part 1):

Maybe it’s impossible for us always to head off poor decisions and flawed thinking; knowing is very different to doing. I’m just as prone to error as I ever was, but by learning about cognitive bias I’ve become much better at examining my thoughts, decisions and actions after the fact. At least by being aware of the biases to which we routinely fall prey, and our inherent need to justify our beliefs to ourselves, maybe we can stay open to the possibility that some of what we believe is likely to be wrong some of the time.

Learning Is Not Performance

Didau takes you into Part 2 and then Part 3 of his book with the hope that Part 1 has left you "feeling thoroughly tenderised." It is here where, to my reading, his main thesis is developed. So I'll be brief—and if not that, circumspect—in my commentary.

Having called out, in Part 1, all the ways we cannot rely solely on our own often-biased observations and thinking about education, the book's next two parts naturally draw heavily (but not laboriously) on the device humankind has invented to compensate for these weaknesses: the scientific method. This section sets what we commonly talk about as learning against what scientific investigation actually says about it.

What are those differences? In particular, as Didau outlines here at his site (citing the work of Robert Coe), when we talk about "learning," what we are often really talking about is "performance," a proxy for learning. We behave, in our conversations and through our policies and pet ideologies, as though students are learning when they are:

  • Busy, doing lots of (especially written) work.
  • Engaged, interested, motivated.
  • Getting feedback and explanations.
  • Calm and under control.
  • Supplying mostly correct answers.
. . . when in fact these operationalizations are only loosely tethered to what they are meant to describe. These proxies are what we talk about and debate about in public, rather than learning.

In contrast to these pedestrian notions of learning, the work of Coe, Nuthall, Sweller, and Bjork, among many others cited and discussed in the book, tells us that forgetting can be a powerful aspect of learning, that 'difficult' learning is often better than easy learning, and that motivation and engagement aren't all they're cracked up to be.

What If I'm Wrong?

I should close by listing one way in which What If Everything You Knew About Education Was Wrong? helped change my mind. It was on the subject of "desirable difficulties," a phrase that has come out of the work of Robert Bjork—work that, unfortunately, I did not know about until I came across Didau's writing (maybe messages like Bjork's should be the ones on the TED stage).

Prior to reading this work, I was inclined to resist the notion that learning had to be difficult. This was likely due, in part, to my own biases, but I also can't help but think that, to the extent that I ever encountered arguments in the past drawn from this work, they were so misunderstood or poorly defended by their proponents as to bear no relationship to the ideas you see in that video.

At any rate, Bjork's work seems to make clear, not that learning has to be difficult, but that some difficulties (which he mentions above and are discussed in the book) improve longer-term learning in many contexts. It is a responsible, serious, evidence-based perspective that eroded some of my practiced resistance. I think you'll be able to say the same about Didau's book.