## Wednesday, January 27, 2016

### Holding Back: Inhibition

I doubt I'm much different from other people in that I tend to think about intelligence, academic success, smarts, whatever you want to call it, as something achieved by adding good things, whereas the opposite indicates a lack of good things. Or, in other words, positive academic outcomes are—it occurs to me, before I self-censor—caused by turning on some internal and external power sources, whereas with poorer outcomes there are stuck valves, leaky pipes, broken or frayed wires, somewhere.

These quasi-unconscious metaphors about people's mental states show up in idioms like bright (positive luminescence) and dim, not playing with a full deck (lacking cards), on the ball (positive causal force), beyond him/her (not the causal force), made sense of (positive), lost the plot/lost the thread/miss the point (negative), and sharp or dull.

It is natural, then, that I often forget that education doesn't work in just that one way. Positive outcomes are not always (or solely) achieved by making things happen, but often by stopping things from happening.

Huge Mice and Tiny Elephants

This study, which you can read online for free (thank you!), demonstrates one way in which "stopping things from happening" is an important—in fact, possibly critical—factor in learning and excelling at mathematics.

Researchers gave 58 children between the ages of 3 and 6 a Stroop task using pictures of animals (such as mouse and elephant) presented together either with their correct relative sizes (mouse much smaller than elephant) or their incorrect relative sizes (both pictures the same size). The children were asked to choose the animal that was largest in real life.

To perform well on Stroop tasks, participants must inhibit responses to the more salient characteristics of the stimuli, and this inhibition can be measured. In the classic Stroop task, participants are asked to name the color of printed text. Color names appear either in matching ink (e.g., the word "red" printed in red) or in mismatched ink (e.g., the word "red" printed in green). The words themselves interfere significantly with performance; that is, it takes less time to name the ink color when word and color match than when they conflict. In the animal size Stroop task, children must inhibit responses to the pictured sizes of the animals in order to produce a correct response regarding the animals' relative sizes in real life.

What researchers discovered in this study is that performance on the animal size Stroop task was a significant predictor of mathematics achievement, as measured by the Test of Early Mathematics Achievement—a finding that simply adds to a large and growing evidence base:

Cognitive development in the domain of number representation may therefore mean learning to ignore competing task-irrelevant dimensions of the stimulus. Indeed, a study in 3- to 5-year-old children from low-income families found that performance on a magnitude comparison task related to math achievement, but that this relationship was driven by trials that required inhibiting an irrelevant stimulus dimension (surface area) to select the larger numerosity (Fuhs and McNeil, 2013). This suggests that the ability to ignore irrelevant perceptual information and focus on number may explain why inhibitory control relates to early numeracy.

Discussion

It is not clear to me—certainly not from this study—that inhibition can be taught. I remember years ago being required to write math problems containing unnecessary information so that students would have to choose the information that they needed. But just making kids do something is not the same thing as teaching them something. It is, rather, a total cave to assessment obsession—we just found a way to call assessment "instruction".

If inhibition can be taught, then it seems to me that we should try to be more knowledgeable about its effects and importance and then explicit in our instruction about how to apply this skill—particularly in mathematics. (See this post for a related discussion.)

Our intuitions—and even our language, as I mentioned above—prime us to see the clip below as an example of brilliant connection-making (positive). But try the problem if you haven't seen it already, and watch the solution. How much of the difficulty of this problem is captured in the unnecessary salience of the concrete scenario?

How much closer to hand might mathematics be to the unbrilliant if we could teach them how to strategically ignore the buzzing confusion of the real world?

Merkley, R., Thompson, J., & Scerif, G. (2016). Of Huge Mice and Tiny Elephants: Exploring the Relationship Between Inhibitory Processes and Preschool Math Skills. Frontiers in Psychology, 6. DOI: 10.3389/fpsyg.2015.01903

## Saturday, January 16, 2016

### Perplexity Is Not Required for Learning

Let me see if I can accurately describe the results of the first experiment from this study. But before I do, watch this video:

The video features a student, Justine, faced with a 'cognitive conflict'—one between her notion that objects fall at different rates depending on their weight and the scientific reality that objects fall at the same rate under the force of gravity.

Importantly, according to the video this conflict seems to be a necessary part of teaching and learning, as Miss Reyes suggests at about 0:54:

Look, I've been teaching for 12 years, and trust me, these students are anything but blank slates. They may be here for 8 hours a day, but for the other 16, they're in all kinds of classrooms: the dinner table, karate lessons, their sneeper-peeper feed, trashy vampire novels . . . My point is that these kids are out there, living life, constantly learning things.

Thus, for learning to occur, student thinking must be probed for misconceptions, and these misconceptions must be patiently challenged—i.e., cognitive conflict (or in some versions, perplexity or incongruity) must be induced. This, I believe, is the most widely subscribed-to version of the conceptual change model of learning. It is the prescriptive cousin of empirical and philosophical work that has assigned the cause of conceptual change to "the drive to make sense of anomalous observations that are inconsistent with existing concepts."

Change Without Conflict

Yet, in recent years—more recent than the papers cited in the link at the end of the video—the construct of conceptual change has met with a great deal of challenge and has not found much robust empirical support, leading the author of the study I examine here to write this fairly bold statement (emphasis mine):

Since Limon’s (2001) review, cognitive change researchers have investigated the influence on conceptual change of a broader range of cognitive and non-cognitive factors, including affect and motivation, individual differences, metacognition, epistemological beliefs, and intention to change. . . .

Taken at face value, the relative lack of effect of such conflicts across a broad range of studies falsifies the cognitive conflict hypothesis: The difficulty of conceptual change must reside elsewhere than in conflict, or rather the lack thereof, between misconceptions and normatively correct subject matter.

In fact, in their first experiment, Ramsburg and Ohlsson found that even inducing conflict via the age-old method of telling students they were wrong had no significant effect on learning relative to instruction without conflict.

Over several trials, the researchers first taught 120 undergraduates a misconception regarding visual characteristics of bacteria that make them oxygen-resistant (e.g., 'contain black nuclei'). Seventy-four of these students learned the misconception 'to mastery.' Then, for one group of students, called the complete condition, a further set of trials switched up the critical characteristic in the images, allowing students to see immediately the visual evidence disconfirming their misconception. For example, images of bacteria with black nuclei were shown with accompanying feedback informing students that these were not oxygen-resistant.

A second group of students, called the confirmatory-only condition, by contrast, were never presented with disconfirming evidence of their prior misconception in subsequent trials. All images that did not show the new oxygen-resistant characteristic also did not show the black nuclei. Thus, this group saw only positive examples of the new characteristic; their prior misconception was not explicitly challenged in the image trials.

As you might expect at this point, though it is still somewhat surprising in the context of common wisdom, no significant differences were found between the groups' abilities to learn the new characteristic after the misconception had been taught. No significant differences were identified in the rate of learning either. The second and third experiments in this study replicated these results, even after attempting to control for some weaknesses in the first experiment and strengthening the complete 'conflict' condition.

What Does This Mean for Justine?

Given the results of this study and similar results over recent years, we can at least conclude that it is wrong to suggest that provoking perplexity or inducing dissonance or cognitive conflict is necessary for learning. And even 'softer' claims about conceptual change probably deserve a great deal of suspicion and scrutiny. However, as the authors describe in some detail in the study, there are a number of methodological and conceptual challenges one faces in experimenting with this construct. So, it is by no means time to cut the funding for conceptual change research.

An intriguing question, I think, is why perplexity and incongruity don't work (to the extent that they don't). Why might it not be necessary—or even important—to induce a conflict between what students know and what they don't? I suspect the beneficial effects of cognitive conflict are mediated by prior knowledge (well, what isn't?), in favor of those with more of it.

Ramsburg, J., & Ohlsson, S. (2016). Category change in the absence of cognitive conflict. Journal of Educational Psychology, 108(1), 98–113. DOI: 10.1037/edu0000050

## Saturday, January 2, 2016

### Teach Me My Colors

In the box below, you can try your hand at teaching a program to reliably identify the four colors red, blue, yellow, and green by name.

You don't have a lot of flexibility, though. Ask the program to show you one of the four colors, and then provide it feedback as to its response—in that order. Then repeat. That's all you've got. That and your time and endurance.

Of course, I'd love to leave the question about the meaning of "reliably identify the four colors" to the comments, but let's say that the program knows the colors when it scores 3 perfect scores in a row—that is, if you cycle through the 4 colors three times in a row, and the program gets a 4 out of 4 all three times.

Just keep in mind that closing or refreshing the window wipes out any "learning." Kind of like summer vacation. Or winter break. Or the weekend.

Death, Taxes, and the Mind

The teaching device above is, in some sense, a toy problem. It is designed to highlight what I believe to be the most salient feature of instruction—the fact that we don't know a lot about our impact. Can you not imagine someone becoming frustrated with the "teaching" above, perhaps feverishly wondering what's going on in the "mind" of the program? Ultimately, the one problem we all face in education is this unknown about students' minds and about their learning—like the unknown of how the damn program above works, if it even does.

One can think of the collective activity of education as essentially the group of varied responses to this situation of fundamental ambiguity and ignorance. And similarly, there are a variety of ways to respond to the painful want of knowing solicited by this toy problem:

Seeing What You Want to See
Pareidolia is the perception of a pattern that isn't there—like the famous "face" on Mars (just shadows, angles, and topography). It can happen while incessantly clicking on the teaching device above, too. In fact, these kinds of pattern-generating hypotheses popped up sporadically in my mind as I played with the program, and I wrote the program. For example, I noticed on more than one occasion that if I took a break from incessant clicking and came back, the program did better on that subsequent trial. And between sessions, I was at one point prepared to say with some confidence that the program simply learned a specific color faster than the others. A huge number of other, related superstitions can arise. If you think they can only happen to technophobes and the elderly, you live in a bubble.

Constantly Shifting Strategies
It might be optimal to constantly change up what you're doing with the teaching device, but trying to optimize the program's performance over time is probably not why you do it. Frustration with a seeming lack of progress and following little mini-hypotheses about short-term improvements are more likely candidates. A colleague of mine used to characterize the general orientation to work in education as the "Wile E. Coyote approach"—constantly changing strategies rather than sticking with one and improving on it. The darkness is to blame.

Letting the Activity Judge You
This may be a bit out in left field, but it's something I felt while doing the "teaching," and it is certainly caused by the great unknown here—guilt. Did I remember to give feedback that last time? My gosh, when was the last time I gave it? Am I the only one who can't figure this out, who is having such a hard time with this? (Okay, I didn't experience that last one, but I can imagine someone experiencing it.) It seems we will happily choose even the distorted feel-bad projections of a hyperactive conscience over the irritating blankness of not knowing. Yet, while we might find some consolation in the truth that we're too hard on ourselves, we also have the unhappy task of remembering that a thousand group hugs and high-fives are even less effective than a clinically diagnosable level of self-loathing at turning unknowns into knowns.

Conjecturing and Then Testing
This, of course, is the response to the unknown that we want. For the toy problem in particular, what strategies are possible? Can I exhaust them all? What knowledge can I acquaint myself with that will shine light on this task? How will I know if my strategy is working?

Here's a plot I made of one of my runs through the task, using just one strategy. Each point represents a test of all 4 colors, and the score represents how many colors the program identified correctly.

Was the program improving? Yes. The mean for the first 60 trials was approximately 1.83 out of 4 correct, and the mean for the back 63 was approximately 2.14 out of 4. That's a jump from about 46% to about 54%.

Is that the best that can be done? No. But that's just another way the darkness gets ya—it makes it really hard to let go of hard-won footholds.

Knowing Stuff

Some knowledge about how the human mind works is analogous, in the case of this problem, to knowing something about how programs work. Such knowledge makes it harder to be bamboozled by easy-to-vary explanations. And in general such knowledge works like all knowledge does—it keeps you away, defeasibly, from dead-ends and wrong turns so that your cognitive energy is spent more productively.

Knowing something about code, for example, might instantly give you the idea to start looking for it in the source for this page. It's just a right click away, practically. But even if you don't want to "cheat," you can notice that the program serves up answers even prior to any feedback, which, if you know something about code, would make you suspect that they might be generated randomly. Do they stay random, or do they converge based on feedback? And what hints does this provide about the possible functioning of the program? These better questions are generated by knowledge about typical behavior, not by having a vast amount of experience with all kinds of toy problem teaching devices.

How It Works

So, here's how it works. The program contains 4 "registers," or arrays, one for each of the 4 colors—blue, red, green, yellow. At the beginning of the training, each of those registers contains the exact same 4 items: the 4 different color names. So, each register looks like this at the beginning: ['blue', 'red', 'green', 'yellow'].

Throughout the training, when you ask the program to show you a color, it chooses a random one from the register. This behavior never changes. It always selects a random color from the array. However, when you provide feedback, you change the array for that color. For example, if you ask the program to show you blue, and it shows you blue, and you select the "Yes" feedback from the dropdown, a "blue" choice is added to the register. So, if this happened on the very first trial, the "blue" register would change from ['blue', 'red', 'green', 'yellow'] to ['blue', 'red', 'green', 'yellow', 'blue']. If, on the other hand, you ask for blue on the very first trial, and the program shows you green, and you select the "No" feedback from the dropdown, the 3 colors that are NOT green are added to the "blue" register. In that case, the "blue" register would change from ['blue', 'red', 'green', 'yellow'] to ['blue', 'red', 'green', 'yellow', 'blue', 'red', 'yellow'].
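The mechanism just described can be sketched in a few lines of Python (the names and structure here are my own; the actual program on the page is presumably JavaScript, but the register logic is the same):

```python
import random

COLORS = ['blue', 'red', 'green', 'yellow']

# One register per color; each starts with one copy of every color name.
registers = {color: list(COLORS) for color in COLORS}

def show(color):
    """Ask the program to 'show' a color: a uniform random draw from its register."""
    return random.choice(registers[color])

def feedback(asked, shown, correct):
    """'Yes' appends the shown color to the asked-for register;
    'No' appends the three colors the shown one is NOT."""
    if correct:
        registers[asked].append(shown)
    else:
        registers[asked].extend(c for c in COLORS if c != shown)

# First trial, positive feedback: P(blue) moves from 1/4 to 2/5.
feedback('blue', 'blue', correct=True)
print(registers['blue'].count('blue') / len(registers['blue']))  # → 0.4
```

Note that `show` never changes its behavior; all "learning" lives in the slowly shifting composition of the registers.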

A little math work can reveal that positive feedback on the first trial moves the probability of randomly selecting the correct answer from 0.25 to 0.4. For negative feedback, there is still a strengthening of the probability, but it is much smaller: from 0.25 to about 0.29. These increases decrease over time, of course, as the registers fill up with color names. For positive feedback on the second trial, the probability would strengthen from 0.4 to 0.5. For negative feedback, approximately 0.29 to 0.3.

Thus, in some sense, you can do no harm here so long as your feedback matches the truth—i.e., you say no when the answer is incorrect and yes when it is correct. The probability of a correct answer from the program always gets stronger over time with appropriate feedback. Can you imagine an analogous conclusion being offered from education research? "Always provide feedback" seems to be the inescapable conclusion here.

But a limit analysis provides a different perspective. Given an infinite sequence of correct-answer-only trials $$\mathtt{C(t)}$$ and an infinite sequence of incorrect-answer-only trials $$\mathtt{I(t)}$$, we get these results:

$\mathtt{\lim_{t\to\infty} C(t) = \lim_{t\to\infty}\frac{t + 1}{t + 4} = 1, \qquad \lim_{t\to\infty} I(t) = \lim_{t\to\infty}\frac{t + 1}{3t + 4} = \frac{1}{3}}$

These results indicate that, over time, providing appropriate feedback only when the program makes a correct color identification strengthens the probability of correct answers from 0.25 to 1 (a perfect score), whereas the best that can be hoped for when providing feedback only when the program gives an incorrect answer is just a 1-in-3 shot at getting the correct answer. When both negative and positive feedback are given, I believe a similar analysis shows a limit of 0.5, assuming an equal number of both types of feedback.
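Because none of this depends on randomness, the limits can be checked directly. Here `prob_after` is my own helper, derived from the register rules above: each positive-feedback trial adds one correct token, and each negative-feedback trial adds three tokens, one of them correct:

```python
from fractions import Fraction

def prob_after(t_pos, t_neg):
    """P(correct) for one color after t_pos positive-feedback trials and
    t_neg negative-feedback trials, starting from 1 correct token out of 4."""
    correct = 1 + t_pos + t_neg
    total = 4 + t_pos + 3 * t_neg
    return Fraction(correct, total)

print(float(prob_after(1, 0)))          # 0.4, the first-trial jump above
print(float(prob_after(10**6, 0)))      # ~1: positive-only feedback
print(float(prob_after(0, 10**6)))      # ~1/3: negative-only feedback
print(float(prob_after(10**6, 10**6)))  # ~1/2: equal mix of both
```

The equal-mix case confirms the 0.5 limit conjectured above, at least under these assumptions.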

Of course, the real-world trials bear out this conclusion. The data graphed above are from my 123 trials giving both correct and incorrect feedback. Below are data from just 67 trials giving feedback only on correct answers. The program hits the benchmark of 3 perfect scores in a row at Trial 53, and, just for kicks, does it again 3 more times shortly thereafter.

Parallels

Of course, the program here is not a student, and what is modeled as the program's "cognitive architecture" is nowhere near as complex as a student's, even with regard to the same basic task of identifying 4 colors. There are obviously a lot of differences.

Yet there are a few parallels as well. For example, behaviorally, we see progress followed by regress with both the program and, in general, with students. Perhaps our minds work in a probabilistic way similar to that of the program. Could it be helpful to think about improvements to learning as strengthening response probabilities? Relatedly, "practice" observably strengthens what we would call "knowledge" in the program just as it does, again in general, for students.

And, I think fascinatingly, we can create and reverse "misconceptions" in both students and in this program. We can see how this operates on just one color in the program by first training it to falsely identify blue as 'green' (to a level we benchmarked earlier as mastery—3 perfect responses in a row). Then, we can switch and begin teaching it the correct correspondence. As we can now predict, reversing the misconception will take longer than instantiating it, even with the optimal strategy, because the program's register will have a large amount of information in it—we will be fighting against that large denominator.
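The reversal can be sketched under the same assumed register rules. Here the "blue" register has (hypothetically) received 20 positive confirmations of the wrong answer 'green'; merely pulling 'blue' even takes more corrections than that, because of the large denominator:

```python
from fractions import Fraction

# 'blue' register after 20 hypothetical confirmations of the wrong answer 'green':
register = ['blue', 'red', 'green', 'yellow'] + ['green'] * 20

def p(color):
    return Fraction(register.count(color), len(register))

print(float(p('green')))  # → 0.875: the misconception dominates

# Reversal: each 'Yes' on a true blue response adds one 'blue' token.
corrections = 0
while p('blue') <= p('green'):
    register.append('blue')
    corrections += 1

print(corrections)  # → 21: undoing takes longer than instantiating did
```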

## Tuesday, December 29, 2015

### Do Experts Make Bad Teachers? No.

A pair of new studies has found that the stereotype of the aloof professor—you know, the one who is accomplished in her field, but I'd like to see her come teach the kids in my school—might be, surprise surprise, a little unfair.

Researchers found that the superior content knowledge of mathematics professors (8 assistant professors and 7 full professors) relative to secondary teachers was associated with a significantly greater amount of conceptual explanations, as opposed to "product oriented" (answer-getting) explanations—and these conceptual explanations resulted in the superior performance of students receiving them.

Study 1

In the first of two studies, researchers gave a group of secondary school teachers and a group of mathematics professors this problem, along with the diagram that I have recreated here, and asked participants in each group to provide a written explanation for a hypothetical student.

Just imagine an 11th grade student shows you the following mathematical word problem:

At a school, a school director wants to build a new archway of the form of a rectangle and an arched semi-circle (also have a look at the figure). As the costs for the material are fixed, the perimeter is set to a predetermined value. How should the vaults be designed, to get the largest area as possible?

Please provide an explanation about the mathematical background of this word problem. Please write a coherent explanation, that the student could understand the explanation without any additional material. Please write complete sentences.

The professors and secondary teachers were scored on content knowledge and pedagogical content knowledge, and their explanations were rated according to the proportion of "process-oriented" statements they contained (statements related to conceptual knowledge) as well as the proportion of "product-oriented" statements (statements related to rules and algorithms without referring to conceptual knowledge). Participants' explanations were also compared with regard to word count, total number of statements, proportion of omissions of steps, definition statements, and reading level.

Results of Study 1

As expected, mathematics professors scored significantly higher with respect to content knowledge and secondary teachers scored significantly higher with respect to pedagogical content knowledge.

Yet, while both groups produced explanations with roughly similar word and statement counts, nearly identical reading levels, and similar proportions of omissions and definition statements, the professors' explanations contained more than twice the proportion of references to conceptual knowledge (29% to 12%). By contrast, more than half (52%) of secondary teachers' explanations, on average, contained answer-getting, "product-oriented" statements, compared with just 36% for the mathematics professors.

Further analysis showed that, within groups, "only the instructor’s content knowledge predicted the level of process-orientation [conceptual orientation], whereas pedagogical content knowledge did not account for the process-orientation of their instructional explanations."

Study 2 and Results

But were the more conceptual explanations from the professors more successful with students? To find out, in their second study researchers gave groups of students the problem shown above, along with one of the different explanations, in their regular mathematics classroom settings. Students were then given an application test consisting of a problem very similar to the example problem above as well as two near-transfer problems.

The different student groups were very similar in both their prior knowledge and in their ratings of the difficulty of the learning phase (reading the problem and the explanation). Yet, students given the professors' conceptual explanations outperformed those given the secondary teachers' "product-oriented" explanations (43% to 34%). And both of these groups roundly trounced a group given the example problem with no explanation at all (19%).

Discussion

I've highlighted three interesting points from the general discussion section of this paper. The first is a rather sensible conclusion, in line with the results of the experiments:

In contrast to findings by recent studies (e.g., Chi et al. 2001; Schworm and Renkl 2006), which provided evidence that receiving explanations can be rather detrimental to learning as compared to engage [sic] students to self-explain the subject matter under investigation, the findings of our present studies suggest that instructional explanations can be highly effective when they do not undermine students’ knowledge building activities.

This second highlight is a fascinating note, suggesting that the greater proportion of "product-oriented" statements among secondary teachers' explanations may be due, in part, to those teachers being influenced by what their students demand:

Richland et al. (2012) proposed that, during their pedagogical practices, teachers may become more like their students with regard to their mathematical conceptions, and tend to adopt students’ rule-based conceptions of mathematics. In contrast, mathematics researchers may possess rather argumentation-based conceptions of mathematics, as the provision of coherent and concise proof for mathematical solution strategies is considered crucial in the mathematical research community (Schoenfeld 1988).

Lachner, A., & Nückles, M. (2015). Tell me why! Content knowledge predicts process-orientation of math researchers' and math teachers' explanations. Instructional Science. DOI: 10.1007/s11251-015-9365-6

## Sunday, December 27, 2015

A new paper by John Sweller, the creator of cognitive load theory, is the subject of this latest research summary.

Readers of this blog are almost certainly familiar with cognitive load theory in one way or another, so I'll skip the background and get into the details of the paper.

With respect to instructional design, there are three related aspects of human cognition that frequently are ignored: (a) The distinction between knowledge we have specifically evolved to acquire and knowledge that we need for largely cultural reasons; (b) The differential role of generic-cognitive and domain-specific knowledge; (c) The conditions under which instruction needs to be explicit.

Biologically Primary vs. Biologically Secondary Knowledge

The standard example used to demonstrate the difference between acquiring biologically primary and biologically secondary knowledge is learning how to speak one's native language versus learning how to write in that language. The former is biologically primary. Under normal conditions of development, children will learn to speak their native language in the absence of formal instruction. By contrast, children do not learn how to write in their native language simply by being exposed to writing. They must be systematically taught.

Why does cognitive load theory make reference to biologically primary vs. secondary knowledge? Because the theory deals only with the latter, as it is with this kind of knowledge (secondary, cultural) that working memory limitations present obstacles to acquisition. Biologically primary skills, on the other hand—like recognizing faces—do not run up against these limitations.

Of course, biologically secondary knowledge is built on top of biologically primary knowledge, as Sweller notes:

Secondary knowledge is acquired with the assistance of primary knowledge (Paas & Sweller, 2012). For example, our ability to listen and speak influences our ability to read and write. All secondary concepts and skills have an underlying bed of primary concepts and skills. These underlying primary concepts and skills are likely to influence individual differences in secondary concepts and skills.

Generic-Cognitive vs. Domain-Specific Knowledge

The distinction between generic-cognitive and domain-specific knowledge is made for the same reason the previous distinction was made—to funnel down what cognitive load theory applies to and what it does not apply to. In this case, the theory deals with learning knowledge that is domain-specific rather than general.

This is, probably unsurprisingly, resonant with current thinking in cognitive psychology about the teachability of so-called general mental skills. For example, psychologist Dan Willingham writes this about general thinking processes (like 'critical thinking') in his book Why Students Don't Like School:

It’s hard for many people to conceive of thinking processes as intertwined with knowledge. Most people believe that thinking processes are akin to the functions of a calculator. A calculator has available a set of procedures (addition, multiplication, and so on) that can manipulate numbers, and these procedures can be applied to any set of numbers. The data (the numbers) and the operations that manipulate the data are separate. Thus, if you learn a new thinking operation (for example, how to critically analyze historical documents), that operation should be applicable to all historical documents, just as a fancier calculator that computes sines can do so for all numbers. But the human mind does not work that way.

Yet, Sweller has a slightly different take, even if he probably lands on the same conclusion. That is, rather than a heavy dollop of pessimism at the prospect of teaching students general cognitive skills, Sweller argues that these skills do not need to be taught.

We not only need to learn how to solve problems, we need to learn how to learn, how to plan or how to think. These are critical, generic-cognitive skills (or abilities) that tend to be emphasised heavily in current educational research. There are good reasons for that emphasis. Generic-cognitive skills tend to be far more important than domain-specific skills. Without generic-cognitive skills, humans may have difficulty surviving as humans. But despite their importance, they do not need to be taught because they are biologically primary skills that we have evolved to acquire and so they are acquired without tuition.

Explicit Instruction

In some sense, a preference for explicit instruction, rather than being a pillar of cognitive load theory, is simply the logical consequence of accepting the two distinctions above—that biologically secondary, domain-specific knowledge differs significantly and qualitatively from its biologically primary, domain-general counterpart, such that the former requires explicit teaching whereas the latter does not.

Work in cognitive load theory is, thus, focused on designing instruction for biologically secondary, domain-specific knowledge (the type of knowledge that most strongly characterizes the content taught in schools). And one of the main goals of the applied research under this theory is to design instruction around the limitations of students' working memories when they try to form schemas in these types of knowledge environments.

One Point of Interest

I have not done complete justice to the paper, nor certainly to the theory, with such a brief summary above. But as I said at the beginning, I think most of the readers of this blog are more or less familiar with the theory. So, I'll wrap up with one spot I highlighted.

The limitations of working memory ensure that a large amount of novel information is never handled simultaneously. Since only a small amount of information can be processed by a limited working memory, any changes to long-term memory are themselves limited, reducing the chance of damage to knowledge structures that have developed successfully over long periods of time. Similarly, genomes do not change rapidly. The narrow limits of change principle ensures that changes are small and incremental.

This is a fascinating thought: that there is a reason for the limited size of working memory, one which confers an evolutionary advantage. In short, the tiny people in our classrooms were designed by evolution, as were we, to absorb cultural information slowly and incrementally, because overwriting knowledge too frequently was perhaps more disadvantageous to our survival than modifying it too infrequently.

Mentions of evolutionary psychology always make me suspicious, and I think cognitive load theory probably presses too hard on the evolution analogy. But still, this is an interesting notion.

Image credit: Laura Dahl

Sweller, J. (2015). Working memory, long-term memory, and instructional design. Journal of Applied Research in Memory and Cognition. DOI: 10.1016/j.jarmac.2015.12.002

## Tuesday, December 8, 2015

### The Math Zombies We Create

I've been thinking about "math zombies," following the conversations around this article: here and then here, for example. If you're unfamiliar with the term, allow me to quote educationrealist's description of students who are math zombies and also add my voice to the chorus that insists they are indeed real and live among us:

They diligently memorize the cues and procedures, and obediently regurgitate the procedures, aping understanding without having a clue. There is no dawning moment of conceptual understanding. The students don’t care in the slightest. They are there for the A and, to varying degrees, play Clever Hans for math teachers interested only in correctly worked procedures and right answers.

But it's one thing to admit of the existence of math zombies and another to speculate about where they came from. And still another to talk about how to cure them. People miss these distinctions, preferring instead to draw a suspiciously straight line from the symptoms to their preferred folk remedies and diagnoses—which, as luck would have it, do not require a degree in psychology or any familiarity with learning science to understand, and are conveniently and remarkably well aligned to their creators' self-interest.

Lipstick on a Parrot

My own view at the moment—which I'll quickly admit is as open to the weaknesses and biases I mention above as anybody else's—is that deficient conceptual knowledge creates math zombies, and sufficient conceptual knowledge is the cure. (See this paper, in particular.) The deficiency is caused, in part, by (a) practitioners' lack of conceptual knowledge—keep-flip-change is really how some folks understand fraction division in its entirety; it's not laziness or methodological stubbornness—and, in part, by (b) systemic, administrative, assessment-related forces that frustrate the transmission of conceptual knowledge.

The financial and political intractability of these two problems together lays the foundation for quasi-religious reform theorizing, which has had to invent a way for students to get conceptual knowledge without there being any around. Thus, sure as eggs is eggs, this knowledge is placed inside the student, and only needs to be called forth by removing obstacles to its expression. (I should not forget to mention the proceduralist zealots here too. Their broken-record, narrow obsession with standard algorithms comes from the same place. In fact, their position is probably more obviously informed by a scarcity of conceptual knowledge.)

Perhaps this will work. But, humbly, I would submit that we need to think less about conceptual 'understanding'—that awkward folk-diagnostic construct which is used to make slapdash guesses about what's going on in children's heads—and more about conceptual 'knowledge'—tangible, substantive, "desirable difficulties" that can stop zombies in their tracks.

Audio Postscript

## Friday, November 27, 2015

### Noises and Scribbles

Imagine following a group of students from kindergarten to the end of high school without ever being able to understand any of the words spoken or symbols written down, whether mathematical or alphabetical. At the end of roughly 12 years, it would all seem to you just a swirl of conversational sounds and strange hieroglyphics shared between and among adults and children.

Now imagine that all of this bizarre content—somewhere north of about 10,000 hours of instruction—matters far less than how much money people have or their mental states. Certainly now all of this content seems like an absurd waste of time. Why make so many mouth noises and scribble alien symbols all the time when just being rich or motivated to do well—or any of the hundred other things that don't involve the noises and scribbles—is of more importance?

On the other hand, if you infer that the incomprehensible aural and visual signals are important—as the behavior of the system seems to imply—then it is reasonable to assume that they affect some outcomes in some way. If, for example, I were to visit a different group of students and teachers, engaged in making a significantly different set of noises and scribbles, then I should expect to see something significantly different about the two groups, provided I know what I'm looking for, both in the way of content differences and outcome differences.

If less articulate instruction is presented, the mind will do the same thing it does with well-designed sequences—try to draw inferences that are consistent with the data. [Link]

But Does the Behavior Match the Belief?

While almost everyone would say that the content of instructional discourse is important, it's difficult to find a sizeable gathering of people who behave as though this were true. That is, if you happen to sojourn outside the classroom during your longitudinal stretch as an alien observer, it's safe to say that the scribbles and noises you encounter there would overlap very little with those inside the classroom, even if you follow practitioners as they communicate about the classroom.

Although I think that much about this has improved in education in the last 20 years—my sense is that Common Core has made collective work on instruction easier—the general character of the system is still that of managing children in classrooms rather than working on instructional problems.

Classroom management is, at the moment, the dark matter of the instructional universe. If your excellent theory doesn't account for it, it won't work. I'm not sure this is anything to celebrate or be proud of, though. Quite the contrary.

## Sunday, November 15, 2015

### Some Notes on Reductionism


There are two different meanings of reductionism to which I've been exposed, mainly by those who find themselves opposed to this particular 'ism' in education.

1. On the one hand, reductionism is what Sir Peter Medawar calls "nothing-buttery" in his brutal review of Pierre Teilhard de Chardin's book The Phenomenon of Man. That is, tables are "nothing but" collections of atoms, mathematics understanding is "nothing but" a collection of memories about procedures, and so on.
2. On the other hand, reductionism can be simply talking too much about organizing learning—or using too many technical terms to do so. For instruction, it can refer to mere selectivity or filtering of information—it might be reductionist to say "To add two fractions, find common denominators, add the numerators, and then set the sum over the common denominator," because there is more to adding fractions than this.
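To make the second sense concrete: the quoted fraction-addition rule is so mechanical that it fits in a few lines of code, which is precisely why it can be executed without any conceptual knowledge behind it. Here is a minimal sketch (the function and numbers are mine, purely illustrative):

```python
from math import gcd

def add_fractions(a_num, a_den, b_num, b_den):
    """Apply the stated rule literally: find a common denominator,
    add the numerators, set the sum over the common denominator
    (and reduce the result)."""
    common = a_den * b_den // gcd(a_den, b_den)  # least common denominator
    num = a_num * (common // a_den) + b_num * (common // b_den)
    g = gcd(num, common)
    return num // g, common // g

# 1/2 + 1/3
print(add_fractions(1, 2, 1, 3))  # -> (5, 6)
```

That a machine can carry out the rule perfectly, while knowing nothing about what a fraction is, is the reductionist worry in a nutshell.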

I mention these two meanings up front only to get them out of the way—to set them up as strawmen, between which (or outside which) we have to carve a path. The notion that only one level of analysis—one scale of interaction with the world—can apply to any topic (the "nothing-buttery" notion) is not held by any sensible person; nor is the notion that reductionism should be synonymous with scheduling or selectivity in instruction.

Turning Away from "Nothing-Buttery"

It seems to me that the author of this article in The Curriculum Journal occasionally makes the same general mistake that most everyone makes when arguing against reductionism in education—he steers us rightly away from the first strawman, only to run nearly headlong into the second. Here is how the first turn is made:

There are clearly considerable practical difficulties in converting the rich complexities of a discipline such as mathematics into a curriculum which can be accommodated within the artificial school experience of learning, where days are fragmented into discrete lessons of up to an hour or so. Yet mathematics teaching can become excessively fragmented beyond this. Ollerton (1994) condemns fragmented teaching where: "for one or two lessons children are responding to a set of short questions in an exercise, such as ‘solve the following equations’, and then the following day or week they are working on another skill such as adding fractions or working out areas of triangles." (Ollerton, 1994, p. 63)

Almost without fail, those who would oppose reductionism will use the word "artificial" to describe school. This always strikes me as bizarre, even though I am completely in touch with the sensibility the use of this word appeals to. But if school is "artificial" because it is an activity divided into discrete time chunks, then so is the road trip I took with my family this past summer, or a young couple's first date, or the most open and inclusive meeting of professional educators. Of course, we can choose to describe any of these scenarios as "nothing but" blocks of time filled with prescribed activities, but nothing makes them necessarily so outside of those descriptions. This applies even to those apparently awful, disconnected lessons full of short questions. A level of analysis consistent with painting a reductionist picture of school is chosen, and then we are invited to decry how reductionist it all seems.

And what's the alternative to this 'artificiality'? Everyone has a limited amount of time, which must be taken up linearly in chunks. We do not regularly find ourselves in states of quantum superposition. Thus, having dodged the nothing-buttery strawman, here we at least graze the second one:

Working more holistically in the mathematics classroom means to some extent relinquishing teacher control (‘teacher lust’) over micromanaging every detail (Boole, 1931; Tyminski, 2010). It also entails a classroom focused on longer timescales. . . .

Features of working more holistically could include:
• giving students richer, more complex mathematical problems with a deeper degree of challenge, so that solutions are not straightforward or obvious;
• deliberately using problems which simultaneously call on a range of different areas of the curriculum, encouraging students to ‘see sideways’ and make connections;
• using ‘open’ tasks, where students can exercise a significant degree of choice about how they define the task and how they approach it–importantly, the teacher does not have one fixed outcome in mind;
• giving students sufficient time to explore different pathways without the pressure to arrive at ‘an answer’ quickly;
• encouraging a view that being stuck or confused and not knowing what to do is normal and can be productive, that ambiguities can be beneficial for a time (Foster, 2011a), and that seeking not to ‘move students on’ too quickly can deepen their opportunities to learn (Dweck, 2000).

The second and fourth of these bullet points are good ideas for making teaching more 'holistic'. The last and first don't belong at all, and their appearance doesn't inspire confidence that the word 'holistic' actually means anything in the article. As for the rest of this quote—it seems to represent this mind-boggling, to me, notion that teachers or teaching is the cause of this distasteful reductionism; that to make a class or an experience 'holistic,' we would do well to get rid of or diminish the teacher's voice, rather than raise up its quality.

We can and should (and do) avoid the idea that stringing together "nothing but" pieces of content is sufficient to make 'holistic' understanding bubble up as an emergent property of student learning. But equally dubious, and equally unsubscribed, is the idea that learning can be transformed from fragmented to holistic by subtracting something from the experience.

The right reductionism and the right holism can work together. See this study, for example, summarized over here.

Audio Postscript

Foster, C. (2013). Resisting reductionism in mathematics pedagogy. Curriculum Journal, 24(4), 563-585. DOI: 10.1080/09585176.2013.828630

## Friday, November 6, 2015

### Concept Before Procedure? It Doesn't Matter


I was excited to see, in a very recent edition of Educational Psychology Review, researchers take on a handful of education myths, which the authors describe as common educational practices and "widely held intuitive beliefs about learning that turn out to be unsupported by empirical evidence."

In this article, Rittle-Johnson, Schneider, and Star investigate the belief that instruction must proceed in a conceptual-to-procedural direction:

Most recently, the National Council of Teachers of Mathematics (NCTM 2014) explicitly asserted a conceptual-to-procedural perspective in their principle that "procedural fluency follows and builds on a foundation of conceptual understanding" (p. 42). "Conceptual understanding (i.e., the comprehension and connection of concepts, operations, and relations) establishes the foundation, and is necessary, for developing procedural fluency (i.e., the meaningful and flexible use of procedures to solve problems)" (NCTM p. 7) . . . We confirmed that the language used was deliberate, reflecting the "strong belief" of the authors of the report that developing procedural fluency "should not come first" (J. Wanko, personal communication, September 24, 2014).

After setting up further evidence that this belief is indeed widespread, the authors use the remainder of the paper to discuss research that supports and runs counter to this claim. Here are their findings in a nutshell. This'll be quick.

A Bi-Directional Relationship

The authors report on findings from 8 studies that ran over a few days and at least 3 that ran over years, with samples ranging from preschoolers to middle school children. In each case, procedural knowledge predicted conceptual knowledge as well as conceptual knowledge predicted procedural knowledge. In addition, several other studies that directly manipulated the procedural-conceptual order all reported that procedure instruction supported conceptual knowledge and concept instruction supported procedural knowledge.

Overall, both longitudinal and experimental studies indicate that procedural knowledge leads to improvements in conceptual knowledge, in addition to vice versa. The relations between the two types of knowledge are bidirectional. It is a myth that it is a "one-way street" from conceptual knowledge to procedural knowledge.

Rittle-Johnson, B., Schneider, M., & Star, J. (2015). Not a one-way street: Bidirectional relations between procedural and conceptual knowledge of mathematics. Educational Psychology Review. DOI: 10.1007/s10648-015-9302-x

## Thursday, October 29, 2015

### Conceptual Knowledge Is Important

A study by researchers at Harvard and the University of Minnesota has found that "conceptual fraction and proportion knowledge and procedural fraction and proportion knowledge play a major role in understanding individual differences in proportional word problem-solving performance."

The study involved 411 seventh graders, who were tested in January on their procedural and conceptual fraction knowledge using a 12-item assessment, their conceptual proportion knowledge using 3 short-answer items requiring students to explain their reasoning, and their procedural proportion knowledge using 2 missing-value proportion items. Two months later, students were given a 21-item multiple-choice assessment composed of proportion word problems adapted from items on TIMSS, NAEP, and state assessments.

Results indicated that conceptual fraction knowledge, procedural fraction knowledge, conceptual proportion knowledge, and procedural proportion knowledge were significant predictors of proportional word problem-solving performance. Together, these predictors explained 37% of the variance in proportional word problem solving [p < .001]. . . .

Scores of all four domain-specific tasks correlated significantly with the proportional word problem-solving score. Specifically, the correlations between proportional word problem solving and domain-specific knowledge variables ranged from .37 to .45, with conceptual fraction knowledge (r = .45) and procedural proportion knowledge (r = .43) having the strongest relationship with proportional word problem solving.
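To unpack those statistics a little: a correlation r measures linear association between two scores, and its square is the proportion of variance in one accounted for by the other (so r = .45 corresponds to about 20% of variance on its own; the 37% figure comes from all four predictors entered together in a regression). A minimal sketch of the correlation calculation, with made-up toy scores rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy scores, purely illustrative (not from the study):
concept = [3, 5, 2, 8, 7, 4]   # conceptual fraction knowledge
solving = [4, 6, 3, 9, 6, 5]   # proportional word-problem performance
r = pearson_r(concept, solving)
print(r, r * r)  # r, and r^2 = share of variance explained
```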

Knowledge Does Not Prevent Understanding

A talking point one often encounters in one form or another in education discussions is that procedural and conceptual knowledge hinder learning, though these results provide strong evidence to the contrary. This is an easy misreading of research results, and the possibility for such a misreading is evident in this passage from the authors' opening discussion of the relevant literature:

Student difficulties with proportional thinking may be explained by the acquisition of routine expertise (the ability to complete tasks "quickly and accurately without much understanding"; Hatano, 2003, p. xi) instead of an adaptive expertise ("the ability to apply meaningfully learned procedures flexibly and creatively"; Hatano, 2003, p. xi).

One can be forgiven (not indefinitely, though) for walking away from passages like this with the notion that the "acquisition of routine expertise" causes difficulties in proportional thinking. But this isn't the case, and it's not what is being said (and I think it is hardly ever what is being said). In fact, one can and should simply lop off the first part of the statement, because what is being reported here is, at best, a revelation that the lack of "adaptive expertise" causes trouble, not that "routine expertise" does. To make it even clearer, you can simply replace the first part with just about anything, and still have a believably true statement—so long as the second part is indeed true:

Student difficulties with proportional thinking may be explained by the acquisition of expertise with regard to the entire catalog of Simpsons episodes instead of an adaptive expertise [about proportions].

Adding to the possible confusion in this case is the fact that the Hatano source referenced above is the foreword to a book, not research, wherein Hatano has this to say about the concept of adaptive expertise: "The notion of adaptive experts, which I introduced in Hatano (1982), was a theoretical ideal rather than a model derived from a series of empirical studies."