Friday, April 24, 2015

Letting Go of Free Will

The most popular conception of free will—a notion that lies mostly unexamined until it becomes necessary to defend it—is that there is something inside our minds (a "ghost in the machine") that, as Steven Pinker says, "reads the TV screen of the senses and pushes buttons and pulls levers of behavior."

While it goes mostly without saying (I hope) that this kind of free will is not real, nor even possible, there are still plenty of things to be said about the implications of this truth—in particular, implications regarding schooling—that do not sufficiently affect our thinking.

"You Are Not Controlling the Storm, and You Are Not Lost in It. You Are the Storm."

The best way to watch the idea of free will disappear before your eyes is to go looking for it. Make a simple choice in this moment—to turn your head to the right or to the left, say. If I ask what choice you made, you might say left or right, but you could also say that you chose to move one of your legs or to wink instead. Or you may have decided to not do anything differently at all as a result of my request. Ultimately, none of these possible individual differences matter to the analysis, because you did not cause your brain to produce the results, whatever they were.

If you believe you did cause this result—it certainly feels like you did—and if you are committed to describing a non-magical etiology of your behavior, you must account for the neurons that "you" controlled somehow to make the decision. If you succeed, you must then describe what caused the "you" neurons to do "their" controlling. I hope you can see that even my description of what is required does not cohere, and the job itself reduces to absurdity quickly.

Whatever it is you believe you have that can be called "free will," it is not the ability to locate yourself—your thinking, your behavior—outside of any storm of prior causes. Your decision to turn your head or move your leg or give me the finger or ignore me was ultimately not controlled by you. First, you could not possibly have done what it did not occur to you to do; second, you can give no reasonable account of why one move occurred to you while another didn't. It seems that only under a legal, compatibilist view can you do things "of your own free will."

Some Things Change, Some Stay the Same . . .

Obviously, if we are not fleshy robots operated by mysterious incorporeal homunculi, then neither are kids. Yet we all share this subjective delusion that each of us is this homunculus, living behind our eyes. So it is worth wondering what effects, if any, the illusion of free will in ourselves (and the assignation of the same to others) has on our "normal" perspectives regarding teaching and learning.

For many, abandoning the notion of free will—accepting its absence in their bones and down to their toes—would change the way they look at punishing students for misbehavior. At the very least, a strong collective recognition of the illusion of free will would remove the social cover adults in schools receive when they punish children for no good reason other than that "they deserved it." They don't simply "deserve" it. Not ever. That view makes no sense.

Yet, similarly, it makes no sense to take credit for achievements. We should by no means feel compelled to disown completely the rewards that we accrue by virtue of our hard work or planning or even innate abilities, but removing the illusion of free will places these rewards in the proper perspective. You are fortunate to have achieved what you have and unfortunate to have missed out on some achievements. Pride really has no place. Who is there to be proud? What could she be proud of? And as we did with punishment, we can find a similar clarity in the way we reward our students when we are not confused by free will. (See Carol Dweck's work on fixed and growth mindsets.)

We Are All Connected

It is hard to overestimate the confusion that can be sown by the illusion of free will when it comes to teaching and learning. I think Sam Harris says what I want to convey on this point more beautifully:

Some of you might think this sounds depressing. But, it's actually incredibly freeing to see life this way. It does take something away from life. What it takes away from life is an egocentric view of life. We're not truly separate. We are linked to one another; we are linked to the world; we are linked to our past and to history. And what we do actually matters, because of that linkage, because of the permeability, because of the fact that we can't be the true locus of responsibility. That's what makes it all matter.

We don't own our thoughts. They are not ours to begin with. As a student, my thoughts have proximate causes in my past experiences and the thoughts of my peers and teachers. But these thoughts didn't "belong" to anyone along the way. And they don't "belong" to me now.

So why are we afraid to share our knowledge of the world with students? Because we believe that each of them has some mysterious inner ghost that is muted by taking on board the thoughts of "others"? This is silly. While it definitely makes sense to ask students to practice generating ideas with little assistance, it doesn't make sense to draw a qualitative distinction between those ideas and knowledge from the outside world. There is no distinction. The thoughts I'm sharing with you here are now yours, for a moment, and in some form. "Your" internal thoughts appear to you in almost exactly the same way.

And, on the other hand, why are we so afraid of giving our students time and space to explore and succeed and fail? Because we see the world as a collection of atomized individuals, hoarding particular bits of knowledge and expertise in order to sell them for a price? Again, while it makes sense to prepare students for the reality they will likely face as adults, it doesn't make sense to enforce a "knowledge capitalism" on them, especially when that system rests on a delusion.

Audio Postscript

Saturday, April 11, 2015

Telling Vs. No Telling

So, with that in mind, let's move on to just one of the dichotomies in education, that of "telling" vs. "no telling," and I hope the reader will forgive my leaving Clarke's paper behind. I recommend it to you for its international perspective on what we discuss below.

"Reports of My Death Have Been Greatly Exaggerated"

We should start with something that educators know but people outside of education may not: there can be a bit of an incongruity, shall we say, between what teachers want to happen in their classrooms, what they say happens in their classrooms, and what actually happens there. Given how we talk and what we talk about on social media—and even in face-to-face conversations—and the sensationalist tendencies of media reports about education, an outsider could be forgiven, I think, for assuming that teachers have been moving en masse away from the practice of explicit instruction.

There is a large body of research which would suggest that this assumption is almost certainly "greatly exaggerated."

Typical of this research is a small 2004 study (PDF download) in the U.K. which found that primary classrooms in England remained places full of teacher talk and "low-level" responding by students, despite intentions outlined in the 1998–1999 National Literacy and National Numeracy Strategies. The graph at the right, from the study, shows the categories of discourse observed and a sense of their relative frequencies.

John Goodlad made a similar and more impactful observation in his much larger study of over 1,000 classrooms across the U.S. in the mid-80s (I take this quotation from the 2013 edition of John Hattie's book Visible Learning, where more of the aforementioned research is cited):

In effect, then, the modal classroom configurations which we observed looked like this: the teacher explaining or lecturing to the total class or a single student, occasionally asking questions requiring factual answers; . . . students listening or appearing to listen to the teacher and occasionally responding to the teacher's questions; students working individually at their desks on reading or writing assignments.

Thus, despite what more conspiracy-oriented opponents of "no telling" sometimes suggest, the monotonic din of "understanding" and "guide on the side" and "collaboration" we hear today—and have heard for decades—is not the sound of a worldview that has, in practice, taken over education. Rather, it is the sound of a seemingly quixotic struggle on the part of educators to nudge each other—to open up more space in class for students to exercise independent and critical thinking. This is a finite space, and something has to give way.

Research Overwhelmingly Supports Explicit Instruction

Teacher as Activator                      d     Teacher as Facilitator            d
Teaching students self-verbalization    0.76    Inductive teaching              0.33
Teacher clarity                         0.75    Simulation and gaming           0.32
Reciprocal teaching                     0.74    Inquiry-based teaching          0.31
Feedback                                0.74    Smaller classes                 0.21
Metacognitive strategies                0.67    Individualised instruction      0.22
Direct instruction                      0.59    Web-based learning              0.18
Mastery learning                        0.57    Problem-based learning          0.15
Providing worked examples               0.57    Discovery method (math)         0.11

On the other hand, it is manifestly clear from the research literature that, when student achievement is the goal, explicit instruction has generally outperformed its less explicit counterpart.

The table at the left, taken from Hattie's book referenced above, directly compares the effect sizes of various explicit and indirect instructional protocols, gathered and interpreted across a number of different meta-analyses in the literature.

Results like these are not limited to the K–12 space, nor do they involve only the teaching of lower-level skills or teaching in only well-structured domains, such as mathematics. These are robust results across many studies and over long periods of time.
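For readers unfamiliar with the effect sizes in tables like the one above, here is a minimal sketch of the standard calculation (Cohen's d: mean difference divided by pooled standard deviation). The scores below are hypothetical, invented purely for illustration; they come from no study cited here.

```python
# A minimal sketch of Cohen's d, the standardized effect size behind
# tables like Hattie's. All scores here are hypothetical illustrations.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """(Mean difference) / (pooled sample standard deviation)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical post-test scores for two small groups:
taught = [78, 85, 81, 90, 74, 88]
comparison = [70, 79, 75, 83, 68, 80]
print(round(cohens_d(taught, comparison), 2))  # prints 1.14
```

An effect size of 0.4 is the "hinge point" Hattie uses to separate interventions worth attention from the rest, which is what makes the left and right columns of the table read so differently.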

And while research supporting less explicit instructional techniques is out there (as obviously Hattie's results also attest), there is much less of it—and certainly far less than one would expect given the sheer volume of rhetoric in support of such strategies. On this point, it is worth quoting Sigmund Tobias at some length, from his summarizing chapter in the 2009 book Constructivist Instruction: Success or Failure?:

When the AERA 2007 debate was organized, I described myself as an eclectic with respect to whether constructivist instruction was a success or failure, a position I also took in print earlier (Tobias, 1992). The constructivist approach of immersing students in real problems and having them figure out solutions was intuitively appealing. It seemed reasonable that students would feel more motivated to engage in such activities than in those occurring in traditional classrooms. It was, therefore, disappointing to find so little research documenting increased motivation for constructivist activities.

A personal note may be useful here. My Ph.D. was in clinical psychology at the time when projective diagnostic techniques in general, and the Rorschach in particular, were receiving a good deal of criticism. The logic for these techniques was compelling and it seemed reasonable that people’s personality would have a major impact on their interpretation of ambiguous stimuli. Unfortunately, the empirical evidence in support of the validity of projective techniques was largely negative. They are now a minor element in the training of clinical psychologists, except for a few hamlets here or there that still specialize in teaching about projective techniques.

The example of projective techniques seems similar to the issues raised about constructivist instruction. A careful reading and re-reading of all the chapters in this book, and the related literature, has indicated to me that there is stimulating rhetoric for the constructivist position, but relatively little research supporting it. For example, it is encouraging to see that Schwartz et al. (this volume) are conducting research on their hypothesis that constructivist instruction is better for preparing individuals for future learning. Unfortunately, as they acknowledge, there is too little research documenting that hypothesis. As suggested above, such research requires more complex procedures and is more time consuming, for both the researcher and the participants, than procedures advocated by supporters of explicit instruction. However, without supporting research these remain merely a set of interesting hypotheses.

In comparison to constructivists, advocates for explicit instruction seem to justify their recommendations more by references to research than rhetoric. Constructivist approaches have been advocated vigorously for almost two decades now, and it is surprising to find how little research they have stimulated during that time. If constructivist instruction were evaluated by the same criterion that Hilgard (1964) applied to Gestalt psychology, the paucity of research stimulated by that paradigm should be a cause for concern for supporters of constructivist views.

Both the Problem and the Solution

So, it seems that while a "telling" orientation is better supported by research, it is also identified as a barrier, if not the barrier, to progress. And it seems that a lot of our day-to-day struggle with the issue centers around the negative consequences of continued unsuccessful attempts at resolving this paradox.

Yet perhaps we should see that this is not a paradox at all. Of course it is a problem when students learn to rely on explicit instruction to do their thinking for them, and it is perfectly appropriate to find ways of punching holes in teacher talk time to reduce the possibility of this dependency. But we could also research ways of tackling this explicitly—differentiating ways in which explicit instruction can solicit student inquiry or creativity and ways in which it promotes rule following, for example.

It is at least worth considering that some of our problems—particularly in mathematics education—have less to do with explicit instruction and more to do with bad explicit instruction. If dealing with instructional problems head on is more effective (even those that are "high level," such as creativity and critical thinking), then we should be making the sacrifices necessary to give teachers the resources and training required to meet those challenges, explicitly.

Thursday, April 2, 2015

Teachers Should Be "Poised and Articulate"

Resurrecting an old post from 2009 or 2007 or some year around then:

At the end of what seems like a long chain of events, I asked, and answered yes to, this question about professionalizing teacher practice:

Is there one or more cultural "teaching scripts" that might tend to stymie the practice of collecting and critically analyzing specific best-practice knowledge linked to academic outcomes?

The question was, at the time, the end result of my thinking about Jenny D.'s terrific post, Chris Correa's input on the subject, and ideas presented by different commenters.

Regardless of whether yes is the correct answer to that question, I'd like to follow up and suggest one script that I think may be a significant culprit. Of course, in doing so, I will be making a solid leap away from firm ground, because cultural scripts are constructs that one can observe only indirectly, if at all:

[Cultural scripts] are not proposed as rules of behaviour but as rules of interpretation and evaluation. It is open to individuals in concrete situations whether to follow (or appear to follow) culturally endorsed principles, and if so, to what extent; or whether to manipulate them, defy them, subvert them, rebel against them, play creatively with them, etc. Whether or not cultural scripts are being followed in behavioural terms, however, the claim is that they constitute a kind of shared interpretive "background."

One part of an "interpretive background" that I would suggest we share with regard to the idea of teaching is this: Teaching is "ethotic" and "pathotic" persuasion.

That is, the input of teaching is gauged in terms of the character of teachers (ethos) and their ability to navigate and control the emotional, cognitive-psychological, and interpersonal dynamics of learning (pathos). "Logetic" persuasion (logos)—which involves consideration of the presentation and organization of content in isolation—is really not part of the script for teaching or is, at best, completely overshadowed.

Consider these ethotic/pathotic selection criteria for the National Teacher of the Year award as a bit of indirect evidence for the existence of this script:

  • Inspire students of all backgrounds and abilities to learn.
  • Have the respect and admiration of students, parents, and colleagues.
  • Play an active and useful role in the community as well as in the school.
  • Be poised, articulate, and possess the energy to withstand a taxing schedule.

In short, we tend to view better teaching exclusively as a function of better people (more compassionate or caring or moral or humane, etc.), and almost never as a function of better technical information ("mere technicians")—a script which makes the compilation and dispensation of best-practice knowledge nearly unimaginable.

Saturday, March 28, 2015

Sophisticated Educators, Please Stand Up

Cross posted at School of Doubt.

People have been talking about false dichotomies in math education forever, it seems. And so have I (as long as you think of seven years ago as "forever"). And so has Professor David Clarke! His paper, titled Using International Research to Contest Prevalent Oppositional Dichotomies, was published in 2006, and I wrote it up on my old blog in 2008.

In that old post, though, I only highlighted some good quotations from Clarke's piece. So perhaps it's time to talk about it again in a little more detail. I'll do just that in the next post, I promise.

But before we get to Clarke's dichotomies, I find myself compelled to anticipate the inevitable reaction that Sophisticated Educators (to repurpose Dr Coyne's "trademarked" phrase) have when cornered by what seem like caricatures of their professional ideas.

You see, Sophisticated Educators do not dichotomize in the ways Clarke describes. They are flexible and open-minded. They can, for example, be advocates of a teaching practice or philosophy and even cite research and make solid logical arguments defending it, but they are also, with no loss of reasonable vigor for their own preferences, aware that there are narratives that compete—directly and legitimately—with their own. Sophisticated Educators find the discussion about dichotomies "tedious" because they have long ago learned to accommodate competing theories and practices into different contexts within their own work.

Problematically, however, the messaging of Sophisticated Educators often overlaps with batshit insanity. And we simply don't have access to the reasoning that would help us tell the difference between the two.

There is an analogous situation in religion. Both a Sophisticated Theologian and a fundamentalist share a fervorous belief in the supernatural, but whereas the former has arrived at his belief through some process of semi-rigorous investigation or inquiry, the latter believes what he does because it says so in a book. When pressed, the fundamentalist may quickly scribble some syllogistic ransom note, cut and pasted from pieces of different arguments (helpfully delivered by his Sophisticated counterparts), but this is only a makeshift shield, employed to deflect scrutiny. The content of his belief is derived solely from an ancient text. If he had grown up with a different ancient text, he would have a different belief.

Similarly, a Sophisticated Educator and his less sophisticated counterpart may claim some common ground in their opposition to lecture, say. Yet, whereas the former stands this belief on evidence and knows where it begins and ends, the latter believes it because someone told him to believe it—because it's popular to do so. (If it seems incredible to you that large groups of good, smart people can believe silly things in the face of overwhelming evidence to the contrary, note that over 40% of Americans believe that God created humans in their present form.) The fundamentalist educator runs the idea into the ground, into the absurd, as does the fundamentalist believer—because no one is there to stop them from doing so.

The problem, among both co-religionists and co-educationists, if you will, is that there is no price to pay within the group for having bad reasons or no reasons for your beliefs. It seems to be enough that their constituencies' opinions are pointing in the same general direction, because there is no robust and widespread tradition of scrutinizing the value of evidence or logical arguments within either community. (See granfalloon.)

And as long as there is no internal policing of critical thinking and scientific reasoning within education, it will be imposed, annoyingly, from the outside. If Sophisticated Educators are tired of the tedium of dichotomous thinking in education, perhaps we should practice what we preach and demand more often of fellow educators what we ask of students—to explain their reasoning.

Wednesday, March 18, 2015

More Tidbits

Yet another article at School of Doubt, this time about "topless" (and "bottomless") teaching. You'll have to check it out to see how that analogy plays out. It's pretty good.

I'm sure I'll blog again here soon maybe.

Sunday, March 8, 2015


I've got an article up over at School of Doubt about conceptual understanding.

And I threw another tool I've been tinkering with into my notebook. This one's providing some slope and y-intercept exploration. I think I've got enough space to put it right here on the blog at some point in the future. But maybe I should get to a point where I call it finished first.

Friday, February 27, 2015

Solving the Two-Door Problem with Math

There are 2 doors—talking doors—in a room in which you are a prisoner. You must choose and then pass through one of the doors. One leads to your immediate freedom, and the other leads to your doom, but you don't know which is which (but they do). Further, one of the doors always lies, and the other always tells the truth. And you don't know which is which (but they do).

You may ask only one question of only one of the doors. What question do you ask?

For example, you could ask one of the doors, "Are you the door that leads to my freedom?" If the liar door guards the route to your freedom, the response will be no. If the honest door is the freedom door, the response will be yes.

"Are you the door that leads to my freedom?"

But of course since you don't know which door is which, even imagining asking more than one question is not helpful. And you can only ask one question.

There are some different ways to think about this problem, but I thought I'd apply a mathematical lens—not really to make the solution any clearer, but to show at least that such a lens can be applied.

We Have to Get Rid of the Questions

We can't really deal with "questions" mathematically. So, we have to change them to statements. Instead of, "Are you the door that leads to my freedom?" we can make the statement (to one door), "You are the door that leads to my freedom." And instead of responding with yes or no, we can imagine that the door will respond with true or false. The table below is the same as the one above, except the responses are changed.

"You are the door that leads to my freedom."

If we made the opposite statement, "You are the door that leads to my doom," the responses in each column would simply trade places. So, we can generalize from this: no matter which fate the liar door guards, a true statement given to that door would produce a false response, and a false statement would produce a true response. If our statement were x, then the liar door would return L(x) = -x. If the statement were -x (false), the door would return -(-x), or x.


For the honest door, a true statement would produce a true response, and a false statement a false response. Again, if our statement were x, then the honest door would always return H(x) = x. If our statement were -x, the door would return -x.

At the right, the liar's line, L(x) = -x, runs from top left to bottom right. The honest door's line, H(x) = x, runs from bottom left to top right.

Some Clarity

Although we're not much closer to a solution, the mathematical formulation provides some clarity. It shows, for example, that no matter what question we ask, responses of the form H(x) = x and L(x) = -x will always be opposite answers. So we want to get a response in a different form, but we can still use only one x—just one input from us.

And that's when function composition can come to the rescue. The rest of the story is just filling in the details! Can you fill them in?
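For the curious, here is a sketch of those details in Python—a reconstruction of my own, not the code from the original version of the post—modeling each door as a function on true/false statements:

```python
# Model each door as a function on true/false statements.
# The honest door returns its input; the liar negates it.

def honest(x):
    return x          # H(x) = x

def liar(x):
    return not x      # L(x) = -x

# Asking one door what the OTHER door would say composes the two
# functions. In either order, the composition is pure negation:
for x in (True, False):
    assert honest(liar(x)) == (not x)   # H(L(x)) = -x
    assert liar(honest(x)) == (not x)   # L(H(x)) = -x

# So ask either door: "Would the other door say that you lead to my
# freedom?" The truth is always the opposite of the answer you hear.
def truth(answer):
    return not answer
```

The point of the composition is that H(L(x)) and L(H(x)) are the same function, so you no longer need to know which door is which.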

P.S.: This is a rewrite of this but without the code.

Sunday, February 22, 2015

Teaching: So Easy a "Housewife" Could Do It?

Two years before the United States put men on the moon, William James Popham and colleagues conducted two very interesting—and to a reader in the 21st century, bizarre—education experiments in southern California which were designed to validate a test they had developed to measure what they called "teacher proficiency."

Instructors in both studies were given a specific set of social science objectives, a list of possible activities they could use, and then time with high-school students. Each instructor's "proficiency" relied solely on how well students did on a post-test of those objectives after instruction, relative to a pre-test given before instruction. Thus, rather than focusing on how well an instructor followed "good teaching procedures," his or her performance as a teacher was measured only by student growth from pre-test to post-test after one instructional session.
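The measure itself is simple enough to sketch. The scores below are hypothetical, invented purely to illustrate the growth computation; they are not data from the study:

```python
# A sketch of the Popham-style "teacher proficiency" measure: an
# instructor's score is just mean student growth from pre-test to
# post-test. All scores below are hypothetical illustrations.
from statistics import mean

def proficiency(pre_scores, post_scores):
    """Mean per-student gain from pre-test to post-test."""
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# One hypothetical instructor's group of four students:
pre = [12, 9, 15, 11]
post = [21, 17, 22, 18]
print(proficiency(pre, post))  # mean of gains [9, 8, 7, 7] = 7.75
```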

What's fascinating about these experiments is to whom the researchers compared experienced teachers with regard to achieving these instructional objectives: "housewives" and college students:

Our plan has been to attempt a validation wherein the performance of nonteachers (housewives or college students, for example) is pitted against that of experienced teachers. The validation hypothesis predicts that the experienced teachers will secure better pupil achievement than will the nonteachers. This particular hypothesis, of course, is an extremely gross test in that we wish to secure a marked contrast between (1) those that have never taught, and (2) those who have taught for some time.

Keep in mind that the purpose of this study was not simply to compare the instructional performances of teachers and nonteachers; it was to see if the measure they had developed (student growth) would pick up a difference between the groups. In theory, of course (their theory), differences in student growth between teacher-taught and nonteacher-taught students should be noticeable on a test purporting to measure teacher proficiency.

It is also worth emphasizing that instructional approaches were not prescribed by the researchers. The various instructors were simply given a list of suggested activities, which could have been immediately thrown away if the instructor so chose.

Results, People, and Procedures

Let's just skip to the results and work our way out from there. In short, there was no difference between experienced teachers and either "housewives" or college students in effecting student growth. This first table compares experienced teachers and "housewives." It shows the "student growth" from pre-test to post-test by instructor:

[Table: Subjects | n | Mean Pretest | Mean Post-Test]

The 6 "experienced teachers" in the study were actually student teachers, yet these represented half of just 6.5% of candidates who met the following criteria: "(1) they were social science majors; (2) they had completed at least one quarter of student teaching in which they were judged superior by their supervisors; and (3) they had received a grade of 'A' in a pre-service curriculum and instruction class which emphasized the use of behavioral objectives and the careful planning of instruction to achieve these goals."

The 6 nonteachers in this first experiment were "housewives" who (1) were not enrolled in school at the time of the study, (2) did not have teaching experience, (3) completed at least 2 years of college, and (4) were social science majors.

As you can no doubt already tell, the whole gestalt here is very Skinnerian, very "behavioral" (it's the mid-60s!), so I'll just quote selectively from the article about the procedure used in the first experiment:

The subjects [the instructors] were selected three weeks prior to the day of the study. . . . All subjects were mailed a copy of the resource unit, including the list of 13 objectives and sample test items for each. An enclosed letter related that the purpose of the study was to try out a new teaching unit for social studies. They also received a set of directions telling them where to report, that they would have six hours in which to teach, that they would be teaching a group of three or four high school students, and that they should try to teach all of the objectives. . . .

Learners reported at 8:45 in the morning, and the Wonderlic Personnel Test, a 12 minute test of mental ability, was administered. The learners were next allowed 15 minutes to complete the 33 item pretest . . . Students were then assigned and dispersed to their rooms.

At 9:30 a.m. all learners and teachers were in their designated places and each of the 12 teachers commenced his instruction. After a 45 minute lunch break, instruction was resumed and continued until 4:00 p.m., at which time the high school students . . . were first given the 68 item post-test measuring each of the explicit objectives. They next completed a questionnaire (found in Appendix D) designed to measure their feelings about the content of the unit and the instruction they received. [No significant "affective" differences between the teachers and nonteachers either.]

The next table compares experienced teachers and college students on mean post-test scores, a comparison that was conducted in a second experiment. Only a post-test was given:


This second experiment went down differently than the first, but it is worth mentioning that it was designed to remedy some of the first experiment's possible weaknesses. (You can read the researchers' rationale in the study embedded above.)

In particular, the experienced teachers in the second study were working teachers. The college students (all female) were all social science majors or minors who had completed at least two years of college and had neither teaching experience nor experience with college coursework in education. There were other minor differences in the second experiment, which you can read about in the study, but the most significant was that the instruction time was reduced from 6 hours to 4.


It's worth quoting the researchers' interpretation of both studies in detail (emphasis is the authors'). It's pretty comprehensive, so I think I'll let this stand as its own section:

Some of the possible reasons for the results obtained in the initial validation study were compensated for by adjustments in the second study. We cannot, therefore, easily explain away the unfulfilled prediction on the basis of such explanations as "The study should have been conducted in a school setting," or "The nonteachers were too highly motivated." Nor can we readily dismiss the lack of differences between teachers and nonteachers because of a faulty measuring device. The internal consistency estimates were acceptable and there was sufficient "ceiling" for high learner achievement.

Indeed, in the second validation study the teacher group had several clear advantages over their nonteacher counterparts. They were familiar with the school setting, e.g., classroom facilities, resource materials, etc. They knew their students, having worked with them for approximately three weeks prior to the time the study was conducted. Couple these rather specific advantages with those which might be attributed to teaching experience (such as ability to attain better classroom discipline, ease of speaking before high school students, sensitivity to the learning capabilities of this age group, etc.) and one might expect the teachers to do better on this type of task. The big question is "Why not?"

Although there are competing explanations, such as insufficient teaching time, the explanation that seems inescapably probable is the following: Experienced teachers are not experienced at bringing about intentional behavior changes in learners. . . .

Lest this sound like an unchecked assault on the integrity of the teaching profession, it should be quickly pointed out that there is little reason to expect that teachers should be skilled goal achievers. [No unchecked assault here!] Certainly they have not been trained to be; teacher education institutions rarely foster this sort of competence. There is no premium placed on such instructional skill; neither the general public nor professional teachers' groups attach any special importance to the teacher's attainment of clearly stated instructional objectives. Whatever rewards exist for the teacher in his typical school environment are not dependent upon his skill in promoting measurable behavior changes in learners. Indeed, the entire educational establishment seems drawn to any method of rewarding instructors other than by their ability to alter the behavior of pupils.

So there you have most of the authors' interpretations of the results. What's your interpretation?

Update I: The study linked below is not the same as the one embedded above, but it's so closely related that I thought it okay to use it (Research Blogging couldn't find the citation to the above study). The below study has the same basic design as the one above, except the domain was "vocational studies," like shop and home-ec. In that study as well, no significant difference was reported between teachers and non-teachers.

Update II: Just to be perfectly fair, the tenor of the quoted writing above is not reflective of its author's current views with regard to education. Here's an interview with Mr Popham in 2012.

Image mask credit: Classic Film, "1946 Ad, Pequot Sheets with '40s Housewife, 'So Good-Looking, So Long-Wearing'"

Popham, W. (1971). Performance Tests of Teaching Proficiency: Rationale, Development, and Validation. American Educational Research Journal, 8(1), 105-117. DOI: 10.3102/00028312008001105

Sunday, February 15, 2015

Making Change in the 21st Century

You have a little store. See? It's right there. It's a nice store on a lake in Florida.

And in that store you sell just 3 things. You sell bottled water, sunscreen, and, uh, alligator repellent, which can apparently be made from a combination of ammonia and pee.

Click on the items to see their prices.


A customer comes in and buys a bottle of sunscreen. They pay with a $20 bill. Use the example shown in the Trinket box to print out the customer's change.

Did the computer get it right? How do you know? Write another print statement with addition to verify that the change amount is correct.
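If you want a sketch of the kind of print statements intended here (assuming sunscreen costs $3.69, the price used later in this post), it would look something like this:

```python
# A customer buys a bottle of sunscreen ($3.69) and pays with a $20 bill.
print(20 - 3.69)

# Verify with addition: the change plus the cost should add back up to 20.
# (Floating-point math may show a tiny rounding tail in the printout.)
print(16.31 + 3.69)
```

The same pattern works for the customers listed below; only the prices and bills change.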

Now try the ones below. (You can reset the box above by going to the menu at the top left of the box and selecting Reset. Or you can just keep typing a bunch of different print statements. Just keep track of what result shows what!) You don't have to check every result. Just make sure the result seems reasonable.

  • A customer buys a bottle of water. She pays with a $5 bill.
  • A customer buys a bottle of alligator repellent. He pays with a $20 bill.
  • A customer uses a $50 bill to buy 2 waters and a bottle of sunscreen.

A New Day . . . and More Customers

That was a pretty slow day for the store—just 4 customers. When the store gets slammed, it will be hard to keep up with all that. And what if we eventually want to sell more than 3 things? I'll have to keep a list of all the items with their prices, look up (or try to remember) each one, and only then type all the information in. Let's see if we can think about making this work a little easier.

We can actually make a dictionary for the computer. But this won't be a word dictionary, where it can look up a word to find its definition. In this dictionary, the computer will be able to look up an item (sunscreen, water, or gator repellent) and tell me its price. It looks like this:

I made a dictionary called store_items with each of my store items and its price. Notice how I can print out the price of an item (Lines 3, 4, and 5). It looks like this: print(store_items['item']). That looks up 'item' in the dictionary and prints out its price.

On Line 7, I printed out the whole dictionary. On Line 8, I put .keys() after the name of the dictionary to print out just the item names. And finally, on Line 9, I put .values() after the dictionary name to print out just the prices.
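Here's a sketch of the dictionary being described (the sunscreen price, $3.69, appears later in this post; the water and gator-repellent prices are made up for illustration):

```python
# A dictionary mapping each store item (the key) to its price (the value).
store_items = {'water': 1.19, 'sunscreen': 3.69, 'gator repellent': 5.49}

# Look up one item's price by its key.
print(store_items['sunscreen'])

print(store_items)           # the whole dictionary
print(store_items.keys())    # just the item names
print(store_items.values())  # just the prices
```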

Try it out by following the instructions in the Trinket box above to create your own item : price (key : value) dictionary and print out different pieces of information about it. See if you can make it work.

Making This All Function

Now that I've got my store inside a dictionary, I need your help to write a function I can use to give the customer change every time I make a sale. So far, I have this:

To make the function, I used the keyword def followed by the name I gave to the function, amt_change, followed by two names (in parentheses) I made up to be the cost of the item and the amount the customer paid.

What this function spits out, or returns, is the amount paid minus the cost of the item, which is the change. So, on Line 4, amt_change(3.69, 20) is code that sends two values, 3.69 and 20, to the amt_change function. There, those values become cost and amt_paid, in that order. The value 3.69 is subtracted from 20, and the result is returned so that it can be printed.
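Putting that description together, the code in the Trinket box looks something like this:

```python
def amt_change(cost, amt_paid):
    # Return the change: the amount paid minus the cost of the item.
    return amt_paid - cost

# The "Line 4" call described above: 3.69 becomes cost, 20 becomes
# amt_paid, and the returned difference is printed.
print(amt_change(3.69, 20))
```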

Do you think you can write this differently, so I can just put in the item name and the amount paid and get back the change? It might even be helpful to give me a way to type in the quantities too. I don't know. See what you can do!
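For comparison, here is one possible answer to that challenge (again with made-up prices for water and gator repellent, and a hypothetical quantity parameter that defaults to 1):

```python
store_items = {'water': 1.19, 'sunscreen': 3.69, 'gator repellent': 5.49}

def amt_change(item, amt_paid, quantity=1):
    # Look up the item's price in the dictionary, multiply it by the
    # quantity bought, and subtract that total from the amount paid.
    return amt_paid - store_items[item] * quantity

print(amt_change('sunscreen', 20))  # one sunscreen, paid with a $20 bill
print(amt_change('water', 50, 2))   # two waters, paid with a $50 bill
```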

Store image credit: Matthew Paulson

Tuesday, February 10, 2015

Intuition and Domain Knowledge

Can you guess what the graphs below show? I'll give you a couple of hints: (1) each graph measures performance on a different task, (2) one pair of bars in each graph—left or right—represents participants who used their intuition on the task, while the other pair of bars represents folks who used an analytical approach, and (3) one shading represents participants with low domain knowledge while the other represents participants with high domain knowledge (related to the actual task).

It will actually help you to take a moment and go ahead and guess how you would assign those labels, given the little information I have provided. Is the left pair of bars in each graph the "intuitive approach" or the "analytical approach"? Are the darker shaded bars in each graph "high knowledge" participants or "low knowledge" participants?

When Can I Trust My Gut?

A 2012 study by Dane et al., published in the journal Organizational Behavior and Human Decision Processes, sets out to address the "scarcity of empirical research spotlighting the circumstances in which intuitive decision making is effective relative to analytical decision making."

To do this, the researchers conducted two experiments, both employing "non-decomposable" tasks—i.e., tasks that cannot readily be broken down into a systematic, analytical procedure. The first task was to rate the difficulty (from 1 to 10) of each of a series of recorded basketball shots. The second task involved deciding whether each of a series of designer handbags was fake or authentic.

Why these tasks? A few snippets from the article can help to answer that question:

Following Dane and Pratt (2007, p. 40), we view intuitions as "affectively-charged judgments that arise through rapid, nonconscious, and holistic associations." That is, the process of intuition, like nonconscious processing more generally, proceeds rapidly, holistically, and associatively (Betsch, 2008; Betsch & Glöckner, 2010; Sinclair, 2010). [Footnote: "This conceptualization of intuition does not imply that the process giving rise to intuition is without structure or method. Indeed, as with analytical thinking, intuitive thinking may operate based on certain rules and principles (see Kruglanski & Gigerenzer, 2011 for further discussion). In the case of intuition, these rules operate largely automatically and outside conscious awareness."]

As scholars have posited, analytical decision making involves basing decisions on a process in which individuals consciously attend to and manipulate symbolically encoded rules systematically and sequentially (Alter, Oppenheimer, Epley, & Eyre, 2007).

We viewed [the basketball] task as relatively non-decomposable because, to our knowledge, there is no universally accepted decision rule or procedure available to systematically break down and objectively weight the various elements of what makes a given shot difficult or easy.

We viewed [the handbag] task as relatively non-decomposable for two reasons. First, although there are certain features or clues participants could attend to (e.g., the stitching or the style of the handbags), there is not necessarily a single, definitive procedure available to approach this task . . . Second, because participants were not allowed to touch any of the handbags, they could not physically search for what they might believe to be give-away features of a real or fake handbag (e.g., certain tags or patterns inside the handbag).



As you can see in the graphs at the right (high expertise in gray), there was a significant difference on both tasks between low- and high-knowledge participants when those participants approached the task using their intuition. In contrast, high- and low-knowledge subjects in the analysis condition in each experiment did not show a significant difference in performance. (The decline in performance of the high-knowledge participants from the Intuition to the Analysis conditions was significant only in the handbag experiment.)

It is important to note that subjects in the analysis conditions (i.e., those who approached each task systematically) were not told what factors to look for in carrying out their analyses. For the basketball task, the researchers simply "instructed these participants to develop a list of factors that would determine the difficulty of a basketball shot and told them to base their decisions on the factors they listed." For the handbag task, "participants in the analysis condition were given 2 min to list the features they would look for to determine whether a given handbag is real or fake and were told to base their decisions on these factors."

Also consistent across both experiments was the fact that low-knowledge subjects performed better when approaching the tasks systematically than when using their intuition. For high-knowledge subjects, the results were the opposite. They performed better using their intuition than using a systematic analysis (even though the 'system' part of 'systematic' here was their own system!).

In addition, while the combined effects of approach and domain knowledge were significant, the approach (intuition or analysis) by itself did not have a significant effect on performance one way or the other in either experiment. Domain knowledge, on the other hand, did have a significant effect by itself in the basketball experiment.

Any Takeaways for K–12?

The clearest takeaway for me is that while knowledge and process are both important, knowledge is more important. Even though each of the tasks was more "intuitive" (non-decomposable) than analytical in nature, and even when the approach taken to the task was "intuitive," knowledge trumped process. Process had no significant effect by itself. Knowing stuff is good.

Second, the results of this study are very much in line with what is called the 'expertise reversal effect':

Low-knowledge learners lack schema-based knowledge in the target domain and so this guidance comes from instructional supports, which help reduce the cognitive load associated with novel tasks. If the instruction fails to provide guidance, low-knowledge learners often resort to inefficient problem-solving strategies that overwhelm working memory and increase cognitive load. Thus, low-knowledge learners benefit more from well-guided instruction than from reduced guidance.

In contrast, higher-knowledge learners enter the situation with schema-based knowledge, which provides internal guidance. If additional instructional guidance is provided it can result in the processing of redundant information and increased cognitive load.

Finally, one wonders just who it is we are thinking about more when we complain, especially in math education, that overly systematized knowledge is ruining the creativity and motivation of our students. Are we primarily hearing the complaints of the 20%—who barely even need school—or those of the children who really need the knowledge we have, who need us to teach them?

Image mask credit: Greg Williams

Dane, E., Rockmann, K., & Pratt, M. (2012). When should I trust my gut? Linking domain expertise to intuitive decision-making effectiveness. Organizational Behavior and Human Decision Processes, 119(2), 187-194. DOI: 10.1016/j.obhdp.2012.07.009