Kristopher Harrison, Raina Kor, Jesse Lubinsky:

Geek Heresy


Good teaching:

Laptops undermine teacher-centered classrooms and promote constructivism:

Old wine, new bottle:

  • From 1918, “The Project Method”: Child-Centeredness in Progressive Education – “In the early 20th century, progressive education reformers promoted a pedagogy that emphasized flexible, critical thinking…. [T]he child-centered method asked teachers to position each child at the center of the learning process by focusing activities around the interests of the pupil.”
  • Today’s K-12 technology enthusiasts envision finally winning the century-old war between teacher-centered education, which parents generally prefer, and student-centered education, which education schools prefer. Inside a student-centered classroom, the teacher is a guide on the side, not a sage on the stage. Because mobile devices compete with the teacher for student attention, the device itself moves the teacher off center stage.

Urban legends: the ‘digital native’ myth

Articles:

Books:

Research:

Note taking by hand or computer:



Where our district is headed:

Our central administrators on technology:


Inside public education, “technology” means constructivism.

Outside public education, people buy technology to increase efficiency. Cars are faster than horses; typewriters are faster than longhand; desktop computers are faster than typewriters.

Inside public schools, however, technology is prized not for its efficiency, but for its power to transform the nature of the classroom, the curriculum, and the relationship between student and teacher. Constructivist educators believe that technology — especially 1 to 1 computing — has the potential to usher in a new era of student-centered classrooms.

A Virginia superintendent explains.

The “correct” way to use technology in schools, for constructivist educators, is to assign “digital tasks and activities” that are “linked to a product, performance, or exhibition to which the student attaches personal value.”

This means students make videos and post them to the internet. Or create PowerPoints and present them to the class. Etc.

The student videos posted by the Virginia superintendent, which are meant to be instructional, are poor quality. If you didn’t already understand the content being demonstrated, you wouldn’t understand it after watching the video. But quality is neither here nor there. Students have used technology to produce a product, and that is the goal.

This is the apotheosis of the student-centered classroom. Students watch and make videos (instead of reading and writing), but no one teaches them how to make their videos, or how to improve their work once the first cut is completed. The result: student-created instructional videos that do not instruct.

None of this makes sense.

Students need teachers who teach.



Excerpt from “The Computer Delusion” by Todd Oppenheimer:
In 1922 Thomas Edison predicted that “the motion picture is destined to revolutionize our educational system and … in a few years it will supplant largely, if not entirely, the use of textbooks.” Twenty-three years later, in 1945, William Levenson, the director of the Cleveland public schools’ radio station, claimed that “the time may come when a portable radio receiver will be as common in the classroom as is the blackboard.” Forty years after that the noted psychologist B. F. Skinner, referring to the first days of his “teaching machines,” in the late 1950s and early 1960s, wrote, “I was soon saying that, with the help of teaching machines and programmed instruction, students could learn twice as much in the same time and with the same effort as in a standard classroom.” Ten years after Skinner’s recollections were published, President Bill Clinton campaigned for “a bridge to the twenty-first century … where computers are as much a part of the classroom as blackboards.” Clinton was not alone in his enthusiasm for a program estimated to cost somewhere between $40 billion and $100 billion over the next five years. Speaker of the House Newt Gingrich, talking about computers to the Republican National Committee early this year, said, “We could do so much to make education available twenty-four hours a day, seven days a week, that people could literally have a whole different attitude toward learning.”
The Computer Delusion by Todd Oppenheimer | The Atlantic | July 1997






Personal review software: study



Software

POST: Using software for personalized review in a middle-school Spanish class

Improving students’ long-term knowledge through personalized review | Lindsey, Shroyer, Pashler & Mozer 2013

John D’s questions:

  • What was the human and financial cost for this gain?
  • Was this gain course specific?
  • If it was course specific, was it a useful gain?
  • Was it useful for actually using the language, or only for getting multiple-choice questions right on a computerized test?

Definitely the right questions to ask for any district considering adopting personalized review software (or any other educational software).

What I glean from a quick skim is that prior to this study students had already been using a software flash-card program, so the ‘personalized’ software flash cards were simply swapped in while the non-personalized flash cards were swapped out. Nothing else appears to have changed. (I think students still took their final exams via paper-and-pencil, though I’m not sure.)

Looks to me as if the gains students made would have been no more or less useful than the gains they were making with the previous software.

No consideration is given to the question of how well knowledge gained via any form of software flash card transfers to Spanish conversation, reading, and writing.


Abstract

Human memory is imperfect; thus, periodic review is required for the long-term preservation of knowledge and skills. However, students at every educational level are challenged by an ever-growing amount of material to review and an ongoing imperative to master new material. We developed a method for efficient, systematic, personalized review that combines statistical techniques for inferring individual differences with a psychological theory of memory. The method was integrated into a semester-long middle school foreign language course via retrieval-practice software. In a cumulative exam administered after the semester’s end that compared time-matched review strategies, personalized review yielded a 16.5% boost in course retention over current educational practice (massed study) and a 10.0% improvement over a one-size-fits-all strategy for spaced study.

Robert V. Lindsey
Jeff D. Shroyer
Harold Pashler
Michael C. Mozer
Institute of Cognitive Science and Department of Computer Science
University of Colorado, Boulder
Department of Psychology
University of California, San Diego
August 16, 2013
Accepted for publication


From the paper:

Forgetting is influenced by the temporal distribution of study. For over a century, psychologists have noted that temporally spaced practice leads to more robust and durable learning than massed practice (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006). Although spaced practice is beneficial in many tasks beyond rote memorization (Kerfoot et al., 2010) and shows promise in improving educational outcomes (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013), the reward structure of academic programs seldom provides an incentive to methodically revisit previously learned material. Teachers commonly introduce material in sections and evaluate students at the completion of each section; consequently, students’ grades are well served by focusing study exclusively on the current section. Although optimal in terms of students’ short-term goals, this strategy is costly for the long-term goal of maintaining accessibility of knowledge and skills. Other obstacles stand in the way of incorporating distributed practice into the curriculum. Students who are in principle willing to commit time to review can be overwhelmed by the amount of material, and their metacognitive judgments about what they should study may be unreliable (Nelson & Dunlosky, 1991). Moreover, though teachers recognize the need for review, the time demands of restudying old material compete against the imperative to regularly introduce new material.

We incorporated systematic, temporally distributed review into third-semester Spanish foreign language instruction using a web-based flashcard tutoring system, the Colorado Optimized Language Tutor or COLT. Throughout the semester, 179 students used COLT to drill on ten chapters of material. COLT presented vocabulary words and short sentences in English and required students to type the Spanish translation, after which corrective feedback was provided. The software was used both to practice newly introduced material and to review previously studied material.

For each chapter of course material, students engaged in three 20–30 minute sessions with COLT during class time.
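The retrieval-practice loop the paper describes — English prompt, typed Spanish response, corrective feedback — is simple enough to sketch. The card data and function names below are my own illustration, not COLT’s actual code:

```python
def drill(cards, get_answer):
    """Drill (english, spanish) pairs; get_answer supplies the student's
    typed response. Returns a dict mapping each card to True/False."""
    results = {}
    for english, spanish in cards:
        answer = get_answer(english).strip().lower()
        correct = (answer == spanish.lower())
        if not correct:
            # corrective feedback, as in COLT
            print(f"Incorrect. '{english}' is '{spanish}'.")
        results[(english, spanish)] = correct
    return results

# A scripted "student" who knows one of the two words:
cards = [("the dog", "el perro"), ("the book", "el libro")]
scripted = {"the dog": "el perro", "the book": "la mesa"}
outcome = drill(cards, lambda prompt: scripted[prompt])
```

The per-card right/wrong record is exactly what a scheduler (personalized or not) would consume when deciding what to show next.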

[snip]

Results

Two proctored cumulative exams were administered to assess retention, one at the semester’s end and one 28 days later, at the beginning of the following semester. Each exam tested half of the course material, randomized for each student and balanced across chapters and schedulers; no corrective feedback was provided. On the first exam, the personalized spaced scheduler improved retention by 12.4% over the massed scheduler (t(169) = 10.1, p < .0001, Cohen’s d = 1.38) and by 8.3% over the generic spaced scheduler (t(169) = 8.2, p < .0001, d = 1.05) (Figure 2a).

[snip]

Discussion

Our work builds on the rich history of applied human-learning research by integrating two distinct threads: classroom-based studies that compare massed versus spaced presentation of material (Carpenter, Pashler, & Cepeda, 2009; Seabrook, Brown, & Solity, 2005; Sobel, Cepeda, & Kapler, 2011), and laboratory-based investigations of techniques that select material for an individual to study based on that individual’s past study history and performance, known as adaptive scheduling (e.g., Atkinson, 1972).

[snip]

Previous evaluations of adaptive scheduling have demonstrated the advantage of one algorithm over another or over nonadaptive algorithms (Metzler-Baddeley & Baddeley, 2009; Pavlik & Anderson, 2008; van Rijn, van Maanen, & van Woudenberg, 2009), but these evaluations have been confined to the laboratory and have spanned a relatively short time scale.

[snip]

Whereas previous studies offer in-principle evidence that human learning can be improved by the timing of review, our results demonstrate in practice that integrating personalized-review software into the classroom yields appreciable improvements in long-term educational outcomes. Our experiment goes beyond past efforts in its scope: it spans the time frame of a semester, covers the content of an entire course, and introduces material in a staggered fashion and in coordination with other course activities. We find it remarkable that the review manipulation had as large an effect as it did, considering that the duration of roughly 30 minutes a week was only about 10% of the time students were engaged with the course. The additional, uncontrolled exposure to material from classroom instruction, homework, and the textbook might well have washed out the effect of the experimental manipulation.

Personalization

Consistent with the adaptive-scheduling literature, our experiment shows that a one-size-fits-all variety of review is significantly less effective than personalized review. The traditional means of encouraging systematic review in classroom settings—cumulative exams and assignments—is therefore unlikely to be ideal.

[snip]

Any form of personalization requires estimates of an individual’s memory strength for specific knowledge. Previously proposed adaptive-scheduling algorithms base their estimates on observations from only that individual, whereas the approach taken here is fundamentally data driven, leveraging the large volume of quantitative data that can be collected in a digital learning environment to perform statistical inference on the knowledge states of individuals at an atomic level. This leverage is critical to obtaining accurate predictions (Figure 4).
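The paper’s actual model is a hierarchical statistical one I won’t reproduce, but the “data driven” idea — pool observations across many students and items instead of relying on one individual’s sparse history — can be shown with a toy estimate. Everything here (the response matrix, the averaging rule) is a made-up sketch:

```python
import numpy as np

# Toy pooled estimate of memory strength. Rows are students, columns are
# vocabulary items; 1 = recalled on the last attempt. (Random data here.)
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(6, 8))

student_skill = responses.mean(axis=1, keepdims=True)  # per-student accuracy
item_ease = responses.mean(axis=0, keepdims=True)      # per-item accuracy

# Predicted recall for each (student, item) pair: blend of the pooled
# student estimate and the pooled item estimate, broadcast to a 6x8 grid.
predicted = (student_skill + item_ease) / 2

# A scheduler would review the weakest pairs first.
review_order = np.argsort(predicted, axis=None)
```

Even this crude pooling gives a usable prediction for a pair the system has barely observed directly — which is the leverage a purely individual history cannot provide.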

Apart from the academic literature, two traditional adaptive-scheduling techniques have attracted a degree of popular interest: the Leitner (1972) system and SuperMemo (Wozniak & Gorzelanczyk, 1994). Both aim to review material when it is on the verge of being forgotten. [emphasis added] As long as each retrieval attempt succeeds, both techniques yield a schedule in which the interpresentation interval expands with each successive presentation. These techniques underlie many flashcard-type web sites and mobile applications, which are marketed with the claim of optimizing retention. Though one might expect that any form of review would show some benefit, the claims have not yet undergone formal evaluation in actual usage, and based on our comparison of techniques for modeling memory strength, we suspect that there is room for improving these two traditional techniques.
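The expanding-interval behavior described here is easy to sketch in the Leitner style: a correct retrieval promotes a card to a box reviewed at a longer interval, a miss sends it back to daily review. The specific box intervals below are illustrative, not from Leitner (1972):

```python
# Leitner-style expanding-interval scheduler (sketch).
INTERVALS = [1, 2, 4, 8, 16]  # days until next review, per box

def update(box, recalled):
    """Promote a card on a successful retrieval; demote to box 0 on a miss."""
    if recalled:
        return min(box + 1, len(INTERVALS) - 1)
    return 0

def next_review_in(box):
    """Days until the card is shown again."""
    return INTERVALS[box]
```

So a card answered correctly three times in a row climbs to box 3 (reviewed every 8 days), and a single miss drops it straight back to daily review — the interval expands only as long as retrieval keeps succeeding, which is the “verge of forgetting” logic both Leitner and SuperMemo rely on.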

Beyond fact learning

Educational failure at all levels often involves knowledge and skills that were once mastered but cease to be accessible due to lack of appropriately timed rehearsal. While it is common to pay lip service to the benefits of review, providing comprehensive and appropriately timed review is beyond what any teacher or student can reasonably arrange. Our results suggest that a digital tool which solves this problem in a practical, time-efficient manner will yield major payoffs for formal education at all levels. [emphasis added]

[snip]

From Supplementary Online materials:

In seventh grade Spanish 1 and 2, these same students had used commercial flashcard software for optional at-home vocabulary practice. Like COLT, that software was preloaded with the chapter-by-chapter vocabulary for the course. Unlike COLT, that software required students to select the chapter that they wished to study. Because review was scheduled by the students themselves and because students had weekly quizzes, students used the software almost exclusively to learn the current chapter’s material.

From the students’ perspective, COLT was simply a replacement for the software they had been using and a substitute for pencil-and-paper quizzes. Students were not aware of the details of our experimental manipulation, beyond the notion that the software would spend some portion of study time reviewing older vocabulary items.




Large-scale 1 to 1 computing initiatives – decrease in direct instruction