Journalistic rules about press releases are murky. Rules about taking credit for other journalists' prose are not.
Sixty kids were shown a boxy toy that played music when beads were placed on it. Half of the children saw a version of the toy in which the toy was only activated after four beads were exactingly placed, one at a time, on the top of the toy. This was the “unambiguous condition,” since it implied every bead is equally capable of activating the device. However, other children were randomly assigned to an “ambiguous condition,” in which only two of the four beads activated the toy. (The other two beads did nothing.) In both conditions, the researchers ended their demo with a question: “Wow, look at that. I wonder what makes the machine go?”
Next came the exploratory phase of the study. The children were given two pairs of new beads. One of the pairs was fixed together permanently. The other pair could be snapped apart. They had one minute to play.
Here’s where the ambiguity made all the difference. Children who’d seen that all beads activate the toy were far less likely to bother snapping apart the snappable bead pair. As a result, they were unable to figure out which beads activated the toy. (In fact, just one out of twenty children in that condition bothered performing the so-called “experiment”.) By contrast, nearly fifty percent of children in the ambiguous condition snapped apart the beads and attempted to learn which specific beads were capable of activating the toy. The uncertainty inspired their empiricism.
A second study was similar to the first, but this time the children were only given a single bead pair that was permanently fixed. This toy was trickier to activate, since it required that the kids place the pair of beads so that one bead was on top and one bead was dangling over the edge. Once again, children first presented with ambiguous evidence were five times more likely to perform this original “experiment” and thus activate the toy.
Sixty 4- and 5-year-olds were shown a box-shaped toy that played music and lit up when beads were placed on it. Crucially, some of the children were shown that each of four beads, placed one at a time on the toy, activated it. This was the "unambiguous condition" that implied any old bead is capable of activating the toy. Other children were in an "ambiguous condition": they were shown, by placing beads one at a time on the box, that two of the beads activated it, but two of them didn't. In both conditions, the researchers said afterwards: "Wow, look at that. I wonder what makes the machine go?", followed by: "Go ahead and play".
Next came the key exploratory phase of the study. The children were given two pairs of new beads (different from those seen earlier). One pair was fixed together permanently. The other pair could be snapped apart. They had one minute to play.
Here's the take-home finding: children who'd earlier seen that all beads activate the toy were far less likely to bother snapping apart the snappable bead pair to test which beads activated the toy and which didn't. In fact just 1 out of 20 children in that condition bothered performing this "experiment". By contrast, 19 out of 40 children in the ambiguous condition snapped apart the snappable bead pair and tested which specific beads were capable of activating the toy and which weren't.
A second study was similar to the first, but this time the children were only given a single bead pair that was permanently fixed. This time, to identify precisely which beads activated the toy and which didn't, the children had to come up with the entirely original idea of placing the pair on the toy in such a way that one bead made contact with its surface whilst the other bead hung over the edge. Again, children presented initially with ambiguous evidence (some beads activated the toy, some didn't) were far more likely to perform this original "experiment" to isolate the beads with the activating effect ...
In another experiment, the researchers varied the volunteers’ mindsets, sometimes asking them to look at photos as if they were on an online-dating website, focusing on attractiveness, and sometimes asking them to look at the photos as if they were hiring for a professional job, focusing on the mind.
In another experiment, the researchers varied the volunteers' mindsets, sometimes asking them to look at photos as if they were on an online-dating website, focusing on attractiveness, and sometimes asking them to look at the photos as if they were hiring for a professional job, focusing on the mind.
In a third post from mid-2011 titled "Basketball and Jazz," one of Lehrer's paragraphs closely paralleled one written by Newsweek science writer Sharon Begley some three years earlier.
The rebounding experiment went like this: 10 basketball players, 10 coaches and 10 sportswriters, plus a group of complete basketball novices, watched video clips of a player attempting a free throw. (You can watch the videos here.) Not surprisingly, the professional athletes were far better at predicting whether or not the shot would go in. While they got it right more than two-thirds of the time, the non-playing experts (i.e., the coaches and writers) only got it right about 40 percent of the time.
In the experiment, 10 basketball players, 10 coaches and 10 sportswriters (considered non-playing experts), and novices all watched a video clip of someone attempting a free throw. The players were better at predicting whether the shot would go in: they got it right in two-thirds of the shots they saw, compared to 40 percent right for novices and 44 percent for coaches and writers.
Tellingly, Begley misstated the number of participants in the study. (There were only 5 coaches and 5 sportswriters, not 10 of each, plus a separate group of 10 novices who were neither coaches nor sportswriters.) Lehrer reproduced the exact same error.
Issues with quotations
Lehrer has altered quotations, for instance a written phrase from a scientific paper quoted in "Basketball and Jazz." There, Lehrer writes that the scientists described a behavior as a "covert simulation of the action," in quotation marks. The actual phrase in the paper is that the subjects were performing "a covert simulation of the very same action" depicted on the video screen.
In "When Reinforcement Fails," Lehrer quotes the scientists who authored a research article as saying, “The behavior of basketball players shows the limitations of learning from reinforcement, especially in a complex environment such as a basketball game.” This passage appears, verbatim, in a press release issued by the Hebrew University of Jerusalem—but it is not a quotation from the scientists. (Conversely, a quotation in the press release attributed to Yonatan Lowenstein, "The study shows that despite many years of intense training, even the best basketball players over-generalize from their most recent actions and their outcomes...." appears, word for word, in Lehrer's blog post—without any indication that it is a direct quotation.)