Measuring how well kids do science

New assessments of U.S. students show improvement is warranted

Since 1969, the National Assessment of Educational Progress has issued report cards on how well America’s youth perform on classroom tasks. Previously, its tests have assessed what kids know or can calculate. Two new components have now been developed to gauge how children perform on hands-on and research-oriented interactive computer tasks. On June 19, NAEP released the first scores for these tests. And the overall grades: Well, they show plenty of room for improvement.

EXPERIMENTAL REPORT CARD NCES

The new data, from pilot-scale assessments of hands-on and computer-based research tasks, come from tests given in 2009. Some 2,000 children took each test at each of three grade levels: 4th, 8th and 12th. “Across the 9 interactive computer tasks, we found that 42 percent of 4th graders, 41 percent of 8th graders and 27 percent of 12th graders gave correct answers on the steps they attempted,” reports Jack Buckley, commissioner of the National Center for Education Statistics, which administers NAEP tests.

Overall, students were likely to be successful on parts of the testing “that involved limited sets of data and making straightforward observations from those data,” he observes. Where kids tended to stumble — sometimes badly — was in using those data to extrapolate a general trend or justify a conclusion. For instance, Buckley notes, in one computer simulation for 4th graders involving plants growing in a greenhouse, kids could move the plants around and identify, based on growth patterns, which were sun- versus shade-loving plants, and which fertilizer application rate proved most effective.

But when asked to explain in writing how they reached those conclusions, “this is where things started to go awry,” Buckley said. Many simply couldn’t “back up their conclusions effectively with the evidence they had just collected from the simulation.”

Last month, NAEP issued 2011 science achievement stats for kids in middle schools across the nation. The science score was middling. Literally. On a 300-point scale, 8th graders collectively scored 152 points — up a mere 2 points from 2009. Two percent of the 122,000 children surveyed scored at an advanced level, unchanged from two years earlier.

Though nothing to brag about, such scores should come as no surprise. On one international survey after another, U.S. students fail to lead the pack. For instance, scores for 8th graders in the 2007 Trends in International Mathematics and Science Study (issued in 2009 and the most recent data available) averaged 508 points for math and 520 for science — hovering around the 500-point average for this yardstick.
 
How did that compare with scores elsewhere around the world? “At eighth grade, the average U.S. science score was higher than the average scores of students in 35 of the 47 other countries, lower than those in nine countries (all located in Asia or Europe), and not measurably different from those in the other three countries,” TIMSS reported. Ten percent of U.S. kids met or exceeded the advanced international benchmark in science — a smaller share than in Singapore, Taiwan, Japan, England, Korea or Hungary.

But the rote memorization of facts, formulas or rules that can lead to high scores on such tests does not a good 21st century scientist or engineer make, notes Alan Friedman, a member of an independent, bipartisan board established by Congress to set policy for NAEP. Important as those skills are, he says, in today’s climate they simply aren’t sufficient. So NAEP developed research-based performance tasks, he says, to measure “what students know and can do in more complex, real-world situations.” (And this physicist is familiar with science achievement and outreach to the nation’s youth: For 22 years he directed the New York Hall of Science.)

Regarding the newly reported scores, Buckley says that “As a citizen and a parent, I was not particularly happy — although pleased to see that the vast majority of students was capable of making straightforward scientific observations from data.” He expressed far less satisfaction that a much smaller share could “either use strategy to actually decide what data to collect, or to arrive at the correct conclusions and be able to back them up with the evidence that they had just collected. I think that points to something that we need to work on.”

Friedman was a bit more charitable. “The fact that we didn’t bomb on it,” at least on the initial, simpler elements of these tests, “that’s very satisfying.” As a science educator, he said: “I was relieved, frankly, that students didn’t do really badly.” Keep in mind, he pointed out, “No amount of rote drill and practice” — of memorizing formulas, words and scientific laws — “would help you to any significant extent on these tests. You really had to think on your feet.”

The new research report card raises a big question for the nation’s education elite: how to raise those scores, because they point to shortfalls in collecting, synthesizing and using data — the essence of science. The issue isn’t how poorly kids elsewhere around the world might do this (and we don’t know that they do it poorly); what matters is that U.S. schools ensure their students do it well. At issue? Only the future economy and health of the nation.

Janet Raloff is the Editor, Digital of Science News Explores, a daily online magazine for middle school students. She started at Science News in 1977 as the environment and policy writer, specializing in toxicology. To her never-ending surprise, her daughter became a toxicologist.
