Saturday, December 24, 2005

Faked clone research shows chinks in peer review's armor

Nicholas Wade's article "Clone Scientist Relied on Peers and Korean Pride" in today's New York Times reminds us of some of the ways that closed peer review can be prone to abuse. Wade asks why the falsified research reported by South Korean stem-cell researcher Hwang Woo Suk made it through the much-vaunted peer-review gauntlet of journals like Science and Nature.

http://www.nytimes.com/2005/12/25/science/25clone.html

Apart from Hwang's apparent knack for pulling the wool over the eyes of his co-workers and funders in the South Korean government, Wade reports that Hwang was adept at attracting co-authors from the biomedical insiders' club:

"In addition, Dr. Hwang invited well-known American researchers to be co-authors on his articles, which he may have hoped would make his findings more acceptable to leading journals like Science and Nature. He even invited Dr. Gerald Schatten, a
stem cell expert at the University of Pittsburgh, to be the lead author on the June 2005 report although Dr. Schatten had done none of the experiments....An indication of Dr. Hwang's good connections to the government was the inclusion of Dr. Park
Ky Young as a co-author of his 2004 report on human cloning. A botanist by training, Dr. Park may not have contributed much scientifically to the task of cloning of human cells. She is, however, the science adviser to Roh Moo Hyun, the president of
South Korea."

Curiously, it seems Hwang also exploited a vulnerability in the publication paradigm for experimental science, so often touted as more rigorous than research in the humanities:

"A question both journals have considered is that of whether their editors and reviewers should have caught the errors in Dr. Hwang's papers before publication. But as in past cases of fraud, the journals' editors and other scientists assert that their system depends basically on trust and that reviewers can check only whether a report's conclusions follow from the data presented. 'Peer review is not set up to test for fraud,' Dr. Campbell said. 'It is set up to provide expert assessment of the scientific credibility and reliability of what scientists report, taking the report itself in good faith.' Dr. Kennedy noted that journals often published articles that were later shown to be innocently in error. 'The public needs to understand that the journals and peer review are not perfect,' he said."

So where did the truth come out? On the Web, predictably, posted by younger researchers who spent less time accumulating vanity co-authorships and more time analyzing the data:

"It was also South Koreans who took the lead in detecting Dr. Hwang's falsifications. Dr. Zach Hall, president of the California Institute of Regenerative Medicine, noted that young South Korean scientists had brought to light many problems with Dr.
Hwang's papers in Web site postings."

jon

Thursday, December 08, 2005

Howard White's co-citation visualizations

Jim Campbell, a U-Me grad student who I hope will be joining this list soon, recommended this article to me:

Howard D. White, "Pathfinder Networks and Author Cocitation Analysis: A Remapping of Paradigmatic Information Scientists"
http://www.umit.maine.edu/~jon_ippolito/cip/white_pathfinder_networks@m.pdf

It's a bit technical, but I was interested in the social patterns revealed by measuring co-citations--i.e., when researchers cite each other. For example, lots of folks online are going to cite Lawrence Lessig, but only those researchers whom he also cites are part of his "intellectual circle." Hence, the number of citations to a work from inside or outside a researcher's intellectual circle gives a sense of its relevance both locally and globally.
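
To make that concrete, here is a minimal sketch of how one might count citations from inside versus outside a researcher's intellectual circle. It's written in Python, and the citation data, author names, and helper functions are hypothetical illustrations built on this loose reading of co-citation--not White's actual Pathfinder-network or author cocitation method:

from collections import defaultdict

# Toy citation graph: cites[author] is the set of authors that author cites.
# The names and links here are made up purely for illustration.
cites = defaultdict(set)
cites["lessig"].update({"benkler", "boyle"})
cites["benkler"].update({"lessig", "boyle"})
cites["boyle"].update({"lessig"})
cites["random_blogger"].update({"lessig"})

def intellectual_circle(author):
    """Researchers whom `author` cites and who cite `author` back."""
    return {other for other in cites[author] if author in cites[other]}

def inside_outside_counts(author):
    """Split incoming citations into those from inside vs. outside
    the author's intellectual circle (local vs. global relevance)."""
    circle = intellectual_circle(author)
    citers = {other for other, targets in cites.items() if author in targets}
    return len(citers & circle), len(citers - circle)

if __name__ == "__main__":
    local, global_ = inside_outside_counts("lessig")
    print(f"Citations from inside Lessig's circle:  {local}")
    print(f"Citations from outside the circle:      {global_}")

On this toy data the script reports two citations to Lessig from inside his circle and one from outside, which is the local-versus-global distinction in miniature; White's paper works with far richer co-citation data and Pathfinder networks.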

jon