China issues another crackdown on scientific misconduct

Gina Lin

30 March 2009




[BEIJING] China’s Ministry of Education has stipulated seven acts of academic misconduct and how they will be punished in an attempt to combat scientific misconduct in the country.

But critics doubt they can solve the long-standing issue of fraud and misconduct in Chinese academia.

The circular, issued this month (19 March), says that plagiarism, falsification of data and references, fabrication of CVs, and altering others' academic achievements or signing their names without permission all constitute scientific misconduct.

It is the latest effort to tackle the problem. In 2006 the Ministry of Science and Technology created a set of rules to monitor state-funded research projects (see China sets up rules to combat scientific misconduct) in response to six high-profile cases of scientific misconduct that year.

The new measures are aimed at misconduct in higher education institutions, following a recent scandal involving Zhejiang University in Hangzhou, where associate professor He Haibo and dean of pharmaceutical science Li Lianda lost their jobs over He’s alleged copying of data.

Punishment for anyone in breach of the new rules could involve warnings, dismissal or legal charges. Their research programmes could also be suspended or terminated, they could lose their funding, or have awards and honours revoked.

The notice also ordered universities to train teachers and students in good academic conduct.

“These measures are intended to build up a long-term prevention mechanism to keep the academic field ‘clean’,” said Xu Mei, spokeswoman with the ministry.

But critics say the circular only “scratches the surface of a problem”.

Hou Xinyi, a law professor at Tianjin-based Nankai University, says it is the government-controlled grant and award system that has spawned misconduct in Chinese academia.

“In China, the government controls almost all the funding resources, which are usually available for a limited selection of projects,” says Hou.

He adds that because it is much easier for people in senior positions to win funding, researchers face pressure to socialise and build contacts while also finding time to publish as many papers as possible in high-impact journals, the most important criterion the government relies on when assessing eligibility for project funding.

“It is understandable and necessary for the government to have funding control of some major projects essential to the country’s safety and development,” says Hou. “But as for that of others, they’d better leave it to academia to encourage true scientific excellence.”


Study finds plenty of apparent plagiarism

Data mining reveals too many similarities between papers

IS THIS PLAGIARISM? Yellow highlights aspects of this paper that copy material published in a previous paper by other authors. (UT Southwestern Medical Center)

If copying is the sincerest form of flattery, then journals are publishing a lot of amazingly flattering science. Of course to most of us, the authors of such reports would best be labeled plagiarists — and warrant censure, not praise.

But Harold R. Garner and his colleagues at the University of Texas Southwestern Medical Center at Dallas aren’t calling anybody names. They’re just posting a large and growing bunch of research papers — pairs of them — onto the Internet and highlighting patches in each that are identical.

Says Garner: “We’re pointing out possible plagiarism. You be the judge.” But this physicist notes that in terms of wrong-doing, authors of the newest paper in most pairs certainly appear to have been “caught with their hands in the cookie jar.”

Garner’s team developed data-mining software about eight years ago that allows a researcher to input a large block of text (the entire abstract of a paper, for instance) and have the program compare it against everything posted in a database, such as the National Library of Medicine’s MEDLINE, which abstracts all major biomedical journal articles. The software then looks for matches to words, phrases and numbers, and pulls up entries that are similar. The idea: to help scientists find papers that offer similar findings, contradictions, even speculations that might suggest promising new directions in a given research field.
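The workflow described above (submit an abstract, score it against every entry in a database, and return the closest matches) can be sketched with a simple bag-of-words cosine similarity. This is an illustrative assumption, not the team's actual software: the function names, the database format and the 0.6 threshold are all invented for the example, and a real system would also weight rare terms and index millions of abstracts efficiently.

```python
import math
import re
from collections import Counter


def tokenize(text):
    # Lowercase word tokens; a production system would also apply
    # term weighting (e.g. tf-idf) so common words count for less.
    return re.findall(r"[a-z0-9]+", text.lower())


def cosine_similarity(a, b):
    # Cosine similarity between the word-count vectors of two texts:
    # 1.0 for identical wording, 0.0 for no shared words.
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


def find_similar(query_abstract, database, threshold=0.6):
    # Score the query against every abstract in the database (a dict of
    # id -> text) and return high scorers, most similar first.
    scores = [(doc_id, cosine_similarity(query_abstract, text))
              for doc_id, text in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: -s[1])
```

Flagged pairs would still need human review, as Garner stresses: a high similarity score points out possible plagiarism, but the reader is the judge.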

Early on, Garner says, his team realized this software also had the potential for highlighting potential plagiarism. But that was not their first priority. In fact, his group didn’t really begin looking in earnest for signs of copycatting until about two years ago.

Today, Garner’s group has published a short paper in Science on results of a survey it conducted among authors of pairs of remarkably similar papers (identified from MEDLINE), and the editors who published those papers. The Texas team wanted to find out whether the apparent copycats — not only the authors but also the editors who published their work — would own up to plagiarism. And once confronted with this public finger pointing, what would they do about it?

The real surprise, says Garner — indeed, “the shock” — was that so few authors of the initial papers were aware of the copycat’s antics. Prior to emailing PDFs that highlighted identical passages in each set of paired papers, 93 percent said they had been unaware of the newer paper.

Since those newer papers were all available via MEDLINE searches, they should have come up every time authors of the first paper searched for work on topics related to their own. In fact, Garner points out, because MEDLINE posts search results in reverse chronological order, copycatted papers should turn up before the papers on which they had been based.

To date, 83 of the 212 pairs of largely identical papers identified by the software have triggered formal investigations by the journals involved. In 46 instances, editors of the second papers have issued retractions. However, what constitutes a retraction varied considerably. It might mean broad publication of the problems with the offending second paper, both in the journal and in a notice sent to MEDLINE.

Other times, some website might have acknowledged the retraction of some or all of a paper, with no notification of the problem forwarded to MEDLINE. In such cases, Garner notes, anyone using MEDLINE’s search function would get no warning that the abstract it pulled up relates to findings that have been discredited.

Have you ever shared this material on apparent plagiarism with the administrators of the second paper’s authors, I asked Garner. “No, that would have put us into this situation where we would be acting more as police or an investigatory body,” he said. And they’re not anxious to serve as honesty cops.

Too bad.

So far, his team’s software has turned up more than 9,000 ‘highly similar’ papers in biomedical journals indexed by MEDLINE. And only 212 are copycats? Actually, Garner says, that estimate is probably way low. Of that big number, “We have only gotten through looking at 212 so far.” Their investigations continue.
