In 2011, an eye-catching psychology study caused a stir on social media, in the news, and in academia. The study reported that people treat the Internet as an “external” form of memory, relying on it to look up information rather than recalling facts on their own. But in 2018, when a team of psychologists tried to replicate that paper along with 20 other prominent social science studies, its key finding failed to replicate.
Yet the original paper has been cited 1,417 times – more than 400 of those citations coming after the 2018 replication project – far more, on average, than the papers in that project that did replicate. Now, a new study backs up this pattern of unreliable research being popular: social science papers that failed to replicate racked up an average of 153 more citations than papers that replicated successfully.
Michael Dougherty, a cognitive scientist at the University of Maryland, College Park, who was not involved in the study, called the new finding “extremely damaging.” “For a long time, citation counts have been treated as a stand-in for the quality of the research,” he said. So the fact that less reliable research is cited more often represents a “fundamental problem” with how such work is evaluated.
Marta Serra-Garcia and Uri Gneezy, economists at the University of California San Diego, wondered whether attention-grabbing research ideas receive more attention than ordinary ones even when they are less likely to be true. So they gathered citation data on 80 papers from three different projects that had attempted, with varying degrees of success, to replicate important social science findings.
Serra-Garcia and Gneezy reported Monday in Science Advances that papers which failed to replicate were cited significantly more often on Google Scholar, accumulating an average of 16 extra citations per year. That is a large gap: over the same period, papers in the most influential journals piled up an average of about 40 citations a year in total.
And when the researchers examined citations in papers published after these landmark replication projects, they found that the citing papers rarely acknowledged the replication failures, doing so only 12% of the time.
Serra-Garcia pointed out that a replication failure does not necessarily mean the original finding was wrong. Differences in methods and changes in participants’ habits – such as shifting patterns of Internet use – could explain why an earlier result no longer holds. But she added that the findings point to a basic tension in research: scientists want their work to be accurate, but they also want to publish attention-grabbing results. When results are particularly surprising or exciting, she said, peer reviewers may lower the bar for evidence, which means that compelling results and weak evidence often go hand in hand.
Thomas Pfeiffer, a computational biologist at Massey University who studies replication but was not involved in this work, agrees with the principle that “extraordinary claims require extraordinary evidence.” He said the result underscores the need for extra safeguards to boost the credibility of published work: stronger gatekeeping on the quality of evidence, and greater emphasis on rigorous research questions and methods rather than on flashy findings.
“This finding is a bit tricky for (research) culture-change advocates like me,” said Brian Nosek, a psychologist at the University of Virginia who has been at the forefront of replication efforts and was a co-author of two of the three replication projects that Serra-Garcia and Gneezy drew on. But before taking the result too seriously, he said, it remains to be seen whether the finding itself can be replicated with a different sample of papers.
The result is consistent with earlier work suggesting that popular studies are less reliable. For example, a 2011 study in the journal Infection and Immunity found that high-impact journals have higher retraction rates than low-impact journals. And Dougherty’s own research – currently posted as a preprint that has not yet been peer reviewed – suggests that highly cited papers tend to rest on weaker data. However, a 2020 study in the Proceedings of the National Academy of Sciences examined a different sample of papers and found no relationship between citations and replicability. This suggests the pattern may depend on which papers are studied – the effect may be especially strong, for instance, in high-impact journals.
Nosek said that strong but less flashy papers could pile up more citations in the long run, once the popularity contest over striking results has faded. “We’ve all seen enough teen movies to know that the popular kids eventually get their comeuppance. Maybe that’s also true of how scientific findings work.”