Journals: Reading makes you empathetic. Really?

Researchers have put studies from the top journals "Nature" and "Science" through a TÜV-style inspection – many did not pass.

When scientists have found something out – when can you actually rely on it? One answer is: when peers have reviewed the study. Another: when it was published in a renowned journal. But sometimes even both together are not enough, as researchers have now shown – and in the best and most elaborate way: they repeated the underlying experiments and looked to see whether the same thing came out again.

At issue were 21 social science studies from the journals Nature and Science. It does not get more prestigious than that. And of course, work submitted there is examined by experts (peer review). Nevertheless, in almost 40 percent of the cases the same result did not come out again – indeed, mostly nothing came out at all.

    "I would have expected a better result, after all it was about nature and science," says John Ioannidis, medical statistician at Stanford University. He is one of most distinguished fighters against bad science and usually not squeamish with his colleagues. The study TÜV was initiated by American social psychologist Brian Nosek, founder of Center for Open Science. He says: "It could also be that in such magazines, works are less solid, because top-journals prefer sexy results."

What kind of results were at issue, exactly? A few examples:

• Job candidates were rated better when their CV was judged on a heavy rather than a light clipboard.
• Subjects who were primed for analytical thinking with images of Rodin's sculpture "The Thinker" stated that they believed less strongly in God.
• People who were asked difficult knowledge questions, some of which they could not answer, often thought of computers. (The hypothesis: those who encounter a knowledge gap today quickly think of a Google search.)
• Those who had read literary texts could better put themselves in others' shoes.

It all sounds sexy: because the results are surprising, because they seem to reveal unconscious influences on our behavior, and because they are close to everyday life and current issues, such as whether we rely too much on Google. None of these results could be confirmed (in technical jargon: replicated). That does not necessarily mean they are false, botched or fabricated. But it does mean you cannot rely on them.

Even where similar effects did occur when the experiments were repeated, they were noticeably smaller than in the original, on average only three-quarters of the size. If the non-replicable studies are included in the calculation, the average effect across all repetitions even shrinks to half, since the failed replications, with effects close to zero, pull the average down. That is why research critic John Ioannidis says: "If you read an article about a social science experiment in Nature or Science, you have to cut the effect in half."

In similar TÜV projects, 64 percent of psychology studies and 39 percent of economics studies could not be confirmed. In pharmaceutical and cancer research, spot checks showed even higher failure rates. And even the supposedly hard sciences have problems, says Ioannidis: "There are areas of chemistry where it doesn't look much better." Brian Nosek is not surprised: "The incentives are the same in all disciplines: you have to publish a lot, in the most prestigious journals, and for that you need exciting results that can be told as a simple story." An old problem.
