
A landmark sustainability study was wrong. Correcting it took two years

How one professor tried to correct a study on sustainability and stock returns that's been cited 6,000 times.

Professionals can improve trust in science by advocating for error correction. Source: Video Wonder/Shutterstock
Key Takeaways:
  • The central claim of an influential sustainability study — that corporate sustainability increases stock returns — was undermined by a key finding that was misreported as statistically significant.
  • But when a business professor tried to correct the record, he faced numerous institutional barriers.
  • To improve trust in science, which already faces many headwinds today, professionals need to support error correction and strengthen institutional research integrity policies.

The opinions expressed here by Trellis expert contributors are their own, not those of Trellis.

For a long time, I resisted the accumulating evidence that our institutions for curating trustworthy science were failing. I believed our academic gatekeepers were quietly doing their jobs.

That belief ended when I attempted to replicate an extraordinarily influential article: “The Impact of Corporate Sustainability on Organizational Processes and Performance,” by Robert Eccles, Ioannis Ioannou and George Serafeim, which appeared in the prestigious journal Management Science. The paper, which posits that sustainable companies outperformed the stock market by roughly 40 percent each year for 20 years, has been cited more than 6,000 times — by Wall Street executives, top government officials, and even a former U.S. Vice President.

When I tried to replicate it, I found serious flaws and misrepresentations:

  • A key result labeled as statistically significant was not
  • The analytical method didn’t work as described
  • Critical statistical tests were omitted
  • No matter what I tried, I couldn’t replicate the results

I thought correcting the record would be easy. The authors work at highly reputable institutions, and the article appeared in a prestigious journal.

But I was wrong.

Encountering barrier after barrier

Following academic etiquette, I contacted the authors and kept them informed as my replication proceeded. They never responded to more than half a dozen emails.

I submitted a comment (a short paper) to Management Science about the errors, but it was rejected. Reviewers objected to the “tone” of my submission and found it impudent that I was challenging such an important paper. Authors, one wrote to me, are granted “discretion” in conducting their work, and therefore “inclined to turn down any invitation to review a revision” unless it was accompanied by a note from the original authors.

Having no luck with the journal, I turned to the scholarly community for advice, asking colleagues to help encourage the authors to engage. I argued that the best course—for them and for the field—was to correct the mistakes. Doing so would elevate, not diminish, their scholarly standing. Few people responded. Those who did offered excuses. One internationally respected, chaired professor was refreshingly honest: “I’m too much of a coward.” He articulated what many scholars quietly believe: it’s more harmful to one’s career to try to correct a flawed—or even fraudulent—study than to be the one who published it.

Going beyond normal channels

I decided to go public about some of the article’s errors—a step so unusual that I feared it might end my ability to publish future work.

I posted on LinkedIn that a key finding labeled as statistically significant was, in fact, not. Within days, Management Science published a correction from the authors acknowledging the error and attributing it to a “typo.” They claimed they had meant to write “not significant” but had omitted the word “not.”

Convinced that the paper’s reported method was fraudulent, I also submitted complaints to two research-integrity offices. Soon after the offices received my complaints, the authors admitted they had indeed misreported their analysis. Again, they blamed poor editing. There had been two studies, they said, and the false description belonged to an “exploratory” study that was later removed to satisfy length requirements — except that the sentences describing its matching process were inadvertently left behind.

They didn’t explain that this rendered their results uninterpretable. Nor did they submit a correction to Management Science.

That is where things stand today. Their paper continues to mislead thousands of people a year.

Social science needs reform

I now believe our systems for curating trustworthy science are broken. Both individual- and system-level changes are necessary.

As individuals, we can:

  • Stop citing single studies as definitive. They aren’t. Check whether studies you read and cite have been replicated
  • Tell colleagues to stop when they behave unethically
  • Support replication and encourage others to do it, too

Most of all, we need to exercise critical thinking. A close reading of this study should have raised red flags: key tests were missing, the variables were unusual, and the headline claim was implausible. We were told that sustainable companies outperformed the stock market by roughly 40 percent per year for 20 years. Such an extraordinary finding requires careful, credible evidence. That evidence was missing.
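A quick back-of-the-envelope compounding check — my own illustration, not a calculation from the original paper — shows why a 40 percent annual edge sustained for 20 years should have strained credulity:

```python
# Back-of-the-envelope check: what does 40% annual outperformance
# compound to over 20 years? (Illustrative figures from the claim
# as stated, not from the study's data.)
excess_return = 0.40  # claimed annual outperformance
years = 20

growth_factor = (1 + excess_return) ** years
print(f"An edge of 40%/yr compounds to a factor of about {growth_factor:,.0f}x")
```

The factor lands in the hundreds — every dollar of relative advantage would multiply several hundred times over, far beyond anything observed from real portfolios. A reader who ran this one line of arithmetic would have had reason to look closely at the evidence.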

But the result was highly desirable, so our hopes overcame our judgment. It’s a reminder that, in the words of Nobel laureate Richard Feynman: “The first principle [of science] is not to fool yourself — and you are the easiest person to fool.”
