New academic journal only publishes 'unsurprising' research rejected by others (cbc.ca)
884 points by apsec112 on Aug 20, 2020 | 202 comments



Half of my PhD thesis is considered "unpublishable" because, after doing the work, my supervisors felt it's actually "unsurprising" that it didn't work out. We took methods that had been exploited to improve on previous results for over a decade to their logical extreme, and found that this method no longer leads to improvements. After doing the work it seems obvious. A paper on the subject would almost be considered uninteresting, and a high ranking journal would ignore it (which is why it's considered "unpublishable"). However, nobody has published this information, and it would help others to not make the same mistake.

I wonder how many times similar "mistakes" have been made by PhD students across disciplines.


My experience: I proposed a thesis to an advisor who deemed it unlikely to work. He ran it by a colleague who came to the same conclusion. I went to a different major and pursued the same thesis and invited the previous faculty who initially turned it down to the defense because it was relevant to their field.

It was frustrating to hear them voice their opinions in the defense that they felt, “of course it would work.” After seeing the data, they took the exact opposite side, claiming it was obvious to the point of being of limited publishing value.


Academia is 'full' of people who lack intellectual integrity. Imagine Frege's response to Bertrand Russell's letter concerning the barber's paradox: 'it's obvious!' Instead, Frege openly acknowledged Russell's criticism in his book.


Imagine how much worse it is outside where there's even no pretense of adhering to any intellectual rigor.

Also within academia there is still a wide spectrum of intellectual rigor across the disciplines. Some things are just more verifiable than others.



I recall Dan Ariely mentioning this in one of his books. His field is psychology/behavioural economics, a field where very often either outcome of an experiment can seem obvious after the fact. (Questions like Do newborn babies have an intuitive understanding of gravity?)

As I recall, he restructured his lectures, asking upfront for a show of hands as to which outcome everyone anticipated, before the big reveal. After making this change, he had fewer people approaching him after lectures saying how obvious the outcome was.


> Do newborn babies have an intuitive understanding of gravity?

I really had no idea about this one, but how it's studied is very interesting.

https://www.livescience.com/18101-infants-grasp-gravity.html


That’s interesting because much of the thesis was rooted in behavioral economics. The faculty that turned it down initially were in the economics department


Do you know which book?


It's one of the three of his books I own, I'm afraid I can't easily narrow it down further. (I own all three as audiobooks, and I recall Simon Jones reading it, but it turns out he read all three.)

• Predictably Irrational

• The (Honest) Truth About Dishonesty

• The Upside of Irrationality


Tangentially related: I wish there were a way to “search” within audiobooks. Once you’ve finished the book, it’s almost impossible to figure out where a specific chapter or passage is if you’d like to go back.


The semantic data format people have had a point all along. Just because digital audiobooks are inspired by books on cassette is no reason the data format can't support all sorts of metadata. We could have a format for written and read-aloud works that, in the proper player software, highlights every word in the text on screen as it's read, with user notations, bookmarks, indexes, and full-text search.
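
To make the idea concrete, here is a minimal sketch (Python, with made-up field names rather than any real standard) of the kind of word-level sync metadata that would make highlighting and full-text search trivial:

    # Hypothetical word-level sync metadata; field names are illustrative only.
    from dataclasses import dataclass
    from bisect import bisect_right

    @dataclass
    class WordSpan:
        text: str         # the word as written in the text
        char_offset: int  # position in the full text, for highlighting and search
        start_s: float    # narration start time, in seconds
        end_s: float      # narration end time, in seconds

    words = [
        WordSpan("Predictably", 0, 0.00, 0.62),
        WordSpan("Irrational", 12, 0.62, 1.30),
    ]

    def word_at(time_s):
        """Return the word to highlight at a given playback time, if any."""
        starts = [w.start_s for w in words]
        i = bisect_right(starts, time_s) - 1
        if i >= 0 and words[i].end_s > time_s:
            return words[i]
        return None

    print(word_at(0.8).text)  # -> "Irrational"

With timing data like that stored alongside the audio, the player can highlight as it reads, and "searching an audiobook" reduces to ordinary text search plus a seek to the matching start_s.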


Kindle supports this with Whispersync. I don't know how the file format works.


They'd rather charge you separately for a DRM-locked ebook version.


I've been noticing a variant of this in myself lately. Maybe I think about some problem and stew on it and think to myself that it's probably not possible, or at least I can't come up with ideas. Then I hear that somebody else has made progress and suddenly I have a bunch of ideas. Somehow switching from "how could this work" or "can this work" to "how did they do it" leads me down entirely different paths.

I've been trying to get better at recognizing the bias and switching viewpoints without the external push.


I often think about this Michael Abrash story: (chapter introduction) http://orangeti.de/OLD/graphics_programming_black_book/html/...

When I'm stuck, or getting close to stuck, I always try to assume what I want to do has already been done in some way. Long Google searches or discussions with domain experts, purposely vague, looking for similar ideas. Even a ridiculously not-so-related paper, or a mention in a paper, will launch me into an idea-generation frenzy and I'll quickly build confidence.

Love M. Abrash's books. Shame he didn't keep writing them, they were inspirational for me.


This is an interesting perspective that I think may exhibit itself in many domains. I’m reminded of how the sub-four-minute mile was considered impossible, and then, once it was first broken, many others completed the same feat within a relatively short period of time.


I like the linked Egg of Columbus:

https://en.wikipedia.org/wiki/Egg_of_Columbus


I guess the lesson is, when an advisor rejects your thesis idea, get them to put their reasons in writing.


I had their rejections in email. Ultimately, the committee brought them around and accepted my thesis so I didn’t feel it was worth burning those bridges.


What would that get you though, other than knowing you were right (which you already knew without having it in writing) and (if you decide to publicly call them out on it) enemies for life in your chosen field of study?


Oh, I wouldn't do it publicly. The point would be to push back, in private discussions, against the argument that the result was not interesting enough to publish.


I believe this quote from J.B.S. Haldane is relevant here:

I suppose the process of acceptance will pass through the usual four stages:

(i) This is worthless nonsense;

(ii) This is an interesting, but perverse, point of view;

(iii) This is true, but quite unimportant;

(iv) I always said so


It's actually pretty impressive when a scientist goes through the full cycle, especially if they're already at the top of their field. Usually, they never make it past (ii), hence the Planck principle: "science advances one funeral at a time" (see The Structure of Scientific Revolutions by Kuhn).


Didn't you ask them "But didn't you say this was unpublishable?" and what was their response?


I did not. Maybe I was being weak or maybe I didn’t want to try and make them look bad in front of their peers but I did not bring up any of the previous conversations during the defense


Thanks for the response. I'm always just curious how people react to things like that.


Since the first and second interactions were 4 or more years apart, it is entirely conceivable that the field had moved enough during those years to warrant a genuine change in opinion.


It's also possible that they were just trying to protect a student from investing years into a project that they deemed to have a low (but perhaps non-zero) chance of success. This is a thing that good advisors should do to protect their students from career-wasting wild goose chases.

The fact that the two interactions were very different with four years and a completed thesis between them doesn't surprise me at all. My own embarrassing story is that I advised Jason Donenfeld to submit his WireGuard paper to NDSS, forgot about the meeting entirely after a few months, then complained (in retrospect, unfairly) when NDSS accepted it. Advisors do stupid, embarrassing, forgetful things all the time. The OP's story isn't even a misdemeanor.


Well I guess the problem there is inviting them to your defence. People gonna people.


True. In retrospect, I was a bit naïve.


I would have challenged them to a duel.


But be sure to put all your brilliant thoughts into writing and send it to a friend, you just might get a whole area of mathematics named after you...


That sucks to hear. Null results are important, as you say, if only to dissuade others from doing the same.

See also the "file-drawer problem" (https://en.wikipedia.org/wiki/Publication_bias). Also, with regards to the incentives in the field and the lack of null results, there's always Ioannidis's classic work (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...).


I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.

A null result simply means you tried something and it didn't work. But you don't know why. You haven't proven it didn't work. There are literally millions of reasons why something might not work. For instance, you could try to use compound X to cure disease Y, observe no effect, and conclude that X doesn't cure Y. But what if somewhere in the process of making X you made an uncaught mistake and you instead used X'?

A negative result means that you tried something and you came to the proven conclusion it doesn't work. This is, crucially, as hard to obtain as a positive result. In my example, it would imply a much longer process than simply "apply X, see no effect in Y, make a few robustness checks, done".

You could say "Well, publish the null anyway, somebody will catch the mistake". Unlikely. There are already so many papers out there that keeping up is impossible. If we also published null results, this number would grow tenfold at the very least. Nobody could possibly check everything. People will see a paper titled "X doesn't cure Y" and call it knowledge, stifling a possible cure virtually forever.

Am I splitting hairs? Perhaps. But I think HN prides itself on being a scientifically minded community, and thus it has a mandate to use terms correctly. Confusing "null" with "negative" is a sin.

I hope one day I'll find a way to strongly and passionately argue against the "null results are as important as positive results" position. It is a bad meme. Charitably, I consider it an honest mistake most of the time. But sometimes it gives me the impression of being a cheap trick used to erode the reputation of academia.


I understand that people here generally mean 'negative results', by your definition, when they say 'null results'. That's the intent.

Null results are also important.

Suppression of null results allows for p-hacking and confirmation biases to creep into research, and greatly reduces the power of literature reviews.


True, but I'm not really arguing against what you're saying. It is true that, when you have a positive (or a negative!) result, you should also report on the nulls you obtained along the way (most likely in the supplementary materials) as a complement to the result, to put it into context.

What I'm arguing against is publishing a null result as a stand-alone publication. This creates the illusion of it somehow being a "result", which it is not (in fact, we should stop calling them "results" altogether). With a null you haven't proven anything, and thus it is not a sufficient basis for a publication.


I see. Thanks for adding to the clarification. I think that the presentation of nulls as "results" can definitely be disingenuous. Ideally, science would have a better database to keep track of what people find, where we could add nulls in a way that doesn't highlight their "importance". As the person above says, reporting nulls is still useful to prevent p-hacking and publication bias.

(Of course, ideally I think we'd be better off focusing on reporting the data in a Bayesian approach, but that hasn't really gotten traction in the broader community.)


> Negative results are important. Null results are of very limited interest.

Correct. There is a highly cited paper in CS where the author showed that a mathematical model that was widely used in research didn't actually work (anymore) in reality. That paper was the starting point of a lot of new research in that field.


Can you add a citation for that paper?


> I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.

I agree they're different, but disagree that they're worlds apart. There's a spectrum between them, caused by uncertainty and statistics. If I say the average treatment effect of my new drug is probably somewhere between -x and +y, it could be a negative result or a null result. It's the fuzzy line between statistically insignificant and materially insignificant.

Maybe I only had two patients per experimental cell, so I barely learned anything. The drug's treatment effect on lifespan is between -30 years and +10 years. It's "null" in that we didn't learn much of anything.

Maybe I had a billion patients per cell and I learned that the average treatment effect on lifespan is between -0.001 days and +0.1 days. It's "negative" in that we learned the drug doesn't materially affect lifespan.

The position we seem to be in is that most conventional experiments are powered at 80% for a moderate effect size, meaning that many of our null-or-negative (-x, +y) results will fall right around the region where it's unclear whether they are null or negative.
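
A minimal numerical sketch of that spectrum (made-up numbers: an assumed outcome SD of 10 years and a simple normal approximation for a two-arm comparison) shows how the same point estimate slides from "null" to "negative" as the sample grows:

    import math

    def ci95_half_width(sd_years, n_per_arm):
        # 95% CI half-width for a difference of two means (normal approximation)
        se = sd_years * math.sqrt(2.0 / n_per_arm)
        return 1.96 * se

    sd = 10.0  # hypothetical SD of the lifespan outcome, in years
    for n in [2, 100, 10_000, 1_000_000]:
        print(f"n per arm = {n:>9}: estimated effect = 0 ± {ci95_half_width(sd, n):.3f} years")
    # n = 2         -> ±~20 years: we learned nothing ("null")
    # n = 1,000,000 -> ±~0.03 years: any effect is materially negligible ("negative")

The in-between rows are exactly the fuzzy region described above, where a study powered at 80% for a moderate effect leaves the null-versus-negative question open.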


I generally agree in the sense that "null results" should not be published as "results." But, especially in the experimental sciences, I think it would be an incredible (and very useful) feat of work to have well-documented experiments that turned out to be ultimately null or failed, to prevent others from doing the same. (Or, on the other hand, to have people improve on the given methods in order to get a positive/negative result in some specific sense. For example, photonics returning to lithium niobate platforms, which were essentially abandoned in the 80s but have had incredible successes lately. I'm sure there's been a lot of replicated work here.)

Of course, the problem with all of this is that there really aren't very good incentives to accurately and carefully report null experimental results (except as a kind of "folk knowledge" within a given lab) which would limit its general usefulness. But the "platonic ideal," so to speak, of a null result journal I think would be relatively useful.


I think you need to rework your definitions. Avoid using the word proven. Most of the time science proves things false. You can't prove anything to be true.

The difference between a null and a negative is just that a negative is an interesting null. In your null example, to create a proper negative you'd probably report several compound synthesis methods instead of one. You'd probably also want to use more mice/data in your analysis.


Those are some good reads, and have absolutely been my experience. It's depressing how many publications that I've come across don't provide the whole story, and are probably false.

I've found that looking at what a paper doesn't report can be far more important than what they claim.


I wonder how many novel techniques could come out of these types of reports if they were actually analyzed by ML or NLP.


I definitely think there's more room for this sort of guided / ML analysis, but I'm not quite sure how to make traction on extracting the structure of scientific papers... hopefully someone with more experience can chime in.

I think paper discovery has recently gotten a huge boost thanks to ConnectedPapers, though. [https://www.connectedpapers.com/]


From experience, I want to say: way too many. Journals and published articles make research look like a lab full of PhD students, each working on their own without access to all of the lab's previous results (good, bad, and everything in between), who don't talk to the others except for a 10-minute seminar every six months to quickly show some stuff.


IMO, nowadays top-tier conference papers tend to focus too much on telling an interesting story. This pushes researchers to show only the surprising results in the paper; the unsurprising ones are hardly mentioned.

However, the uninteresting part would definitely help others not make the same mistake. An uninteresting result is still a result (and a contribution), isn't it?


Yeah, that's the TED Talk effect, as some people put it

You cannot make a TED Talk about something that people already know


What if you presented it with a bunch of single-word slides and had a compelling frame? “What my 10 years among uncontacted tribes in the Amazon taught me about the boiling point of water”


> We took methods that had been exploited to improve on previous results for over a decade to their logical extreme, and found that this method no longer leads to improvements.

This actually sounds like a really good review paper! Review papers serve multiple purposes: getting people up to speed on a subject, and putting your own spin on a subject to guide future investigation.


One of the most important things my PhD advisor taught me was to design an experiment so that whichever way the result comes out, it tells you something interesting (even if one of those ways might be more surprising, and more interesting).


1. Other than your time, why not preprint it?

2. As an aside, I can't tell you how many times I've tried to work on stuff, it ends up working, and then I find papers and people saying what we did would never work. Sometimes the ignorance is good.


What field is this? In the physics papers I've worked on we generally try to state all the assumptions we made when we rule something out, but we do sometimes miss things, and I suppose that in some fields the preparation might be messier.


> then I find papers and people saying what we did would never work.

Seriously? What kind of scientific paper makes such claims?


To be fair, it's mostly people that do this, because publishing negative results is rare. There have definitely been papers saying this though; plenty of shade is thrown at basically every new method in its infancy, with papers saying why it won't work (you can find plenty of academic papers dunking on the human genome project and shotgun sequencing, next-gen sequencing, TALENs/CRISPRs, gene therapy, immunotherapy, AI, etc.).


Sounds like a Columbus' Egg kind of situation. Your conclusions may be obvious in retrospect, but they weren't obvious at the time that you chose to pursue them and your supervisors gave you their blessing.


> A paper on the subject would almost be considered uninteresting, and a high ranking journal would ignore it (which is why it's considered "unpublishable").

There is a range of journals from high ranking to solid mid-level to lower tiers to somewhat suspicious to downright obviously pay-to-publish. You can always find a level that will publish your article.


The problem in such cases usually isn't in finding a willing journal, but constraints on the authors. For example, during my PhD at a leading biological research institute in India, there was an informal ban on sending manuscripts to open access journals - a rule instituted by the Director, whose office had to approve every submission. At some point this ban was extended to conference proceedings, or journals below a certain Impact Factor. These rules might have been overturned by subsequent administrations, I don't know.


What's even worse, though, is that this creates enormous pressure to tweak results such that the findings are publishable.

I know one person who basically couldn't get their PhD because they couldn't reproduce another experiment and after several years of trying is pretty much certain the original results were faked in order to be publishable.


Nature scientific reports? A number of people have mixed feelings about the journal, but I think it does encourage people to publish work that is technically correct, but not exciting. I remember going through the pain of publishing a boring piece of work before that corrected a boring but incorrect study by someone else. It's tedious, but I try to think of it as community service.


You could publish this in a blog (summarized, of course). At least other scientists/researchers would be warned.


That's all well and good, but PhD students have zero incentive to do this, and the blog would likely go completely ignored anyway. Not only does a blog post not help you graduate but in order for anyone to care about the results posted on your blog you have to market it!


Would it be possible for you to upload the paper to researchgate.net ?


Same experience for my master's thesis.


Why don’t research groups just publish to their own websites or directories like arxiv? What’s the role of an academic journal in 2020?

Honest question, I’d love to see more blogging from hard science academics but I’m wondering if there’s a reason why that’s challenging or if it’s just academic culture. We should have a Substack/OnlyFans for scientists.


1. Plenty of scientists do. In biology, medRxiv and bioRxiv are pretty popular. For example, all publications from my lab are first put there (www.kosurilab.org/publications.html). If we have smaller pieces of work, we tend to just open source them. It's not widespread practice, but it's definitely not uncommon.

2. Plenty of good blogs, open source results & protocols, a strong github community in academia. Again, not widespread, but not uncommon either.

3. The role of the academic journal is attention. There is a hierarchy of journals meant to signal quality, and they are hard to get into. They are very, very useful as a signal for future career prospects (much in the same way as going to a good school).


Speaking from my own experience in the physical sciences, labs don't self publish unsurprising results because there are only so many hours in the day and it's not worth the effort.

Even just putting the results on your own website is a lot of work. Pulling all of the data together, analyzing it, putting it in visuals, writing up the results. It can be hard to justify committing that much time to something where the pay off is "other people might be interested".


Hmm... it seems like this would be a good thing for undergrads to do in a class setting or an internship. It would give them experience writing an actual paper, albeit with null results.


It would require a lot of babysitting. Some objects have a few slightly different definitions (for example, one book has one definition, and in the other book the definition is 1/2 of that). Sometimes the programs that run the calculations don't use the same variables as the paper (perhaps the team changed their opinion, or the main reference, and the graph must show x+y vs y). Sometimes the work that should be included in the paper is underdocumented, ...

Another difficult part is selecting what to publish, for example cutting the dead branches and adding a bit more data about the interesting part. It is not usual to get a bunch of data and just publish it without some additional work.


Yep, and it’s not really how authorship is supposed to work.

The hard part here is that communicating any idea clearly to an audience takes massive effort, and usually null results, unless quite interesting in a specific context, are naturally a lower priority.

I’m currently working on a paper built around what I believe to be a fascinating null result though...


> It is not usual to get a bunch of data and just publish it without some additional work.

I thought that is what data lakes and event sourcing are supposed to solve.


I'm not sure what that means, but we are not using it.

In medicine some studies are preregistered, but one of the lessons of Covid-19 is that each week there is a new study that is clearly unregistered, without a control group or with a definition of control group that makes me cry (like "an unrelated bunch of guys in another city").

I think the people in particle physics have a clear process to "register" what they are going to measure and exactly how they are going to process it. (The measurements are too expensive and too noisy, so it is very easy to cheat involuntarily if you don't have a clear predefined method.) Anyway, I don't expect them to have the paper prewritten with a placeholder for {hint, evidence, discovery}.

In most areas you just put whatever your heart says into the blender and hope for the best. Or run a custom 5K LOC Fortran 77 program (Fortran 90 is for hipsters).

If you get an interesting result for X+A, Y+A and Y+B, you probably try X+B before publishing because the referee may ask, or more B because B looks promising.

If you run a simulation for N=10, 20, 30 and get something interesting, you try to run it for N=40 and N=50 if the interesting part is when N is big, or for N=15 and N=25 if the program is too slow and the range is interesting enough.

And it is even more difficult in math. You can't preregister something like "... and in page 25 if we are desperate we will try integration by parts ...".


That’s an option.

What would be useful is a low effort way to translate what’s written in a notebook (or electronic notebook) into a nice summary that can be shared.


That, and on the other side of the equation, there’s not enough time to read the stack of papers I already know to be deeply interesting. I could read in my subfield of neuroscience 24/7 and never catch up with the deluge of new, interesting, and high quality work. I agree that negative results should be publishable, but the whole incentive system in science must change to accommodate that.


This. Writing up work is a LOT of work. Doing that for something that's negative, often for unknown reasons is just opportunity cost most of the time.


Copyright issues too. A lot of journals require that they have the sole copyright to the work they publish. If you have already published a portion of the work on your blog or whatever, then things can get lawyer-y.


So the same reason programmers don't document our code? We don't realize that without communicating our work, others can't make use of it. I think this is even more true in science than in programming.


> What's the role of an academic journal in 2020?

For me, trustworthiness. Not all journals are equal but some are held in such high esteem that I would grant a lot more credence to the findings of an article published in a journal than one self-published.

I have neither the time nor the will to assess the merits of each individual who might publish something. If a name is sufficiently big within a field, it's usually in a sufficiently big journal; on the other hand, I would treat a self-published article with the same level of skepticism as those low-tier journals that happily publish pseudoscience as fact.

Call it snobbishness if you will, and I'm fully aware that academia is full of it, but that's the role that a journal fills for me.


For me it's also a probability game:

If it's good research, it's more likely to be published in an academic venue (and possibly the other way around). So if there are a lot of papers, I will prefer the academically published ones.

I have limited time to read publications, I am not going to read everything just because it is available somewhere. For me, the amount of papers published these days is an argument for academic venues, not against them.

(As I am a computer scientist I have a hard time writing "journals" instead of any more general term, as conferences are way too popular in this field.)


One cool thing about "unsurprising" results—trustworthiness of the source doesn't mean much more than fitting your preconceptions in any way that reviewers can tease out anyway.


If a self-published article included full equipment and software to replicate their results in a multimedia fashion (basically included the engineering), would that alter your snobbishness?


It's about laziness and efficiency rather than snobbery, IMHO.

You're outsourcing the QC to a trusted third party.

Reading papers takes up enough time and effort as it is. I do not have the time to reproduce all of them.



I don't have time to reproduce results, nor should I be expected to do so. I trust the journal to trust the author. It's a chain of trust.

I shouldn't have mentioned snobbiness; people've gotten caught up on the wrong part of the message.


i think it's largely culture. academics are "graded" based on how many papers they publish. depending on your geography, quality matters too.

but in most cases they aren't measured by how many non-peer-reviewed publications they have.

there is also a technical barrier that works both ways. most don't have the capability to have a regularly updated website with content that otherwise would have been put in the file drawer. and the other side of that coin is what audience will actually find / read it online.

twitter is becoming the de facto medium of dissemination, however, so that may bode well for promoting other types of publishing media.

edit ALSO note that there are some academics who publish high-quality blogs. i'm thinking of [murat demirbas](http://muratbuffalo.blogspot.com/). i'm not familiar with any that publish actual results, however. the threat would be someone else might "steal" the idea and publish it elsewhere, particularly in competitive fields like bio.


> there is also a technical barrier that works both ways. most don't have the capability to have a regularly updated website with content that otherwise would have been put in the file drawer. and the other side of that coin is what audience will actually find / read it online.

Add to this: the ability to get things indexed properly in google (scholar), easy ways to update metadata, server availability, dois, apis, etc. I contract for neliti.com and we provide stuff like this now for orgs/journals and conferences all over the world outside of the US for en, id, tr, ru, uk, es, pt, and ms locale content (pdfs, xmls, docx, datasets, etc.), but mostly a lot of Indonesian orgs/journals and conferences, with ~40 using their own custom (sub)domain we route for.

I hope that eventually the landscape moves to a place where these services can be provided for individuals too (lots of upstream stuff like dois requires having an organization, which i think is stupid in this day and age, when any piece of digital content can easily get an identifier, related org or not).


> What’s the role of an academic journal in 2020?

Status. Brand. Trust. That's all it's really about. If Newton and Einstein started a curated website of the best papers of the year, those papers would probably experience increased citations, and you would probably pay a premium to subscribe to that list.


Well, as for non-‘vanity journals’, I would say the anonymous peer review process. Similarly, the publishing process after the initial anon peer review process can be generative + a healthy back and forth revision process. Plus it has the potential to take you out of sub-sub discipline echo chambers w/r/t some aspects of methodologies and other approaches to even writing/explication. Largely, though, I would say from both sides, (I.e. reader and scholar), the journal and all that comes with it (I.e. peer review process, the academic press and their branding, in some disciplines the main editors can also lend something to this) lend a greater degree of credibility and confidence in the expertise of the scholar(s). I mean this in a sort of take it for what it is, the positive and the unseemly all at once, sort of way.


There are sociology studies, published in closed journals, that detail the positive impacts that public libraries have on communities. The PhDs of that field must either be (1) blind to the irony, (2) totally resigned to this state of affairs, or (3) themselves complicit as managers of closed journals.

One more indictment: I'll bet they're all extremely depressed about how scientifically illiterate the general population is. They've probably published papers about that too, which the public can never read.

Some people see the devil in the Koch brothers, some people see it in Trump. I see it in the managers of closed journals. If you possess scientific knowledge and hoard it away behind a paywall, you are going to science hell. If you make a scientific discovery and sign away the publishing rights to somebody who intends to hoard it in this way, then you are spending at least a few years in science purgatory.


Well said. I don't understand why public institutions, at the very least, aren't required to share their research. And considering that the majority of higher ed, public or private, wouldn't last a day without federally backed loans, I'm not sure only public institutions should be held to the fire.

I take actual pleasure in pirating research papers.


Any research funded by the NIH is required to be published open access at PubMed Central.


> Why don’t research groups just publish to their own websites or directories like arxiv?

Depending on the field, this is often the case. Here are some examples from both of the labs I'm in, with PDFs available:

https://stanford.edu/~boyd/papers.html

https://nqp.stanford.edu/journal-publications

(I also post the PDFs and links to code on my website, as well.)

> What’s the role of an academic journal in 2020?

Partly peer review, partly signaling game, and partly exposition/advertisement/reach of papers. Those are, of course, all intimately linked in scientific fields.

> I’d love to see more blogging from hard science academics

I think a lot of them have taken the Twitter route! There are a few who also have their own somewhat-updated blogs (myself included).

> I’m wondering if there’s a reason why that’s challenging or if it’s just academic culture.

The problem is kind of general, though: (a) writing good, useful blog posts takes a long time, especially when attempting to distill the topic even further for a general audience, and (b) writing (good) papers really does take a long time. (Of course, there are many, many poorly-written papers, but, unless the results are truly incredible, almost nobody spends time deciphering them.)

For example, I think I'm a fairly fast/decent academic writer, yet on a given paper, I spend roughly half of the time doing research and the other half of the time finding the right presentation/abstraction to present, along with writing and editing a given exposition to make it clear and legible. Any given paper will take me ≥20 total hours in structuring, writing, and editing (not including research hours). Reviews can take ≥40.


I have very limited research experience (2 undergrad practicums), but the professor I worked with was a leader in their field and sat on a journal review panel along with others of similar standing. So I suppose the consensus among colleagues is "x, y, and z are leaders on a topic, their opinions are in journal b, so I should read that."


Working grad student scientist here.

One reason is prestige. This is not a fluffy concept in academia - publishing in prestigious journals means funding and tenure for professors and good job prospects for grad students.

Another is time. Every minute spent blogging is a minute spent not doing the above.

Another is curation. Frankly, Nature, Cell, Science feature work from the biggest, most well-funded labs. They have a lot of stuff masquerading as novel, but they also feature pioneering, transformative work. Science is about methods, and the fancy journals are where you see methods first.

It's a tragedy-of-the-commons situation, sort of. We are all very aware of the problems with academic publishing.


> We should have a... OnlyFans for scientists.

Somewhat relevant Dilbert strip: https://dilbert.com/strip/1993-04-09


Because (at least it used to be this way) the paper's publication score, which is relevant for your quantified objectives (universities run by administrators and all), is weighted by the journal's impact score, so e.g. one paper in Nature is 'worth' more than 10 in conference proceedings.

Most people do publish 'working drafts', in practice the same paper submitted to the journal, on their own sites and to archives.


While we're at it, why not make public whoever audited or refereed the paper? Why does that need to be secret?


We should have less transparency, not more. Right now the peer review process is single-blind, but it really should be double-blind.

Even if someone doesn't have a PhD or work at a research institution, they should be able to publish good science. Right now, that just isn't possible. And the opposite problem is also true: if you're a big shot in your field, you'll be able to find at least one journal that will publish whatever crap you submitted, regardless of the quality.


Getting that benefit only requires double-blinding during the review process. There’s no reason that both sides of the blinding cannot be removed (and also revealed to third parties) after the review process is complete.


I can't think of any real upside to that, but can think of a lot of downsides. Humans, and especially academics, are petty (I've noticed a trend in my work environments: the less money everyone makes, the more power gets sublimated into inane dick-measuring contests), and the less personal it is, the better.


How about if the anonymity is "de-blinded" only partially, replacing it with pseudonymity?

Imagine a system like this:

1. The reviewers on your paper not only have to collaborate on a decision to accept/reject, but also each write an opinion (like a legal opinion: https://en.wikipedia.org/wiki/Legal_opinion) about your paper, individual to themselves, after the consensus to accept/reject is reached (so some of them will likely be dissenting opinions: https://en.wikipedia.org/wiki/Dissenting_opinion).

2. The reviewers are each assigned a global permanent pseudonymous identifier—a UUID, basically—known only to them and some "Society for the Advancement of the Scientific Process in Academia" organization.

3. Every vote a peer-reviewer makes, and also every opinion they write about a paper, must be registered with the same academic-process org, whose job is then to collate and publish them to the Internet under the reviewer's pseudonymous identifier.

You'd be able to use such a website to both 1. audit the peer-review process for a given paper; and 2. cross-reference a given peer-reviewer's votes/opinions.

Additionally, the standards body itself could use the cross-referencing ability to normalize peer-reviewer votes, a la how the Netflix Prize recommendation systems normalized ratings by each person's interpretation of the star scale. (They'd have to ask peer-reviewers to vote with something more fine-grained than a binary pass/fail, but that'd be an easy change.)
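
As a minimal sketch of that normalization step (made-up pseudonymous IDs and scores, with simple per-reviewer z-scoring standing in for whatever the Netflix-style method would actually be):

    from statistics import mean, pstdev

    # pseudonymous reviewer id -> {paper DOI: raw score on a 1-5 scale} (made-up data)
    reviews = {
        "reviewer-a1b2": {"10.1000/x1": 2, "10.1000/x2": 3, "10.1000/x3": 2},  # harsh scorer
        "reviewer-c3d4": {"10.1000/x1": 4, "10.1000/x2": 5, "10.1000/x3": 4},  # generous scorer
    }

    def normalize(scores):
        # z-score each reviewer's ratings so their personal use of the scale cancels out
        mu, sigma = mean(scores.values()), pstdev(scores.values())
        return {doi: (s - mu) / sigma if sigma else 0.0 for doi, s in scores.items()}

    for reviewer_id, scores in reviews.items():
        print(reviewer_id, normalize(scores))
    # After normalization, both reviewers rate 10.1000/x2 as their relative standout,
    # even though their raw numbers never overlap.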

The only thing I would worry about in such a system, is that academics might not want the negative opinions of the peer-reviewers on their paper to pop up when random other people plug the paper's DOI into Google Scholar, because a dissent on an accepted paper might unduly impact the paper's impact-factor.


You can't think of any reason? What about retaliation for being the less enthusiastic reviewer? What about the opposite, that you become known for being an easy reviewer who missed some obvious flaws?


You're still assuming that this happens right away. How about if the blinding is removed after 50 years? Then people studying the history of science would have the data, but it would have no impact on the careers of the people involved.


Double blind reviews are the standard in at least some corners of CS. Not only conferences, also for some journals.

Though you will also encounter single blind reviewing. I haven't encountered a truly open review process yet.


People might retaliate against a colleague who gave a bad review.


Indeed. I've seen some mightily acerbic rebuttals to other researchers' articles published as articles; I dread to think what a rebuttal to a review might look like.

I'm sure someone will say "well, maybe the acerbicness is the problem"; perhaps so, but I welcome the rigorous honesty with which some academics willingly write.


True, and just imagine if the reviewer were a PhD candidate who would be seeking a job in a few years.


It might be that the person whose paper you're reviewing will some day be able to influence whether you'll be hired.


There's a pragmatic reason and a serious reason:

1. Pragmatic: you don't get tenure for publishing to your website.

2. Serious: peer review, despite its flaws, is a crucial safety check in science and it's not wise to side-step it.


Institutional bureaucracy only cares about official journals. People in real fields use arXiv anyway - the more the researchers use arXiv or similar, the realer the field.


Tenure and funding are based on performance in journals and previous grants. Informal writing and reporting is nice, but it won't "pay the bills."


Peer-reviewing.

Research that hasn't been peer-reviewed is worth nothing, in my opinion, and peer review can't be achieved with open archives.


Luckily not everyone feels this way. Some people will mail an author if they find a mistake in an open-archived paper/data/document they read, because they think it could be useful/interesting, and the author may be willing to update it with all the relevant metadata acknowledging the edit.

The more of academia that happens outside of well defined institutions, the better for the public.


but journals and peer-review enforce this. That's a good thing (though I agree that letting journals do this is stupid – arxiv is out there, why can't they sign up unis to make people review the things there.)


Yeah, but it comes with all the gate keeping, prestige laundering and grant hamster wheel dynamics. Works for some, but not all.


Only if you do true double-blind reviews and don't do the thing they do at the great CS conferences (e.g. the chair is from FANG, 80% of papers are from FANG; the chair passes everything from their FANG).

Open science is great, but the mixture of boasting over mediocre results (hey, we can classify 10 more samples correctly from MNIST) and outright faking stuff (which also happens) is a problem even with journals. It doesn't get better with arXiv and all the details packed away in never-published supporting information (this has happened to me a few times now...) - of course you can ask, and then, SI in hand, realize: this is bullshit... Good peer review should weed this out.


I don't deny that "good" peer review (people may have varying definitions of good, of the acceptable costs of obtaining sufficient goodness, of whether those costs should be unquestioningly borne by an individual or by society, of whether such costs can be minimized, and of how such minimization might negatively affect those who are not directly integral to academic pursuits but are integral to the current system that provides it at non-minimized costs) may weed these behaviors out more often than not (boasting over mediocre results, outright faking stuff, etc.), but for me, I'm not too concerned, because this stuff will happen to some degree no matter what (like you say, `It doesn't get better with arXiv`). We are only human.

I'm more interested in lowering the _costs_ of publishing/disseminating/accessing research, so that individuals/orgs/etc. can bypass expensive institutions if they so choose, or even gain the ability to participate at all when they have no other realistic options due to the current costs of participation in the ecosystem. (I personally deal with thousands of journals/conferences/orgs every week that use software I write, at significantly lower cost than, say, the systems MIT has at its disposal; without it, their works would not be available to society. I've seen domains I route for go down when an org stopped paying its bills for a month or longer, and we had to redirect publications/data back out from the domain so that they could still be available.)

Any system that can drive down the costs of pursuing academic interests, for both the individual and society, will eventually subvert those that came before, relegating them to a hollow shell of the purpose they once served, if any aspect of the previous system manages to survive at all.


We also need a journal to publish methods that failed. I did so much work during my PhD that was dead end and is not documented.


In the commercial software world you just open-source the code and hope your competitors adopt it.



Is it normal for journals to charge a fee for publishing?


Yes. Academic publishing is a bizarre market. Editors and referees volunteer their time. Authors pay to publish. Libraries pay for subscriptions so that researchers can read. Publishers handle distribution, but in a digital-first era that's mostly pure profit.


This is normal practice. To publish in a respectable journal you are charged £1000+. To publish your paper as open access, you can be charged another ~£1000 for the privilege (IEEE).


Depends on the field. For example, practically all of machine learning and related fields are $0 to publish and are open access by default. It’s been a huge boon for everyone except I guess traditional publishers.


>It’s been a huge boon for everyone except I guess traditional publishers.

My heart bleeds for them!


And the reviewers will get exactly zero out of this


And I see all these people cautioning against publishing in predatory journals, which can be distinguished by the fact that they require me to pay to publish. Then we have the big respectable journals, which also require payment to publish. Hmm... sounds like the only real difference is that the respectable journals are considered respectable and the predatory journals are considered predatory...


This is a relatively new practice, invented by Robert Maxwell in the 1950s or so.


I think it's laughable they still charge extra for color pictures.


They charge extra to have your pictures printed in color; online everything is in color anyway. I think that is a fair practice, no?


No. You should be able to submit color regardless of medium. Besides, color printing isn't that costly compared to their high margins. It's in their favor too if people use color images; the age of simple scatterplots has passed for most fields.


Never encountered paying for non-open access publication. I have only encountered "article processing fees" in relation to open access publishing.

If you're okay with having your paper paywalled, you do not need to pay for journal publication.

So I'm curious in which domain/subdomain you have encountered this.


In my area of semiconductor engineering/detectors, we generally have to pay to publish. Journals do seem to be slowly moving away from this for plain publication; however, as you say, for open access you still need to pay. Depending on the funding provider for the research, it can be compulsory for the papers to be open access, so we would end up paying anyway.


It's only normal for open access journals to charge (or for making a paper in a traditional journal open access), as those journals cannot sell access to the article. And the fees are usually a few thousand.

Of course it's also most common for "predatory" journals to be open access and charge thousands.


"Start Year: 2014"

"This is a new journal. No publications have been accepted yet."

Looks dead to me.


there was a journal of negative results in biomedicine, but i don't think they've published since 2017.


I support this. Although a journal may not accept that kind of work, you can still publish it in your blog.


Chemistry is so bad for this. That, and how everything is behind a paywall that only multinationals and universities have access to.


Good, science needs more of this. A negative result is a result, and if you don't publish it, then you end up skewing the distribution of results. Which can lead to people (and meta studies) drawing incorrect conclusions.


Ben Goldacre talks about this in Bad Science. Well worth a read.


I'd also love to see a journal that agreed to publish work before any results are known. Researchers would submit hypotheses and methodology for review, and the journal would publish the results after the experiments were conducted, regardless of their outcome.

It would still incentivize interesting hypotheses, but wouldn't lead to results-biased publications.


This is a great idea and is already being done (somewhat) in some fields of psychology through pre-registration [0, 1, 2]

[0] https://en.wikipedia.org/wiki/Preregistration [1] https://www.psychologicalscience.org/publications/psychologi... [2] https://www.psychologicalscience.org/observer/preregistratio...


There is a workshop[1] at NeurIPS this year (2020) for experimenting with this model. I hope it is adopted more widely, especially in the ML community to disincentivize the +1-2% performance increase in SOTA papers.

[1] https://nips.cc/Conferences/2020/Schedule?showEvent=16158


You'd do the work, then sign up, wait a bit (while working on your next project) then you submit the results.

This is what happens with many grants anyway. You propose to research the stuff that you already understand, you get funded and can finally work on new stuff you don't understand well enough to get funded for.


The root problem is the reward system in academia. Even today, when the relevant players are discussing how to improve it, the question is always phrased as needing to find better ways to reward excellence.

If you want to reward excellence, you're by definition looking for the exceptional, the surprising. It might sound good, and surprising discoveries are necessary, but they're not the only thing that's good for science.

We need ways to recognise more than just exciting results; researchers should also be rewarded for robust contributions to the body of human knowledge.

(This is also why Plaudit.pub, a project I volunteer for, allows researchers to explicitly recognise robust or exciting work as separate qualities.)


> If you want to reward excellence, you're by definition looking for the exceptional, the surprising. It might sound good, and surprising discoveries are necessary, but they're not the only thing that's good for science.

I think this depends a lot on how you define 'excellence'. I agree that currently 'excellence' == novel and interesting findings, but we could interpret it to mean 'excellent science' in the broader sense, where showing that another study fails to replicate, or getting a negative result, are equally important parts of the scientific process.


this is old news from last year, also there seems to be only one issue with one article, which means this project is stillborn. https://ir.canterbury.ac.nz/handle/10092/14932/browse?type=d...


Oof, that sucks to hear. Thanks for letting us know it's not been updated.


The challenge of modern academia in certain fields is not to publish but to be read by others. Everyone is so busy publishing that very few papers get decent readership (retweets and citations happen mostly without reading the substance).

I would rather have an upper limit on the number of papers one can publish in a year than more avenues to publish unsubstantial findings.


I just gave a briefing today where I work, that had two conclusion slides: On the first, "None of this matters", and on the second "I've no idea".

On the one hand, that's pretty nihilistic. On the other hand, it's cool that I was able to explore it and come to the conclusion that everything is fine and some aspects of the past are unknowable.


The problem is researchers aren't going to want/be able to spend the time to properly document negative/unsurprising results. The financial incentives in place don't support it despite its incredible value.


But isn't the reason they don't want to write it up because they know they won't get any credit for it? That is exactly what this is trying to remedy


There are many methods to evaluate the work.

Sometimes it is just a count of the number of published papers, or that count divided by the number of authors, or with some extra weight if you are the first or the last author.

Number of citations, h-index, ...

About the journal, there is the impact factor and many somewhat arbitrary ranks...

A paper in a totally obscure journal that is not cited by other papers has the same weight as a blog post.


It takes time to write a paper, even for negative results, and moreover journals charge a hefty fee.


I'm referring to grants and how people get funding.


As a non-academic: this journal will presumably end up with a low impact factor, right? So it might help with the problem of negative results not getting published at all, but it doesn't address the 'academic penalty' for getting negative results, right?

Seems to me that preregistration [0] is the real answer here. If all journals went with preregistration, there'd be no need for a journal like this, right?

[0] https://plos.org/open-science/preregistration/


Related (but more general): the Journal of Articles in Support of the Null Hypothesis, https://www.jasnh.com


I still shudder when I think of the fact Tim Ferris mentioned, that trouble with thesis advisors is among the leading causes of suicide in young men. I had similar trouble myself.


I like the idea, but I don't really get why they accept "statistically insignificant" results. What I expect from a scientific paper is to prove (even empirically) something. For example, if a paper claims something like "we show that using method X instead of Y doesn't improve the results", it won't get published in most journals... except this one, which is awesome, but the paper still has to prove that claim.


When you open the field to reward research itself rather than p-hacking, everyone benefits.

It's like a flea market. Bargain bin discounts, junk to many, valuable to some, but it shifts the economics of the whole system and lets specialty niche researchers not have to reinvent the wheel.

Also, knowing that a method or study was statistically insignificant is statistically significant for future study design.

Thanks!


If the only goal is p < 0.05 and there are many groups around the world trying the same experiment, we should hope that they all publish their non-results; otherwise, by chance, someone is going to get a significant result just by luck.
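
The arithmetic behind that worry, as a quick sketch: under a true null, each group has a 5% chance of a spurious "hit" at the 0.05 threshold, so across k independent groups the chance that someone, somewhere finds an effect is 1 - 0.95^k.

    for k in [1, 5, 14, 20, 50]:
        p_any = 1 - 0.95 ** k
        print(f"{k:>2} independent groups: P(at least one p < 0.05 by luck) = {p_any:.2f}")
    # By ~14 groups it is more likely than not that someone gets a "significant" result,
    # and if only that group publishes, the literature records it as a discovery.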


Doesn't "statistically insignificant" mean papers which conclude something like "we find no correlation between X and Y"?


No! It means the data is too noisy to make any conclusion.


not necessarily. it could. but if x and y are truly uncorrelated variables that are observed with unlimited precision, then you still would not reject the null that they are uncorrelated


"Truly uncorrelated" is a problematic concept. Most of the time, any two things within each other's light cones, have at least tiny direct or indirect effects on each other. These effects are usually not that tiny in experimental settings because of imperfect experimental tools and procedures.


Like the speed of sound and the height of the table where the experiment is.

Note that the pressure of the air changes with height. For a few inches the change is negligible and very difficult (impossible?) to measure, but the effect must be out there.

I once had a problem with the temperature of the room; you usually ignore it, but it had a bigger effect, like a 5% variation.


I think information like that is perhaps useful, at least to avoid wasting time repeating the same research, but publishing that does not prove that there's no correlation between X and Y, only that the things tested didn't show one.


You can't ever prove that switching to X doesn't change the result. All you can really hope for is putting smaller limits on the amount by which something could be improved.

For one dumb example: if you wanted to see if rats can fly, you might observe 100 rats and see that none of them fly away. But this doesn't prove anything: in reality, even if 3% of rats could fly, you'd get this result about 5% of the time. What you've really shown, with 95% confidence, is that less than 3% of rats fly.
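
The rat numbers check out; this is the statistical "rule of three" (with zero events in n trials, the 95% upper bound on the underlying rate is roughly 3/n). A quick sketch:

    # If a fraction p of rats could fly, the chance of seeing zero flyers in n rats is (1 - p)**n.
    n = 100
    for p in [0.01, 0.03, 0.05]:
        print(f"p = {p:.0%}: P(no flyers observed among {n} rats) = {(1 - p) ** n:.3f}")
    # p = 3% still yields "no flyers" about 5% of the time, so 100 flightless rats
    # can only push the flying-rat rate below ~3% at 95% confidence.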


If you only publish the positive results, you miss the context of the unsurprising results. For example, in https://xkcd.com/882/ , if the results from all the experiments in the comic were published, the public would know that the "Green Jellybeans cause acne" result is probably a fluke.


It should stop other groups wasting their time on the same research, no?


> Nick and Andrew did a similar experiment, they found no effects whatsoever.

> And so they send this to a journal and the response they got was, well, you know, you probably wouldn't expect an effect because this is not fairly novel information to the students. So we are not going to publish this because it's not really surprising.

I don't quite understand that example.

If everyone else's papers indeed suggested that telling students about the cost should solve the issue, and everyone believed them, then how can it be claimed that the experiment's results are not surprising? The whole point was to disprove what apparently was blindly accepted as common sense; being then told that in fact the opposite is common sense seems like a slap in the face.

I didn't read the paper, so maybe I'm missing something, but I'm not sure it's actually a good example for that journal. Had they done the experiment in isolation, with no preexisting notion or consensus of what might solve the graduation issue, maybe.


I initially read this headline as an Onion-article-style joke. That said, I don’t disagree with some of the ideas set forth here. I mean, I think this same sort of thing is the reason I regularly find myself reading journal articles and thinking that the title is misleading/essentially just scholarly clickbait. From my perspective, however, I would be really interested to see a journal do something like this but instead with ambivalence about the use of particular stat models. (i.e. I definitely feel that there is still a religiosity in how some reviewers/journals/even disciplines have singular loyalty to p-values, but this may merely be my own perception or anecdotal experience).


“We kind of want to fill the void and publish results that are the opposite of that —unsurprising, weaker, statistically insignificant, not conclusive and so on.”

These are at least five distinct categories; I hope they make a distinction...


I can see this journal being 10x-20x the size of normal journals


A very good point. My experience was that maybe 5% of the work I did made it into a publication.

If I wrote up everything that didn't work, my PhD would have been 10 years instead of 5.


I sympathize, your work that was never published was probably of equal caliber to your published work, and just as valid.


Perhaps they can publish Dijkstra's little gem[1] on the Pythagorean theorem? He explicitly asks what journal would publish such a triviality. However it's a really neat little bit of reasoning and I quite enjoyed it.

[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/E...


Null Hypothesis quarterly needs to be on every coffee table.


this is nice, and a clear step forward. but more is needed to fix a semi-broken system. because publication bias is one thing, but there is also something like a _reception bias_.

there are already null result journals and workshops. it is increasingly possible to publish negative results / unsurprising results.

still, imv those results are less likely to get cited, because negative results are often messy and can be hard to understand. and people don't want to pick up a paper and then not understand it. so it's no surprise that positive results, certainly the way they are framed, often neatly come together. the 'stories' they tell you are often easy-to-grasp mono-causal stories. it's almost like you read them and you feel good about yourself, because you feel like you suddenly understand a difficult problem.

and then, which paper will you cite in your own work? the paper you understood, that has that catchy title? or the one that you struggled through, the one that painted a picture of a far more complex, messy reality?

this kind of reception bias will be very hard to fix. it takes more critical editors, reviewers, and readers to fix this.


Building off the example in the article about college students, I can attest to this: for years it's been required for colleges to be upfront and disclose the total costs, and financial literacy initiatives at orientations & in introductory courses have pushed the issue, explaining the additional costs of taking longer than 4 years to graduate. The results have been indiscernible from what came before.


I haven't seen anyone mentioning this scary thing, true when I was at a top-10 university:

No significant-enough publication, no Ph.D., even after years of all-in work and an accepted dissertation.

Things might have changed, but this is up there with the foreign language requirement in terms of fine print nobody reads going in.


It is an improvement over the current situation, but the question remains what these journals are good for besides gobbling up money that should go to research instead. There are preprint servers. If they can add a comment system where scientists can leave comments, what are journals good for anymore?


Love the idea but wish there was a better writeup. Transcripts of radio interviews seem stilted and miss the tone context that can indicate whether something was a joke or a casual aside.

I think the journal itself could be quite valuable and hope it succeeds. Perhaps this model can be generalized to other fields too.


This reminds me of the Failed Aspirations in Database Systems workshop at VLDB back in 2017. I don't think it turned out to be quite what I was personally hoping it to be, but the idea is great.

https://fads.ws/


There's a name for that: it's called a "low-level journal"


Reminds me of this: https://en.wikipedia.org/wiki/Rejecta_Mathematica

which sadly did not last.


Why isn't "verbal intervention for university students didn't make them graduate any faster at all" an interesting result?


I wonder what is the actual, hidden, incentive for academia to stay this dysfunctional. It can't just be Hanlon's razor, right?


If every SW developer had the same mindset, they would have to build a lot from scratch every time.


So, it's a year old, and only one article? Now that is surprising.


An unsurprising result for sure.


For those involved in QA, the benefits of boring work are unsurprising.


Is there a journal that publishes "beautiful experiments" ?


That journal already exists lol, it's called PlosONE.


PLOS accepts positive results as well.


To the detriment of the authors, who should know better than to publish there if they have something interesting.


20th century+ science = pyramid scheme


There is only one article.


The problem with null results and boring results is that there are too many of them.


Huh, this is my university


Climate change science is in dire need of a journal like this.



