There is a difference between unqualified people and qualified but busy people. There may well be unqualified people reviewing papers, but qualified people who don't put enough time and effort into a review produce the same result. The solution to each problem, however, will look different.
There's also a big difference between accepting a paper and judging whether it is good. At a top conference like NIPS, lots of good papers will get rejected. That's the nature of being a top conference in a hot field. It's similar to a high school valedictorian not getting into Harvard: it's not a judgement that they are a bad student. But when you're an elite institution and lots of people want in, you end up looking for reasons to reject rather than accept. You will end up rejecting good papers/students in such a case. NIPS, or NeurIPS now, is a particularly interesting example, as a colleague recently told me that they got 12,000 submissions this year.
There's also an issue in computer science where reviewers may dislike your paper purely on personal taste. They may not be convinced you're solving a real problem, or, if your paper involves a new artifact, they may not like its design. I refer to these as "your baby is ugly" reviews. They don't think you're wrong; they just don't like it.
I think reviewers' time is a bigger problem than the wrong reviewers. One issue, I think, is that we all compete for these once-a-year conferences, creating an enormous time crunch for reviewers. VLDB has a great model: rolling acceptance throughout the year, with a few papers accepted every month, even though the conference itself happens once a year (http://vldb.org/2019/?submission-guidelines). I believe this is the best hybrid between the typical CS conference system and the journal system common in the rest of science and engineering.
Submitting a paper to a conference for it to be published 11 months later is not going to work - the paper will be hopelessly obsolete by then.
A better solution imo is to improve the openreview.net process somehow, so that I'm motivated to go there, find papers relevant to my research (ideally getting notified when such papers are posted for review), and leave a review (and perhaps vote on other reviews, just like we do here on HN, or otherwise influence the acceptance decision). Obviously there should be safeguards against abuse: moderators, reviewer reputations, restricting reviewers to those with relevant publications (e.g. matched by keywords), etc. Btw, code reproducibility could be given some weight in such a public review process.
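To make the idea concrete, here is a minimal sketch of how reviewer reputation and HN-style votes on reviews could be combined into an acceptance score. Everything here is a hypothetical illustration: the `Review` fields, the weighting formula, and the constants are my own assumptions, not anything openreview.net actually implements.

```python
from dataclasses import dataclass

@dataclass
class Review:
    score: float                # reviewer's rating of the paper, e.g. 1-10
    reviewer_reputation: float  # 0-1, earned from past reviews/publications
    upvotes: int                # community votes on the review itself
    downvotes: int

def review_weight(r: Review) -> float:
    """Weight a review by reviewer reputation, nudged by community votes."""
    vote_factor = 1 + 0.1 * (r.upvotes - r.downvotes)
    return max(0.0, r.reviewer_reputation * vote_factor)

def aggregate_score(reviews: list[Review]) -> float:
    """Reputation- and vote-weighted mean score for a submission."""
    total_weight = sum(review_weight(r) for r in reviews)
    if total_weight == 0:
        return 0.0
    weighted = sum(r.score * review_weight(r) for r in reviews)
    return weighted / total_weight

# A high-reputation, well-received review dominates a low-reputation one:
reviews = [
    Review(score=8, reviewer_reputation=0.9, upvotes=5, downvotes=0),
    Review(score=3, reviewer_reputation=0.2, upvotes=0, downvotes=2),
]
print(round(aggregate_score(reviews), 2))  # ~7.47, well above the plain mean of 5.5
```

The point of the sketch is only that abuse resistance can come from the weighting itself: a low-reputation, downvoted review barely moves the aggregate, so brigading a paper requires accumulating reputation first.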
> Submitting a paper to a conference for it to be published 11 months later is not going to work - the paper will be hopelessly obsolete by then.
That's not how the VLDB process works. What they did was establish a PVLDB journal which has a monthly deadline, and it accepts 5-12 papers a month. The papers are public on the website a few months after acceptance. (See: https://vldb.org/pvldb/vol12.html) The VLDB conference is then all of the papers that appeared in PVLDB in the past year.
I would be in favor of exploring your model as well, but I also see the hybrid model developed by VLDB as superior to the standard conference submission and review process.
This is ok if you can upload preprints to arXiv or an equivalent. Obsolescence already happens now, when a conference is 6 months after the submission deadline, and the preprint system works fine.