
>Really Bad Acronyms, or FBAs, are spawned by FNPLs (Nerdy Project Leaders) when naming new systems

Nothing like personal attacks to get your point across. Well done author.


No one is named, so how does this constitute a personal attack?


Unless I misread it, if a project is deemed to have an "FBA" then the project leader is an "FN", in the author's opinion. And yes, some such projects have been explicitly called out.


"You can upload your research and publish it on the open web. Members of the community will be able to vote on your research to raise its visibility."

Oh dear.


How would you set it up? The decentralized world doesn't really have a great system for curation at this point (unless you can point to a counterexample!), and so I'm in favor of any sort of playing around with decentralized voting/curation until we find something that seems to be working well.


Start from the objective of first do no harm. Voting systems may eventually be gamed to distort results, so eliminate the voting system. Instead rely on ad-hoc personal networks to disseminate signal about quality papers out-of-band. Don’t assume you have to systematize everything.


Voting (as was borne out in many examples, including digg.com and elsewhere) becomes a mob-rule situation and a variation on the tyranny of the majority unless a novelty algorithm is used in addition to total votes. If you just go by totals, it will be easily gamed and rendered useless as a metric.


The standard and most effective form of curation in science is the reference list at the end of a paper.

But usually you just read everything that is relevant to your research interests from the daily arxiv posting.


A perfect system? No, but think about how people must have felt about Wikipedia on launch.

Love this idea.


Actually, I don't think science is democratic in nature. Yes, in a sense it works that way, since a theory still needs to be widely accepted. But in reality one person can have the correct idea while everyone else disagrees, and that person is still doing it right.


I believe science is a democratic process. If someone has the correct idea but communicates it poorly, so poorly that others in the field disagree, then this person is doing it wrong. (Thinking specifically of https://en.m.wikipedia.org/wiki/Shinichi_Mochizuki )


Participation ought to be democratic in the sense of being open to everyone. But you can't take a vote and use it to decide who is right. Deep down we know that being right or wrong is independent of the scientific consensus. Mochizuki may be interacting with the scientific community in the wrong way, but that has no bearing on whether his theory is correct.

The consensus itself has some democratic features, but it's weighted by prestige and adherence to the current paradigm. I think Kuhn described its mechanism pretty well: it's far easier to convince people of a wrong result if you follow the established paradigm than to convince people of something right if you go against it. What really saves science from being pure dogma is that there are paradigm shifts, revolutions in which the scientific consensus changes.


All in all it is a non-trivial problem. You at the very least have to attach some form of reputation system to the verification process. Even with that you will still have the "misunderstood genius" issue, or the "excellent-reputation professor" whom everyone trusts without (enough) verification.


But at least there'd be a system for other researchers to record "failed to replicate," providing a channel for critiquing reputable professors that isn't controlled by those same professors (as journals often are).


Scientific consensus is democratic in nature (even though votes are not distributed evenly). The ideal is that through reproducible experiments and application of the scientific method the scientific consensus moves to increasingly accurate models of reality over time. But obviously the speed at which that happens varies, and some right ideas took annoyingly long to get accepted into scientific consensus.


Sure, the right answer will eventually prevail, but the process is much worse than we like to admit. Many breakthrough advances were outright rejected by contemporary peers when first proposed.

"Fermi first submitted his "tentative" theory of beta decay to the prestigious science journal Nature, which rejected it "because it contained speculations too remote from reality to be of interest to the reader." Nature later admitted the rejection to be one of the great editorial blunders in its history. ... Fermi found the initial rejection of the paper so troubling that he decided to take some time off from theoretical physics, and do only experimental physics" https://en.wikipedia.org/wiki/Fermi%27s_interaction


Using Wikipedia as an example of a seemingly naïve idea that was ultimately proven to work is a pretty bad argument that completely ignores how Wikipedia operates at the moment.

It's routinely used for propagating smears:

https://odysee.com/@AlisonMorrow:6/how-wikipedia-decides-if-...

Even one of its co-founders says it's failing as an accurate source of information:

https://odysee.com/@TimcastIRL:8/former-founder-of-wikipedia...

Just like Jaron Lanier predicted in 2006:

https://www.edge.org/conversation/jaron_lanier-digital-maois...

I never understood why so many technologists vehemently defend a website that was obviously prone to a form of "regulatory capture" and groupthink.


Larry Sanger has made something of a career out of being "the cofounder of Wikipedia who thinks it's getting it all wrong". There's a point at which the latest iteration of his criticism ceases to be a stop-the-presses newsworthy event.

Sanger wrote a great set of essays, largely based on the lecture notes of courses he taught as an academic, that seeded Wikipedia with a load of freely licensed content and kickstarted the whole enterprise. It's quite possible that without this initial burst of momentum, Wikipedia would have failed. For that he has earned, and will never lose, recognition. But the negative part of his critique of Wikipedia is no more searching than the criticism Wikipedia editors perform on themselves without his help, and his series of suggested alternatives has lost credibility because his ideas never work.

I still pay attention to what Sanger says, but not with a high expectation that what he says will be exceptionally insightful.


In all my experience using Wikipedia, it has been successful at providing facts and accurate references.

I don't mean to attack the speaker here, but that former cofounder of Wikipedia you just cited... isn't he an extremist neo-conservative? Why did he leave Wikipedia in the first place? What are his proposed solutions?



Sounds amazing to me


Are you sure it was peer reviewed at all? It looks like conference proceedings.


According to this https://iopscience.iop.org/article/10.1088/1742-6596/1943/1/... they're all peer-reviewed. An average of 1 review per paper seems a bit curious though.


I think this is the declaration for the volume containing the conference in question?

https://iopscience.iop.org/article/10.1088/1742-6596/1956/1/...

Higher acceptance rate, but more reviewers per paper.


Assuming that every paper is reviewed at least once, Markov’s inequality implies that every paper is reviewed exactly once!
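
For anyone who wants the joke spelled out, here's a minimal sketch of the argument (assuming the reported average of exactly 1 review per paper):

    Let $R \ge 1$ be the number of reviews a paper receives, with $\mathbb{E}[R] = 1$.
    Then $R - 1 \ge 0$ and $\mathbb{E}[R - 1] = 0$, so Markov's inequality gives
    $$\Pr(R \ge 2) = \Pr(R - 1 \ge 1) \le \mathbb{E}[R - 1] = 0,$$
    i.e. no paper is reviewed twice, and therefore every paper is reviewed exactly once.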


The author is talking about how a given physics model, e.g. a particular quantum field theory, appears simple when you are presented with it. This is the kind of limited perspective on research that an undergraduate physicist may develop simply by solving the hand-crafted problems that are presented to them.

However, the true difficulty in physics is arriving at that model in the first place. Decades of work offered up against experiment, and the associated conceptual leaps in understanding required to get to e.g. a quantum field theory that successfully predicts things, are nothing short of a monumental achievement. To say that physics is simple is ludicrous.


You are missing the point of the article. The author is not trying to argue that AI is 'harder' than physics, the way a freshman CS major might argue with their physics friends.

The author is talking about how our physical theories, such as QFT, currently have more predictive power than any theories we currently have about machine learning/deep learning.

(The author has a PhD in theoretical physics.)


While QFT makes some amazingly precise predictions in certain areas like the fine structure constant, it is nearly useless for predicting most of chemistry.

In practice, the computations required to use the QFT model are just too complex for modern computers when it comes to single atoms with more than a few protons, not to mention larger molecules. Instead, we must use simplified models like the Bohr model to make predictions about molecular bonds.

This actually seems very similar to AI, where we understand a lot (though not everything) about basic neurons, yet the emergent phenomenon of intelligence is very difficult to predict due to the explosion of computational complexity.


That's a good point. I guess our current mathematics is not good enough to say much about the macroscopic behaviour of large interacting models.


I think the article misses the point of what physics is. It is not a collection of "sparse" models and principles, rather, it is a scientific discipline from which such models have emerged.

You will notice the article conflates the two things: physics and the known laws of physics (e.g. first para in section 1.2). Simplicity of the latter does not imply simplicity of the former, but the article assumes that it does in order to tackle/state the question as posed: "Why is AI hard and physics simple?".


IMO if all it takes is a few simple substitutions like “physics” -> “known laws of physics” to make the article or title make sense, then it’s unfair to say that the author has missed the point. Cf. reading the strongest possible interpretation, and all that.


"The counterfactuals that matter to science and physics, and that have so far been neglected, are facts about what could or could not be made to happen to physical systems; about what is possible or impossible."

On the contrary, if I had to describe theoretical physics in a nutshell, I would say it is entirely about what is impossible. Pick any physical law or theorem. I cannot exceed the speed of light. I cannot globally decrease entropy. I cannot measure a force between two static electric charges in vacuum that deviates from the Coulomb force law.


Also basic quantum physics: it's all about the "possible" state of a system (e.g. the wave function of an electron).


Note that if f(x) = x^2 then the second derivative f''(x) = 2, so it looks like you're off by a factor of 2 (as well as the accuracy issue).
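
As a minimal sketch of that point (assuming the parent was computing a numerical second derivative; the function name below is hypothetical and just for illustration), the standard central-difference stencil recovers f''(x) = 2 for f(x) = x^2:

    # Hypothetical sketch: central-difference approximation of the second derivative.
    def second_derivative(f, x, h=1e-4):
        return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

    print(second_derivative(lambda x: x**2, 3.0))  # ~2.0, matching f''(x) = 2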


Knowledge of complex numbers is required to solve quadratics too (when the discriminant is negative).


Not to solve real quadratic equations for real solutions. In that case, a negative discriminant just means there is no real solution.

Whereas in the cubic case, you can come across square roots of negative numbers when finding real solutions of real equations.

Hence, you can treat the quadratic case entirely within the reals, whereas the cubic case doesn't work out that easily if you stick to just the reals.
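
The classic textbook illustration of this "casus irreducibilis" (Bombelli's example, not mentioned in the parent comment but the standard one):

    $x^3 = 15x + 4$ has the real root $x = 4$, yet Cardano's formula gives
    $$x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}} = (2 + i) + (2 - i) = 4,$$
    so square roots of negative numbers appear even though the coefficients and the root are all real.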


If you liked Primer, the film "Timecrimes" does an excellent job of maintaining the physical consistency of time travel.


Timecrimes is worth a watch.


I love time travel movies, and that movie sucked. Not worth the time, or the $$$ I spent on it.


It is interesting that they chose to use a first person shooter as an example screenshot since gameplay would be broken by visual changes in occlusion.


I think we need more clever people making more predictions (especially the people quoted in the article, like Gross, Witten, Rattazzi, etc.), and fewer blog articles like this designed to discourage them.

The last time a huge, costly, dedicated collider was built, it was in service of the Higgs prediction, and that worked out quite nicely.


I think you’re setting up a bit of a straw man there. I agree that people coming up with creative ideas and making bold predictions is good in general. But surely we’re allowed to point out when a) predictions have not in fact worked out, and b) nothing particularly useful was learned from the failure. That’s not the same as trying to discourage new ideas.

For the LHC specifically, it was widely expected that it would find evidence of supersymmetry, and that pinning down the details would help identify which extensions to the Standard Model are worth pursuing. But in fact a) no evidence of supersymmetry has been found, and b) no new lines of inquiry have been suggested. Most theorists have simply adjusted their existing models, moving the goalposts to account for the lack of experimental support.

This is exactly what Hossenfelder is complaining about. Why double and triple down on the same strategy that hasn’t worked yet? Why not at least spread your bets across some different strategies?


This appears to value the negative findings of the LHC at zero. I’m no physicist, but my understanding is that some variants and parameters of the theory have been excluded as a result of LHC experiments.

More to the point, what is the alternative strategy that’s more likely to produce useful data? “Don’t do experiments to validate or invalidate theory” doesn’t obviously seem like it’s going to produce better results.


> I’m no physicist, but my understanding is that some variants and parameters of the theory have been excluded as a result of LHC experiments.

I'm no physicist either, so take my hot take with the appropriate pinch of salt. :) My understanding is that some variants and parameters should have been excluded, if you hold theorists to their ~2005 predictions, but in practice most theorists are essentially burying their heads in the sand, or ignoring their past predictions and just making new ones. Both Hossenfelder and Peter Woit (of the "Not Even Wrong" book and blog) have documented many such cases. Many theorists are still assuming supersymmetry as a basis for their work and essentially ignoring the fact that it didn't show up where expected.

> More to the point, what is the alternative strategy that’s more likely to produce useful data?

I think it's "remove the focus from high-energy physics and particle colliders, bring more attention to less fashionable areas that might benefit more from huge big-budget experiments." I don't know of any specific proposals to do something completely different with the budget that might otherwise go to e.g. CERN's Future Circular Collider proposal. Hossenfelder has pointed to other big science projects that could produce more scientific bang-for-the-buck than the LHC, such as LIGO and the James Webb space telescope.

