> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
You don't have to do things that other companies would like you to do, no matter how emphatically they are stated. Just the other day I replied to a comment complaining about how LinkedIn broke Google's ToS. As if that's somehow a problem.
I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.
The replication crisis is largely particular to psychology, but I wonder about the scope of the 'don't rock the boat' issue.
It's not particular to psychology; the modern discussion of it just happened to start there. It affects all fields and is more like a validity crisis than a replication crisis.
He’s not saying it’s psychology the field. He’s saying the replication crisis may persist because junior scientists (the ones most often involved in replication) are afraid of retribution: a psychological reason for fraud persisting.
I think perhaps blackballing is guaranteed. No one likes a snitch. “We’re all just here to do work and get paid. He’s just doing what they make us do”. Being a scientist is just a job. Most people are just “I put thing in tube. Make money by telling government about tube thing. No need to be religious about Science”.
I see my phrasing was ambiguous. For what it's worth, I'm afraid mike_hearn had it right: I was saying the replication crisis largely just affects research in psychology. I see now that this was too narrow, but I think it's fair to say psychology is likely the most affected field.
In terms of solutions, the practice of 'preregistration' seems like a move in the right direction.
That's the point though: it doesn't reflect human usage of the word. If 'delve' were so commonly used by humans too, we wouldn't be discussing how it's overused by LLMs.
Technically correct, but not an issue in practice. If you want a licence that's approved by the OSI but not the FSF, or vice versa, you have to go looking for it. If memory serves there are no licences in the latter category, and the few in the former category are very obscure.
> The proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.
> A programming language is a formal specification language that we know how to compile
Plenty of real programming languages are underspecified in ways that surely disqualify them as formal specification languages. A trivial example in C: decrementing an unsigned int variable that holds 0. The subtraction is guaranteed to wrap around, but the value you get depends on the platform, per the C standard.
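To make that concrete, here's a minimal sketch (plain standard C, nothing project-specific assumed):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int x = 0;
        x--;  /* unsigned arithmetic is modular, so this wraps around */
        /* The wrap-around is guaranteed, but the resulting value is UINT_MAX,
           which the standard leaves to the implementation (commonly
           4294967295 for a 32-bit unsigned int). */
        printf("x = %u, UINT_MAX = %u\n", x, UINT_MAX);
        return 0;
    }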
> There are plenty of formal specifications that cannot be compiled, even not by an AI. If you use AI, how do you make sure that the AI compiler compiles correctly?
By proving that the code satisfies the formal spec. Getting from a formal spec to a program (in an imperative programming language, say) can be broken down into several stages of 'refinement'. A snippet from [0]:
> properties that are proved at the abstract level are maintained through refinement, hence are guaranteed to be satisfied also by later refinements.
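As a toy illustration of the spec/implementation split (the names are just illustrative, and an assert over a small range stands in for what would really be a proof over all inputs):

    #include <assert.h>

    /* The spec, written as a predicate: r is the integer square root of n.
       It constrains the input/output relationship but says nothing about
       how to compute r. */
    static int isqrt_spec(unsigned int n, unsigned int r) {
        return r * r <= n && (r + 1) * (r + 1) > n;
    }

    /* One refinement of that spec into an executable program. */
    static unsigned int isqrt(unsigned int n) {
        unsigned int r = 0;
        while ((r + 1) * (r + 1) <= n)
            r++;
        return r;
    }

    int main(void) {
        /* A refinement proof would establish this for every n; here we
           only spot-check a range. */
        for (unsigned int n = 0; n < 10000; n++)
            assert(isqrt_spec(n, isqrt(n)));
        return 0;
    }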
A formal specification language doesn't have to be deterministic.
And yes, if you can prove that the implementation is correct with respect to the formal spec, you are good, and it doesn't really matter how you got the implementation.
Refinement is one approach; personally, I just do interactive theorem proving.
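At toy scale, that looks something like this (a sketch assuming a recent Lean 4 toolchain where the omega tactic is available; the names are just illustrative):

    -- A tiny "implementation".
    def double (n : Nat) : Nat := 2 * n

    -- The spec, stated and proved interactively as a theorem about it.
    theorem double_spec (n : Nat) : double n = n + n := by
      unfold double
      omega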
> A formal specification language is a programming language that we don't know how to compile.
Not really, on both counts.
Firstly, they're not really programming languages in the usual sense, in that they don't describe the sequence of instructions that the computer must follow. Functional programming languages are considered 'declarative', but they're still explicit about the computational work to be done. A formal spec doesn't do this; it just expresses the intended constraints on the correspondence between input and output (very roughly speaking).
Secondly, regarding the 'we don't know how to compile it' aspect: 'constraint programming' and SMT solvers essentially do this, although they're not a practical way to build most software.
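Sketching that idea in miniature (a brute-force search standing in for a real solver; SMT and constraint-programming systems are vastly smarter about it, but the principle of "executing" a declarative spec by finding satisfying values is the same):

    #include <stdio.h>

    /* The "spec": a pure constraint on x and y, with no algorithm attached. */
    static int constraint(int x, int y) {
        return x + y == 10 && x * y == 21;
    }

    int main(void) {
        /* A naive "compiler" for the spec: enumerate a search space and
           report any satisfying assignment. */
        for (int x = -100; x <= 100; x++)
            for (int y = -100; y <= 100; y++)
                if (constraint(x, y)) {
                    printf("x = %d, y = %d\n", x, y);
                    return 0;
                }
        printf("no solution in the searched range\n");
        return 0;
    }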
It's no guarantee, but it's a positive indicator of trustworthiness if a codebase is open source.
I don't have hard numbers on this, but in my experience it's pretty rare for an open source codebase to contain malware. Few malicious actors are bold enough to publish the source of their malware. The exception that springs to mind is source-based supply chain attacks, such as publishing malicious Python packages to PyPI (Python's package index) for installation via pip.
You have a valid point that a binary might not correspond to the supposed source code, but I think this is quite uncommon.
> It's no guarantee, but it's a positive indicator of trustworthiness if a codebase is open source.
It's something we as techies like to believe due to solidarity or belief in the greater good, but I'm not sure it's actually justified? It would only work if there's a sizeable, technically-inclined userbase of the project so that someone is likely to have audited the code.
If you're malicious, you can still release malicious software with an open-source cover (ideally without the source including the malicious part - but even then, you can coast just fine until someone comes along and actually checks said source). If you're anonymous there is little actual downside to being detected; you can just try again under a different project.
Remember that the xz-utils backdoor was only discovered because they fucked up and caused a slowdown and not due to an unprompted audit.
> It would only work if there's a sizeable, technically-inclined userbase of the project so that someone is likely to have audited the code.
Not really. There's a long history of seemingly credible closed-source codebases turning out to have concealed malicious functionality, such as smart TVs spying on user activity, or the 'dieselgate' scandal, or the Sony rootkit. This kind of thing is extremely rare in Free and Open Source software. The creators don't want to run the risk of someone stumbling across the plain-as-day source code of malicious functionality. Open source software also generally makes it easy to remove malicious functionality, or even to create an ongoing fork project for this purpose. (The VSCodium project does this, roughly speaking. [0])
Firefox's telemetry is one of the more high-profile examples of unwanted behaviour in Free and Open Source software, and that probably doesn't even really count as malware.
> If you're malicious, you can still release malicious software with an open-source cover (ideally without the source including the malicious part - but even then, you can coast just fine until someone comes along and actually checks said source).
I already acknowledged this is possible; you don't need to spell it out. Again, I don't have hard numbers, but it seems to me that in practice this is quite rare compared to malicious closed-source software of the 'ordinary' kind.
A good example of this was SourceForge injecting adware into binaries. [1]
> Remember that the xz-utils backdoor was only discovered because they fucked up and caused a slowdown and not due to an unprompted audit.
Right, that was a supply chain attack. They seem to be increasingly common, unfortunately.
> In future everyone will expect to be able to customise an application, if the source is not available they will not chose your application as a base. It's that simple.
This seems unlikely. It's not the norm today for closed-source software. Why would it be different tomorrow?
Because we now have LLMs that can read the code for us.
I'm feeling this already.
Just the other day I was messing around with Fly's new Sprites.dev system and I found myself confused as to how one of the "sprite" CLI features worked.
So I went to clone the git repo and have Claude Code figure out the answer... and was surprised to find that the "sprite" CLI tool itself (unlike Fly's flycli tool, which I answer questions about this way pretty often) wasn't open source!
That was a genuine blocker for me because it prevented me from answering my question.
It reminded me that the most frustrating thing about using macOS these days is that so much of it is closed source.
I'd love to have Claude write me proper documentation for the sandbox-exec command for example, but that thing is pretty much a black hole.
I'm not convinced that lowering the barrier to making software changes will result in this kind of change of norms. The reasons for closed-source commercial software not supporting customisation largely remain the same. Here are the ones that spring to mind:
• Increased upfront software complexity
• Increased maintenance burden (to not break officially supported plugins/customizations)
• Increased support burden
• Possible security/regulatory/liability issues
• The company may want to deliberately block functionality that users want (e.g. data migration, integration with competing services, or removing ads and content recommendations)
> That was a genuine blocker for me because it prevented me from answering my question.
It's always been this way. From the user's point of view there has always been value in having access to the source, especially under the terms of a proper Free and Open Source licence.
The guidelines ask that you don't do this. From https://news.ycombinator.com/newsguidelines.html :
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.