Hacker News | epgui's comments

Good Python code can exist, but Python makes it so easy to write bad code that good Python rarely exists.
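(A minimal sketch of the kind of footgun being alluded to, added for illustration and not part of the original comment; the helper names are hypothetical.) Mutable default arguments are a classic case: the interpreter accepts them silently and the default list is shared across calls.

    def append_item(item, bucket=[]):  # the default list is created once and reused
        bucket.append(item)
        return bucket

    append_item(1)  # [1]
    append_item(2)  # [1, 2] -- not a fresh list, as many readers would expect

    # The idiomatic fix: default to None and build the list inside the function.
    def append_item_fixed(item, bucket=None):
        if bucket is None:
            bucket = []
        bucket.append(item)
        return bucket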

Weird, in Canada (at least some provinces) I think that's a pretty standard part of both high school and undergraduate maths.

The relationships between the functions are pretty standardly taught, but their derivation from the right triangles on the unit circle less so (other than sin and cos).
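(For reference, a standard sketch of those relationships, added here and not part of the original comment.) The point at angle θ on the unit circle is (cos θ, sin θ), and the remaining functions follow as ratios:

    tan θ = sin θ / cos θ
    cot θ = cos θ / sin θ
    sec θ = 1 / cos θ
    csc θ = 1 / sin θ

In the classic right-triangle picture, tan θ and sec θ are literally the lengths of the tangent segment at (1, 0) and the secant segment from the origin out to where that tangent meets the extended radius.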

Have you considered the possibility that your opinion is just not representative of the scientific consensus?


I asked ChatGPT whether or not it was the "scientific consensus."

"Anonymous surveys of intelligence experts reveal division: a 2016 survey found that about 49% attributed 50% or more of the Black-White gap to genetics, while over 80% attributed at least 20%; an earlier 1980s survey showed similar splits. These views are more common in private or anonymous contexts, contrasting with public statements from bodies like the APA that find no support for genetic explanations."

Hm, sure seems like Wikipedia should probably have a more balanced, nuanced discussion considering the experts are split at least 50/50.


The "scientific consensus" the parent comment mentioned is referring to published studies, with data to back up their conclusions. The numbers you are citing seem to be from an opinion poll. Where did any of the 49% surveyed get the idea that "50% or more of the Black-White gap" can be "attributed" to genetics? What is their methodology for the attribution?

Bringing up an opinion poll as a counterpoint makes it read like you're arguing that Wikipedia should focus less on fact and more on opinion. Of course, you're free to think what you wish, but I suspect that's where most disagree.


We don't really have "intelligence genes" mapped out, if they exist. Therefore, something like this, from Wikipedia: "Genetics do not explain differences in IQ test performance between racial or ethnic groups" is effectively a lie.

Genetics certainly don't explain all the differences in IQ. They very well might not explain the majority of the difference. However, considering we know that intelligence is quite heritable, along with various adoption and twin studies that have happened throughout the decades (along with simple freaking logic), we have a pretty good idea that it explains at least some of the difference. That "opinion poll," while not super great because only some elected to reply, was a poll of experts in the fields that study this stuff, not random people.

A real unbiased article would mention that (and perhaps whatever counterarguments there are), not straight up do the encyclopedia equivalent of sticking their fingers in their ears and going "nah uh I can't hear you."


Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.


Wrong in two different ways:

- This tends to approximate consensus.

- Wikipedia does care, and has a policy on this: https://en.wikipedia.org/wiki/Wikipedia:Scientific_consensus


>and has a policy on this

Look at the top of that page.

>This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.


That’s like arguing that “forming a queue at the store” is not an official policy.

The document outlines normative / prescriptive approaches that are followed in practice.


I really hate how people think LLMs == AI. An LLM can’t/shouldn’t be doing anything other than generating text.


LLMs are AI. Markov text generators from the 1970s/80s are AI. Face recognition software like FaceID is AI. Many people behind LLMs got degrees under departments that have AI in their title.

AI is just computers doing things we typically associate with human intelligence, and having a conversation with a computer that effectively passes the Turing test is definitely AI. If LLMs aren't AI, then AI isn't a useful term. (Though agreed that LLMs aren't AGI, which I assume is what you're thinking of.)

Wikipedia's list of AI applications: https://en.wikipedia.org/wiki/Artificial_intelligence#Applic...


This is called the AI effect: we stop calling it AI when it works. It's goalpost moving to keep the sci-fi term sci-fi.

There’s a similar thing with transhumanist “enhancement” or “life extension” stuff. When it actually works we call it medicine. Statistically one of the most powerful life extension techs ever developed was the cardiac bypass, which would have been sci-fi in 1900.

I’ve been using stuff like Claude Code and personally feel comfortable calling this stuff AI. Is it AGI? I don’t think so, but then again I’m not totally sure what that is. Am I AGI? I’m not universally able to handle all forms of cognition well and I can’t self modify much, so I’m not sure either. I’m not even sure if AGI is a well formed concept.

Intelligence is a pretty broad concept too. My pet rabbit is intelligent. Plants are intelligent. Bacteria are intelligent. Anything that can run an OODA loop, learn, adapt, and move toward a goal function is intelligent. By that definition some computer systems have been AI for decades. They’re just getting better.

I think there’s intelligence all around us. We just don’t get the wow factor from it unless it talks.


My point is that they are a small subset of AI.


Bro wtf, this is like saying "I really hate how people say trees == plants" and then when challenged saying you merely meant that there are more plants than just trees. No shit, everybody knows that. What the fuck was the point?


The point is that people equate AI with general intelligence, and the shortcut between LLMs and general intelligence is a bad one.

Yes, LLMs can do surprisingly well at a bunch of tasks, but Broca’s area will never replace a human. And yet that’s the sort of mental shortcut people make on the daily. Bro.

Perhaps not this sophisticated audience, but I witness it personally every day.


The real thing to lament is that people think AI==AGI, and by extension that LLMs==AGI. Which they definitely aren't.

But the mistake is thinking AI is AGI, rather than thinking LLMs are AI. Which is just true, as the term has been defined since its inception in the 50s.


I’ll admit to having expressed myself wrong. I think we agree on the substance.


I’m also as pedantic as you and use “LLM” even when talking about these systems, but you need to be flexible and accept that “AI” is already in everyone’s head when referring to GPT variants.


How is this related to the current discussion at hand?


This is pedantic. AI has many definitions. There was "AI" powering enemies in '80s and '90s video games.


I think that when some people talk about "AI" they have "AGI" in mind, and when others talk about "AI" they have "latest computer does the smarts" in mind.

I personally would prefer "AI" to be "AGI" but there's no point fighting the way people use language (see: every damned pedantic comment about English usage ever!! :-)


Agreed that people increasingly interpret AI to mean AGI, but the academic use of "Artificial Intelligence" has been mostly consistent since the famous 1950s Dartmouth workshop that coined the term. It's not just a recent phenomenon, and AI has never really meant "broad human-equivalent intelligence". Fun quote from John McCarthy, who helped coin the term: "Artificial intelligence is not, by definition, simulation of human intelligence".

But beyond the pedantry and appeals to authority, I think keeping the term AI distinct from AGI is just useful, so it can be an umbrella term for all the human-like, smart-ish things computers do. And so its Wikipedia page doesn't have to be re-written.


Military orders are text.


It’s a meaningless question, there is no “out there”.


The rate of change is the issue, sir. You can’t “aside” it.


TBF the end state is probably also a bit of an issue, just less so. More frequent, higher-energy storm systems would be economically detrimental.


Then OP would do well to clarify.


This is abundantly clear in all remotely-scientific discussion of climate change.


> run-away climate change


> With software engineers […] you don’t have legal limitations on who can perform the work

While in practice that is true, in theory this is why professional engineering accreditations (I mean like P.Eng., not little certificates) exist. Perhaps we will see broader professionalization of the field one day.


You literally wrote: "Some anti-intellectualism might be a good thing".

It doesn't get any more anti-intellectual than that; you put it down yourself in blackletter.


It sounds like you have your own definition of "intellectuals", which appears to mean pretty much the opposite of what it usually means.


I know exactly what "intellectual" means. It's someone who has spent years obtaining credentials in an economically useless subject and looks down their nose at people who would lower themselves to actually do practical work. A brilliant mechanic who can fix anything and has made millions running his business would never be called an intellectual, but someone serving coffee with a PhD in English literature would.


It really doesn't sound like you know exactly what it means.


Credentials are largely unrelated to being an intellectual. “Academic” may be the word you’re thinking of.

A mechanic who’s been publishing poetry would easily qualify. Further, labs also need people who maintain equipment; it’s a viable and hands-on career path to maintain electron microscopes and such all day.


That's a completely nonsensical take.

