Hacker News | Chinjut's comments

I love making money for my employer with every spare moment of my life.


If you took the time to read the first paragraph instead of typing this snarky comment, you'd know he's working on his personal projects.


Not paying attention on the train, even in 2025 girliepop-influencer-Instagram-latte-art New York, is not the smartest move. You're probably safer during rush hour, but being aware of your surroundings is never a bad idea, even in "safe" New York.


Do your work today and tomorrow you can fool around and have some fun. "Do the minimum today so you can do the maximum tomorrow" rarely makes sense.


Why continuously differentiable and not just monotonic?


Monotonic is what we have, and it allows cliffs.

Suppose, e.g., that you can get $5k/yr in benefits if you make less than $10k/yr in other revenue and $0 otherwise. Unless you have a viable strategy for pushing past $15k/yr it's a strong financial disincentive against actually working, and even then your incremental ROI isn't very good past that cliff (if it takes an extra hundred hours to push to $15.1k/yr, then compared to your $10k/yr option you're only making $1/hr for the extra work).
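To make the arithmetic concrete, here's a sketch of that cliff using the hypothetical numbers above (not real benefit rules):

```python
def post_benefit_income(earned):
    """$5k/yr in benefits if earned income is under $10k/yr, else $0."""
    benefits = 5_000 if earned < 10_000 else 0
    return earned + benefits

# Just under the cliff you take home more than just over it:
print(post_benefit_income(9_999))   # 14999
print(post_benefit_income(10_001))  # 10001

# Incremental ROI of grinding 100 extra hours to reach $15.1k earned,
# versus stopping just under the cliff:
extra = 15_100 - post_benefit_income(9_999)
print(extra / 100)                  # ~1 dollar/hour
```

So the marginal wage for all that extra work is about a dollar an hour, which is the disincentive being described.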


This doesn't sound monotonic. This sounds like a mapping from pre-benefit income to post-benefit income which sends just under $10k/yr to just under $15k/yr, but sends just over $10k/yr to just over $10k/yr. So it sends a larger input to a smaller output.


I think I see the definitions we're disagreeing on. I'll lay out what I meant and then address your thing.

1. A couple levels up, the function somebody requested being continuously differentiable was the "benefits." You seem to be looking at the total post-benefit income instead.

2. It's not totally clear, but you _might_ also be using "monotonic" to refer to "monotonic increasing" or "strictly monotonic increasing" instead (the total income function in my example isn't even monotonic, so this is just me reading between the lines in the wording of your reply).

The cliff issue still exists though (and still exists in the differentiable version -- ideally you want bounds on the derivative to prevent the cliff issue, and for unrelated reasons you probably want other properties, like the benefits not increasing as a function of pre-benefit income). Suppose you have a strictly monotonic increasing function mapping pre-benefit to post-benefit income. That function can still, e.g., have a long region with a high slope, followed by a long region with a low slope, and some curvy thing connecting those. It's still continuously differentiable and strictly monotonic increasing, but the incremental value of work in the low-slope region is low (by definition).

You might make a dollar in pre-benefit income and $0.10 in post-benefit income, so at minimum wage you're back to a <$1/hr situation unless you can make enough money to push substantially past the right side of that low-slope region (which we're assuming exists, since otherwise it basically says you have a 90% tax rate if you make too much money, and in context "too much" would be near poverty levels, so even people wanting that sort of thing for the ultra rich wouldn't think it a good idea).
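A minimal sketch of such a "soft cliff": strictly increasing everywhere, but with a long region where an extra dollar earned yields only ~$0.10 net. (Slopes and breakpoints are invented for illustration, and it's piecewise linear rather than the smooth "curvy thing connecting those" regions.)

```python
def net_income(earned):
    """Strictly increasing pre->post benefit income with a low-slope region."""
    if earned < 10_000:        # high-slope region: keep ~95 cents per dollar
        return 0.95 * earned
    elif earned < 30_000:      # low-slope region: keep ~10 cents per dollar
        return 9_500 + 0.10 * (earned - 10_000)
    else:                      # slope recovers past the region
        return 11_500 + 0.70 * (earned - 30_000)
```

No discontinuity anywhere, yet inside the $10k–$30k range the marginal return on work is a dime per dollar, which is exactly the incentive problem described above.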


Yeah I agree, it sounds like a misunderstanding.

I'll go further and say what we probably want is for the derivative of net income as a function of earned income to be monotonic increasing but max out at less than 1. That way there aren't ranges of income where you receive very little per dollar earned and then after some point start receiving more per dollar.

But solving benefit cliffs really just means having earned=>net income strictly increasing with the marginal rate reasonable, say at least 30 cents more net income per dollar earned. Under that constraint, you could have ranges where net income grows more slowly until you hit some higher dollar amount of earnings, but imo that would also not be desirable.


I mostly agree. However that's all a gross simplification, because there are plenty of taxes and benefits that aren't just a function of income.

E.g., property taxes or capital gains taxes don't depend on your income at all.


Why are we limiting ourselves to business concepts?


Not GP, but … because the overwhelming majority of programming is done in support of businesses selling things?

I’m not just talking about people who program for a living. The majority of academic CS chooses its research directions because of what limits people are running into for business; even privacy-focused software has been commoditized by many business; a large amount of OSS development is by (and often paid for by the employers of) people working for money; heck, after Linus’s initial “just a hobby OS” period even Linux’s contribution base was primarily driven by business needs (even if those needs influenced it via “contributor had a problem at work and committed a solution for it upstream in spare time” as often as “paid contributor upstreamed a change on behalf of their employer”).


Yes and no. Most of the big new languages today are created to support the business of selling things because languages are expensive to make and don't generate any profit themselves, so the only people who have enough money to fund their development are mega corporations, who act in self-interested ways.

But look at historical languages and why they were created:

Algol - to explore writing algorithms

Fortran - to help scientists write programs using typical math formulas

Matlab - to help write programs in linear algebra

Haskell - to explore lazy program evaluation

ML - to explore how to reason about proof automatically

C - to build an OS

Python - to interface with an OS

LISP - to formalize symbol processing

APL - to explore programs defined over arrays

LOGO - to help young kids to program computers

Prolog - to create a language around the idea of formal logic

Smalltalk - to create an entire programming system, not just a language

(I've left out C++, Java, and JavaScript because I feel like those languages are mostly about serving business interests)

Pretty much the entire computing landscape over the past 50-70 years has been defined by people writing languages for reasons other than "this is for a business to use to make more money". So if we let business-driven interest dictate the future direction, we will have missed out on all the things that could have been. Would Haskell ever have been invented if business interests were the only concern for researchers?


Fortran compilers were historically implemented by hardware vendors in order to sell their hardware, and this still largely holds true across the surviving implementations with the exceptions of GNU Fortran (obviously) and nagfor (commercial s/w product). There's a good reason that Cray Research's software group was initially part of its marketing department.


The compilers were, but Fortran wasn't invented so that hardware manufacturers could sell their wares to scientists.


It was invented so that IBM could sell more gear to the institutions that employed them.


The fact that there is a precise analogy between how Ax + s = b works when A is a matrix and the other quantities are vectors, and how this works when everything is scalars or what have you, is a fundamental insight which is useful to notationally encode. It's good to be able to readily reason that in either case, x = A^(-1) (b - s) if A is invertible, and so on.

It's good to be able to think and talk in terms of abstractions that do not force viewing analogous situations in very different terms. This is much of what math is about.
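The analogy can be seen directly in code: the same formula x = A^(-1)(b - s) solves Ax + s = b whether the objects are scalars or matrices and vectors (the particular values below are arbitrary, chosen only to illustrate):

```python
import numpy as np

# Scalar case: 2x + 1 = 7, so x = (7 - 1) / 2 = 3.
A, s, b = 2.0, 1.0, 7.0
x = (b - s) / A

# Matrix/vector case, same shape of reasoning:
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
s = np.array([1.0, 1.0])
b = np.array([7.0, 10.0])
x = np.linalg.solve(A, b - s)   # preferred over forming A^(-1) explicitly
assert np.allclose(A @ x + s, b)
```

The derivation is identical in both cases; only the meaning of "multiply" and "invert" changes, which is exactly the notational economy being described.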


Ah, it turns out this is a dupe of https://news.ycombinator.com/item?id=45903670. (I thought the submission process used to catch such dupes? The title and URL match exactly.)


6 upvotes and no comments isn't enough attention to trigger dupe detection.


Oh, I didn't realize there was a threshold. What is the threshold?


I don't know the exact threshold the software uses, but my impression is that the mods use something in the ballpark of <20 points and <5 comments?


AI-esque blog post about how infinite AI content is awful, from "a co-founder at Paid, which is the first and only monetization and billing system for AI Agents".


I'm not saying it's actually written with AI (and indeed, I don't think that's the case; hence my calling it "AI-esque" rather than actually AI generated). It's just that it's a particular style of businessy blog writing that, though originated by humans, AI is now often used to crank out. Lots of bullet points, sudden emphases, etc.

It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.


So I can't have opinions?

Also, this is entirely hand-written ;)


I think this basically proves your point. There were things about it that made me think it may have been at least "AI-assisted", until I saw your "guaranteed bot-free" thing at the bottom. Anyone doing entirely hand-written things from now on is going to face a headwind of skepticism.


The authentic nature of opinions means that sometimes they suck. Maybe GP is commenting that your opinion sucks?


this is a funny phenomenon that I keep seeing. I think people are going through the reactionary “YoU mUsT hAvE wRiTtEn ThIs oN a CuRsEd TyPwRiTeR instead of handwriting your letter!1!!”

hopefully soon we move onto judging content by its quality, not whether AI was used. banning digital advertisement would also help align incentives against mass-producing slop (which has been happening long before ChatGPT released)


I don't have the time or energy to judge content by its quality. There are too many opportunities for subtle errors, whether made maliciously or casually. We have to use some non-content filter or the avalanche of [mis]information will bury us. We used to be able to filter out things with misspellings and rambling walls of text, and presumably most of the rest was at least written by a human you could harangue if it came to that. Now we're trying to filter out content based on em-dashes and emoji bullet lists. Unfortunately that won't be effective for very long, but we have to use what we've got, because the alternative is to filter out everything.


This didn’t seem AI-generated to me, although it follows the LinkedIn pattern of “single punchy sentence per paragraph”. LinkedIn people wrote like this long before LLMs.

I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.


You're right - it isn't

This is just how I write in the last few years


> This is just how I write in the last few years

"We shape our tools, and thereafter our tools shape us", may be the apposite bon mot.

That aside, I did enjoy your article. Thank you.


You might have become too much like a marketing bot, sad to say.


The 10% number is completely made up. According to Paul Graham, "I was there when this statistic was cooked up, and this was the recipe: someone guessed that there were about 60,000 computers attached to the Internet, and that the worm might have infected ten percent of them."


That figure is probably mostly UUCP hosts, not live connected ones. I could be wrong, but 60k hosts that you could telnet to sounds like a lot of ducking hosts back then. I was there too, in my late teens. God bless PG.


Yeah and a 'host' back then wasn't a cheap PC or something, they tended to be $30000 workstations or $300000 servers. At tech companies and Universities only, and mostly in the US. 60k sounds like a lot for those days. It grew massively from the early 90s.

Even UUCP was still really fringe and those weren't actually connected hosts on tcp/ip. They had their own dialup mail exchange protocol similar to fidonet.


Those were the days. I still remember my fido number. And I still remember just how painful it was to get uucp working properly. Ugh. But my mother had an email address years before any of her contemporaries. Being a geek was fun then.


Yeah I even had multiple fido point numbers. Because there were some alternative networks. I kinda miss it.

I also used uucp for a few years, though it soon got replaced with full internet. We were a bit behind in Europe but we caught up fast. In the beginning I also had to use bang paths to avoid some misconfiguration upstream. Fido was actually better at this and the tool chain much more user-friendly. Though you still needed multiple tools: one to do the dial up, and one to sort the retrieved mail, a "tosser" :)


I remember my elementary school librarian had some kind of networked computer a touch later, 90-92 timeframe. She tried to explain what email was to me and I still remember being super confused. Think she even showed me on screen, but I still did not get it.


It’s very brave to admit when something confounds you.

Start here: https://en.wikipedia.org/wiki/Email

If that’s too much, think of the internet as a series of tubes, and email as a digital boomerang that returns with an out-of-office reply attached.


foo@baz!quux, those were the days.


What, no path thru seismo?


What would distinguish this from existing statically typed, pure, and lazy functional languages such as Haskell?


S-expression-based hygienic macros


The title needs to be updated. These are IDEs we had 32 years ago.


This was titled "The IDEs we had 30 years ago..." in 2023. It is now 2025.


32 years ago was 1993. Those IDEs were from the 80s.


They were still very popular in 1993.


Everyone is aware of this. Sites like this aren't created to be useful. They are created to be an amusement, a joke.

