Hacker News | danslo's comments

It is, as long as they're not refunding you when you make a loss.

Couldn’t it just as easily be equivalent to saying “you grew this year, so contribute some money back to society for enabling you to have the educated hiring base/financial infrastructure/physical infrastructure that enabled you to grow”?

Like, sure, you don’t owe growth taxes for a quarter when you didn’t grow. But why should you be refunded just because prior taxable growth isn’t denominated in money in a bank account?


> you grew this year, so contribute some money back to society for enabling you to have the educated hiring base/financial infrastructure/physical infrastructure that enabled you to grow

Apparently paying for gas, water, electricity, property taxes, and taxes on everything you buy isn't enough; now you have to "contribute for enabling". What's next? Paying because they "enable you to breathe"?


Plus this ignores that our society depends on businesses.

I have a low opinion of unbalanced, one-sided arguments.


This is slop, right?

>This isn’t a minor gap; it’s a fundamental limitation.

>His timeline? At least a decade, probably much longer.

>What does that mean? Simply throwing more computing power and data at current models isn’t working anymore.

>His timeline for truly useful agents? About ten years.


You can tell by the post history.

It's just like with the fake StackOverflow reputation and fake CodeProject articles in the past.

Same people at it again but super-charged.


It has the logical inconsistency of good LLM slop like:

"AGI is not possible"

combined with

"Does this mean AI progress will stall? Absolutely not."


Yup, clocked it in seconds. There's something especially perverse about reading AI slop waxing poetic about AI slop.


s/postgres/sqlite/g


Postgres is simpler. Let your cloud provider manage it: one click to create an instance with zero-setup failover, a second click for guaranteed backups and point-in-time snapshots.


When you use sqlite, you can distribute your program by distributing a single executable file. That's what I call simple.


Yes that is simple, especially for a desktop app.

I would argue it is not resilient enough for a web app.

No one wrote the rules in stone, but I assume that server side you want the host to manage data recovery and availability, while client side it's the laptop owner's problem. On a laptop, availability is almost entirely a matter of "has a power source" and "works", and data recovery of "made a backup somehow".

So I think we are both right?


I was especially thinking of a program running on a server. I wouldn't statically link everything for a desktop program, because that causes incompatibilities left and right. But on a server you don't have external vendors, so it's a good option, and if you use SQLite as the database, deploying is no more complicated than uploading a single executable, which can even be done atomically.

I don't see how this hurts recovery or availability. The data is still in a separate file that you can back up, and modifications still go through a database layer that handles atomic transactions and file-system interaction. Availability is also no worse, unless you would have hot code reloading without SQLite, which seems like an orthogonal issue.
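The "separate file you can back up" point can be sketched with Python's built-in sqlite3 module, whose backup API takes a consistent snapshot even while the application is writing (the in-memory databases here stand in for real files on disk):

```python
import sqlite3

# "src" stands in for the application's app.db file.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
src.execute("INSERT INTO posts (title) VALUES (?)", ("hello",))
src.commit()

# "dst" stands in for a backup file; SQLite's online backup API
# copies a consistent point-in-time snapshot of the whole database.
dst = sqlite3.connect(":memory:")
src.backup(dst)

count = dst.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
```

With real files, `dst` would simply be `sqlite3.connect("backup.db")`, and the resulting file is the entire backup.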


I don't agree. Getting managed Postgres from one of the myriad providers is not much harder than using SQLite, but Postgres is more flexible and future-proof.


Isn't the entire point of this post that many companies opt for flexible and future-proof far too prematurely?


I use Postgres for pretty much everything once I get beyond "text in a database with a foreign key to a couple of things".

Why?

Because in 1999 when I started using PHP3 to write websites, I couldn't get MySQL to work properly and Postgres was harder but had better documentation.

It's ridiculous spinning up something as "industrial strength" as Postgres for a daft wee blog, just as ridiculous as using a 500bhp Scania V8 for your lawnmower.

Now if you'll excuse me, I have to go and spend ten seconds cutting my lawn.


In my experience (at least in .NET and Go), ORMs have better support for Postgres, especially around date types, UUIDs, and JSON fields, IIRC.


You don’t need an ORM either. It’s just another level of complexity for very little to no gain in almost all cases. Just write SQL.


You always get a comment like this. I don't particularly agree; there are pros and cons to either approach.

The tooling in a lot of languages and frameworks expects you to use an ORM, so you often have to put in a fair bit of upfront effort just to use raw SQL (especially in .NET land).

On top of that, an ORM makes a lot of super-tedious things, like mapping rows to models, extremely easy. The performance gains of hand-writing SQL are very minor if the ORM is good.
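The row-to-model mapping that the comment above calls tedious can be sketched in plain Python with the stdlib sqlite3 module (the `User` table and data are illustrative): with raw SQL you write both the query and the conversion to objects yourself, which is exactly the step an ORM automates.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES (?)", ("ada",))

# The manual mapping step: every query needs its own
# row-tuple-to-object conversion, kept in sync with the schema.
rows = con.execute("SELECT id, name FROM users").fetchall()
users = [User(id=r[0], name=r[1]) for r in rows]
```

With an ORM this boilerplate disappears, at the cost of another layer to learn and debug.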


This. So much this. Of course, at some point you start wanting queues and concurrent jobs, and not even WAL mode with a single-writer approach can cut it, but if you've reached that point then usually you a) are on the "this is a good problem to have" part of the scalability curve, and b) can just switch to Postgres.

I've built pretty scalable things using nothing but Python, Celery and Postgres (that usually started as asyncio queues and sqlite).


A queue and a database are more shots on the architecture golf though.


Yeah, we run some fairly busy systems on SQLite + Litestream. It's not a big deal if they are down for a bit (never happened, though), so they don't need failover, and we never had issues (after some SQLite pragma and SQLITE_BUSY handling tweaks). Vastly simpler than running and maintaining Postgres/MySQL. Of course, everything has its place and we run those too; I'm just saying that not many people or companies really need them. (Also consider that we see systems which DO have Postgres/MySQL/Oracle/MSSQL set up in HA and still go down for hours to a day per year anyway, so what's it all good for?)


Back in the day the hype was all around Postgres, but I agree.


I tried to switch from neovim to helix for a couple weeks, but noted down the following things that were essential to me and not implemented yet:

- Code actions on save, for example adding Go imports: https://github.com/helix-editor/helix/pull/6486

- Fuzzy search with a filepicker like telescope+rg, seems to have been added earlier this year: https://github.com/helix-editor/helix/pull/11285

- Automatically updating buffers when the files on disk change (claude, templ, sqlc, etc): https://github.com/helix-editor/helix/issues/1125

- File tree in browser, which has been rejected in favor of a plugin system which has not materialized yet: https://github.com/helix-editor/helix/pull/5768

There were a number of other things too, that I could have lived with. I guess I'll try again in a year or two.


I rebuild from HEAD using Homebrew quite regularly and have had maybe a single crash in years of using Helix, so I'm shocked to see Julia report she's had crashes. But because I run HEAD, I can tell you what's actually merged, which might not be released yet:

> Code actions on save

There are command-on-save hooks that solve the problem for Go, but I could see this not being as applicable to other languages.

> Fuzzy search

Man, I'm pretty sure this shipped before telescope was particularly stable or popular, although I don't recall whether Helix originally used the ripgrep backend. They reworked it earlier this year and it was a massive improvement.

> Automatically updating buffers

+1 to this. I constantly suspend (ctrl+z) my editor and sometimes forget about it and could really use this prompt.

> File tree in browser

Space+e or Space+E opens a hierarchical file explorer on HEAD, but this is definitely relatively new.


Although the file explorer you link there was abandoned, a vim-telescope style explorer was merged earlier this year.

However, when you already have a fuzzy-search based file picker built in, an explorer doesn't bring much extra utility.


Totally disagree. I find having both extremely useful. Once I start to get to know a code base jumping through files with telescope is super useful. But when I'm getting spun up on a project and don't know what files I'm looking for, a file tree is super nice.


> I tried to switch from neovim to helix for a couple weeks

I bounced off after a couple of hours because 35 years of muscle memory for vim commands meant I was making constant infuriating mistakes[0]. Which is a shame because I'd really like to use helix and avoid the neovim config faff.

[0] I did try one of the "vim keybindings for helix" but they were only partially correct and that annoyed me even more.


I have the same issue with Helix not watching and reloading modified files automatically, as I sometimes run external programs modifying those files (templ and sqlc are good examples). Curious about how experienced Helix users are addressing this.


I have file reload bound to a key:

    [keys.normal.space.f]
    f = "file_picker_in_current_directory"
    F = "file_picker"
    b = "file_picker_in_current_buffer_directory"
    "." = ":toggle-option file-picker.git-ignore"
    g = "global_search"
    e = "file_explorer"
    r = ":reload-all"
    x = ":reset-diff-change"
    w = ":echo %sh{git blame -L %{cursor_line},+1 %{buffer_name}}"


After explicitly setting the python version to 3 in package.json? Yes please!


I appear to have crashed the server with "tic 999", sorry guys!


that wasn't it but yeah lol


It doesn't matter anymore, Namecheap has taken down polyfill[.]io: https://x.com/malwrhunterteam/status/1806074377383121148


It still matters insofar as, if you turn this feature on, your site will continue to work as expected even with Polyfill[.]io offline.


Hi Matthew, Cloudflare just charged my card $235 with no prior warning (for a service I intended to cancel when I received the renewal reminder). I did not receive any renewal reminders, no advance notice of payment, no invoice. Nothing.

Is this normal operating behaviour for Cloudflare? It seems like very, very anti-consumer behaviour.


Do you have a support ticket number about this? You can email me (email in profile) and I'll make sure support sees it.


If you can understand HTML, you can understand HTMX.


I see no reason to use this if you commit often and know how to use reflog.


So then don't use it.

I prefer not rebasing / editing unless I have to.


What do you mean? The concept of dura is editing. And using reflog isn’t normally a rebase.


>Now I felt even more excited. I could push fixes and refactors without having to wait for someone to code review them.

Am I the only one feeling uneasy about this?


Yes and no. Open source is built on trust, something we're starting to feel the downsides of today, when npm modules sometimes get compromised; but people also get shared responsibility over shared resources like reusable libraries.

I'm torn on whether it's good or bad, really. I feel like our tools should do more to protect us, but until we get there, maybe we do need to be more careful about who we give our trust to?


From a security standpoint, how did NPM become a thing? Bar none, it feels like the most compromisable system short of SCADA.


A small JavaScript standard library, plus the extra demand for packages that came with running JavaScript server-side, seem to be contributing factors.


For users of the NPM ecosystem this must be as exciting as this bus voyage: https://youtu.be/2WP5HfRt2Us


There is a lot of unmaintained code out there. When somebody steps up and starts doing meaningful work to fix that, stepping out of the way and letting them do their thing is a very appropriate way to get the changes out to users.

Ultimately open source is based on trust. Code reviews are nice if you can get them but github is full of projects with pull requests that aren't getting reviewed. That's frustrating for contributors and users.

I tend to feel bad when somebody submits a pull request to one of my projects and I don't have time to properly review it. It happens sometimes. But I tend to bias towards merging the work unless I see something really wrong with the pull request, in which case I'll inform the contributor. Ultimately, OSS should be a meritocracy. People doing the work gain power and trust. Nothing wrong with that.


> Am I the only one feeling uneasy about this?

Why does this make you feel uneasy? Who's to say the original creator of the repo has any better intentions than the author of this article?


It shows that the author is unconcerned about code quality in someone else's project.

But you are right that the original project had the same lack of review. This is a big problem with sole-author open source: such projects tend to be poorly documented and full of nasty code that is error-prone, hard to maintain, and in continual need of fixes. I know this from bugfix commits I've made to various projects.


I am very permissive about granting commit privileges, but not release privileges.

I feel that's the best trade-off: it removes blockers to contribution while still retaining some measure of control over security and quality.


I'm not arguing for code reviews on every commit (or PR). But if there's no review with each incremental change, is it still feasible to review at release time?


Hi, author here. The original author of the repo gave me permissions to push because he didn't have time to maintain it.

