RedNifre's comments | Hacker News

"Cargo cult"? As in, "Looks like the genius artists at Pixar made everything extra green, so let's continue doing this, since it's surely genius."


Why does it have Either? Doesn't TypeScript have "A | B" style sum types?


Either is biased, union is not.

Probably we should say "union" instead of "sum", as TypeScript unions are not discriminated: string | string in TypeScript is exactly the same as just string, while Either[String, String] is a type which is exactly a sum of two string types. Plus, Either is biased towards R, the happy-path value.
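
A quick TypeScript sketch of both points; this Either is hand-rolled as a discriminated union (it's not a built-in type):

    // A union of two identical types collapses: Collapsed is just string.
    type Collapsed = string | string;

    // A hand-rolled Either keeps the two sides apart via a discriminant tag.
    type Either<L, R> =
      | { tag: "left"; value: L }
      | { tag: "right"; value: R };

    // Either<string, string> still distinguishes its two strings:
    const err: Either<string, string> = { tag: "left", value: "parse error" };
    const ok: Either<string, string> = { tag: "right", value: "it worked" };

    // The right bias shows up in combinators like map, which by convention
    // only touch the right (happy-path) side and pass left values through:
    function map<L, R, R2>(e: Either<L, R>, f: (r: R) => R2): Either<L, R2> {
      return e.tag === "right" ? { tag: "right", value: f(e.value) } : e;
    }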


Nice, what's the KMP plan there?

We currently use https://github.com/michaelbull/kotlin-result , which officially should work on KMP, but has some issues.


If you already have quotation marks, what's the point of the commas?


This is a popular question; the most common answer I’ve seen is:

> Commas exist mostly to help JSON be human-readable. They have no real syntactic purpose as one could make their own notation without the commas and it'd work just fine.

https://stackoverflow.com/a/36104693

Elsewhere such commas can be optional, e.g. in clojure: https://guide.clojure.style/#opt-commas-in-map-literals
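
A minimal TypeScript sketch of why that's true: a toy tokenizer that treats commas exactly like whitespace produces the same token stream with or without them (illustrative code, not a real JSON parser):

    // Strings, punctuation, and atoms are tokens; commas and whitespace are skipped.
    function tokenize(src: string): string[] {
      const tokens: string[] = [];
      const re = /"(?:[^"\\]|\\.)*"|[{}\[\]:]|[^\s,{}\[\]:"]+/g;
      let m: RegExpExecArray | null;
      while ((m = re.exec(src)) !== null) tokens.push(m[0]);
      return tokens;
    }

    console.log(tokenize('{ "a": 1, "b": [2, 3] }'));
    console.log(tokenize('{ "a": 1 "b": [2 3] }'));
    // Both print the same: [ '{', '"a"', ':', '1', '"b"', ':', '[', '2', '3', ']', '}' ]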


Try parsing this:

    { foo :"bar" baz :"bak" quux :[ a,b,c,d ] lol :9.7E+42 }
Ref: https://www.json.org/json-en.html, but without commas. It's line noise. Commas provide a nice visual anchor.


Oh, actually that is the syntax I will use for writing abstractions:

    my-abstr x y z foo: "bar" baz: "bak" "quak" quux: [a, b, c, d] lol: 9.7E+42
I don't think

    my-abstr x y z, foo: "bar", baz: "bak" "quak", quux: [a, b, c, d], lol: 9.7E+42
would be better. Indentation and/or coloring my-abstr and the labels (foo:, baz:, quux:, lol:) are the right measures here.


While I have no problems with indentation-based syntax, it's not very conducive to minification, so it's a no-go in JSON's case.

Coloring things is a luxury, and from my understanding not many people appreciate that fact. When you work in the trenches you see a lot of files on a small viewport, and without coloring most of the time. Being able to parse a squiggly file on a monochrome screen just by reading it is a big plus for the file format in question.

As technology progresses we tend to forget where we came from and what the fallbacks are, but we shouldn't. Not everyone is seeing the same file on a 30" 4K/8K screen with wide color gamut and HDR. Sometimes an 80x24 terminal over some management port is all we have.


Sure, color and indentation are optional, but even without those, I don't see that a comma in the above syntax helps much, even on an 80x24 monochrome display. If you want separators, note that the labels ending with a colon are just that, and just as clear as commas. There is a reason you tried to obfuscate that in your example by placing the colons not with the labels, but with the arguments.


You can also remove the commas in the arrays.


You could; there is no fixed syntax for arrays in Practal, so it would depend on which custom syntax becomes popular. But for expressions that are bracketed anyway, it makes sense to have commas (or other separators), because otherwise you have to write something like

    [(a b) (x y)]
instead of

    [a b, x y]
Personally, I like the second option better.


Looks easy to parse to me and you could also remove the commas in the array.

    { foo: "bar" baz: "bak" quux: [a b c d] lol: 9.7E+42 }


I vibe coded an invoice generator by first vibe coding a "template" command line tool as a bash script that substitutes {{words}} in a LibreOffice Writer document (those are just zipped XML files, so you can unpack them to a temp directory and substitute raw text without XML awareness), and at the end it calls LibreOffice's CLI to convert the result to PDF. I also asked the AI to generate a documentation text file, so that the next AI conversation could use the command as a black box.
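
Not the actual script, but a rough sketch of that substitution step in TypeScript/Node, assuming unzip, zip, and libreoffice are on the PATH (all names here are made up):

    import { execSync } from "node:child_process";
    import { readFileSync, writeFileSync, mkdtempSync } from "node:fs";
    import { tmpdir } from "node:os";
    import { join } from "node:path";

    // Fill {{key}} placeholders in an .odt (a zip of XML files) by raw text
    // replacement, then convert the result to PDF via the LibreOffice CLI.
    function renderInvoice(template: string, values: Record<string, string>, outDir: string) {
      const work = mkdtempSync(join(tmpdir(), "invoice-"));
      execSync(`unzip -q "${template}" -d "${work}"`);

      const contentPath = join(work, "content.xml");
      let xml = readFileSync(contentPath, "utf8");
      for (const [key, value] of Object.entries(values)) {
        xml = xml.replaceAll(`{{${key}}}`, value); // no XML parsing needed
      }
      writeFileSync(contentPath, xml);

      // Re-zip and convert. (Strictly, ODF wants the mimetype entry first and
      // uncompressed, but LibreOffice is tolerant of plain re-zipped archives.)
      const filled = join(tmpdir(), "filled.odt");
      execSync(`cd "${work}" && zip -q -r "${filled}" .`);
      execSync(`libreoffice --headless --convert-to pdf --outdir "${outDir}" "${filled}"`);
    }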

The vibe coded main invoice generator script then does the calendar calculations to figure out the pay cycle and examines existing invoices in the invoice directory to determine the next invoice number (the invoice number is in the file name, so it doesn't need to open the files). When it is done with the calculations, it uses the template command to generate the final invoice.

This is a very small example, but I do think that clearly defined modules/microservices/libraries are a good way to only put the relevant work context into the limited context window.

It also happens to be more human-friendly, I think?


Could some Xcode users explain to me why AppCode (the IntelliJ IDE) did not take off as an Xcode replacement? I recently had to do some iOS work, knew that Xcode sucks, and wanted to try AppCode, only to learn that it was discontinued for lack of interest.


AppCode was unfortunately extremely limited. Lagged behind Swift versions. No editor for Storyboards/XIBs back when they were ubiquitous, and no preview for SwiftUI. Debugging app extensions or anything that isn't the standard "iOS app" was generally flaky or outright not supported.

There was a lot of interest, as IntelliJ is a good IDE, but AppCode just never reached the bare minimum required for me not to use or worry about Xcode.


My problem with AppCode was that the syntax of Swift was still undergoing rapid development, and AppCode lagged too far behind. So AppCode wouldn't understand parts of my code unless I avoided the latest (often very valuable) additions to the language.

When it did understand my code, it was much better than Xcode for navigating and refactoring.

Also, AppCode couldn't edit or even display XIBs so I often had to keep Xcode running alongside it anyway to edit XIBs and make connections between XIB and code, and that was a hassle.


I'm only guessing, and I'm not an Xcode user, but my general observation is that Apple users have a culture of using whatever Apple gives them, whereas in the Windows world the culture is to go out and look for your favorite third-party option.


If I remember correctly, you still needed Xcode to actually compile and run the code, because of restrictions imposed by Apple.


Right, but did this mean you had to deal with all the Xcode problems, or did you have more of an IntelliJ like experience?


AppCode gave you an IntelliJ-like experience, and personally I found it significantly better than Xcode when writing code.

But the friction of needing to keep around Xcode anyway whenever you wanted to run your code meant it was just easier to stick with Xcode.


AppCode was a really great IDE, especially if you were developing server-side Swift. I wish JetBrains would have open-sourced the plugins so it could carry on.


We did something like that on a private Minecraft server 15 years ago, where everybody was required to have an identi.ca account for this server (status.net, like Mastodon today). All messages appeared on the walls of a large library building in the middle of the world, and people could post to their accounts from the in-game chat.

It worked really well for about half the people, the other half ignored it completely.

I wouldn't mind if today's office chats like Teams or Slack added a microblogging feature where you could subscribe to interesting colleagues.


While you're at it, could you also change YouTube so that captions being on/off is a per-language setting, and not a "the user turned off subtitles for language X, therefore it's the only language they speak, therefore let's turn on subtitles for all other languages" thing? It's really annoying that whenever I watch a video in a different language from the previous video, the first thing I have to do is turn off the captions.


This isn't a proper solution, but if you watch with yt-dlp+mpv, you can configure default audio and sub languages in mpv (globally) and they will also work for YT videos in addition to your movies and such. Plus if you do toggle them on/off for one session it won't mess up your mpv config.
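
For example, in mpv.conf (the language codes here are just an illustration):

    # Preferred audio track languages, in order of preference.
    alang=de,deu,en,eng
    # Preferred subtitle languages when subtitles are enabled.
    slang=en,eng
    # Or keep subtitles off by default entirely:
    # sid=no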


Okay, but how about checking the 99.9% policy first and if it doesn't match, run Ken's solution?
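
i.e. roughly this shape, in a TypeScript sketch (all names and stand-in implementations are hypothetical):

    type Input = string;
    type Output = string;

    // Hypothetical stand-ins — the real predicate and algorithms go here.
    const matchesCommonCase = (input: Input): boolean => input.length < 80;
    const simpleFastPath = (input: Input): Output => input.trim();
    const kensGeneralSolution = (input: Input): Output => input.normalize().trim();

    // Cheap check for the dominant (99.9%) case first; fall back to the
    // fully general algorithm only for the rare inputs it doesn't cover.
    function process(input: Input): Output {
      return matchesCommonCase(input)
        ? simpleFastPath(input)
        : kensGeneralSolution(input);
    }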


You'd be better off with some stupid code that junior devs (or Grug brained senior devs) could easily understand for the 0.1% cases.


That's exactly how you get today's abysmally slow software though, because if you chain enough components you end up running into 1% cases all the time.
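
The arithmetic compounds quickly; with illustrative numbers:

    // If each of n chained components independently hits its 1% slow case,
    // the chance a request hits at least one slow path is 1 - 0.99^n.
    const pSlow = (n: number) => 1 - Math.pow(0.99, n);
    console.log(pSlow(10).toFixed(2));  // "0.10"
    console.log(pSlow(100).toFixed(2)); // "0.63"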


(In that specific case, with that specific very skewed data distribution, yes. But not necessarily in the general case. So "they" would be better off with that solution, but not a generalized "you").


Then how do they learn?


My guess is that they would be more likely to “fix” and “improve” Ken’s algorithm than understand it (Jon Bentley mentioned, in passing, a similar case where this was the outcome.)


Oh! "Educating the newbies/grunts" is the reason for #$#@&#?

I'd say that an org has a limited number of pennies to spend on non-trivial code. Let's spend them wisely.


They can learn finite automata from computer science text books.


True, I suppose, but there's simply no substitute for contextual, on-the-job learning, imo.


That's what he said they did. That's the joke.

>so they changed the existing code to just check that predicate first, and the code sped up a zillion times, much more than with Ken's solution.

But you've now spent all this time developing some beautiful solution for 0.01% of the actual data flow. Not the best use of dev time, essentially.


Ah, thanks for clarifying. I understood it as "Check that predicate first and if it doesn't match, run the old solution".


If a case doesn't match, then speeding up the remaining 0.1% is not going to change the overall speed much. Hence a non-optimized algorithm may be enough. But when speed and speed variance are critical, optimization is a must.


The "overall speed" is rarely all that matters.


The devil is in the details. A pair of my coworkers spent almost two years working on our page generation to split it into a static part and a dynamic part, similar to Amazon's landing pages: a static skeleton with lots of lazy-loaded blocks. When they finally finished, the work was mediocre for two reasons. One, they ended up splitting one request into several (but usually two), and they showed average response times dropping 20% but neglected to show that the overall request rate also went up by more than 10%. They changed the denominator. Rookie mistake.
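
With made-up numbers, the denominator change looks like this:

    // Hypothetical server time spent per page view, before and after the split:
    const before = 1.0 * 100; // 1 request/page × 100 ms average = 100 ms/page
    const after  = 1.1 * 80;  // 1.1 requests/page × 80 ms average = 88 ms/page
    console.log(before, after); // 100 88

    // "Average response time dropped 20%" is true (100 ms -> 80 ms per request),
    // but total work per page only dropped 12%, because the number of requests
    // per page (the denominator of that average) grew at the same time.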

Meanwhile, one of my favorite coworkers smelled a caching opportunity in one of our libraries that we use absolutely everywhere, and I discovered a whole can of worms around it. The call was too expensive, which was most of the reason a cache seemed desirable. My team had been practicing Libraries Over Frameworks, and we had been chipping away for years at a NIH framework that a few of them, and a lot of people who had since left, created in-house. At some point we lost sight of how we had sped up the “meat” of workloads at the expense of higher setup time, initializing these library functions over, and over, and over again. Example: the logging code depended on the feature-toggle code, which depended on a smaller module, which depended on a smaller one still. Most modules depended on the logging and feature-toggle modules. Those leaf-node modules were being initialized more than fifty times per request.

Three months later I had legitimately cut page load time by 20% by running down bad code patterns. A third of that gain was two dumb mistakes. In one, we were reinitializing a third-party library once per instantiation instead of once per process.

So on the day they shipped, they claimed a 20% response-time reduction (which was less than the proposal expected to achieve), but a lot of their upside came from avoiding work I’d already made cheaper. They missed that the last of my changes went out in the same release, so their real contribution was 11%, and because of the extra requests it only resulted in a 3% reduction in cluster size, after three man-years of work. Versus 20% and 7% respectively for my three months of work spread over five.

Always pay attention to the raw numbers and not just the percentages in benchmarks, kids.


Bash might not be difficult, but it is very annoying, so I'm happy that the AI edits my scripts for me.


> Bash might not be difficult, but it is very annoying

Just shellcheck the hell out of it until it passes all tests.
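
e.g. (script name hypothetical):

    shellcheck myscript.sh
    # or report everything down to style issues and follow sourced files:
    shellcheck --severity=style -x myscript.sh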

