
> Not a lot of companies allow disabling their garbage, but FF does.
>
> Can't we be happy with this nice switch?

I want my tools to keep working the way they have been working. I don't want to be paranoid that "garbage" (as you put it), or any other controversial changes, are going to be slipped into my tools while I'm not looking.


There is something to be said for that. Firefox does keep foisting its 'helpful features', like Pocket, on users, which is very annoying.

My point is just that everyone is so critical of Firefox, when the alternative is orders of magnitude worse for the user.

I'd rather bash on minor Firefox grievances when it's the #1 browser, not when it has lost (or is losing) the browser war and is our last chance at browser engine diversity.


> The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto generate code is checked into git. Like easily a 20x blowup.

Why do you think the typical Go story is to use a bunch of auto generation? This does not match my experience with the language at all. Most Go projects I've worked on, or looked at, have used little or no code generation.

I'm sure there are projects out there with a "bunch" of it, but I don't think they are "typical".


Same here. I've worked on one project that used code generation to implement a DSL, but that would have been the same in any implementation language; it was basically transpiling. And protobufs, of course, but again, that's true in all languages.

The only thing I can think of where Go uses a lot of generation while other languages have other solutions is mocks. But in many languages the solution is "write the mocks by hand", so that's hardly a fair comparison.


Me neither. My Go code doesn't have any auto-generation. IMO it should be used sparingly, in cases where you need a practically different language for expressivity and correctness, such as with a parser generator.


Anything and everything related to Kubernetes in Go uses code generation. It is overwhelmingly "typical", to the point of extreme eye-rolling when you need to issue "make generate" three dozen times a day for any medium-sized PR that deals with k8s types.
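
For readers unfamiliar with the mechanism: the hook behind all of this is Go's go:generate comment directive, which "go generate" (or a Makefile wrapping it) scans for and runs. A minimal sketch using the standard stringer tool rather than Kubernetes' heavier generators:

    package color

    // go generate runs stringer (golang.org/x/tools/cmd/stringer),
    // which writes color_string.go implementing String() for this type.
    //go:generate stringer -type=Color

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )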


But sometimes it is useful to return both a value and a non-nil error. There might be partial results that you can still do things with despite hitting an error. Or the result value might be information that is useful with or without an error (like how Go's ubiquitous io.Writer interface returns the number of bytes written along with any error encountered).
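
A minimal sketch of the pattern, using io.CopyN, which reports how many bytes it copied even when it also returns an error:

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    func main() {
        // io.CopyN returns the byte count alongside any error, so the
        // count is still usable when the source runs out early.
        var dst strings.Builder
        n, err := io.CopyN(&dst, strings.NewReader("short"), 100)
        if err != nil {
            // err is io.EOF here, but n still tells us what we got.
            fmt.Printf("copied %d bytes before error: %v\n", n, err)
        }
        fmt.Println(dst.String())
    }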

I appreciate that Go tends to avoid making limiting assumptions about what I might want to do with it (such as assuming I don't want to return a value whenever I return a non-nil error). I like that Go has simple, flexible primitives that I can assemble how I want.


Then just return a value representing what you want, instead of breaking a convention and hacking something and hoping that someone else at the use site has read the comment.

Also, just let the use site pass in something to store partial results (an out variable, a pointer, a mutable object, whatever your language has).


> instead of breaking a convention and hacking something and hoping

It's not a convention in Go, so it's not breaking any expectations.


But in most cases you probably want something disjoint, like Rust's `Result<T, E>`. In the case of "it might be a success with a partial failure", you could go with an unnamed tuple like `(Option<T>, E)` or another approach.


I share your unpopular opinion.

While I understand that having identical UI elements across apps aids in discoverability, I just love it so much when an app has its own bespoke interface that was clearly made with love.

Like you, it might be my love of games that has given me this preference. Would StarCraft II have a better UX if its menus used the standard Windows widgets where applicable? I think certainly not. And I think the same can be true for many non-game apps.


My perception is that there's always been some dissonance between loving tech intrinsically, for itself, and the Silicon Valley venture capital business model. Conflating tech with its dominant business model enables the paradox where a person deeply in love with tech can also be anti-tech.


> Going from a project with 1,000 to 1,000,000 lines of code is a tiny leap compared to going from 0 to 1000.

Are you sure the leap is tiny? It's a much easier problem to get only 1,000 lines of code to be correct, because those lines only have to be consistent with each other; in a million-line project, far more code has to be kept mutually consistent.


Right, and projects that are millions of lines of code usually aren't even one codebase or one app or service.


I also love CSV for its simplicity. A key part of that love is that it comes from the perspective of me as a programmer.

Many of the criticisms of CSV I'm reading here boil down to something like: CSV has no authoritative standard, and everyone implements it differently, which makes it bad as a data interchange format.

I agree with those criticisms when I imagine them from the perspective of a user who is not also a programmer. If this user exports a CSV from one program, and then tries to load the CSV into a different program, but it fails, then what good is CSV to them?

But from the perspective of a programmer, CSV is great. If a client gives me data to load into some app I'm building for them, then I am very happy when it is in a CSV format, because I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file.

Parsing CSV is quick and fun if you only care about parsing one specific file. And that's the key: it's so quick and fun that it enables you to just parse anew each time you have to deal with some CSV file. It just doesn't take very long to look at the file, write a row-processing loop, and debug it against the file.

The beauty of CSV isn't that it's easy to write a General CSV Parser that parses every CSV file in the wild, but rather that it's easy to write specific CSV parsers on the spot.
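
To make that concrete, here's the sort of throwaway parser I mean. A sketch that assumes a comma-delimited file with a header row and no quoted cells (the file name is made up):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("clients.csv") // hypothetical input file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        sc.Scan() // skip the header row
        for sc.Scan() {
            cells := strings.Split(sc.Text(), ",")
            fmt.Println(cells) // replace with real row processing
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }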

Going back to our non-programmer user's problem, and revisiting it as a programmer, the situation is now different. If I, a programmer, export a CSV file from one program, and it fails to import into some other program, then as long as I have an example of the CSV format the importing program wants, I can quickly write a translator program to convert between the formats.

There's something so appealing to me about simple-to-parse-by-hand data formats. They are very empowering to a programmer.


Totally agree that its biggest strength is how approachable it is for quick, ad hoc tooling. Need to convert formats? Join two datasets? Normalize a weird export? CSV gives you just enough structure to work with and not so much that it gets in your way.


> I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file

This is fine if you can hand-check all the data, or if you are okay with two offsetting errors corrupting a portion of the data without affecting the rest of it.

Also, I find it odd that you call it "easy" to write custom code to parse CSV files and translate between CSV formats. If somebody gives you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy." When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.

Like, I get that a farmer a couple hundred years ago would describe plowing a field with a horse as "easy," but given the emergence of alternatives, you wouldn't use the word in that context anymore.


> If somebody gives you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy."

But it isn't that reliably easy with JSON. Sometimes I have clients give me data that I just have to work with, as-is. Maybe it was invalid JSON spat out by some programmer or tool long ago. Maybe it's just from a different department than my contact, which might delay things for days before the bureaucracy gets me a (hopefully) valid JSON.

I consider CSV's level of "easy" more reliable.

And even valid JSON can be less easy. I've had experiences where writing the high-level parsing for some JSON file, in terms of a JSON library, was less easy and more time-consuming than writing a custom CSV parser.

Subjectively, I think programming a CSV parser from basic programming primitives is just more fun and appealing than programming in terms of a JSON library or XML library. And I find the CSV code is often simpler and quicker to write.


> When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.

But the premise of CSV is so simple that there are only four quirks to empirically discover: cell delimiter, row delimiter, quote, escaped quote.

I think it's "easy" to peek at the file and say, "Oh, they use semicolon cell delimiters."

And it's likewise "easy" to write the "custom logic", which is about as simple as parsing something directly from a text stream gets. I typically have to stop and think a minute about the quoting, but it's not that bad.
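
For what it's worth, the quoting logic is only a dozen-odd lines. A sketch assuming comma delimiters, "" as the escaped quote, and no newlines inside cells (the function name is mine):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRow splits one CSV row into cells, handling double-quoted
    // cells with "" as the escaped quote.
    func splitRow(line string) []string {
        var cells []string
        var cell strings.Builder
        inQuotes := false
        for i := 0; i < len(line); i++ {
            c := line[i]
            switch {
            case inQuotes && c == '"' && i+1 < len(line) && line[i+1] == '"':
                cell.WriteByte('"') // "" inside quotes is a literal quote
                i++
            case c == '"':
                inQuotes = !inQuotes
            case c == ',' && !inQuotes:
                cells = append(cells, cell.String())
                cell.Reset()
            default:
                cell.WriteByte(c)
            }
        }
        return append(cells, cell.String())
    }

    func main() {
        fmt.Println(splitRow(`a,"b,c","say ""hi"""`)) // [a b,c say "hi"]
    }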

If a programmer is practiced at parsing from a text stream (a powerful, general skill that is worth exercising), then I think it is reasonable to think they might find parsing CSV by hand to be easier and quicker than parsing JSON (etc.) with a library.


I love that the Go project takes compatibility so seriously. And I think taking Hyrum's Law into account is necessary, if what you're serious about is compatibility itself.
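
Hyrum's Law in miniature: nothing promises an error message's exact text, yet code like the following sketch exists everywhere, so a platform serious about compatibility effectively ends up keeping even messages stable:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    func main() {
        err := errors.New("dial tcp 127.0.0.1:80: connection refused")
        // No API promises this exact wording, but once enough code
        // matches on it, changing the message breaks users in practice.
        if strings.Contains(err.Error(), "connection refused") {
            fmt.Println("retrying...")
        }
    }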

Being serious about compatibility is what makes it possible for a piece of software to be finished. If I finished writing a book twelve years ago, you could still read it today. But if I finished writing a piece of software twelve years ago, could you still build and run it today? Without having to fix anything? Without having to fix lots of things?

> Sure, but now that there's a "correct" way to do this, you don't get to complain that the hacky thing you did needs to keep being supported.

But that's the whole point and beauty of Go's compatibility promise. Once you finish getting something working, you finished getting it working. It works.

What I don't want is for my programming platform to suddenly say that the way I got the thing working is no longer supported. I am no longer finished getting it working. I will never be finished getting it working.

Go is proving that a world with permanently working software is possible (vs a world with software that breaks over time).


This matches my experience. I think I have a 25 hour circadian rhythm, which has me always wanting to stay up one hour later than the night before.


I think the experience of building something atop a framework should absolutely have bearing on how to build the underlying framework.


> I think the experience of building something atop a framework should absolutely have bearing on how to build the underlying framework.

You'd be wrong. If your job is maintaining a framework, then your focus is on internal details, and your exposure to how the framework is used is limited to whatever concerns you capture in test suites.

Believing that working on a framework gives you equivalent or even similar experience to using said framework in professional settings is akin to believing that all mechanics are excellent drivers just because they work on cars.


Django comes to mind here. The people behind the framework had real experience and real needs in web publishing.


The commenter's point is that this is normal.


But not necessarily optimal.

