Hacker News | WhyNotHugo's comments

The entire screen is covered by a video which just renders:

> Watch video on YouTube
> Error 153
> Video player configuration error

On Firefox/Linux or Safari/iOS.

How is it that this kind of organisation can't properly embed a video player or make a working landing page?


They're definitely reliable routers, just not in the way one typically wishes.

I read this as someone shouting, and cannot override the voice in my head to not-shout while reading it.

It’s a liability for them. They’re a small project and someone who just hates open source community projects can sue them out of existence for non-compliance.

I wouldn't mind it getting fully tested in court, to be honest.

That may be a rationale, but then they act to help governments spy on people. This is a no-go. I understand that the government created that problem, but they should simply refuse to act as a proxy-sniffer for governments here. Who in their right mind would want age-spying daemons running on their systems? Next thing the BSDs will do is support systemd ... then we can integrate the sniffing and call it systemd-kiddie-protector.c.

If you’re building from source, the vulkan backend is the easiest to build and use for GPU offloading.

Yes, that's what I tried first. Same issue with trying to allocate more memory than was available.
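For context, a Vulkan build along these lines looks roughly like the following (assuming llama.cpp; the flag names are from its current CMake options and may differ by version). Reducing the number of offloaded layers is the usual workaround for the over-allocation described above:

```shell
# Build with the Vulkan backend enabled (assumed flag; check the
# project's README for your version).
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Offload layers to the GPU at run time; lower -ngl if the
# allocation exceeds available VRAM.
./build/bin/llama-cli -m model.gguf -ngl 24 -p "Hello"
```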

Somehow this whole ecosystem of tools always gives me a bad vibe, and I can't quite pinpoint why.

All the demos and videos are applications with lots of stacked pop-ups/modal windows, and things moving around continuously. It all reminds me of what we typically see in computers in TV shows or sci-fi films.

It just looks like a chaotic mess of things, and I get this really strong urge to just stay away from it all.


Yeah, similar feelings here. I grew up in the BBS world in the 90s, and love a quality TUI experience, but something about this toolset just gives me the ick. I can't keep any of the naming straight; to me it all reads like "combine Chapstick with Cotton Swab inside of Matcha Latte" and my eyes instantly glaze over.

How does this company make money? Is it all just a ZIRP fever dream?


For me it's the fact that a large chunk of my terminal experience is over limited-bandwidth connections to laggy servers with varying feature support. I appreciate the eye candy and what they have achieved, but I don't need it; I just want TUIs to work everywhere with low latency.

Yeah, nothing quite like ANSI code dumps at 9600 bps.

I did a sudo apt upgrade at 600 baud this morning.

Life is weird.


Bubbletea is actually pretty cool. I also agree that the website doesn't look so good.

It looks amazing. Websites and projects are allowed to have personality and use flashy colors.

It reminds me of "web3" marketing. My hackles are immediately raised.

I agree with you about the general vibe being off-putting. I’ve been using their libraries for a while now and have to say they are pretty solid, though. The terminal components work reliably and have fewer UX bugs than the alternatives.

> It all reminds me of what we typically see in computers in TV shows or sci-fi films.

The general vibe I get is "script kiddies trying too hard".


I've been doing more CLI tools now than ever, and sometimes it's just fun/cool to tell whatever LLM to "use gum to make this pretty" =)

Performative retro-chic. It’s for people whose first computer was a retina Macbook Pro.

At least it isn't your generic, hundred-billionth, AI-generated, samey-samey yawnfest of a Tailwind website. This site has personality!

I would be fine with a chaotic bubbly mess of an outside presentation if the libraries were more robust and foundational. At the moment the underlying code, when you scratch the surface, has the feel of things thrown together to be replaced at a later date.

I bounced off of bubble tea not because of the aesthetics or the unhelpful naming, but because of the programming model: an MVC architecture cribbed from the Elm language. Why? It completely takes over and rips apart my CLI structure. A CLI is not a DOM or System.Windows.Forms; MVC scatters logic around and adds indirection layers needlessly.

I am still using huh? and vhs, but their libraries have the feel of looking really good in demos and in the provided examples, yet breaking down quickly when you color just outside those intended lines.


+1.

Maybe it's just pattern recognition misfiring, maybe I'm just too used to workhorse software with websites that look stuck in 1999, but everything coming out of that company (Charmbracelet, Inc., according to the website footer) feels like it could suddenly get monetized and enshittified next Tuesday morning.

There is also something weird about this entire aesthetic. It all has this 2020s startup vibe of smooth gradients, bisexual lighting, and infantile mascots with cartoon- or anime-inspired styles, vaguely Asian and vaguely feminine. It feels predatory, not in the Big Cat with Bloody Mouth way, but in the Cocomelon sensory-videos-for-ages-1-to-3 way. Am I crazy, or does anyone else feel similarly?


bisexual lighting?!

It has a “gen-AI” vibe to it, and it’s meant to be playful in an alt/psy-pop sort of way, but it is from no culture in particular. It trips up my brain, like a bubbly ad from the marketing department of a dystopian corporation.

iPhones have a locked bootloader. Unless a really solid exploit is found (unlikely) this won’t happen.

A terminal isn’t enough for everything, especially developers. I use lots of windows at the same time and plenty of non-terminal applications.

When forced to use macOS, a Linux VM provides a very convenient experience.


As well, if you run an aarch64 VM with virtualization there is essentially no lag. It runs great. I have an Alpine Linux one on my M2 Macbook Pro.

Good to hear. What hypervisor are you using (UTM, VMWare, Parallels)?

I use UTM; it's simple and seems light. I can share my source directory with the VM, so I can edit using macOS PyCharm and test the containers in the VM.

What really caught me out was that I downloaded an x64 image once (there was no arm64 image) and it somehow just ran anyway in the arm64 VM. That may have been some qemu magic?

I love the macOS/virtualised Linux dev workflow, but it isn't better than plain Linux. I'm just still not convinced GUI stuff works on Linux as well as it does on macOS, and MacBook hardware is so nice (if you're not paying for it).


This is why you write the tests first and then the code. Especially when fixing bugs, since you can be sure that the test properly fails when the bug is present.

When fixing bugs, yes. When designing an app, not so much, because you realize many unexpected things while writing the code and seeing how it behaves. Often the original test code would test something that is never built. It's obvious for integration tests, but it happens for tests of API calls and even for unit tests. One could start writing unit tests for a module or class and eventually realize that it must be implemented in a totally different way. I prefer experimenting with the implementation and writing tests only once it settles down on something that I'm confident will go to production.

Where I'm at currently (which may change) is that I lay down the foundation of the program and its initial tests first. That initial bit is completely manual. Then, when I'm happy that the program is sufficiently "built up", I let the LLM go crazy. I still audit the tests, though personally, auditing tests is the part of programming I like the least. This also largely preserves the initial architectural patterns that I set, so it's just much easier to read LLM code.

In a team setting I try to do the same thing and invite team members to start writing the initial code by hand only. I suspect if an urgent deliverable comes up though, I will be flexible on some of my ideas.


> When fixing bugs, yes.

One thing I want to mention here is that you should try to write a test that not only prevents this bug, but also similar bugs.

In our own codebase we saw that regression on fixed bugs is very low, so writing a specific test for each one isn't the best way to spend your resources. Writing a broad test, when possible, is.

Not sure how LLMs handle that case to come up with a proper test.


I'd argue the AI writing the tests shouldn't even know about the implementation at all. You only want to pass it the interface (or function signatures) together with javadocs/docstrings/equivalent.

I don't think it addresses the problem.

Writing the tests first and then writing code to pass the tests is no better than writing the code first and then writing tests that pass. What matters is that both the code and the tests are written independently, from the specs, not from one another.

I think it is better not to have access to the tests when first writing code, so as to make sure you code the specs rather than code to the tests that test the specs, as something may be lost in translation. It means I have a preference for code first, but the ideal case would be for different people to do both in parallel.

Anyway, about AI: if an AI writes both the tests and the code, it will make sure they match no matter which comes first; it may even go back and forth between the tests and the code, but that doesn't mean it is correct.


Tests are your spec. You write them first because that is the stage when you are still figuring out what you need to write.

Although TDD says that you should only write one test before implementing it, encouraging spec writing to be an iterative process.

Writing the spec after implementation means that you are likely to have forgotten the nuance that went into what you created. That is why specs are written first. Then the nuance is captured up front as it comes to mind.


Tests are not any more or any less of a spec than the code. If you are implementing an HTTP server, for instance, RFC 7231 is your spec, not your tests, not your code.

I would say that which comes first between specs and code depends on the context. If you are implementing a standard, the specs of the standard obviously come first, but if you are iterating, maybe on a user interface, it can make sense to start with the code so that you can have working prototypes. You can then write formal documents and tests later, when you are done prototyping, for regression control.

But I think that leaning on tests is not always a good idea. For example, let's continue with the HTTP server. You write a test suite, but there is a bug in your tests, I don't know, you confuse error 404 and 403. Then you write your code, correctly, run the tests, and see that one of your tests fails and tells you that you returned 404 and not 403. You don't think much, after all "the tests are the specs", and change the code. Congratulations, you are now making sure your code is wrong.

Of course, the opposite can and does happen: writing the code wrong and making it pass the tests without thinking about what you are actually testing, and I believe that's why people came up with the idea of TDD. But for me, test-first flips the problem rather than solving it. I'd say the only advantage, if it is one, is that it prevents taking a shortcut and releasing untested code, by moving tests out of the critical path.

But outside of that, I'd rather focus on the code, so if something is to be "the spec", that's it. It is the most important, because it is the actual product; everything else is secondary. I don't mean unimportant; I mean that, from the point of view of users, it is better for the test suite to be broken than for the code to be broken.


> RFC 7231 is your spec

It is more like a meta spec. You still have to write a final spec that applies to your particular technical constraints, business needs, etc. RFC 7231 specifies the minimum amount necessary to interface with the world, but an actual program to be deployed into the wild requires much, much more consideration.

And for that, since you have the full picture that is not available to a meta spec, logically you will write it in a language that both humans and computers can understand. For the best results, that means something like Lean, Rocq, etc. However, in the real world you likely have to deal with middling developers straight out of learn-to-code bootcamps, so tests are the practical middle ground.

> I don't know, you confuse error 404 and 403.

Just like you would when writing RFC 7231? But that's what the RFC process is for. You don't have to skip the RFC process just because the spec also happens to be machine readable. If you are trying to shortcut the process, then you're going to have this problem no matter what.

But, even when shortcutting the process, it is still worthwhile to have written your spec in a machine-readable format as that means any changes to the spec automatically identify all the places you need to change in implementation.

> writing the code wrong and making it pass the tests without thinking about what you are actually testing

The much more likely scenario is that the code is right, but a mistake in the test leads it to not test anything. Then, years down the road after everyone has forgotten or moved on, when someone needs to do some refactoring there is no specification to define what the original code was actually supposed to do. Writing the test first means that you have proven that it can fail. That's not the only reason TDD suggests writing a test first, but it is certainly one of them.

> It is the most important, because it is the actual product

Nah. The specification is the actual product; it is what lives for the lifetime of the product. It defines the contract with the user. Implementation is throwaway. You can change the implementation code all day long and as long as the user contract remains satisfied the visible product will remain exactly the same.


> The much more likely scenario is that the code is right, but a mistake in the test leads it to not test anything.

What I usually do to prevent this situation is to write a passing test, then modify the code to make it fail, then revert the change. It also gives an occasion to read the code again, kind of like a review.

I have never seen this practice formalized though; good for me, this is the kind of thing I do because I care. Turning it into a process with Jira and such is a good way to make me stop caring.


> I have never seen this practice formalized though

Isn't that what is oft known as mutation testing? It is formalized to the point that we have automation to do the mutation for you automatically.
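A hand-rolled illustration of the idea (tools like mutmut automate the code-modification step): if mechanically altering the implementation does not make the suite fail, the suite proves nothing. The `add` function and its "suite" here are hypothetical stand-ins.

```python
# Toy mutation test: swap the operator in the implementation and
# check that the test suite notices each mutant.

import operator

def run_suite(add) -> bool:
    # The "test suite" for a hypothetical add() function.
    return add(2, 3) == 5 and add(-1, 1) == 0

original = operator.add                       # the real implementation
mutants = [operator.sub, operator.mul, max]   # mechanical mutations

assert run_suite(original)                    # passes on the real code
killed = sum(not run_suite(m) for m in mutants)
print(f"{killed}/{len(mutants)} mutants killed")  # → 3/3 mutants killed
```

A surviving mutant means either dead code or a test that is too weak to detect the change.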


Thank you, I wasn't aware of this; it's the kind of thing I wish people were more aware of. Kind of like fuzzing, but for tests.

About fuzzing: I have about 20 years of experience in development, and I have never seen fuzzing being done as part of a documented process in a project I worked in, not even once. Many people working in validation don't even know that it exists! The only field where fuzzing seems to be mainstream is cybersecurity, and most fuzzing tools are "security oriented", which is nice, but it doesn't mean that security is the only field where fuzzing is useful.

Anyways, what I do is a bit different in that it is not random like fuzzing; it is more like reverse TDD. TDD starts with a failing test; then you write code to pass the test, and once done, you consider the code to be correct. Here you start with a passing test; then you write code to fail the test, and once done, you consider the test to be correct.


> I have never seen fuzzing being done as part of a documented process in a project I worked in

Fuzzing, while useful in the right places, is a bit niche. Its close cousin, property-based testing, is something that is ideally seen often in the spec.

However, it starts treading into the direction of the same kind of mindset required to write Lean, Rocq, etc. I am not sure the bootcamp grad can handle writing those kinds of tests. At least not once you move beyond the simple identity(x) == x case.
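To make the property-based idea concrete, here is a minimal hand-rolled version going one step beyond the identity(x) == x case (libraries like Hypothesis generate, shrink, and replay the cases for you; the `encode`/`decode` codec is a hypothetical example):

```python
# Property check: decode(encode(xs)) == xs must hold for every
# input, not just the handful a hand-written example test picks.

import random

def encode(xs):
    # Hypothetical codec: comma-separated integers.
    return ",".join(map(str, xs))

def decode(s):
    return [int(x) for x in s.split(",")] if s else []

random.seed(0)  # reproducible sampling
for _ in range(200):
    xs = [random.randint(-1000, 1000)
          for _ in range(random.randint(0, 20))]
    assert decode(encode(xs)) == xs, f"round-trip failed for {xs}"
```

Random sampling like this finds the edge cases (here, the empty list) that example-based tests routinely miss.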


Also, if you find after implementation that the spec wasn't specific enough, go ahead and refresh the spec and have the LLM redo the code, from scratch if necessary. Writing code is so cheap right now, it takes a different mindset in general.

Agreed 1000%. But that can be a lot of work; creating a good set of tests is nearly as much effort as implementing the thing being tested, and often even more.

When LLMs can assist with writing useful tests before having seen any implementation, then I’ll be properly impressed.


From experience, AI is bad at TDD. It can infer tests based on written code, but it is bad at writing generalised tests unless a clear requirement is given, so you, the engineer, are doing most of the work anyway.

My day job has me working on code that is split between two different programming languages. I'd say LLMs are pretty good at TDD in one of those languages and a hot mess in the other.

Which, funny enough, is a pretty good reflection of how I thought of the people writing in those languages before LLMs: One considers testing a complete afterthought and in the wild it is rare to find tests at all, and when they are present they often aren't good. Whereas the other brings testing as a first-class feature and most codebases I've seen generally contain fairly decent tests.

No doubt LLM training has picked up on that.


try this for a UI

We’ve had decades during which we could have educated more and more people on how to use tech. Instead, the industry embraced dumbing everything down and claiming that everything needs to target technically illiterate people, while insisting that educating everyone was a bad idea.

This is what we end up with: people who don’t understand enough to configure their own phones, and phones full of spyware under the control of consumer-hostile actors.

The most worrying thing is that this doesn’t just apply to the elderly: it also applies to the younger generation.

