Hacker News | 7734128's comments

I don't understand why people are not more upset at that attempt.

Finally, a way to perhaps remove laugh tracks in the near future.

There are examples on YouTube of laugh tracks being removed, and the results are full of awkward pauses, so I think you'd need to edit the video to cut the pauses out entirely.

- https://www.youtube.com/watch?v=23M3eKn1FN0

- https://www.youtube.com/watch?v=DgKgXehYnnw


Cutting the pauses will change the beats and rhythm of the scene, so you'd probably need to edit some of the voice lines and actual scenes as well. In the end, if you're not interested in the original performance and work, you might as well read the script instead and imagine it however you want, read it at the pace you want and so on.

And have a video model render an entirely new version for you, I guess.

No reason to try to avoid semantic search. Dead easy to implement, works across languages to some extent, and the fuzziness is worth quite a lot.

You're realistically going to need chunks of some kind anyway to feed the LLM, and once you've got those it's just a few lines of code to get a basic persistent ChromaDB going.
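
For reference, a minimal sketch of that setup, assuming the current chromadb Python package (the path, collection name and chunk texts here are made up):

    import chromadb

    # persists to disk, so the index survives restarts
    client = chromadb.PersistentClient(path="./chroma_db")
    collection = client.get_or_create_collection("docs")

    # add your chunks; ChromaDB embeds them with its default model
    collection.add(
        ids=["chunk-1", "chunk-2"],
        documents=[
            "Refunds are processed within 14 days of the return.",
            "Shipping is free for orders above 50 euros.",
        ],
    )

    # fuzzy, natural-language query; returns the closest chunks
    results = collection.query(query_texts=["how long do refunds take?"], n_results=1)
    print(results["documents"])

The query side is fuzzy enough to catch paraphrases like this without any keyword overlap, which is the point.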


If you touch the image while scrolling on mobile, it opens when you lift your finger. Then when you press the cross in the corner to close the image, the search button behind it gets activated.

How can a serious company not notice these glaring issues on their websites?


AI-powered business value provider frontend developers.


Taiwanese companies still don't value good software engineering, so talented developers who know how to make money leave. This leaves enterprise darlings like Asus stuck with hiring lower-tier talent for numbers that look good to accounting.


On desktop, clicking on an image opens it but then you can't close it, and the zoom seems to be glitchy.

But I'm not surprised, this is ASUS. As a company, they don't really seem to care about software quality.


Wait until you start using an ASUS computer, and hit the BIOS/UEFI issues...

I learned the hard way that ASUS translates to "don't buy ever again".


Enshittification.

It's not that they don't notice.

They don't care.


but it has AI in it.


Who tolerates a website that immediately pushes three overlapping pop-ups in a free user's face?

Why would anyone subject themselves to so much hatred? Have some standards.


Who raw-dogs the internet without an adblocker? Have some standards.


I use uBlock Origin (the full-fat version in Firefox, not the Lite version). It doesn't help, because the pop-ups aren't ads. There's one asking me if I wanna be spied on, one asking me to subscribe or sign in, and one huge one telling me that there's currently a discount on subscriptions.


I've got uBlock Origin on Firefox desktop too, and none of those show. Turn on more of the filter lists in the settings - especially the stuff in the "Cookie notices" and "Annoyances" sections.


The people making it still worthwhile to post content online.


Rumor has it that some people saw the "ads pay for it all" business model and accepted the deal because they wanted the Internet to be sustainable.


If you don't actually buy anything then you're not sponsoring anything.

In fact, generating ad views without purchasing anything reduces the value of the ads to the website.


Your logic is built on the common misconception that the only ad that has any value is the last ad you see before purchase.


I mean, that's a two-sided deal. "You watch ads, you read content". But that deal has been more and more broken by the ad networks and websites; a lot of sites are unnavigable without an adblocker now.

The days of plain text Google AdWords are long, long gone.


Pi-hole and an ad blocker, and I can still repro.



Only those who made the mistake of not using a content filter like uBlock Origin or something equally effective. I just visited the site and got neither pop-ups nor ads.


You should probably ask an AI to read it and summarize it for you.


I see what you did there, and I approve.



It's almost like NY Times has a massive bias against generative AI, for some reason.


They are a respectable institution, they only lie about WMDs in order to justify war and death. That is why we need to protect their IP, to preserve their activities in the future.


Just like when they prematurely publish people's deaths: they write all this stuff up ahead of time, and use automatic stock monitoring to TRIGGER FIRST TO GLOAT.


Extremely common misconception. NASA even has a page about why it's incorrect:

https://www.grc.nasa.gov/www/k-12/VirtualAero/BottleRocket/a...


Would you want a translator to somehow jam that context into the story? Otherwise, I fail to see how it's an issue of translation.

If I had learned Russian and read the story in the original language, I would be in the same position regardless.


It's pretty common for translators to do exactly that, usually via either footnotes or context created by deliberate word choice. Read classical translations, for example, and they'll often point out wordplay in the original language that doesn't quite work in translation. I've even seen that in subtitles.


LLMs tend to imitate that practice; e.g. Gemini seems to do it by default in its translations unless you stop it. The result is pretty poor though: it makes trivial things overly verbose and rarely gets the deeper cultural context. The knowledge is clearly there, since if you ask it explicitly it does much better, but the generalization ability is still nowhere near the required level, so it struggles to connect the dots on its own.


I was going to say that I'm not certain the knowledge is there, so I tried an experiment: if you give it a random Bible passage in Greek, can it produce decent critical commentary on it? That's something it's certainly ingested mounds of literature on, both decent and terrible.

Checked with a few notable passages like the household codes and yeah, it does a decent (albeit superficial) job. That's pretty neat.


Sometimes when republishing an older text, references are added to clarify meanings that readers would otherwise miss for lack of familiarity with the cultural references.

But here there is no need to even add a reference. A good translation may reword "too expensive!" into "what? I can live a whole day on that!" to address things like that.


Some translators may add notes at the bottom of the page for things like that.

It's going to vary greatly, of course. Some will try to culturally adapt things: maybe convert to dollars, maybe translate to "a day's wages", maybe translate it as-is and then add an explanatory note.

You might even get a preface explaining important cultural elements of the era.


I've never seen a 32-bit model. There are bound to be a few of them, but it's hardly a normal precision.


Some of the most famous models were distributed as F32, e.g. GPT-2. As things have shifted more towards mass consumption of model weights it's become less and less common to see.


> As things have shifted more towards mass consumption of model weights it's become less and less common to see.

Not the real reason. The real reason is that training has moved to FP/BF16 over the years as NVIDIA made that more efficient in their hardware, which is the same reason you're starting to see some models being released in 8-bit formats (DeepSeek).

Of course people can always quantize the weights to smaller sizes, but the master version of the weights is usually 16-bit.
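
To illustrate the storage side, a minimal sketch in PyTorch (the tensor is just a stand-in for real model weights; releases cast whole state dicts the same way):

    import torch

    # a 4096x4096 weight matrix in FP32: 4 bytes per parameter
    w_fp32 = torch.randn(4096, 4096, dtype=torch.float32)

    # cast to BF16: same shape, 2 bytes per parameter, half the file size
    w_bf16 = w_fp32.to(torch.bfloat16)

    print(w_fp32.element_size() * w_fp32.nelement())  # 67108864 bytes
    print(w_bf16.element_size() * w_bf16.nelement())  # 33554432 bytes

Casting FP32 master weights down to BF16 keeps the exponent range, which is why it's the usual distribution format; going to 8-bit needs actual quantization, not just a cast.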


And on the topic of image generation models, I think all the Stable Diffusion 1.x models were distributed in f32.

