
Unlike with fuel, we're not burning the EVs, so even if China cuts off the supply we can keep using the ones we've already got. It would be inconvenient, but not an urgent problem like loss of access to fuel.

It's less about fuel and more about industrial dependence.

Only true for a plug-in hybrid with a series drivetrain (a.k.a. "extended range electric vehicle"). The more common type has two parallel drivetrains linked with clutches, so you still have all the drawbacks of a conventional internal combustion engine drivetrain when you're using it.

> The more common type has two parallel drivetrains linked with clutches, so you still have all the drawbacks of a conventional internal combustion engine drivetrain when you're using it

I don't know about the whole world, but in both the US and Europe nearly half of the hybrids on the road are from Toyota, so unless nearly everything else is two parallel drivetrains linked with clutches, whatever Toyota does is the more common type.

Toyota uses a series-parallel system: a planetary gear system connects the ICE, a large electric motor, a small electric motor, and a drive shaft.

The planetary gear system functions as a power splitting device and a continuously variable transmission. It lets them direct power flow in a bunch of different ways. Here's a summary based on Wikipedia (MB == the bigger battery, 12V == the regular 12V battery, ICE == the internal combustion engine, MG1 == the smaller electric motor, MG2 == the larger electric motor):

• Aux power: MB -> DC/DC converter -> 12V

• Charge: ICE -> MG1 -> MB

• EV drive: MB -> MG2 -> wheels

• Moderate acceleration: ICE -> wheels, ICE -> MG1 -> MG2 -> wheels

• Highway: ICE -> wheels, ICE -> MG1 -> MB

• Heavy power, such as on steep hills: ICE -> wheels, ICE -> MG1 -> MB, ICE -> MG1 -> MG2 -> wheels

• Max power: ICE -> wheels, ICE -> MG1 -> MG2 -> wheels, MB -> MG2 -> wheels

• Regenerative braking: wheels -> MG2 -> MB

• B-mode braking: wheels -> MG2 -> MB, wheels -> MG1 -> ICE

This is a big part of why Toyota hybrids are at the top of reliability rankings. Compared to a pure ICE they replace the clutch, the transmission, the starter motor, the alternator, the reverse gear set, and the flywheel with the planetary gear power splitting device, the two electric motors, and electronics. The power splitting device has very few moving parts--just the gears themselves, a pawl that can mechanically lock the gears when parked, and fluid pumps. The gears only move by rotating, unlike in a conventional transmission where they also change position. This makes their hybrids mechanically much simpler than a pure ICE.
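
To make the "continuously variable transmission" part concrete, here's a minimal Python sketch of the planetary gear's kinematic constraint (the Willis equation). The tooth counts are illustrative assumptions, not real Toyota values; in this layout MG1 sits on the sun gear, the ICE on the planet carrier, and MG2/wheels on the ring gear.

```
SUN_TEETH = 30    # illustrative assumption, not a real Toyota value
RING_TEETH = 78   # illustrative assumption, not a real Toyota value

def mg1_rpm(ice_rpm, ring_rpm):
    """Willis equation for a planetary gear:
    sun*Ns + ring*Nr = carrier*(Ns + Nr).
    With the ICE on the carrier and the wheels/MG2 on the ring,
    MG1 (sun) speed is fully determined -- varying it electrically
    is what makes the device act as a continuously variable
    transmission."""
    return (ice_rpm * (SUN_TEETH + RING_TEETH)
            - ring_rpm * RING_TEETH) / SUN_TEETH

# EV drive: engine stopped, MG1 simply freewheels backwards.
print(mg1_rpm(ice_rpm=0, ring_rpm=2000))     # -5200.0
# Cruising: engine held near an efficient speed regardless of road speed.
print(mg1_rpm(ice_rpm=1800, ring_rpm=2000))  # 1280.0
```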


This is something people say, but in practice the Toyota Prius is still a very reliable car.

The UK is well suited to wind power, already has many wind turbines, and continues to install more. We have a good number of solar panels too. Renewables provide the majority of electrical power when conditions are good, and their share will only increase. Electric vehicles avoid the biggest weakness of renewables (intermittent supply), because they can be set to charge unattended when cheap electricity is available. Electricity suppliers offer variable rate tariffs specifically for electric vehicles.

Once you start running the numbers, the cost of the solar and wind capacity needed to power an electric car is about 10% of the purchase price. And considering they have a battery that can store a week's worth of energy and spend 95% of the time just sitting, intermittency is basically not a problem.
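
For anyone who wants to run those numbers themselves, here's a back-of-envelope sketch; every input below is an assumed round figure, not measured data.

```
# Back-of-envelope check of the "~10% of purchase price" claim.
# Every input below is an assumed round figure, not measured data.

miles_per_year  = 10_000
miles_per_kwh   = 3.5                                 # assumed EV efficiency
annual_kwh      = miles_per_year / miles_per_kwh      # ~2857 kWh/year

capacity_factor = 0.15                                # assumed UK wind/solar blend
kw_needed       = annual_kwh / (capacity_factor * 8760)   # ~2.2 kW

cost_per_kw     = 1400                                # assumed GBP per kW installed
build_cost      = kw_needed * cost_per_kw             # ~GBP 3,000

car_price       = 30_000                              # assumed EV purchase price
print(f"{build_cost / car_price:.0%} of purchase price")  # 10%
```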

Software moving the mouse cursor is only acceptable when the window is full-screen. If the user makes an application go full-screen, they are opting out of the normal desktop UI conventions. It's expected that full-screen software completely takes over the UI, and there are legitimate uses for moving the mouse cursor in full-screen software, e.g. centering an invisible cursor every frame in a first-person shooter game so endless view rotation is possible. But if it's windowed then it should be impossible.
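
The FPS trick is simple to sketch. This is the general pattern only; `window`, `camera`, and their methods are hypothetical stand-ins for whatever toolkit is actually in use.

```
# Sketch of mouselook via per-frame cursor re-centering.

def update_mouselook(window, camera, sensitivity=0.002):
    cx, cy = window.width // 2, window.height // 2
    mx, my = window.get_cursor_pos()          # hypothetical API
    # The offset from center is this frame's rotation input...
    camera.yaw   += (mx - cx) * sensitivity
    camera.pitch += (my - cy) * sensitivity
    # ...then the cursor is warped back to center so the next frame
    # measures a fresh offset, allowing unlimited rotation.
    window.set_cursor_pos(cx, cy)             # hypothetical API
```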

Blender (3D modeling & animation software) implements this cool thing when rotating/resizing objects: if the mouse cursor moves out of the window it reappears on the other side (enabling resizing/rotating ad infinitum).

I think a better way to implement that feature would be a mechanism for programs to temporarily enable off-screen mouse cursors. This should also track the position where the cursor would be if it had been clipped to the screen boundary as normal, and immediately return the cursor to that position when the off-screen mode ends. Note that the OS returns the cursor, not the application, so applications can't abuse this mechanism for repositioning the cursor.
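
A minimal sketch of that proposal, with all names hypothetical: while an app holds the off-screen grant, the OS tracks two positions, the unbounded "virtual" one it reports to the app and the normally clipped one the visible cursor snaps back to when the grant ends.

```
class CursorState:
    def __init__(self, screen_w, screen_h):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.virtual = (0, 0)   # may leave the screen during a grant
        self.clipped = (0, 0)   # where the cursor would normally be

    def move(self, dx, dy):
        vx, vy = self.virtual
        self.virtual = (vx + dx, vy + dy)
        cx, cy = self.clipped
        self.clipped = (min(max(cx + dx, 0), self.screen_w - 1),
                        min(max(cy + dy, 0), self.screen_h - 1))

    def end_grant(self):
        # The OS, not the application, restores the visible cursor,
        # so the mechanism can't be abused to reposition it.
        self.virtual = self.clipped
        return self.clipped
```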

I don’t find that better. Why would it be? Now you don’t see the correlation between the movement of the cursor and its in-app effect.

It's better because it's the minimum change to mouse cursor behavior that allows the feature to work. You don't need to see the cursor while it's off-screen because the point is to manipulate the 3D object, and you can look at the 3D object instead. The same is true for things like controls in an audio DAW which might also benefit from off-screen mouse movement.

If there's really a case where you need to see the exact position of the cursor while it's off-screen, you could display it wrapped around only while it's actually off-screen. But this would potentially confuse new users, so it should be optional and disabled by default.


> You don't need to see the cursor while it's off-screen because the point is manipulate the 3D object, and you can look at the 3D object instead.

Disagreed. Seeing the cursor at all times gives you some point of reference, and once you release the tool, you know where your cursor is.

> If there's really a case where you need to see the exact position of the cursor while it's off-screen, you could display it wrapped around only while it's actually off-screen.

I don’t understand what this means. If it’s not off-screen then it’s automatically also not wrapped around.

> But this would potentially confuse new users, so it should be optional and disabled by default.

This presumes that “cursor is suddenly allowed to be off-screen and not visible” is less confusing.


>Seeing the cursor at all times gives you some point of reference, and once you release the tool, you know where your cursor is.

Seeing is an inferior means of knowing where the cursor is compared to intuition. When I move the cursor, I know where it is with no conscious effort because I treat it as part of my hand. I disable mouse acceleration to make this easier. I don't need to look at my hand to know where my hand is. My subjective experience of mouse clicking is the same: I look at the target and the mouse cursor automatically appears there. If you allow software to move the mouse cursor you weaken this intuition.

>I don’t understand what this means. If it’s not off-screen then it’s automatically also not wrapped around.

When the cursor moves off-screen, it could be displayed with position modulo the screen width/height. Additionally, the cursor shape could be changed to make it obvious it's not the true position. This might make sense if you really need to know the exact off-screen position and the GUI control you're manipulating doesn't provide sufficiently precise feedback.
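
In other words (a sketch of the scheme, not any real API):

```
# Where the wrapped "ghost" cursor would be drawn while the true
# position is off-screen.
def wrapped_display_pos(virtual_x, virtual_y, screen_w, screen_h):
    # e.g. virtual_x = 2100 on a 1920-wide screen is drawn at x = 180,
    # ideally with a distinct cursor shape to flag that it's wrapped.
    return virtual_x % screen_w, virtual_y % screen_h
```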

>This presumes that “cursor is suddenly allowed to be off-screen and not visible” is less confusing.

It is less confusing because other than extending the range of the mouse off-screen, the mouse behavior doesn't change. As soon as the off-screen action finishes, the mouse cursor snaps back to the position it would have otherwise been in.

An alternative option would be to snap back to the position the cursor was at when the special off-screen mode was initiated. This might actually be better, because it makes the off-screen mousing mode an extension of moving the mouse while it's lifted off the mouse pad, which users already have intuition for.


I generally never want programs to go fullscreen because I like to keep the taskbar shown, so I can keep track of time, notifications, and whatnot.

Well-designed video games that rely on fast and precise mouse input capture the cursor during gameplay until a menu is shown.

The only times I have to go fullscreen are for games that fail to capture the cursor, where accidentally clicking outside of the game window leads to a loss.

Can't imagine a non-game program other than a video player that I would want fullscreen.


> But if it's windowed then it should be impossible.

I have one monitor, so fairly often have games/editors windowed with something else alongside them (a video, documentation, …). There are also uses where the mouse is only captured temporarily - like FPS-controls flying mode in Godot and Blender. Some image editors also allow for things like moving the cursor with arrow keys, which I find useful.


> But if it's windowed then it should be impossible.

I worked on several apps for the visually impaired that automatically move the mouse cursor to different UI elements in the front-most application, regardless of the window state. It’s a good reminder that “impossible” often just means “I haven’t accounted for that use case yet.”


If it's part of the OS's standard accessibility framework then it's acceptable. The important point is that applications shouldn't be able to arbitrarily move the mouse in situations when it's unexpected.

You are arguing for uniformity. It does make a lot of sense: the global UI makes a considerable effort to build a single perfect UI, but that can only work if the apps actually make use of it.

But why shouldn’t the global UI itself make use of mouse warping?


Coming from Linux, the accessibility framework is just another series of programs. My main a11y program is a tiny little binary that uses the keyboard to move the mouse around at will; I certainly don't want the system to try and restrict that.

> The important point is that applications shouldn't be able to arbitrarily move the mouse in situations when it's unexpected.

That is quite a different statement from "It should be impossible." What should be impossible is for the OS to prevent this type of usage when it is clearly useful. Beyond accessibility, I use these features to automate testing of native macOS GUI apps.


Character counting errors are a side effect of tokenization, which is a performance optimization. If we scaled the hardware big enough we could train on raw bytes and avoid it.
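
A quick way to see the problem, using the tiktoken library (the exact split depends on the chosen encoding; the one shown is just an example):

```
# The model never sees characters, only token ids.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print([enc.decode([t]) for t in ids])   # e.g. ['str', 'aw', 'berry']
# The r's are spread invisibly across tokens, so the model has to
# memorize per-token spellings instead of just looking at letters.
```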

No, tokenization is not the only reason. A next-word predictor fundamentally has a hard time executing algorithms, even ones as simple as counting.

Counting is one of the algorithms that can be expressed by a RASP program, which transformers closely approximate.
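
For the curious, here's a pure-Python emulation of the RASP-style counting idea (uniform attention plus averaging); it sketches the concept, it is not the actual RASP implementation.

```
def aggregate(selector, values):
    """Average `values` over selected positions -- the analogue of
    one uniform-attention head."""
    return [sum(v for v, s in zip(values, row) if s) / max(1, sum(row))
            for row in selector]

def count(tokens, target):
    n = len(tokens)
    select_all = [[True] * n for _ in range(n)]           # attend everywhere
    indicator = [1.0 if t == target else 0.0 for t in tokens]
    frac = aggregate(select_all, indicator)               # fraction of matches
    return round(frac[0] * n)                             # fraction * length

print(count(list("strawberry"), "r"))                     # 3
```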

Close famously counts in horseshoes and hand grenades. Algorithms, just as famously, are a domain where off-by-one is still wrong.

Whenever somebody calls LLMs "non-deterministic", assume they meant "chaotic", in the informal sense of being a system where small changes of input can cause large changes to output, and the only way to find out if it will happen is by running the full calculation.

For many applications, this is equally troublesome as true non-determinism.


I don't think LLMs are that chaotic, you can replace words in an input and get a similar answer, and they are very good at dealing with typos.

They are definitely not interpretable. I was reading some stuff from mechanistic interpretability researchers saying they've given up trying to build a bottom-up model of how they work.


> I don't think LLMs are that chaotic, you can replace words in an input and get a similar answer, and they are very good at dealing with typos.

Compare "You are a helpful assistant. Your task is to <100 lines of task description> <example problem>"

with

"you are a helpless assistant. Your task is to <100 lines of task description> <example problem>"

I've changed 3 or 4 CHARACTERS ("ful" to "less") out of a (by construction) 1000+ character prompt.

and the outputs are not at all similar.

Just realized I've never tried the "you are a helpless ass" prompt. Again a very minor change in wording, just dropping a few letters. The helpless assistant at least output text apologizing for being so bad at the task.


Sure. What did you expect? You changed the semantics of your prompt to the complete opposite. Of course it will attempt to make sense of it to the best of its ability, and deliver what you requested. The input isn't formally specified; that's inherent to the domain, not the model or a human. GP, on the other hand, is talking about semantically negligible differences like typos.

I still vaguely remember how difficult man pages were to understand when I first started reading them. I'm pretty sure the biggest obstacle is the fact that most documentation is written for people who already know the standard computer science terminology. I have a generally negative opinion of LLMs, but one thing they do very well is function as a "reverse dictionary". You can input an idiosyncratic description of something you want and get the standard terminology. This is a new and valuable capability.
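
For example (a minimal sketch using the openai Python client; the model name and the phrasing are arbitrary choices, not a recommendation):

```
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "What's the standard computer-science term for "
                   "'a function that remembers answers it already "
                   "computed so it doesn't redo the work'?",
    }],
)
print(resp.choices[0].message.content)  # expect: memoization
```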

There is a universe out there, where most of the world is reading Solaris man pages, instead of Linux man pages. Whatever your thoughts on the Solaris OS, I think it is fair to say that no operating system has ever matched the quality of its man pages.

Interestingly, I also converged on the "reverse dictionary" usage of LLMs, in around 2024[1], mostly to indulge in (human) language-learning.

An excerpt from the post below:

> It is a phenomenal reverse dictionary (i.e. which English words mean "of a specific but unspecified character, quality, or degree"). It not only works for English, but also for Esperanto (i.e. which Esperanto words mean "of a specific but unspecified character, quality, or degree"), as well as my own obscure native language. This is a huge time-saver when learning languages (normal dictionaries won't cut it, and bi-lingual dictionaries are limited, if they are available at all). Even if you are just using a language you are fluent in, a reverse-dictionary-prompt can help you find words and usages, and can also help you find "dark spots" in the language's lexicon.

[1]: https://galacticbeyond.com/chat-room-dispatches-intelligence...


I've commented on this subject before, but the fact of the matter is that kids getting into high tech and programming mostly don't read books anymore. How do I know? Recently I was hanging out with a bunch of high school students who asked me how I learned. I said it was mostly via books and man pages. "Yeah, don't sleep on high quality written material. O'Reilly. Wiley. Addison-Wesley. Manning. MIT. No Starch Press. &c..."

Well. You should have seen the look on their faces. I might as well have morphed into the Steve Buscemi meme "How do you do, fellow kids?" They looked at me like I was a total relic or greybeard and said things like "Nah, nobody reads tech books anymore; I learned Typescript from YouTube videos."


Already in 2008, as a millennial teen without internet at home, I was learning C# and XNA without a single book, just tutorials and official docs I downloaded from the library alongside Visual Studio Express. I couldn't have afforded books on it anyway, but I can't imagine teens in 2026 using anything other than Youtube and some tutorials to learn this stuff.

I learned programming from tutorials :) Only after I kept encountering terms in tutorials (long after I was building (badly organized) programs) that I didn't understand well did I decide to read my first book, K&R's C. This was when animated gifs were a novelty not worth the data transfer time.

I think every generation feels like their way of learning was the best, but we all make it work. There was a time when the architects of systems directly tutored programmers on how to write programs.


That has been the case for a decade

> most documentation is written for people who already know the standard computer science terminology

Not really. In some cases it's probably complexity for the sake of it. Documentation is also frequently ambiguous, and I'm really not sure why: it looks like some developers lack basic logic (?!).


This is the best use case for LLMs, and the one I use them for the most.

Sounds similar to the real-life case of ritonavir:

https://en.wikipedia.org/wiki/Ritonavir#Polymorphism_and_tem...


Isn't that somewhat similar to prions? I mean I know they're different things but one triggering the other to change shape? Don't know if prions also fall in some sort of lower energy well.

Veritasium just did a video on this.

https://youtu.be/ksn5yrsC3Wg


Also less recently Asianometry: https://youtu.be/_xPhxtuA_Qc

Repetitive patterns in code are called "idiomatic" and considered a good thing. Repetitive patterns in writing are just bad writing.

exactly

>I find them all tactily unpleasant

Unlike rubber dome keyboards which trigger at the bottom of the stroke, mechanical keyboards trigger mid-stroke. You don't have to bottom out the keys, which reduces shock loading of your fingers. If you actually want to bottom out the keys, you can approximate the rubber dome feeling by using a linear keyswitch modded with a soft o-ring around the stem to cushion the impact.

