
> (The) Output was coherent but its ‘style’ was very boring and overtly inoffensive, which was (and still is) a clear limitation of the technology.

The style isn’t a limit of the technology, it’s a limit of the lobotomized models from OpenAI and Anthropic. The open source community has lots of models that are great at creative writing.


> The open source community has lots of models that are great at creative writing.

would you mind sharing some examples please?


I’m a frequent flier and have flown into all of the above. The ones with TSA issues have been perpetually mismanaged under the best of conditions; it’s not even remotely surprising that they are having issues under pressure.

I have ADS-B in my airplane and can see everything on the ground on a pretty map as if it were literally a video game. I can see landing aircraft in realtime while holding short or crossing a runway. The emergency responder should have had it in their fire truck.

The technology already exists. The problem has already been solved with an iPad and a $200 receiver. Almost certainly some BS regulation or rule was at least partially responsible here.


Information overload is a thing, and there are a lot of ground vehicles at a place like LGA.

Consider that if you have access to all the local ADS-B data you can project paths forward through 3D space for the next, say, 30 seconds or so. Using GPS you can determine your own position in 3D space. At that point it's trivial (and I'm not handwaving here, it is literally extremely trivial) to filter projected paths based on passing close enough to your own in 3D space (ie accounting for altitude). Stick that on a tablet and require it to be present in all vehicles that operate on the tarmac.

It wouldn't need to work 100% of the time because you'd still be required to contact ATC. The only requirement is that it have a reasonably high chance of alerting drivers to potential mistakes before they happen.
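To show how trivial, here's a minimal sketch of that projection-and-filter step (the coordinate tuples, the 30-second horizon, and the 150 m threshold are all illustrative assumptions, not any real system):

```python
# Sketch of the "project paths forward and filter on closest approach" idea.
# Positions are (x, y, z) in meters, velocities in meters/second; straight-line
# projection over a short horizon, which is all a tarmac alert needs.

def closest_approach(p_own, v_own, p_tgt, v_tgt, horizon_s=30.0):
    """Return (t_min, d_min): the time within the horizon at which the two
    straight-line paths are closest, and the separation at that time."""
    # Work in the relative frame: target position/velocity minus our own.
    rp = [t - o for t, o in zip(p_tgt, p_own)]
    rv = [t - o for t, o in zip(v_tgt, v_own)]
    rv2 = sum(c * c for c in rv)
    if rv2 < 1e-9:
        # Near-zero relative velocity: separation is constant over the horizon.
        t_min = 0.0
    else:
        # Unconstrained minimum of |rp + t*rv|, clamped into [0, horizon].
        t_min = max(0.0, min(horizon_s, -sum(p * v for p, v in zip(rp, rv)) / rv2))
    d_min = sum((p + v * t_min) ** 2 for p, v in zip(rp, rv)) ** 0.5
    return t_min, d_min

def is_conflict(p_own, v_own, p_tgt, v_tgt, threshold_m=150.0):
    """Alert if the projected paths pass within the (assumed) threshold."""
    _, d = closest_approach(p_own, v_own, p_tgt, v_tgt)
    return d < threshold_m
```

A tablet app would run this against every local ADS-B track once a second and surface only the handful that come back `True`.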

Which is to say this incident was trivially preventable had anyone with authority over these sorts of things cared to bother.


> for the next, say, 30 seconds or so

But this is a hand-wave.

This is a situation where both vehicles got explicit permission from someone who's supposed to know what they're doing. These sorts of runway crossings aren't unusual - and this one was responding to an emergency - and at a place like LGA there's always gonna be a plane on approach.

The difference between "hold short at runway 22" and being on runway 22 is much less than 30 seconds in some cases.


A clearance from ATC means you can land, not that you must land or that it is safe to land. The PIC still has the ultimate choice. It's common practice in the US to issue landing clearances even when another plane is on the runway, or when there are two landing planes ahead that also hold landing clearances; if that weren't done you would be waiting far longer at the airport.

It's obviously the right choice to give the PIC the information via avionics in a graphically concise way that highlights this potential runway contention because it is real and pilots are expected to adjust their speed to maintain the right sequence.

When that isn't possible, which does happen (e.g., a plane ahead is slow to clear the runway or to take off), pilots are expected and required to execute a go-around.


If ATC says you're clear to cross the runway and then you glance down and the screen shows a plane projected to cross directly in front of you in 10 seconds you'd probably think twice, right? This hypothetical cheap appliance has GPS and a compass and probably even a camera feed facing forward. It isn't a difficult technical problem to calculate the time offset at which the object traveling along the crosscutting path will pass in front of you.

> The difference between "hold short at runway 22" and being on runway 22 is much less than 30 seconds in some cases.

What is the typical minimum temporal separation? I would have expected at least 45 or 60 seconds given the cost of a plane and the imminent threat to life.


So, it does appear I was correct here - it's a difficult thing to solve.

https://www.nbcnews.com/news/us-news/investigators-search-an...

> Jennifer Homendy, chairwoman of the National Transportation Safety Board, said at a news conference Tuesday that the airport uses a safety system called ASDE-X to track surface movements of aircraft and vehicles.

> "ASDE-X did not generate an alert due to the close proximity of vehicles merging and unmerging near the runway, resulting in the inability to create a track of high confidence,” Homendy read from an analysis of the system’s performance.


I think the most generous interpretation of using 'all' ADS-B data (including things on the ground) would be to have VR and have boxes for all objects, à la the F-35 helmet:

* https://www.radiantvisionsystems.com/blog/worlds-most-advanc...

Not sure if you can do something on a 'simple' HUD that many planes have, so you could see objects in your flight path.


The entire point is that there's no need for someone driving a ground vehicle to see all the ADS-B data. They only need to know if and when a plane is projected to cross the direction in which they're facing. It might also be useful to know the projected speed as well as how far in front of your vehicle it will pass (but you can presumably figure the latter out on your own because, y'know, the runway).

These things don’t have Flash Attention, or have a really hacked-together version of it. Is it viable for a hobby? Sure. Is it viable for a serious workload with all the optimizations, CUDA, etc.? Not really.

It will work fine, but the performance isn’t necessarily insane. I can run a q4 of gpt-oss-120b on my Epyc Milan box, which has similar specs, and get something like 30-50 tok/sec by splitting it across RAM and GPU.

The thing that’s less useful is the 64G VRAM/128G system RAM config: even the large MoE models only keep roughly 20B parameters active at once, so the rest of the VRAM is essentially wasted (mixing experts between VRAM and system RAM has basically no performance benefit).


Could you share what you are using for inference and how you are running it? I have a 64G VRAM/128G system RAM setup.

Most people are using something from the llama.cpp family for inference; llama-server is my go-to. The Unsloth guides describe how to configure inference for your model of choice.
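For a 64G VRAM/128G RAM split, a hedged sketch of the usual llama.cpp incantation (flag spellings vary by build and the model filename is a placeholder; check `llama-server --help` on your version):

```shell
# -ngl 99 offloads every layer to the GPU, then --override-tensor pins the
# MoE expert weights back into system RAM, keeping attention and router
# tensors in VRAM where they matter most.
llama-server \
  -m gpt-oss-120b-Q4_K_M.gguf \
  -ngl 99 \
  --override-tensor "ffn_.*_exps.=CPU" \
  -c 16384 \
  --port 8080
```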

Splitting between RAM and GPU impacts it more than you think. I would be surprised if the red box doesn’t outperform you by 2-3x on both PP and TG.

Yeah I've got the q4 gpt-oss-120b running at ~40-60 tokens per second on an M5 Pro.

Delta is still stubbornly refusing to adopt Starlink.

I've got status with them and have started booking with other airlines, because it doesn't matter how nice the seats are if you can't get any work done. Most airline revenue comes from business flights; I don't think they realize how important this is to their customer base.


Airplanes are one of the places I feel happily disconnected from being online. Never used in-flight WiFi even when my company would have paid for it.

Delta uses Viasat and has been rolling out free wi-fi on more and more of their planes. Is it not usable?

It’s pretty good, but the latency is inherently high since Viasat is in GEO.

~600ms ping times vs ~40ms on Starlink
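Those numbers are roughly what orbital geometry predicts. A quick sanity check, assuming ~35,786 km altitude for GEO and ~550 km for a typical Starlink shell:

```python
# Speed-of-light floor on ping times; real latency adds ground-segment
# routing and processing overhead on top of this.
C_KM_S = 299_792                  # speed of light, km/s
GEO_ALT, LEO_ALT = 35_786, 550    # km: Viasat GEO vs a typical Starlink shell

def min_rtt_ms(alt_km):
    # A ping traverses up + down to the satellite twice (request and reply).
    return 4 * alt_km / C_KM_S * 1000

print(round(min_rtt_ms(GEO_ALT)))  # ~477 ms floor, so ~600 ms observed is plausible
print(round(min_rtt_ms(LEO_ALT)))  # ~7 ms floor; routing gets you to ~40 ms
```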

It's probably what the UA CEO was talking about, trying to get competitors to sign contracts with other providers. Viasat was hot stuff for a long time, wouldn't be surprised if there's a noncompete preventing the change.

Delta has had high-speed internet for years on their flights. I’m Platinum Medallion.

Starlink failed its Delta demo.

The article is online.


> They were thus more likely to be in an economic position where they felt comfortable having another child.

Just a reminder that if you pull up a chart of the countries with the highest birth rates, they all have poor economic conditions. If the theory were that a better economy correlates with more babies, then countries like South Korea would have the world's highest birth rate.


There is evidence that improved economic conditions and flexible work arrangements increase fertility in meaningful amounts. Is it enough to achieve 2.1 replacement rate? It isn't, but the evidence is robust that wealthier people do have higher fertility in some circumstances.

TLDR Fertility declines as countries urbanize, income rises, and women are educated and empowered to make more affirmed fertility choices, but also slightly increases when prospective parents feel economically secure enough to have a child, or more children (within some intent or desire band).

Higher incomes are increasingly associated with higher fertility: Evidence from the Netherlands, 2008–2022 - https://www.demographic-research.org/articles/volume/51/26 | https://doi.org/10.4054/DemRes.2024.51.26 - October 8th, 2024

More Babies For the Rich? The Relationship Between Status and Children Is Changing - https://ifstudies.org/blog/more-babies-for-the-rich-the-rela... - March 18th, 2024

> Yet the trend at the aggregate level of the whole country disguises trends that are emerging among individuals in these countries. In my new book, coauthored with Martin Fieder and Susanne Huber, Not So Weird After All: The Changing Relationship between Status and Fertility, we document that while in much of the twentieth century it was poor people in countries such as the United States who had more children than richer people, there is a new emerging trend where better-off men and women are more likely to have children than less well-off men and women.


I suspect that if you have a situation where people's "situation" (to use the word twice) doesn't change much from year to year but they feel they're slipping behind/worse (e.g, inflation but everything else is basically the same) - you'll find a decline in the birthrate.

If you have a situation where suddenly your life improves noticeably, birthrates will rise - even if the first group is always better off than the second. It's relative.

So WFH may have contributed to a birth rate rise simply because people felt more secure and more in control (or better) than they did before.


Strongly agree, the sentiment must be of a longer term improvement, not a temporary one. “Applied hope” if you will. The hope must be "sticky."

Correlation is not causation. A huge part of this high birth rate is that in these poor countries people live in rural areas, farming or doing similar work; this is essentially “working from home.” They can have many children because they stay close to their families, and having more children is also a survival strategy.

> A lot of us strongly push against these types of measures not because we have anything to hide nor because we are on the side of the criminals.

I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.

You can look to parts of SE Asia or the Middle East to see some examples where that happened, and where it was eventually reined in with extreme measures (usually broad and indiscriminate capital punishment).

I know your comment is about fixing failure modes in the legal system, and I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty, but what happens when the entire system fails due to misplaced idealism? Much worse things are waiting on the other end of the spectrum when people don’t feel like the government is adequately protecting them.


I think a practical argument against what you're saying here is simply that solving the mad max stuff doesn't require anything at all like this. The type of crime that's scary and impactful (e.g. terrorism is scary, but so extremely rare that it can't really be considered impactful) is generally trivial to bust.

Are you of the opinion that peoples' default state is a Mad Max-like existence?

The question isn't about idealism or the realistic possibility of said idealism. The question, in my opinion, is whether we can only succeed as a species if a small number of people are entrusted with creating and enforcing laws by force when necessary.

That isn't to say we never need some level of hierarchy, or that laws, social norms, etc. aren't important. It's to say that we need to keep a tight rein on it and only push authority and enforcement up the ladder when absolutely necessary.

It will end poorly if we continue down the road of larger and larger governments under the fear of Mad Max, and this idea many people have that "someone has to be in charge."


>I had this view as well until I realized it’s predicated on living in a high trust society.

The erosion of these high-trust conditions has been the consequence of active policies. You don't just miss trends and correlations like these. Not to this extent.


The Mad Max stuff is occurring at scale more because of unchecked governments, and governments that don't work for society, than because of insufficient surveillance.

>I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.

I mostly see "high trust society" used as a weird racist dogwhistle, but feel free to disabuse me of that notion.

I live in an extremely high-crime area, because cops abuse the law to keep their numbers up. If someone checked, they would see that my local McDonald's car park is one of the biggest crime hotspots in the country, thanks to administrative detections made on minor drug deals there.

It just so happens that my area is also where the government dumps migrants, refugees, and poor people. It's also the case that they test welfare changes here.

I haven't had a single incident here in 6 years. We often forget to lock our doors. My wife takes my toddler walking around the neighborhood at night. I wave hello to the guy across the road who I have like 99% certainty is dealing drugs (Or just has a lot of friends with nice cars who visit to see how long it has been since he trimmed his lawn).

That said, if you turn on the tv 2 things are apparently happening. 1. We are under attack by hordes of immigrants tearing the country apart. 2. We are under attack by kids on ebikes mowing kids down in a rampage of terror.

Politicians, in order to be seen to be doing things, bring laws in to counter these threats. People bash their chests and demand more be done.

But the issue is that it's just not happening. My suburb is great. The people are generally lovely, even those in meth-related occupations.

When you complain about the trustworthiness of the society, consider that your lack of trust might actually be the problem. Nothing is necessarily going to break down because you didn't make your neighbor's life worse by supporting another dumb-as-shit law. "Oh no, crime is so rampant": buddy, you need to get over yourself. Societies don't fail because of socially defined crime; they fail because people prioritise their perceived safety over everyone's freedom.

> I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty

Exactly what you are defending.

>what happens when the entire system fails due to misplaced idealism?

It's at threat from the idealism that you can just pass one more law to fix society.

>don’t feel like the government is adequately protecting them.

They come up with a bunch of dumbshit laws like the OP's. That's the result.


Re: "high trust society" generally means people are pointing at some implicit, unwritten structures that stop something from happening.

Collective notions of shame, actual networks of friends and families that reinforce correct behaviour or issue corrections.

Think about simply how credit networks form and function. And why visiting a food truck or medieval travelling doctor for your vial of ointment is different from buying special products from a brick and mortar establishment.

Basically if you or the network has a harder time back propagating defaults and bad credit in a way that prevents future bad outcomes then that is a loss of high trust.

This isn't about race, really, unless you are operating at the level of some biological or genetic connection to behaviour. But that is a pretty strange place to be, as there are a whole host of confounding factors that are much more obvious and believable, and I seriously doubt that even a motivated racist could ever credibly run empirical studies showing causal links between any given genetic population cluster and emergent societal behaviour. These are such high-dimensional systems that it seems insane to even think one could measure this effect.

The invisible substrate is the society unfortunately ... And we are all bad at writing it down and measuring it.


It seems to me that "society" isn't anything but a stick to beat against one's hobby horse. "Society is bad because of the thing that happened to me; save society by changing things my way!" etc. Whereas really, if you turn off the TV and go to the shops, it's fine.

  > until I realized it’s predicated on living in a high trust society.
I don't think it's predicated on that. It's based on low trust of authority. Not necessarily even current authority. And low trust of authority is not equivalent to high trust in... honestly anything else.

  > You can look to parts of SE Asia or the Middle East to see some examples where that happened
These are regions known for high levels of authoritarianism, not democracy, not anarchy (I'm not advocating for anarchy btw). These regions often have both high levels of authoritarianism AND low levels of trust. Though places like China, Japan, Korea etc have high authoritarianism and high trust (China obviously much more than the other two).

  > but what happens when the entire system fails due to misplaced idealism?
It's a good question and you're right that the results aren't great. But I don't think it's as bad as the failure modes of high authoritarian countries.

High authority + low trust + abuse gives you situations like we've seen in Russia, Iran, North Korea. These are pretty bad. The people have no faith in their governments and the governments are centered around enriching a few.

High authority + high trust + abuse is probably even worse though. That's how you get countries like Nazi Germany (and cults). The government is still centered around enriching a few, but they create more stability by narrowing the targeting, or rather by having a clearer scale where everyone isn't abused equally. (See the famous quote by a famous US president about keeping the poor white population in check by convincing them that at least they're not black.)

None of the outcomes are good but I think the authoritarian ones are much worse.

  > when people don’t feel like the government is adequately protecting them.
But this is also different from what I'm talking about. You can have my framework and trust your government. If you carefully read you'll find that they are not mutually exclusive.

The road to hell is paved with good intentions, right? That implies the road to hell isn't paved only by evil people; it can be paved even by good, well-intentioned ones. Just like what I suggested earlier about programming. We don't intend to create bugs or flaws (at least most of us don't), but they still exist. They still get created even when we're trying our hardest not to create them, right? But being aware that they happen unintentionally helps you make fewer of them, right? I'm suggesting something similar, but about governments.


This and the previous post is well thought out, thank you for the clarity.

"He who gives up a little freedom for security deserves neither"

I never understood this quote. I happily gave up the freedom of driving without a seatbelt for security, what does that say about me?

Exactly nothing because you can release the seat belt yourself.

It's about giving up freedoms you might never get back, because it's not your decision anymore after giving them up.


The quote refers to a Faustian bargain offered by the Penns. They'd bankroll securing a township as long as the township gave up the ability to tax them. The quote points out that by giving up the liberty to tax for short-term protection, the township would ultimately end up with neither the freedom to tax to fund further defense nor long-term security, so it might as well hold onto the ability to tax and figure out the security issue itself.

Moral: don't give up freedoms for temporary gains. It never balances out in the end.


It's become more a shorthand for saying much more. Though the original context differs from how it is used today (common with many idioms).

People do not generally believe a seat belt limits your liberty, but you're not exactly wrong either. Maybe, though, in order to understand what they mean, it's better not to play devil's advocate. So try an example like the NSA's mass surveillance. It was instituted under the pretext of keeping Americans safe: a temporary liberty people were willing to sacrifice for security. But not only did the pretext turn out to be hollow (no WMDs were found...), we were never given that liberty back either, now were we?

That's the meaning, or what people use it to mean. But if you try to tear down any saying, it's not going to be hard to. Natural languages' utility isn't in their precision; it's in their flexibility. If you want precision, well, I for one am not going to take all the time necessary to write this in a formal language like math, and I doubt you'd have the patience for it either (who would?). So let's operate in good faith instead. It's far more convenient and far less taxing.


You don't deserve either.

The issue I have with this quote is that it implies that some people deserve freedom and others do not.

I think a better way to phrase it would be:

> he who gives up a little freedom for a little security ends up with neither


Surprised the US hasn’t set up some kind of DTC-via-loitering drone technology to allow stock, unmodified cell phones to bypass the internet restrictions.

With air superiority they could do it indefinitely. It could be backhauled via Starlink, each one acts as a Stingray-style cell tower and you launch a couple of them over every major city. Would be slow for tens of thousands or millions of users, but quite technically possible. The same technology would also be practical for disaster relief anywhere else in the world.


Doesn't cellular authentication depend on the Ki that's stored on the SIM card and known only to the operator? (GSM only authenticates the SIM to the network; mutual authentication arrived with 3G/4G AKA, which is partly why Stingrays target GSM.) Getting Stingrays to work is easy mode because all they have to do is relay the traffic while capturing the IMEI/IMSI in the process, but if you want to act as a real cell tower, that's much harder.

https://en.wikipedia.org/wiki/SIM_card#Authentication_key_(K...
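A toy illustration of why the Ki matters (the HMAC here is a stand-in for the real A3/A8/COMP128 algorithms, purely for shape; nothing below is the actual GSM crypto):

```python
import hashlib
import hmac
import os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A3: derive the 4-byte signed response from Ki and RAND."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

# Operator side: the HLR/AuC knows Ki, so it can issue a random challenge
# and precompute the expected response.
ki = os.urandom(16)    # provisioned into the SIM at manufacture
rand = os.urandom(16)  # the network's challenge
expected = a3_sres(ki, rand)

# SIM side: computes SRES from its own copy of Ki. A rogue tower that
# doesn't hold Ki can't predict `expected`, so it can relay traffic but
# can't complete authentication as the real network.
assert a3_sres(ki, rand) == expected
```

Ki never crosses the air interface; only RAND and SRES do, which is the whole point.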


I don’t know how many people in Iran have devices with eSIM support but with that you could probably do it. You would need internet access once to activate it but after that you could use the mobile network (and also provide a WiFi hotspot to other people to activate the eSIM).


You could airdrop SIMs to seed the process.

What motive would they have to do that?


Giving Iranian civil society the ability to show the world what the government repression is like, and to organise resistance.


> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.


> The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.


> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.


Idealism of what? That the government shouldn't use AI for surveillance or the military?

You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.


I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not out of any great ethical concern, mind you; simply because the US is a geopolitical rival of China.

