
Someone claimed they used fiber-optic drones [1]. So perhaps the drones were connected to the trucks via optical fibers, and the trucks carried the modems. That way, jamming over the airbases would have had no effect.

[1] https://nitter.net/bayraktar_1love/status/192915556386414634...


> you're telling me it's possible to be functional enough to eat and sleep for a week, but not know that your wife is dead and the barking dog needs food and water

Absolutely. In the late stages of Alzheimer’s you’re a vegetable, but basic bodily functions still work to some degree.


Well, that's unfortunate. I still don't understand why she wouldn't have sought medical attention as her own condition worsened, but I suppose it's at least conceivable.


Hantavirus Pulmonary Syndrome starts with flu-like symptoms that are easy to write off as not requiring medical attention. Once symptoms worsen, it can be fatal in mere hours.


Common sense means taking the less risky option.

Assuming that the survival rate is constant as a function of trip length, the empirical estimate of the risk is fully determined by the number of deaths per unit of distance traveled.
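As a sketch of that estimate (the per-mile rates below are rough, assumed US figures, purely for illustration):

    # Under the constant-rate assumption, risk scales linearly with distance.
    # Rates are rough, assumed figures for illustration only.
    DEATHS_PER_MILE = {
        "car": 1.3 / 1e8,       # roughly 1.3 deaths per 100M vehicle-miles
        "airline": 0.01 / 1e8,  # well under 0.1 per 100M passenger-miles
    }

    def trip_risk(mode, miles):
        return DEATHS_PER_MILE[mode] * miles

    print(trip_risk("car", 500))      # ~6.5e-06
    print(trip_risk("airline", 500))  # ~5e-08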


In this case, they’re not fully determined by distance.

Airline travel is much riskier during the takeoff and landing phases. A series of 200-mile flights is riskier than one 3,500-mile transcontinental trip.

Car travel is much safer on controlled-access highways than on 40 mph surface streets.
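A minimal sketch of the airline point above (the per-flight and per-mile rates are invented to show the shape of the model, not real figures):

    # If risk concentrates in takeoff and landing, total risk tracks the
    # number of flights more than the distance flown. Rates are invented.
    PER_FLIGHT = 1e-7   # assumed risk per takeoff/landing cycle
    PER_MILE = 1e-10    # assumed residual cruise risk per mile

    def flight_risk(n_flights, total_miles):
        return n_flights * PER_FLIGHT + total_miles * PER_MILE

    print(flight_risk(17, 3400))  # seventeen 200-mile hops: ~2.0e-06
    print(flight_risk(1, 3500))   # one transcontinental trip: ~4.5e-07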


> Assuming that the survival rate is constant as a function of trip length

That's an assumption that doesn't hold up; at least, that's the argument I was trying to make above.

Generalizing so much data down to a single statistic makes the number useless. Almost all context is lost; the only context kept is the mode of transportation and the distance traveled. That isn't enough to help anyone make a decision about any particular trip.


I’m not a vegetarian or animal rights advocate, but an argument could be made that cruelty to non-human animals is similarly normalized in today’s world, perhaps seen as a necessary evil to satisfy our appetite for food or fashion items.


“Cruelty” is the default state of things. It is not a behavior we lower ourselves to, it is who we are. What we must do is rise above it.

> The Patrician took a sip of his beer. ‘I have told this to few people, gentlemen, and I suspect never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I’m sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged on to a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of its roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature’s wonders, gentlemen: mother and children dining upon mother and children. And that’s when I first learned about evil. It is built into the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior.’

- Terry Pratchett, Unseen Academicals


Cruelty isn't common in the animal world, but it is very common in humans. Some animals play with their food, but they're not doing it because they get joy from inflicting pain, like we do.

As for "an otter eating a fish alive": that's not cruel or evil, that's survival. Weird quote, man.


Yes, people and their inventions belong to the Malign Realm and are completely different from animals and nature, which belong to the Wholesome Realm.


Joy is a release of serotonin. I can promise you that orcas toying with their prey are getting huge releases of it.

And just to give another data point that some animals chase joy: there are certain types of dolphins and monkeys that use animal toxins to get high.

Time for you to head back to school, man.


Bro, animals can be vengeful. See:

https://www.fox26houston.com/news/elephant-kills-indian-woma...

That, and cats. Cats can be very vengeful creatures.


Assuming a bedroom with a surface area of 50 m^2, an insulation R-value of 2 m^2·K/W, and an outside temperature of 30 C, 300 W would be required to keep the room at 18 C. Cooling down 8 billion such bedrooms would require 2.4 TW, which represents approximately a 10% increase in global power consumption.
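The arithmetic, spelled out (note this is the raw heat flow; an air conditioner with a typical COP of around 3 would draw roughly a third of it electrically):

    # Steady-state conduction through the envelope: Q = A * dT / R
    area = 50            # m^2, assumed envelope surface area
    r_value = 2.0        # m^2*K/W
    delta_t = 30 - 18    # K

    q_watts = area * delta_t / r_value   # 300 W per bedroom
    total_tw = q_watts * 8e9 / 1e12      # 2.4 TW for 8 billion bedrooms
    print(q_watts, total_tw)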

Certainly a lot, but it doesn't seem "catastrophic", and it's realistic with today's technology.


Do the cows know they are herbivores? Why would the fact that they don’t predate on other animals in the wild make it “abhorrent” to feed them animal products in captivity?


> Do the cows know they are herbivores?

How can we really know what cows know? What we do know is that they follow a herbivorous diet when possible. Their teeth and digestive system are optimized for consuming plants.

> Why would the fact that they don’t predate on other animals in the wild make it “abhorrent” to feed them animal products in captivity?

The reason you said.


TL;DR: no evidence is presented for automated terminal attack capabilities. Even a heat-seeking missile is more automated than these drones, which simply maintain course.


It did look like there was some sort of object-tracking indicator in the video feed, AFAICT, but it could easily have been added in post-production.

IMO you can see RF jamming of the FPV feed as the drone approaches the tank, which is quite interesting. That's probably what necessitates the automated terminal attacks.

IMO it doesn't matter how sophisticated the tech is; if it works with a meaningful success rate, then it's quite an effective weapon against a multi-million-ruble tank.


You’re right! The trapped ion approach (IonQ) is the most promising direction toward scalable quantum computing. Superconducting qubits — such as those used by IBM and Google — require extreme cooling while ions can be trapped at room temperature. Superconducting qubits are also plagued by substrate imperfections, while trapped ions — being “nature’s qubits” — are absolutely identical in their quantum mechanical properties. This allows trapped ion quantum computers to realize the best demonstrated gate fidelities.


Is there any evidence that Ozempic causes more muscle loss than what is to be expected from caloric restriction? I haven’t seen any.


You’re confused about what “statistical parrot” means and you don’t seem to understand the difference between an optimization objective and the resulting model.

The term “parrot” is used to imply inference by something akin to a look-up table; specifically, it is used to indicate poor out-of-sample performance and the lack of a proper world model. The optimization objective is irrelevant when determining the generalization performance of a model and when judging whether it can reason beyond looking up answers in a table.

As the user above noted, it is now quite well established that GPT-4 has impressive out-of-sample performance, which can be explained by it possessing an actual model of the world rather than being a “parrot”.


> it is now quite well established that GPT-4 has impressive out-of-sample performance

Err... I can show this is false, kinda trivially. People who engage in prompt confirmation bias aren't aware of what the in-sample set is.

It's basically everything ever digitised: you can ask it for anything from the first paragraph of every Dickens novel to the average petal length of an iris flower -- etc.

How are you measuring the in-sample set here?

If you engage in straightforward reasoning from first principles, and are basically aware of what the training data is, you can show critical failures of generalisation in 10 seconds.

If you want a recipe: go find some fringe API docs. Establish that it has been trained on them. Then, since they're fringe, there won't be much code on GitHub, etc. Now ask it to do something non-trivial with that API. It will fail, and the mechanism will be obvious: it'll jam in correlated code that lacks relevance.

Do the same on a popular API, and see it succeed.

The in-sample set will be obvious for both, and so will the boundary of generalisation.


You can make it invent a new language: https://maximumeffort.substack.com/p/i-taught-chatgpt-to-inv...

I am sure you will continue to argue that this is still in line with everything-that's-ever-written prediction, but my opinion is that at that point it's a meaningless distinction. The human brain is also just a machine.


So I was with a financial researcher recently, and he wanted to use ChatGPT to summarise some reference financial data -- and it did so, actually correctly.

Being sceptical, as everyone ought to be in these matters, I changed the financial data and performed the same analysis (both in a new tab and within the same convo). The results were the same!

How strange?

Well, being reference financial data, what ChatGPT reported were prior reference summaries of it. When that data was changed, it reported the very same reference summaries (which were now wrong).

Since it's incapable of actually summarising financial data, it's only capable of selecting combinations of pieces of its training set.

Now, is this distinction "meaningless"?

No, it's the difference between this guy being fired for causing a massive loss on a major project, and this guy keeping his job and doing it well.


>Since it's incapable of actually summarising financial data, it's only capable of selecting combinations of pieces of its training set.

That's the third completely off-base misconception from you today.

This is not at all what it is doing. "Supercharged interpolation" is false and makes no sense. It's not a lookup table either; it doesn't memorize enough of its training data to make your assertion possible.

https://arxiv.org/abs/2110.09485


at 500gb, you can store nearly everything ever written -- let alone compressed.

all statistical learning is a variation on k-nn (see the relevant paper on this) but likewise this is obvious a priori

k-nn is the ideal learner, and a good starting point for analysis

the question for any given system is: what is the learning space, what is the distance function, and how many points are being considered

NNs set up a compressed (X, y) space, choose points in that space via an empirical expectation, and obtain a weighted average as their prediction

That's just what they do -- there isn't any other mechanism here. The whole formal structure of the NN can be written down on a page of paper
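a minimal sketch of that mechanism, as a toy distance-weighted k-nn regressor (illustrating the framing above, not a claim about transformer internals):

    import numpy as np

    # distance-weighted k-nn regression: the prediction is a weighted
    # average of stored (X, y) points, with weights falling with distance
    def knn_predict(X, y, query, k=5):
        d = np.linalg.norm(X - query, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        return np.dot(w, y[idx]) / w.sum()

    X = np.random.randn(100, 4)   # toy "latent space" points
    y = X.sum(axis=1)             # toy targets
    print(knn_predict(X, y, np.zeros(4)))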

your paper above doesn't deal with this -- it's a reply to the 'forced interpolation' view, which i haven't espoused. but NNs are often forced to interpolate

'extrapolation' is of course a part of the possible predictive output of a statistical learning system -- in that its latent space is taken to be embedded in R^n, and so one can 'veer off' into R.

Whenever you attribute a higher fidelity space to a small latent space you are, in effect, extrapolating


>at 500gb, you can store nearly everything ever written -- let alone compressed.

No, you cannot.

>That's just what they do -- there isn't any other mechanism here.

That's not what they do. There are many papers now showing ICL performing some kind of optimization during inference, which would not be happening if all they did was retrieval.

I've come to realize you don't know what you're talking about. Your level of denial is scary to see.


just do the calculation yourself: how many books is 500gb at, say, a few bits per character?

more than all ever written -- and so on
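a rough version of that calculation (characters per book and compression ratio are guesses):

    # assumed: ~500k characters per book, ~2 bits/char with strong compression
    bits_total = 500e9 * 8
    chars = bits_total / 2
    books = chars / 500_000
    print(f"{books:,.0f} books")  # ~4,000,000 books at these figures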

perhaps apply a single drop of scepticism to this credulity

or even just ask chatgpt to repeat the first paragraph of some book -- say, a dickens novel


>how many books is 500gb at, say, a few bits per character?

In what reality does a character take up only a few bits? It certainly isn't this one.

Your denial is so weird. Why does ICL use higher-order optimization methods if it's just a lookup table? https://arxiv.org/abs/2310.17086

How do you explain small language models that take up much less space than the text they were trained on, even with your nonsensical calculations?

LLMs are not lookup tables, and there's plenty of evidence to support that. You look insane insisting they are.

>or even just ask chatgpt to repeat the first paragraph of some book -- say, a dickens novel

Not only will this not work for the vast majority of books it sees during training, but why only the first paragraph? If you're insisting they memorize everything, why can't it repeat the whole page? The whole chapter? The whole book?


I actually did that: I asked for the first paragraph of "Great Expectations", and the answer was correct.

Of course, the first paragraph of any famous novel is itself famous, so I also asked for the first paragraph of Chapter XL of the same novel. Interestingly, the first 12 words were correct, but the rest was hallucination.

Original: It was fortunate for me that I had to take precautions to ensure (so far as I could) the safety of my dreaded visitor; for, this thought pressing on me when I awoke, held other thoughts in a confused concourse at a distance.

ChatGPT 3.5: "It was fortunate for me that I had to take precautions to ensure the safety of my treasure, for with such a boy as Compeyson on the loose, I was not likely to find much mercy at his hands. I had not told Herbert about the money, and I was glad to have the chance to count it before he came back. I found that I had still twenty-one sovereigns, six shillings, and sixpence left."


>Since it's incapable of actually summarising financial data

It's not, though. It is in fact able to summarize financial data, just as it's able to write code and diagnose a medical condition. It makes mistakes, yes, even grave ones, much more so than experts in those fields would.


It isn't making mistakes... it's never actually doing it.

Do you see a difference between the process of adding numbers and dividing by their count (taking a mean) and emitting numeric tokens which are most probable for a given input?

The former is called "taking a mean"; the latter isn't. This system never engages in any method to summarise financial data. Its method is always the same: to emit the tokens most probable given a set of historical tokens.

It's the difference between saying the average of 1, 2, 3 is 2 because that sentence occurs 1,000,000 times, and saying it's 2 because you've literally computed it.

This system does not run financial summary algorithms. It's a trick.
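A toy contrast, purely illustrative:

    from statistics import mean
    from collections import Counter

    # Taking a mean: an algorithm that works on any input.
    print(mean([1, 2, 3]))  # 2, by actual arithmetic

    # Emitting the historically most probable answer: only works if the
    # answer already appears, frequently, in the training text.
    corpus = ["the average of 1,2,3 is 2"] * 1_000_000
    answer = Counter(s.split()[-1] for s in corpus).most_common(1)[0][0]
    print(answer)  # "2" because that string was frequent, not computed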


To add to your point: try asking ChatGPT to do basic arithmetic on numbers it hasn't seen before. You'll see just how good it is at computation.


GPT-4 is better at this than you could manage without an external tool or pad, and that's after being severely hampered by tokenization. https://arxiv.org/abs/2310.02989


The brain is a machine; the issue is the difference between two claims:

1. LLMs are enough to be a brain.

2. LLMs are not enough to be a brain.


But “everything ever digitised” includes a tonne of linguistic information - it's still in-sample.


That out-of-sample performance is a mirage.

Yes, it’s impressive. Yes, it’s got amazing zero-shot performance in some domains.

But there’s a pattern of failure in production which describes a limit that shouldn’t exist if the emergent properties were stable.

You can build this right now and test it.

Build a sequence of agents to work on a domain you are not an expert in.

Let them loose. See what happens.

Do the same thing on a domain you have expertise in.

Assume that the number of errors you find and the number of modifications you have to make are representative of the other domains as well.


I'd put it this way: characterizing the reliability of out-of-sample performance a priori is impossible, but that doesn't mean it automatically fails.

There may be a subtle correlation between properties needed to answer a specific out-of-sample request and in-sample features.

Unfortunately, prior to training/testing, and without recognizing that correlation in the data set, I believe it's impossible to guarantee the model will include it. (Corrections welcome.)


In essence: “You can't know in advance how well the model can approximate semantic patterns.”

So claiming that out-of-sample performance is a mirage would be a bridge too far?


Maybe "a mirage that might actually be true"? Which is a terrible thing to rely on! Unless it's usually true?


That measurement is the core of my current tasks. If you don’t know the error rate, then what are you doing?


Delivering what some executive promised when they told investors 'the company is using AI.' /s


A virtual beer/poison of choice to you and mjburgess in this thread.

