Hacker News | Tuna-Fish's comments

In a lot of places in the world, the marginal cost of electricity is zero, if your capital costs are low enough that you can afford to buy power only when there is excess wind.

The cost of batteries for long-term storage is still prohibitively high. In contrast, large stores of hydrogen (or methanol and other derived products) are relatively cheap.

Those two things put together are pretty much it. There is massive room for additional wind capacity in northern Europe (and solar in North Africa, etc). For constructing that additional capacity to make any sense, there needs to be more demand that can idle ~2/3rds of the time and still make economic sense running the other third. In these conditions, round-trip efficiency is an entirely uninteresting statistic; the capital cost of capacity is what matters.


How strange utility grids are spending on BSS and not hydrogen infrastructure.

How strange utility grids are spending on HVDC transmission and not hydrogen infrastructure.

HN commenters should ring up their local electrical grid operators and set them straight /s

Also, if you have extremely low cost of electricity: you build manufacturing nearby that needs massive amounts of energy, like metal refineries. Or you subsidize electric transport.

You don't pour money into a fuel that is a logistical headache and a half, a fuel that nobody uses, and can only be converted back into electricity with the standard terrible internal combustion / turbine efficiencies.


So, do you have a materials science degree, or are you claiming you know more than the trained experts already tackling these issues IRL?

I'm tired of Internet Experts(tm) announcing how dumb the specialists are for not seeing the Obvious Facts.


> How strange utility grids are spending on BSS and not hydrogen infrastructure.

BSS is usable when you need hours of storage, not when you need days.

> How strange utility grids are spending on HVDC transmission and not hydrogen infrastructure.

HVDC makes sense in certain conditions, but not others. You need to have alternate consumers/producers available that are not correlated with you.

> Also, if you have extremely low cost of electricity: you build manufacturing nearby that needs massive amounts of energy, like metal refineries. Or you subsidize electric transport.

Extremely low costs some of the time. Not low at all average costs. Metal refineries have significant capital costs and shutdown costs. You are not going to profitably operate one if you need to shut it down when the wind calms down, or if you are running it on batteries. The kind of existing industries that can make use of intermittently cheap power have already been scaled up, and we need more to keep building more renewables.

> HN commenters should ring up their local electrical grid operators and set them straight /s

I don't have to, because there are significant pilot projects ongoing.

This is new, and requires a higher initial capital outlay than batteries (which have the significant advantage that it's easy to start with small projects and scale them up), so of course it's moving more slowly. But there are things that hydrogen, and products derived from it, can in principle do that batteries simply cannot, like time-shifting production by 3 months. (Storing it as a gas is not usually the best option, but once you have the gas you can refine it further at very low cost.)

But seriously, you need to consider different metrics for different situations. If your data is from California or Australia, maybe consider that it is not applicable to all of the rest of the world?


The reason he was so skeptical is that at other engine manufacturers, there are generally different teams working on different parts of the engine, and because of Conway's law, the final artifact generally ends up mirroring the organizational boundaries of the company that made it, with cleanly separated parts for every sub-organization visible in the final assembly. One of the things SpaceX is good at is optimization across these kinds of boundaries, integrating hardware in ways that would be difficult for a more traditional organization.

Reducing the capital cost of electrolysis is extremely good, because it makes plants that only produce when electricity is cheap (midday in sunny climes, when wind is blowing in the Nordics) more feasible.
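To make the capex/utilization trade-off concrete, here's a toy levelized-cost sketch. All the numbers are made-up illustrations, and it ignores discounting, opex, and degradation:

```python
# Back-of-envelope: how electrolyser capex and utilization interact.
# All numbers are illustrative assumptions, not real quotes.

def lcoh(capex_per_kw, capacity_factor, elec_price_per_mwh,
         kwh_per_kg=50.0, lifetime_years=20, hours_per_year=8760):
    """Very rough levelized cost of hydrogen, $/kg (no discounting, no opex)."""
    kg_per_kw_lifetime = lifetime_years * hours_per_year * capacity_factor / kwh_per_kg
    capex_share = capex_per_kw / kg_per_kw_lifetime
    energy_share = kwh_per_kg * elec_price_per_mwh / 1000.0
    return capex_share + energy_share

# An expensive plant running flat out on average-priced power...
base = lcoh(capex_per_kw=1000, capacity_factor=0.95, elec_price_per_mwh=50)
# ...vs a cheap plant that only runs ~1/3 of the time on near-free surplus power.
cheap = lcoh(capex_per_kw=300, capacity_factor=0.33, elec_price_per_mwh=5)
print(f"baseload: ${base:.2f}/kg, surplus-only: ${cheap:.2f}/kg")
```

With cheap enough capital, a plant that idles two-thirds of the time on near-free surplus power undercuts a fully utilized plant paying average prices, which is exactly why electrolyzer capex is the lever that matters.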

If this works out at scale (lots of problems can be found between a lab discovery and mass production), this is legitimately a very good thing for renewables.


The potential needed for electrolysis of water is 1.23 V in theory; in practice a bit more is needed to overcome inefficiencies, and 1.7 V is enough.
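Those voltages map directly onto energy per kilogram of hydrogen, since the electrolysis of each H2 molecule transfers two electrons:

```python
# Energy cost of water electrolysis from the cell voltage.
F = 96485.0          # Faraday constant, C/mol
M_H2 = 2.016e-3      # molar mass of H2, kg/mol

def kwh_per_kg(volts):
    joules_per_mol = 2 * F * volts   # 2 electrons per H2 molecule
    return joules_per_mol / M_H2 / 3.6e6

print(kwh_per_kg(1.23))  # thermodynamic minimum, ~33 kWh/kg
print(kwh_per_kg(1.70))  # realistic cell, ~45 kWh/kg (~72% efficient vs. 1.23 V)
```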

That's entirely unrelated to ELF. The frequencies used by Sharp were in the VHF to UHF bands, and the effect just plain doesn't work at the rock bottom of the spectrum used by ELF and VLF.

This highlights a huge problem that ELF faced: most people don't understand this stuff at all and cannot tell the difference. On the other hand, the researchers and the Navy were always very reluctant to go into the specifics of the technology, for military secrecy reasons. Beyond the sensible secret-keeping, this always results in a much larger vague area where people don't want to talk even though nothing serious would be leaked, because the laws are strict and figuring out the exact limits of what's classified is itself fraught.

So when on one side you have people who are chaining together all the even vaguely EMF-related news and discoveries and associating it all with a huge military secret project that no-one wants to talk about, and on the other side you have a bunch of people who actually know what's going on but are unwilling to give straight answers to even relatively simple questions because they are scared of accidentally divulging some classified detail, lots of people drew the frankly understandable conclusion that something rotten was going on.

To put it simply, the kind of massive transmitters used by ELF and VLF projects would not be useful in the bands where the Frey effect works. The most efficient antennas are a half or quarter wavelength long, which for the Frey effect would be somewhere around 10-20 cm (4-8 inches).
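A quick sanity check of those antenna scales (76 Hz is the frequency the US Navy's Project ELF actually used; the other frequencies are representative picks):

```python
# Half-wave antenna length across the bands under discussion.
C = 299_792_458.0  # speed of light, m/s

def half_wave_m(freq_hz):
    return C / freq_hz / 2

for label, f in [("ELF (76 Hz, Project ELF)", 76),
                 ("VLF (20 kHz)", 20e3),
                 ("Frey effect, low end (200 MHz)", 200e6),
                 ("Frey effect, ~1 GHz", 1e9)]:
    print(f"{label}: {half_wave_m(f):.3g} m")
```

The ELF half-wavelength comes out to roughly 2000 km, against ~15 cm at 1 GHz: seven orders of magnitude apart, so the same transmitter cannot plausibly serve both roles.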


Also, ironically given the opposition that 5th-gen communications networks received, 4th-gen stuff operates largely in the relevant bands, while 5th gen moves to higher frequencies.

Both of those, of course, use far too high a frequency signal for it to be meaningfully received via the Frey effect.



and therefore I'm one of those "most people". I do realize ELF & VLF aren't microwaves, but I think it's somewhat related compared to other articles. The point is: someone is blasting constant speech straight into the heads of the population and nobody is willing to do anything about it. It's the 21st century, yet the world we live in has gone back to Dachau methods.

Sometimes the brain’s auditory cortex can 'misfire' and create sounds or voices that feel 100% real, even when there's no outside signal. It’s actually a documented medical phenomenon.

Comparing this to Dachau suggests you’re feeling a massive amount of psychological pressure. Usually, when the mind is under extreme, prolonged stress, it can start to externalize internal thoughts as voices or 'beamed' messages. It might be worth talking to a professional about the distress this is causing you—they might have ways to help mute those signals.


There is no mechanism by which ELF or VLF could be accidentally received without physically large antennas, or very fancy signal processing. The wavelengths are just too long: VLF is 10-100 km and ELF is longer. The slopes are just imperceptible. Imagine trying to notice a wave in water that's 10 km long (and not very tall). You cannot even transmit speech with ELF; it carries information so slowly it's only usable for Morse code or very slow digital signals.
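A rough Shannon-capacity sketch makes the point, even with generously assumed numbers for bandwidth and SNR:

```python
# Why ELF cannot carry speech: channel capacity with a few hertz of bandwidth.
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity, bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Generous assumptions: a few Hz of usable bandwidth, decent SNR.
elf = capacity_bps(bandwidth_hz=5, snr_linear=100)
print(f"ELF link: ~{elf:.0f} bit/s")     # a few tens of bits per second at best
print("telephone-quality speech: ~64,000 bit/s")
```

Even with these optimistic assumptions the channel is three orders of magnitude too narrow for speech; real ELF deployments were far slower still.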

Heterodyne systems don't let you pack more information into a lower-frequency signal, they go the other way.


Another paper I find important: https://ieeexplore.ieee.org/document/6711930 What are your thoughts on radiomyography?

If you have a look at the paper, their antennas are designed to operate between 902-928 MHz, and they test for response at 600 MHz-1200 MHz, with it attenuating very quickly away from the peak at 900 MHz. This is, again, an entirely different thing from VLF and ELF, which operate below 30 kHz, at frequencies tens of thousands of times lower.

Different indeed. Sorry about that - it's just that with voices in head there's barely any other priority in your life. Have a nice day.

What are your thoughts on FMCW radars? The research paper "Emotion recognition method using millimetre wave radar based on deep learning" is what I find interesting. There is also a research paper dealing with recognition of driver behavior (logically meaning FMCW radars are capable of locking onto moving targets at quite long range). From what I understood (with my limited understanding of the subject), such radars are capable of picking up EMG signals, and according to Wikipedia, EMG is enough to feed neural networks and decipher so-called "silent talk", aka your inner monologue.

Those operate in the gigahertz range. Note that the emotion detection is done by scanning facial expressions; the gigahertz range is starting to get pretty close to light, and it penetrates skin to a depth of less than a millimeter, so it can be used to draw pictures.

Not sure what you mean by "draw pictures". But I would be forever grateful if you could confirm or dismiss the technical possibility of scanning EMGs remotely. It seems that way to me, at least to a degree already sufficient for stuff like neural fingerprinting (yielding individual identification and positioning within radar range), but I lack the understanding of the actual physics to assess the limits nature imposes. The rest falls into place easily: the Frey effect is proven science, and NASA decoded EEG signals into words already back in 2003.

Sounds like you know a thing or two, which is what I appreciate. I don't know what's used and don't have a budget to perform any meaningful analysis. But I do know that GWEN towers are 30-year-old tech capable of transmitting a signal in the 200 MHz to GHz range 200 miles away. The Frey effect, according to Wikipedia, is audible in the 200 MHz - 3 GHz range. According to James C. Lin's research, the signal doesn't have to be strong (in fact, a strong signal can be hazardous to health and used as a weapon, like in the Venezuela operation). Surely radio folks can match up 30-year-old tech, and signal processing capabilities nowadays can get absurdly sophisticated.

That doesn't matter. The statement wasn't "faster than AI right now", it was "will always be faster than AI". And that's just nonsense.

Current AI systems are extremely serial, in that very little of the inherent parallelism of the problem is utilized. Current-gen AI systems run at most a few hundreds of thousands of operations in parallel, while for frontier models, billions of operations could be run in parallel. Or in other words, what currently takes AI 8 hours will take it barely long enough for you to perceive the delay after you release the enter key.

For a demo, play around with https://chatjimmy.ai/ , the AI chatbot of Taalas, who etched the model into silicon in a distributed way instead of storing it in RAM and sucking it to the execution units through a straw. It's an 8B-parameter model, so it's unsuitable for complex problems, but the techniques used for it will work for larger models too, and they are working to get there.

And even Taalas is very far from the limits. Modern better-quality LLM chatbots operate at ~40 tokens per second; the Taalas chatbot operates at 17,000 tokens/s. If you took full advantage of parallelism, you should be able to get a latency of low hundreds of clock cycles per token, or single-request throughput of millions of tokens per second (with a fully pipelined model serving one token per clock cycle, fed by low hundreds of concurrent requests).

Why doesn't everyone do it like that right now? Because to do this, you need to etch your model into silicon, which on modern leading-edge manufacturing is a very involved process that costs hundreds of millions+ in development and mask costs (we are not talking about single chips here; you can barely fit that 8B model onto one), and takes around a year. So long as models keep improving so much that a year-old model is considered too old to pay back the capital costs, the investment is not justified. But when it is done, it will not just make AI faster, it will also make it much more energy-efficient per token: most of the energy cost comes from moving data around and loading/storing it in memory.
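A back-of-envelope illustration of the fully pipelined scenario (the clock speed and pipeline depth here are assumptions for the sake of arithmetic, not Taalas figures):

```python
# Throughput of a hypothetical fully pipelined weights-in-silicon model.
clock_hz = 1.5e9        # assumed chip clock
pipeline_cycles = 300   # assumed cycles from input token to output token

latency_s = pipeline_cycles / clock_hz
# Autoregressive decoding: each token depends on the previous one,
# so a single request sees one token per full pipeline latency...
single_request_tps = 1 / latency_s
# ...while the pipeline as a whole can retire one token per clock cycle,
# if kept full by ~pipeline_cycles independent requests in flight.
aggregate_tps = clock_hz

print(f"latency per token: {latency_s * 1e9:.0f} ns")
print(f"one request: {single_request_tps / 1e6:.1f}M tokens/s")
print(f"{pipeline_cycles} interleaved requests: {aggregate_tps / 1e6:.0f}M tokens/s")
```

Even these conservative assumed numbers land a single request in the millions of tokens per second, orders of magnitude beyond today's memory-bound serving.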

And I want to stress that none of the above depends on any kind of new developments or inventions. We know how to do it; it's held back only by the pace of model improvement and economics. When models reach a state of truly "good enough", it will happen. It feels perverse to me that people are treating this situation as "there was a pre-AI period that worked like X, now we are in a post-AI period and we have figured out that it will work like Y". No. We are at the very bottom of a very steep curve, and everything will be very different when it's over.


Huh, I have to say that I am impressed with Chat Jimmy. No doubt that the hardware running this model operates faster than any human. If this is possible to scale (and I'm not saying it isn't, I just don't think it's likely right now), LLMs have a real shot at replacing real-time graphics, frontend UIs, and all sorts of interactive media if the market allows it.

I still think regardless of how fast a model outputs tokens, it still benefits the person responsible for that output to be well informed and knowledgeable about the abstractions they're piling on top of. If you have deep knowledge, you can operate faster than other people, and make those important decisions in a more intelligent manner than any model.

Maybe in the end we do get superintelligence and my point will finally break, but at that point I don't think I'll be worried about being wrong on the internet.


Point 2 needs a more substantive rebuttal. LCDM correctly predicts where the dark matter is located after a galaxy cluster collision, such as in the Bullet Cluster. There is no reasonable interpretation of MOND that has the center of mass of the cluster shifted away from where its visible matter lies, which is precisely what LCDM predicts.

There is a reason why LCDM used to be a lot more disputed before the work of Clowe, Gonzalez, and others on the Bullet Cluster, and is now generally treated as settled science by practitioners. We might still be surprised by something; the universe is more wondrous and complex than we can possibly understand. But Occam's razor massively supports LCDM now. If you want to propose an alternative, you need to start by showing that it explains the Bullet Cluster as well as or better than LCDM does. (And the Bullet Cluster specifically is not the only place where this is visible; there are others, like MACS J0025.4-1222.)


> LCDM correctly predicts where the dark matter is located after a galaxy cluster collision, such as in the Bullet Cluster. There is no reasonable interpretation of MOND that has the center of mass of the cluster shifted away from where its visible matter lies, which is precisely what LCDM predicts.

It does not really make that "prediction"; it's a post hoc assignment of dark matter density based on weak lensing, for which you can make a plausible "this is how it started" explanation.

You can counter that LCDM can't explain tons of stuff that MOND can, from the Tully-Fisher relation to barred spiral galaxies (n >> thousands), etc.


We have indirectly observed it a bunch, and our understanding of physics allows the existence of material that flatly cannot be directly observed.

The -85 was released in 1992; iirc it's TI's second graphing calculator. The -83 is a later model.

I was told that one of the designers graduated high-school in '81 and college in '85, so the HS calculator was an 81 and the college calculator was an 85.

Time for my daily "HBF is coming" comment.

The next step for models is to put the weights on flash, connected with a very wide interface to the accelerator. The first users will be datacenters, but it should trickle down to consumer hardware eventually. A single 512GB stack is expected to cost about $200, and provide 1.6TB/s of reads.

You still need some fast DRAM for the KV cache and for activations, but weights should be sitting on flash.
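A rough illustration of what that read bandwidth means for dense decoding (the model size and weight precision below are hypothetical; a dense model must stream every weight once per token):

```python
# Tokens/s for dense decoding straight from HBF stacks.
stack_bw = 1.6e12       # bytes/s per HBF stack (figure from the comment above)
model_bytes = 70e9      # hypothetical 70B-parameter model at 8-bit weights

tokens_per_s = stack_bw / model_bytes
print(f"1 stack:  {tokens_per_s:.0f} tokens/s")
print(f"8 stacks: {8 * stack_bw / model_bytes:.0f} tokens/s")
```

One stack gets a dense 70B model to ~23 tokens/s; striping weights across several stacks, or using a sparse MoE model that touches only a fraction of its weights per token, multiplies that accordingly.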


Reading from flash is too power-intensive compared to DRAM, which is why flash offload isn't used in the data center today. Flash is also prone to wearing out quickly, so ephemeral data like the KV cache can't really be stashed in there. Unless your model has an unprecedented level of sparsity, I just don't see how HBF could ever be useful.

Currently available flash is obviously unusable. HBF is not that.

The reason HBF is (about to be) a thing is that flash manufacturers realized that if you heavily optimize flash for read throughput and energy, as opposed to density, you can match DRAM on throughput and get to within 2x on energy, at the cost of half your density. That would make the density still ~50 times better than DRAM, built on a cheap mass-produced process. All manufacturers are chasing this hard right now, with first samples to arrive later this year.

You are correct that it would absolutely not be used for any mutable data, only weights in inference. This is both because there is insufficient endurance (expected to be ~hundreds of drive writes total), but also because it will be very slow to write compared to the read speed. A single HBF stack is expected to provide 1.6TB/s reads, and single-digit GB/s writes. That's why I wrote the last sentence of my post that you replied to.


You're thinking in a provably-useful direction:

https://arxiv.org/pdf/2312.11514


HBF is not that. The paper you linked is about how to use flash memory that exists to boost LLM performance, with all kinds of optimization tricks. HBF is about making flash memory that doesn't require any of those tricks, and just has the read throughput that's needed for inference.
