Hacker News | generuso's comments

Sure. Line-scan indoor units are extremely affordable, and some cost less than $20, sold as spare parts for robot vacuum cleaners. Outdoor units (with higher ambient light tolerance and longer range) are an order of magnitude more expensive, but also available.

Here is some detailed information about low cost units: https://github.com/kaiaai/awesome-2d-lidars/blob/main/README...


The efficiency of X-ray tubes is proportional to voltage, and is about 1% at 100 kV. That is the ballpark for garden-variety X-ray machines. But the wavelength of interest for lithography corresponds to a voltage of only about 100 V, so the efficiency would be about 10 parts per million.

The source in the ASML machine produces something like 300-500 W of light. With an X-ray tube, this would require an electron beam with 50 MW of power. Focused into a microscopic dot on the target, that would not work for any length of time. Even if it did, cooling the target and getting rid of the unwanted wavelengths would be very difficult.
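A quick back-of-envelope check of the numbers above, under the stated assumption that X-ray tube efficiency scales linearly with anode voltage (roughly 1% at 100 kV):

```python
# Rough sanity check, assuming tube efficiency is proportional to voltage.
eff_at_100kV = 0.01                        # ~1% conversion efficiency at 100 kV
eff_at_100V = eff_at_100kV * 100 / 100e3   # linear scaling down to ~100 V
print(eff_at_100V)                         # 1e-05, i.e. 10 parts per million

euv_output_W = 500                         # upper estimate of the source output
beam_power_W = euv_output_W / eff_at_100V  # electron-beam power required
print(beam_power_W / 1e6)                  # 50.0 (MW)
```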

A light bulb does not work because it is not hot enough. I suppose some kind of RF-driven plasma could be hot enough, but considering that the source needs to be microscopic in size for focusing reasons, it is not clear how one could focus the RF energy on it without also ruining the hardware.

So, they use a microscopic plasma discharge which is heated by the focused laser. It "only" requires a few hundred kilowatts of electricity to power and cool the source itself.


Neato from San Diego developed a $30 (indoor, parallax-based) LIDAR about 20 years ago, for their vacuum cleaners [1].

Later, improved units based on the same principle became ubiquitous in Chinese robot vacuums [2]. Such LIDARs, and similar-looking but more conventional time-of-flight units, are sold for anywhere between $20 and $200, depending on the details of the design.

[1] https://scholar.google.com/scholar?q=%22A+Low-Cost+Laser+Dis... [2] https://github.com/kaiaai/awesome-2d-lidars/blob/main/README...
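The parallax (triangulation) principle behind such units can be sketched numerically. All values below are made-up illustrative numbers, not the design parameters of any actual unit:

```python
# Triangulation rangefinder sketch: a laser spot imaged at pixel disparity d
# gives range z = f * b / (d * pixel_pitch). Illustrative numbers only.
f_mm = 16.0        # lens focal length (assumed)
b_mm = 50.0        # laser-to-camera baseline (assumed)
pixel_mm = 0.006   # pixel pitch on the sensor (assumed)

def range_mm(disparity_px):
    return f_mm * b_mm / (disparity_px * pixel_mm)

print(round(range_mm(80)))   # a near target, about 1.7 m away
print(round(range_mm(2)))    # a far target -- range resolution degrades quickly
```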


You could [1], but it is not very cheap -- the 32 GB development board with the FPGA used in the article cost about $16K.

[1] https://arxiv.org/abs/2401.03868


LSI Logic and VLSI Technology used to do such things in the 1980s -- they produced quantities of "universal" base chips, and then customized them relatively inexpensively and quickly for different uses and customers by adding a few interconnect layers on top. Like hardwired FPGAs. Such semi-custom ASICs were much less expensive than full-custom designs, and one could order them in relatively small lots.

Taalas, of course, builds base chips that are already closely tailored to a particular type of model. They aim to generate the final chips, with the model weights baked into ROMs, within two months of the weights becoming available. They hope that the hardware will be profitable for at least some customers, even if the model is only good enough for a year. Assuming they do get superior speed and energy efficiency, this may be a good idea.


The document referenced in the blog does not say anything about the single-transistor multiply.

However, [1] provides the following description: "Taalas’ density is also helped by an innovation which stores a 4-bit model parameter and does multiplication on a single transistor, Bajic said (he declined to give further details but confirmed that compute is still fully digital)."

[1] https://www.eetimes.com/taalas-specializes-to-extremes-for-e...


It'll be different gates on the transistor for the different bits, and you power only one set depending on which bit of the result you wish to calculate.

Some would call it a multi-gate transistor, whilst others would call it multiple transistors in a row...


That, or a resistor ladder with 4-bit branches connected to a single gate, possibly with a capacitor in between, representing the binary state as an analogue voltage -- i.e., an analogue-binary computer. If it works for flash memory, it could work for this application as well.
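One way to picture the resistor-ladder idea: an ideal R-2R ladder turns a 4-bit code into one of 16 analog voltage levels, the way multi-level flash cells store several bits per cell. This is purely a sketch of the concept, not a description of Taalas' actual circuit:

```python
# Ideal R-2R ladder DAC sketch: a 4-bit code becomes one of 16 analog levels.
# Illustrative only -- not a description of any real product's circuit.
VREF = 1.0  # reference voltage (assumed)

def r2r_dac(code4):
    # code4: integer 0..15; an ideal ladder outputs VREF * code / 2^N
    assert 0 <= code4 <= 15
    return VREF * code4 / 16

for code in (0b0000, 0b0101, 0b1111):
    print(f"{code:04b} -> {r2r_dac(code):.4f} V")
```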

That's much more informative, I think my original comment is quite off the mark then.

It is not the same kind of equipment. ASML machines use a 500-watt EUV source in order to be able to expose a few wafers per minute. The tabletop device has its output power listed as "1 uw-10 mw". This is a source intended for spectroscopy instrumentation, not for exposing wafers.


One of their patents describes exactly that -- driving a hardened stud into the softer metal of the deck, essentially with a gunpowder-actuated nail gun:

https://patents.google.com/patent/US20240092508A1/en

They have also included a way to disconnect the stud from the leg afterwards, so that the deck can be tidied up conveniently after the rocket has been removed. This is a neat idea -- the damage to the deck should be very localized, and the rocket gets secured quickly and without putting human welders at risk.


The idea was always appealing, but the implementation has always remained challenging.

For over a decade, "Mythic AI" has been making accelerator chips with analog multipliers based on research by Laura Fick and coworkers. They raised $165M and produced actual hardware, but at the end of 2022 they almost went bankrupt, and very little has been heard from them since.

Much earlier, the legendary chip designers Federico Faggin and Carver Mead founded Synaptics with the idea of making neuromorphic chips that would be fast and power-efficient by harnessing analog computation. Carver Mead published a book on the subject in 1989, "Analog VLSI and Neural Systems", but making working chips turned out to be too hard, and Synaptics successfully pivoted to touchpads and later many other types of hardware.

Of course, the concept can be traced to Frank Rosenblatt's even older and still more legendary "Perceptron" -- the original machine-learning system from the 1950s. It implemented the weights of the neural network as variable resistors that were adjusted by little motors during training. Multiplication was simply the input voltage times the conductance of the resistor, producing a current -- which is what all the newer systems are also trying to use.
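Rosenblatt's resistor multiply is just Ohm's law: I = V * G, and currents summing at a node give the weighted sum for free. A sketch with made-up values (the negative conductances stand in for the differential pairs real circuits use to get signed weights):

```python
# Analog multiply-accumulate via Ohm's law: each resistor of conductance G
# multiplies its input voltage by a weight; currents sum at a common node.
# Values are illustrative; negative G is shorthand for a differential pair.
inputs_V = [0.5, -0.2, 0.8]       # input voltages
weights_S = [0.01, 0.03, -0.02]   # conductances in siemens ("weights")

current_A = sum(v * g for v, g in zip(inputs_V, weights_S))
print(current_A)  # the weighted sum, read out as a current
```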


I know of only one successful real-world product using analog computation in place of an expensive high-end micro. It was the first proper optical mouse (no dedicated special mousepads), designed and built by HP->Agilent->Avago and released by Microsoft in 1999 as the IntelliMouse Optical. https://gizmodo.com/20-years-ago-microsoft-changed-how-we-mo... Afaik Microsoft bought one year of exclusivity for the sensor. The Avago HDNS-2000 chip did all the heavy lifting in the analog domain.

Travis Blalock Oral History https://www.youtube.com/watch?v=wmqa9XJED-Q https://archive.computerhistory.org/resources/access/text/20...:

"each array element had nearest neighbor connectivity so you would calculate nine correlations, an autocorrelation and eight cross-correlations, with each of your eight nearest neighbors, the diagonals and the perpendicular, and then you could interpolate in correlation space where the best fit was."

"And the reason we did difference squared instead of multiplication is because in the analog domain I could implement a difference-squared circuit with six transistors and so I was like “Okay, six transistors. I can’t do multiplication that cheaply so sold, difference squared, that’s how we’re going to do it.”"

"little chip running in the 0.8 micron CMOS could do the equivalent operations per second to 1-1/2 giga operations per second and it was doing this for under 200 milliwatts, nothing you could have approached at that time in the digital domain."
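The difference-squared matching described in those quotes can be sketched in a few lines. This is a software illustration of the idea (autocorrelation plus eight nearest-neighbor shifts, pick the best fit), not the HDNS-2000 circuit, and the frame size and pixel values are made up:

```python
# Sketch of difference-squared patch matching: compare the previous frame
# against the current frame at its 9 one-pixel shifts (including no shift),
# and pick the shift with the smallest sum of squared differences.
def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_shift(prev, curr, w):
    # prev, curr: flattened w*w grayscale frames (toroidal wrap for brevity)
    def shifted(img, dx, dy):
        return [img[(y + dy) % w * w + (x + dx) % w]
                for y in range(w) for x in range(w)]
    shifts = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return min(shifts, key=lambda s: ssd(prev, shifted(curr, *s)))

# A bright spot at (1,1) moves to (2,1): the surface moved one pixel in x.
w = 4
prev = [0.0] * 16; prev[1 * 4 + 1] = 1.0
curr = [0.0] * 16; curr[1 * 4 + 2] = 1.0
print(best_shift(prev, curr, w))  # (1, 0): best fit at a one-pixel x shift
```

The real sensor then interpolated between these nine correlation values to get sub-pixel motion estimates.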

Extra Oral History with inventor of the sensor Gary Gordon: https://www.youtube.com/watch?v=TxxoWhCzIeU


The optical mouse is a great example. There are lots of pre-90s examples too, of course, such as in military applications.

One of the reasons for the failure to compete is that all computers are actually physical computers. Digital is still tethered to one of the greatest analog components ever discovered, so when you do analog AI you are really competing with the physics of the transistor. Digital computation is the complex icing on top of an analog cake.


The idea of analog neural networks is appealing. I bought "Analog VLSI and Neural Systems" in 1989 and still have it as a trophy on my bookshelves. My gut feeling says that one day analog neural networks will be a thing, if only for the considerably lower power consumption.

I’m not saying that life is analog; DNA is two bits per base. IMHO life is a mix of analog and digital.


It is very difficult to scale digital-analog hybrids because of the number of DAC and ADC components required.


From what I have seen, Ethernet ports always have a small isolation transformer for each twisted pair, between the connector and the PHY. Usually four such transformers are combined in one small magnetics package. The insulation in the transformer is specified to withstand over a kilovolt of lightning-induced voltage -- that is one of the purposes of such galvanic isolation.

The data travels as the differential voltage in each of the twisted pairs, and is transmitted magnetically by the transformer to the secondary winding. The power is applied between different pairs, and in each pair appears as a common mode voltage. This is all stopped by the transformer, and in devices designed to support PoE, the PoE circuits tap the mid-point of the primary windings to access the supplied voltage.

So at first glance, it seems that if 48 volts were applied between the twisted pairs of a non-PoE device, the voltage would simply be blocked by the transformer. But since there is widespread concern about this, there must be more to the story -- maybe somebody who has actually worked with these circuits can explain why this is more complicated than it seems?

Edit: Found an answer. It seems that at least some designs of non-PoE Ethernet jacks terminate the common-mode signals to a common ground through 75-ohm resistors. In that case, if the voltage were applied between the twisted pairs, the resistors would dissipate far too much power and burn out. So there is definitely a concern with dumb PoE injectors and at least some non-PoE devices. https://electronics.stackexchange.com/questions/459169/how-c...
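Rough numbers for this failure mode, assuming the 75-ohm common-mode terminations described above end up in series across the injected voltage:

```python
# A passive injector puts 48 V between two pairs; in a non-PoE device the
# common-mode path can run through two 75-ohm termination resistors in
# series, which are typically small parts rated for a fraction of a watt.
V = 48.0
R_total = 75.0 + 75.0        # two 75-ohm terminations in series (assumed path)
P_total = V ** 2 / R_total
print(P_total)               # 15.36 W total
print(P_total / 2)           # 7.68 W per resistor -- far beyond a tiny SMD part
```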


Never noticed the 75 ohms before, but it'd be 150 ohms for passive PoE (through one pair and back through another: two 75s in series).

There are fixes, but passive PoE was a pretty dirty hack, so negotiation got added.


