it would be great if this were the case. unfortunately, Uber has been documented to practice individual price discrimination at a massive scale, using factors like whether you're in a low-income or high-income neighborhood and individual rider "price sensitivity", in addition to market conditions (surge pricing), and as a result they have netted billions in profit [1]. i would guess this is why Uber AI researchers are paid so much.
It would be difficult, but not impossible. The main problem is tracing out all the circuitry, which is very time-consuming and error-prone. Trust me on this :-)
The second problem is that converting the circuitry to Verilog is straightforward, but converting it to usable Verilog is considerably more difficult. If you model the circuit at the transistor level in Verilog, you won't be able to do much with the model. You want a higher-level model, which requires converting the transistors into gates, registers, and so forth. Most of this is easy, but some conversions require a lot of thought.
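To make that concrete, here's a toy sketch (my own invention, not actual 8087 circuitry) of the same dynamic latch written first at switch level and then as the kind of higher-level model you actually want:

    // Switch level: a clocked pass transistor feeding a charge-storage
    // node. It simulates, but tools can't do much else with it.
    module latch_switch_level (input d, input clk, output q);
      trireg q_node;                 // node that holds charge dynamically
      nmos   pass (q_node, d, clk);  // pass transistor gated by the clock
      buf    out  (q, q_node);
    endmodule

    // Behavioral level: the same thing as a transparent latch, which
    // tools (and people) can actually work with.
    module latch_rtl (input d, input clk, output reg q);
      always @*
        if (clk) q = d;              // transparent while clk is high
    endmodule

Much of the chip converts this mechanically; it's the dynamic nodes, shared buses, and similar tricks that take real thought.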
The next issue is that you would probably want to use the Verilog in an FPGA. A lot of the 8087's circuitry isn't a good match for an FPGA. The 8087 uses a lot of dynamic logic and pass transistors. Things happen on both clock edges, so it will take some work to map it onto edge-trigger flip-flops. Moreover, a key part of the 8087 is the 64-bit shifter, built from bidirectional pass transistors, which would need to be redesigned, probably with a bunch of logic gates.
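If it helps to picture the replacement: a mux-based barrel shifter is the usual FPGA-friendly substitute for a pass-transistor shift network. A minimal sketch (mine, not taken from the 8087):

    module barrel_shift_left #(parameter W = 64) (
      input  [W-1:0]         din,
      input  [$clog2(W)-1:0] amount,   // 6 bits for a 64-bit word
      output [W-1:0]         dout
    );
      // The shift operator synthesizes to log2(W) layers of 2:1 muxes,
      // which map cleanly onto LUTs, unlike bidirectional pass transistors.
      assign dout = din << amount;
    endmodule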
The result is that you'd end up more-or-less reimplementing the 8087 rather than simply translating it to Verilog.
ah, i see, thanks for the insight! do you have any advice on how one might get started with IC reverse-engineering? i think it would be interesting to reimplement these chips in a way that's at least inspired by the original design
How to get started reverse engineering? That's a big topic for a HN comment, but in brief... Either get a metallurgical microscope and start opening up chips, or look at chip photos from a site like Zeptobars. Then start tracing out simple chips and see how transistors are constructed, and then learn how larger circuits are built up. This works well for chips from the 1970s, but due to Moore's Law, it gets exponentially more difficult for newer chips.
Do you have any good tips on what to look out for when buying a used metallurgical microscope for looking at decapped chips? Even if it's not a complete set of constraints, I'd appreciate some off-the-cuff thoughts if you have the time.
I use a basic metallurgical microscope (AmScope ME300TZB). An X-Y stage is very useful for taking photos of chips and stitching them together. A camera is also important; my scope has a 10MP camera. I'm not into optics, so I don't know what lens characteristics to look for.
There's a difference between what Verilog will allow and what is "synthesizable". In other words, there is a lot of stuff that you can express in Verilog, but when you try to turn it into an FPGA bitstream, the software will say, "Sorry, I don't know how to do that." Coming from a software background, this seems bizarre, as if C++ compilers rejected valid programs unless they stuck to easy constructs with obvious assembly implementations.
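A contrived pair of examples (not tied to any particular toolchain) of the kind of thing I mean:

    // Perfectly legal Verilog, but there's no hardware equivalent of a
    // free-running time delay, so synthesis will refuse it.
    module sim_only;
      reg clk = 0;
      always #5 clk = ~clk;
    endmodule

    // This one maps directly onto a flip-flop and a LUT, so it's fine.
    module synth_ok (input clk, input a, output reg y);
      always @(posedge clk)
        y <= ~a;
    endmodule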
Using both edges of a clock is something that you can express in Verilog, but can't be directly mapped onto an FPGA, so the synthesis software will reject it. You'd probably want to double the clock rate and use alternating clock pulses instead of alternating edges. See: https://electronics.stackexchange.com/questions/39709/using-...
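Roughly, the workaround looks like this (a sketch with my own naming, not from the linked answer): run everything from a clock at twice the rate and keep a phase bit so you know which cycles stand in for the old positive and negative edges.

    module both_edges_emulated (
      input      clk2x,    // 2x the original clock frequency
      input      rst,
      input      d,
      output reg q_pos,    // state the original design updated on posedge
      output reg q_neg     // state the original design updated on negedge
    );
      reg phase;
      always @(posedge clk2x)
        phase <= rst ? 1'b0 : ~phase;

      always @(posedge clk2x)
        if (!rst) begin
          if (!phase) q_pos <= d;   // former posedge work
          else        q_neg <= d;   // former negedge work
        end
    endmodule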
Coming from an electronics design background, I'm even more amazed that Verilog can't gracefully handle multi-phase clocks, let alone two phases of a single clock. That's a big part of getting the most out of your power and timing budget. Also, it seems half the discussion around clocking in FPGAs is about the metastability of communicating between logic on separate single-phase clocks. If even one clock used two phases, you'd have entirely stable conditions.
I've found that the FPGAs themselves can handle multi-phase clocks in combinatorial logic. If you want to use the built-in clock routing and latches, I would recommend running the output of the PLL to a LUT input, then outputting that input as well as its inverse from the LUT, routing each to a global clock input. That will keep the phase right at 180°, let you drive directly off global clock fanout, and let you run the clock at the highest frequency that the fabric supports.
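In RTL the trick above is nothing more than the snippet below; whether the inverter really lands in a LUT and both nets get onto global clock routing is up to the vendor tools and your constraints, not this code.

    module two_phase_clock (
      input  pll_clk,       // output of the PLL
      output clk_0,         // 0° phase, to one global clock input
      output clk_180        // 180° phase, to the other global clock input
    );
      assign clk_0   = pll_clk;
      assign clk_180 = ~pll_clk;   // the inverse the LUT is meant to produce
    endmodule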
> Coming from a software background, this seems bizarre, as if C++ compilers rejected valid programs unless they stuck to easy constructs with obvious assembly implementations.
To my understanding, isn't it more like there being a perfectly good IR instruction encoding the feature, but no extant ISA codegen target that recognizes that instruction? I.e., you get stuck at the step where you're lowering the code for a specific FPGA impl.
And, as with compilers, one could get around this by defining a new abstract codegen target implemented only in the form of a software simulator, and adding support for the feature to that. Though it would be mightily unsatisfying to ultimately be constrained to run your FPGA bitstream on a CPU :)
The non-synthesizable features of Verilog not only work in current simulators, they were expressly developed for that purpose. Verilog has those features to describe conditions that might exist in a semiconductor as manufactured, but aren't part of any design, so that they can be more accurately simulated. For example, a pin state can be unknown, or two pins can be connected with a delay line. These allow a real-life semiconductor to be characterized well enough to insert into a simulation of the electronics circuit as a whole.
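A small sketch of what that looks like in practice (values and delays made up): a pin whose state is unknown until something drives it, and two pins tied together through a delay.

    module board_model;
      reg  drive_en = 1'b0;
      wire pin_a = drive_en ? 1'b1 : 1'bx;   // unknown (x) until driven
      wire pin_b;
      assign #3 pin_b = pin_a;               // pins connected through a 3-unit delay

      initial begin
        #10 drive_en = 1'b1;                 // start driving the pin
        #10 $display("pin_a=%b pin_b=%b", pin_a, pin_b);
        $finish;
      end
    endmodule

None of that has any hardware meaning, which is exactly the point: it exists to make the simulation match the part as manufactured.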
It's more akin to directives than instructions. Debug instructions can also serve a similar purpose, although they actually run on the hardware, whereas compiler directives and non-synthesizable Verilog constructs are never executed.
personally, the evolution of replit makes me sad. i remember writing some of my very first programs in python using replit in middle school, as it wasn't blocked by the school network and it was the best way of running arbitrary code online back then. i used it to execute java code for AP computer science in high school, and i improved a ton at using the terminal as well. at some point, i stopped using the replit web editor and was coding by full-screening the built-in terminal and using vim. it was a formative experience and really helped me develop as a programmer even though all i had access to was a locked-down chromebook.

but now, going back to the website, the first thing it shows you is how you can "build apps using AI", you can't even create an environment to run some python code without talking to an LLM, and the company is focused on ARR and becoming "AI-native" and creating value and all that jazz. it feels like the magic of learning to code for the first time has been lost.

luckily, kids these days are spoiled with webassembly and can run pretty much whatever they want in the browser, so i'm sure the next generation of young programmers will be alright
I agree so much with this. I used Replit extensively to prototype things in different languages and share my work with my teammates, who could then suggest tweaks to my ideas. We never came close to the limits of the free tier, but I paid for it anyway because I loved the product.
You should be grateful to and happy for the people who helped you on your journey; they made it! You know, apps are not AI (yet); what you used for free was built by someone who deserves the success.
But that's true of everything we use. Someone made that fork you use. That plate. The shoes you wear. Same for an app. You don't deserve anything. If you make something and release it into the world, you have a responsibility, not a reward. A reward may come. People may pay you for your services or your novelty, but in no way, shape, or form are you deserving of it. Deserving of something is a third-person observation. You cannot demand that you deserve anything.
> But that’s true of everything we use. Someone made that fork you use
I paid a fair price for my forks and plates and shoes; I don't need to be grateful for that. Using a platform for free and then complaining that they now focus on making money is not the same.
> Deserving of something is a 3rd person observation. You can not demand that you deserve anything.
> I paid a fair price for my forks and plates and shoes, I don't need to be grateful for that. Using a platform for free and then complaining that now they focus on making money is not the same.
They were focused on making money back then too. Nobody's complaining that they were trying to make money!
If someone is grateful that the old product existed at the time, because it helped them, it's completely understandable and valid that they'd be sad that it no longer exists to help other people. What there is now has a hundred almost-the-same alternatives[0]; it sounds like what they used to be was unique.
[0] The article quotes a claim about shipping "the first agent-based coding experience in the world" last fall, but that seems to ignore not-as-successful earlier efforts like AutoGPT, which had code-agent projects by early 2024 if not before, with initial "agent" behavior in 2023.
> Did he consider that the LLM wouldve done it the right way after the first message if prompted correctly?
I think the article is implicitly saying that an LLM that's skilled enough to write good code should have done it "the right way" without extra prompting. If LLMs can't write good code without human architects guiding them, then I doubt we'll ever reach the "10x productivity" claims of LLM proponents.
I've also fallen into the same trap as the author: assuming that because an LLM works well when guided to do some specific task, it will also do well writing a whole system from scratch or doing some large reorganization of a codebase. It never goes well, and I end up wasting hours arguing with an LLM instead of actually thinking about a good solution and then implementing it.
AOPS is a great community, but it's for competitive mathematics (AIME, AMC, USAMO, etc.).
For OP: I think the problem with your question is that "math" is too broad, e.g. Math Overflow is for professional mathematicians, things like AOPS are very specific and most online communities are about what you would call "recreational" math (think 3blue1brown or standupmaths), likely far below what you want.
ok, from what you say I guess Math Overflow might be my best bet then. I'd like to have the competence of a professional mathematician, even though I work as a software engineer because it pays more :)
The kind of maths they do on MathOverflow is PhD level and above... they probably get paid more if they leave academia, but then they are hired to do the kind of maths you see on math.stackexchange.
Not really. As a former theoretical physicist who also got offers from many math PhD programs and has many math PhD friends, I can say that by far the most recruiting emails we get are from quant trading firms, and you won't find that kind of math on Math.SE.
But yeah, roughly speaking you need to be at least a math PhD student who has passed their quals to be at the entry level of MathOverflow.
This is a really fun exploration of an old premise, thanks! Great little easter eggs with color zone and the wedding.
(But I'm completely stuck in the hard rain after 'Emergency - connect to snake': sometimes I survive the stream, sometimes I can't, and I haven't figured out the state that makes the stream traversable.)
[1] https://len-sherman.medium.com/how-uber-became-a-cash-genera...