Currently it looks like at least Firefox and Chromium both cache stylesheets and included files as you'd expect. In fact, you can use this to increase cacheability in general. e.g. when this site is having performance issues, it often works logged out/when serving static versions of pages. It's easy to make every page static by including a `/myuser.xml` document in the xsl template and using that to get the current logged in user/preferences to put on the page. This can then be private cached and the pages themselves can be public cached. You can likewise include an `/item-details.xml?id=xxxx` that could provide data for the page to add the logged in user's comment scores, votes, etc. If the included document fails to fetch, it falls back to being empty, and you get the static page (you could detect this and show a message).
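As a hypothetical sketch of that include trick (element names, paths, and structure are all invented here, not taken from this site's actual templates), the XSLT side might look something like this, with `document()` pulling in the separately cached per-user fragment:

```xml
<!-- Hypothetical sketch: pull a private-cached per-user document
     into an otherwise fully static, public-cached page. -->
<xsl:template match="/page">
  <html>
    <body>
      <!-- document() fetches /myuser.xml; if the fetch fails it
           yields an empty node-set, so the page simply renders
           without the user-specific parts (and you could test for
           that and show a "logged out" message instead). -->
      <xsl:variable name="user" select="document('/myuser.xml')/user"/>
      <div class="userbar">
        <xsl:value-of select="$user/name"/>
      </div>
      <!-- ...static page content, cacheable for everyone... -->
    </body>
  </html>
</xsl:template>
```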
I work on compilers. A friend of mine works on webapps. I've seen Cursor give him lots of useful code, but it's never been particularly useful on any of the code of mine that I've tried it on.
It seems very logical to me that there'd be orders of magnitude more training data for some domains than others, and that existing models' skill is not evenly distributed cross-domain.
In theory you could do interferometry with a lot of orbiting radio telescopes. I'm no radio astronomy expert, but I can see a lot of practical problems with this idea.
For one thing, if you want the same signal-collecting power as the Arecibo observatory, you need dishes with the same total area. Since Arecibo is 1000 feet in diameter, if you put dishes on every one of the 12,000 satellites in the initial Starlink constellation, they would each have to be over 9 feet in diameter. That's about the same size as the chassis of the satellite itself.
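That figure is easy to check with back-of-the-envelope arithmetic (nothing here beyond the numbers above):

```python
import math

arecibo_diameter_ft = 1000
n_satellites = 12_000

# Total collecting area of Arecibo's dish, treated as a flat circle.
total_area = math.pi * (arecibo_diameter_ft / 2) ** 2

# Split that area evenly across the constellation and back out
# the diameter each satellite dish would need.
area_per_dish = total_area / n_satellites
dish_diameter_ft = 2 * math.sqrt(area_per_dish / math.pi)

print(round(dish_diameter_ft, 2))  # about 9.13 ft per satellite
```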
In order to do anything useful with the collected data, the receivers need to have very precisely synchronized clocks, and their relative positions need to be known to within a small fraction of the wavelengths you're interested in (which for Arecibo can be on the order of centimeters). I'm not sure whether GPS receivers alone would be enough to meet these requirements -- you might need to add atomic clocks to every satellite as well.
Now you have to think about how to aim the antennas. Presumably you can't just reorient the entire satellite, because its main job is to keep its ground-facing antennas aimed at the ground and its solar panels aimed at the sun. So you need to add a separate antenna pointing mechanism, with a fairly wide range of very accurate movement along multiple axes, so that all of the radio antennas can observe the same region of the sky simultaneously.
Presumably the Arecibo telescope itself is connected to fairly sensitive, low-noise, specialized signal processing equipment. You would have to take all of this equipment, design a space-rated version that can fit on a satellite, and then manufacture 12,000 of them. You also need to add enough solar panels to power it.
All of this would add a huge amount of mass to every satellite, which would make them way more expensive to launch. Note that this applies to both the monetary cost and the opportunity cost of SpaceX's annual launch capacity.
Finally, the Starlink satellites have a roughly 5-year design lifetime, so it's not enough to build this colossally expensive telescope array once; you have to keep building and launching half a dozen replacements per day for as long as you want to continue using it. There's no way it would ever be cost-competitive with a ground-based observatory.
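The replacement-rate arithmetic behind "half a dozen per day" checks out:

```python
n_satellites = 12_000
design_lifetime_years = 5

# To keep the constellation at full strength, the whole fleet
# turns over once per design lifetime.
per_year = n_satellites / design_lifetime_years  # 2,400 satellites/year
per_day = per_year / 365

print(round(per_day, 1))  # roughly 6.6 replacement satellites per day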
As a younger developer I was part of a 3-person team that was interviewing people at a telecom. I mostly kept quiet and observed, since it was the first time I'd done this, but we had one member of the team who was bent on trying to show how smart he was.
My absolute favorite interview of the group, a guy who ended up getting hired elsewhere, was the most calm and professional developer I'd ever been around. Sitting in the room, I felt like he was interviewing us rather than the other way around. It was just in the way he carried himself and answered questions.
And then the guy on our team started asking questions. The gentleman we were interviewing, rather than answer, kept calmly asking more questions of him instead.
- Can you describe the situation where this would be used?
No
- Have you ever run into an issue like this before that could be an example?
No
- How do I know that by solving this problem we would be addressing the business or customer problem?
We don't
- Then shouldn't I be solving questions related to the work?
...
I loved that guy. Wish we could have hired him because I really wanted to work with him.
Back in 1996 I developed a TCP/IP stack from the ground up based on the DARPA and RFC specifications from that era. The impression I had from the quality and consistency was that TCP/IP was a student project. Delayed ACKs were described only briefly, as a concept in an addendum. Implementing them was a nightmare.
What happened was that with a bulk unidirectional flow (FTP), the window would slowly fill up until it was completely full, after which transport would grind to a standstill. Only when retransmit timers fired would transport resume at full speed, slowly filling the window again to repeat the cycle. As far as I can remember, this cycle repeated every 5 seconds or so.
Out of sheer frustration I logged all traffic with (then) high-resolution timestamps and hand-drew both sides of the connection on 132-column zigzag printer paper, which filled the corridor.
As it turned out, the protocol state timings assume that the transit time is 0 ms. For one-to-one REQ/ACK exchanges this isn't a problem; for delayed ACKs it was a game-breaker.
When a packet arrives, several tests are performed to determine whether it is within the sequence window; otherwise it is rejected. With delayed ACKs, the local bookkeeping no longer represented the actual state of the connection, triggering lots of false negatives.
The solution was to keep dual state information: one set holding the actual values for this endpoint, the other the projected/assumed state of the other end, taking into account that it is suppressing ACKs. Incoming packet headers are then tested against the projected other-end state, while outgoing packets are constructed using the this-end state.
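A minimal sketch of that dual-state idea (all names, and the simplified window test, are invented for illustration; this is not the original implementation):

```python
from dataclasses import dataclass

@dataclass
class RecvState:
    rcv_nxt: int        # next sequence number this end actually expects
    last_ack_sent: int  # highest ACK actually put on the wire
                        # (lags rcv_nxt while ACKs are being delayed)
    rcv_wnd: int        # advertised receive window

def acceptable_naive(st: RecvState, seq: int) -> bool:
    # Judges the segment against wire-visible state only. While ACKs are
    # suppressed, last_ack_sent lags rcv_nxt, so even in-order segments
    # can fall outside this window: the false negatives described above.
    return st.last_ack_sent <= seq < st.last_ack_sent + st.rcv_wnd

def acceptable_projected(st: RecvState, seq: int) -> bool:
    # Dual-state fix: test incoming headers against the projected state,
    # i.e. what the bookkeeping will say once the delayed ACK goes out.
    return st.rcv_nxt <= seq < st.rcv_nxt + st.rcv_wnd

# The peer has sent up to byte 2000, but we've only ACKed through 1000.
st = RecvState(rcv_nxt=2000, last_ack_sent=1000, rcv_wnd=500)
print(acceptable_naive(st, 2000), acceptable_projected(st, 2000))  # False True
```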
Performance went through the roof, filling the cable to 98% of the theoretical bandwidth, much more than the stacks implemented by competitors.
Sadly my employers did not allow me to publish the findings.
HN generally sets the expectation that downvotes are only for off-topic or unhelpful posts. Someone disagreeing productively should get your upvote - even if you still disagree with them.
Subreddits often become a way for mods to push their agenda. There are exceptions to this - /r/moderatepolitics has done a decent job of becoming a good place for across-the-aisle discussion, etc. Unfortunately even productive subreddits get raided by crazy people and extremists from time to time.
HN, by not having scores visible to anyone but you, prevents the "playing for internet points" game. As a user you have very little history and exposure to others, so each argument is generally standalone. On reddit, by contrast, someone will dig through my comment history and bring up the subreddits I'm on ("Oh, you claim to be a moderate so you're really just a nazi!") or stalk me; that kind of bad behavior is just not possible on something like HN.
Now they are free to spend their (free) time on what they want. When most of the core contributors were paid by Mozilla, they could not choose to, e.g., "focus on web compat", so they worked on something you consider useless, but that kept the project alive. That in turn allowed a few other things to be done, like the rewrite of the parallel layout engine.
Of course we can't know for sure what would have happened if they refused to work on VR, but my gut feeling is that this would not have helped the project.
They're not interested in "[providing] an independent, modular, embeddable web engine," they're interested in writing software in Rust and having their name associated with a Mozilla/Linux Foundation project. Go look at their governance.[1]
Their webpage tells you what they really care about, and it isn't embedding.
"It's not hard to get an account unbanned and/or unpenalized if you really want to use HN as intended."
For the most part, I DO use HN as intended, IN SPITE of your shadowbanning. You just refuse to read my posting history to see that much.
This clearly shows I have more integrity than you, as you won't dare unhide any of my decent comments, instead preferring to make yourself look the victim by only unbanning my comments which you can unban to make me look bad. You're the real issue here, not me.