There was a period of like 2 years when I was a kid when Chuck Norris jokes were all the rage on the playground, and I made an iPhone app that listed them all.
Jokes like “Chuck Norris is able to slam a revolving door.”
Anyway, I “built” this stupid app when I was like 13, copy-pasted like 300 jokes in there and a random one would show every time you tapped the screen.
Chuck Norris’s estate blocked the app from going live. I wish I had printed that rejection out and framed it.
When I visited LA, I rode in a Waymo going the speed limit in the right lane on a very busy street. The Waymo approached an intersection where it had the right of way, when suddenly a car ignored its stop sign and drove into the road.
In less than a second, the Waymo moved into the left lane and kept going. I didn't even realize what was happening until after it was over.
Most human drivers would've t-boned the car at 50+ km/h. Maybe they would've braked and reduced the impact, which would be the right move. A human swerving probably would've overshot into oncoming traffic. Only a robot could've safely swerved into another lane and avoided the crash entirely.
Unfortunately, the Waymo only supported Spotify and did not work with my YouTube Music subscription, so I was listening to an advertisement at the time of my near-death experience. 4.5 stars overall.
I used to work at a startup that was trying to replace ads as the funding source for news (we failed, obviously)
but the crazy thing we discovered is that the people who run news websites mostly don’t know where their ads are coming from, have forgotten how the ad system was installed in the first place, and cannot turn them off if they try
we actually shipped a server-side ad blocker, for a partner who had so completely lost control of their own platform that it was the only way to make the ads stop
I was a developer at Iris Associates--I worked on versions 2 through 4. For version 3 I stuck in an easter egg in the About box. A certain combination of keys would produce a Monty-Python-like cut-out of Ray Ozzie's head and the names of the developers would fly out of his mouth. [This was when the software world was young and innocent and developers were trusted far beyond what they probably should have been.]
Lotus Notes was, I firmly believe, a glimpse of the future to come. In 1996, Lotus Notes had encrypted messaging, shared calendars, rich-text editing, and a sophisticated app development environment. I had my entire work environment (email, calendar, bugs database, etc.) fully replicated on my computer. I could do everything offline and later, replicate with the server.
And this was two years before the launch of Google and eight years before GMail!
In the article, the author speculates that the simplicity of the Lotus Notes model--everything is a note--caused it to become too complicated and too brittle. I don't think that's true.
Lotus Notes died because the web took over, and the web took over because it was even simpler. Lotus Notes was a thick client and a sophisticated server. The web is just a protocol. Even before AI, I could write a web server in a weekend. A browser is harder, but browsers are free and ubiquitous.
The web won because it could evolve faster than Lotus Notes could. And because it was free. The web won because it was open.
I will say this is one of the few pieces of prose I've read that was AI generated that didn't immediately jump out as it (a couple of inconsistencies eventually grabbed me enough to come to the comments and see your post details which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story
As a Microsoftie of more than a decade... Yeah, I see this.
We have an internal system called Cosmos[0] that does a great job of processing huge quantities of data very fast. And we sat on it for years while the rest of the industry moved to Spark and its derivatives. We finally released it as Azure Data Lake Analytics (ADLA) but did a shit job of supporting/promoting it.
We built Synapse, and it's garbage. We've now got Fabric which I guess is the new Synapse. I wouldn't really know because I probably have five different systems that I use that basically do large-scale data processing, and yet Fabric isn't one of them; who knows, maybe it will become the sixth?
We've had numerous internal systems for orchestrating jobs, and it wasn't until Azure Data Factory that we finally released something externally that we sort-of-kind-of-but-not-really use internally. (To be fair, some teams do use it internally, but we're not all rowing in the same direction.)
I regularly deal with multiple environments with different levels of isolation for security. I don't even know how it's all supposed to work -- I have my regular laptop and a secure workstation and three accounts that work on the two. Yet I have to do some privileged account escalation to activate these roles; when I'm done, there's no apparent way to end the activation early, so I just let it time out.
These things are but a fraction of the Azure offerings, but literally everything I have used in Azure makes me absolutely HATE working in the cloud. There's not a single bright side to it AFAICT. As best as I can tell, the only reason why Azure makes so much damn money is because Microsoft is huge and can leverage its size into growth. We're very much failing up here.
> The Xbox One has been emulated though (well not emulated, it's a compatibility layer like Wine).
The parenthetical is not needed. It is OK to call Wine an emulator. The "Wine Is Not an Emulator" thing came about later and was essentially a marketing change. How it came about is interesting.
The first suggestion to change the meaning of the name from a shortening of "Windows emulator" to the "not an emulator" backronym came in 1993, over concern that "windows emulator" might run into problems with Microsoft trademarks, but no action was taken.
Over time the "not an emulator" usage became an accepted alternative. The Wine FAQ in late 1997, for example, said:
The word Wine stands for one of two things: WINdows
Emulator, or Wine Is Not an Emulator. Both are right.
Use whichever one you like best.
The release notes stopped calling it an emulator at the end of 1998. The 981108 release notes said:
This is release 981108 of Wine, the MS Windows emulator.
The 981211 release notes said:
This is release 981211 of Wine, a free implementation of
Windows on Unix.
As far as I can tell, from my recollections of that time and from what I was able to find when I looked into it later, this happened for two reasons.
1. Wine was useful for more than just running Windows binaries on Unix. It could also be used as a library you could link with code compiled on Unix, as an aid to porting Windows programs to Unix.
2. Hardware emulators that emulate old systems like the GameBoy or Apple II had become popular. Many people were only familiar with that kind of emulator, and those (the emulators, not the people!) tended to be slow.
That was fine when your emulator was running on a machine with a clock speed 300x that of the machine you were emulating, and with a much more efficient CPU, but when you tried to use a hardware emulator for something comparable to your own machine it was usually unbearably slow.
People only familiar with such hardware emulators might see Wine described as a Windows emulator, assume it was doing hardware emulation, and not even give it a try. By no longer calling itself an emulator, Wine sidestepped that problem.
My uncle Richard is one of the inventors on Honeywell's early phase-detect autofocus (patent US4333007A), which figures out both the direction and the amount the lens needs to move instead of hunting.
Modern systems like Canon’s Dual Pixel AF in bodies such as the EOS R5 are very direct descendants of that idea, just implemented on‑sensor with far more processing power.
Every time I see an article such as this, I beam with pride (pun intended).
I remember an anecdote our robotics lecturer told our university class in 1995, which was about how in the west we try to make expensive things that are the absolute best of technology and how the other side didn't have that luxury and relied on ingenuity.
He described a cold war Russian missile they had somehow obtained and were tasked with trying to reverse engineer. Ostensibly, it was thought to be a heat seeking missile, but there seemed to be no control or guidance circuitry at all. There was a single LDR (light dependent resistor) attached to a coil which moved a fin. That was it. Total cost for the guidance system maybe a couple of dollars, compared to hundreds of thousands for the cheapest guidance systems we had at the time.
The key insight was that if you shined a light at it, the fin moved one way and if there was no light the fin moved the opposite way. That still didn't explain how this was able to guide a missile, but the next realisation was that the other fins were angled so when this was flying (propelled by burning rocket fuel), the missile was inherently unstable - rotating around the axis of thrust and wobbling slightly. With the moveable fin in place, it was enough to straighten it up when it was facing a bright light, and wobble more when there was no bright light. Because it was constantly rotating, you could think of it as defaulting to exploring a cone around its current direction, and when it could see a light it aimed towards the centre of that cone. It was then able to "explore the sky" and latch on to the brightest thing it could see, which would hopefully be the exhaust from a plane, and so it would be able to lock on, and adjust course on a moving target with no "brain" at all.
This is really funny. My wife and I watched all of New Scandinavian Cooking over a few months and there was an episode where he made butter. It blew our minds at how simple it was. We had no idea!
So we bought a couple of liters of cream (35% fat), put it in the stand mixer and made butter. There's a Serious Eats page about it.
The butter we made was better than what we normally buy. We live in Switzerland so the normal grocery store butter is very good. Our butter had less water in it (you can tell in a frying pan) and more flavor. Plus we take the resulting buttermilk and make ricotta cheese and then we take the leftover whey and make Norwegian cheese (more like fudge). So we get three products from one batch of cream. The butter comes out to be about 20 cents cheaper per 250g than store bought and then the ricotta and "fudge" are free, so financially you come out ahead. The cleanup is a bit of a pain though.
We've also made cultured butter from crème fraiche. It's tasty but even when the crème fraiche is on sale it's still like 2x the cost of using cream so probably not worth it other than gifts and special occasions. We made mandarin sorbet with the sour buttermilk after the crème fraiche butter and that was excellent.
When I tell old Swiss people (people in their 70s/80s) that we make butter they think it's hilarious. They tell me about how when they were kids their parents made their own butter and also at parties/gatherings the parents would give the kids a jar of cream and it was their job to shake it and pass it around until it was butter.
If you have an hour on the weekend and you have a stand mixer, I suggest just trying it. Start with the balloon whisk and, when the peaks start forming, switch to the paddle. Watch it, because when the butter forms it happens quickly and you get a big clump of butter rattling around in the mixer, knocking it off balance. It takes maybe 25 minutes and then you have to wash it in ice water, mold it, then clean up. About an hour.
That was back when Altavista, one of the first search engines, was in downtown Palo Alto.
Brian Reid was behind that. It was intended as a demo for the DEC Alpha CPU. They wanted to show that a large number of little machines could do a big job, which was a radical idea at the time.
They were leasing an old telco building, on Bryant St. behind the Walgreens on University Avenue. The telco had moved to a larger building nearby when they went from crossbar to 5ESS, leaving behind the very tall racks typical of electromagnetic central offices.
That's where the modern data center began. Before this, data centers were raised floor operations. This one was racks and racks of identical servers, with cable trays overhead. This was the first one to look like a telephone central office. Because that's what it was before.
The building is still some kind of data center. For a while, it was PAIX, the Palo Alto Internet Exchange, the peer meeting point for west coast ISPs. Equinix has it now; it's their SV8 location, offering colocation services. Small by modern standards, but close to the early HQs of many famous startups, including Facebook.
The grease problem was written up in the local newspaper, back when Palo Alto had one. Palo Alto Utilities (the city owns its power company) got the report, and quickly realized someone was dumping grease into their transformer vault. So they put someone on stakeout, watching all night. The offending restaurant employee was caught. The restaurant was fined and billed for the cleanup.
In 2006, there was another grease dumping incident in a transformer vault a block further north. This one did result in a grease fire.[1]
Palo Alto Fire Department has a CO2 truck, and dumped enough CO2 in to put out the fire. Power was out for most of the night.
One of my favorite quotes:
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.”
I think about this a lot because it’s true of any complex system or argument, not just software.
He came to give a lecture at UT Austin, where I did my undergrad. I had a chance to ask him a question: "what's the story behind inventing QuickSort?". He said something simple, like "first I thought of MergeSort, and then I thought of QuickSort" - as if it were just natural thought. He came across as a kind and humble person. Glad to have met one of the greats of the field!
Fun story - at Oxford they like to name buildings after important people. Dr Hoare was nominated to have a house named after him. This presented the university with a dilemma of having a literal `Hoare house` (pronounced whore).
I can't remember what Oxford did to resolve this, but I think they settled on `C.A.R. Hoare Residence`.
It looks like this is a community continuation of the axlsx gem which was maintained back in the day by Randy Morgan (randym) over at https://github.com/randym/axlsx. One of my earliest open source contributions was adding support so that you could output spreadsheets with "conditional" formatting (color something red if it is below some value, for instance). I remember Randy being extremely supportive of new contributors and it made me want to be a part of the ruby community.
FidoNet was a simply wonderful innovation, and it was a reflection of the creativity of its author - Tom Jennings - and his views of community and identity.
https://grokipedia.com/page/tom_jennings
Tom was working on FidoNet in 1984, the same time my Iris co-founders and I had begun work on what became Lotus Notes. Architecturally, those of us who were working on collaborative systems in that era were shaped by the decentralized architecture of USEnet - inspired and motivated by the observation that a community could be brought together by something technologically as simple as uucp.
Both dial-up focused, Tom took this in the direction of a decentralized BBS, while I took it in the direction of masterless replicated nosql databases we called 'notefiles'. Identity being at the core, Tom was focused more on public community while we focused on private collaboration.
It was such an exciting time for emergent decentralization, shaped by a strong dose of 60's idealism.
I have such strong nostalgia for that era, but man, every time I try to go back and experience a BBS like this, it just feels so empty. There really isn't a way to experience the feelings back then. I admire them for keeping it alive, but the magic was long ago dispelled by ubiquitous internet connectivity.
I can go play a retro videogame and be taken back, but I've never felt that way with a BBS. Maybe it's just the intensity of what the BBS world was back then. It was a way into another world.. an exclusive world.. the first taste of digital life, long before it was taken over by the masses. An intimate community, but also a gateway to esoteric and faraway lands.
I was 12 when I got my first modem in '87. Suddenly I was no longer trapped in my town but connected to something secret yet global. Sure, long-distance charges kept things local for the most part, but it wasn't long before I found a way around that. Stolen calling cards, open PBXes, then Tymnet/Telenet and then in '90 an internet gateway of a local university. Wardialing, finding strange systems in the night... poking around until something gave way. Arrested. Reset. Probation. No computers. It all came to a halt. Then one day at Boeing Surplus I found an old green screen terminal and a 300 baud acoustic modem. Back online.. but the world began to change. MBBS, multi-line systems, and the world began to open. The world wide web began to take shape, Yahoo awoke, and the old steamship rolled into port for the last time.
The few times I've baked there, it's been a pretty good experience. There's a full height proving cabinet, yeast works really well at altitude, the ovens have steam injectors, there are good mixers, a commercial fryer. In many ways much easier than baking at home, but probably not a patch on a good bakery.
We almost ran out of sugar in 2021 and Rothera sent us a bag of Tate and Lyle in a break-glass-in-emergency box on one of the early transit flights the following summer. That's still hanging in the galley. Cream also goes pretty quickly, and forget about eggs. But you only need "egg product" anyway.
The foods that tend to be avoided are pasta and beans, or really anything which has to be boiled. There's a massive pressure cooker but it's a pain to use and clean. It's also hard to brew coffee if you tend to use water straight off the boil; the best you'll get is about 93 C. Espresso is fine as it's pressurised anyway.
Jay here: this is a transition I've been working towards for awhile, and I'm looking forward to advancing the vision and ecosystem as CIO (Chief Innovation Officer). Toni has been an advisor to us for years, and I personally recruited him to take over as CEO while I focus on new projects within the company. It's an honor to have him on board to lead us into this next stage of growth.
I think Steve Lemay is a good guy. I kind of fought with him when I was an engineer and he was a young, new designer (at Apple). But I always respected his point of view—even when we argued.
When Jobs came back to Apple in the late 1990s, "Design" slowly came to have an outsized role. I was one half of the engineering team that owned Preview (the application) when Steve Lemay became a seemingly regular presence in the hallway. As the new "Aqua" UI elements like the "drawer" and toolbar arrived in the OS, Steve and his boss (forgetting his name right now—Greg Somebody?) were often making calls about our UI implementation.
The bigger argument I remember with Steve revolved around the drawer UI element. With regard to PDFs (the half of Preview that I worked on; another engineer handled images), the drawer was to display thumbnails for each page. If the PDF had a TOC (table of contents), the drawer is where we would display that as well.
So when you opened a PDF in Preview, the PDF content of course would appear in the large window—thumbnails, TOC (later search) would be relegated to a vertical strip of drawer real estate alongside the window—the user could open/close the drawer if they liked to focus perhaps on the content.
Steve Lemay insisted the drawer live on the right side of the window [1]. This was inexplicable to me. I saw the layout of Preview as hierarchical: the left side of the content driving the right side. You click a thumbnail on the left (in the drawer) the window content on the right changes to reflect the thumbnail clicked on.
Steve said, no, drawer on the right.
"Why? Why the hell would we do that?"
Steve was quick: "The Preview app is about the content. The content is king."
I admit that I still disagreed with him after the exchange, but I had a new respect for him as a designer because he was able to articulate a rationale for his decision. I suppose I was prejudiced to expect hand-waving from designers.
(Coda: some years later, after I had left the Preview team, an engineer still on the app let me know that the thumbnails, etc., were at last moving to the left side of the app. The "drawer" as a UI element had by this time gone away: resizable split-views were the replacement.)
(Addendum: Steve also invented the early Safari URL text field that also doubled as a progress bar. Instant hate from me when I saw it: it was as if the text of the URL you entered was being selected as the page loaded. So I'm old-school and Steve had some new ideas…)
[1] Localization was such that in countries where right-to-left was dominant, the drawer would of course follow suit.
I grew up in a small village on a small island.
The yogurt lady was an essential part of the community.
Many stay-at-home moms (including my mom) seemed to enjoy her visit.
She and my mom talked a lot, sometimes for hours (I still can't figure out how she completed her job when she spent so much time with one person).
They chatted about recent events (the fisherman's daughter gave birth, the liquor shop's great-grandpa died of cancer, a newly opened restaurant in the nearest town sucked) and sometimes even shared personal struggles or family matters.
It really helped a lot of people combat mental struggles caused by the isolation of being traditional stay-at-home wives in a super rural area.
The only downside was anything you shared with her would be spread in the entire village before dawn.
Depending on the author 17th century English can also be very close to modern English. A couple phrases will be off and the spelling is different, but most of the difficulty is more the author using constructions that have fallen out of use or "showing off" with overly complicated sentences.
For example here's an excerpt from 1688's "Oroonoko"
I have often seen and convers'd with this great Man, and been a Witness to many of his mighty Actions; and do assure my Reader, the most Illustrious Courts cou'd not have produc'd a braver Man, both for Greatness of Courage and Mind, a Judgment more solid, a Wit more quick, and a Conversation more sweet and diverting. He knew almost as much as if he had read much: He had heard of, and admir'd the Romans; he had heard of the late Civil Wars in England, and the deplorable Death of our great Monarch; and wou'd discourse of it with all the Sense, and Abhorrence of the Injustice imaginable. He had an extream good and graceful Mien, and all the Civility of a well-bred great Man.
I've told this story before on HN, but my biz partner at ArenaNet, Mike O'Brien (creator of battle.net) wrote a system in Guild Wars circa 2004 that detected bitflips as part of our bug triage process, because we'd regularly get bug reports from game clients that made no sense.
Every frame (i.e. ~60FPS) Guild Wars would allocate random memory, run math-heavy computations, and compare the results with a table of known values. Around 1 out of 1000 computers would fail this test!
We'd save the test result to the registry and include the result in automated bug reports.
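The mechanism can be sketched roughly like this (not ArenaNet's actual code; the kernel, constants, and trial counts below are invented for illustration, and the real system compared results against a precomputed table of known values rather than a reference run):

```python
def heavy_compute(seed: int = 0x1234, rounds: int = 100_000) -> int:
    """Deterministic math-heavy kernel: the same inputs must always
    produce the same 64-bit result on healthy hardware."""
    state = seed
    acc = 0
    for _ in range(rounds):
        state = (state * 1664525 + 1013904223) & 0xFFFFFFFF    # 32-bit LCG step
        acc = (acc ^ (state * 2654435761)) & 0xFFFFFFFFFFFFFFFF
        acc = ((acc << 7) | (acc >> 57)) & 0xFFFFFFFFFFFFFFFF  # 64-bit rotate-left
    return acc

def hardware_ok(trials: int = 5) -> bool:
    """Re-run the kernel several times; any mismatch between runs
    suggests a transient fault (overheating, marginal RAM, weak PSU)."""
    reference = heavy_compute()
    return all(heavy_compute() == reference for _ in range(trials))
```

The point is that the computation is pure arithmetic with a fixed answer, so any divergence can only come from the hardware, not the inputs.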
The common causes we discovered for the problem were:
- overclocked CPU
- bad memory wait-state configuration
- underpowered power supply
- overheating due to under-specced cooling fans or dusty intakes
These problems occurred because Guild Wars was rendering outdoor terrain, and so pushed a lot of polygons compared to many other 3d games of that era (which can clip extensively using binary-space partitioning, portals, etc. that don't work so well for outdoor stuff). So the game caused computers to run hot.
Several years later I learned that Dell computers had an unreasonably high rate of analog component failures because Dell sourced the absolute cheapest parts for their computers; I expect that was also a cause.
And then a few more years on I learned about RowHammer attacks on memory, which was likely another cause -- the math computations we used were designed to hit a memory row quite frequently.
Sometimes I'm amazed that computers even work at all!
Incidentally, my contribution to all this was to write code to launch the browser upon test-failure, and load up a web page telling players to clean out their dusty computer fan-intakes.
In the 1990s, in the UK, my secondary school English teacher, who had Shakespearian-actor vibes and wore dark tweed trousers and a plain white shirt (imagine Patrick Stewart, if you will), brought this poem to life in my class by vividly reenacting a soldier dying from mustard gas poisoning: he fell onto a desk and flailed about in front of the stunned students sitting at it. I've never forgotten the closing line since.
I attached a generator with some supercaps and an inverter to a stationary bicycle a few years ago, and even though I mostly use it as a way to feel less guilty watching YouTube videos, it does give me a quite literal feel for some of the items on the lower end of the scale.
- Anything even halfway approaching a toaster or something with a heater in it is essentially impossible (yes, I know about that one video).
- A vacuum cleaner can be run for about 30 seconds every couple minutes.
- LED lights are really good, you can charge up the caps for a minute and then get some minutes of light without pedaling.
- Maybe I could keep pace with a fridge, but not for a whole day.
- I can do a 3D printer with the heated bed turned off, but you have to keep pedaling for the entire print duration, so you probably wouldn't want to do a 4 hour print. I have a benchy made on 100% human power.
- A laptop and a medium sized floor fan is what I typically run most days.
- A modern laptop alone, with the battery removed and playing a video is "too easy", as is a few LED bulbs or a CFL. An incandescent isn't difficult but why would you?
- A cellphone you could probably run in your sleep
Also gives a good perspective on how much better power plants are at this than me. All I've made in 4 years could be made by my local one in about 10 seconds, and cost a few dollars.
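For fun, a rough sanity check of that last comparison; every figure below is my own guess, not a number from the post:

```python
# All figures are assumptions for illustration only.
HUMAN_WATTS = 100        # sustainable output for a casual rider
SESSIONS = 1000          # rides over roughly 4 years
SESSION_HOURS = 1.0      # one hour per ride
PLANT_WATTS = 50e6       # a modest 50 MW local power plant

human_joules = HUMAN_WATTS * SESSIONS * SESSION_HOURS * 3600  # 360 MJ = 100 kWh
plant_seconds = human_joules / PLANT_WATTS
print(plant_seconds)  # ~7 seconds: the "about 10 seconds" claim checks out
```

At US retail rates, 100 kWh is also only ten or twenty dollars of electricity, consistent with the "few dollars" figure.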
As others have said, keep the battery in the 30%-80% range. Use the `batt` CLI tool to hard-limit your max charge to 80%. Sadly, if you're already down to <2hrs, this might not make sense for you. Also prevent the machine from being exposed to very hot or cold temps (even when not in use).
I type this from an M3 Max 2023 MBP that still has 98% battery health. But admittedly it's only gone through 102 charge cycles in ~2 years.
(use `pmset -g rawbatt` to get cycle count or `system_profiler SPPowerDataType | grep -A3 'Health'` to get health and cycles)
These sorts of core-density increases are how I win cloud debates in an org.
* Identify the workloads that haven't scaled in a year. Your ERPs, your HRIS, your dev/stage/test environments, DBs, Microsoft estate, core infrastructure, etc. (EDIT, from zbentley: also identify any cross-system processing where data will transfer from the cloud back to your private estate to be excluded, so you don't get murdered with egress charges)
* Run the cost analysis of reserved instances in AWS/Azure/GCP for those workloads over three years
* Do the same for one of these high-core "pizza boxes", but amortized over seven years
* Realize the savings to be had moving "fixed infra" back on-premises or into a colo versus sticking with a public cloud provider
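As a toy version of that math (every figure below is invented for illustration; real reserved-instance quotes, server prices, and colo fees vary enormously by region and spec):

```python
YEARS = 3                    # horizon matching a 3-yr reserved-instance term
CLOUD_MONTHLY = 2_500        # $/mo, hypothetical reserved-instance rate
BOX_PRICE = 45_000           # $ up front for one high-core 2U server
BOX_LIFE_MONTHS = 7 * 12     # amortize the box over seven years
ONPREM_MONTHLY_OPEX = 600    # $/mo colo space, power, support contract

months = YEARS * 12
cloud_total = CLOUD_MONTHLY * months
onprem_total = (BOX_PRICE / BOX_LIFE_MONTHS + ONPREM_MONTHLY_OPEX) * months
print(cloud_total, round(onprem_total))  # 90000 vs ~40886
```

The gap between those two totals, for workloads that never scale, is the whole argument.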
Seriously, what took a full rack or two of 2U dual-socket servers just a decade ago can be replaced with three 2U boxes with full HA/clustering. It's insane.
Back in the late '10s, I made a case to my org at the time that a global hypervisor hardware refresh and accompanying VMware licenses would have an ROI of 2.5yrs versus comparable AWS infrastructure, even assuming a 50% YoY rate of license inflation (this was pre-Broadcom; nowadays, I'd be eyeballing Nutanix, Virtuozzo, Apache Cloudstack, or yes, even Proxmox, assuming we weren't already a Microsoft shop w/ Hyper-V) - and give us an additional 20% headroom to boot. The only thing giving me pause on that argument today is the current RAM/NAND shortage, but even that's (hopefully) temporary - and doesn't hurt the orgs who built around a longer timeline with the option for an additional support runway (like the three-year extended support contracts available through VARs).
If we can't bill a customer for it, and it's not scaling regularly, then it shouldn't be in the public cloud. That's my take, anyway. It sucks the wind from the sails of folks gung-ho on the "fringe benefits" of public cloud spend (box seats, junkets, conference tickets, etc...), but the finance teams tend to love such clear numbers.
A couple years back John Reilly posted on HN "How I ruined my SEO" and I helped him fix it for free. He wrote about the whole thing here: https://johnnyreilly.com/how-we-fixed-my-seo
Happy to do the same for you if you want.
The quickest win in your case: map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it. "Hey, you covered NanoClaw but linked to a fake site, here's the real one." You'd be surprised how many will actually swap the link. That alone could flip things.
Beyond that there's some technical SEO stuff on nanoclaw.dev that would help - structured data, schema, signals for search engines and LLMs. Happy to walk you through it.
update: ok this is getting more traction than I expected so let me give some practical stuff.
1. Google Search Console - did you add and verify nanoclaw.dev there? If not, do it now and submit your sitemap. Basic but critical.
2. I checked the fake site and it actually doesn't have that many backlinks, so the situation is more winnable than it looks.
3. Your GitHub repo has tons of high quality backlinks which is great. Outreach to those places, tell the story. I'm sure a few will add a link to your actual site. That alone makes you way more resilient to fakers going forward. This is only happening because everything is so new. Here's a list with all the backlinks pointing to your repo:
4. Open social profiles for the project - Twitter/X, LinkedIn page if you want. This helps search engines build a knowledge graph around NanoClaw. Then add Organization and sameAs schema markup to nanoclaw.dev connecting all the dots (your site, the GitHub repo, the social profiles). This is how you tell Google "these all belong to the same entity."
5. One more thing - you had a chance to link to nanoclaw.dev from this HN thread but you linked to your tweet instead. Totally get it, but a strong link from a front-page HN post with all this traffic and engagement would do real work for your site's authority. If it's not crossing any rule (specific use case here so maybe check with the mods haha), drop a comment here with a link to nanoclaw.dev. I don't think anyone here would mind if it gets you a few steps closer to beating that fake site.
Oh, this is really interesting to me. This is what I worked on at Amazon Alexa (and have patents on).
An interesting fact I learned at the time: The median delay between human speakers during a conversation is 0ms (zero). In other words, in many cases, the listener starts speaking before the speaker is done. You've probably experienced this, and you talk about how you "finish each other's sentences".
It's because your brain is predicting what they will say while they speak, and processing an answer at the same time. It's also why, when they say something you didn't expect, you say "what?" and then answer half a second later, once your brain corrects.
Fact 2: Humans expect a delay from their voice assistants, for two reasons. One is that they know it's a computer that has to think. The other is cell phones: cell phones have a built-in delay that breaks human-to-human speech timing, and your brain thinks of a voice assistant like a cell phone.
Fact 3: Almost no response from Alexa is under 500ms. Even the ones that are served locally, like "what time is it".
Semantic end-of-turn is the key here. It's something we were working on years ago, but didn't have the compute power to do it. So at least back then, end-of-turn was just 300ms of silence.
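A silence-based end-of-turn detector of that sort can be sketched in a few lines (frame length, threshold, and energy units here are illustrative choices of mine, not Alexa's actual values):

```python
FRAME_MS = 20            # one energy value per 20 ms audio frame
SILENCE_MS = 300         # declare end-of-turn after this much silence
ENERGY_THRESHOLD = 0.01  # below this, a frame counts as silent

def end_of_turn_index(frame_energies):
    """Return the frame index at which end-of-turn fires,
    or None if the speaker never pauses long enough."""
    needed = SILENCE_MS // FRAME_MS   # 15 consecutive quiet frames
    quiet = 0
    for i, energy in enumerate(frame_energies):
        quiet = quiet + 1 if energy < ENERGY_THRESHOLD else 0
        if quiet >= needed:
            return i
    return None
```

The weakness is obvious from the code: a mid-sentence pause of 300ms fires exactly like a finished sentence, which is why semantic detection is the real prize.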
This is pretty awesome. It's been a few years since I worked on Alexa (and everything I wrote has been talked about publicly). But I do wonder if they've made progress on semantic detection of end-of-turn.
Edit: Oh yeah, you are totally right about geography too. That was a huge unlock for Alexa. Getting the processing closer to the user.
Author of LFortran here. The historical answer is that both LFortran and Flang started the same year, possibly the very same month (November 2017), and for a while we didn't know about each other. After that both teams looked at the other compiler and didn't think it could do what they wanted, so continued on their current endeavor. We tried to collaborate on several fronts, but it's hard in practice, because the compiler internals are different.
I can only talk about my own motivation to continue developing and delivering LFortran. Flang is great, but on its own I do not think it will be enough to fix Fortran. What I want as a user is a compiler that is fast to compile itself (under 30s for LFortran on my Apple M4, and even that is at least 10x too long for me, but we would need to switch from C++ to C, which we might later), that is very easy to contribute to, that can compile Fortran codes as fast as possible (LLVM is unfortunately the bottleneck here, so we are also developing a custom backend that does not use LLVM that is 10x faster), that has good runtime performance (LLVM is great here), that can be interactive (runs in Jupyter notebooks), that creates lean (small) binaries, that fully runs in the browser (both the compiler and the generated code), that has various extensions that users have been asking for, etc. The list is long.
Finally, I have not seen Fortran users complaining that there is more than one compiler. On the contrary, everybody seems very excited that they will soon have several independent high-quality open source compilers. I think it is essential for a healthy language ecosystem to have many good compilers.