Self-inflicted by who?


In case you can’t see, VerifiedReports states:

“Developers who ignored proper color-palette practices.”

Agreed. Please vouch for the comment.


> But it’s hard to deny scale: the US is a populous country. But there are more than ONE BILLION more people in China.

This is true for the US alone, but US+Canada+EU is on the same order of magnitude, all of which prefer a more-than-zero-trust situation.


Which is also very spread out, with an ocean bisecting it. Not exactly "skilled workers right across the street" for most of that land.


China is huge in area too. You don't have one billion people right across the street. Anything that is across the street is, by definition, a localized thing involving far fewer people, which you can totally have in US+CA+EU too.


Not just an ocean: the various governments and visa regimes make it hard to move the people you need.


Not just people, parts too. Importing parts from the USA to the UK often takes longer and has more paperwork than from China.


Also don't forget tolls/tariffs. Building products that rely on parts from different countries (at least from/to the USA) reduces margins compared to a unified Chinese market.


Visas are a political human construct subject to change, not an immovable force of nature. The same visa rules that keep workers out can always be removed or changed overnight to achieve the opposite effect: moving masses of skilled people in. See Operation Paperclip.


If you look at historic locations of hyperinnovation, a bunch of different factors show up. One of them is density of the activity and the supply chain. Imagine needing a new gearset for a robot: you roll down to Gear Set Alley looking for a used unit, hit 4 different robot wrecking yards, and explain your problem to 5 people along the way. They eventually point you to a machine shop that has modified an existing part into exactly what you are looking for.

You can solve in a day what might otherwise take 20 days, 3x the price, and a lead time of weeks. These kinds of hyperfocused, super-dense innovation zones have existed in many places throughout history.


A bit late, but I'll try...

This assumes that Gear Set Alley is near you. At >3km you'd go by car, and the "meet people along the way" effect breaks down. At >100km you'd rather call them, and you miss the wrecking yards and other factories nearby.

In China, Gear Set Alley may easily be 1000km from you. Or it may be <3km away, if you're lucky. The point is that China, taken as a whole, has no advantage over the EU or US in geographical proximity. Certain regions may have that advantage, but that is equally possible in every country on earth (beyond a certain, very tiny minimum size).


> The first thing in the decision tree is "do you send a crew?" and you're trading off the hard problem of teleoperating the thing (...)

Tele-operating a whole facility (and more) is something you can make huge improvements on down here on Earth, and even profit from, before you ever start into space.


Two things that come to my mind:

1. Sometimes "lock-free" actually means using lower-level primitives that use locks internally but don't expose them, with fewer caveats than using them at a higher level. For example, the compare-and-set instructions offered by CPUs, which may use bus locks internally but don't expose them to software (see the sketch below).

2. Depending on the lower-level implementation, a simple lock may not be enough. For example, in a multi-CPU system with weaker cache coherency, a simple lock will not get rid of outdated copies of data (in caches, queues, ...). I write "simple" lock here because some lock constructs, such as Java's "synchronized" statement, bundle the actual lock together with guaranteed cache synchronization, whether that happens in hardware or software.
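
To illustrate point 1, here is a minimal sketch in C11 of a lock-free counter built on a compare-and-swap loop. The CPU may take a bus or cache-line lock internally while executing the CAS, but no lock is ever visible to software:

    #include <stdatomic.h>

    static _Atomic long counter = 0;

    void increment(void) {
        long old = atomic_load(&counter);
        /* Retry until no other thread modified the value in between.
           On failure, the CAS reloads the current value into 'old'. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
            /* loop and try again */
        }
    }

No thread ever blocks while holding a lock here: if one thread stalls, the others still make progress, which is the progress guarantee that "lock-free" formally refers to.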


Reminder that "lock-free" is a term of art with a very specific meaning about progress guarantees and starvation-freedom, and has very little to do with locking as such.


I understood the criticism to be about describing it as open source when it isn't, i.e. that

"We say it's open source because we expect the reader to know that we're not telling the truth"

should be replaced by

"It's open source except for the BLE firmware blob, which can't be open source due to regulatory reasons."

To be fair, the article just repeated the claims made on the GitHub page for the SDK.


Some rambling...

I always wonder whether something like these undocumented opcodes could be used as a concept in more modern processors. Back then, transistors were a precious resource, and the result was these opcodes. Nowadays, instruction encoding space is more precious because of pressure on the instruction cache. Decoding performance might also be relevant.

The result of these thoughts is something I called "PISC", programmable instruction set computer, which basically means an unchanged back-end (something like RISC + vector) but a programmable decoder in front of it. So then different pieces of code could use different encodings, optimized for each case.

...which you get in RISC with subroutines + instruction cache, if you regard the CALL instructions as "encoded custom instructions", but not quite because CALLs waste a lot of bits, and you need additional instructions to pass arguments.

For pure RISC, all of this would at best take some pressure off the instruction cache, so it's probably not worth it. Might be more interesting for VLIW backends.
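
To make the idea concrete, here is a rough sketch in C of the software model of such a programmable decoder. All names and encodings are invented for illustration; a real decoder would be a hardware table, and would also have to describe operand field extraction:

    #include <stdint.h>

    /* Fixed back-end operations (RISC-like). */
    typedef enum { OP_ADD, OP_LOAD, OP_STORE, OP_MUL } BackendOp;

    typedef struct {
        BackendOp op;
        uint8_t   dst, src1, src2;
    } MicroOp;

    /* The loadable part: which back-end op each 8-bit custom
       opcode expands to. */
    static BackendOp decode_table[256];

    void load_decoder(const BackendOp table[256]) {
        for (int i = 0; i < 256; i++)
            decode_table[i] = table[i];
    }

    /* Decode one 16-bit custom instruction:
       8-bit opcode plus two 4-bit register fields. */
    MicroOp decode(uint16_t insn) {
        MicroOp u;
        u.op   = decode_table[insn >> 8];
        u.dst  = (insn >> 4) & 0xF;
        u.src1 = insn & 0xF;
        u.src2 = u.dst;   /* two-address form: dst doubles as a source */
        return u;
    }

Different pieces of code would call load_decoder with their own table before executing, trading a decoder-reload cost for denser encodings.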


ARM has the Thumb opcodes, which aren't your "PISC" concept (which is a limited form of loadable microcode, another thing that's been done) but special, shorter encodings of a subset of ARM opcodes, which the CPU recognizes when an opcode flips an internal bit. There's also Thumb-2, which has a mix of short (16-bit) and full-size (32-bit) opcodes to help fix some performance problems of the original Thumb concept:

https://developer.arm.com/documentation/dui0473/m/overview-o...

> ARMv4T and later define a 16-bit instruction set called Thumb. Most of the functionality of the 32-bit ARM instruction set is available, but some operations require more instructions. The Thumb instruction set provides better code density, at the expense of performance.

> ARMv6T2 introduces Thumb-2 technology. This is a major enhancement to the Thumb instruction set by providing 32-bit Thumb instructions. The 32-bit and 16-bit Thumb instructions together provide almost exactly the same functionality as the ARM instruction set. This version of the Thumb instruction set achieves the high performance of ARM code along with the benefits of better code density.

So, in summary:

https://developer.arm.com/documentation/ddi0210/c/CACBCAAE

> The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions. Thumb instructions are each 16 bits long, and have a corresponding 32-bit ARM instruction that has the same effect on the processor model. Thumb instructions operate with the standard ARM register configuration, allowing excellent interoperability between ARM and Thumb states.

> On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM instructions in real time, without performance loss.
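
As a concrete illustration of that decompression (not actual decoder code), here is the expansion for one instruction, ADDS Rd, Rn, Rm, using the published Thumb and ARM encodings:

    #include <stdint.h>
    #include <stdio.h>

    /* Expand 16-bit Thumb "ADDS Rd, Rn, Rm" (0001100 Rm Rn Rd)
       into the equivalent 32-bit ARM encoding
       (cond = AL, data-processing ADD with S = 1). */
    uint32_t thumb_add_reg_to_arm(uint16_t t) {
        uint32_t rm = (t >> 6) & 7;
        uint32_t rn = (t >> 3) & 7;
        uint32_t rd = t & 7;
        return 0xE0900000u | (rn << 16) | (rd << 12) | rm;
    }

    int main(void) {
        /* ADDS r2, r0, r1 in Thumb is 0x1842. */
        printf("%08X\n", thumb_add_reg_to_arm(0x1842)); /* E0902001 */
        return 0;
    }

Because every Thumb instruction has exactly one ARM counterpart, the expansion is a fixed combinational mapping, which is why it costs no performance.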

For an example of how loadable microcode worked in practice, look up the Three Rivers PERQ:

https://en.wikipedia.org/wiki/PERQ

> The name "PERQ" was chosen both as an acronym of "Pascal Engine that Runs Quicker," and to evoke the word perquisite commonly called a perk, that is an additional employee benefit


And for a description of how to build a computer with loaded microcode you could start with Mick and Brick [1] (PDF). It describes using AMD 2900 series [2] components, the main alternative at the time was to use the TI 74181 ALU [3] and build your own microcode engine.

[1] https://www.mirrorservice.org/sites/www.bitsavers.org/compon...
[2] https://en.wikipedia.org/wiki/AMD_Am2900
[3] https://en.wikipedia.org/wiki/74181


> On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM instructions in real time, without performance loss.

That quote is from the ARM7TDMI manual, the CPU used in the Game Boy Advance, for example. I believe later processors contained entirely separate ARM and Thumb decoders.


Sounds a lot like what Transmeta was doing.


> because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone

This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.

To emphasize the difference: Personalized answers would be about having a single question and giving different answers to different audiences. This is not at all the same as having two different _questions_.


>This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.

What you're missing: when a question is closed as a duplicate, the link to the duplicate target is automatically put at the top; furthermore, if there are no answers to the current question, logged-out users are automatically redirected to the target.

The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate actually find the target instead.

It's important here to keep in mind that the site's own search doesn't work very well, and external search doesn't understand the site's voting system. It happens all the time that poorly asked, hard-to-understand versions of a question nevertheless accidentally have better SEO. I know this because of years of experience trying to use external search to find a duplicate target for the N+1th iteration of the same basic question.

It is, in the common case, about personalized answers when people reject duplicates - because objectively the answers on the target answer their question and the OP is generally either refusing to accept this fact, refusing to accept that closing duplicates is part of our policy, or else is struggling to connect the answer to the question because of a failure to do the expected investigative work first (https://meta.stackoverflow.com/questions/261592).


> The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate actually find the target instead.

Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.

> ... because objectively the answers on the target answer their question ...

> ... because of a failure to do the expected investigative work first ...

Almost everybody describing their experience with duplicates in this comment section tells the story of a question where they had found the other question, linked it from their supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.

The expected investigative work HAS been done; they explained why the other question is NOT a duplicate. The key point is that all of this was ignored by the person closing the question.


> Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.

Here, for reference, is the entire sentence which kicked off the subthread where you objected to what I was saying:

> It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone.

In other words: I am defending "preventing answers to the question" for the exact reason that it probably actually really is a duplicate, according to how we view duplicates. As a reminder, this is in terms of what future users of the site will find the most useful. It is not simply in terms of what the question author thinks.

And in my years-long experience seeing appeals, in a large majority of cases it really is a duplicate; it really is clearly a duplicate; and the only apparent reason the OP objects is that it takes additional effort to adapt the answers to the exact situation motivating the original question. And I have absolutely seen this sort of "effort" boil down to things like a need to rename the variables instead of just literally copying and pasting the code. Quite often.

> Almost everybody describing their experience with duplicates in this comment section tells the story of a question where they had found the other question, linked it from their supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.

No, they do not. They describe the experience of believing that the other question is different. They don't even mention the answers on the other question. And there is nowhere near enough detail in the description to evaluate the reasoning out of context.

This is, as I described in other comments, why there is a meta site.

And this is HN. The average result elsewhere on the Internet has been worse.


> "It's not a good idea for me to be trying to kill demons while I'm driving,"

Driving was the first thing that came to my mind. It's also dangerous to drive when tired, distracted, drunk, or under one of a hundred other conditions. Yet somehow GPT is portrayed as the problem here when, in fact, driving a car is simply one of the most dangerous daily activities, even absurdly dangerous compared to other tasks.


The garbage that dominates the web has everything to do with centralization of power, and nothing to do with HTML vs. JS. The former is a people problem; the latter is just tech.


The rest of the world has decided that the web is for applications at least as much as it is for documents.

