
> attitude towards user experience was more prevalent in mainstream OSs.

> It's little things like errors automatically prompting you to open a graphical debugger

If this is the concept of "mainstream" and sensible "user experience" it sounds like Haiku is a complete bunch of horseshit. (And this feature is easily available in Windows anyway)


developers are users too my dude


But, but -- the New York Times has been saying this is just fine for over a year:

https://www.nytimes.com/wirecutter/blog/picking-smart-deadbo...

Notice that they essentially rake (with a godawful demonstration of "picking") what looks like a Level lock, then draw the conclusion that all locks are pickable because even an idiot who doesn't know the difference between picking and raking can do it.


I mean, he could have watched two videos from the LPL channel as research and he'd know that not all locks can just be opened with a rake tool.


Oh this again. The idiots at the WireCutter already covered this and they think it's just fine.

Mass market smart locks are a dumpster fire, and they have the full support of the mass media. The #1 Wirecutter smart lock pick has this exact same flaw: a shitty cylinder that is prone to trivial bumping and that could easily be improved; the cheapest big-box deadbolts have better cylinders. Someone had commented on that article (since removed) about this flaw, and the moron behind the Wirecutter smart-home section has just gone on and on about how this isn't a real problem, blah, blah. He even had the audacity to post a video of (I'm guessing) himself demonstrating a poor raking attempt on one of these crappy locks to "prove" how "easy" locks are to pick. Yes, you dumbass, if you only use shitty locks with no protection.

He followed up with the gem that "the point of the lock picking lawyer is to show that any lock can be picked." Again, no, idiot: the LPL does a tremendous amount of education to expose trivial weaknesses in particular products that can and should be addressed, and he often points out when products provide a reasonable level of security for the price. The Level lock and the Ultraloq (the NYT top pick) do not. Fuck Wirecutter and the New York Times; enjoy your referral money, you dishonest pricks. (U-Tec's customer service is also shit tier, something they even admit in their own review.)

https://www.nytimes.com/wirecutter/blog/picking-smart-deadbo... The premise of this article seems to be: there are some high-priced smart locks (that we promote) that are easy to bump open, therefore all locks are easy to bump, and you shouldn't factor pick resistance into your buying decision at all. And also: raking and bumping are in the same class of difficulty as pin picking (this difference has been explained to them; this is not for lack of information).

https://www.nytimes.com/wirecutter/reviews/the-best-smart-lo...

Actually, is that the Level lock they demonstrated on back in October? Notice that it isn't a Schlage; I'd like to see him try that.

Another thing: they keep bringing up this ANSI Grade 1 nonsense. I have looked, and I cannot find any evidence of third-party certification for their top pick. It's all bullshit.

https://buildershardware.com/Certification-Program/Certified...


You have to keep in mind that any of these "review" websites are in it to maximize their revenue with referrals. They will spout whatever it takes to get you to click on the links that get them the largest kickbacks. They certainly aren't going to come out and say, "All smart locks are terrible, don't buy one."


Wirecutter definitely has written articles saying they don't recommend anything in a category in the past. That's the reason people trust them!


> Mostly everyone else seems to define the bitness of CPUs by their capacity to add numbers in one go, not by the address space they can address.

Nah, only historically. Yes, "8-bit" refers to the ALU, and back in the day "16-bit" was similar. But even then it was muddy, because when talking about operating systems like Unix or NT, the key question about bitness was whether you had a 32-bit flat addressing model, not really the data width.

By the time the 64-bit era rolled around, "64 bits" definitely referred to address space. The original Pentium had a 64-bit data path and later gained instructions (MMX, e.g. PADDQ) that could operate on 64-bit numbers, yet no one would call it 64-bit. By the early 2000s, with the big push to mainstream 64-bit, it was all about breaking out of the 4GB address space limitation, not the width of data.

> define the bitness of CPUs by their capacity to add numbers in one go

This is nebulous and therefore troublesome to define. Are we talking about the ISA or the internal circuitry (ALU and/or data path)? Is the 68000 a 16-bit or 32-bit CPU?


> By the time the 64-bit era rolled around, "64 bits" definitely referred to address space. The original Pentium had a 64-bit data path and later gained instructions (MMX, e.g. PADDQ) that could operate on 64-bit numbers, yet no one would call it 64-bit. By the early 2000s, with the big push to mainstream 64-bit, it was all about breaking out of the 4GB address space limitation, not the width of data.

We're ~20 years down the line and none of these "64-bit" processors support 64-bit addressing yet. Most support up to 48 bits of virtual addressing, with some newer Intel chips supporting 57-bit addresses via 5-level paging.

I always took the bit-ness of the processor to refer to the data bus size between the cpu and main memory, aka, the size of a machine word.
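To put some numbers on the address-space figures discussed above, here's a quick back-of-the-envelope sketch (the function name is mine, just for illustration):

```python
# Size of a flat virtual address space for a given address width.
def addr_space(bits: int) -> int:
    """Return the size of a 2**bits address space in bytes."""
    return 2 ** bits

# 32-bit flat addressing: the classic 4 GiB ceiling.
assert addr_space(32) == 4 * 2**30    # 4 GiB

# x86-64 with 4-level paging: 48-bit virtual addresses.
assert addr_space(48) == 256 * 2**40  # 256 TiB

# Newer Intel chips with 5-level paging: 57-bit addresses.
assert addr_space(57) == 128 * 2**50  # 128 PiB
```

So even "64-bit" chips expose far less than a 2^64-byte virtual address space in practice.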


> Yes, "8-bit" refers to the ALU.

By that definition, the Z80 would be a 4-bit CPU ;)

https://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-...

...but if you take the data bus width, then the 8088 would be an 8-bit CPU, which isn't quite right either...

But you also can't take the address bus width, because modern "64-bit" CPUs can't actually address 64 bits of physical memory...

I think it's best to treat the 'bit-ness' of a CPU or computer system purely as a marketing term.


You're right. Having written quite some Z80 code and 68k code, I need to think about this. It's decades in the past, but I think outside of demo code I often used move.l on the 68k, so it feels like a 32-bit CPU to me, whereas I seldom used the register pairs on the Z80, so it feels like an 8-bit CPU. Perhaps that was one of the reasons the jump from the Z80 to the 68k felt so huge (or perhaps it was the number and flexibility of registers). That said, "Z80 Assembly Language Subroutines" is still my most precious computer book, the one I would take through the apocalypse.


There's not much to think about. Bitness refers to the ISA, not the implementation. It's about what registers there are and what operations exist on them; it is what the programmer can write.

All CPUs that execute the same programs (in their native mode) are the same bitness. This is fundamental.

If a 68040 is 32 bit then so are the 68000 and 68008.

Most of the so-called "8 bit" CPUs weren't really. They were mixed 8/16 bit, especially something like the Z80, which had a lot of 16 bit registers and could do 16 bit adds, loads, and stores, but not 16 bit compares or register-to-register moves (those needed two 8 bit moves). Even the 6800 and 6502 had certain 16 bit operations to support the 16 bit address space. The 6809 was even more 16 bit, being able to do 16 bit moves, subtracts, and compares; other than the lack of segment registers it was very, very similar to the 16 bit 8088.

One problem with the terminology is that those early microprocessor instruction sets weren't Instruction Set Architectures. There was only ever one implementation of them, so people (including their manufacturers) tended to conflate implementation details and instruction set details.


> They went from 95 and XP, some of the best operating systems from a user's perspective

I mean, that's some rose-tinted shit if I ever heard it, 95 especially. 98 OSR2 improved the situation quite a bit, but I don't recall having a great time with 95. It was still a glorified DOS extender in many ways, often slow to start up and shut down; PC hardware and drivers were a mess, and system stability was therefore poor. This was right at the beginning of the concept of "PnP" (plug and play) in the PC world, and it was a cruel joke most of the time.

Windows 2000, being from the NT line, was peak Windows in many ways. I resisted XP, but admittedly it was OK for most people.


As someone who grew up with Apple computers: yes, the overall fit and integration was not as good as Apple's, but it did have some semblance of preemptive multitasking (which System 7/8 did not have).

The "UI that runs on DOS" was a feature not a bug for legacy game/software support.


> I mean that's some rose-tinted shit if I ever heard it,

I was trying to figure out how to express myself, but I think you got it.

98SE was still pretty bad because pre-NTFS filesystems would decay over time (of course, so would ext2, but Linux generally crashed a lot less).


XP, yes. 95, not so much.


> don't all modern file systems do that?

Define this.

But the simple, practical answer is no.


Practical answer? When was the last time you heard about someone having file corruption on their local OS due to using a bad filesystem technology? I've been an enterprise developer at storage companies for more than 15 years; I'm very familiar with different filesystems, and I've never cared what filesystem was on my local PC, because it's basically irrelevant with modern tech. I feel like I'm being trolled on technical points that have nothing to do with the topic I was trying to discuss. I guess I should have learned my lesson: even on HN, don't feed the trolls.


> when was the last time you heard about someone having file corruption on their local OS due to using a bad filesystem technology?

Quite frequently; it's not that uncommon at all. Bad disk cables, defective disks, and the like are not all that rare. The filesystem is in a position to detect and/or correct these problems. Only some do.

> I've never cared what filesystem was on my local PC because it's basically irrelevant with modern tech.

I'm glad you've been lucky but silent file corruption is not an "irrelevant" problem as data densities increase.

Filesystems with data checksums include btrfs, ZFS, ReFS, and NILFS. But very commonly used filesystems (XFS, ext4, NTFS, APFS, exFAT) do not have this feature (though some of them do checksum metadata).

> it's basically irrelevant with modern tech.

Unnecessary e-peening aside - Why do you think this is the case? What specifically about modern tech makes it irrelevant?
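The data-checksum distinction above can be sketched in a few lines. This is a toy illustration only, not how any real filesystem lays out its checksums (real ones store them in separate metadata structures); the function names are made up for the sketch:

```python
import zlib

BLOCK_SIZE = 4096

def write_block(data: bytes) -> tuple[bytes, int]:
    """Store a block alongside a CRC32 of its contents."""
    return data, zlib.crc32(data)

def read_block(data: bytes, stored_crc: int) -> bytes:
    """Verify the checksum on read; silent corruption becomes a loud error."""
    if zlib.crc32(data) != stored_crc:
        raise IOError("checksum mismatch: block is corrupt")
    return data

# A clean round trip passes verification.
block, crc = write_block(b"x" * BLOCK_SIZE)
read_block(block, crc)

# A single flipped byte (bad cable, bit rot, ...) is detected
# instead of being silently returned to the application.
flipped = b"y" + block[1:]
try:
    read_block(flipped, crc)
except IOError:
    pass
```

A filesystem without data checksums has no equivalent of that `read_block` check for file contents, so the flipped byte would be handed straight back to the application.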


> the original process

The "original process" of integrated circuit design was cutting rubylith, and this persisted well into the 70s. The venerable MOS 6502 was laid out this way, as was the Intel 8008.

The verb "tape out" was used in lithography prior to integrated circuits, though, back into the 40s.

But you should realize that PCBs and integrated circuits predate generally available computers and digital data storage.

> exported to a data tape for transport to the fab.

There was an article in The Register years ago that promulgated this misattribution - it was generally never a reliable source, no exception here.


That's why IC layouts used to be called "floor plans". They were floor-sized drawings on which people laid out lines with tape. The floor plan was then photographed from above and color-separated into masks.

"Back into the 40s?" ICs aren't that old.

Printed circuit boards have long been laid out by hand, and sometimes still are, but they're not usually photo-reduced. They're laid out at full scale.


> "Back into the 40s?" ICs aren't that old.

I think the parent is saying that the verb 'tapeout' goes back to litho processes before litho's use in IC production, not that IC production went back to the 40s.


Can confirm. When I was a kid I was an offset printer. We used rubylith for that. Then I learned PCB design. Rubylith again. Then I went to work for an IC design firm. No rubylith there; our product produced Gerber file tapes. But the old-timers still called it "taping out" because it had used rubylith before the Gerber days.


> Namecheap immediately resold the domain to a squatter

No, I don't see how their post allows one to infer that Namecheap "immediately" resold the domain. Their policy is a 30-day grace period, and within those 30 days the domain should become non-functional, so if you're actively using it, you would know something is up.

Sounds like jamiek88 was squatting on a somewhat desirable domain and is just sad they couldn't follow some basic instructions.

I have no idea what the "because of COVID" thing is about.


I took it to mean that they got COVID and were out of commission long enough that the domain was sold by the time they “came back round to reality”.


[flagged]


Breaking the site guidelines like this is definitely not ok and will get your account banned, regardless of how bad another comment is or you feel it is. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


> My register doesn’t allow registrations longer than 1 year.

Ummm... then maybe get a better registrar? It's not like there aren't many to choose from.


I think you missed the last part of my comment.


> When the 801 shipped, almost no system had hard coded microcode.

1990? No system without soft microcode? That's a very restricted definition of "system" for the field of computing in 1990.

