wahern's comments | Hacker News

Coca-Cola is sort of like the Apple of cola in that they're the upmarket brand almost everywhere around the globe. Unless Coke has a sales, marketing, or branding angle (see, e.g., the Disney deal mentioned elsethread), they won't discount nearly as deeply as Pepsi, which is perennially in second place at best (Mt. Dew notwithstanding). Pepsi is the obvious choice for any outlet where your customers are captive (e.g. sit-down restaurants) and you don't otherwise care about looking cheap for not offering Coca-Cola.

For convenience stores, particularly ones with few or no built-in wall coolers, the typical deal is the Coca-Cola or Pepsi distributor will provide and maintain a free-standing cooler, but it can only hold products from that distributor (often the distributor stocks it for you). Thus you'll typically see Coca-Cola and Pepsi products segregated in different coolers, if the store sells both.

I presume, but don't know first-hand, that for built-in coolers you want stocked by the distributor, they'll also require segregation. Frito-Lay distributors operate similarly--they'll come in and stock your shelf if you want (I dunno if there's a sales premium), but typically they'll require the Frito-Lay products be segregated, and they'll provide branded shelving if you want.


Red Bull gives you a discount if their mini fridge is close to the register.

> This is the kind of process that happens with any new technology. Hinton probably just didn't know because he's never worked outside of academia.

Economists certainly know this, as would many historians.


You would need Chinese laws and regulations. This is one of the reasons why when building Belt & Road Initiative projects in SE Asia, Africa, etc, China demands exclusion from local regulations and insulation from local politics, at least after initial negotiations and before work begins. In many nations the problem is corruption and kickbacks, but in a country like Britain the problem is bureaucratic red tape and "community input" (i.e. every Tom, Dick, and Harry effectively has veto power).


Request smuggling is an issue when reverse proxying and multiplexing multiple front-end streams over a shared HTTP/1.1 connection on the backend. HTTP/2 on the front-end doesn't resolve that issue, though the exploit techniques are slightly different. In fact, HTTP/2 on the front-end is a deceptive solution to the problem because HTTP/2 is more complex (the binary framing doesn't save you, yet you still have to deal with unexpected headers--you can still send Content-Length headers, for example) and the exploits are less intuitive.
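To make the Content-Length ambiguity concrete, here's a minimal sketch (hypothetical payload, simplified parsing) of the classic CL.TE desync: a front-end that honors Content-Length forwards bytes that a back-end honoring chunked framing interprets as the start of a second request.

```rust
fn main() {
    // Imagine a request carrying *both* framing headers (RFC 9112 says
    // Transfer-Encoding wins and Content-Length must be ignored, but
    // buggy proxies honor Content-Length anyway):
    //
    //   POST / HTTP/1.1
    //   Content-Length: 13
    //   Transfer-Encoding: chunked
    //
    let body = "0\r\n\r\nSMUGGLED";

    // Front-end view (honors Content-Length: 13): the whole 13-byte
    // blob is the body, so everything gets forwarded.
    let cl_view = &body[..13];
    assert_eq!(cl_view, "0\r\n\r\nSMUGGLED");

    // Back-end view (honors chunked framing): the body ends at the
    // zero-length chunk terminator "0\r\n\r\n"...
    let te_end = body.find("0\r\n\r\n").unwrap() + "0\r\n\r\n".len();

    // ...leaving "SMUGGLED" in the buffer, where it will be parsed as
    // the prefix of the *next* request on the shared back-end
    // connection. That leftover is the desync.
    assert_eq!(&body[te_end..], "SMUGGLED");
}
```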

HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) It's trying to build multiplexing tunnels across it that is problematic, because buggy or confused handling of the line-delimited framing between ostensibly trusted endpoints opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.

Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and ossify the proliferation of protocol violations. I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what Quicktime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that--you're still handed a blob to parse, same as HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's common to reject folded headers in HTTP/1.x implementations anyway, something you can't do in e-mail stacks, unfortunately.


Arguably, the complexity issue is not only the protocols themselves but also the fact that, thanks to the companies pushing HTTP/2 and HTTP/3, there are now multiple (competing/overlapping/incompatible) protocols.

For example, people passing requests received by HTTP/2 frontends to HTTP/1.1 backends.


It does do something similar... and more. IIUC, when unwinding on panic Rust does unlock the mutex, but it also "poisons" it so that a subsequent attempt to lock will return an error. This is because the panicking thread may not have run its critical section to completion, possibly leaving the protected data in an inconsistent state. A poisoned mutex can be reset, though.
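A minimal sketch of that behavior, using only std (the specific values here are illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let m = Arc::new(Mutex::new(0u32));

    // Panic while holding the lock: unwinding drops the guard (so the
    // mutex is unlocked), but marks the mutex as poisoned.
    let m2 = Arc::clone(&m);
    let _ = thread::spawn(move || {
        let mut g = m2.lock().unwrap();
        *g = 1; // partial update, then...
        panic!("boom");
    })
    .join();

    // A later lock() reports the poisoning as an Err...
    let result = m.lock();
    assert!(result.is_err());

    // ...but the PoisonError still hands back the guard, so you can
    // inspect or repair the possibly-inconsistent state and carry on.
    let g = result.unwrap_err().into_inner();
    assert_eq!(*g, 1);
}
```

If I recall correctly, `Mutex::clear_poison` (stabilized in Rust 1.77) is the "reset" mentioned above: it clears the flag once you've restored the invariants.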


These are distinctions without a difference. Events replicated across several independent Matrix servers are not meaningfully different than events broadcast across independent clients in terms of observability or repudiation.


But normally when you join a conversation and are not allowed to see previous messages, you don't see anything about them. A Matrix server does.


> unless you restart all of userspace (at which point you might as well just reboot).

I can't speak for FreeBSD, but on my OpenBSD system hosting ssh, smtp, http, dns, and chat (prosody) services, restarting userspace is nothing to sweat. Not because restarting a particular service is easier than on a Linux server (`rcctl restart foo` vs `systemctl restart foo`), but because there are far fewer background processes and you know what each of them does; the system is simpler and more transparent, inducing less fear about breaking or missing a service. Moreover, init(1) itself is rarely implicated by a patch, and everything else (rc) is non-resident shell scripts, whereas who knows whether you can avoid restarting any of the constellation of systemd's own services, especially given their many library dependencies.

If you're running pet servers rather than cattle, you may want to avoid a reboot if you can. Maybe a capacitor is about to die and you'd rather deal with it at some future inopportune moment rather than extending the present inopportune moment.


I don't know how it was approached for vitamin D, but it's all about the model they choose, which in the first instance is just something they pull out of thin air. For many water soluble vitamins and minerals the model is based on a threshold for urine excretion; up the dose until the study group is excreting as much as they take in. Until someone figures out otherwise--i.e. that it's too little, too much, or that other considerations need to be made--that's the basis for the RDA.


AFAIU, cmov wasn't originally intended to be a guaranteed constant-time operation, and Intel and AMD won't commit to keeping it constant-time in the future; it just so happened that at one point it was implemented in constant time across CPUs, cryptographers picked up on this and began using it, and now Intel and AMD tacitly recognize the dependency. See, e.g., https://www.intel.com/content/www/us/en/developer/articles/t...

> The CMOVcc instruction runs in time independent of its arguments in all current x86 architecture processors. This includes variants that load from memory. The load is performed before the condition is tested. Future versions of the architecture may introduce new addressing modes that do not exhibit this property.
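For illustration, the kind of constant-time selection cryptographers want from cmov looks like this branchless masking idiom (a sketch only: the compiler is not obligated to emit cmov for it, which is why real libraries such as Rust's `subtle` crate use optimization barriers to keep a branch from being reintroduced):

```rust
/// Branchless select: returns `a` if `cond` is true, else `b`.
/// Both inputs are touched regardless of the condition, so the
/// running time doesn't depend on secret data.
fn ct_select(cond: bool, a: u64, b: u64) -> u64 {
    // all-ones mask when cond is true, all-zeros when false
    let mask = (cond as u64).wrapping_neg();
    (a & mask) | (b & !mask)
}

fn main() {
    assert_eq!(ct_select(true, 7, 9), 7);
    assert_eq!(ct_select(false, 7, 9), 9);
}
```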


At your link there is a link to the list of instructions that guarantee constant execution time, independent of the operands.

The list includes CMOV.

However, the instructions from the list are guaranteed to have constant execution time, even on any future CPUs, only if the operating system sets a certain CPU control bit.

So on recent and future Intel/AMD CPUs, one may need to verify that the correct choice has been made between secure execution mode and fastest execution mode.

