Agreed. I vaguely remember another HN link that said Apple tried a competing-team approach to building a better Siri, but it fell apart for internal-politics reasons?
Nonlinear partial differential equations over a large continuum are expensive. Even if you can scale this particular example, it doesn't refute the point: the universe does a tremendous amount of compute that we don't know how to exploit.
I think it is totally possible for the computing to take non-zero time, but we observe it as zero time because our consciousness only steps forward with each iteration of computing the world state. So we observe the reality computations as instantaneous.
I don't know about GP's view, but to me this does seem like a criticism:
"Women have a long history of being depicted as technical objects in computing... gendered assumptions about the characters of Alice and Bob have been read into their fictional lives. Images of Alice, Bob, and Eve depict the three as in love triangles, with Alice and Eve alternately portrayed as disrupting one another’s blissful domestic life with Bob. Visual depictions of Alice, Bob, Eve, and others used in university classrooms and elsewhere have replicated and reified the gendered assumptions read onto Alice and Bob and their cryptographic family, making it clear that Bob is the subject of communications with others, who serve as objects, and are often secondary players to his experience of information exchange. Thus, while Rivest, Shamir, and Adleman used the names “Alice” and “Bob” for a sender and receiver as a writing tool, others have adapted Alice and Bob, in predictable, culturally-specific ways that have important consequences for subsequent, gendered experiences of cryptology."
The previous paragraph criticized Ivan Sutherland for so much as drawing a girl's face with Sketchpad, so in context this does seem to be critical of RSA.
I skimmed through the article, but if so, that's a lot of assumptions.
1. So let's say the possible range of values is right (10 characters of a specific range + 1). That would represent one big circle of possible area where videos might be.
2. The distribution of identifiers (valid videos) is everything. If YouTube applied some constraints (or skewing) to IDs that we don't know about, then actual existing video IDs might be a smaller circle within that bigger circle of possibilities, not equally dispersed throughout, or there might be clumping or whatever... So you'd need to sample the space by throwing darts, to get a silhouette of their skew or to see if it's random-ish (say, Poisson-distributed).
Only then could one estimate the size. So is this what they're doing?
I see what you did there. So basically the hit proportion would be overlapping hits divided by samples run, and the estimated total would be that proportion multiplied by the total space of possibilities. That would work.
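A toy sketch of that estimate (my own illustration, not from the article): plant a known number of "valid" IDs in a space, throw random darts, and scale the hit proportion by the size of the space. The constants and names here are made up for the demo.

```python
import random

random.seed(42)

N_SPACE = 10_000_000    # total space of possible IDs (toy-sized)
TRUE_TOTAL = 50_000     # unknown in reality; planted here so we can check the estimate
valid_ids = set(random.sample(range(N_SPACE), TRUE_TOTAL))

# Throw random darts into the space and count how many land on valid IDs.
SAMPLES = 200_000
hits = sum(1 for _ in range(SAMPLES) if random.randrange(N_SPACE) in valid_ids)

hit_proportion = hits / SAMPLES
# Proportion times the space, not divided by it.
estimated_total = hit_proportion * N_SPACE
print(round(estimated_total))
```

With 200k samples the estimate typically lands within a few percent of the planted 50,000, which is the whole appeal of the dart-throwing approach: accuracy depends on the number of samples, not on the size of the space.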
In your example, the amino acid order is sufficient to directly model the result: the sequence of amino acids can directly generate the protein, which is either valid or invalid. All variables are provided within the data.
In the original example, we are predicting the weather using the previous day's weather. We may be able to model whatever correlation exists in the data. But that is not the same as accurately predicting results if the real-world weather function is determined by the weather of surrounding locations, time of year, and moon phase. If our model does not have that data, and it is essential to the result, how can you model accurately?
In other words: “Garbage in, garbage out”. Good luck modeling an n-th degree polynomial function, given a fraction of the variables to train on.
Electrostatic protein interactions, hydrophobic interactions, organic chemistry, etc.: all variables are in fact not provided within the data. Protein creation is not just _poof_, proteins. There are steps, interactions, and processes. You don't need to supply any of that to get a model that accurately predicts proteins. That is the main point here, not that you can predict anything with any data.
> This is not the same as accurately predicting results, if the real-world weather function is determined by the weather of surrounding locations, time of year, and moon phase.
How many people have the "human intelligence" to do this? Especially more accurately than a computer (and without using one themselves) training on the same inputs and outputs?
So sign your UUIDs and combine them into “$UUID:$HASH” strings for the same benefit. Or a more structured JWT-like payload that still verifies auth against the DB (as opposed to carrying authorization within the token).
No need to reinvent the rest of the auth flow if you just want to add a signature check to reduce DB load.