How does Spotify defend people who actually make art? There's virtually no difference between pirating and streaming through Spotify for the vast majority of artists.
Personally, as an artist, I'd rather give it to people directly for free, but I'll meet the audience where they are. The "compensation" does not factor into it at all.
Interestingly, I'm seeing more and more small bands stepping off Spotify, mainly because of AI clones and botted-stream scams. Apparently they've decided that losing that reach is acceptable. (Anecdotal, of course, but even at a local scale it's an interesting choice.)
Same argument applies. You, as the developer, always know in what context the text appears. Whether it's "dwa zdjęcia przesłane" ("two photos uploaded") or "dodano tagi do dwóch zdjęć" ("tags added to two photos").
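To make the context point concrete, here is a minimal sketch of Polish plural-category selection in the style of the CLDR rules (one/few/many). The function and string table are illustrative, not from any real app; Polish is a good example because the noun form depends on the count in a non-trivial way, so the developer has to know which context a string lands in.

```python
def polish_plural(n: int) -> str:
    """Return the CLDR-style plural category for an integer count in Polish."""
    if n == 1:
        return "one"
    # 2-4, 22-24, 32-34, ... but NOT 12-14
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "few"
    return "many"

# "photo(s) uploaded" in each category: zdjęcie / zdjęcia / zdjęć
FORMS = {
    "one": "zdjęcie przesłane",
    "few": "zdjęcia przesłane",
    "many": "zdjęć przesłanych",
}

def uploaded(n: int) -> str:
    return f"{n} {FORMS[polish_plural(n)]}"

print(uploaded(1))   # 1 zdjęcie przesłane
print(uploaded(2))   # 2 zdjęcia przesłane
print(uploaded(5))   # 5 zdjęć przesłanych
```

In practice you'd let a library (gettext's `ngettext`, ICU MessageFormat) apply these rules, but the selection logic is what makes "you know the context" matter.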
I have heard it claimed, at least for US construction, that a nuclear plant under construction has to implement new safety measures even if those measures were adopted after the design approval or construction start date.
This means that the design can change multiple times during construction, which both slows construction and exposes the project to even more safety design changes.
Ironically, the creaky old plants that were built long ago don't need to adopt such new safety requirements. They are grandfathered in, but can't be economically replaced because the costs of a replacement are artificially inflated.
A car analogy would be that we continue driving 1955 Chevy Bel Airs with no seat belts, because an up-to-date car is too expensive to develop: we can't start production until the latest LIDAR and AI have been added. Once the LIDAR is in, pray that no new self-driving hardware is released before full production, or we'll have to include that too.
Thank you for being specific! This is no longer the case under modern licensing.
Look at Vogtle and Summer, which were so expensive and disastrous that the Summer build was abandoned with billions of dollars sunk into construction.
Nothing changed on the regulatory side during those builds. They were licensed under a new regulatory model, requested by industry, that let them start construction before everything was fully designed. There were many hugely expensive changes during the build, but those were due to EPC issues, not regulation.
The NRC has been extremely open to regulatory changes since the 2000s, especially with the "nuclear renaissance" push around 2008. I'm not aware of any suggested regulatory changes that were not adopted.
Huh, what do you mean by this? There are many sound and complete systems – propositional logic, first-order logic, Presburger arithmetic, the list goes on. These are the basic properties you want from a logical or typing system. (Though, of course, you may compromise if you have other priorities.)
My take is that the GP was implicitly referring to Gödel’s Incompleteness Theorems with the implication being that a system that reasons completely about all the human topics and itself is not possible. Therefore, you’d need multiple such systems (plural) working in concert.
> If you take multiple systems and make them work in concert, you just get a bigger system.
The conclusion may be wrong, but a "bigger system" can be more than the sum of its constituents. A system can have functions, and give rise to complexity, that none of its subsystems feature. An example would be the thinking brain, which is made of neurons/cells incapable of thought, which are made of molecules incapable of reproduction, which are made of atoms incapable of catalyzing certain chemical reactions, and so on.
This is just emergence, though? How is emergence related to completeness?
This happens over and over with the relatively recent popularization of a theory: the theory is proposed as the solution to every missing piece in the same rough conceptual direction.
It takes a lot more than just pointing in the general direction of complexity to propose the creation of a complete system, something which, with present systems of understanding, appears to be impossible.
I didn't make that argument. I don't think the original conclusion above is reasonable. However, "a concert" isn't "just" a bigger system either, which is my point.
Sort of. The guardrail here, IMO, is that you have an ontology processor that routes to a submodule, and if no submodule is present, it errors out. It is one large system, but it's bounded by an understanding of its own knowledge.
I believe that neither the expansion of Gödel's theorems to "everything", including non-formalized systems, nor the proposed resolution of harnessing multiple systems in concert, is sound reasoning. I think it's fallacious reductionism.
I am very curious about this. In particular, if you are able to split systems into formalized and non-formalized ones, then I think there is quite some praise, and a central spot in all future history books, waiting for you!
I am not a native speaker, so please don't get hung up on particular expressions.
I meant that colloquial philosophies and general ontology are not the subject of Gödel's work. I think the foregoing expansion is similar to finding evidence for telepathy in pop-sci descriptions of quantum entanglement. Gödel's theorems cover axiomatic, formal systems in mathematics. To apply them to anything else, you first have to formalize that thing. Otherwise, it's an intuition/speculation, not sound reasoning. At least, that's my understanding.
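For reference, the first incompleteness theorem stated with its actual hypotheses (which is the scope being argued about here; this is the standard textbook formulation, not anything from the thread):

```latex
\textbf{First Incompleteness Theorem.}
Let $T$ be a formal theory such that
(1) $T$ is effectively axiomatizable (its axioms are recursively enumerable),
(2) $T$ is consistent, and
(3) $T$ interprets enough arithmetic (Robinson's $Q$ suffices).
Then there exists a sentence $G_T$ in the language of $T$ with
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T .
\]
```

All three hypotheses presuppose a formal language and a proof relation, which is exactly why the theorem says nothing directly about informal "systems" of human thought until they are formalized.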
Same background here. I finally got into stochastic calculus last year thanks to a local college course (after several unsuccessful attempts on my own).
You need at least
1. a basic grasp of classical calculus, measure theory and topology
2. solid understanding of probability theory
3. basics of stochastic processes
I believe you should be able to dive in from there. It's good to have an idea where you're heading as well (mathematical finance and modelling and pricing derivatives? Bayesian inference and MCMC? statistical physics?).
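As a small taste of where those prerequisites lead (if mathematical finance is your direction), here's a minimal Euler–Maruyama simulation of geometric Brownian motion, the SDE dS = μS dt + σS dW underlying Black–Scholes. Parameter values are arbitrary illustrations, and a real implementation would use NumPy; this is just a sketch.

```python
import math
import random

def euler_maruyama_gbm(s0, mu, sigma, t, n, seed=0):
    """Simulate geometric Brownian motion dS = mu*S dt + sigma*S dW
    with the Euler-Maruyama scheme on n steps over [0, t]."""
    rng = random.Random(seed)
    dt = t / n
    s = s0
    path = [s]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        s = s + mu * s * dt + sigma * s * dw
        path.append(s)
    return path

# One year of daily steps for a stock at 100 with 5% drift, 20% volatility.
path = euler_maruyama_gbm(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n=252)
print(f"final price: {path[-1]:.2f}")
```

Each item on the prerequisite list shows up even in this toy: the Gaussian increment with variance dt is where the measure/probability theory lives, and proving the scheme converges is where the stochastic calculus comes in.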
Practice a few polyrhythms, get used to things like:
X . X X X . X . X X X .
A . . A . . A . . A . .
B . B . B . B . B . B .
and:
X . . X . X X X . X X . X . X X . . X . X X . . X X . X X . X . . X . X X . . X X . X . . X . . X X X X . . X X X X . . X . . X . X X . . X X . X . . X . X X . X X . . X X . X . . X X . X . X X . X X X . X . .
A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . . A . . . .
B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . . B . . . . . .
C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . . C . .
Learn to do them with one limb (or finger) per line, and also with all the lines on the same limb (or finger). And then suddenly, they'll start to feel intuitive, and you'll be able to do them by feel. (It's a bit like scales.)
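Grids like the ones above can also be generated mechanically, which is handy for practicing new combinations: each voice hits every p steps, the combined top line marks any position where some voice hits, and the pattern repeats after the lcm of the periods. A small sketch (function name is my own):

```python
from math import lcm

def polyrhythm(periods, labels):
    """Render a polyrhythm grid: one line per voice (a hit every `period`
    steps) plus a combined top line marking positions where any voice hits.
    The grid length is the lcm of the periods, after which it repeats."""
    n = lcm(*periods)
    combined = ["X" if any(i % p == 0 for p in periods) else "." for i in range(n)]
    lines = [" ".join(combined)]
    for label, p in zip(labels, periods):
        lines.append(" ".join(label if i % p == 0 else "." for i in range(n)))
    return "\n".join(lines)

print(polyrhythm([3, 2], ["A", "B"]))
# X . X X X .
# A . . A . .
# B . B . B .
```

The first example above is this 3-against-2 grid written out twice; the longer one is the same idea with periods 5, 7, and 3 (lcm 105).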
Of course, getting a computer that's useful in practice out of this would require some thought.
A simple model: you could only allow programs written in Coq (or similar), i.e. programs that come with a proof of termination (or a slight generalisation that allows for infinite event loops, as long as each run through the loop behaves well, in some sense).
There's a trivial escape hatch, where you just take your normal unproven program but forcefully terminate it after 2^64 steps. That's strictly speaking not Turing complete, but you wouldn't be able to tell the difference during the lifetime of the computer.
Perhaps I'm in the minority, but I really don't think of containers as Docker. Actually, I don't remember the last time I used Docker for anything. For the past several years, I've been using either Podman or systemd-nspawn, just like you.