Yes, but only one face-to-face meeting is needed in the process to check whether someone is using AI to answer interview questions. The other 13 interviews can then be online.
That's the problem: if you are working in a company that has mostly juniors (one or two years of programming), it is better not to implement overly complicated patterns, otherwise your day will be filled with explaining what a Factory is.
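For context, this is roughly the kind of Factory the juniors would have to learn. A minimal sketch, with hypothetical Shape classes purely for illustration:

```python
from dataclasses import dataclass

class Shape:
    def area(self) -> float:
        raise NotImplementedError

@dataclass
class Circle(Shape):
    radius: float
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

@dataclass
class Square(Shape):
    side: float
    def area(self) -> float:
        return self.side ** 2

def shape_factory(kind: str, size: float) -> Shape:
    """Create a Shape without the caller naming the concrete class."""
    factories = {"circle": Circle, "square": Square}
    try:
        return factories[kind](size)
    except KeyError:
        raise ValueError(f"unknown shape kind: {kind}")
```

The trade-off is exactly the one described above: callers are decoupled from the concrete classes, but anyone reading `shape_factory("square", 2.0)` now has to trace the indirection to find out what they actually got.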
Switzerland has no oil and depends on its banking sector. If the USA crashes, then likely all modern supply lines will be cut and finance will be a thing of the past.
The current well-being of a country does not say much about how it will survive a global crisis.
A farmer in rural Africa would be less affected by the implosion of the USA than a trader in Geneva.
What do you think happens when the US goes down? You don't need oil to move within a small country, we are more or less self-sufficient in food, and many areas are already power-independent. Who cares about the banks then?
You may also overestimate the relevance of banking to a country's finances.
Western countries are self-sufficient in food because we have engines and a chemical industry; once those go down, food shortages will come quickly. Parts of the world where labour is still mostly manual will be more resilient.
Also, without banks, a lot of people will find themselves without any property and will likely turn more violent. Again, in parts of the world where people own real objects rather than numbers in a computer in a datacenter, this won't happen.
That is the license for the anaconda package channel, not conda. The page you linked explains that conda and conda-forge are not subject to those licensing issues.
Just to throw my anecdote in: I used to work at the mypy shop - our client code base was on the order of millions of lines of very thorny Python code. This was several years ago, but to the best of my recollection, even at that scale, mypy was nowhere near that slow.
Like I said, this was many years ago - mypy might've gotten slower, but computers have also gotten faster, so who knows. My hunch is still that you have an issue with misconfiguration, or perhaps you're hitting a bug.
My current company is a Python shop, 1M+ LOC. My CI run earlier today completed mypy typechecking in 9 minutes 5 seconds. Take from that what you will.
Ditto, same order of magnitude experience; at least for --no-incremental runs.
Part of the problem for me is how easily caches get invalidated. A type error somewhere will invalidate the cache for that file and everything in its dependency tree, which blows a huge hole in the runtime.
Checking one file in a big repo can take 10 seconds, or more than a minute, as a result.
I think you have something misconfigured, or are timing incorrectly. I'm working on a project right now with ~10K LOC. I haven't timed it, but it's easily <= 2 seconds. Even if I nuke MyPy's cache, it's at most 5 seconds. This is on an M3 MBP FWIW.
In my organisation, some co-workers used to write def func(*args, **kwargs) all the time.
It was so tiring to work out what you were supposed to pass as arguments. Type checking is mandatory for a well-organized large project.
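A sketch of the contrast, using a hypothetical connect function: the *args/**kwargs version tells the reader nothing, while a typed signature documents itself and lets a checker like mypy catch mistakes:

```python
from typing import Any

# The opaque style described above: the signature says nothing about
# what the function actually accepts.
def connect_opaque(*args: Any, **kwargs: Any) -> str:
    return f"{kwargs.get('host', 'localhost')}:{kwargs.get('port', 5432)}"

# Typed alternative: arguments are named and typed, so both the reader
# and the type checker can see exactly what to pass.
def connect(host: str = "localhost", port: int = 5432) -> str:
    return f"{host}:{port}"
```

With the typed version, a call like connect(port="5432") is flagged by mypy before runtime; the opaque version silently accepts anything, including typos in keyword names.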
Why is this relevant when presenting scientific research? Or is the point of your comment to say, they are incentivized to "brand" their research in a way which is attractive to a VC audience?
It's offered as one possible explanation for the tone or style of the language that GP commented on. I don't think their observation applies to ML research at large; this group seems to be more eccentric in their writing (see their history of submissions on HN and their blog more generally).
> Why is this relevant when presenting scientific research?
I'm guessing that the difference lies in the potential for value extraction from the idea.
If you compare the transformers paper to an algorithm or geometry result that is not used by anyone, I think the differences are obvious from this perspective.
However, if that geometry paper led to something like a new way of doing strained silicon for integrated circuit design that made manufacturing 10 times cheaper and the circuit 10 times faster, then it would be more important than the transformers one.
In ML, results are often a score (accuracy or whatever), which makes the field more gamified.
It's common to have competitions where whoever has the highest score on the benchmark "wins". Even when there is no formal competition, being the SOTA model matters a lot.
Results are more applicable to the real world, and subjectively more "cool" (I don't think there's a Two Minute Papers equivalent for math?), which feeds egos.
And often authors are trying to convince others to use their findings. So it's partly a marketing brochure.
- There is also a gamification of math, though on a smaller scale, with bounties (https://mathoverflow.net/questions/66084/open-problems-with-...), but once a result is proved you cannot prove it "better than the first time". So it is more of a winner-takes-all situation.
- I am not sure, but the "2-minute papers" equivalent would be poster sessions, a must-do for every Ph.D. student.
- On the marketing side, there are trends in math too, and researchers subtly try to brand their results so they become active research fields. But since this cannot be measured in GitHub stars or Hugging Face downloads, it is more discreet.