Hacker News | ssivark's comments

Yes, and that becomes more intuitive when you "un-curry" the nested lambdas into a single lambda with twice the number of arguments. The point is that the value of a constant does not depend whatsoever on the state of the (rest of the) world, however much of that state piles on.
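A minimal sketch in Python of the curry/un-curry equivalence, and of why a constant ignores the "world" entirely (the names here are illustrative):

```python
# Curried form: nested lambdas, one argument each.
curried_add = lambda x: lambda y: x + y

# "Un-curried" form: a single lambda taking both arguments at once.
uncurried_add = lambda x, y: x + y

assert curried_add(2)(3) == uncurried_add(2, 3) == 5

# A constant ignores every argument: its value does not depend on the
# "state of the world", however many arguments pile on.
const = lambda *world_state: 42
assert const() == const(1, 2, 3) == 42
```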

But that also just means that BEVs will become way cheaper as companies rush to optimize value and capture users at more attractive prices.

Yes. Cars occupy fairly fixed price ranges for buyers, so the end result is pretty much predetermined: electric cars will settle at the same price and quality points that ICE cars occupy today.

> what's wrong with legitimately not knowing what e.g. the data structure will end up looking?

But that's not what the above comment said.

> Just let it run, check debugger/stdout/localhost page and adjust: "Oh, right, the entries are missing canonical IDs, but at the same time there are already all the comments in them, forgot they would be there

So you did have an expectation that the entries should have some canonical IDs, and anticipated/desired a certain specific behavior of the system.

Which is basically the meaning of "what will the output be?" when simplified for programming novices at university.


I wonder whether the blast radius of the law might interfere with OSs running on cloud machines. That might explain why California-based companies in the cloud business might want to ensure that the bits they resell are compliant.

To elaborate on @jeswin's point above (IDK why it got downvoted)... a data structure is basically a cache for the processing algorithm. The business logic and algorithmic needs dictate which details can be computed on-the-fly vs. pre-generated and stored (be it in RAM or on disk). E.g., if you're going to be searching a lot, it makes sense to augment the database with some kind of "index" for fast lookup. Or if you're repeatedly going to be plotting some derived quantity, maybe it makes sense to derive it once and store it with the struct.
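As a toy illustration of the "data structure as cache" idea (the records, field names, and index here are all made up):

```python
# Hypothetical records; the schema is illustrative, not from any real system.
records = [
    {"id": "a1", "name": "alice", "score": 0.9},
    {"id": "b2", "name": "bob",   "score": 0.7},
]

# Without an index: every lookup scans the whole list, O(n) per query.
def find_linear(records, rec_id):
    return next(r for r in records if r["id"] == rec_id)

# With an index: pay O(n) once to build, then each lookup is O(1).
# The dict stores no new information; it's a precomputed view that a
# search-heavy access pattern justifies keeping around.
index = {r["id"]: r for r in records}

assert find_linear(records, "b2") is index["b2"]
```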

It's not enough for a data structure to represent the "fundamental" degrees of freedom needed to model the situation; the algorithmic needs (vis-a-vis the available resources) most definitely matter a lot.


Bad analogy. The things I delegate to a calculator, I'm absolutely sure I understand well (and could debug if need be). These are also very legible skills that are easy to remind myself by re-reading the recipe -- so I'm not too worried about skills "atrophying".


> speculative decoding for bread and butter frontier models. The thing that I’m really very skeptical of is the 2 month turnaround. To get leading edge geometry turned around on arbitrary 2 month schedules is .. ambitious

Can we use older (previous generation, smaller) models as a speculative decoder for the current model? I don't know whether the randomness in training (weight init, data ordering, etc) will affect this kind of use. To the extent that these models are learning the "true underlying token distribution" this should be possible, in principle. If that's the case, speculative decoding is an elegant vector to introduce this kind of tech, and the turnaround time is even less of a problem.
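For concreteness, here is a toy sketch of the draft-then-verify loop, with single-character stand-in "models"; the acceptance rule is simplified to argmax matching, whereas real speculative decoding uses a rejection-sampling correction so the output distribution matches the target exactly:

```python
import random

def sample(dist):
    """Sample a token from a {token: probability} dict."""
    toks, probs = zip(*dist.items())
    return random.choices(toks, probs)[0]

def speculative_step(context, draft_model, target_model, k=4):
    """One round: the cheap draft proposes k tokens, the target verifies."""
    # 1. The draft model proposes k tokens autoregressively.
    ctx, proposed = list(context), []
    for _ in range(k):
        tok = sample(draft_model(tuple(ctx)))
        proposed.append(tok)
        ctx.append(tok)
    # 2. The expensive target model checks the proposals: keep a draft
    #    token while it matches the target argmax (simplified rule).
    ctx, accepted = list(context), []
    for tok in proposed:
        target_dist = target_model(tuple(ctx))
        if tok == max(target_dist, key=target_dist.get):
            accepted.append(tok)
            ctx.append(tok)
        else:
            break
    # 3. If nothing was accepted, fall back to one target-model token.
    if not accepted:
        td = target_model(tuple(ctx))
        accepted = [max(td, key=td.get)]
    return accepted

# Toy stand-ins: a previous-gen "draft" and a current "target" that mostly
# agree on the next-token distribution (the premise of the comment above).
draft  = lambda ctx: {"a": 1.0}
target = lambda ctx: {"a": 0.9, "b": 0.1}
assert speculative_step(("start",), draft, target, k=3) == ["a", "a", "a"]
```

When the two models agree, all k draft tokens are accepted in a single target pass, which is where the speedup comes from.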


Ugh, this almost feels like flame-bait. This question invariably leads to a lot of bike-shedding around comments from people who feel strongly about some choices in the Julia language (1-based indexing and what not), and the fact that Julia is still not as polished as some other languages in certain aspects of developer experience.

"Data science" is an extremely broad term, so YMMV. That said, since you asked, Julia has absolutely replaced Python for me. I don't have anything new to add on the benefits of Julia; it's all been said before elsewhere. It's just a question of exactly what kind of stuff you want to do. Most of my recent work is math/algorithms flavored, and Python would be annoyingly verbose/inexpressive while also being substantially slower. Julia also tends to have many more high-quality packages of this kind that I can quickly use / build on.


I imagine the actor model implemented on top of a stable core (eg. Erlang/Elixir on BEAM) might be a natural fit for a "society of mind" like zoo of agents bustling around, working asynchronously, communicating with each other and with the user(s), etc. Personally, I'm also excited to see how Spritely Goblins shapes up, especially because of its support for object capabilities -- which should hopefully be an excellent security model for agentic architectures.


Google Chat is definitely a product that could use more love, but it is situated in a specific internal landscape, and grows out of it. Slack is built for a very different context, and I doubt Google would build something like that. Google simply doesn't see the world the way someone who likes Slack would (and I also doubt a large co like Google could operate out of Slack).


> and I also doubt a large co like Google could operate out of Slack

Plenty of corporations much larger than Google operate out of Slack.


"plenty of corporations much larger than Google"?

Google is the third largest company by market cap in the world. I suppose by "much larger" you mean number of employees? Walmart maybe?

I doubt there are many out there using Slack


By market cap? Is the money using Slack?

When you're talking about tools for humans, measuring company size by market cap makes no sense.

Plenty of companies with many more employees than Google use Slack.


Such as who? And are most of their employees actually using Slack or are a few white collar employees using it while 90% of their workforce has no idea?


IBM has ~300k employees and uses Slack.


AFAIK there are about ~100 companies in the world with more employees than Alphabet/Google.

