Yes, absolutely. Text casing is part of communication; by skipping it, an author is saying: "I'm going to prioritise my preferences and my desire to make a statement above your understanding and clarity." The bigger the audience, the more negative the impact, and the more entitled the author appears.
Along the same lines though, txt spk to friends is a) far lower impact with the smaller audience, and b) communicates other factors such as what device you're on or how close you are to someone, so this is not me just hating on bad grammar.
Bad grammar is usually lack of care or education or knowledge.
100% lower case is 100% a choice.
Thanks, jack dorsey, for letting us know you're that sort of person. At least he refers to himself that way too, although he should sign off with: jack off.
Oracle is definitely an AI stock, as much as that's silly. Between being a cloud provider with GPUs, and investments in OpenAI, it's certainly part of the AI meme in the stock market, and possibly even a reasonable way to get some AI exposure if that's what you want to invest in.
> I appreciate the one clean cut vs prolonged bleeding.
That's a false dichotomy, you could reduce headcount via attrition which is better in some ways.
There's also no reasoning on product impact. Is the strategy to cut products that aren't making money? Is it to cut 40% across the board because everyone can go faster?
> Owning the decision
Does it? It came across to me as an inevitability of AI, not "we over-hired". Layoffs are always a mismanagement issue, because the opposite (hiring) is a management decision. If management failed to see where the market was going and now needs a different workforce, that's still a management issue.
> respecting the people that got you there
There are words, and there's money, and on those fronts it's pretty good. But there's also empathy for the experience people are about to go through, and I'm not sure there's much of that here beyond the words. To do this well you'd need to think through what folks are about to face and look for ways to positively impact that beyond today's actions. I've seen some companies do this better: helping teams get re-hired elsewhere, splitting off businesses to sell to other companies, incubating startups - there are lots of options. Hard, especially at this scale, but possible.
> But realistically, you can't pen a better (or, well, less bad) layoff announcement.
And this is the crux of my point: I really think you can. This was a good one, one of the better ones I've seen, but it's still within the realm of SV companies laying people off. In some companies, countries, and industries, this would look very different, and better.
You cannot attrite 40% of the company in 5 months without creating an incredibly toxic environment. Dorsey knows this; ultimately he lost Twitter over his inability to right-size it. I would bet dollars to donuts he promised himself he wouldn't do it again - under no circumstances is a 40% cut dragged out over six months preferable to a clean, fast cut.
What is the age distribution of employees at Block? I'd guess it skews quite young. You won't have much natural attrition or interest in early retirement. And in this job market, voluntary severance is also not very attractive.
Young people love to be paid to switch jobs too - who would refuse 20 weeks of doubled income? From Square's perspective, this would lead to adverse selection: Square would lose the people who can easily get jobs elsewhere.
Sure, but 5 months isn't the comparison. They state that they're cutting early. We don't know the exact position, but perhaps over 2-3 years would be ok, and 40% in 3 years isn't unheard of, particularly in the valley, and particularly with incentives.
> you could reduce headcount via attrition which is better in some ways
I don't think reducing via attrition is better for the company; for the employees, 100%. But attrition means your people moving to other companies or retiring, so you're effectively bleeding the people with options (usually your above-average ones) and those with the most experience, in favour of "the rest".
It's a nuanced trade-off. It's worse for the company, as you said, and it may be worse for the employees too: some will leave from burnout without severance, and those remaining will typically have more work to do.
But my point was that what was presented was a false dichotomy and that framing it as such is disingenuous to employees receiving those comms.
I guess you could consider it that; I read "prolonged bleeding" as a series of smaller layoffs. That's a fair point. Although then I'd say it's still disingenuous to frame it that negatively when many may see it as the better option.
On paper you're right, but in reality, while doing so, you give higher-ups an incentive to put measures in place that make their underlings' lives atrocious: mandatory RTO for no clear reason, jumping through hoops to get anything done, budget cuts, and so on. At least that's what I experienced, and talking with friends, that was the case for them as well.
>That's a false dichotomy, you could reduce headcount via attrition which is better in some ways.
I disagree. Slow bleeding just means everyone in the company walks around thinking they are next, never knowing when the next set of cuts are going to happen or when they are finished. Cutting 40% is a quick blow, and everyone that is left knows they are safe.
Attrition by definition implies no cuts: just people leaving for the usual reasons people leave, and not replacing them. Attrition can be accelerated by providing incentives like exit bonuses.
Not surprising, Nvidia's margin was just a huge incentive for companies/countries to develop their own solutions. You don't have to be 100% as good if you're 80% cheaper. It's unsurprising that this is being driven by Chinese companies/labs who often have a lot less funding than the US, and the big tech companies (Google, Microsoft, Amazon) who will benefit the most from having their own compute.
I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) is gone and, surprisingly, no longer a priority for them.
It seems like it’s really only China that’s pursuing the route of doing more with smaller/cheaper models, too, which also has a lot of potential to give the whole bubble a good shake.
To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.
I think you might be underestimating the use of small models in proprietary systems. The progress from China is very visible because it's very open, but the big tech companies are doing this too for cost savings.
Feels a bit crazy saying this but I can imagine a weird future where we have some outlawed Chinese tokens situation under some national security guise. No clue how that would work but nothing surprises me anymore.
Nvidia's margins are a wake-up call for anyone reliant on their tech. As companies in places like China pursue self-sufficiency, the competitive landscape is shifting quickly, opening up space for innovation from unexpected sources.
Cars worked fine without seatbelts too. Just because the world goes on doesn't mean we can't do better.
Taking a step back though, I suspect there are cultural differences in approach here. Growing up in Europe, the idea of a regulation to make everyone safer is perfectly acceptable to me, whereas I get the impression that many folks who grew up in the US would feel differently. That's fine! But we also have to recognise these differences and recognise that the platforms in question here are global platforms with global impact and reach.
OTOH the controlling way modern software behaves is a US artifact, so the differences are not necessarily as clear-cut as this.
I grew up and live in Europe. I support the general idea of "regulation to make everyone safer" being an acceptable choice. At the same time, I vehemently oppose third-party interests reaching into my computing device and dictating what I can vs. cannot do with it.
But as you say, "global platforms with global impact and reach" - and so I can't set up my phone to conditionally read out text and voice messages aloud, because somewhere on the other side of the world, someone might get scammed into installing malware, therefore let's lock everything down and add remote attestation on top.
Unfortunately, the problem is political, not technological, and this here is but one facet of it. Ultimately, what SaaS does is give away all leverage: as users, it doesn't matter if we fully own the endpoints, or have a user-friendly vendor: any SaaS can ultimately decide not to serve a client that doesn't give the service a user-proof beachhead.
It might not "solve" the problem, but I'd expect it to significantly address it, no?
I've heard much criticism of it being too heavy-handed, but I don't think I understand criticism that it won't improve security. Could you expand on that?
No. You seem to be implicitly arguing that unsigned apps are inherently less trustworthy than Play Store apps. That's a claim that needs to be proven first. And based on the huge amount of documented data exfiltration performed by Google-approved apps, I'm going to say that claim is false.
I'm arguing that a curation process that includes security review is likely to produce a more secure set of software. Admittedly it might be completely ineffective, but I think that's an unreasonable assumption. So some review is more secure than no review. Now I'm not saying "better" - you could argue it's a false sense of security, but it's still more security.
> I'm arguing that a curation process that includes security review is likely to produce a more secure set of software
I actually totally agree! There is no external entity users can rely on to make sure apps they download are legitimate. I read the thread from root to this comment and I don't see it mentioned, so I'm not sure if you know this and are just arguing something else but...
There is actually nothing about testing or verifying apps themselves in the announcement made by Google. It's just about enforcing developer verification in some Google service and "registering the apps".
EDIT: I checked your profile, and I now see that you actually work at Google, on Android... Is there something I misunderstood about these announcements?
> you could argue it's a false sense of security, but it's still more security
Well here I don't agree, I would much rather be aware of the dangers than think I'm safe when I'm actually not.
LLMs using code to answer questions is nothing new, it's why the "how many Rs in strawberry" question doesn't trip them up anymore, because they can write a few lines of Python to answer it, run that, and return the answer.
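As an illustration (my own sketch, not what any particular model actually emits), the kind of snippet a model's code tool might run:

```python
# Counting letters in code sidesteps tokenisation entirely:
# the model sees tokens, but Python sees individual characters.
word = "strawberry"
count = word.lower().count("r")
print(count)  # prints 3
```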
Mathematica / Wolfram Language as the basis for this isn't bad (it's arguably late), because it's a highly integrated system with, in theory, a lot of consistency. It should work well.
That said, has it been designed for sandboxing? Sandboxing is a core requirement of this "CAG". Python isn't great at it, but it's possible thanks to the significant effort put in by many over the years. Does Wolfram Language have that same level of support? As it's proprietary, it's at a disadvantage: any sandboxing technology would have to be developed by Wolfram Research, not the community.
I also think that sandboxing is crucial. That’s why I’m working on a Wolfram Language interpreter that can be run fully sandboxed via WebAssembly: https://github.com/ad-si/Woxi
Awesome. I'm pretty unfamiliar with the Wolfram Language, but my understanding is that its power came from being very batteries-included in terms of standard library and even data connections (like historical weather or stock market data).
What exactly does Woxi implement? Is it an open source implementation of the core language? Do you have to bring your own standard library or can you use the proprietary one? How do data connections fit into the sandboxing?
I realise I may be uninformed enough here that some of these might not make sense though, interested to learn.
Yes, we agree that a lot of the value comes from the huge standard library. That's why we try to implement as much of it as possible; right now we support more than 900 functions. All the Data functions will be a little more complicated, of course, but they could e.g. make a request to online data archives (ourworldindata.org, wikidata.org, …). So I think it's definitely doable.
We also want to provide an option for users to add their own functions to the standard library. So if they e.g. need `FinancialData[]` they could implement it themselves and provide it as a standard library function.
> it's why the "how many Rs in strawberry" question doesn't trip them up anymore, because they can write a few lines of Python to answer it, run that, and return the answer.
That still requires the LLM to ‘decide’ that consulting Python to answer that question is a good idea, and for it to generate the correct code to answer it.
Questions similar to "how many Rs in strawberry" are nowadays likely in their training set, so they are unlikely to make mistakes there, but it may still be a problem for other questions.
>LLMs using code to answer questions is nothing new, it's why the "how many Rs in strawberry" question doesn't trip them up anymore, because they can write a few lines of Python to answer it, run that, and return the answer.
False. It has nothing to do with tool use; it's just reasoning.
Oh right you're very focused on specifically the strawberry problem. I just gave that as a throwaway example. It's a solution but not necessarily the solution for something that simple.
My point was much more general: code execution is a key part of these models' ability to perform maths and analysis and to provide precise answers. It's not the only way, but it's a key one that's very efficient compared to spending more inference on CoT.
Seems like a poor translation. The German version only speaks of rising costs in various areas, with no mention of any IT branch. They probably meant the IT market in general, not specifically their own company or some division of it.
It's a German hosting company making a translation error from the German "IT-Branche". The wording doesn't appear in the German version, but it very well could have at some point in the process.
Antigravity runs on your machine, the secret is there for the taking.
This is true of all OAuth client logins: it's why the secret doesn't mean the same thing as it does with server-to-server login - you can never fully trust the client.
OAuth impersonation is nothing new, it's a well known attack vector that can't really be worked around (without changing the UX), the solution is instead terms of service, policies, and enforcement.
No, 100% would be equal to paying someone 50% of market rate. If market is $100k and someone was paid $80k, you could say “paid 80% market rate” or “25% less than market rate” (since a 25% pay increase would bring them to market rate)
No, in the same way that a 100% increase is a doubling, a 100% decrease is to zero.
If you said that the market rate was 100% more than the workers were being paid, that would be correct, but that's a different baseline and not what was stated in the title.
You're mixing up your baselines. I don't know how you got there in your second example. 25% less than "some number" is always (some number * (1 - 0.25)).
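To make the baseline point concrete, here's a worked example of my own using a $100k market rate and $80k pay, the numbers from upthread:

```python
market = 100_000
paid = 80_000

# Fraction of market rate actually paid: 80%.
assert paid / market == 0.80

# "25% less than market" would be market * (1 - 0.25) = $75k, not $80k.
assert market * (1 - 0.25) == 75_000

# The pay is 20% below market (market as the baseline)...
assert (market - paid) / market == 0.20

# ...yet a 25% raise is needed to reach market (pay as the baseline).
assert paid * 1.25 == market
```

The last two assertions are the whole disagreement: "X% below market" and "needs an X% raise" use different baselines, so the percentages differ.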
If you go to an all you can eat buffet, ignore the plates they give you, and start filling up your own takeaway boxes with days worth of food, you'd expect to be kicked out.
No one would think this is unreasonable. You're not paying for unlimited food forever, you're paying for all you can eat in the restaurant right there.
I'm confused why the presence (or lack of) a limit is relevant to the pretty simple analogy...
A buffet is saying "pay $X to eat food one plate at a time [up to 100 lbs of food]", and you show up and start shoveling the food into your bag. It does not really matter if we remove the 100 lbs part.
Could you technically eat the same amount of food one plate a time? Sure. But if everyone does this, $X needs to be significantly more: even for the people who eat one plate at a time.
-
You could also argue they're playing a mean trick and deceiving people because technically someone could eat the same amount of food 1 plate at a time...
But they priced $X based on how much the average person can eat, not how much food they can carry in their arms. If the limits are so high that people don't leave hungry eating 1 plate at a time, it still seems like a fair deal.
I'm not exactly the type to jump for joy at siding with a corporation, but I really don't get why people are in a hurry to ruin a good thing.
I don't think there's even a hard limit. The limit is a soft one, enforced through the UX of the tool, the features it provides, or even how it's marketed. There are always going to be high-cost users and low-cost users; service providers know this and build it into their revenue modelling.
Another example is home internet connections. They're unmetered where I live, but I'm also told I can't run public internet services on it. Why? Because with "personal/home usage" there's just a practical limit to how much I can use my ~1Gbps pipe, whereas if I ran a public service I might max out that pipe. I'm a pretty heavy user (~60GB a day), but that's a world of difference from the >10TB I could theoretically hit.
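Rough arithmetic on that gap, using my own ~1 Gbps and ~60 GB/day figures:

```python
# Theoretical ceiling of a saturated 1 Gbps link over one day.
seconds_per_day = 24 * 60 * 60
bytes_per_day = 1e9 / 8 * seconds_per_day    # 1 Gbps in bytes/s, for a day
tb_per_day = bytes_per_day / 1e12
print(f"ceiling: {tb_per_day:.1f} TB/day")   # ceiling: 10.8 TB/day

# My ~60 GB/day of "heavy" personal use is well under 1% of that.
print(f"usage: {60e9 / bytes_per_day:.2%}")  # usage: 0.56%
```

So an ISP can safely sell "unmetered" to home users: practical usage sits orders of magnitude below the line capacity.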
> but I really don't get why people are in a hurry to ruin a good thing
This is the crux of it. I like services limited by practicality because they're a heck of a lot cheaper. If people want more usage there's always API billing, they just have to pay for what they're actually using.
No, if you did that, they'd start by saying "hey, stop that", not jump immediately to "you're banned from every Golden Corral location for the rest of your life".