This is a false dichotomy. The choice is not lab-grown meat or suffering. Farmed animals could live happy, healthy lives and then be culled in a humane way.
The problem is that it costs slightly more and our society is more concerned with cost than animal suffering.
> Farmed animals could live happy, healthy lives and then be culled in a humane way.
Breeding animals _specifically for killing them_, no matter how they are killed, is not what I'd consider humane. If we take 'humane' literally, it means to be treated as you would treat a human. I doubt we'd do this to humans. So the only way to be okay with this is adhering to a form of speciesism.
If so, by how much? 1.5 billion pigs are slaughtered each year, is that too few to matter?
It's so normalized to think of animals as worthless that I don't blame anyone for not having thought about it, but the moral calculus is really easy. Most people wouldn't be comfortable with killing a dog, and yet, every three days, we have two more Holocaust-worths of a smarter animal (namely, pigs).
I happen to not eat meat, but I still think that your bar is too high.
If those animals were not raised by humans, it could still be said that in their natural conditions they grow up only to be killed at some point, since few animals die "of natural causes"; most are killed either by predators or by parasites.
For example, my grandparents raised chickens. There is no doubt that those chickens had a more pleasant life than any wild chicken. During the day, they roamed through a vast space with abundant vegetation, insects and worms, where they fed exactly like any wild chicken would. They also received from time to time some grain supplement that was greatly appreciated. And they were not permanently harassed and stressed by predators, as they would have been in their natural environment.
From time to time, a chicken was caught and killed out of sight. This shortened the lives of the chickens that were not kept for egg laying, but in comparison with wild chickens, most of these domestic chickens would still have lived longer on average.
So I think that as long as domestic animals do not live worse than their wild ancestors, this can be called a "humane way", even when they are raised with the purpose of eventually being killed. At least being raised by humans has ensured the survival of the domesticated species, while all larger non-domesticated terrestrial species have been reduced to negligible numbers of individuals in comparison with humans and domestic animals, and for most of them extinction is very likely.
Unfortunately, today it has become rare to see animals raised in such conditions as at my grandparents'. Most are raised on industrial farms in conditions that cannot be called anything other than continuous torture, and I despise the people who torture animals in order to increase their revenues.
> I doubt we'd do this to humans. So the only way to be okay with this is adhering to a form of speciesism.
The obvious historical pointer is the Holocaust, a well-covered example of people being treated like that - and a fact often swept under the rug: people are being treated essentially like that in North Korea, today.
Yeah, but it seems we do think it's okay, after all North Korea has been doing that for decades and people don't care whatsoever. NK contractors continue to be employed all over the world.
Instead, people are outraged about the dictators in the Middle East etc.
Whatever you wanna say about the Gaza atrocities - it applies 100x to North Korea. And yet, nobody cares.
I think that's the real takeaway from these issues: nobody cares unless there is a politically motivated party that wants to achieve its goals and thus shapes the public narrative enough to get its will through.
That may be true in your region. I myself live in Germany. Here, you get subsidies per animal on the farm - hence here, you're literally paying for and enabling it whether or not you actually purchase the meat. But I get where you're coming from.
I mean, businesses, the government, and capital economies in general treat me like garbage to be disposed of when it's convenient, so while ideally I would like to agree with you, in practice I don't see that much difference. I can't even count how many times my life has been put on the line because some asshole wanted to save $10 or 2 minutes.
Also many domesticated animals would be completely extinct if we didn't eat or farm them, which isn't a great prize, but it isn't nothing either.
Personally, until energy is produced so plentifully as to be approaching free for individuals, I don't believe we can really afford to forgo animal agriculture. Yes, we use farmland to grow animal feed and not human food, but the animal feed we grow is often much easier to grow, and in many cases reduces artificial fertilizer usage, which is a huge use of fossil fuels. Take cows, for example: they are fed roughly 90% grass or alfalfa while growing. Yes, in their last month or two of life they are given grain to fatten them up (which is optional, but customers prefer it), but the alfalfa they ate for 90% of their growth required no fertilizer or pesticides, and even fixes nitrogen in the soil, while harvesting consists of mowing it down and packing it into bales, and it can be stored for years almost anywhere, including outside or in random pits or 100-year-old barns. Dent corn does require fertilizer (especially if you don't rotate with alfalfa or other legumes), but it can also be stored for years with no more processing than is done within 10 seconds of harvest by the combine, and it can be fed to animals directly, while sweet corn is far more delicate, less fertilizer-efficient, and doesn't store for long without refrigeration, canning, or freezing. Also, animals like pigs are often fed food that is otherwise spoiled or inedible.
Meat animals are also a great emergency store of food for a disaster. A volcano can reduce global yields by 30%, and while crop subsidization will help, as we might have an extra 10% growing to start with, that may still end in shortages, because people keep fighting to keep crop subsidization low, not understanding why it exists (to prevent famine, which it has successfully done for over 100 years in every country that implements it). But you can give yourself an extra few years of food from animal stores by slaughtering the herds and drawing down old animal-feed stockpiles, either just to wait for overall crop yields to increase again, or to buy time to increase farming efforts.
In a more ideal world I would definitely go along with reducing or even eliminating most animal agriculture, but in the world we currently live in it seems like a liability to me. I simply don't trust the people controlling most of our capital and wielding most of the power of human civilization to manage it well enough now or anytime in the near future. Not to mention the educational requirements for most cultures to switch to a purely vegetarian diet without all sorts of side effects from poor nutritional balance.
It is not at all a logical, ethical, or emotional contradiction to say that we should have humanely-raised livestock whose purpose is to be slaughtered for meat. We can ensure that they are kept in healthy, safe, and humane ways during their lifetimes, and are killed as quickly, cleanly, and painlessly as we can reasonably manage.
And, um, the meaning of "humane" is only loosely related to "treat like a human." It means "treat well, view with compassion", and similar. We talk of things being "humane" or "inhumane" in how we treat each other, too. [0]
Humans have been raising meat animals with compassion and treating them well during their lifetimes for longer than we have had written language. Veganism/vegetarianism is not even physically an option for many people, excludes many cultures' traditional practices, and very, very often requires (or at least tends toward) supplementing with foods that are at least as unethical as factory farming practices.
I don’t think you can call it compassion when you own an animal with the sole purpose of efficiently growing it and then killing it so its body can be dismembered and sold off. This is treating animals as property that produces profit.
Raising animals compassionately, slaughtering them for meat as painlessly as possible after a healthy, happy life
OR
Letting people who require meat in their diet (there are a number of reasons this may be the case) die slow, painful deaths as their bodies fail around them?
It's real easy to say that "no one should ever kill an animal to live" when you ignore the disabilities and chronic conditions that make surviving on plants alone impossible, or prohibitively expensive.
Why wouldn't they? Animals definitely have a moral sense. Monkeys, for example, react violently to social injustice against them. Animals also take compassionate actions that bring them no benefit or even incur a cost. A purely moral act.
> supplementing with foods that are at least as unethical as factory farming practices
That's the first time I hear about supplements being unethical, let alone "compared to factory farming". Stretching the usual arguments, maybe it's almonds' water use, or soy in Brazil? I'd be glad if you clarified your point.
The harvesting of a number of the common foods used to supplement vegetarian and vegan diets (e.g., soy, agave, quinoa) is variously destructive to the environment and based on labor practices exploitative enough that they sometimes verge on slavery.
Oh I understand the confusion: soy, quinoa and agave are not supplements but food. I guess we might agree on "alternative" but the word choice isn’t your point.
Soy is a strange pick, as it’s mostly cultivated to feed livestock (77%), and using it for humans instead would require substantially fewer crops.
Agave… is mostly water and glucose, without many minerals or much protein. How do vegetarians require or tend toward that food more than others do?
Quinoa’s region of origin doesn’t have the same working standards as the US/EU, but I’m not aware of a difference from bananas, coffee, avocado, cacao, vanilla, coconut, palm oil (or soy)… However, quinoa also grows in other regions: here in France you can find local quinoa at the same price as the one from Bolivia, around 8€/kg (organic). It’s super healthy but not very popular, though.
Beans and lentils are more popular, I think (my own non-scientific estimation), but yeah, soy is great and tasty.
Many farm animals aren't bred specifically for killing them. Think egg-laying hens and ducks, milk-producing cows and goats, etc.
Not too different from humans in that respect; humans are bred systematically (we have dedicated hormonal supplements, birth facilities, documented birthing procedures, standardized post-birth checklists, vaccination regimens, standardized mass schooling, government-subsidized feeding programs, etc.) and most are used mechanistically by society exclusively for productive output, regardless of whether the society is corporatist, capitalist, socialist, communist, etc.
Yup, exactly. And when animals are bred specifically for milk, they aren't treated well even before they are killed. Dairy cows need to be kept continuously pregnant / in a lactating state through artificial insemination. They don't magically produce milk all year round.
And pregnancy is _hard_ on animals (including humans), it changes your physiology and psychology. Even if we take for granted that a cow isn't as conscious as a human (IMO consciousness is a sliding scale, not a binary), then they are still being primed for giving birth and taking care of offspring which never comes. Imagine doing that to a human - it's a definite form of cruelty.
To be fair you can keep a cow milking for LONG after it gave birth, but yields are a bit lower and baby cows are very valuable in themselves, which is generally the same kind of problem that keeps most bad farming practices going. Dairy farming is already a bottom of the barrel low margin business. And running a dairy farm while not maximizing calf output is like running a low-yield gold mine while dumping gem quality emeralds out with the tailings. Yeah it can be done, but you would be foolish not to do both, especially if it means your competitors across the road are able to squash your business with far higher capital returns after a few years.
What do you think happens to a cow when she stops producing milk? She is kept constantly pregnant; what do you think happens to most of her male offspring? In the end, almost every single cow’s life ends in violence.
I think what GP meant is that when it involves money, suffering is nearly impossible to prevent. That's why you have puppy mills, for example. Most people don't know how the puppies are raised, they just see the cute puppy in the shop. The same way people see a pretty piece of meat in the supermarket and don't know its history.
Raising animals for meat is theoretically doable with no suffering (not sure about milk), but it's not happening in practice. With pets the situation is better - a lot of people adopt and some care about how their pet was raised if they buy it from a breeder.
I didn't intend to. I think that domesticated animals have long had a harmonious relationship with humans so I find it a bit difficult to believe that it's always an ethical dilemma. Pets are just the most obvious lens to identify that.
I also think we need to be careful with the idea that we should entirely avoid suffering because it's impossible to do.
I think that it is what you know of the history of animal domestication and of pets that makes you think that there is an acceptable and low amount of suffering.
For pets, I don't think you understood what GP was saying: pet breeding involves massive amounts of death of puppies/kittens that aren't pretty enough or don't manage to survive infancy, the female breeders are basically confined to cages and "producing" all their life, some short-nosed breeds of dogs and cats are even illegal in some countries because they spend their life unable to breathe properly, pets are abandoned and killed, etc. The happy pets you see in the street are not representative of what it is to be a pet. But yes, these ones are not suffering.
As for long and harmonious: as much as we tend to see anything in the distant past as innocent, I'd remind you that the systematic killing of male chicks, the killing of calves so they don't drink all the milk, and the killing of all animals as soon as productivity drops below a threshold are not new practices. No animal wants to be enslaved, the same as no human wants to be enslaved.
I'm not attacking you, just attempting to give you an idea of why other commenters believe animal domestication is not ethical.
> Farmed animals could live happy, healthy lives and then be culled in a humane way.
> The problem is that it costs slightly more and our society is more concerned with cost than animal suffering.
IDK about other livestock, but this definitely doesn't hold for chickens, one of the cheaper meat sources in the US. Switching to breeds that could live more than a very few weeks(!) before getting too overweight to walk would increase the price by far more than "slightly more", and there's no hope of anything fitting any sane definition of "humane chicken farming" without that step.
I suspect it's also true for pigs, not necessarily the "we bred them so wrong that their very existence is a crime against god and nature" part but that the price increase from a "healthy, happy life" would be a lot larger than "slightly more". Maybe also cows, dunno about that one.
It's not like chickens chose this way of life. Such breeds were developed for the specific purpose of meat, with no regard for their wellbeing. Don't shame the chickens for what human bastards do.
Oh, sure, our fault, but fact remains that modern meat-chicken breeds are so incredibly fucked up that it’s not really possible to humanely farm them. They’re like that because of what we did, yes, but step one toward comprehensive humane-farming for chickens would have to be “let those breeds entirely die out” regardless of who’s at fault (and it ain’t the chickens).
> society is more concerned with cost than animal suffering.
Yes, and it's politically very hard to change. I totally understand price sensitivity around food. At least where I live, milk and meat are extremely subsidised. How can you have chicken that is grown, slaughtered, cleaned, packaged, distributed, kept cold all the way, etc., and sell it for 5eur/kg (and cheaper on discount)? There's so much human work, so many resources and so much fuel used that I cannot understand it.
Also - being a vegetarian/vegan is more expensive than being an omnivore.
> being a vegetarian/vegan is more expensive than being omnivore.
Being vegan is cheaper where I live regardless of whether we factor in the subsidies or not. Beans, chickpeas, lentils and sometimes soy (examples of protein sources) are pretty inexpensive. Peanuts, some other nuts (in the culinary sense of the word) and some of the vegetable oils are also inexpensive.
If you factor in foods meant to replace or replicate the taste of a carnivore diet like vegan yogurt, milk or cheese, or things like Beyond Meat burgers, it might become expensive, but you don't have to limit yourself by trying to replicate what you used to eat - you can make lots of things from 10-20 basic ingredients.
Beans, lentils etc. may be inexpensive, but you skip over the fact that they, like any other plant source, contain too little protein relative to the amount of energy.
Being vegan is easy for someone who does hard physical work and so has to eat more than 3000 kcal per day. In that case, eating enough protein from plant sources still leaves enough in the energy budget to also eat other food, for a complete diet.
On the other hand, if you have a sedentary lifestyle, working in front of a computer, it is impossible to eat enough protein without eating too many calories.
There are plant protein extracts, but those are at least 3 to 5 times more expensive than cheap animal protein, e.g. chicken breast.
I have eaten only plant-based proteins for 4 years, but to simultaneously satisfy 3 constraints (enough protein, not so many calories as to cause weight gain, and a price no greater than when eating chicken), in Europe where I live there was only 1 solution, with no alternatives.
The solution was to extract gluten at home from wheat flour, to supplement the protein provided by lentils or other legumes. Any plant-based product I could buy with enough protein would have been more expensive than chicken meat.
Extracting gluten, which is done by washing dough with abundant water, works. However, it requires a lot of time and water. Extracting pure gluten requires so much water and time that I never did it regularly. I typically removed around 75% of the starch from the wheat flour, and with the result I baked a bread that had about 50% to 60% protein content.
Besides wasting a lot of water and time every day, this procedure had the additional disadvantage that the amount of calories that came with eating enough protein was still rather large. This severely limited the possible menus: any starchy vegetables (e.g. potatoes, sweet potatoes or rice) and any starchy fruits (e.g. bananas) had to be forbidden. I gain weight extremely easily if I exceed my allowable daily energy intake.
These inconveniences eventually made me abandon this approach. While I still eat mostly vegan food, I also cook with some whey protein concentrate, which can sufficiently increase the protein content of plant-based food.
There already exists a technology for making whey protein concentrate other than by filtering milk: extracting it from a culture of genetically modified Trichoderma fungus. I hope that this technology will become commercially viable, because unlike fake meat produced from cell cultures, it is certain that with such a fungal culture one can make protein less costly than by raising chickens or other domestic animals.
The availability of such a protein powder would completely solve the problem of vegan food for me. I do not need fake meat.
> Being vegan is easy for someone who does hard physical work, so they have to eat more than 3000 kcal per day. In that case, eating enough proteins that come from plant sources will still leave enough in the energy budget to also eat other food, for a complete diet.
> On the other hand, if you have a sedentary lifestyle, working in the front of a computer, it is impossible to eat enough protein without eating too many calories.
Anecdotal counterpoint - I've done hard physical work for ~2 years. For ~10 years before that and for about 1.5 after I stopped the physical work I've been spending most of my days sitting in front of the computer with the occasional walk to the grocery store. No sport or hiking or fitness.
I don't eat TVP or other condensed protein food. I don't count calories or even consciously decide when to eat protein rich food. Sometimes I'll even just eat pasta or other food relatively low in protein and rich in carbs and calories. Yet, I'm in the same physical health as I've always been. Not athletic, but normal weight - the type of weight an annoying aunt would see and say "ooh, you gotta eat more". :) The first BMI calc I tried put me right in the middle of the green zone (yes, I know BMI is not that important). And I could return to the same physical work if I wanted to.
Besides my anecdotal counterpoint, there are many sources online you can find where people discuss their diet, how much it costs and their lifestyle.
> The solution was to extract at home gluten from wheat flour, to supplement the proteins provided by lentils or other legumes.
> any starchy vegetables, e.g. potatoes or sweet potatoes or rice, and any starchy fruits, e.g. bananas, had to be forbidden.
That seems very extreme to me. Were you trying to be a bodybuilder or maintain some low body fat or something while maintaining this sedentary lifestyle? I eat potatoes, sweet potatoes and bananas all the time.
> I gain weight extremely easily if I exceed my allowable daily energy intake.
If that's the case, why not spend some of the calories on exercise? I find it hard to believe that with my current and previous sedentary lifestyle I expend enough calories not to care what I eat on a vegan diet, while you gain weight "extremely easily". Do you have some rare disease, if you don't mind my asking? Cause the only fat people I know are the ones who overeat, regardless of diet. Even if they start blaming "slow metabolism" or something else, it's obvious when you see them eat.
> There are plant protein extracts, but those are at least 3 to 5 times more expensive than cheap animal protein, e.g. chicken breast.
Economy of scale and subsidies.
> There already exists a technology for making whey protein concentrate otherwise than by the filtration of milk, i.e. by extracting it from a culture of genetically-modified Trichoderma fungus.
Interesting. Although I don't see myself buying it, I'll look it up.
> fake meat
It's as real as you can get. "Fake meat" would be TVP prepared like meat. You wouldn't say "fake whey protein" if you extract it from genetically-modified Trichoderma fungus.
> being a vegetarian/vegan is more expensive than being omnivore.
I read this often, but long-term vegetarians usually say the opposite, as do the studies [0]. Bonus point: the veg options are often the cheapest in non-vegan restaurants (although not the tastiest). I have some hypotheses about where this comes from:
- Meat substitutes are seen as a necessary replacement. They are processed, which requires more work, and are therefore more expensive. However, they are as (un)necessary as a finely prepared piece of charcuterie, which isn't cheap either.
- Cost is evaluated at the supermarket shelf, but as you noted, animal products are extremely subsidised. Vegetarians pay for the subsidies but don't use them. Infrastructure like airports, rail, roads and urban amenities isn't free either, even if you don't pay for it directly. How you evaluate the price is at your own discretion.
- Fancy products are placed on the most visible shelves and that's what people see first, but they compare them with the cheapest animal alternative. I'm an engineer and buy my fancy organic tofu at 7-20€/kg, but I used to buy chicken at 25-35€/kg in the same fancy segment. If I were on a budget, I'd probably buy the bottom-shelf tofu at 3.5€/kg, next to the 5€/kg chicken.
- Cost of change: changing habits requires re-creating the optimizations you built over the previous years: where to find the best price for product X, what quantity you should get, or what daily staple you can add to the routine for cheap (fake meat and fancy milk aren't).
0:
> Main findings suggest that food expenditure negatively relates to vegetarian food self-identity, and unemployment status mediates the link between vegetarianism and food expenditure.
This is whitewashing to make consumers happy and less concerned. Nothing is humane about murdering an animal to sell its body parts for profit. That's just marketing.
Strong disagree. There absolutely are “more humane” methods but they are still not humane. Forced separation of mothers from their children (standard practice) and killing the animals before their natural death are two inhumane features of even “free range” practices.
There is no humane way to kill a creature against its will.
To be clear, free range represents a major improvement over the unconscionable horrors of factory farms, but it is not flatly “humane” without qualification.
That's exactly why lab grown meat is the best way to solve the issue.
Society is only concerned with cost, regulations are weak and rarely enforced, and companies are operating in a capitalistic market where they can't compete unless they squeeze every last cent out of each animal. That's hard to change, as lots of people have an interest in keeping the status quo, and the citizens who vote don't have the time to read everything that comes their way. We can't expect that society will wake up, that people will start voting with more conscience, or that everyone will go vegan.
Lab grown meat (or growing brainless animals or something similar) is a technological solution. When it becomes cheaper than normally-grown meat and similar in quality, the atrocities committed in the farms would cease to exist as the farms themselves would cease to exist. The same market forces that are responsible for what's happening to the animals now would prevent any future torture.
I'm skeptical of this claim because there's clearly a growing population that hates the idea of putting anything they don't understand in their bodies. Genetically modified vegetables, food dyes, vaccines, etc.
I find it hard to believe you could convince a large portion of Americans to eat lab grown meat just to save a buck.
I think that population is shrinking, not growing. They're surely vocal, though.
But if lab grown meat is cheaper, some part of the population would buy it. The farms would lose part of their business so economy of scale would help lab grown meat and hurt the farms. I think it would lead to a feedback loop where lab grown meat will get even cheaper and farm meat would get more expensive.
With lab grown meat you also have the option not only for a perfect piece of meat, but for different kinds of tastes, textures and compositions that haven't existed before. Just like people eat processed meat (think ham or nuggets or deep fried pieces), they would love to try the new tastes. I know I would.
I think a lot of people will be suspicious of lab-grown meat after years of fake-meat marketing of substances filled with some unholy combination of seed oils and chemicals (in before some nerd pipes up with a comment about how water is a chemical too kekeke) specifically designed to poison our bodies. Well, maybe I exaggerate about "specifically designed", but fake meats are certainly not healthy for us.
Not to be glib, but the shelves of most American grocery stores do a pretty good job demonstrating that that segment of the population isn't dominant yet. There's a glut of processed food full of ingredients much harder to pronounce than an ingredient list that would read "Ingredients: Beef (cultured)".
And to be glib, I'm not thrilled about the idea of catering to the bar set by "things they don't understand" from that group in particular.
Sure you could, just don't tell them. The fast food burgers are already part soy and no one is yelling. If they replaced that with lab meat people would probably like it better.
Worse. We were told we couldn't have happy, cruelty free meat because it would be expensive. We were told we couldn't have clean meat because it would be expensive. We couldn't have sustainable meat because it's too expensive. We couldn't reduce the environmental harms of meat because it's too expensive. We couldn't have locally produced meat because it's too expensive.
Well we got none of those things. But beef steak is still $25 a pound. I don't really care if it gets more expensive because at current prices I can't eat it regularly anyway and rich assholes will have no problems eating their steaks every day even at $100 a pound so why don't we just have a sustainable, clean, less cruel industry?
Countries reduced their demand for our meat because of horseshit trade-war games, and yet the price went up. Demand for American industrial crops like soybeans, which are used significantly as cattle feed, cratered, to the point that we will have to hand the farms tens of billions of dollars, yet somehow beef still got more expensive. Meat processing uses illegal immigrant labor, sometimes even child labor, and all regulation of those facilities has been dramatically curtailed under the Trump administrations, and yet beef still gets more expensive.
Here's what beef producers say:
>“It’s hard as a beef producer to necessarily say that beef prices are too high. I mean, if people are paying $6 for a latte at Starbucks, but then they’re paying $6 for a pound of beef, they’re able to feed a family of three with that pound of beef,” said Taylon Lienemann, co-owner of Linetics Ranch in Princeton, Nebraska.
In other words, fuck you, pay me. "Starbucks makes great profit, so we should make more." The stated reason for the price increase is a "very small herd", which producers have been reducing partly because of droughts and partly because they don't think the profit is high enough to invest in future production.
I've noticed that open-weight models tend to hesitate to use tools or commands unless they appeared often in their training data or you tell them very explicitly to do so in your AGENTS.md or prompt.
They also struggle at translating very broad requirements to a set of steps that I find acceptable. Planning helps a lot.
Regarding the harness, I have no idea how much they differ but I seem to have more luck with https://pi.dev than OpenCode. I think the minimalism of Pi meshes better with the limited capabilities of open models.
+1 to this; anecdotally, I've found in my own evaluations that if your system prompt doesn't explicitly declare how to invoke a tool and, e.g., describe what each tool does, most models I've tried fail to call tools, or will try to call them but not necessarily use the right format. With the right prompt, meanwhile, even weak models shoot up in eval accuracy.
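To make that concrete, here's a minimal Python sketch of what I mean by declaring tools explicitly. Everything here is made up for illustration (the `run_tests` tool, the schema, and the prompt wording are not from any particular harness); the point is the shape, not the specifics:

```python
import json

def tool_spec(name, description, parameters):
    """Build an OpenAI-style function-calling schema for one tool."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }

def render_system_prompt(tools):
    """Inline each tool's name, purpose, and JSON argument schema into the
    system prompt, along with the exact reply format for a call -- the part
    weaker models most often need spelled out."""
    lines = ["You can call these tools. Reply with a JSON object "
             '{"tool": <name>, "arguments": <object>} to invoke one.']
    for t in tools:
        f = t["function"]
        lines.append(f"- {f['name']}: {f['description']}")
        lines.append(f"  arguments schema: {json.dumps(f['parameters'])}")
    return "\n".join(lines)

# Hypothetical example tool: run the project's test suite.
tools = [tool_spec(
    "run_tests",
    "Run the project's test suite and return failures.",
    {"type": "object",
     "properties": {"path": {"type": "string",
                             "description": "test file or directory"}},
     "required": ["path"]},
)]
prompt = render_system_prompt(tools)
```

With this, the tool's name, description, argument schema, and the exact call format all appear verbatim in the prompt, so the model never has to infer any of it.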
> [...] _but not necessarily use the right format._
This has also been my experience. But isn't the harness sending the instructions on how to invoke a tool? Maybe it is missing the formatting part. What do you think?
I really hope this doesn't hinder development too much. As Simon says, Qwen3.5 is very impressive.
I've been testing Qwen3.5-35B-A3B over the past couple of days and it's a very impressive model. It's the most capable agentic coding model I've tested at that size by far. I've had it writing Rust and Elixir via the Pi harness and found that it's very capable of handling well defined tasks with minimal steering from me. I tell it to write tests and it writes sane ones ensuring they pass without cheating. It handles the loop of responding to test and compiler errors while pushing towards its goal very well.
I've been playing with 3.5:122b on a GH200 the past few days for rust/react/ts, and while it's clearly sub-Sonnet, with tight descriptions it can get small-medium tasks done OK - as well as Sonnet if the scope is small.
The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked, and I find it has stripped all the preliminary support infrastructure for the new feature out of the code.
That sounds awfully similar to what Opus 4.6 does on my tasks sometimes.
> Blah blah blah (second guesses its own reasoning half a dozen times then goes). Actually, it would be a simpler to just ...
Specifically on Antigravity, I've noticed it doing that trying to "save time" to stay within some artificial deadline.
It might have something to do with the system messages and the reinforcement/realignment messages that are interwoven into the context (but never displayed to end-users) to keep the agents on task.
As someone that started using Co-work, I feel like I am going insane with the frequency that I have to keep telling it to stay on task.
If you ask it to do something laborious like review a bunch of websites for specific content it will constantly give up, providing you information on how you can continue the process yourself to save time. Its maddening.
That’s pretty funny when compared with the rhetoric like “AI doesn’t get tired like humans.” No, it doesn’t, but it roleplays like it does. I guess there is too much reference to human concerns like fatigue and saving effort in the training.
This is what happens when a bunch of billionaires convince people autocomplete is AI.
Don't get me wrong, it's very good autocomplete and if you run it in a loop with good tooling around it, you can get interesting, even useful results. But by its nature it is still autocomplete and it always just predicts text. Specifically, text which is usually about humans and/or by humans.
You are not wrong, but after having started working with LLMs, I have this feeling that many humans are simply autocomplete engines too. So LLMs might be actually close to AGI, if you define "general" as "more than 50% of the population".
Humans are absolutely auto-complete engines, and regularly produce incorrect statements and actions with full confidence in it being precisely correct.
Just think about how many thousands of times you've heard "good morning" after noon both with and without the subsequent "or I guess I should say good afternoon" auto-correct.
Well, the essence of software engineering is taking these complex real-world tasks and breaking them down into simpler parts until they can be done by (conceptually) simple digital circuits.
So it's not surprising that eventually autocomplete can reach up from those circuits and take on some tasks that have already been made simple enough.
I think what's so interesting is how uneven that reach is. On some tasks it is better than at least 90% of devs, maybe even superhuman (by which I mean better than any single human; I've never seen an LLM do something that a small team couldn't do better given a reasonable amount of time). In other cases, actual old-school autocomplete might do a better job: the extra capabilities added up to negative value and their presence was a distraction.
Sometimes there is an obvious reason why (solving a problem with lots of example solutions online vs. working with poorly documented proprietary technologies), but other times there isn't. They have certainly raised the floor somewhat, but the peaks and valleys remain enormous, which is interesting.
To me that implies there is both lots of untapped potential and challenges the LLM developers have not even begun to face.
Yep. The veil of coherence extends convincingly far by means of absurd statistical power, but the artifacts of next-token prediction become far more obvious when you're running models that can work on commodity hardware.
> As someone that started using Co-work, I feel like I am going insane with the frequency that I have to keep telling it to stay on task.
Used to have the same thing happening when using Sonnet or Opus via Windsurf.
After switching to Claude Code directly though (and using "/plan" mode), this isn't a thing any more.
So I reckon the problem is in some of these UIs/tools, and probably not in the models they're sending the data to. Windsurf, for example, we no longer use due to its inferior results.
In my experience all of the models do that. It's one of the most infuriating things about using them, especially when I spend hours putting together a massive spec/implementation plan and then have to sit there babysitting it going "are you sure phase 1 is done?" and "continue to phase 2"
I tend to work on things where there is a massive amount of code to write but once the architecture is laid down, it's just mechanical work, so this behavior is particularly frustrating.
I hope you will excuse my ignorance on this subject; as a learning question for me: is it possible to add what you put there as an absolute condition, an overarching mandate that all available functions and data are present, so it's simply plug and chug?
Recently it seems that even if you add those conditions, the LLMs will tend to ignore them. So you have to prompt them repeatedly. Sometimes strong or emphatic language will help them keep it “in mind”.
I found it better to split things into smaller tasks from a first overall analysis, make it do only that subtask, and have it give me the next prompt once it's finished (or feed that to a system of agents). There is a real threshold beyond which quality is lost.
Yeah, that happened to me with Claude Code on Opus 4.6 1M for the first time today. I had to check that the model hadn't changed. It was weird. I was imagining that maybe Anthropic has a way of deciding how much resource a user actually gets and had suddenly downgraded me or something.
But how do you see the current thinking level and how do you change it? I’ve been clicking around and searching and adding “effortLevel”:”high” to .claude/settings.json but no idea if this actually has any effect etc.
Opus 4.6 found in my documentation how to flash the device and wanted to be clever and helpful and flash it for me after doing series of fixes. I've got used to approving commands and missed that one. So it bricked it.
Then I wrote extra instructions saying flashing of any kind is forbidden. A few days later it did it again and apologised...
Haha yeah I've had this happen to me too (inside copilot on GitHub). I ask it to make a field nullable, and give it some pointers on how to implement that change.
It just decided halfway that, nah, removing the field altogether means you don't have to fix the fallout from making that thing nullable.
> to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked
That's likely coming from the 3:1 ratio of linear to quadratic attention usage. The latest DeepSeek also suffers from it which the original R1 never exhibited.
For the uninitiated: Interestingly, it is not advisable to take this to the extreme and set temperature to 0.
That would seem logical, as the results are then completely deterministic, but it turns out that a suboptimal token may result in a better answer in the long run. Also, allowing for a little bit of noise gives the model room to talk itself out of a suboptimal path.
I like to think of this like tempering the output space. With a temperature of zero, there is only one possible output and it may be completely wrong. With even a low temperature, you drastically increase the chances that the output space contains a correct answer, through containing multiple responses rather than only one.
I wonder if determinism will be less harmful to diffusion models because they perform multiple iterations over the response rather than having only a single shot at each position that lacks lookahead. I'm looking forward to finding out and have been playing with a diffusion model locally for a few days.
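Mechanically, temperature just rescales the logits before softmax sampling. Here's a minimal sketch (plain Python, made-up logits) showing why temperature 0 collapses to a single deterministic output while even a low temperature leaves room for lower-ranked tokens:

```python
import math
import random

def sample(logits, temperature):
    """Sample a token index from logits after temperature scaling.
    temperature = 0 is greedy argmax (fully deterministic); higher
    values flatten the distribution, letting lower-ranked tokens through."""
    if temperature == 0:
        # Deterministic: always the single highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.5, 0.1]
print(sample(logits, 0))    # always token 0
print(sample(logits, 0.8))  # usually token 0, sometimes 1 or 2
```

At temperature 0 the model can never "talk itself out" of a bad path, because the same prefix always yields the same continuation.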
Yup. I think of it as how off the rails do you want to explore?
For creative things or exploratory reasoning, a temperature of 0.8 leads to all sorts of excursions down the rabbit hole. However, when coding and needing something precise, a temperature of 0.2 is what I use. If I don’t like the output, I’ll rephrase or add context.
> The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked,
This is my experience with the Qwen3-Next and Qwen3.5 models, too.
I can prompt with strict instructions saying "** DO NOT..." and it follows them for a few iterations. Then it has a realization that it would be simpler to just do the thing I told it not to do, which leads it to the dead end I was trying to avoid.
Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me when using the default chat template, however it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...
Yes, it fails too. I’m using the unsloth q4_km quant. It similarly fails with Devstral 2 Small; I fixed that one by using a similar template I found for it. Maybe it’s the quants that are broken; I need to redownload, I guess.
I've been testing the same with some Rust, and it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself. It seems a little more likely to jam up than some other models I've experimented with.
It's also driving itself crazy with deadpool & deadpool-r2d2 that it chose during planning phase.
That said, it does seem to be doing a very good job in general, the code it has created is mostly sane other than this fuss over the database layer, which I suspect I'll have to intervene on. It's certainly doing a better job than other models I'm able to self-host so far.
> it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself.
I think this is part of the model’s success. It’s cheap enough that we’re all willing to let it run for extremely long times. It takes advantage of that by being tenacious. In my experience it will just keep trying things relentlessly until eventually something works.
The downside is that it’s more likely to arrive at a solution that solves the problem I asked but does it in a terribly hacky way. It reminds me of some of the junior devs I’ve worked with who trial and error their way into tests passing.
I frequently have to reset it and start it over with extra guidance. It’s not going to be touching any of my serious projects for these reasons but it’s fun to play with on the side.
Some of the early quants had issues with tool calling and looping. So you might want to check that you're running the latest version / recommended settings.
> and it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself
I can live with this on my own hardware. Opus 4.6, meanwhile, has developed a tendency to happily chew through the entire 5-hour allowance on the first instruction, going in endless circles. I’ve stopped using it for anything except the heaviest planning now.
I don't know much about how these models are trained, but is this behavior intentional (ie, the people pulling the levers knew that this is how it would end up), or is it emergent (ie, pulling the levers to see what happens)?
I'm getting ~30 tok/s on the A3B model with my 3070 Ti and 32k context.
> Do you feel you could replace the frontier models with it for everyday coding? Would/will you?
Probably not yet, but it's really good at composing shell commands. For scripting or one-liner generation, the A3B is really good. The web development skills are markedly better than Qwen's prior models in this parameter range, too.
In my experience Qwen3.5 is better even at smaller distillations. From what I understand the Qwen3-next series of models was just a test/preview of the architectural changes underpinning Qwen3.5. So Qwen3.5 is a more complete and well trained version of those models.
In my experience Qwen3 Coder Next is better. I ran quite a few tests yesterday and it was much better at utilizing tool calls properly and understanding complex code. For its size, though, 3.5 35B was very impressive. Coder Next is an 80B model, so I think it's just a size thing; also, for whatever reason, Coder Next is faster on my machine. The only model that is competitive in speed is GLM 4.7 Flash.
I use the term "harness" for those - or just "coding agent". I think orchestrator is more appropriate for systems that try to coordinate multiple agents running at the same time.
This terminology is still very much undefined though, so my version may not be the winning definition.
It's really easy to set up with any OpenAI-compatible API. I self-host Qwen3 Coder Next on my personal MBP using LM Studio and just dial in from my work laptop with Zed and Tailscale, so I can connect from wherever I might be. It's able to do all sorts of things: run linting checks and tests, look for issues, refactor code, create files, and so on. I'm definitely still learning, but it's a pretty exciting jump from just talking to a chatbot and copying and pasting things manually.
In my tests, Qwen3.5-35B-A3B is better, there is no comparison. Better tool calling and reasoning than Qwen3-Coder-Next for Html/Js coding tasks of medium size. Beware the quants and llama.cpp settings, they matter a lot and you have to try out a bunch of different quants to find one with acceptable settings, depending on your hardware.
It's the number of active parameters for a Mixture of Experts (misleading name IMO) model.
Qwen3.5-35B-A3B means that the model itself consists of 35 billion floating point numbers - very roughly 35GB of data - which are all loaded into memory at once.
But... on any given pass through the model weights only 3 billion of those parameters are "active" aka have matrix arithmetic applied against them.
This speeds up inference considerably because the computer has to do fewer operations for each token that is processed. It still needs the full amount of memory, though, as the 3B active parameters it uses are likely different on every iteration.
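A toy sketch of the routing idea. Real MoE layers use learned gating networks and route per token per layer; this just illustrates why only a fraction of the weights get touched on any pass (all numbers are made up):

```python
# Toy MoE router: 8 "experts", top-2 active per token. Only the selected
# experts' weights participate in the forward pass for that token, which
# is why a 35B-total / 3B-active model computes roughly like a 3B model.
def route(gate_scores, top_k=2):
    """Return the indices of the top_k highest-scoring experts."""
    return sorted(range(len(gate_scores)),
                  key=lambda i: gate_scores[i], reverse=True)[:top_k]

gate_scores = [0.1, 0.9, 0.05, 0.3, 0.2, 0.7, 0.15, 0.4]  # one score per expert
active = route(gate_scores)
print(active)  # [1, 5]: the other six experts are skipped for this token
```

Because the next token may select a different pair of experts, all experts still have to be resident (or quickly loadable), even though only a couple are exercised at a time.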
It will benefit from the full amount of memory for sure, but AIUI if you use system memory and mmap for your experts, you can execute the model with only enough memory for the active parameters; it's just unbearably slow since it has to swap in new experts for every token. So the more memory you have in excess of that, the more inactive but often-used experts can be kept in RAM for better performance.
Agreed, but for a dense model you'd have to stream the whole model for every token, whereas with MoE there's at least the possibility that some experts may be "cold" for any given request and not be streamed in or cached. This will probably become more likely as models get even sparser. (The "it's unusable" judgment is correct if you're considering close-to-minimum requirements, but for just getting a model to fit, caching "almost all of it" in RAM may be an excellent choice.)
Unlike offloading weights from VRAM to system RAM, I just can't see a situation where you would want to offload to an SSD. The difference is just too large, and any model so large that you can't run it in system RAM is probably unusable except in VRAM.
Unusable for anything like realtime response, yes. Might be usable and even quite sensible to power less-than-realtime uses on much cheaper inference platforms, as long as the slow storage bandwidth doesn't overly bottleneck compute.
On Google, just clicking "AI Mode" gives you a substantially smarter model, and it's still pretty weak. But I assume the OP wasn't talking about Google because it doesn't seem to make this mistake even in a search.
It was bing as that is the default for Edge as supplied on my work laptop. It doesn't do this now, but it does do something else quite weird:
search: was val kilmer pregnant or in heat
answer:
Not pregnant
Val Kilmer was not pregnant or in heat during the events of "Heat." His character, Chris Shiherlis, is involved in a shootout and is shot, which indicates he is not in a reproductive or mating state at that time.
And then cites wikipedia as the source of information.
In terms of cognition the answer is meaningless. Nothing in the question implies or suggests that the question has to do with a movie. Additionally, "involved in a shootout and is shot, which indicates he is not in a reproductive or mating state" makes no sense at all.
If you asked a three-year-old a question that they proceeded to completely flub, would you then assume that all humans are incapable of answering questions correctly?
Nobody is arguing for the quality of the search overviews. The models that impress us are several orders of magnitude larger in scale, and are capable of doing things like assisting preeminent computer scientists (the topic of discussion) and mathematicians (https://github.com/teorth/erdosproblems/wiki/AI-contribution...).
Microsoft is bad at AI and this is a great example. I'm wondering if someone saw your post on HN and tried to hardcode a rule here, because I agree, it's nonsense. None of the actual AI companies are emitting nonsense like this.
My understanding, from listening/reading what top researchers are saying, is that model architectures in the near future are going to attempt to scale the context window dramatically. There's a generalized belief that in-context learning is quite powerful and that scaling the window might yield massive benefits for continual learning.
It doesn't seem that hard because recent open weight models have shown that the memory cost of the context window can be dramatically reduced via hybrid attention architectures. Qwen3-next, Qwen3.5, and Nemotron 3 Nano are all great examples. Nemotron 3 Nano can be run with a million token context window on consumer hardware.
I don't disagree with this, but I don't think the memory cost is the only issue right? I remember using Sonnet 4.5 (or 4, I can't remember the first of Anthropic's offerings with a million context) and how slow the model would get, how much it wanted to end the session early as tokens accrued (this latter point, of course, is just an artifact of bad training).
Less worried about memory, more worried about compute speed? Are they obviously related and is it straightforward to see?
The compute speed is definitely correlated with the memory consumption in LLM land. More efficient attention means both less memory and faster inference. Which makes sense to me because my understanding is that memory bandwidth is so often the primary bottleneck.
We're also seeing a recent rise in architectures boosting compute speed via multi-token prediction (MTP). That way a single inference batch can produce multiple tokens and multiply the token generation speed. Combine that with more lean ratios of active to inactive params in MOE and things end up being quite fast.
The rapid pace of architectural improvements in recent months seems to imply that there are lots of ways LLMs will continue to scale beyond just collecting and training on new data.
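The memory-bandwidth bottleneck is easy to sanity-check with back-of-envelope arithmetic. Every generated token must stream the active weights through memory at least once, so bandwidth divided by bytes-per-token gives a hard ceiling on decode speed (all numbers below are illustrative assumptions, not measurements):

```python
# Rough decode-speed ceiling for a sparse MoE model:
#   tokens/s <= memory_bandwidth / bytes_streamed_per_token
active_params = 3e9      # e.g. an A3B model: ~3B active parameters per token
bytes_per_param = 0.55   # ~4.4 bits/param, roughly a 4-bit quant with overhead
bandwidth = 450e9        # ~450 GB/s, ballpark for a consumer GPU

bytes_per_token = active_params * bytes_per_param
ceiling = bandwidth / bytes_per_token
print(round(ceiling))  # ~273 tok/s upper bound; real throughput is far lower
```

Real-world numbers land well under the ceiling (attention over the KV cache, kernel overhead, and partial CPU offload all eat into it), but the proportionality explains why fewer active parameters translate so directly into faster generation.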
The parent commentator is a bit confused - most of the innovation in these hybrid architectures comes from reducing the computation pressure not just the memory pressure.
I challenge each and every one of you to make a pie by the end of the month.
I made one, for the first time in my life, last week. It brought me tremendous joy not only to make it, but to have something nice to share with friends.
My grandma made Platonically Ideal Pies, and I took up the art years ago. Mine, if I say so myself, are quite good, given that with Grandma's example I know what I'm shooting for.
I haven't made one for a few years, though - having a pie in my house is a recipe for me eating 5000 calories of pie and vanilla ice cream over the next few days.
When my grandma died a few years ago, I asked my aunts if I could have one of her pie pans. Apparently none of her other 17 grandkids thought to ask that - so I got all three (philistines!). Those basic metal pans are among my most cherished possessions.
In case you did not actually nail perfect, flaky crust the first time, that's a fun parameter to try to optimize. I finally got it at some point, and when I did, I realized that all those old cookbooks that said things like "use little water" and "keep the dough cold"---all the tricks where I thought "that has to be a myth"---turned out to be essential. The Joy of Cooking is full of old wisdom like this that has taken me ages to appreciate.
The "trick" to baking all kinds of things well for the first time is to follow the recipe fanatically. It says high protein flour? Use that, not all purpose flour. It says 500g caster sugar? Don't think that's going to be too sweet and add 400g, the sugar is where the texture comes from (and there's plenty of less sweet recipes to choose instead). It says make sure the dough is chilled? Chill the damn dough!
Once you've baked it perfectly to the exact recipe a few times, then you can start adapting.
Of course, there will come a point in your skill level where you will have the intuition to adapt recipes that you've never cooked before. But many people assume they can do that immediately, fail, then assume they can't cook and give up.
I will say though that the other biggest area where people fail is having a janky oven that can't maintain a stable temperature, or where you set it to 200C but it only reaches 160C. So an oven thermometer is a useful tool to buy.
I did this recently, and you know what I really loved about it? It's a great entry-level baking activity where the upside is that you have a pie (something you can gift or just eat!) and the downside is that you have a sort of cobbler.
You really can't !@#$ up a pie. Omelette is another good one. At worst you have scrambled eggs.
I mean, yes, at worst you burn your neighbourhood down and your dog runs away. But in terms of the more likely failure modes, like screwing up the dough, breaking it, messing up how watery it is, etc., you can mostly just keep baking until it's done, mix it up, put it into bowls, serve with ice cream, down the hatch.
I agree. When it comes to baking, making it tasty is mostly a matter of including the correct ingredients. Nailing the texture is the hard part, and that’s where technique and practice comes in.
I mean, it's hard to end up with something completely inedible but you absolutely can mess it up. Soggy wet pastry at the bottom is the biggest problem but there's plenty of advice around about how to avoid that.
Even broader, honestly. Make something culinary! It's amazing what the simple tactile experience of making something can bring when so much of our existence is doing things by proxy.
A fun hobby I picked up during Covid was trying to cook food from countries I had never been to - since traveling anywhere wasn't an option.
Pick a country, research what food it has that you've never tried, find a few online recipes and YouTube guides and give it a go.
This was a ton of fun. I have no idea if anything I cooked was even remotely like the authentic original, but it was still a very rewarding exercise.
If you live somewhere with a lot of international supermarkets (the SF Bay Area is great for those) it also gives you an excuse for a shopping adventure for ingredients.
I think you're under-estimating how much personal taste applies in that industry. Yes, there's a lot of free content but it's often low quality and/or difficult to find for a particular niche. The OF pages, and other paid sites, are curated collections of high quality stuff that can satisfy particular cravings repeatedly with minimal effort.
A big part of it is also the feeling of "connection" with the creator via messages and whatnot, but that too can be replicated (arguably better) by AI. In fact, a lot of those messages are already being generated haha.
I was mostly hinting towards the 'connection' part of it, yes - I think that's really where the money is made more than anything else. That's the part that'll start killing the industry once some company tunes it in.
This is the dystopia of that pacified moon from "Mold of Yancy" by PKD but taken to the next level.
What's astonishing about the present is that even PKD did not foresee the possibility of an artificial being not only being constructed from whole cloth but actually tailored to each individual.
Doesn't Grok allow users to create lewd content or did they roll that back?
Also, I suspect that we'll soon see the same pattern of open weights models following several months behind frontier in every modality not just text.
It's just too easy for other labs to produce synthetic training data from the frontier models and then mimic their behavior. They'll never be as good, but they will certainly be good enough.
Have you tried Qwen3 Coder Next? I've been testing it with OpenCode and it seems to work fairly well with the harness. It occasionally calls tools improperly but with Qwen's suggested temperature=1 it doesn't seem to get stuck. It also spends a reasonable amount of time trying to do work.
I had tried Nemotron 3 Nano with OpenCode and while it kinda worked its tool use was seriously lacking because it just leans on the shell tool for most things. For example, instead of using a tool to edit a file it would just use the shell tool and run sed on it.
That's the primary issue I've noticed with the agentic open weight models in my limited testing. They just seem hesitant to call tools that they don't recognize unless explicitly instructed to do so.