Not just insider information, but insider access. If the outcome of some prop bet is under the control of a handful of people, those people can trivially conspire to produce whatever outcome is most profitable to them.
If the outcome of a prop bet really is fully controlled by insiders, so that those insiders are making decisions based on betting outcomes, then allowing that betting to occur seems antisocial and counterproductive to begin with. This is another problem with the Polymarket/Kalshi species of "prediction market".
The problem is it's pretty hard to tell ahead of time whether that's what happens.
Suppose some large private company has to decide whether they're going to build a new facility in city A or city B. This is useful information for all kinds of reasons. If you're a vendor then you need to start making preparations to set up shop in the city where your big customer is moving etc.
The company's analysis shows it would derive a $10M advantage from building in city A. The prediction market is correctly leaning that way. If there are only enough counterparties that someone who now bets on city B and wins would make $5M, everything works the way it should and the company goes with city A. But if there are enough counterparties that a winning bet on city B would net you $25M then the company can place the bet, eat the $10M loss by choosing city B and come out $15M ahead.
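The incentive in that scenario can be sketched in a few lines. (The dollar figures are the hypothetical ones from this example, not real data; the function name is mine.)

```python
def manipulation_profit(true_advantage, bet_payout):
    """Net gain from picking the worse option just to win the bet.

    true_advantage: value forgone by not picking the genuinely better option (city A)
    bet_payout: winnings from a successful bet on the worse option (city B)
    """
    return bet_payout - true_advantage

# Thin market: a $5M payout doesn't cover the $10M forgone advantage,
# so the company honestly picks city A.
print(manipulation_profit(10_000_000, 5_000_000))   # -5000000

# Deep market: a $25M payout makes distorting the decision profitable.
print(manipulation_profit(10_000_000, 25_000_000))  # 15000000
```

The sign of the result is what flips the market from predicting the outcome to determining it.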
But the $10M number isn't public. It's essentially the thing you wanted the market to predict and it could be arbitrarily larger or smaller than that. So how are you supposed to know if the prediction market will be predicting the result or determining it?
A private company of any real size isn't plausibly going to choose Atlanta over Chattanooga to win a prediction market bet. This is a good example of the kind of prediction that can in principle be prosocial, and one strong indicator that it is: an insider bet here would be helpful rather than harmful.
On the other hand, at the point where the prediction market winnings are material enough that they might alter the underlying decision itself, you've clearly got an antisocial structure. Prediction markets that don't want to be seen as mere prop betting venues should refuse to run markets on those questions.
> Does JJ really prefer for me to think backwards?
No, you run `jj new` when you’re done with your work just like you’d run `git commit`. You can even just run `jj commit` which is a shorthand for `jj describe && jj new`.
Jujutsu uses git as its primary backing store and synthesizes anything else it needs on top, on the fly. Any incompatibility here is considered a serious bug.
Obviously I can’t argue against your lived experience, but it is neither typical nor common. This is quite literally an explicitly-supported use, and one that many people do daily.
> Obviously I can’t argue against your lived experience, but it is neither typical nor common.
I consider myself a proficient jj user, and it was my lived experience too. Eventually you get your head around what is going on, but even then it takes a combination of jj and git commands to bring them into sync so that `jj status` and `git status` say roughly the same things.
The friction isn't that jj handles state differently, it's that `jj git export` doesn't export all of jj's state to git. Instead it leaves you in a detached HEAD state. When you are a newbie looking for reassurance that this newfangled tool is doing what it claims by cross-checking it with the one you are familiar with, this is a real problem that slows down adoption and learning.
There are good reasons for `jj git export` leaving you in a detached HEAD state, of course: jj can be in states that it can't express in git. If we had a command, say `jj git sync`, that enforced the same constraints as `jj git push` (requiring a tracked branch and no conflicts) but targeted the local .git directory, it would bridge the conceptual gap for Git users. Instead of wondering why `git status` looks wrong, the user would get an immediate, actionable error explaining why the export didn't align the two worlds.
What problems, exactly, are you suggesting exist? I have used jj extensively on git teams and it has been seamless. The only people who have noticed or cared are the coworkers I’ve gotten to join me.
> No matter where I look, up and down the stack, across different OSes and tech stacks, there are bugs.
I’m not sure I’d go quite as far as GP, but they did caveat that we often choose not to write software with few bugs. And empirically, that’s pretty true.
The software I’ve written for myself, or where I’ve taken the time to do things better or rewrite parts I wasn’t happy with, has had remarkably few bugs. I have critical software still running, unmodified, at former employers that hasn’t been touched in nearly a decade. Perhaps not totally bug-free, but close enough that any bugs haven’t been noticed or haven’t mattered enough to bother pushing a fix and cutting a release.
Personally I think it’s clear we have the tools and capabilities to write software with one or two orders of magnitude fewer bugs than we choose to. If anything, my hope for AI-coded software development is that it drops the marginal cost difference between writing crap and writing good software, rebalancing the economic calculus in favor of quality for once.
> I’m not sure I’d go quite as far as GP, but they did caveat that we often choose not to write software with few bugs. And empirically, that’s pretty true.
Blame PMs for this. Delivering by some arbitrary date on a calendar means that something is getting shipped regardless of quality. Make it functional for 80% of use cases, then we'll fix the remaining bits in later releases. However, that doesn't happen, because the team is assigned new tasks; new tasks and features are what bring in new users, not fixing existing problems.
I don’t disagree, but is the alternative unbounded dev where you write code until it’s perfect? That doesn’t sound like a better business outcome. The trade-off can’t be “take as long as you want.”
“The alternative is that nothing will ever get released because devs will take forever making it perfect” is a really lame take.
We have literally countless examples of software that devs have released entirely of their own volition when they felt it was ready.
If anything, in my experience, software that’s written a little slower and to a higher standard of quality is faster-releasing in the long (and medium) run. You’d be shocked at how productive your developers are when they aren’t task-switching every thirty minutes to put out fires, or when feature work isn’t constantly burdened by having to upend unrelated parts of the code due to hopelessly interwoven design.
I think PMs fail to understand categories of change in terms of complexity because they focus on the user-facing surface and deal in timelines. A change that brings in a big feature can be straightforward because it perfectly fits the existing landscape. A seemingly trivial change can have a lot of complexities that are hard to predict in terms of timelines.
There is also the angle of asking for estimate without allocating time for estimation itself.
For lack of a better word, I think it should derive from "complexity". The firmness of an estimate should be inversely proportional to the complexity. Adding a field to a UI when it is also exposed via the API is generally low complexity, so my estimate would likely hold. We can provide an estimate for a major change, but that estimate would be soft and subject to stretching, and it is the role of the PM to communicate it accordingly to the stakeholders.
Some coding doesn't fit your schedule. If you've scheduled 2 weeks, but it takes 3, then it takes 3. Scheduling it to take 2 does nothing to actually make the coding faster.
The fundamental problem remains: it’s difficult to predict how long it will take to solve a series of puzzles. I worked in a dev group where we’d take the happy-path estimate and double it… it didn’t help much. So often I’d think something would take me a week, so two weeks were allotted, but I made a discovery in my first hour or day that reduced the dev time to a couple of days. Then there were tasks that I thought I’d solve in a few days that took me weeks because I couldn’t foresee some series of problems to overcome. Taking a guess and adding time to it just shifts the endpoint of the guess. That didn’t help us much.
That's the point I am making, and the point of asking "what is the alternative?"
Developers aren't alone in adhering to schedules. Many folks in many roles do it. All deal with missed deadlines, success, expectation management, etc. No one operates in magical no-timeline land unless they do not at all answer to anyone or any user. Not the predominant model, right?
So rather than just say "you can blame the PMs" I'd love to hear a realistic-to-business flow idea.
I am not saying I have the answers or a "take". I've both asked for and been asked for estimates and many times told people "I can't estimate that because I don't know what will happen along the way."
So, it's not just PMs. It's the whole system. Is there a real solution or are we pretending there might be? Honest inquiry.
Software release dates are so arbitrary though. We no longer make physical media that needs time to make and ship. Why does software need to be released on February 15th instead of March 7th?
You could ask the same question about the contents of the release. Why does software need to be released with features X, Y, and Z on March 7th when it could be released with features X and Y on February 15th?
It's inevitable that work will slip. That doesn't necessarily mean the release will slip. Sometimes you actually need the thing, but often the work is something you want to include in the release but don't absolutely have to. Then you can decide which tradeoff you prefer, delaying the release or reducing its scope.
Earlier discussion focuses on writing software at a slower pace to inject more accuracy and robust thinking/design/code. Conceptually, yes, I get it!
But in numerous practical scenarios, some adherence to a recurring schedule seems like the only way to align software to business outcomes. My thinking is tied more to enterprise products (both external and internal) rather than open-source.
I like an active dialog with engineers. (I'm neither SWE nor PM). Let's talk together about estimates. What's possible and not possible. Where do you feel most uncertain, most certain. What dependencies/externalities do you expect to cause problems.
Those conversations help me (business/analytics-side) do things like adjust my own deadlines, schedules. Communicate with c-suite to realign on what's possible and not. Adjust time.
The main problem I’ve had is the unpredictability of where the complexity lies. Unless you’ve done exactly what you’re doing, before, with the same tools and requirements, there’s a good chance that some discrete trivial aspect could take up an incredible amount of time, and that won’t indicate whether the main goal will take more or less time. I’ve worked both as a developer and as a designer, and while some aspects of design can be really nebulous and uncertain compared to dev work, it lacks some of the unpredictability — it’s not like I’m going to unexpectedly have to re-make the logo.
I feel for anyone that has to wrangle these tasks into a business-consumable time frame.
> The main problem I’ve had is the unpredictability of where the complexity lies. Unless you’ve done exactly what you’re doing, before, with the same tools and requirements, there’s a good chance that some discrete trivial aspect could take up an incredible amount of time, and that won’t indicate whether the main goal will take more or less time
What a great articulation. Completely agree.
This is why I don't blame PMs any more than devs, any more than business folks throwing requirements at PMs. It's possible to find fault everywhere.
I think the broader problem is scale and growth. Many people in many roles are caught in growth-mind or scale-mind companies where the business wants to operate at a velocity that may not align with the realistic development work we're discussing. PMs are similarly caught with less time to understand, scope, plan, etc. Business folks ask questions like "why isn't this ready" to devs that may not understand the reasons why the business operates the way it does, or the business at all.
Full disclosure: I'm in insurance. Seeing lots of these problems play out in front of me. C-suite moving at speed 100, devs moving at a perceived speed of 50. Silos and communication problems and unclear requirements up and down the stack.
So, in my interactions, the way I try to help is just to understand the most basic components and their ability to come alive or not. Is there anything to show? Yes, ok - let's celebrate a small win. Is there a rather large delay? Why - ok, let's use that to reinforce building something robust vs. crap.
But, there are schedules! Someone above mentioned SQLite. Another example comes to mind: Obsidian. I think they're anomalies (good ones) rather than examples that broadly prove the point to slow down.
I disagree with that entirely. Some features just take longer to develop. If that feature is part of the release, then release it when it is finished and not kind of working. If that feature is just not achievable, then PMs have really screwed up their role by putting it in the release in the first place.
Hearty agree. I think the PMs fall victim to wildly optimistic imagination of how fast and easy it will be to correct from “good enough to not get yelled at by CEO for not shipping by X date” to “works correctly and isn’t creating more bugs” - and importantly, it seems like they repeat this mistake every project, compounding the problem. So we perpetually have an increasing number of hacks, interacting with each other to cause difficult issues, all of which the PM says we will fix next sprint, just as soon as we ship one more Important Feature.
Not all orgs of course. But most I’ve personally seen, seem to be like this.
> Personally, I've swung over to the laissez-faire side of medicine.
Chesterton’s Fence rears its ugly head again. This is the same thing as vaccine skepticism (those diseases can’t be that bad, I never hear about them killing anyone these days) applied to a different context
Arguing for modern reforms is one thing, but there’s a reason we have the FDA. Statistically, most individuals do not have the medical expertise or the desire or ability to wade through enough clinical data to make these sorts of decisions with any hope of good outcomes, particularly in the face of an entire Internet of people trying to push questionable substances on them.
If only there were a federal administration whose responsibility it was to collect data about food and drugs so we could rely on something more than anecdotes from random strangers on the Internet.
I'd emphasize it's like the drug disulfiram: very effective as long as patients take the full dose, but the lack of real-world efficacy stems from the difficulty of adhering to the treatment.
> the lack of real-world efficacy stems from the difficulty in adhering to the treatment
Do you have a source for this "lack of real-world efficacy"?
> This study found that 84.4% non-diabetic patients stop taking GLP-1 drugs within two years
"With a median on-treatment weight change of −2.9%" [1]. Of those who discontinued and experienced "weight gain since discontinuation," they were "associated with an increased likelihood of GLP-1 RA reinitiation."
I'm genuinely struggling to see how this source shows real-world inefficacy. Among my friends, all of them stopped taking GLP-1 drugs within two years because all of them lost the weight they wanted to.
Out of curiosity, what sources lead you to believe this?
They absolutely do not, unless you’re getting too many calories.
Individual foods are—with some exceptions—neither bad for you nor good for you. A healthy diet can occasionally include doughnuts, and milkshakes. Your overall diet is what matters.
Most green vegetables you can eat in unlimited amounts and stay healthy. They are absolutely "good" food. (Please don't reply with something trite like "oh, but what about the pesticide residues?") The same can be said for high-fiber (soluble and insoluble) fruits like apples, oranges, and bananas. As long as they're eaten whole (minus the skin for oranges and bananas), it is almost impossible to overeat these and they are absolutely "good" foods.
Sure, they are not mercury-level toxic. However, these recommendations are for people who consume way too much of these dishes, and it's a safe assumption that this is the case for a significant part of the population.
Sure. We’re saying roughly the same thing. For most Americans, hamburgers cause heart disease because we don’t exercise enough or eat enough plants. If you’re backpacking twenty miles a day, sure, eat whatever; you won’t suffer inflammation or obesity from it. (Though you may develop nutritional deficiencies. And you’re building bad habits for when your activity necessarily tapers off.)
Hamburgers are not causing heart disease and diabetes for most Americans. Bad diets loaded with too many calories, too many saturated fats, and too many simple carbs are.
Messaging matters. When you tell people hamburgers and bacon and everything they love are bad, they stop listening, give up, or just eat some other junk that wasn’t prohibited. When you tell them some foods are good, they start buying into superfood marketing.
Diet is the only thing that matters. Lots of veggies are extremely useful because they add bulk without adding calories, and along with fresh fruits are great sources of fiber. Cheeseburgers can only come so often because they’re extremely calorie dense and send enormous reward signals to your brain.
Give people the tools they need to thrive, not just “don’t eat these specific bad foods, eat these specific good foods”.