Hacker News | SirensOfTitan's comments

I started exploring Christianity from an archetypal or psychological lens last year, and have found it really rewarding. I've put in thousands of hours of westernized, Buddhist-oriented meditation (I think "Pragmatic Dharma" is the term), and ultimately found it, and the communities attached to it, to be cultures of avoidance that lose something in detaching meditation technology from its larger context. I also grew up vaguely Presbyterian and hated it, so this was a great moment for me to reclaim my heritage on my own terms.

I started with various books of the Nag Hammadi collection, reading the excellent Meyer translations, and started noticing some metaphors that felt like "hidden signposts" in the text (and had some relevance to some ideas in Buddhism). Gospel of Thomas and especially Gospel of Philip felt like they map quite well to non-dual ideas in Buddhism.

I decided after some exploration of gnostic texts to jump back into the gospels, wondering if I'd notice the same kinds of hidden signposts there. I started this exploration during a trip to London with my wife, where I hunted down a copy of Bruce Rogers's amazing Oxford Lectern Bible at the Church of England reading room. What a beautiful Bible: it's so forward-thinking that it feels like it was typeset last year. But while it is a beautiful piece, the King James translation itself is pretty incomprehensible. This little journey led me to the Sarah Ruden translations of the gospels, and as soon as I read them I felt the same kind of resonance.

This all eventually led me to Cynthia Bourgeault's amazing "The Heart of Centering Prayer," which explores the non-dual kind of ideas in esoteric Christianity and lays out the practice of centering prayer as a basis of Christian spirituality. And I would be remiss if I didn't mention Jacob Needleman: Esoteric Christianity was good, but his "Money and the Meaning of Life" really helped me put my own relationship with money in perspective.

This is all a long-winded way of saying: Christianity has a rich set of amazing spiritual resources, but they need to be consumed in a sort of non-literal way, where you're meeting the authors in the same mind as they were when they wrote the text. I'd also note that this kind of reading is not scholarly; the point isn't to find the right answer but to impute a larger meaning by meeting the author with your own struggles.

We live in a time that is committed to a materialist reductionist mindset, but I believe that humans are naturally mystical beings, and that we leave a lot of real meaning on the table when we reduce the world down into solely material order.

Rob Burbea explored these ideas (largely inspired by James Hillman's concept of "soulmaking") in his soulmaking dharma (https://hermesamara.org/), the idea being an extension of emptiness: if all is fabrication, why wouldn't we make meaning that is beautiful?

I'm sure I'm coming off as a bit rambly, but it's very exciting to see such a resource on the HN front page. If you read my comment and feel any similar excitement, please check my profile and feel free to email me!


I'm just excited someone has one of my favorite novels of all time as a user name. Already a win!

> It struck me recently how few of the most successful people I know are mean. There are exceptions, but remarkably few.

*mean to Paul Graham. I’ve worked with a lot of mean people in important positions in my career, and they all have a kind, charismatic side when they need to. Those same people are awful to subordinates or people that can’t do something for them. Paul is high value to many people, so they treat him well.

People like Graham who aren’t often in positions where they’re taken advantage of or humbled like to pretend they and their peers are magnanimous and kind but often enough they’re just not exposed to the forces that make people ugly. All other things being equal: it’s often lack of agency over work and over their own lives—this shows up in work where people are given lots of responsibility but without the freedom to fulfill it.

I often find it concerning how elementary a lot of well off tech peoples’ theory of mind is. People are not acausal personalities, they are functions of their internals and their environments. A person mean at a stressful job might be delightful at a party after.


> I often find it concerning how elementary a lot of well off tech peoples’ theory of mind is.

This is a great way of putting it. I'm always surprised to discover that others aren't constantly modeling a theory of mind of other people as part of interacting with them. That's leaving aside whether those models are accurate (people have varying degrees of skill at it); what shocks me is discovering that some people don't do it at all, badly or otherwise.


Isn't that why a lot of us went into tech in the first place, because other people's minds are weird and confusing and they keep doing inexplicable stuff like saying things that actually mean something totally different, or not thinking about how something works when trying to use it?

Yeah, and I think that avoidance is the cardinal sin of the modern world. You can’t avoid the dissonance between how a person acts and what they say or want to be. This is especially true within yourself: you can’t ignore what James Hillman might call the “multitudes of the soul” because the pot eventually boils over, and you wonder why you cannot measure up to this ideal version of yourself.

When I’ve been mean at work, and I’ve had many moments, I’m often unsure where it comes from: why did I react that way? It’s not what I wanted. And I’ve been fortunate to be humbled so much in my life and career, now perhaps more than ever, that I’m forced to grapple with the parts of myself that I’ve left in the shadows to my own detriment. I think this is the central paradox of a lot of folks like Graham: when you have capital, even your losses can be manufactured into wins, and so you become estranged from the dissolution, decay, and humbling necessary to be born again into someone with fresh perspective.

I think Jung said something like: it is better to be yourself and accept the consequences of being you versus forcing yourself into a mold of what you or others think you ought to be. And often this means engaging deeply with what you’re avoiding and coming to terms with why.


Read more, like biographies and good fiction. Meditate and see into yourself. Understanding minds is a skill like any other, and gets better with practice.

There's understanding, and there's understanding.

I think I'm pretty decent at understanding people these days, as in predicting how they'll react to things, figuring out what they want, that sort of thing.

But I don't understand it on a fundamental level, the way I understand something like math. I have a decent grasp of the game, but I don't understand why the rules of the game are what they are. I get the impression this is how a lot of people feel about math, or computers. They know you can compute a 20% tip by sliding the decimal point over and doubling, but it's just a procedure they follow, they don't understand it.
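(For what it's worth, that tip trick is literally just this; a throwaway sketch, and `tip_20` is a made-up name:)

```python
def tip_20(bill):
    """Compute a 20% tip the "slide and double" way."""
    ten_percent = bill / 10   # slide the decimal point one place left -> 10%
    return ten_percent * 2    # double it -> 20%

print(tip_20(45.00))  # 9.0
```

The procedure works, whether or not you see why multiplying by 0.1 and then by 2 is the same as multiplying by 0.2; that's exactly the gap between following rules and understanding them.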

I've read and thought and interacted plenty. This has built up the skill just fine, but the fundamental understanding is something I don't think will ever come.


And I’m telling you that’s not a rare sentiment. I do a lot of zen, and one famous teacher talks about feeling disconnected from people for a long time. I’m not saying you have to do anything, it’s your life, but just don’t assume it’s something unique or permanent.

I don't think it's unique, but it doesn't seem to be the norm. Most people seem to just accept how people behave without thinking about it too much. When they respond to "how are you?" used as a greeting, they just do it, they're not calculating the right answer.

As far as permanence goes, I'm probably more than halfway through my lifespan at this point and there's no sign of any improvement there. Like I said, as a skill I do just fine. But that deeper understanding isn't there and I doubt it will be.


I'm frankly exhausted by AI takes from both pessimists and optimists: people are applying a vast variety of mental models to predict the future during what could be a paradigm shift. A lot of the content I see on here is often only marginally more insightful than the slop on LinkedIn. Unfortunately, the most intelligent people are the most susceptible to projecting their intelligence onto these LLMs without seeing it: LLMs mirror back a person's strengths and flaws.

I've used these tools on and off an awful lot, and I decided last month to entirely stop using LLMs for programming (my one exception is if I'm stuck on a problem longer than 2-3 hours). I think there is little cost to not getting acquainted with these tools now, but there is a heavy cognitive cost to offloading critical thinking that I'm not willing to pay yet. Writing a design document is usually just a small part of the work. I tend to prototype and work within the code as a living document, and LLMs separate me from fully incurring the cost of incorrect decisions.

I will continue to use LLMs for my weird interests. I still use them to engage on spiritual questions since they just act as mirrors on my own thinking and there is no right answer (my side project this past year was looking through the Christian Gospels and some of the Nag Hammadi collection from a mystical / non-dual lens).


Yep. I've been around long enough to not give a fuck about any technology until it has been around for at least a decade. We're not there yet.

I think that's a very extreme take in the software industry. Sure you don't need to pick up every new trend, but a ridiculous amount has changed in the past 10 years. If you only consider stuff from 2016, you're missing some incredible advancements.

You'd be missing stuff like:

- Containers
- Major advancements in mainstream programming languages
- Infrastructure as code (IaC)

There's countless more things that enable shipping of software of a completely different nature than was available back then.

Maybe these things don't apply to what you work on, but the software industry has completely changed over time and has enabled developers to build software on a different scale than ever previously possible.

I agree there's too much snake-oil and hype being sold, but that's a crazy take.


Weeeeelllll...

Post-CFEngine (Puppet, Ansible, Terraform) and cloud platform (CloudFormation) infrastructure-as-code is over a decade old.

Docker's popularisation of containers is just over a decade old.

But containers (and especially container orchestration, i.e. Kubernetes) are still entirely ignorable in production. :-D


It's not that I refuse to acknowledge they exist; I just don't give a fuck. I mean, do I really care about Kubernetes CNI? Nope, it doesn't actually make any money - it's an operational cost at the end of the day. And the whole idea of Kubernetes and containers leads to a huge operational staffing cost just to keep enough context in house to keep the plates spinning.

And it's not at all crazy. We sold ourselves into over-complex architectures and knowledge cults. I've watched more products burn in the 4-5 year window due to bad tech decisions and vendors losing interest than I care to remember. Riding the hype up the ramp and hoping it'll stick is not something you should be building a business on.

On that note: ingress-nginx. Yeah, abandoned. Fucked everyone over. Here we go again...


Where these tools really shine is in the hands of someone who knows what they want soup-to-nuts, knows what is correct and what is not, but just doesn't want to type it all out and set it all up. For those people, these tools are a breath of fresh air.

I remember reading a comment a few days ago where someone said coding with an agent (Claude Code) made them excited to code again. After spending some time with these things I see their point. You can bypass the hours and hours of typing and fixing syntax and just go directly to what you want to do.


We need to define terms precisely first and the industry seems allergic to that, likely because precise terms would undermine hype marketing necessary for companies like Anthropic to justify their valuations.

We need clear definitions and clear ways of evaluating toward those definitions, as human evaluation of LLMs is rife with projection.

Generally speaking, scaling is clearly not going to get LLMs there, and a lot of the gains over the past year or so have been either related to reasoning or domain-specific training and application.

I do think world models are the future and we’ll likely see some initial traction toward that end this year. Frontier AI labs will have to prove they can run sustainable businesses in pursuit of the next stage though, so I’d anticipate at least one major lab goes defunct or gets acquired. It may very well be that the labs that brush up against AGI according to conventional definitions are still nascent stage. And there’s a distinct possibility of another AI winter if none of the current labs can prove sustainable businesses on the back of LLMs.

I think a lot of the west is undergoing the early stages of a Kuhnian paradigm shift in many ways, so I’ve found it difficult to take the signaling from the macro environment and put it to work in my decision making.


And we have plenty of utility everywhere in tech nowadays but very little soul.


I honestly couldn't care less if my UI has soul. I just want it to work and get out of the way.


What an impoverished way of looking at relationships. I’m not surprised Boz wrote this one—someone with a reputation for being high-friction and hard to work with.

I couldn’t imagine thinking of relationships so transactionally, like every moment I spend with someone is just increasing or decreasing my score with them. There is very little room in this tersely communicated philosophy for intimacy and vulnerability, and in fact, the “hard feedback” he mentions can only be delivered successfully within the context of a trustful relationship.


It can be an exhausting way to view relationships, but I think it’s true. I’d argue there is also plenty of room for intimacy and vulnerability when it’s genuine. I think people appreciate these traits when they are genuine and appropriate, and prefer them to a fake aura of confidence.


Red vs blue pill


Yes, viewing relationships transactionally is not good for either participant. But I think you have taken a rather distorted view of the article - and there’s a more charitable way to view this than a brutal utility optimization:

> someone comes with a question and leaves feeling small, they’ll stop asking. If they bring you a hard problem and you meet it with curiosity, you’ll get more of those. If you always solve things for people, they’ll outsource their judgment. If you always critique, they’ll start hiding the work.

I take this as a reminder that my off-hand remarks to people can really make a difference. I don’t think that is “impoverished” at all.


It’s always important to remember what your position is when making off hand remarks.

An off the cuff comment to a friend or a colleague where you are both equal in stature/responsibility - probably fairly harmless. But important to also remember that you often don’t know what someone else is going through.

An off the cuff comment when you are the CEO or CTO to someone junior - potentially catastrophic for them.


> like every moment I spend with someone is just increasing or decreasing my score with them

This says more about the other person, especially if it's true, than about the person trying to estimate the score, who is just trying to model their world as accurately as possible.

If you don't like it, the only thing you can do is try to be more complicated than a single score yourself. If it is in fact a good model of most humans, then there is nothing you can do to change it, and being angry at the person who made you aware of the model doesn't help either.


This is the rule - with the notable exceptions being the people that society lionizes as “good” or “empathetic” or “kind.” For example MLK, Fred Rogers, Steve Irwin, Bob Ross etc… these are people whose avatars demonstrate relational capabilities that transcend the transactional.

In day-to-day interactions with people in modern industrial society, 99% of the interaction is transactional by default. However, if you look around, you’ll notice that the plurality of relationships are transactional at their root.

This is in contrast to transcendental relationships, like the achievable ideal relationship between parent and child, between siblings or romantic partners.

This is especially true for people who got into a position of power via “climbing the ladder”

The ladder in this case is made up of other people that you step on in order to get to the next rung in the ladder.

Transactionalism is ultimately the foundational basis for capitalism and our existing social order globally, and unfortunately also the root of all evil.


Right, but Linus also has an extremely refined mental model of the project he maintains, and has built up a lot of skills reading code.

Most engineers in my experience are much less skilled at reading code than at writing it. What I’ve seen so far with LLM tools is a bunch of minimally edited, LLM-produced content that was not properly critiqued.


Here's some of the code antirez described in the OP, if you want to see what expert usage of Claude Code looks like: https://github.com/antirez/linenoise/commit/c12b66d25508bd70... and https://github.com/antirez/linenoise/commit/a7b86c17444227aa...


This looks more worrying than impressive. It's long files of code with if-statements and flag-checking unicode bit patterns, with an enormous number of potential test-cases.

It's not conceptually challenging to understand, but time consuming to write, test, and trust. Having an LLM write these types of things can save time, but please don't trust it blindly.


Dividing the tests and code into two different changes is pretty nice. In fact, I have been using a double-agent setup where one agent writes tests and the other writes the code, which also solves the attention issue. The code itself looks harder to read, but that is probably more on me than Claude.


I have a weakly held conviction (because it is based on my personal qualitative opinion) that Google aggressively and quietly quantizes (or reduces compute/thinking on) their models a little while after release.

The Gemini 2.5 Pro 3-25 checkpoint was by far my favorite model this year, and I noticed an extreme drop-off in response quality around the beginning of May, when they pointed that checkpoint's endpoint to a newer version (I didn't even know they did this until I started searching for why the model had degraded so much).

I noticed a similar effect with Gemini 3.0: it felt fantastic over the first couple weeks of use, and now the responses I get from it are noticeably more mediocre.

I'm under the impression all of the flagship AI shops do these kinds of quiet changes after a release to save on costs (Anthropic seems like the most honest player in my experience), and Google does it more aggressively than either OpenAI or Anthropic.


This is a common trope here over the last couple of years. I really can't tell if the models get worse or it's in our heads. I don't use a new model until a few months after release and I still have this experience. So they can't be degrading the models uniformly over time; it would have to be a per-user kind of thing. Possible, but then I should see a difference when I switch to my less-used (wife's) Google/OpenAI accounts, which I don't.


It's the fate of people relying on cloud services, including the complete removal of old LLM versions.

If you want stability you go local.


Which models do you use locally?


I can definitely confirm this from my experience.

Gemini 3 feels even worse than GPT-4o right now. I don't understand the hype, or why OpenAI would need a red alert because of it.

Both Opus 4.5 and GPT-5.2 are much more pleasant to use.


Only tangentially relevant, but I’ve dealt with mouth and gut microbiome issues my whole life, the latter exacerbated by a strong antibiotic I had to go on in mid-2017 for a super-resistant staph infection. L. reuteri supplementation and “L. reuteri yogurt” was one of those alternative methods I read about (though I’m skeptical that reuteri is the dominant strain in this “yogurt”).

Doctors don’t really care to look at these kinds of issues. It took years of suffering and autoimmune issues (particularly muscle spasms and joint pain) alongside gut problems before I demanded a gastroenterologist test me for H pylori and SIBO: I was positive for both.

H pylori was a painful treatment process, but I cleared it after one round of quad therapy. SIBO on the other hand, a condition I think we hardly understand, has been hard to deal with. Many rounds of rifaximin with very minimal relief and no real answer as to how to deal with it.

Doctors are hesitant to help, so I’ve resorted to a lot of personal experimentation to deal with it. The only thing that ever worked (and it’s just anecdata, so unsure) was sulbutiamine supplementation, but I can’t actually get that anymore and normal thiamine doesn’t help.

This is all to say: I think microbiome is supremely important to health, very few things seem to really impact it, and doctors are hesitant to deal with these systems at all. I’m sure FMTs will become much more popular for a variety of conditions, but it seems like it’s a real risk where not only might someone else’s microbiome not be a fit for your physiology, but you could be inheriting a variety of risks the donor is susceptible to but you are not.

I am not a doctor and much of what I’m saying may be wrong. Don’t quote me please.


Not a doctor either.

Japan seems to love creating fat-soluble forms of thiamine. I've been experimenting with a form of thiamine called TTFD. TTFD is synthetic; there's a natural form called allithiamine, derived from garlic. There's also another form called benfotiamine. All of these are fat-soluble and highly bioavailable forms of thiamine. TTFD in particular is associated with paradoxical effects, where a person can have a temporary worsening of thiamine deficiency symptoms when first consuming TTFD. Thiamine is generally considered very safe, but these supplements are pretty hefty doses, so I would suggest treading lightly.

There's also some thinking among some doctors that sub-clinical thiamine deficiencies are more common than most doctors realize [0] [1].

[0] Thiamine Deficiency Disease, Dysautonomia, and High Calorie Malnutrition

[1] https://www.sciencedirect.com/science/chapter/monograph/pii/...


> Doctors don’t really care to look at these kinds of issues

Perhaps for good reasons?

The science is messy, there are few proven interventions and every woowoo worrywart will be pestering their doctor. Your doctor is in an unenviable position.

With doctors in New Zealand, my one trick is to find good specialists and pay them privately.

I believe that a GP only helps point you in the right direction. Our public health system is mostly too overloaded to help (unless you have a critical problem and your GP helps you get in a queue).

Not sure what helps in other countries.

But I 100% agree that you need to take responsibility for healing yourself. Only you have the motivation, and the context and experience to judge your own problems -- however, one needs to take care not to get caught in irrelevant or misleading dead ends (especially when misled by corporations or alternative woowoo freaks).


> Doctors don’t really care to look at these kinds of issues. It took years of suffering and autoimmune issues (particularly muscle spasms and joint pain) alongside gut problems before I demanded a gastroenterologist test me for H pylori and SIBO: I was positive for both.

I went through a similar post-antibiotics gut nightmare. There are good doctors and there are bad doctors and like everything, there are fewer good ones than bad+average ones.

Seems like you got testing and treatment eventually, I'm sorry it didn't work better; I'm replying less for you and more for anyone who encounters similar. Shop around for your docs!

I got tested very quickly for both H. pylori and SIBO in 2019 on doctor suggestions; I'd never heard of either. Sounds like this was probably around the same time you went through it, based on the antibiotic course that messed up your gut being in 2017.

I went to three doctors in six months; the one who did the testing was the second. The one who was confident in their knowledge but didn't do anything, including the testing: immediate no-return-visit from me. The one who said "we don't really know how this works" but also didn't do anything: no return visit, but I appreciated the candor. The one I went back to said "we don't really know how this works, but let's test for these other things we've learned more about recently, and let's also try some experimental/off-label things." I was actually negative for both of those things, so there was even more random stuff beyond that. The only thing the doctor I liked was really resistant to was a poop transplant, though personally... it seems like the only known way to repopulate some of the shit, pun intended.


Can you actually provide any proof, even top-line stats from GitHub or other software forges that show the productivity boost you’re claiming?

It’s not up to the skeptics to prove this tech doesn’t work; it’s up to the proponents to show that it does, with an effect size as clear-cut as cigarettes causing lung cancer.

There are a tremendous amount of LLM productivity stans on HN but the plural of anecdote is not data.

Certainly these tools are useful, but the extent to which they are useful today is not nearly as open and shut as you and others would claim. I’d say that these tools make me 5% more productive on a code base I know well.

I’m totally open to opposing evidence that isn’t just anecdote.


I think it’s pretty obvious that if the OP automates this manual part of their workflow, it will improve their iteration speed. The thread root is just saying: stop copy-pasting and use the built-in tooling to communicate with the LLM APIs.


They aren’t responding to the thread root’s extended comment, just the first part about the tone and rhetoric of AI proponents. Your comment isn’t really a response to anything in theirs.

