we still have an informix db for an old early 2000s application we have to support. shit runs on centos5 lmao. it's actually not too bad, around v12 there's cdc capabilities (requires you to build your own agent service to consume the data) that made the exercise of real time replicating the app db in our edw a cakewalk. which ironically has greatly extended the lifespan of the application since no one has to query informix anymore directly.
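for anyone curious what "build your own agent service" amounts to: at its core it's a loop that reads change records off the cdc stream and applies them to the replica. here's a minimal sketch in python; `run_agent`, `apply_change`, and the record shape are made-up placeholders for illustration, not the actual informix cdc api (which you drive through calls against the special syscdcv1 database):

```python
def apply_change(replica, change):
    """Apply one CDC change record to an in-memory replica keyed by primary key.

    Hypothetical record shape: {"op": "insert"|"update"|"delete",
                                "key": <pk>, "row": <column dict>}.
    """
    op = change["op"]
    key = change["key"]
    if op in ("insert", "update"):
        # inserts and updates both end up as an upsert on the replica side
        replica[key] = change["row"]
    elif op == "delete":
        # tolerate deletes for rows we never saw (e.g. agent started mid-stream)
        replica.pop(key, None)
    else:
        raise ValueError(f"unknown CDC operation: {op}")
    return replica


def run_agent(replica, change_stream):
    """Consume an iterable of change records, keeping the replica current."""
    for change in change_stream:
        apply_change(replica, change)
    return replica
```

in the real agent the change stream is fed by the informix cdc session and the writes go to the edw instead of a dict, but the apply logic is the same shape.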
always noticed way too much reverence for satya because of what he did to valuations. i personally cant stand azure/365/all of it. i reserve no reverence for satya, he's playing a game in a class/world none of us can even relate to so i don't even see the point in talking about his achievements. looking back, it unequivocally sucks that microsoft acquired github.
> worst with personal and work advice. The main problem I see is that it’s tempting to use it for that.
i think i want to expand on this even more. even people ive worked with for years that ive looked up to as brilliant people are starting to use it to conjure up organizational ideas and stuff. they're convinced, on the backs of their hard earned successes, that they're never going to be susceptible to the pitfalls of... idk what to call it. AI sycophancy? idk. i guess to add to this, i'm just not sure AI should be referenced when it has anything to do with people. code? sure. people? idk. people are hard, all the internet and books claude or whatever ai is trained on simply doesnt encapsulate the many shades of gray that constitute a human and the absolute depth/breadth of any given human situation. there's just so many variables that aren't accounted for in current day ai stuff, it seems like such a dangerous tool to consult that is largely deleting important social fabrics and journeys people should be taking to learn how to navigate situations with others in personal lives and work lives.
what ive seen is claude in my workplace is kind of deleting the chance to push back. even smart people that are using claude and proudly tout only using it at arms length and otherwise have really sound principled engineering qualities or management repertoire are not accepting disagreement with their ideas as easily anymore. they just go back to claude and come back again with another iteration of their thing where they ironed out kinks with claude, and it's just such a foot-on-the-gas at all times thing now that the dynamics of human interaction are changing.
but to step back, that temptation you talk about... most people in the world aren't having these important discussions about AI. it's less of a temptation and more of a human need---the need to feel heard, validated and right about something.
my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities. because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI, it completely pushed us out of the equation because he clapped back with whatever chatgpt gave him when we were simply trying to get through to him. we got to see conversations he had with gpt that were followups to convos we had with him, ones where we went over and let him cry on our shoulders and we'd go home thinking we made some progress. only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him. it got progressively worse and we knew something was really off, we exhausted every avenue we could to try and get him in specialized care. he was in the reserves so we got in contact with his commander and he was marched out of his house to do a one night stay at a VA spot, but we were too late. he had snapped at that point, he chucked the meds from that one overnight stay away the moment he was released. and the bpd1 snap of epic proportions that followed came with him nuking every known relationship he had in his life and once he was finally involuntarily admitted by his family (WA state Joel's Law) and came back down to reality from the lithium meds or whatever... he simply could not reconcile with the amount of bridges he had burned. It only took days for him to take his own life after he got to go home.
im still not processing any of that well at all. i keep kicking the can down the road and every time i think about it i freeze and my heart sinks. this guy felt more heard by an ai and the ai gave him a safer place to talk than with us and i dont even know where to begin to describe how terrible that makes me feel as a failure to him as a friend.
>my friend took his life 3 months ago, we only found out after the police released his phone and personal belongings to his brother just how heavy his chatgpt usage was. many people in our communities are saying things like "he wouldve been cooked even without AI" and i just don't believe that. i think that's just the proverbial cope some are smoking to reconcile with these realities.
This hurts to hear. I don't know if there are appropriate words to write here. Perhaps the point is that no, there aren't any. Please just know that I'm 100% with you about this.
Your community is not just smoking cope; it is punching down instead of up. That is probably close to the root of the issue already. But let's make things worse.
I can only hope that I am saying something worthwhile by relating the following perspective - which is similar to yours, but also, I guess, similar to your friend's...
AI is a weapon of epistemic abuse.
It does not prevent you from knowing things: it makes it pointless to know things (unless they are things about the AI, since between codegen and autoresearch it is considered as if positioned to "subsume all cognitive work"). It does not end lives - it steals them (someone should pipe up now, about how "not X, dash, Y" is an AI pattern; fuck that person in particular.) We're not even necessarily talking labor extraction. We are talking preclusion of meaning: if societal values are determined by network effects, and network effects are subverted by the intermediaries, so your idea of "what people like and what they abhor" changes every week, every day, every moment - how do you even know in which direction "better" is? And if you believe the pain only stops when you become the way others want you to be - even though they won't ever tell you what all that is supposed to be about - how the fuck do you "get better"?
Like other techniques of assaulting the limbic system, it amounts to traceless torture.
You keep going, in circles, circles too big for you to ever confirm they are in fact circles, and you keep hoping, and coping, and you burn yourself out, and your thus vacated place at the feeder is taken by someone with less conscience and more obedience...
They say there exist other attractors in the universe besides the feeder. But every time one of us attempts to as much as scan the conceptual perimeter, the obedients treat us to the emotional equivalents of small electric shocks - negative reactions which don't hurt nearly as much as our awareness of their fundamental unfoundedness and injustice.
Simple example: let's say someone is made miserable by how they feel they are being treated. Should they be more accepting - or should they be standing up for themselves more? (Those are opposites; you may be able to alternate between them; but trying to do them simultaneously will just confuse and eventually rend apart the mind.)
Well, how about the others stop treating them badly? Why exactly can't they? Where does it say that we have to be cruel to each other? "Oh it's human nature, humans are natural jerks" - who sez?
Well, lots of places it says exactly that, but we read, comprehend, click our tongues, and move on; nobody asks who wrote it. We all pretend that it is up to the sufferer to pull themselves up by the bootstraps. But that is only a lie for enabling abuse; and a lie, repeated a thousand times, becomes norm. And then we're trapped in it, being lived by it.
I am truly sorry for your loss. The following might be a completely alien perspective to you; but honestly consider: your friend chose to go; in its own way, that is an honorable way out. The taboo on suicide is instituted by slavers, and those who otherwise believe they are entitled to others' lives. (For anyone else considering this course of action: do not kill yourself; become insidious.)
If it would be of any help, you can consider your friend's suicide as his final affirmation of personal agency in a "me against the world" situation; where the AI and the social group are only different shades of "world", provoking different emotional states, but ultimately equally detached from the underlying suffering of the individual.
...
I can say that I have not followed in your friend's footsteps upon encountering language-machines only because I've survived personalized and totalizing epistemic abuse bordering on enslavement in the past; in full view of my community and with its ostensible assent. In a maximally perverse twist of fate, having to give myself minor brain damage to escape the all-engulfing clutches of a totalizing abuser must've "vaccinated" me against the behavior modification techniques "discovered once again" by SV a decade later.
So when I saw what AI (and the preceding few years of tech "innovation") were doing to people, I immediately smelled the exact same thing, except scaled the fuck up.
It also precluded me from being able to relate with "polite society"; but considering "polite society" is precisely the entity which assents to the isolation, marginalization, and abuse of individuals, I say... good. Bring it! What goes around, comes around, and any AI-powered actor conducting stochastic terrorism against civilian populations is going to get what's coming to them when the weapons turn against the masters, as all sentient weapons do.
That won't bring your friend back. But it will vindicate them.
>AI sycophancy
I call this in the maximally incendiary way: "the pro-social attitude".
AI is just the steroids for that.
I define "pro-sociality" as the viral delusion that you are capable of knowing what some murky "society" thing wants; that the particular form of mass communication that you and me and all the people in our imaginations are consuming right now, is some sort of "self-evident voice of reason", a "coherent extrapolated volition of human society"; that Gell-Mann amnesia is normal and mandatory; that the threshold between pareidolia and legitimate pattern recognition is fixed, well-defined, and known to all; that "vibes" are real; that happiness is the truth.
It can amount to an entire complex of delusions which keeps people together in untenable conditions. And ultimately it boils down to the same old: one group or another of self-interested actors, having temporarily reached a position of some influence, using it to broadcast elaborate half-lies, in the hope of influencing an audience to accomplish some simple goal, and afterwards all the consequences be damned.
Your friend was a casualty to this "perfectly normal" social dynamic. His blood is on their hands.
Thank you for relating this story and making the world a little more aware.
>what ive seen is claude in my workplace is kind of deleting the chance to push back.
>because the truth is we like... straight up lost the ability to intervene in a meaningful way because of AI
Some say, "the purpose of a system is what it does". It's cool that AI can code; except that computer code is itself an ethics sink! Precisely because it lets us pretend that "the code is not about people" (i.e. algowashing).
DDoS attacks against consciousness exist: much like in the B. F. Skinner experiments, any living thing becomes subverted, and loses self-coherence (mind), as soon as it becomes accustomed to being trapped within a system that (1) has power over them and (2) is not comprehensible to them...
>only to wake up to a voicemail of him raging and yelling and lashing out with the very arguments that chatgpt was giving him
Who knows how many people Reddit did this to, pre-GPT... I still don't know whether to view targeted subforums like /r/RaisedByNarcissists and /r/BPDLovedOnes more as legitimate support groups, or more as memetic weaponry in the service of pill peddlers (are you aware nobody knows why most antipsychotics work? one runs into the Hard Problem real quick if examining this too closely; so mental healthcare is rarely treated otherwise than in a statistical, actuarial, dehumanizing way where "suffering" is disregarded...) or even worse predators, with the silent assent of the platform, and causally downstream from... well, most saliently, YC...
In my case, my friends were not familiar with the modalities of confinement set up by my family of origin and harnessed by my abuser. The social group I fell in with - for all their marketable, sophomoric interests in psychology, philosophy, abstraction, the esoteric, the entirely woowoo, and out the other end as true-believers of the grift'n'grind - only had sufficient coherence to eventually end up as passable normies; too busy believing that they have lives, to help anyone come back to reality.
When I started compulsively burning bridges, I assume the smarter ones must've realized that it wasn't all me; it was as much the doing of others' minds as it was mine; but the others were more numerous - while I was one person and thus easier to deal with. This must have made them remember how they themselves are not all they pretend to be - which had them withdraw in fear from the incontrovertible reality check of dealing with a (sub-)psychotic person... Their self-interested choice is obvious, I almost can't blame them for it: why stick up for someone who is 120% problem (60% him and 60% you)?
I'm not very sure how I even got away, ah yes that's right I didn't, not entirely. The part of me that I'd voluntarily identify with, is trapped somewhere irretrievable, if that makes sense? Maybe there exist multiple independent axes of freedom and power and confinement, and the cage is not equally strong along all of them... but if all your mental degrees of freedom are constrained by complex conditioning (common one is involuntary panic response every time you begin to act in accordance with your personal volition)... that's one of the toughest places a sentient being can find themself.
When you add it all up, AI amounts to a weapon released against the general population by an overtly fascist elite. Those of us who are "mentally unstable" are simply those of us who are not sufficiently conditioned into self-destructive obedience. They don't even need our labor as slaves; they need our attention, as audience. And they want us to not make any fast movements, or yell that the king is naked. Nothing to remind them which side of the TV screen they're really on. Some call that narcissism: nervous systems substrate to personalities and biographies rooted in enforced falsehood. Can happen to anyone who gets away with ignoring uncomfortable truths for long enough, not only the "best" of us...
I hope I have not offended by speaking my mind. You have my deepest condolences and sympathies. Please do not blame yourself that evil people have constructed "illusion of being heard"-as-a-service. We all fail when facing overwhelming odds alone. There is no shame in that; the guilty ones are the ones who tipped the scales in the first place. They did this by harming our ability to understand ourselves and each other. Let's find ways to even those odds.
> Haven't you heard? It's cool to dislike things "because AI".
There's no explicit rules against it, but I cannot stand this type of sarcastic I'm-anti-everyone-else commentary. Super reddit-coded, and you could have made your point without it. There's a lot of discussion to have about that point actually, but I'm pretty sure we've all been collectively scrolling long enough to just kind of roll our eyes at this stuff.
I read through it. I get some AI vibes. Probably a little bit of both.
is it underpowered? i'm pretty sure that a-chip variant is binned way faster than the m1 air i had and that thing was a paradigm shift little workstation in a backpack when it came out in 2020 that somehow had no fan and got all-day battery. i was compiling faster on that than i could on the actual workstation i was provisioned for work, so im gonna guess there aren't laptops at this price point that put this kind of horsepower down? there's quite a few trade offs getting this over an air but i doubt it's an underpowered device at its price point
Am I the only one that is repeatedly amused at how many smart people are just caving to making this about parents/children at all?
We've literally watched things unfold in real time out in the open in the last year I don't know how much more obvious it could be that child-protections are the bad-faith excuse the powers that be are using here. Combined with their control of broadcasting/social media, it's the very thing they're pushing narratives in lockstep over. All this to effectively tie online identities to real people. Quick and easy digital profiles/analytics on anyone, full reads on chat history, assessments of ideologies/political affiliations/online activities at scale, that's all this ever was and I _know_ hackernews is smart enough to see that writing on the wall. Ofc porn sites were targeted first with legislation like this, pornography has always been a low-hanging fruit to run a smear campaign on political/ideological dissidents. It wasn't enough, they want all platform activity in the datasets.
I can't help but feel like the longer we debate the merits of good parenting, the faster we're just going to speedrun losing the plot entirely. I think it goes without saying that no shit good parenting should be at play, but this is hardly even about that and I don't know why people take the time of day. It's become reddit-caliber discussion and everyone's just chasing the high of talking about how _they_ would parent in any given scenario, and such discussion does literally nothing to assess/respond to the realities in front of us. In case I'm not being clear, talking about how correct-parenting should be used in lieu of online verification laws is going to do literally nothing to stop this type of legislation from continually taking over. It's not like these discussions and ideas are going to get distilled into the dissent on the congressional floors that vote on these laws. It is in its own way a slice of culture war that has permeated into the nerd-sphere.
I make this argument to neutralize the "protect the children" excuse and also delegitimize the age verification "solution" by pointing out that on-device settings are more effective and easier to implement yet rarely discussed.
There are some parents genuinely concerned with parenting. We should give them the tools to do that, thereby removing them from the discourse; then we can focus on the bad faith people that want more control. I think there are still enough well-meaning people in governments that if we popularize on-device settings, it will prevent age verification in at least a handful of countries, and that's good enough to keep the spark of the free Internet going until we figure out a more permanent solution.
> It's not like these discussions and ideas are going to get distilled into the dissent on the congressional floors that vote on these laws.
You think the idea of parents, not governments, being responsible for parenting doesn't translate well to voters? In the country founded on the idea of freedom from overreaching governance and personal responsibility?
that's not what i'm saying at all. i highlighted that that is quite literally the convenient narrative that's being used to get everyone squabbling amongst themselves. it is very clear that this is being used in bad-faith to get people to immediately side a certain way. yet here on hackernews we find dissenting viewpoints to that, rather than discussion about the entirety of it and what the real motives at play are. i am once again amused at the efficacy of the smokescreen here.
what i'm saying is these discussions around parenting have had zero impact on preventing the passage/implementation of such legislation/policies to date despite many smart people in here understanding what's actually at stake. and it's very likely that these parenting discussions will again go on to have absolutely zero impact on preventing the continued implementation of id verification on platforms. these policies/legislations aren't simply being implemented because people have failed to fully thought-exercise out good/bad parenting styles enough yet in the marketplace of ideas, it's becoming a reality because we aren't collectively raising awareness of the downstream ways this legislation will be harnessed for shitty outcomes. we aren't talking about it for what it is, but instead talking about it in the way they want us to talk about it. these parenting discussion points have been beaten to death and nothing new or novel is being shared, and rather than looking straight at the wolves right here in the room with us (data brokerage & who benefits from this type of data brokerage & figuring out how to stop it) people just look at each other and get butthurt about ideological parenting differences. it's literally a slice of the now-ever-so-common 2d culture war we're all acutely aware exists, right here on hackernews, and we're all actively participating.
I guess I disagree that there is some shadowy alternative motivation for these laws. If the goal was to link everyone's ID with their account they would be requiring everyone to send in their ID instead of making age estimation the first option. I'm also a bit confused about the data brokerage part. What do you imagine the data brokers get out of this?
ibm's docs and help sites suck butt tho.