
Feels like we live in a society where self-compassion is not tolerated. How can we be kind to ourselves if there are mechanisms in society that keep us distracted from ourselves?


Learn to reject some of the distractions. You don't even have to reject them entirely; you mostly just need moderation. I used to play video games essentially any time I wasn't at work or eating (which was not a healthy way to deal with my anxiety, depression, and chronic pain, but it was an effective distraction). I cut back on that to let myself focus on healthier or more productive activities. I still play games, obviously not nearly as much as before, and no longer at that self-indulgent, destructive extreme.


I'm glad you cut back and realized that your health is more important. I haven't had a TV since I was in high school. I don't really play video games. It's the notion of "work" in western society that is a distraction from self-compassion. Some of it is fulfilling, some of it is soul-sucking.

I think to implement self-compassion more effectively, we need less of these soul-sucking jobs. Everyone is different though. To some, chopping wood may be fulfilling, to others not so much. So perhaps, we need more fulfilling notions of "work" and making a living.


In your case, I think you should allow yourself some time for unproductive rest and relaxation. I don't mean this as a joke. I mean it as "it is OK and beneficial to just chill for a while" and "it is OK to spend time in an unproductive way". Because I noticed that your fulfilling hobby is also basically work.


You can be normal, or you can live well.


Reminds me of a documentary called "Medicating Normal". Try being self-compassionate when you have unholy side effects of benzos, anti-depressants, and anti-psychotics prescribed to you.

Normal by "western" standards is unhealthy. We need a renaissance of self-compassion but it has to be protected from the systems that be.


I recognise that this is not a simple problem to solve. But do you have any idea what "protected from the systems that be" means, and how it ought to be enacted? Jobs demand that we be normal; how is one supposed to hold down a job without being normal?


Jobs are just one way of living. I feel like there are more efficient ways to implement prosperity, or to reduce suffering while keeping the level of prosperity the same.

What is a job? Is it really beneficial? This reminds me of David Graeber's Bullshit Jobs lecture. Too many people are suffering because of their trust in systemic inefficiency.


Well, we have to support ourselves somehow. I'm not sure there is a different way. The only other option seems to be taking money from one person and giving it to another by state force, and that is not ethical.


That's like asking how one can be in shape if there is so much sugar in the world.


I perceive that your statement is twenty, forty, even fifty years out of date. That used to be the case, but the pendulum has now swung too far the other way. Today there is too much "if you feel depressed, that is normal." It is not normal, and it should be pitied, not called normal. Calling a thing normal encourages it in a person, and so it is not a good thing.


Keep Austin weird. If you don't like it, leave.


Big Pharma uses this heavily to bias research in academia for their marketing purposes. It used to be that academic research was rigorous and objective. An academic title had prestige and respect. Now it's so watered down.


This heavily biases in favor of lone wolves in college. I was one myself. I'd say we are rare. Most people are social and learn by collaborating with others. It's the more natural approach. Evolution has groomed us to be social after all.


Maybe 15 years ago, Georgia Tech rolled out a pretty great homework copy detector for the Java kids. The guy who wrote it was well aware that students knew to change variable names and such, so he just had it compare generated byte code instead. It caught something like 200 cheaters. It was a huge problem because Tech really, really didn't want to just expel or fail everyone like the academic rules required, so they created a sort of case-by-case comparison and punished students to various lesser degrees based on some rubric or other.

Anyway, the next year they adjusted by saying "it's absolutely fine to collaborate on homework and projects. Go nuts. Copy off each other all you want. Also, homework is just 25% of your grade now, quizzes and exams are everything." This made for pretty terrifying quizzes, but things worked out for everybody (except folks with a lot of test anxiety) because folks who actually did the homework tended to be the ones that did well on exams anyway.
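The detector described above can be sketched in a few lines. This is a hedged stand-in using Python's stdlib rather than Java bytecode (the names and thresholds are invented for illustration): compile each submission and fingerprint only the opcode sequence, so renamed variables don't hide copying.

```python
# Sketch: fingerprint submissions by compiled opcode sequence, not source
# text, so variable renames don't defeat the comparison. Python stand-in
# for the Java-bytecode tool described above; all names are invented.
import dis

def opcode_fingerprint(src: str) -> tuple:
    """Compile source and keep only the opcode names, ignoring identifiers."""
    code = compile(src, "<submission>", "exec")
    ops = []
    stack = [code]
    while stack:
        c = stack.pop()
        for instr in dis.get_instructions(c):
            ops.append(instr.opname)          # opcode names only, no variables
        for const in c.co_consts:             # recurse into nested code objects
            if hasattr(const, "co_code"):
                stack.append(const)
    return tuple(ops)

a = "def f(x):\n    total = x + 1\n    return total\n"
b = "def g(y):\n    result = y + 1\n    return result\n"  # copy with renames
assert opcode_fingerprint(a) == opcode_fingerprint(b)     # flagged as identical
```

Two submissions that differ only in identifier names compile to the same opcode stream, while structurally different programs (different control flow) produce different fingerprints.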


Collaboration to learn isn't penalized. Just the collab to cheat part.


Depends on the incentive schemes too.

Back when I was an undergrad studying Applied CS, we[0] had a really friendly, cooperative attitude - we would help each other learn and do homework together; people who turned out to be good at a particular topic would often compile guides, learning material, and FAQs for everyone else. Late in my studies I discovered this was seen as a highly unusual thing about our sub-faculty[1]. Other faculties had a much more competitive, every-person-for-themselves attitude.

It turns out, the driving factor was that our sub-faculty had different rules about scholarships: you had to cross a threshold of high grades in a given year to be eligible for one, and the amount of money you got was purely determined by your grade average. Everywhere else, scholarships were limited to the top % of students. Where everyone else competed and kept their hard-won knowledge to themselves, we'd routinely assist each other, so that everyone could get a shot at a scholarship.

I hear that after I graduated, they normalized the incentive scheme to top % of students everywhere, and that the new Applied CS groups got as unfriendly as everyone else.

--

[0] - All years studying Applied CS in our sub-faculty[1], not just my year. Don't know what the proper English term for it is, in Poland we call it "kierunek" (literally: direction), vs. "rocznik" meaning a class of a particular year studying on a "kierunek".

[1] - Not sure what's the right term for this either. Our faculty had essentially two branches that dealt with overlapping fields of study; as I was graduating, they ended up splitting into two separate faculties.


Huh. Your comment makes me think. I wonder if some of the push-back on standardized testing is partly because students cannot chea— I mean rely on “social collaboration”? (Not ALL of the pushback, of course. But part of it..)


Yes, this is what backlog grooming and sprint planning are for.


Agreed. How consensus is achieved varies. As an engineer, I am biased towards numbers because numbers are easy to compare. Use data to guide you on what the desired state is and how to get there.


State the basics about the person. It can often help to give context about who they are in the time they're in.


gRPC docs do need some TLC. Cloud providers that support gRPC also need to do a better job of documenting that support.

AWS Application Load Balancer is advertised to work with gRPC, but we've been seeing sporadic errors. Keepalive isn't the issue: we set the IdleTimeout to the max (4000 seconds) and use server-side keepalive to GC old connections.
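For anyone hitting the same thing, the usual recipe is to make the server recycle connections before the LB's idle timeout can fire. A hedged sketch of the relevant gRPC channel args (the option names are real gRPC core settings; the specific values here are illustrative, not our production config):

```python
# Sketch: server channel args that keep connection recycling server-driven,
# inside an ALB IdleTimeout of 4000 s. Values are illustrative assumptions.
ALB_IDLE_TIMEOUT_MS = 4000 * 1000  # ALB's maximum idle timeout (4000 s)
MARGIN_MS = 60_000                 # recycle well before the LB would

server_options = [
    # Ping idle clients so half-open connections are detected.
    ("grpc.keepalive_time_ms", 60_000),
    ("grpc.keepalive_timeout_ms", 20_000),
    # Close connections idle longer than (LB timeout - margin).
    ("grpc.max_connection_idle_ms", ALB_IDLE_TIMEOUT_MS - MARGIN_MS),
    # Force-recycle long-lived connections, with a grace period for
    # in-flight RPCs to finish.
    ("grpc.max_connection_age_ms", ALB_IDLE_TIMEOUT_MS - MARGIN_MS),
    ("grpc.max_connection_age_grace_ms", MARGIN_MS),
]
# Passed as: grpc.server(executor, options=server_options)
```

The point is that the server, not the ALB, should be the one tearing down stale connections, so clients see a clean GOAWAY instead of a silently dropped stream.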


Before you judge it, I implore you to ask whether your organization is actually doing Agile/Scrum correctly. I doubt most orgs are, and I suspect most negative opinions come from doing it incorrectly or ineffectively. I think the core principles of agile are well-intended. It's the desired state we want to achieve; however, most of us are still trying to reconcile that with our current state.

There are many pitfalls. For example:

- giving product owner or even manager roles to the lead engineer or architect;
- daily standups devolving into reciting ticket numbers in progress, as opposed to surfacing blockers or what people are actually working on;
- product owners not showing up to backlog grooming sessions, leaving engineering to set its own priorities;
- the product manager also being the engineering manager and leading retrospectives, biasing everyone toward saying what went well rather than what to improve;
- agile release trains made of teams that don't really interact with each other, leaving engineering teams tuned out during system-level PI planning and demos.

The list goes on.

The current state of an org implementing agile/scrum can easily lead to what David Graeber calls Bull*hit jobs: https://www.youtube.com/watch?v=kikzjTfos0s. The onus is on the organization's leaders to take everyone to the desired state. However, all too often the actual leaders of an organization are the engineers themselves, who would much rather get work done than micromanage.


Yet somehow Pharma can prescribe drugs that affect the brain's functions. How do they get away with that knowing so little about the brain?


Psychiatrists are doctors who prescribe substances that have been closely studied; their effects may be precisely characterized even while their mechanisms are not understood.

Lithium is a great example - a very effective treatment for bipolar, and no one really knows why. It has been prescribed for decades while they've tried to figure it out, because tests showed it was effective and relatively safe; no one knew exactly what it was doing in there.

It's sort of how you can be a woodworker without knowing the cellular biology of trees, and without being an electrical or mechanical engineer who can build a table saw from scratch.


The discovery of it is pretty interesting.

Lithium-rich mineral springs have historically been touted for their healing properties. It was first used for mania in the late 1800s, with Denmark leading the way, but little was published about the medication for more than half a century.

https://www.verywellmind.com/lithium-the-first-mood-stabiliz...


This is a good argument as to why we should look closely at traditional medicines of various cultures. We (1) coevolved with this stuff and (2) have gone through generations of trial and error.

I'm a believer in modern scientific medicine, but think we often have it backwards. Before reinventing the wheel we should exhaustively test what we used traditionally. Maybe the reason we don't do much of that is that it's not possible to patent, and so there's no financial incentive to do so?


There seems to be a good bit of different literature studying traditional cultural practices. Maybe it flies under the radar compared to flashy high-tech stuff. Below is a link to related articles about fermented beverages.

https://pubmed.ncbi.nlm.nih.gov/?linkname=pubmed_pubmed&from...


I absolutely agree with this. We should go through many cultures' lists of traditional medicines with the lens of modern chemistry and determine what compounds make them pharmacologically effective.


You do realize what you're proposing happened and happens extensively?


I've read a decent chunk of that research, but it's often relegated to backwater low-reputation journals focused on alternative medicine and is largely ignored by mainstream science. For an example, see the research on various traditional sleep-inducing drugs.

It also doesn't make its way into mainstream practice among GPs and psychiatrists. As an example, which mainstream practitioner would ever prescribe or recommend curcumin with piperine for any condition, aside from alternative medicine practitioners? Which psychiatrist would recommend EPA fish oil for depression? I could go on. The research that does exist is largely ignored.


Not in that way, no. Pharma, on the other hand, often looks into various remedies in order to draw from them and produce more distilled, controllable substances. Aspirin, for example, went from "boil tree bark and drink the solution" to a pill. Penicillin, etc.


I am by no means an expert in pharmaceutical studies, but I suppose they can afford a huge number of dead ends in the initial phase of research due to their massive wealth, and in the end they only have to test for two variables: efficacy and side effects. An explanation of how it actually works is a very nice extra, but is not required. The problem for pharma is that at some point you require tests on human subjects, which is very expensive and dangerous.


A lot of the original psych drugs were also discovered by accident, while intending to address some other medical issue. For example, MAOIs were found to have antidepressant effects during a trial using them for tuberculosis. Development of SSRIs (e.g. Prozac) then came from trying to create a similar drug with fewer side effects.

Also, the many-dead-ends thing is true in general for pharma, but at some point there are too many dead ends for it to be profitable even given their bankroll. This has been happening a lot lately with neuro-related drug development. In the last 10 years, Amgen, Pfizer, Novartis, and Eli Lilly have all had shutdowns or layoffs in their neuroscience research divisions.


You might be interested in reading about Stuart Hameroff, an anesthesiologist and professor focused on studies of consciousness...

https://en.wikipedia.org/wiki/Stuart_Hameroff


They give random drugs to rats until something happens.


And then to humans

