Academic performance is something of a burning issue - I'd guess there's less of a translation problem with the article than a cultural and political one around the topic.
So, it's maybe not too much of a stretch to say that "a Copenhagen way" to authenticate is to integrate with MitID, either through a certified broker or by becoming one: https://www.mitid.dk/en-gb/broker/broker-certification/
There was a longer comment here a minute ago, several paragraphs long, that read like watching Trumpian politics at its worst: so much to disagree with that one doesn't know where to even start. I just wanted to write a note and thank whoever removed it.
No, that's not from downvotes, that's from "flagging".
Personally I only flag spam and hate speech, but some people get really upset by certain posts and will flag them rather than (or in addition to) downvoting.
With enough flags the post becomes "dead" and then you can't reply to it, and it's hidden by default. It's possible to un-flag it with the "vouch" option.
I think flagged posts are often reviewed by dang (a moderator).
Trump has criticized Europe for not spending enough on defence and for being too friendly with Russia. His administration also stopped Nord Stream 2, a decision the Biden administration later reversed. So while this comment is horrible, I really don't see how you could call it Trumpist.
This sounds like burnout to me. I am in my mid-40s and used to have spells like this, but no more. I needed some kind of large change, back then, to get myself sorted out, but this could be pretty destructive because if I waited too long, the change I'd need wouldn't really be conducive to staying on whatever project I was on.
The key to not having to deal with this problem anymore, for me, was starting to proactively switch things around to break the routine of consecutive work-weeks. One of my tricks was to do some kind of mini-vacation every 6-8 weeks, go somewhere new, leave work behind for 3-4 days. Even smaller things like regular social events can work wonders - anything that breaks the weekly routine.
Back when I'd get myself into burnout periods the most effective way to recover enthusiasm was to pick up a new skill, work-related or not. I was in my mid-30s in the late aughts and not entirely sure I wanted to keep coding - so I signed myself up for an 18 month "executive MBA" program to find out if I might want to do something else, and instead came out of that with a whole new outlook on how and why to write code.
Then around 5 years later I started writing code on the side, for myself, to gradually improve over the long term, and this can be absolutely therapeutic.
Try to switch things around a little bit, do something new, see if that helps?
Yeah, always switching tools basically resets your experience to zero, so you make the same mistakes over and over - no wonder it's hard to stay motivated. And the "senior" jobs have zero power, so you can't stop people from making mistakes, and trying to "influence" just makes the experience even more exhausting and frustrating when people have no reason to listen to you.
I wish there were real senior roles you could grow into, where your experience is actually valued and you gain real power to make decisions, but then the argument is that you can't hire juniors anymore because they think it's too uncool to have a boss.
> I wish there was real senior roles you could grow into where your experience is actually valued, and you would gain certain power to make decisions
In my experience, the only way to get some decision power is to move into management. Even team lead roles don't count and don't give you any ownership over product direction.
In big tech, engineers are generally trusted more, but product ownership still sits with management.
The whole tech thing is changing fast, and ageism is a real thing. In a scenario where most previous knowledge can be ignored, being a senior with 5, 10, or 20 years of experience doesn't change much. And given that young people usually simp for companies much more, out of naivety, they get a huge preference in the hiring process.
Tech is removing the root of the knowledge, migrating from understanding the solution to quick copy-and-paste from wherever.
There's a huge difference between understanding tools/libraries/frameworks/programming languages/APIs etc vs understanding how to build a system on top of any of these things that scales well, is maintainable, is easy to collaborate with others on, that can be extended quickly, that doesn't cost that much time/money to build in the first place, etc.
Yes the former changes every 1-5 years. Doing the latter well is much harder, no single tool can solve these problems, and I think years of experience really does help.
> Half life of knowledge in our profession is more or less a year and a half.
Only if you define your profession as knowing the latest front-end frameworks. In terms of concrete technical knowledge that half-life lengthens as you go down the stack. But beyond knowledge of existing tools and frameworks, there is also the understanding of systems and how they interact with the real world. This is what you need to understand to really give yourself a life-long career. It can still be hard tech (distributed systems, scalability, etc), or it can be a little softer (UX, maintainability, collaboration, etc), but these skills will give you the ability to dwarf the actual business impact of the 25 year-old who has maximized knowledge of the latest tools.
I agree that years aren't necessarily a good metric for experience, although I have decades in IT - started when I was 19 and retirement is a real thing I have to consider.
Years do give you some experience that isn't translatable to the resume: after a while, you've seen through most of the tricks that management likes to try but which the younger colleagues haven't learned. Having older folks around can spoil the surprise.
My personal theory of the roots of ageism has this as a pillar.
They try to make you commit to additional unofficial work/projects for which you get no extra time or resources. If you fail and burn out, that's just you being a bit "overambitious", and if you succeed, they will just make it official. That way, they avoid owning up to any failures and only take responsibility for the successful ones. So you have to be a bit careful in casual conversations with the boss about ideas and possible improvements.
On the other hand, AWS Lambda seems like CGI/FastCGI all over again, but with proper automation, so I have at least one data point on 20-year cycles (to confirm, we need someone who has been in the profession for at least 40 years).
Developer here who has been programming over 40 years (since I was a teenager in the late 1970s).
I know I am stretching things a bit here, but IBM mainframes, multi-user Forth systems, and distributed QNX systems ranging from the 1970s to the 1980s -- not to mention UNIX systems -- could all support remote procedure calls or interprocess/interapplication scripting across standard APIs to some extent (for a loose sense of process or application, especially with Forth). Even Smalltalk back then could do that to an extent but mostly from a single-user perspective in the sense that Smalltalk is mostly about message-passing objects. Essentially, you could have a system that could talk to itself or other similar systems in standard ways.
Yeah, there have been so many cycles of forgetting and reinventing with new generations of programmers. Although it is true some things improve even as sometimes other things decay for a constantly changing kaleidoscope of opportunities and risks (a bit like host/parasite arms races in evolutionary cycles).
And also from the 1960s-1970s:
https://en.wikipedia.org/wiki/PLATO_%28computer_system%29
"Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes."
And from a different perspective, what is email but a standard way to do a remote procedure call to hopefully invoke some behavior -- even if a human may often be in the loop?
https://en.wikipedia.org/wiki/History_of_email
And earlier still, in the 1930s, Paul Otlet invented the idea of using a standard 3x5 index card to store and transmit information (mainly metadata):
https://en.wikipedia.org/wiki/Paul_Otlet
"Otlet was responsible for the development of an early information retrieval tool, the "Repertoire Bibliographique Universel" (RBU) which utilized 3x5 inch index cards, used commonly in library catalogs around the world (now largely displaced by the advent of the online public access catalog (OPAC)). Otlet wrote numerous essays on how to collect and organize the world's knowledge, culminating in two books, the Traité de Documentation (1934) and Monde: Essai d'universalisme (1935)."
For another example of cycles, my current favorite UI technology is Mithril+HyperScript+Tachyons for JavaScript (although Elm is great too conceptually, and likely inspired Mithril and React in part). It is so easy to use from a developer-ergonomics point of view in part because (simplifying with a very broad brush) it re-invents the OpenGL video-game paradigm of redrawing everything (with behind-the-scenes VDOM optimizations) from essentially a global state tree whenever the UI is considered "dirty" because someone touched it. Mithril is so much easier to use than UI systems that are all about creating dependencies (like most Smalltalk systems) or which require storing and updating state in carefully managed components (like React) or similar constrained models. But sadly React+JSX+SCSS has so far won the mindshare war despite overall worse developer ergonomics. I hope that cycle continues to turn someday and the Mithril approach will win out (if maybe in some other implementation by then).
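A dependency-free sketch of that "redraw everything from a global state tree" idea - this is not Mithril's actual code, just the shape of the paradigm (real Mithril adds VDOM diffing against the live DOM, components, and event wiring):

```javascript
// Global state tree: the single source of truth for the UI.
const state = { count: 0 };

// h() builds a plain virtual node; Mithril's m() plays this role.
function h(tag, ...children) {
  return { tag, children };
}

// The view is a pure function of state: describe everything, every time.
function view(s) {
  return h("div",
    h("p", `count: ${s.count}`),
    h("button", "increment"));
}

// render() serializes a vnode tree; a real framework diffs and patches
// the DOM instead of rebuilding the whole string.
function render(vnode) {
  if (typeof vnode !== "object") return String(vnode);
  return `<${vnode.tag}>${vnode.children.map(render).join("")}</${vnode.tag}>`;
}

// Any state change just marks the UI "dirty" and redraws from scratch.
function update(mutate) {
  mutate(state);
  return render(view(state));
}

console.log(render(view(state)));          // <div><p>count: 0</p>...
console.log(update(s => { s.count++; }));  // <div><p>count: 1</p>...
```

The appeal is that there is no dependency graph to maintain and no per-component state lifecycle: the whole UI is a function of one state object, and "update" means "change state, redraw".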
https://github.com/pdfernhout/choose-mithril
Frankly, it has been frustrating over the decades to see great ideas lose out for a time to lesser ones with better marketing, other institutional advantages, or other non-technical factors (Forth vs. DOS, CP/M vs. DOS, Smalltalk vs. Java, Mithril vs. React), or to ones which better fit developers' familiarity with earlier systems (HyperScript vs. JSX, Lisp vs. C++). Yet I can also still be hopeful things may improve as social dynamics and technical dynamics change over time in various ways. As was said about JavaScript, which I mainly program in now: "It is better than we deserve..."
So if you had 6 of those, you could support 96 users. You could get expansion units too for the main bus:
https://www.reddit.com/r/retrobattlestations/comments/dpt47y...
"It takes up one ISA slot in the main PC, and then hauls the signal to the external box, where you can plug in up to 7 more cards, plus some RAM"
So, using 6 slots in the first box, and 6 slots in the next, and 16 port serial cards, that's in theory 192 users on RS-232 lines. Anyway, this is just a guess. I vaguely remember hearing of some actual (lesser) systems with lots of RS-232 ports, but don't recall exactly how they worked.
So, that's 12K to support 150 input buffers, plus probably at least 1K for each user dictionary on top of the shared dictionary (12K + 150K = 162K total). That is probably low -- if users might want 4K each, that's 600K. Throw in 28K for an extensive base system to round things off and that is 640K for a great low-latency system supporting 150 users all simultaneously having a command line, assembler, compiler, linker, and editor. And I'd guess probably a database too on a shared 10MB hard disk. And it might even feel more responsive than many modern single-user systems (granted, expectations were lower back then for what you could actually do with a computer). So, yes, "640K of memory should be enough for 150 anyones." :-)
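The back-of-the-envelope budget can be spelled out in a few lines (every figure here is a guess from the comment above, not a measurement of any real system):

```javascript
// Hypothetical memory budget for a 150-user multi-user Forth system.
// All numbers are guesses, in KB.
const users = 150;

const inputBuffers = 12;     // total for 150 input buffers
const dictPerUserLow = 1;    // per-user dictionary, low estimate
const dictPerUserHigh = 4;   // per-user dictionary, roomier estimate
const baseSystem = 28;       // extensive shared base system

const lowTotal = inputBuffers + users * dictPerUserLow;
const highTotal = inputBuffers + users * dictPerUserHigh + baseSystem;

console.log(`low estimate:  ${lowTotal} KB`);   // low estimate:  162 KB
console.log(`high estimate: ${highTotal} KB`);  // high estimate: 640 KB
```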
Related: "Why Modern Computers Struggle to Match the Input Latency of an Apple IIe" https://www.extremetech.com/computing/261148-modern-computer...
"Comparing the input latency of a modern PC with a system that’s 30-40 years old seems ridiculous on the face of it. Even if the computer on your desk or lap isn’t particularly new or very fast, it’s still clocked a thousand or more times faster than the cutting-edge technology of the 1980s, with multiple CPU cores, specialized decoder blocks, and support for video resolutions and detail levels on par with what science fiction of the era had dreamed up. In short, you’d think the comparison would be a one-sided blowout. In many cases, it is, but not with the winners you’d expect. ... The system with the lowest input latency — the amount of time between when you hit a key and that keystroke appears on the computer — is the Apple IIe, at 30ms ... This boils down to a single word: Complexity. For the purposes of this comparison, it doesn’t matter if you use macOS, Linux, or Windows. ..."
Seriously? Amending a trash fire with a mound of glowing embers (that can't all be extinguished because precious backwards compatibility -- e.g., `auto_ptr`) is "just reinvention"?
You can only hold that view if you don't understand C++11. :p You're more accurately complaining about "invention" (well, in the C++ world; in the Rust world, it's "C++ implemented our stuff").
> I wish there was real senior roles you could grow into where your experience is actually valued, and you would gain certain power to make decisions, but then the argument is that you can't hire juniors anymore because they think it's too uncool to have a boss.
Uhh... who are these junior people who don't like having a boss? I read the first part of this sentence and wondered why a lower-level colleague wouldn't want a senior helping them avoid potholes in the road...
> why wouldn't a lower level colleague not want a senior helping them avoid potholes in the road
Because they become much more influential in a shorter time if they stage a coup d'état by paving a completely new road: now they are the new road master. The old potholes are gone, so they don't have to worry about those, and the new potholes are still unknown and yet to be discovered.
Notice the fast decline in the last 20 years?
Even ActiveX was less crappy than the most well-polished React project today, and ActiveX was crap.
Even Java applets did more, and more easily, than modern frameworks and JS shit.
One simple page with login, logout, some search, and navigation nowadays requires a few plugins: a router, state management, a lib for requests, a lib to handle cookies, a lib for JWT, etc...
It's the developers not companies that make this decision. Out of boredom or trying to get promotion. I'm just so tired of seeing something that could have been one static html page but was built with NextJS+lambdas+terraform and a hundred more buzzwords.
Seeing the same trend in web/ecommerce development. The brand needs a simple site with a little dynamic behavior sprinkled through it. The agency chooses to build a full SPA with all the bells and whistles. In doing so they neglect/break basically everything else - SEO, connected martech, analytics, etc etc. Sure, some bits are a little 'faster', but then there are all sorts of UX issues with parts that update too slowly and goodness knows what else. And all at a cost dramatically greater than necessary.
I'm sure some agencies do a fantastic job (those that think about the bigger 'more than just dev' picture). But on 95% of the sites I'm seeing right now the downsides far outweigh the benefits and it feels like dev for the sake of dev.
> In doing so they neglect/break basically everything else - SEO, connected martech, analytics, etc etc.
This is exactly what it looked like when Flash was a popular choice for building web interfaces. We've fought a long, hard battle to finally get back to indexable, interoperable, standards-based HTML, CSS, and JS. It was fine for a while, then Angular happened. Fast-forward a decade and we're right back where we started. Amazing.
> It's the developers not companies that make this decision.
I don't see developers making these decisions anymore, not in large web-based tech companies.
My experience has been that management controls a lot of these decisions and/or steers them in the direction that they want them to go. And the more power a manager holds over a team or department, the more influence they can exert to get their way.
As an example, I'm hearing from a colleague in another department that they're being told by an engineering SVP that all new backend services are to be written in NodeJS. These are .NET developers. How does this guy who is 3 layers above these engineers intend to enforce this "rule?" The implication is you can do this or get fired, I guess, so it's happening regardless of how stupid it is. This was all explained to me when I noticed that I had gotten 3 "so long and thanks for all the fish" emails from long-timers in that department.
As someone who recently returned to Rails after 7 years of searching for a better option, I totally agree. For the types of problems I solve (not FAANG problems), Rails is by far the most productive option. Not perfect, mind you, but better than anything else I've run across.
I feel like I'm one of the few who switched from Angular to React in 2014 and, while enjoying how "simple" it was, started noticing how much people liked building complexity in it.
Now I have used react at a few different sized companies, taught it to some students and completely stopped using it for personal projects. It just seems like too much added complexity for almost every situation and people just see everything as a SPA now.
I'm a full-stack engineer who does lots of React in my day job. I now use jQuery in all my side projects because I want to get them done instead of spending lots of time on project setup.
Nothing beats adding a script tag in the footer and being done with project setup.
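For example, the whole "setup" can look something like this (the CDN URL/version is illustrative; pin whichever release you actually use):

```html
<!DOCTYPE html>
<html>
<head><title>Side project</title></head>
<body>
  <button id="go">Go</button>
  <ul id="results"></ul>

  <!-- Script tags in the footer: this is the entire build pipeline. -->
  <script src="https://code.jquery.com/jquery-3.6.4.min.js"></script>
  <script>
    $("#go").on("click", function () {
      $("#results").append($("<li>").text("clicked at " + new Date()));
    });
  </script>
</body>
</html>
```

No bundler, no transpiler, no node_modules: open the file in a browser and you're working on the actual project.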
Can you describe the Vue 3 part more? As an outsider, it just looks to me like they added another API with a different style, so people can choose what they like.
What do you think of Svelte? I haven't used Vue since I discovered Svelte, but I'm always curious about people's perspectives and try to keep my finger on the pulse. The new Nuxt looks pretty awesome! Love the API design. I'm hoping SvelteKit steals a few ideas from it before 1.0.
I looked at it the other day after the discussion thread about JS frameworks, but bounced when I saw ".svelte" files.
I accept (demand, really) TypeScript but I've become allergic to any attempt to add much more on top of JS than that. I can just see the next poor bastard coming along in a short year or two and going "oh god, WTF is a '.svelte' file? What did my piece of shit predecessor fall for?"
I'm looking into Vue today. Possibly I'll settle on something even simpler.
React's certainly out, and thank god the mood is finally shifting enough that I can abandon it without harming my career (much). Slow, janky, and god they've made some weird choices with it in the last few years. It was always a bit heavy, but it felt like it had some degree of elegance to it before that—if only in parts of the API itself, not the implementation.
[EDIT] Oh good lord, '.vue'. Don't any of these just use normal-ass code? Sigh.
Things like .vue and .svelte are really more like hints to your editor about which linter and syntax highlighting to use - and to your build chain, I guess.
I'll admit it's annoying but it's still just "normal-ass code". Vue, for instance, is just html, JavaScript and css. A .vue file is just all three in one file with special syntax to indicate each section.
That's true of Vue. Svelte has a compiler which changes and augments your code with additional code for its state management system. So it's definitely not normal code.
I generally agree with your don't "add much more on top of JS" sentiment, but I like .vue files. It's "just" HTML, CSS, and JS in one file, which I find convenient for components. But it's optional--you're free to use three separate files.
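For the curious, a minimal .vue single-file component is just the three blocks stacked in one file (the component below is a made-up example, not from any real codebase):

```vue
<!-- HelloCounter.vue: template (HTML), script (JS), style (CSS). -->
<template>
  <button @click="count++">Clicked {{ count }} times</button>
</template>

<script>
export default {
  data() {
    return { count: 0 };
  }
};
</script>

<style scoped>
button { font-weight: bold; }
</style>
```

Everything inside is standard HTML, JavaScript, and CSS; the .vue extension just tells the tooling how to split and compile the sections.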
Seems interesting but quite young. When I looked it up there was discussion about Sapper vs SvelteKit etc., and quite frankly I can't stand that kind of thing anymore; it just leads to confusion, a lot of Googling and SO, partial or wrong docs, etc., and in the end a LOT of lost time and energy.
And I say that as someone who is currently refactoring a Vue 3 app from JS/VueX to TypeScript/Pinia ... oh the irony.
I'd like to see it that way for my own situation. But I have no alternatives for making decent money. It's not wise to be a slacker without a contingency plan. I'm just dumb.
We switched to using Elm for our front end. One of the complaints about Elm is that it doesn't get upgrades very often (like years in between).
This has been a feature for us – we don't need to be upgrading or fixing for upgrades or learning new "things". We focus on building with what we have and know and it works.
The things I was doing in my 20s made me feel as though I was changing the world, and I had an impressive project list.
In my 40s, I can hardly stand certain aspects of the tech space... and keeping motivation is all about things I can make, but not make for others.
Find something that sparks your imagination.
Skill atrophy is my main thing. Mitigating skill atrophy on legacy skillsets - sub-sets of skills you have used for 20 years but have no passion for any longer - is tough as HELL, and it makes me feel I can't learn... but the fact is I fail to learn anything I don't have a spark of passion about.
And I'm not talking about that romantic passion some billionaire founder talks about - I'm talking about *enough* passion that your ADHD can be quelled, and that small distracting things don't have an undue draw on your attention (passion)
I do think this is true - if you don't see the advantages of learning a new tool then you won't feel motivation or energy to do so - but I can also understand that it has to be done because of career reasons.
I mean if you were all in on e.g. Backbone.js or Dojo ten+ years ago, you're kinda running behind now and it'll get harder and harder to find a gratifying job.
> most effective way to recover enthusiasm was to pick up a new skill, work-related or not. I was in my mid-30s in the late aughts and not entirely sure I wanted to keep coding - so I signed myself up for an 18 month "executive MBA" program to find out if I might want to do something else
This might sound tangential, but I am at this exact stage in my life. I am in my early 30s and have signed up for the executive MBA program. I will be looking to start my term in Sept. Any tips/suggestions/warnings that you can share?
Not OP, but in my early thirties finishing an MBA now.
Here's how I sum it up. Pure CS is about determining what's theoretically possible, then software engineering is "applied CS", about taking what's theoretically possible and making it in the real world, which includes a mature understanding of costs (both upfront to build v1 and long-term maintenance). Thus an MBA is "applied software engineering." It's not sufficient to understand the costs of the engineering we build, because it doesn't matter if it costs $2,000 or $2,000,000 if we don't have the money for it. We also need to understand how to make it actually self-sufficient, by ensuring that it brings in enough revenue to cover the costs. $2,000,000 in costs, let alone $2,000 in costs, may be too much for a college student to afford out of pocket, but if you can show that you can earn it back and more, then you will find investors - be they angel or VC investors for a new venture, or your company's Finance division for a new project in a Fortune 500 company.
Ultimately the skills you get are about convincing people to fund what you want to build, for different definitions of "fund", whether it's literally cash, or persuading people to invest blood-sweat-tears equity by joining you, or just getting work to allow you to put time into it. Instead of working on what others want you to work on, you will learn to persuade The Powers That Be that your work should be funded. The main caveat: working on the MBA may change your mind about what's worth working on.
Tips, suggestions, warnings - very subjectively and at random...
- The "hard" stuff (with numbers in it...) didn't really grab me at all, I originally studied math and was somewhat disappointed in much of that part of this kind of MBA program, my hunch is that a regular full-time MBA would have been better for this, more immersion, this was all a little in the one, out the other for me because there wasn't much time to practice.
- But, the "soft" stuff on the other hand, was a goldmine, all the personal development, organizational psychology, negotiations, etc. This alone was worth the tuition.
- The best part was gaining a far better understanding and tolerance of why and how pretty much everything we work with in software is more or less "broken". It's actually not broken; it's as good as budget and organizational circumstance allow. And if something is to be improved, that background has to allow for the improvement, or else it's just a pipe dream.
And you meet interesting people who will do interesting things in the community where you live, assuming you stay put, which I didn't, so I can't say much about that.
That's why I decided against the MBA. I was in this stage of life in my early-mid 30s, and asked for some advice from other engineer-with-MBAs. All of the ones who had done things of value with their MBA were no longer engineers.
What's your endgame?
Do you want to be a CEO some day? A product manager? Business development? Work in something other than software? An MBA will teach you useful things and help you get your foot in the door.
Do you want to be a CTO? Do you see yourself creating software down the road? None of the MBA things explicitly help you, and executive MBAs are very expensive to do "just for fun". If you want to go back to school, go get a masters in CS if you don't have one yet.
>The key to not having to deal with this problem anymore, for me, was starting to proactively switch things around to break the routine of consecutive work-weeks. One of my tricks was to do some kind of mini-vacation every 6-8 weeks, go somewhere new, leave work behind for 3-4 days. Even smaller things like regular social events can work wonders - anything that breaks the weekly routine.
I wonder if the VW ID Buzz California Camper van will be great for this. Take it for a drive to wherever. Go fishing, whatever. No need to rent a hotel or anything like that. Just hit the road and enjoy.
Burnout has become such a catch-all term as to be effectively meaningless by now.
In our professional lives, we are used to setting quantified KPIs in a SMART way, and I wonder: why are our expectations so comparatively low in our personal lives?
Measuring for the sake of measuring isn't useful. Burnout may have become a catch-all term, but it isn't meaningless: it means the person is currently fatigued and action is required. It matters that the root cause is identified; it could be a health issue, an environmental issue, or more probably a combination of factors. The KPIs you set are, as I see it, arbitrarily chosen - I'm not sure what you are measuring or why, and can't even say the goals are SMART (specific, measurable, achievable, realistic, time-bound).
I don't find KPIs terribly useful in software development at the level of an IC (which is, btw, my own level), unless we're talking about performance & latency of large services, where they are very useful and help save millions. Software IC work is special because, motivation-wise, it is almost self-sustaining, which is evident from massive open-source participation and pet projects.
On the contrary, at larger scale (starting from middle management and all the way to the top), I have an educated opinion that structured measurement of KPIs and clearly defined goals is what differentiates "tech" companies from all the rest - which is to say, tech-companies are known for their powerful growth.
It's really obvious in hindsight: managers are usually pretty disillusioned types and will avoid doing hard work unless properly incentivized; thus fine-grained, unforgeable, growth-adjacent KPIs are really at the heart of a tech company's success. The corporation's fast overall growth is a direct consequence of the synergy of KPI growth across the org chart.
You don't have to measure every one of these, of course. In my experience they are more or less correlated: good lifestyle interventions improve many measures at once.
SMART goals regarding these KPIs are pretty obvious.
I have a strong dislike of the modern focus on personal measurement and metrics. It implies a sort of mechanistic existence. It’s also often connected to a focus on productivity optimization, which given that the OP may be suffering from burnout, seems like it might be the wrong direction.
My advice to OP: whether it’s burnout or not (and it does sound like it), you aren’t liking what you’re doing right now, so if you can, stop doing it for a while. Summer is coming. Can you take a sabbatical? If not, can you quit? If you are able to regain your energy and enthusiasm you will surely be extremely employable, so your overall risk seems low.
Use the time to nourish your body and your spirit. Get off the internet and into the outdoors. Don’t measure your steps or your sleep duration, instead, reflect on how you feel. Lay back in the grass and watch the stars and ponder your place in this vast universe.
I wish you good luck and if you are able to start this journey, I’m excited for you.
If I could upvote your comment twice, I would. Measurement has become the be all and end all, and it's useful of course. But it's easy to make the mistake that you've captured the whole of something on your graph, or spreadsheet, and usually that is far from the case. The spirit of a thing is not easily captured.
> I have a strong dislike of the modern focus on personal measurement and metrics. It implies a sort of mechanistic existence. It’s also often connected to a focus on productivity optimization, which given that the OP may be suffering from burnout, seems like it might be the wrong direction.
Those things actually help reduce burnout, in my experience. An hour of sleep can make a big difference.
Compare "take at least two weeks of vacation, where vacation is defined as not checking any email or voicemail and engaging in purely arbitrary activities not directed by an external authority, within the next six months" to "you need a sabbatical."
Heck, even your own wording is already edging toward SMART. Staying off the Internet and not measuring steps or sleep duration are quantifiable goals. Binary, but still quantifiable.
I too would like if my life were nice by default, but it is not. When faced with a hard problem we have to resort to hard measurements of progress, because otherwise we tend to go in circles in high-dimensional parameter space.
Otherwise, taking a sabbatical is nice, decent, feel-good advice.
> If you are able to regain your energy and enthusiasm you will surely be extremely employable, so your overall risk seems low.
5 - I fail to see how a personality test can measure mental clarity, even if these aren't useless constructs. Though "core self-evaluations", if taken regularly, can be a good indicator of issues.
6 - Not sure what you are measuring. Working memory?
7 - "Available crystallized intelligence". Isn't this an oxymoron?
8 - If only our bodies didn't show signs of aging.
My goals, as I have already said, are pretty obvious: find a good set of lifestyle changes (including exercise types and patterns, diet, sleep conditions, outdoor activities, supplements and drugs, but also choice of country & city to live in) so that these metrics move in a good direction and I feel better. I tried a less systematic approach and it didn't work for me. My impression is that our genetic makeup tends to make us choose a complementary sort of environment, so everything (behavior, health, mood) comes to equilibrium and balances out - it's really hard to make consistent progress when you are inside such a perverse equilibrium. Thus the need for a heavy-handed, hard-measurement approach.
1. Mood diaries are more about trends and avoiding depressive episodes. It's better to rate your mood in the evening so your professional life is included in the rating: if your manager stresses you out at your job, you may not think about it in the moment, but it may show up in your mood diary as a week-scale trend.
2. Completely avoiding social media is an unattainable goal, so usage should be limited to 0.5-1.0 hr.
3. Yes, and sleep well, which is quite hard.
4. 3000 is too little; I'd aim for 5000-10000.
5. There are various tests; I'm specifically interested in the IQ test at https://openpsychometrics.org/tests/FSIQ/ but it's more or less interchangeable with N-back. IQ is a scary number, but it's a good barometer for how good you really feel. The difference between a "good day" and a "bad day" shows up clearly on such a test.
6. Working memory and attention, yes. These are degraded by lack of sleep & stress & aging.
7. Again, lack of sleep & stress & aging tend to degrade active vocabulary, in my case.
8. Of course we age, but this aging process is malleable: some interventions are shown to decrease (!) the value of various aging clocks. Yes, the aging clocks themselves are imperfect, but this decrease is often correlated with subjective & objective improvements on other axes.
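To make the week-scale trend idea from point 1 concrete, here is a minimal sketch, assuming mood is rated once per evening on a numeric scale; the function names and the rating data are hypothetical, not from any tracking app:

```python
# Hypothetical sketch: spot week-scale mood trends from daily evening ratings.
def weekly_averages(ratings):
    """Average consecutive 7-day windows of mood ratings."""
    return [sum(week) / len(week)
            for week in (ratings[i:i + 7] for i in range(0, len(ratings), 7))]

def trend(ratings):
    """Difference between the latest and the earliest weekly average;
    a negative value means mood is drifting down over the weeks."""
    weeks = weekly_averages(ratings)
    return weeks[-1] - weeks[0]

# Two weeks of made-up ratings: a stressful second week that might not
# register day-to-day shows up as a negative week-scale trend.
diary = [7, 7, 6, 8, 7, 7, 7,   5, 6, 5, 4, 5, 5, 4]
print(trend(diary))  # negative: mood drifting down
```

This is the sense in which a diary catches what in-the-moment judgment misses: no single day looks alarming, but the weekly aggregate does.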
If you accept a fundamentally mechanistic view of nature, biology and ourselves, you might as well position yourself to reap the benefits.
> If you accept a fundamentally mechanistic view of nature, biology and ourselves, you might as well position yourself to reap the benefits.
You're missing my point. What I deny is the usefulness of the presented frameworks and tools. But it's fine if it's working for you and can work for others.
You can track only the most important ones. Biological age isn't something you need to track every day - more like every month, or every week if you are rich.
Step tracking & sleep tracking come for free.
To be honest I don't have much energy either, unless I take stimulants. Which I don't do often due to reasons.
Take one full day every other week where you play around, learn and explore.
If you have not been able to do this at least 5 times last quarter - why not? What can we do to allow that to happen?
This is an example of an actual goal I have for members of my team - it is Specific, Measurable, Attainable, Realistic/Relevant and Time-bound.
It's also tangential to the OP's topic here in a couple of ways.
Good things happen when you allow for slack, but we often put too much pressure on ourselves, and won't allow it.
It sure sounds smart, it just isn't. Another example of a mechanistic approach in a field where humans are the major factor. Goals, at best, are arbitrary targets reached via consensus.
In my honest opinion, one of the largest meta-problems ever is that decent mechanistic routes for helping each other go untaken in favor of more feel-good, decent-sounding verbal coping.
It's an acronym: Specific, Measurable, Achievable, Relevant, and Time-bound. So if you're struggling to read books, say, "Finish this book by the end of the month" is:
Specific: read the book
Measurable: no ambiguity as to whether you've read it
Achievable: a month is a reasonable amount of time to finish a book in
Relevant: read a book to improve your reading habits
Time-bound: it's not a project that'll hang over you for ages, you're done at the end of the month
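As a toy illustration of how literal that checklist can be made (the class and field names are my own invention, not from any goal-tracking library), the book goal above might be written as:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a SMART goal as plain data, with "measurable"
# reduced to a yes/no check and "time-bound" to a hard deadline.
@dataclass
class SmartGoal:
    specific: str      # what exactly: "Finish this book"
    deadline: date     # time-bound: done at the end of the month
    done: bool = False # measurable: either you've read it or you haven't

    def on_track(self, today):
        """Already achieved, or still within the time box."""
        return self.done or today <= self.deadline

goal = SmartGoal("Finish this book", deadline=date(2024, 1, 31))
print(goal.on_track(date(2024, 1, 15)))  # → True, still within the month
```

The point of the exercise is only that each letter of the acronym maps to something checkable; whether that's a good way to run a life is exactly what the thread is arguing about.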
I switched from an MBP to a Dell Precision 5520 with Ubuntu, then upgraded to a 5540, and am looking forward to the 5760. My colleagues are also using the 5540. These machines have been great for us. We don't have much time to spend tweaking things; instead we do a clean reinstall every few weeks and apply all upgrades as quickly as they come. Dell and Canonical have been supporting these machines quite well, in our experience.
I looked at the Samsung 34" model quite carefully about a month ago, but bought three U28E590D 28" 4K screens instead. I'm not done with this, but I'll be driving them with an MBP (when Apple finally gets the Skylake upgrade done) and a Surface Pro. I spend 99% of the time writing code in Idea. It has to be directly in front of me, and I very frequently glance at other windows on either side.
I decided against the 34" because I could not really figure out how to make the wide screen work out for me:
- One of those is not enough screen to put the browsers and terminal windows on either side of the IDE
- Two screens always give me a neckache, because I turn from the screen in front to one side but never the other...
- Putting a 34" in the center of three screens moves the edges of the side screens really far out, which I think would be odd for my field of vision, force me to turn my neck too far, and cost unnecessary desk space - I have this problem already to some degree with the three smaller screens
- I don't really feel like I have time to hassle with unexpected problems that pop up because of using a non-standard resolution; I want mainstream and simple so I can focus on my stuff. I think this is relevant both to driving the display (e.g. at the OS, graphics card, and driver level) and at the application level, because it is unlikely that any UI will be designed with the wide screen in mind
- I like to run full screen, so I don't have to futz with border dragging etc.
In the end, I decided that 34" is actually not enough - I'd want a screen like this in the 50-60" range, but pixels above eye level also cause neckache, so this would be a resolution of something like 10000x2160. I'd also want better window manager support for getting the right window placements without having to do that myself every time. I saw a backlit projector-driven screen like that in a research lab 15 years ago - perhaps we can actually buy one in another 10-15 years?
I also thought that the 34" might be really great if I either spent all of my time in a single full-screen application without the side windows, or went back to coding in vim and arranged everything into a smaller area - which I don't want to do, but lots of folks prefer that.
I am with you: 10k x 2160 (or really, 6k x 2k would be fine).
Two screens are hard because they don't center well... but three giant screens are overkill... so for now at least I'm stuck with two giants, and I just mostly ignore one of them (emails and chats, not my main stuff).
https://www.oecd.org/en/publications/pisa-2022-results-volum...
https://icelandmonitor.mbl.is/news/news/2023/12/05/concernin...
Academic performance is somewhat of a burning issue - I'd guess there is less of a translation issue with the article than a cultural and political issue around the topic.