The issue here is, "physical" is a misleading word. Digital works are also held on physical media. The distinction is whether the work is stored on a dedicated physical object.
Edit: I suppose a jukebox confuses things as I think it belongs in the "physical media" box, but it isn't dedicated to a specific work. Hmm.
> Is this something that generally goes beyond school?
The things that make you vulnerable change depending on what year and situation you're in. I can very much get behind the idea that you should consider whether your legacy sense of what makes you vulnerable is relevant to your current circumstances. I'm not so much behind the "freely dispense the rope people will use to hang you" version.
There's a lot of abstraction in this thread, but I would like to hear specifics.
What are the exact vulnerabilities that we are talking about?
From my side, I guess I can say I frequently feel impostor-type things, or that I'm not doing enough. I won't mention that at work, but I definitely share those feelings with my partner.
I would hate not being able to share something like that with my partner, for instance.
When I was at school (and in the 20th century generally) admitting to anything outside traditional masculinity / heterosexuality made you vulnerable to physical / verbal attack. Which remains the case for a lot of people in the 21st century. If they want to be loud and proud then good for them, but I can understand it if they prefer to keep it quiet. Whereas, at least around me, now, I think you can come out as gay without too much concern for your physical safety.
Conversely, at my school you could be as overtly homophobic as you wanted with no consequences, whereas now you should probably be a lot more cautious if you harbour homophobic sentiments.
Talking about partners in particular, I've had partners I felt fairly safe sharing anything (most things anyway) with, and I've also had partners who would mine our conversations for any kind of viable ammunition. Which led to me being a bit more careful what I said. We can perhaps agree the first kind of relationship is better.
Yeah, I think the second type of relationship is much worse than no relationship. I'd say the problem there isn't with someone being vulnerable; it's a problem with the relationship...
Yeah, during school it's difficult since you are forced together with potentially toxic people. As an adult you can choose, at least in your personal life and to an extent at work, although the workplace can also be difficult to get right.
I'd 100% rather be alone than around people who might judge me, or use anything about me against me in some way. It would feel internally disgusting to think that someone might be trying to gain at my expense, and that I'm not around people who are there to build each other up. What a waste of time.
The thing is, what you want is specifically a relationship where you are not vulnerable. If you're not worried about the consequences of the things you say, there's no actual vulnerability. You're just adapting to a safe situation. In which case, good for you and your partner.
Ultimately, what I'm trying to do though, is to build myself such a life that if my internal principles are good, I shouldn't have to worry in most cases about what I'm saying since I want to believe in my principles. I want my interactions with people to be win-win, and I want to surround myself with people who want that too. If someone displays lose-win behavior, I should always naturally have the "moral" upper-hand assuming other people around me are reasonable. And if none of the people around me are reasonable, I should go and find the reasonable people.
People seem to be romanticizing the term "vulnerable" though. I think it would be important to go deeper into this: what exactly does "vulnerability" mean? I have had depression and anxiety diagnosed in the past, along with addictions and other similar issues. Are these vulnerabilities because they may interfere with me acting optimally, or because they give someone a tool to get at me if they wanted, using the stigma around those labels to make others think worse of me?
> 24 million people receiving benefits (aka cash) in uk
It looks like most of those people are claiming State Pension / Pension Credit. Which doesn't make it not true, but it's maybe not what most people will think of first when you talk about benefit claimants.
Maybe, but what the article is really about is how these two people responded to being marked for greatness at a young age. I don't see a reason to disbelieve that part.
Then the article should be titled "being famous as a child is not enough". But then it would not be a very interesting article.
Also, it's not just "maybe" that neither of these people have exceptional IQ. It is more like "most certainly not, unless they have some strong proof, because they lied multiple times".
Out of interest, do you identify any of the comments in this discussion as that kind of posturing? The "pro-EA" comments I see here seem (to me) to be fairly defensive in character. Whereas comments attacking EA seem pretty strident. Are you perceiving something different?
My impression of EA is not based on the comments here but the more public figures in this space. It is likely that others attacking EA are reacting to this also, while those defending it are doing so about the general concept of EA rather than a specific realization of EA that commenters like myself are against.
> Subtract billionaire activity from your perception of EA attitude
But that's the problem, that is my entire perception of EA. I see regular altruism where, like in the shopping example I gave above, wanting to be effective is already intrinsic. Doing things like giving people information that some forms of giving are better than others is just great. No issues there at all, but again I see that as a part of plain old regular altruism.
Then there is Effective Altruism (tm), which is the billionaire version that I see as performative and corrupt. Even when it helps people, this seems to be incidental rather than the main goal, which appears to be marketing the EA premise for self-promotion and back-patting.
Obviously EA has a perception problem, but I have to admit it’s a little odd hearing someone just say that they know their perception is probably inaccurate and yet they choose to believe and propagate it regardless.
If it helps, instead of thinking of it as a perception problem, maybe think of it as a language problem. There are (at least) two versions of EA. One of them has good intentions and the other doesn't. But they are both called EA, so it's not that people are perceiving incorrectly, it's that they hear the term and associate it with one of those two versions. I tried to disambiguate by referring to one as just regular altruism and to the other by the co-opted name. EA has been negatively branded, and it's very hard to come back from that association.
"A lot of people think that EA is some hifalutin, condescending endeavor and billionaire utilitarians hijack its ideology to justify extreme greed (and sometimes fraud!), but in reality, EA is simply the imperative (accessible to anyone) to direct their altruistic efforts toward what will actually do the most good for the causes they care about. This is in contrast to the most people's default mode of relying on marketing, locality, vibes, or personal emotional satisfaction to guide their generosity."
See? Fair and accurate, and without propagating things I know or suspect to be untrue!
This is a perfectly fine definition, if you change the "but in reality" to "and". Like it or not, EA means both of these things simultaneously. So it's not that someone using one definition is wrong, only that they are using that definition. Language is like that. There is no official definition; it's whatever people en masse decide to use, and sometimes there's a split vote.
I see your point, but suppose the only red-headed people you ever saw were Kathy Griffin and Carrot Top, and you found them unfunny, and also Kathy and Carrot Top were loudly and sincerely proclaiming that they were funny, that they were funnier than any other comedians, and that it was because they were red-headed. How irrational would that perception be?
As per conversation elsewhere, I think you've fallen for some popular but untrue / unfair narratives about EA.
But I want to take another tack. I never see anybody make the following argument. Probably that's because other people wisely understand how repulsive people find it, but I want to try anyway, possibly because I have undiagnosed autism.
EA-style donations have saved hundreds of thousands of lives. I know there are people who will quibble about the numbers, but I don't think you can sensibly dispute that EA has saved a lot of lives. This never seems to appear in people's moral calculus, like at all. Most of those are people who are poor, distant, powerless and effectively invisible to you but nevertheless, do they not count for something?
I know I'm doing utilitarianism and people hate it, but I just don't get how these lives don't count for something. Can you sell me on the idea that we should let more poor people die of preventable diseases in exchange for a more morally unimpeachable policy to donations?
Lots of people and organizations make charitable donations. Often that's done in the name of some ideology. Always they claim they're doing good, not throwing the money away.
None of this is new. What may be new is branding those traditional claims as a unique insight.
Even the terrible behavior and frightening sophistry of some high-profile proponents is really nothing groundbreaking. We've seen it before in other movements.
I don't think the complaint is really the donations or the impact, rather it's that the community has issues?
Whether you agree that someone can put money into saving lives to make up for other moral faults or issues and so on is the core issue. And even from a utilitarian view, we'd have to say that more of these donations happened than would have without the movement, or with a different movement, which is difficult to measure. Consider the USAID thing: Elon Musk may have wiped out most of the EA community's gains by causing that defunding, and was probably supported by the community in some sense. How do we weigh all these factors?
> Whether you agree that someone can put money into saving lives to make up for other moral faults or issues and so on is the core issue
For me the core issue is why people are so happy to advocate for the deaths of the poor because of things like "the community has issues". Of course the withdrawal of EA donations is going to cause poor people to die. I mean yes, some funding will go elsewhere, but a lot of it's just going to go away. Sorry to vent, but people are so endlessly disappointing.
> Elon Musk may have wiped out most of the EA community's gains by causing that defunding
For sure!
> and was probably supported by the community in some sense
You sound fairly under-confident about that, presumably because you're guessing. It's wildly untrue.
I can't imagine EA people supported the USAID decision specifically - but the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
And the rationalist community writ large is very much part of that. The whole idea that private individuals should get to decide whether or not to do charity, or that they can casually stop giving funds, or that so much money needs to be tied up in speculative investments and so on, I find all of that pretty distasteful. Should life-or-death matters be up to whims like this?
I apologize though, I've gotten kinda bitter about a lot of these things over the last year. It's certainly a well intentioned philosophy and it did produce results for a time - there's many worse communities than that.
> the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
For sure, not quibbling with any of that. The part I don't get is why it's EA's fault, at least more than it's many, many other people and organizations' fault. EA gets the flak because it wants to take money from rich people and use it to save poor people's lives. Not because it built the Silicon Valley environment / tech culture / investing bubble.
> Should life or death matters be up to whims like this?
Referring back to my earlier comment, can you sell me on the idea that they shouldn't? If you think aid should all come from taxes, sell me on the idea that USAID is less subject to the whims of the powerful than individual donations. Also sell me on the idea that overseas aid will naturally increase if individual donations fall. Or, sell me on the idea that the lives of the poor don't matter.
For decades, things like USAID were bipartisan and basically untouchable, so that plus higher taxes would have been a fairly secure way to do things. The question is whether that can be accomplished again, or whether we need a thorough overhaul of who's in power in various parts of society.
None of this will happen naturally though. We need to make it happen. So ultimately my position is that we need to aim efforts at making these changes, possibly at a higher priority than individual giving - if you can swing elections or change systems of government the potential impact is very high in terms of policy change and amount of total aid, and also in terms of how much money we allow the rich to play and gamble with. None of these are natural states of affairs.
(Sincerely) good luck with that, but I don't see why it means we should be against saving the lives of poor people in the immediate term. At some point we might just have to put it down to irreconcilably different mental wiring.
Similarly, the reason comments like yours get voted to the top of discussions about EA is that they imply "It's best if rich people keep their money, because the people trying to save poor people's lives are actually bad". There's a very obvious appeal to that view, especially somewhere like HN.
No, I think this is just about the difference between Effective Altruism (tm), altruism that is actually effective, and the hidden third option (tax the rich).
EA-the-brand turned into a speed run of the failure cases of utilitarianism. Because it was simply too easy to make up projections for how your spending was going to be effective in the future, without ever looking back at how your earning was damaging in the past. It was also a good lesson in how allowing thought experiments to run wild would end up distracting everyone from very real problems.
In the end an agency devoted to spending money to save lives of poor people globally (USAID) got shut down by the world's richest man, and I can't remember whether EA ever had anything to say about that.
The work I do is / was largely funded by USAID so I'm biased, but from literally everything I've seen EA people are unanimously horrified by the gutting of USAID. And EA people are overwhelmingly pro "tax the rich".
But again, I recognize the appeal of your narrative so you're on safer ground than I am as far as HN popularity goes.
I have a lot of sympathy for the ideas of EA, but I do think a lot of this is down to EA-as-brand rather than whatever is happening at grassroots level. Perhaps it's in the same place as Communism; just as advocates need a good answer to "how did this go from a workers' rights movement to Stalin", EA needs an answer to "how did EA become most publicly associated with a famous fraudster".
EA had a fairly easy time in the media for a while which probably made its "leadership" a bit careless. The EA foundation didn't start to seriously disassociate itself from SBF until the collapse of FTX made his fraudulent activity publicly apparent.
But mostly, people (especially rich people) fucking hate it when you tell them they could be saving lives instead of buying a slightly nicer house. That (it seems to me) is why eg. MOMA / Harvard / The British Museum etc get to accept millions of dollars of drug dealer money and come out unscathed, whereas "EA took money from somebody who was subsequently convicted of fraud" gets presented as a decisive indicator of EA's moral character. It's also, I think, the reason you seem to have ended up thinking EA is anti-tax and anti-USAID.
I feel like I need to say, there's also a whole thing about EA leadership being obsessed with AI risk, which (at least at the time) most people thought was nuts. I wasn't really happy with the amount of money (especially SBF money) that went into that, but a large majority of EA money was still going into very defensible life-saving causes.
It's not kid friendly, but in case anybody's interested I just wrote up how I made a simple "hardware" synth by bodging together a Raspberry Pi Pico 2 and I2S audio module, total cost around £10 on Amazon UK.
The hardware's very cheap and easy. The "default" synthesis is pretty simple but also pretty hackable (in Rust) if you want to customize it.
I did some similar playing around with an ESP32 and I2S a few years ago (lockdowns were an odd time). Where I seem to remember getting stuck was how to get the phase to line up, so that each sample looped at a zero-crossing point (which is different for each frequency).
I don't really understand your issue, sorry, but my thing basically just writes the audio output to a buffer, and the DMA and PIO "automatically" send that to the I2S output. There's also some messing about with ping-ponging between two DMA buffers to avoid gaps in the audio between buffer writes. I guess things might be quite different on the ESP32.
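Roughly the shape of the ping-pong part, as a minimal sketch (the actual DMA hand-off and completion interrupt depend on the HAL, so none of this is my literal code; `PingPong` and `next_block` are names made up for illustration):

    const BUF_LEN: usize = 256;

    /// Two buffers: while the DMA drains one, the synth fills the other.
    struct PingPong {
        buffers: [[i16; BUF_LEN]; 2],
        active: usize, // index of the buffer the DMA is draining
    }

    impl PingPong {
        /// Call when the DMA signals the active buffer is done:
        /// render into the idle one, swap, hand the fresh buffer back.
        fn next_block(&mut self, mut synth: impl FnMut(&mut [i16])) -> &[i16; BUF_LEN] {
            let idle = 1 - self.active;
            synth(&mut self.buffers[idle]); // render while the other plays
            self.active = idle;
            &self.buffers[self.active]
        }
    }

    fn main() {
        let mut pp = PingPong { buffers: [[0; BUF_LEN]; 2], active: 0 };
        // Dummy "synth" that writes a ramp, standing in for real DSP.
        let block = pp.next_block(|buf| {
            for (i, s) in buf.iter_mut().enumerate() {
                *s = i as i16;
            }
        });
        assert_eq!(block[10], 10);
    }

On the Pico the swap is driven by the DMA completion (interrupt or polled), with the PIO clocking the samples out as I2S, but that part is all HAL-specific.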
Edit: OK, I glanced at your code. If I read it right, it seems like you're writing sine waves into buffers at "init" time, then copying the appropriate buffer at "run" time. Which is not what I'd do, but then I'm used to more luxurious devices. Maybe try using a fast sin approximation rather than the precomputed buffer table?
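For illustration, the phase-accumulator version I mean looks roughly like this (a host-side sketch with made-up names; on the microcontroller you'd likely swap `sin()` for libm or a fast polynomial approximation):

    use std::f32::consts::TAU;

    const SAMPLE_RATE: f32 = 48_000.0;

    /// The phase lives in [0, 1) and carries over between buffers, so
    /// there's no discontinuity at buffer boundaries and no wavetable
    /// to precompute per frequency.
    struct Osc {
        phase: f32,
    }

    impl Osc {
        fn fill(&mut self, buf: &mut [i16], freq: f32) {
            let step = freq / SAMPLE_RATE; // phase increment per sample
            for s in buf.iter_mut() {
                let v = (self.phase * TAU).sin();
                *s = (v * i16::MAX as f32) as i16;
                self.phase += step;
                if self.phase >= 1.0 {
                    self.phase -= 1.0; // wrap, keeping continuity
                }
            }
        }
    }

    fn main() {
        let mut osc = Osc { phase: 0.0 };
        let mut buf = [0i16; 256];
        osc.fill(&mut buf, 440.0); // A4 at 48 kHz
    }

Since the phase never jumps, changing `freq` between buffers shouldn't click, and zero crossings stop mattering.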
Thanks! This was just some playing round years ago, so I don't remember much about it. I don't remember if calculating sin() at runtime was too slow, or I wanted to avoid polyphonic O(n) inside the buffer loop, or what. I bet it uses a LUT anyway.
I much prefer your method of just rotating the phase!
Were you using single-cycle waveforms or longer samples? In the former case, I guess there's not much to it: you just cycle through the waveform (and the waveform you choose would usually start and end at a zero crossing by construction, like sines, triangles, etc., or, if it does not, well, that will just create extra harmonics that might or might not be desirable).
For the latter case, it's more subtle. Most samplers that implement this feature will assume correct loop points (usually, but not necessarily, at a zero crossing) are chosen manually by the user. Some of them implement cross-fading at the looping point to make that more forgiving, but that may be CPU/RAM intensive for some devices. If you're referring to small clicks you may get at the start and stop of sample playback, it's fairly common to use a very short (a millisecond or less) fade-in/fade-out to avoid that. There are a lot of books out there, but the main one I've read and enjoyed is this one, which happens to be free: https://cs.gmu.edu/~sean/book/synthesis/. It's more of a textbook than a cookbook.
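If it helps, that fade is only a few lines. A sketch with a plain linear ramp applied in place (the length is a taste thing; `declick` and the 48-sample choice are just illustrative):

    /// Fade the first and last `fade_len` samples in place so playback
    /// can't start or stop mid-swing and click.
    fn declick(samples: &mut [i16], fade_len: usize) {
        let n = fade_len.min(samples.len() / 2);
        if n == 0 {
            return;
        }
        for i in 0..n {
            let gain = i as f32 / n as f32; // ramps 0.0 -> 1.0
            samples[i] = (samples[i] as f32 * gain) as i16;
            let j = samples.len() - 1 - i;
            samples[j] = (samples[j] as f32 * gain) as i16;
        }
    }

    fn main() {
        let mut sample = vec![i16::MAX; 1024];
        declick(&mut sample, 48); // ~1 ms at 48 kHz
        assert_eq!(sample[0], 0);
    }

An equal-power or raised-cosine ramp sounds a bit smoother, but for millisecond-scale declicking a linear ramp is usually fine.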
Thanks! Reading the code again, it looks like I was filling up a buffer of 256 x 16-bit samples.
I think the issue with looping at arbitrary points was word alignment. You need to give it a whole buffer. So you'd have to do some nasty bit-shifting.
Per the other reply, I think doing it live is probably easiest!
Thanks for the recommendation. If I ever get back into this I'll take a look.