I myself started making the same distinction when I talk about these things in English, except it's "social media" vs "social networks". Though I have no idea how to make that distinction in Russian; social "media" never caught on as a term there.
An extra annoying problem about social media for me is that while I can make most of the platforms give me a chronological feed of content authored only by people I follow, most other people see mine in an algorithmic feed. This includes people I have zero social connections with. For example, I just gave up trying to discuss politics on Twitter, because every time I post anything political, that tweet ends up in the feeds of hundreds of people who hold a radical version of the opposite views, with predictable results. And there's nothing I can do. I can't opt out of being recommended.
Sure you can. You can not post political things on social networks. They're not doing any good anyway. They're not changing anyone's mind. They're not providing depth or breadth to the discussion. I don't say this to be insulting, but as a realist.
My point is that I just want to be able to discuss any topic with my followers without self-policing lest a bunch of anonymous accounts butt into the conversation and completely derail it.
What you're probably looking for is closer to a closed discussion group or mailing list than "social media", which is presently universally-readable, algorithmically-targeted, feed-based, advertising-supported, and increasingly, saturated with AI slop (which itself has replaced clickbait and ragebait).
Which reminds me of Kitman's Law: pure drivel tends to drive ordinary drivel off the TV screen.
I want my posts universally-readable and universally-interactable (that's why I don't like the idea of locking my accounts). I also want to be able to explore the social graph — looking at who follows who, what that friend of a friend posts, etc. It all forms an integral part of what social networks are.
What I absolutely do not want is the platform having any of its own agency. I want a social network that ideally works as a dumb pipe. I especially don't want my content surfaced in front of the kinds of people who would've never found it through their own exploration.
It should come as no surprise, then, that I have a lot of faith in the fediverse.
My view has evolved: after being a long-time fan of universal readability, I now think there's a time and place for various types of interactions.
The truth, though, is that today's Internet is vastly different from that experienced in the 1980s (when I first came online), '90s, aughts, or even the teens. Scale is a huge piece of this, though broadband, mobile devices, advertising, attention merchants, clickbait, and AI have all had their impacts. The Internet (or proto-Internet) of the 1990s and earlier was very limited in access, with soft-but-imposing barriers to entry (selective research universities, some government agencies, some tech firms), which made the experience both "open" and closed. Yes, there was exposure to a large audience, particularly as contrasted to immediate physical space or mass media of the time (print, including early small-scale copiers, amplified audio, radio, television, and telephones). But the total online population would be considered a minuscule social network by current standards --- a few thousand to a few million souls in the 1980s and 1990s.
I continue to use some smaller networks today (HN, Mastodon, Diaspora*), and find that they tend to retain at least some of the feel of the forums I was familiar with in the 1980s and 1990s: small, intentional, generally motivated. Ironically, their limited size and the fact that those who are there want to be there is something of a feature. A significant problem isn't so much people leaving as dying, which seems to happen with regularity. (An older population amplifies this, though I've noted previously that mortality at FB/Google scale is likely on the order of tens of thousands of accounts daily.)
The platforms I mention also largely lack agency, which as you note is quite refreshing. I'll note that HN is somewhat of an exception, but it's mediated mostly by humans (member flags, moderator actions), as well as some automated rules, though those are largely guided by HN's mission of "intellectual curiosity" rather than attention-mongering.
Factors other than scale alone include broadband (enabling graphics, audio, video, and interactive content, all of which have considerable downsides), mobile devices (making for more distracted and far less nuanced discussion, as well as quite brief responses contrasted with physical keyboards), and the pernicious first and higher-order effects of advertising, manipulation, algorithms, AI, and the like.
I've toyed with the notion of a set of interrelated scopes, some limited and personal, some more widely open, though arranging that formally and as part of a designed system has yet to emerge. I have hopes for that though.
There's also the distinction between a pure social graph and a highly-curated specific discussion or forum. I've tried the latter from time to time with stunningly good results, especially at modest size (< 50 participants generally).
(This comment, as most of mine, was composed at a keyboard, and edited several times.)
Politics is a complex topic. If you want to learn more, social media is not the way to do it. Well reasoned books and essays are. If you want to convince others of your positions, social media is not the way to do it. Personal relationships in real life are.
Again, you seem to insist on an ulterior motive, completely discounting the value or pleasure of conversation. In contrast, reading is a solitary activity. Have you heard of book clubs? People read books, and then they get together to discuss the books.
Hacker News itself is all about reading articles, and then discussing the articles with others. "If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity."
My understanding is that it's likely somewhere around 7.5% for men and women. Including bisexual people brings it closer to 10%. That's based on self-reporting, I think. I'm not sure how significant that would be in Meta's world.
Among men this would only be 3 or 4%. Probably not that significant given how coarse the strategy itself is.
And no one can even give a concrete answer why root certificates need expiration dates. It's just because reasons.
IMO the whole PKI thing is a terrible idea to begin with. It would make much more sense to tie the trust in TLS to DNS somehow, since the certificates themselves depend on domains anyway. Then you would only have a single root of trust, and that would be your DNS provider (or the root servers). And nothing will expire ever again.
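For what it's worth, something close to this already exists as DANE: you publish a hash of your server's certificate or public key in a DNSSEC-signed TLSA record, and clients check the presented key against it. A rough zone-file sketch (the hash and domain are placeholders, not real values):

```
; TLSA record for HTTPS on example.com (DANE-EE mode)
; fields: usage 3 (match the end-entity cert itself),
;         selector 1 (match the SubjectPublicKeyInfo),
;         matching type 1 (SHA-256 digest)
_443._tcp.example.com. IN TLSA 3 1 1 0123456789abcdef...
```

Adoption has been limited mostly because it requires DNSSEC, and browsers never shipped support for it.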
Root certificates need expiration dates for the same reason that LetsEncrypt certs need an expiration date: risk of cert compromise and forgery increases over time.
Over a long enough timeline, there will be vulns discovered in so much of the software that guards the CA certs in RAM.
> risk of cert compromise and forgery increases over time.
And what if the certificate is compromised before it expires? Right, there's a revocation mechanism for that. So why expire them then if they can be revoked anyway IF they get compromised?
The reason why domain TLS certificates expire is that domains can change owners. It makes sense that it should not be possible for someone to buy a domain for one year, get a non-expiring TLS certificate issued for it, and then have the ability to MitM its traffic if it ever gets bought by someone else later.
Domain certificates are sent as part of the connection handshake, so them expiring is unnoticeable for the end users. However, root certificates rely on the OS getting updates forever, which is unsustainable. Some systems lack the ability to install user-provided root CAs altogether, and some (Android) do allow it but treat them as second-class.
Because the most dangerous secret is one that has been compromised and you don’t know it. This sets a time limit for their usefulness. Sometimes the stories about terrible default choices that are insecure sink in and architects choose a better path.
Also, details about the certs and the standards for them change over time. This makes it easier for the browser venders (via the CA forum) to force cert providers to update over time.
The instant we bound encrypted connections with identity we failed. And decades later we're still living with the mistake.
I'm completely serious when I say we need to abandon the ID verification part of certificates. That's an entirely separate problem from the encryption protocol. An encryption protocol needs absolutely no expiration date; it's useful until it's broken, and no one can predict that. Identity should be verified in a separate path.
Do certificate revocation lists need to keep including certificates that have long since expired? I don't see why root certificates need to expire as long as the certificates signed by those roots all have reasonable expiration windows, unless someone is doing something strange about trusting formerly-valid certificates, or not checking root certificates against revocation lists.
Of course they do, they have to. But it's okay for things that are sent to you over the network to expire. It's not okay for things built into your potentially abandoned OS to expire.
Or maybe, you know, we should stop writing security-critical software in memory-unsafe languages. Mobile devices not treating their owner as an adversary would also be nice.
That's only part of it. That all security issues would be gone after writing code in a memory-safe language is a fairytale (though it does help a lot).
The other parts are layered defense, reducing the number of privileged/non-sandboxed applications/processes, not shipping spyware/adware, etc.
Only Apple/GrapheneOS and to a slightly lesser extent Google Pixel are good at this. Many phone manufacturers still use the TrustZone TEE on the main CPU (rather than a separate security processor) and lack isolated radios, hardware memory tagging, and dozens of other defense-in-depth features.
If you mean the software supply chain, minimize third-party dependencies and carefully review any updates. I mean read and understand code diffs before you bump versions.
If you mean the hardware supply chain, has that ever actually happened? I've only ever seen it mentioned as a theoretical possibility so far.
I worked at Russia's largest social media company as the founding Android developer. I quit as soon as I realized things were only going to get worse after an acquisition and a very noticeable shift in user treatment. But that job was never about the money for me. The salary was just a nice yet optional bonus.
He encodes bits as signs of DCT coefficients. I do feel like this is not as optimal as it could be. A better approach IMO would be to just ignore the AC coefficients altogether and instead encode several bits per block into the DC. Not using the chrominance also feels like a waste.
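My reading of the sign-based approach (a toy sketch, not the author's actual code) is roughly this: force the sign of each nonzero coefficient to carry one bit, then read the signs back out after decoding. The coefficient values below are made up for illustration:

```python
# Toy sign-based embedding: bit 1 -> positive coefficient, bit 0 -> negative.
# The sign survives mild requantization as long as the magnitude
# isn't rounded all the way to zero.

def embed_bits(coeffs, bits):
    """Force each coefficient's sign to encode one bit."""
    return [abs(c) if b else -abs(c) for c, b in zip(coeffs, bits)]

def extract_bits(coeffs):
    """Recover the bits from the coefficient signs."""
    return [1 if c > 0 else 0 for c in coeffs]

coeffs = [3.0, -7.0, 2.0, 5.0]    # made-up AC coefficients
bits = [0, 1, 1, 0]
stego = embed_bits(coeffs, bits)  # [-3.0, 7.0, 2.0, -5.0]
assert extract_bits(stego) == bits
```

The fragility is exactly what it looks like: any coefficient quantized to zero loses its sign, and with it the bit.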
This actually won't work against YouTube's compression. The DC coefficient is always quantized, rounded, scaled, and otherwise transformed. That means that these bits are pretty much guaranteed to be destroyed immediately. If this is the case for every single block, then the data is unrecoverable. Also, chrominance is not used on purpose, because chrominance is compressed much more aggressively compared to luminance.
I meant choosing multiple values, e.g. 4 to represent 2 bits. Say, 0.25, 0.5, 0.75, and 1. Then when decoding you would pick the closest valid value, so for example for 0.20 it would be 0.25. Not using AC coefficients would mean that theoretically you would get more bitrate for the DC ones.
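A minimal sketch of that idea: pack 2 bits per block into the DC value using 4 evenly spaced levels, and decode by snapping to the nearest valid level. The level values are illustrative, not tuned against any real codec:

```python
# 4 levels encode 2 bits per block: LEVELS[i] carries the 2-bit value i.
LEVELS = [0.25, 0.5, 0.75, 1.0]

def encode_dc(two_bits):
    """Map a 2-bit value (0..3) to its DC level."""
    return LEVELS[two_bits]

def decode_dc(value):
    """Snap a (possibly perturbed) DC value to the nearest valid level."""
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - value))

assert decode_dc(0.20) == 0            # 0.20 snaps to 0.25 -> bits 00
assert decode_dc(encode_dc(3)) == 3    # round-trips when undisturbed
```

This tolerates perturbations up to half the level spacing (0.125 here); anything larger flips the decoded bits, which is the failure mode the reply below points at.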
I’ve been told this many times in the comments, but this again is not reliable. Simply put, compression doesn’t necessarily follow a pattern, so specifying “ranges” or rounding to a specific place will not work. Compression optimizes for the eye, and doesn’t do the same thing for every value. It will round some down, some more, others less. Giving a range is simply not enough.
The one related thing that has always driven me a bit crazy on iOS (which I hardly ever use, only for testing) compared to Android (which I've been using daily since 2011) is how touches register slightly higher than where you're physically tapping. This alone makes hitting small-ish targets, including keyboard keys, quite challenging and infuriating. This does seem intentional too, as it depends neither on the device model nor on the OS version; it's been like that for over a decade at this point.