Hacker News | vld_chk's comments

By what time should we expect the US administration to post a video on X about “good classic” plastic bags and ban any attempt to replace them in the US? :)

In my mental model, the fundamental problem of reproducibility is that scientists have a very hard time finding a penny to fund such research. No one wants to grant “hey, I need $1M and 2 years to validate that paper from last year which looks suspicious”.

Until we change how we fund science at the fundamental level, including how we assign grants, it will indeed remain a very hard problem to deal with.


In theory, asking grad students and early career folks to run replications would be a great training tool.

But the problem isn’t just funding, it’s time. Successfully running a replication doesn’t get you a publication to help your career.


Grad students don’t get to publish a thesis on replication. Everyone from the undergraduate research assistant to the tenured professor with research chairs is hyper-focused on “publishing” as many “positive results” on “novel” work as possible.


Publishing a replication could be a prerequisite to getting the degree

The question is: how can universities coordinate to add this requirement and gain status from it?


Prerequisite required by who, and why is that entity motivated to design such a requirement? Universities also want more novel breakthrough papers to boast about, and to outshine other universities in the rankings.

And if one is honest, other researchers also get more excited about new ideas than about a replication that may have failed for a thousand different reasons. The original authors will argue you did something wrong or evaluated in an unfair way, and generally, publicly accusing other researchers of doing bad work won't help your career much. It's a small world; you'd be making enemies of people who will sit on your funding evaluation committees and hiring committees, and it just generally leads to drama.

Also, papers are superseded so fast that people don't even care that a no-longer-state-of-the-art paper may have been wrong. There are 5 newer ones that perform better and nobody uses the old one.

I'm just stating how things actually are, not saying that this is good. But when you say something "should" happen, think about who exactly is motivated to drive such a change.


> Prerequisite required by who, and why is that entity motivated to design such a requirement?

Grant awarding institutions like the NIH and NSF presumably? The NSF has as one of its functions, “to develop and encourage the pursuit of a national policy for the promotion of basic research and education in the sciences”. Encouraging the replication of research as part of graduate degree curricula seems to fall within bounds. And the government’s interest in science isn’t novelty per se, it’s the creation and dissemination of factually correct information that can be useful to its constituents.


The commenter I was replying to wanted it to be a prerequisite for a degree, not for a grant. Grant awarding institutions also have to justify their spending to other parts of government and/or parliament (specifically, politicians). Both politicians and the public want to see breakthrough results that have the potential to cure cancer and whatnot. They want to boast that their funding contributed to winning some big-name prize and so on. You have to think with the mind of specific people in specific positions and what makes them look good, what gets them praise, promotions and friends.

> And the government’s interest in science isn’t novelty per se, it’s the creation and dissemination of factually correct information that can be useful to its constituents.

This sounds very naive.


I think Arxiv and similar could contribute positively by listing replications/falsifications, with credit to the validating authors. That would be enough of an incentive for aspiring researchers to start making a dent.


But that seems almost trivially solved. In software it's common to value independent verification - e.g. code review. Someone who is only focused on writing new code instead of careful testing, refactoring, or peer review is widely viewed as a shitty developer by their peers. Of course there's management to consider and that's where incentives are skewed, but we're talking about a different structure. Why wouldn't the following work?

A single university or even department could make this change: reproduction is the important work, reproduction is what earns a PhD. Or require some split; maybe 20-50% novel work is also expected. Now the incentives are changed. Potentially, this university develops a reputation for reliable research. Others may follow suit.

Presumably, there's a step in this process where money incentivizes the opposite of my suggestion, and I'm not familiar enough with the process to know which step.

Is it the university itself which will be starved of resources if it's not pumping out novel (yet unreproducible) research?


> In software it's common to value independent verification - e.g. code review. Someone who is only focused on writing new code instead of careful testing, refactoring, or peer review is widely viewed as a shitty developer by their peers.

That is good practice

It is rare, not common. Managers and funders pay for features

Unreliable insecure software sells very well, so making reliable secure software is a "waste of money", generally


Actually yes you're 100% right, I phrased that badly


> Presumably, there's a step in this process where money incentivizes the opposite of my suggestion, and I'm not familiar with the process to know which.

> Is it the university itself which will be starved of resources if it's not pumping out novel (yet unreproducible) research?

Researchers apply for grants to fund their research; the university is generally not paying for it and instead receives a cut of the grant money if it is awarded (i.e. the grant covers the costs to the university of providing the facilities to do the research). If a researcher could get funding to reproduce a result, then they could absolutely do it, but that's not what funds are usually being handed out for.


Hmm I see. So the grant makers are more of a problem here. And what are their incentives to fund ~bad research?


Universities are not really motivated to slow down the research careers of their employees, on the contrary. They are very much interested in their employees making novel, highly cited publications and bringing in grants that those publications can lead to.


That... still requires funding. Even if your lab happens to have all the equipment required to replicate, you're paying the grad student for their time spent replicating this paper, and you'll need to buy some supplies: chemicals, animal subjects, shared equipment time, etc.


You may well know this, but I get the sense that it isn’t necessarily common knowledge, so I want to spell it out anyway:

In a lot of cases, the salary for a grad student or tech is small potatoes next to the cost of the consumables they use in their work.

For example, I work for a lab that does a lot of sequencing, and if we’re busy, one tech can use $10k worth of reagents in a week.


We are in the comment section about an AI conference, and up until the last few years, material/hardware costs for computer science research were very cheap compared to other sciences like medicine, biology, etc., where they use bespoke instruments and materials. In CS, until very recently, all you needed was a good consumer PC for each grad student that lasted for many years. Nowadays GPU clusters are needed, but funding is generally not keeping up with that, so even good university labs are way under-resourced on this front.


Enough people will falsify the replication and pocket the money, taking you back to where you were in the first place and poorer for it. The loss of trust is an existential problem for the USA.


Yeah, but doesn't publishing an easily falsifiable paper end a career?


One, it doesn't damage your reputation as much as one would think.

But two, and more importantly, no one is checking.

Tree falls in the forest, no one hears, yadi-yada.


Here's a work from last year which was plagiarized. The rare thing about this work is that it was submitted to ICLR, which opened reviews for both rejected and accepted works.

You'll notice you can click on author names and get links to their various scholar pages, notably DBLP, which makes it easy to see how frequently authors publish with other specific authors.

Some of those authors have very high citation counts... in the thousands, with 3 having over 5k each (one with over 18k).

https://openreview.net/forum?id=cIKQp84vqN


> no one is checking.

I think this is a big part of it. There is no incentive to do it, even when the study can be reproduced.


Not in most fields, unless misconduct is evident. (And what constitutes "misconduct" is cultural: if you have enough influence in a community, you can exert that influence on exactly where that definitional border lies.) Being wrong is not, and should not be, a career-ending move.


If we are aiming for quality, then being wrong absolutely should be. I would argue that is how it works in real life anyway. What we quibble over is what is the appropriate cutoff.


There's a big gulf between being wrong because you or a collaborator missed an uncontrolled confounding factor, and falsifying or altering results. Science accepts that people sometimes make mistakes in their work because a) anyone can be expected to miss something eventually and b) a lot of work is done by people in training, in labs you're not directly in control of (collaborators). They already aim for quality, and if you're consistently shown to be sloppy or incorrect when people try to use your work in their own, your reputation suffers.

The final bit is a thing I think most people miss when they think about replication. A lot of papers don't get replicated directly, but their measurements do when other researchers try to use that data to perform their own experiments, at least in the more physical sciences; this gets tougher the more human-centric the research is. You can't fake or be wrong for long when you're writing papers about the properties of compounds and molecules. Someone is going to try to base some new idea on your data and find out you're wrong when their experiment doesn't work (or spend months trying to figure out what's wrong and finally double-check the original data).


In fields like psychology, though, you can be wrong for decades. If your result is foundational enough, and other people have "replicated" it, then most researchers will toss out contradictory evidence as "guess those people were an unrepresentative sample". This can be extremely harmful when, for instance, the prevailing view is "this demographic are just perverts" or "most humans are selfish thieves at heart, held back by perceived social consensus" – both examples where researcher misconduct elevated baseless speculation to the position of "prevailing understanding", which led to bad policy, which had devastating impacts on people's lives.

(People are better about this in psychology, now: schoolchildren are taught about some of the more egregious cases, even before university, and individual researchers are much more willing to take a sceptical view of certain suspect classes of "prevailing understanding". The fact that even I, a non-psychologist, know about this, is good news. But what of the fields whose practitioners don't know they have this problem?)


Yeah like I said the soft validation by subsequent papers is more true in more baseline physical sciences because it involves fewer uncontrollable variables. That's why I mentioned 'hard' sciences in my post, messy humans are messy and make science waaay harder.


Well, this is why the funniest and smartest way people commit fraud is faking studies that corroborate very careful collaborators' findings (who are collaborating with many people, to make sure their findings are replicated). That way, they get co-authorship on papers that check out, and nobody looks close enough to realize that they actually didn't do those studies and just photoshopped the figures to save time and money. Eliezer Masliah, btw. Ironically only works if you can be sure your collaborators are honest scientists, lol.


The vast majority of papers are so insignificant that nobody bothers to try to use, and thereby replicate, them.


But the thing is… nobody is doing the replication to falsify it. And if they did, it wouldn’t be published, because it’s a null result.


Not really, since nobody (for values of) ends up actually falsifying it, and if they do, it's years down the line.


There is actually a ton of replication going on at any given moment, usually because we work off of each other's work, whether those others are internal or external. But, reporting anything basically destroys your career in the same way saying something about Weinstein before everyone's doing it does. So, most of us just default to having a mental list of people and circles we avoid as sketchy and deal with it the way women deal with creepy dudes in music scenes, and sometimes pay the troll toll. IMO, this is actually one of the reasons for recent increases in silo-ing, not just stuff being way more complicated recently; if you switch fields, you have to learn this stuff and pay your troll tolls all over again. Anyway, I have discovered or witnessed serious replication problems four times --

(1) An experiment I was setting up using the same method both on a protein previously analyzed by the lab as a control and some new ones yielded consistently "wonky" results (read: need different method, as additional interactions are implied that make standard method inappropriate) in both. I wasn't even in graduate school yet and was assumed to simply be doing shoddy work, after all, the previous work was done by a graduate student who is now faculty at Harvard, so clearly someone better trained and more capable. Well, I finally went through all of his poorly marked lab notebooks and got all of his raw data... his data had the same "wonkiness," as mine, he just presumably wanted to stick to that method and "fixed" it with extreme cherry-picking and selective reporting. Did the PI whose lab I was in publish a retraction or correction? No, it would be too embarrassing to everyone involved, so the bad numbers and data live on.

(2) A model or, let's say "computational method," was calibrated on a relatively small, incomplete, and partially hypothetical data-set maybe 15 years ago, but, well, that was what people had. There are many other models that do a similar task, by the way, no reason to use this one... except this one was produced by the lab I was in at the time. I was told to use the results of this one into something I was working on and instead, when reevaluating it on the much larger data-set we have now, found it worked no better than chance. Any correction or mention of this outside the lab? No, and even in the lab, the PI reacted extremely poorly and I was forced to run numerous additional experiments which all showed the same thing, that there was basically no context this model was useful. I found a different method worked better and subsequently, had my former advisor "forget" (for the second time) to write and submit his portion of a fellowship he previously told me to apply to. This model is still tweaked in still useless ways and trotted out in front of the national body that funds a "core" grant that the PI basically uses as a slush fund, as sign of the "core's" "computational abilities." One of the many reasons I ended up switching labs. PI is a NAS member, by the way, and also auto-rejects certain PIs from papers and grants because "he just doesn't like their research" (i.e. they pissed him off in some arbitrary way), also flew out a member of the Swedish RAS and helped them get an American appointment seemingly in exchange for winning a sub-Nobel prize for research... they basically had nothing to do with, also used to basically use various members as free labor on super random stuff to faculty who approved his grants, so you know the type.

(3) Well, here's a fun one with real stakes. Amyloid-β oligomers, field already rife with fraud. A lab that supposedly has real ones kept "purifying" them for the lab involved in 2, only for the vial to come basically destroyed. This happened multiple times, leading them to blame the lab, then shipping. Okay, whatever. They send raw material, tell people to follow a protocol carefully to make new ones. Various different people try, including people who are very, very careful with such methods and can make everything else. Nobody can make them. The answer is "well, you guys must suck at making them." Can anyone else get the protocol right? Well, not really... But, admittedly, someone did once get a different but similar protocol to work only under the influence of a strong magnetic field, so maybe there's something weird going on in their building that they actually don't know about and maybe they're being truthful. But, alternatively, they're coincidentally the only lab in the world that can make super special sauce, and everybody else is just a shitty scientist. Does anyone really dig around? No, why would a PI doing what the PI does in 2 want to make an unnecessary enemy of someone just as powerful and potentially shitty? Predators don't like fighting.

(4) Another one that someone just couldn't replicate at all, poured four years into it, origin was a big lab. Same vibe as third case, "you guys must just suck at doing this," then "well, I can't get in contact with the graduate student who wrote the paper, they're now in consulting, and I can't find their data either." No retraction or public comment, too big of a name to complain about except maybe on PubPeer. Wasted an entire R21.


Grad students have this weird habit of eating food and renting places to live, though, so that's also money


Funding is definitely a problem, but frankly reproduction is common. If you build off someone else's work (as is the norm) you need to reproduce first.

But without replication being impactful to your career, and with the pressure to quickly and constantly push new work, a failure to reproduce is generally considered a reason to move on and tackle a different domain. It takes longer to trace the failure, and the bar is higher to counter an existing work. It's much more likely you've made a subtle mistake. It's much more likely the other work had a subtle success. It's much more likely the other work simply wasn't written such that it could be sufficiently reproduced.

I speak from experience too. I still remember in grad school I was failing to reproduce a work that was the main competitor to the work I had done (I needed to create comparisons). I emailed the author and got no response. Luckily my advisor knew the author's advisor and we got a meeting set up and I got the code. It didn't do what was claimed in the paper and the code structure wasn't what was described either. The result? My work didn't get published and we moved on. The other work was from a top 10 school and the choice was to burn a bridge and put a black mark on my reputation (from someone with far more merit and prestige) or move on.

That type of thing won't change in a reproduction system alone; it needs an open system, and an open reproduction system as well. Mistakes are common and we shouldn't punish them. The only way to solve these issues is openness.


> If you build off someone else's work (as is the norm) you need to reproduce first.

Not if the result you're building off of is a model, you can just assume it


Everything is a model. The word is just so vague. Did you use math? Great, mathematical model. Program? That's a model.

You can assume a model is true but you know what they say about assumptions


I often think we should move from peer review as "certification" to peer review as "triage", with replication determining how much trust and downstream weight a result earns over time.


grants should come with money and requirement for independent reproduction

academia is too fragmented and extremely inefficient


Partially. There's also the issue that some sciences, like biology, are a lot messier & less predictable than people like to believe.


yes, this should be built-in to grants and publishing

of course the problem is that academia likes to assert its autonomy (and grant orgs are staffed by academia largely)


If this is just symbolism, then why are 30YR Treasury yields at yearly highs despite Fed rate cuts throughout the year?


While the general vibe of Sam and OpenAI losing steam rings true for me, I genuinely disagree with all the “hate” for GPT-5. It was indeed not a crazy breakthrough, but, honestly, the 5.2 model on the extended-thinking path in the Pro version is utterly scary to me. I can give it some complicated multi-tier question which requires modeling, abstract thinking, computation, and rationality; it can walk away for 45 minutes, produce a CoT the length of the Empire State Building, and give me a fairly good, well-written response. I can’t speak for all industries, but 5.2-Pro-extended is really scary when it comes to math and reasoning. Here I am a bit on Sam’s side when he said that most people severely underutilize modern AI by using it the same way as in 2023. The capability of recent models seems to me way beyond current typical use cases.


IME it still can’t answer “is the state of Oregon entirely north of New York City” correctly. It’s possible they hard coded it since I posted about it online. How do you reconcile its inability to answer this simple question with your rather optimistic view of its capabilities?

Edit: lmao at downvotes over discussion…well, I use burner accounts. You have no power here.


Does this mean that if the internal DB of those internal numbers is ever leaked, it will give anyone the power to wake up all Tesla cars in a few clicks?


Certainly sounds like it.


Oh boy, year after year, after everything we've gone through so far, there are people in the world who believe that Big Tech invests billions of dollars into AI (including data centers and infra) with only socially good intent, and that we should not touch anything Big Tech does at all…


Idk, subjectively it feels like the end of the Apple we all know.

Clear stagnation in terms of innovation; no new fundamental interface (no smart glasses, AVP looks too expensive and too narrow); the abandoned car project; Apple's AI is nowhere close to others'; declining sales; the Apple Watch sales ban; the Epic Games court case; the DMA. After taking a leading position in the market, Apple clearly fails to offer anything new and instead invests too many resources into protecting its market position.

I wonder what will replace Apple. Will we see a slow stagnation here, or is a radically new interface for human-to-technology interaction around the corner?


Apple has consistently made laptops and development environments that creative types enjoy and admire. It’s too bad our economic system is set up to only care about infinite growth and monopolization at Apple’s size. It’s too bad just “being a great maker of great tools for great people” isn’t possible when you’re publicly traded.


> AVP looks too expensive and too narrow

I bought the first generation and agree with these points, but boy is it really cool tech. I thought it’d be super gimmicky and dumb before I got it, I changed my mind within a couple of hours after using them.

I ended up sending mine back, but only because I bought them to develop visionOS apps and simply don’t have the time after taking on a new client. I really hope they make a second generation and continue improving on it; I already know I’m going to buy it.

(I know it’s ironic hoping they’ll make more of their fledgling platform after I just sent mine back and potentially helped kill it, but I really loved it.)


Maybe they should make cars.


> Idk, subjectively it feels like the end of the Apple we all know.

You do rationally realize the cognitive dissonance in your subjective feeling, right?

A company being accused of abusing its market position, with competitors trying to break into its platform, isn't an indicator that the company is close to its end at all.

If it were really the end of Apple, you'd see something like "Epic withdraws legal claims against Apple due to dwindling interest in developing for iOS platforms".


Whenever I read about the Planet Nine search, I have a very naive (as a non-physicist), childish question: if we detected an anomaly in our Solar System less than a decade ago and can’t find anything visual that explains it, assuming there is a giant planet/piece of ice floating somewhere at the edge of the Solar System, how are we sure that there are not a lot of such objects, and that real space is not as “empty” as we think it is? Simply put, what is the probability that, say, the space between us and Alpha Centauri is not filled with objects like this? Invisible, leaving a tiny gravitational trace at the edge of our ability to detect it?


The simple answer is that there are actually very large numbers of objects in the space between stars that aren't detectable. Including, certainly, planet-sized objects that formed in stellar environments, escaped, and now fly freely in between.

https://en.wikipedia.org/wiki/Rogue_planet


Indeed - and the Oort cloud itself extends to about 1.5 light-years, which is a substantial fraction of the ~ 4 light-year distance to our nearest neighboring stars.


That's even more substantial if the nearest neighboring star also has a comparable Oort cloud. In that case, a mere 1 ly of interstellar space would separate the two systems.


The so-called rogue planets.

There's a nice video from Kurzgesagt on the topic: https://youtu.be/M7CkdB5z9PY?si=0BwFwotoHi8PxL1f


If there is so much of this material, not necessarily planets, could it partially account for dark matter?


Very good question. As a non-astronomer, my guess is that we don’t observe a lot of transient blackouts of, for example, the Milky Way, which would happen if there were a lot of these around.

That gives you an upper bound of how many objects like that exist.


Microlensing events! (to be slightly more precise)

https://en.wikipedia.org/wiki/Rogue_planet#Microlensing

- "They found 474 incidents of microlensing, ten of which were brief enough to be planets of around Jupiter's size with no associated star in the immediate vicinity. The researchers estimated from their observations that there are nearly two Jupiter-mass rogue planets for every star in the Milky Way.[26][27][28] One study suggested a much larger number, up to 100,000 times more rogue planets than stars in the Milky Way, though this study encompassed hypothetical objects much smaller than Jupiter.[29]"
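Taking the quoted microlensing estimate at face value, the implied totals are easy back-of-envelope arithmetic. A sketch, where the Milky Way's star count (100-400 billion, a commonly cited range) is my assumed input, not a figure from the quote:

```python
# Rough count of Jupiter-mass rogue planets implied by the quoted
# microlensing estimate of ~2 per star.
ROGUES_PER_STAR = 2

def rogue_planet_count(n_stars: float) -> float:
    """Estimated number of Jupiter-mass rogue planets for a given star count."""
    return ROGUES_PER_STAR * n_stars

low = rogue_planet_count(1e11)   # low-end star count: ~200 billion rogues
high = rogue_planet_count(4e11)  # high-end star count: ~800 billion rogues
```

Even the low end puts hundreds of billions of planet-mass bodies adrift in the galaxy, which is why the "space isn't as empty as it looks" intuition above is basically right.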


And "MACHOs":

https://en.wikipedia.org/wiki/Massive_compact_halo_object

In the 1990s I went to a talk which suggested that "WIMPs" (Weakly Interacting Massive Particles aka supersymmetric particles that would give rise to dark matter) was an acronym for "Well It Might be Physics" while MACHOs was "Maybe Astrophysics Can Help Out".

The upper bound on MACHOs has looong since eliminated them as the explanation for all of dark matter though...


...or an upper bound of how big they might be. For all we know, the galaxy might be full of really dark bowling ball sized objects. We'd have no way of observing them, right?


What if dark matter is primordial bowling balls!


So these are called "snowballs" and are excluded as dark matter candidates because we would then see a much larger number of extra-solar asteroids/meteorites. Cf. e.g. section 2 of the reference below.

https://web.archive.org/web/20180723032406/http://iopscience...


I love how every time someone comes up with some random notion, a scientist can point at a chart and say “we’ve thought of that already and excluded it with experiment CoolName.”


Really dark bowling balls would still exhibit blackbody radiation
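For a sense of scale, here is a sketch of that blackbody argument. The bowling-ball radius (~0.108 m) and the assumption that the object has cooled to the ~2.725 K cosmic microwave background temperature are mine, not from the thread:

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.897771955e-3  # Wien displacement constant, m K

def blackbody_power(radius_m: float, temp_k: float) -> float:
    """Total power radiated by a spherical blackbody (Stefan-Boltzmann law)."""
    area = 4 * math.pi * radius_m**2
    return SIGMA * area * temp_k**4

def peak_wavelength(temp_k: float) -> float:
    """Wavelength of peak emission (Wien's displacement law)."""
    return WIEN_B / temp_k

power = blackbody_power(0.108, 2.725)  # ~0.5 microwatts total
lam = peak_wavelength(2.725)           # ~1.1 mm, i.e. in the microwave band
```

Half a microwatt peaking in the microwave band is spectrally indistinguishable from the CMB itself, which is why the emission exists in principle but offers no practical way to spot such objects.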


The gravitational effect of a lot of spread out mass outside the Oort cloud (or indeed, a lot more mass in the Oort cloud) would have a different effect on the orbits of the planets than a single planet that we had not found yet.

So for some known configuration of orbits of the known planets, there are a limited number of solutions (in terms of mass, inclination, eccentricity, etc.) that you could add to the gravitational interactions of the solar system and still have the current orbits we observe. That gives a reasonable guess about what (another planet) and where (in the sky) to look for to explain anomalies.

But with distant objects that don't emit their own light, even having a good guess of a bunch of the orbital parameters doesn't mean it is easy to find, because you don't have much sunlight reflecting off of it, and the chances it will occlude something else brighter (like a background star) are needle-in-a-haystack level.

Even _if_ you knew the _exact_ orbit you were looking for, there are 360 degrees of sky to search for a tiny, dark object, because you don't necessarily know where in the orbit the object currently is.


> how we are sure that there are not a lot of such objects and real space is not as “empty” as we think it is?

There could be lots of such objects without meaningfully denting how empty we think space is.


This is a fundamentally different problem for a fundamentally different audience.

If we take the privacy issue, it can be divided into 3 segments:

* Privacy of user data. The basic level. When you use Google or Apple, they collect data. Even if you minimize all settings, data is still collected. This data is used to train models, and models are used to sell ads, target you, or do anything else you have no clue about (like reselling it to hundreds of “partners”).

* Privacy against undesired identification. The next layer of privacy. When you want to have some personal life online without sharing much about yourself. Like Reddit, anonymous forums, or Telegram (to some degree).

* Privacy against governments. The ultimate boss of privacy. When you want to hide your identity from every government in the world.

Signal was perfect at the first layer, strong but not perfect at the 3rd layer (e2e encryption; no data collection, so nothing to share with governments who seek data; good privacy settings; it always tells you if your peer logged in on a new device, to protect against cases where a government works with telecom companies and uses an SMS password to make a new login), and almost non-present at the 2nd, because they have no public features except group chats where you share your number.

Now, in one move, they close the gap at the 2nd layer (you can hide your phone number and stay fully anonymous) and strengthen their position at the 3rd layer, leaving only the last piece open: the government will still know that you have some Signal account.

As for me, this setup solves 99.999% of cases for regular people in democratic and semi-democratic countries and addresses the most fundamental one: privacy of data and actions online.

Yes, it is not perfect, but the barrier for a government to spy on me is so high that I can reasonably believe that in most cases you should never be worried about being spied on, especially if you live somewhere not named Iran or Russia.

The only scenario, in my perspective, where you'd want a login without a phone number (with all the sacrifices: spam accounts, quality of peers, and the usual troll fiesta in such places) is when you want to do something you never want traced back to you in your current country.

But in this case, IMO, Signal is the last worry you usually have on your mind and there are a lot of specialized services and protocols to address your need.


Layers 1 and 2, and in part 3, were already fixed with the Signal FOSS fork back then, but Moxie and his army of lawyers decided to send out multiple cease-and-desist letters against those projects. Which, in return, makes Signal not open source, no matter what the claims are. If they don't hold up their end of the license and argue with their proprietary (and closed-to-use) infrastructure, then I'd argue they are no better than Telegram or WhatsApp. Signal's backup problem is another story, which might blow up my comment too much.

Because of the points you mentioned, I would never recommend Signal, and would rather point to Briar as a messenger and group/broadcast platform. Currently, it's still a little painful to use; QR codes, for example, would already help a lot with easing the connection and discovery/handshake process.

But it has huge potential as both a messenger and a federated and decentralized platform.


I just don't want my metadata (contact graph) hoovered up because I sent an (encrypted) message to someone who may be an oversharer on FB, etc.

I use Signal because I am a "nothing to hide, but I like to own my privacy as much as possible" type of online person.

Signal == more peace of mind just generally in this online world we have.


> no data collection to share nothing with governments who seek for data,

That isn't true anymore and hasn't been for years. Signal collects your data and keeps it forever in the cloud.


Citation needed. Care to elaborate on this?


Not exactly that, but it looks relevant unless you trust Amazon: https://news.ycombinator.com/item?id=39414322



Signal is not a VPN. How is this relevant? Or did you link to the wrong comment?


Yeah, not sure how that happened, but that link wasn't exactly what I was going for. If you scroll down far enough from there you'd find the parts I tried to point you to, but try this link instead: https://news.ycombinator.com/threads?id=autoexec&next=394457...

Just to be safe here's a copy/paste with the details:

This has been true for many years now. At the time it caused a major uproar among the userbase (myself included) whose concerns were almost entirely ignored. Their misleading communication at the time caused a lot of confusion, but if you didn't know that Signal was collecting this data that should tell you everything you need to know about how trustworthy they are.

Here's some reading from the time of the change:

https://community.signalusers.org/t/proper-secure-value-secu...

https://community.signalusers.org/t/dont-want-pin-dont-want-...

https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...

https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

Note that the "solution" of disabling pins mentioned at the end of that last article was later shown to not prevent the collection and storage of user data. It was just giving users a false sense of security. To this day there is no way to opt out of the data collection.

My personal feeling is that Signal is compromised and the fact that the very first sentence of their privacy policy is a lie and they refuse to update it to detail their new data collection is a big fat dead canary warning people to find a new solution for secured communication. Other very questionable Signal moves that make me wonder if it wasn't an effort to drive people away from the platform as loudly as they were allowed to include the killing off of one of the most popular features (the ability to get both secured messages and insecure SMS/MMS in the same app) and the introduction of weird crypto shit nobody was asking for.


> If we take privacy issue, it can be divided into 3 segments:

This sounds like a bunch of bullshit.


Sounds like your username.


For example, now you can’t restrict who can send you a message unless you have premium. Also, they added a “feature” whereby premium users can bypass non-premium users’ privacy setting for “last seen and online”, and TG will reveal that info regardless of your choice unless you are premium too.


You're significantly misunderstanding the changes.

> now you can’t restrict who can send you a message unless you have a premium.

And before that you just weren't able to restrict that at all; there was no such feature. They didn't remove this feature for free users - it never existed. They have only just added it, and only for paid users.

> premium users can bypass non-premium users privacy setting “last seen and online”

That is absolutely not what the feature is. If you hide YOUR OWN last seen time, you can't see the last seen times of other users, even when theirs are public. Now, premium users will be able to see the public last seen times of other people even if they hide their own. But they obviously still can't see the last seen times of people who set it to private; that would've been very dumb.


Thanks for the clarification on last seen, I certainly misread it. About messages: hm, I was sure the restriction existed before, but maybe my brain is just lagging again.

As someone who created and for some time moderated a fairly popular chat (200+ people) for anti-war Russians, I have a very long and complicated history with this service, and a lot of grey-zone stories where it is hard to tell whether something was a user's mistake or a leak from the service.

Hence my expectations are a little low and I overreact to their recent changes.


I have three Telegram channels with a few hundred subscribers each, and I also use the service daily, as I'm Russian as well.

I generally agree with you that Durov makes a lot of incredibly stupid decisions. I think pretty much everyone in the "Telegram community" (eg. channel administrators, bot/client developers, etc.) would agree that the changes Telegram is introducing are often bad.

The issue, though, is that there isn't any alternative right now - Telegram is the best messenger out there in terms of general usage. So while I do hate what they're doing sometimes, I still use the product and even pay for Telegram Premium. It's bad enough to be mildly annoying, but not bad enough to actually make people leave the platform.

Edit: just as I was writing this, Telegram introduced a new feature. I'm not sure if I love it or hate it to be honest, it's a smart way for them to save money, but it is pretty weird: https://t.me/tginfo/3942


If you consider Telegram as a product to be a logical continuation of the VK message system, then all of these "features" existed.

Restricting incoming messages existed (cloned from Facebook, as usual).

Restricting of "last seen and online" existed in third-party clients. Later on VK started to actively destroy this functionality, by moving manual "is online" management from designated API into all data-fetching APIs.

Not to mention that VK and Telegram are now actively fighting third-party clients. In what world would they not fight Ninjagram/AyuGram/Plus Messenger/other forks, which let you add multiple accounts, hide your online/read status (to some extent), show message editing history, and so on?


If you consider technology to be a logical continuation of earlier technology, then all features existed.


> And before that you just weren't able to restrict that at all

This is a really basic security feature, though, that every single platform should support. If Telegram didn't support messaging restrictions before, that doesn't mean they're not currently gating a basic privacy/safety feature behind a paywall. It just means they should be embarrassed that they used to be doing something even worse, i.e. not offering a basic privacy/safety feature at all.

Correct that this would not technically count as removing a feature, but I feel like that's a distinction without a difference. I'm not coming out of this explanation feeling more charitable about Telegram's security or its willingness to gate off security features. Putting basic blocklists behind a paywall is a bad look for a company; that is not a company I trust not to start degrading security for free users.


How is message restriction a "basic privacy/safety" feature? It's at most a basic "anti-annoyance" feature; I'm not sure what security you gain from preventing everyone from messaging you. The ability to block users was always there, and it is still there for free.


> It's at most a basic "anti-annoyance" feature, I'm not sure what security you gain from preventing everyone from messaging you.

This could be a long conversation. The short version is there are plenty of articles online by marginalized groups talking about the consequences of having no ability to block arbitrary groups from harassing them online. If someone is calling that "just an annoyance" they've likely never been the target of an extended public harassment campaign.

A slightly longer answer is that the consequences to privacy and security are in a practical sense -- in the sense that someone coming into my house is a violation of my security and privacy. Privacy is not just about hiding information, it's also about why we hide information. It's about the ability to be private; to not be forced to constantly listen to a bunch of people shout at you. Similarly, security exists for a reason, we have security in our homes in the sense that people can't just walk into them and start yelling at us and harassing us. And DMs should be thought of as analogous.

Your DMs are not secure if you have no way to turn them off or restrict them.

> The ability to block users was always there and it still there for free.

If you recognize that is important to privacy and security to be able to block individual users, it's not too hard to recognize that the requirement to individually block users leaves a huge gaping hole in security for a network that supports open registrations.

I use disposable email addresses rather than just blocking individual spammers in my email client. The reason is that there are a near-infinite number of spammers, and blocking them one by one is ineffective. Being able to turn off a leaked email address is much more valuable to me. It's something that actually cuts down on spam.

And the same is true on social media -- being able to go private and turn off messages or restrict messages to certain subgroups is critically important for people who are stuck in the middle of public harassment campaigns.

----

Regardless, the lack of a feature that is pretty much standard across most other platforms, and that is widely recognized as a safety feature, doesn't make me feel better about Telegram's willingness to gate these kinds of features behind paywalls.

You're saying that the ability to block users is free, but there is no bright line between blocking users and setting general messaging restrictions. That is the same category of safety feature. There's no reason to believe that Telegram wouldn't make blocking users into a paid feature in the future, especially since it has demonstrated that blocking/moderation/lockdown features are something it is willing to monetize.

