Hacker News | fwn's comments

Asking an LLM to reveal information about its own inner workings is always a dubious method.

However, if you export your Grok account data via the Settings menu, the export includes a risk score that is not exposed in the browser UI. You can try it yourself.
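To illustrate how one might look for such fields, here is a rough Python sketch that scans an exported JSON tree for any risk-related keys. The filename, file format, and key names are my assumptions for illustration, not confirmed details of Grok's export:

```python
import json

def find_risk_fields(node, path=""):
    """Recursively collect (path, value) pairs for keys containing 'risk'."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if "risk" in key.lower():
                hits.append((child, value))
            hits.extend(find_risk_fields(value, child))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_risk_fields(value, f"{path}[{i}]"))
    return hits

# Hypothetical usage; the filename and structure are assumptions:
# with open("grok_export.json") as f:
#     print(find_risk_fields(json.load(f)))
```

This just walks the tree generically, so it works regardless of where in the export such a score actually lives.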

I wonder whether this risk score is based only on the content you produce, or whether it is also informed by other factors, such as VPN or app usage.


Your comment specifically, I believe, falls short of the ideal of becoming more thoughtful and substantive, not less, as a topic grows more divisive.

We should probably avoid posting shallow dismissals and remember that a good critical comment teaches us something.

Additionally, since your comment leaves readers wondering which parts of the divided comment section you deem "worthy of HN" and which you do not, I cannot help but read this ambiguity as a method of provoking reactions. Another word for that method would be "baiting", I think.


reMarkable still works really hard to force you into a subscription, or at least into using their very bad, non-zero-knowledge cloud service. ... It is not obvious how any of this would benefit the user.

I own a reMarkable 2, but I load content onto it via RCU [1]. I will absolutely not buy into yet another opaque cloud offering in 2025.

edit: The project you linked looks interesting, though!

[1] https://www.davisr.me/projects/rcu/


I don't know about their methodology, but as a German, I get the feeling that the project is rather vague.

For example, I would have liked to see more specifics on how they define censorship and its scope.

Mainstream discourse in Germany is very conservative when it comes to defining censorship. It would hardly name de-ranking, deplatforming, intimidation, exclusion from the financial system, or even full control of information by private organizations as censorship. Government-enacted media bans, such as the EU Commission banning Russian state media, are rarely viewed as censorship by Germans. ( https://www.politico.eu/article/russia-rt-sputnik-illegal-eu... )

I'm not trying to make a value statement in either direction, but if your communication product addresses a market or seeks to tie into public discourse, it should be in touch with the concepts that discourse actually uses.


> Getting scars on purpose is a quite questionable decision.

Interesting. Why?

Isn't it a common and longstanding cultural practice, even among indigenous peoples? Intuitively, I'd say body modification is based on the desire to shape one's own body, something we usually embrace in fitness culture and medicine, for example.

I don't have any tattoos or scars, but I can't think of anything that would make them questionable.

Perhaps some of the objection arises from a confusion between body modification and self-harm?


Just because something has been done for a long time doesn't mean it's good. We also shouldn't confuse self-mutilation with healthy activities like exercising simply because both "shape one's own body".


> Just because something has been done for a long time doesn't mean it's good.

This is true, although it is a good start, right? If a cultural practice has survived for many generations, this alone already indicates that the practice might be compatible with human society, morals, sustainability, etc.

> We also shouldn't confuse self-mutilation with healthy activities like exercising simply because both "shape one's own body".

True! We should indeed not confuse self-mutilation with healthy activities just because they share some similarities.

But would you classify scars or tattoos motivated by aesthetics as self-mutilation? What about piercings, such as holes for earrings, or laser hair removal?

I believe that is an interesting and unusual position. Do you have an argument in favor of your (so far implicit) take?


> If a cultural practice has survived for many generations, this alone already indicates that the practice might be compatible with human society, morals, sustainability

Does it? This sounds like a disingenuous take that doesn't even pretend to bother with reality.

> But would you classify scars or tattoos motivated by aesthetics as self-mutilation?

Disingenuous question - the person you're replying to called you out for your very obvious conflation of body mutilation with fitness/medicine.

Are you an AI btw?


Tattoos are self-mutilation in the same way that taxes are theft. This is the worst argument in the world. [1]

[1] https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncen...


I was talking about scars. I wasn't the one who said tattoos are scars.


> Had a couple of drive-by downvotes... Is it that stupid an opinion?

While I do not agree with your take, FWIW I found your comment substantive and constructive.

You seem to be making two points that are both controversial:

The first is that generative AI makes the availability of lyrics more problematic, given new kinds of reuse and transformation it enables. The second is that AI companies owe something (legally or morally) to lyric rights holders, and that it is better to have some mechanism for compensation, even if the details are not ideal.

I personally do not believe that AI training is meaningfully different from traditional data analysis, which has long been accepted and rarely problematized.

While I understand that reproducing original lyrics raises copyright issues, this should only be a concern in terms of reproduction, not analysis. Example: Even if you do no data analysis at all and your random character generator publishes the lyrics of a famous Beatles song (or other forbidden numbers) by sheer coincidence, it would still be a copyright issue.

I also do not believe in selective compensation schemes driven by legal events. If a legitimate mechanism for rights holders cannot be constructed in general, it is poor policy craftsmanship to privilege the music industry specifically.

Doing so relieves the pressure to find a universal solution once powerful stakeholders are satisfied. While this might be seen as setting a useful precedent by small-scale creators, I doubt it will help them.


While I like the general idea of Ecosia (in that it's a less harmful ad-funded service) they do share user IP addresses with their search partners (Google and Microsoft).

> We, and our search partners, collect your IP address in order to protect our service against spammers trying to conduct fraud or to up-rank specific search results.

Source: https://www.ecosia.org/privacy

This shouldn't necessarily stop anyone; I just think it should be mentioned whenever Ecosia is suggested as an alternative to DuckDuckGo. You probably wouldn't switch from a search engine that proxies all favicons to avoid tracking to one that shares your identity with Google and Microsoft for tree-money.


You might be able to find it by searching for the 'Welcome to Gmail' email in your archives.

AFAIK it is also returned when you download your Google data archive.


"Jail", on the other hand, is a great metaphor for a walled garden.


It's also easy to spot ChatGPT's distinct writing style. Take a look at the other comments from the HN account Lumoscore:

> ... It doesnt fully solves the trust issue — it just shifts it.

It doesn't [context-based goal word] - it [adjacent, slightly similar context-based goal word] it. That's an overused AI pattern.
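As a rough illustration only (my own sketch, and certainly not a reliable AI detector), that contrast pattern can be approximated with a regex:

```python
import re

# Approximates the "It doesn't X - it (just) Ys it" contrast pattern
# described above. Illustrative only; real detection needs far more nuance.
pattern = re.compile(
    r"\b(?:doesn'?t|isn'?t|won'?t)\s+\w+[^.\n]*?\s*[-—–]+\s*it\s+(?:just\s+)?\w+",
    re.IGNORECASE,
)

print(bool(pattern.search(
    "It doesnt fully solves the trust issue — it just shifts it."
)))  # True
```

Of course a human can write this construction too; it is the density of such tics across an account's comments that is telling, not any single match.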

All of the Lumoscore comments seem to have been written by AI.

edit: And all of the account's submissions appear to link to a vague crypto service named after the user, which is filled with slop content as well.

