I'm worried that dark patterns still aren't widely recognized as a practice that harms users.
A couple of weeks ago I saw someone on Twitter defending their own product design as "transparent, nothing hidden" - a "$0 now, then $15/month in 14 days" pitch where all the text after "$0" was small and grey. I don't think that builds trust between the product and its users, so it hardly seems like a good thing.
Without knowing the product in this case, the two scenarios can be quite different. If one is billed $15 now, refundable within 14 days, it's more likely than not that they would forget to cancel and therefore lose at least a month of subscription. If it's $0, then they wouldn't be billed for two weeks, and presumably have a chance to reconsider.
Don't let reddit idiots gaslight you. There are a lot of seriously deranged people running around on reddit who eat up mainstream corporate propaganda and lies without any questioning at all. In some subreddits they are the majority, and paying attention to what those clowns think can only be a negative for your own health.
Vercel has a fairly generous free quota and decidedly non-negligible paid pricing - I think people still remember https://service-markup.vercel.app/ .
As for the crawl problem, I want to wait and see whether robots.txt proves to be enough to stop GenAI bots from crawling - though I "confidently believe" these GenAI companies are far too well-behaved to ignore robots.txt.
Here's my experience with AI bots. This is my robots.txt:
User-agent: *
Crawl-Delay: 20
Clear enough. Google, Bing and others respect the limits, and while about half my traffic is bots, they never DoS the site.
When a very well-known AI bot crawled my site in August, it set off everything: fail2ban put its IPs temporarily in jail multiple times, the per-IP nginx request limit was serving 429 and 444 to more than half of its requests (but it kept hammering the same URLs), and some human users contacted me complaining that the site was returning 503. I had to block the bot's IPs at the firewall. They ignored (if they even read) the robots.txt.
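For reference, a per-IP limit like the one described above can be set up with nginx's limit_req module - a minimal sketch, where the zone name, rate, and burst values are illustrative, not a recommendation:

```nginx
# Track clients by address in 10 MB of shared memory; allow 2 requests/second per IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

server {
    listen 80;

    location / {
        # Permit short bursts, reject the excess immediately (no queueing).
        limit_req zone=perip burst=10 nodelay;
        # Return 429 Too Many Requests instead of nginx's default 503.
        limit_req_status 429;
    }
}
```

This throttles well-behaved-ish clients, but as the story shows, a bot that ignores the error codes still burns your bandwidth, which is why firewall-level blocking ends up being the fallback.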
The reveal.js slides themselves probably aren't the best format for readers. The reveal.js project actually provides a PDF export feature, which might be more helpful.
Anyway, it's an asahilina.net page, not a cve.mitre.org page. That domain is for the Virtual YouTuber Lina-chan, so I would not expect it to be the most developer-friendly.
As a VTuber follower, I do really like the style :D
1. Anything beyond your control can cause problems. For the security part: add SRI to whatever you care about, please.
2. Could we, in 2022, finally get rid of the troublesome Referer header?
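On point 1, a minimal sketch of pinning a third-party asset with Subresource Integrity (the file name and script content here are made up, and it assumes openssl is available):

```shell
# Stand-in for the real third-party script (hypothetical file name).
printf 'console.log("hi");' > lib.js

# Generate the SRI hash for a local copy of the asset.
hash=$(openssl dgst -sha384 -binary lib.js | openssl base64 -A)
echo "integrity=\"sha384-$hash\""

# Put that attribute on the <script>/<link> tag, together with
# crossorigin="anonymous"; the browser then refuses to run the file
# if its content ever changes out from under you.
```
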
Have tried Kagi for a while as a beta user. It generally works fine, but every time I try to search in Private Browsing - and Kagi, as a subscription service, asks me to log in - I'm reminded that every search in my history can be linked back to an individual.
Also a beta user. I wonder if in the final version you'll be able to stay pseudonymous - pay with crypto, no email, etc. (kinda like Mullvad).
Not defending them, and I also dislike this (but can't think of a payment scheme that avoids it). I do find their privacy policy quite clear and transparent - https://kagi.com/privacy
And let's not kid ourselves - I would guess this is still more privacy-forward than a private session on Google. I'd be surprised if Google weren't using every available signal to fingerprint you and link that session back to some other login.
Unfortunately, Twitter still remains a major information source.
Since Twitter does not care about UX for unregistered users (neither do Facebook, Instagram, Medium, Reddit on mobile, and so on), users can switch to Nitter instances or use extensions to block it.
It can also help somewhat to disable cookies for twitter.com.