
Alternatively, they're people who write documents in a field where LaTeX is the standard, they're not computer-savvy enough to even look for something new that might be acceptable or might compile to LaTeX, and at any rate they'd rather focus on their research than on changing the typesetting norms in their field.

(No shade on people who do decide to use alternatives, and Typst is great!)


Possibly. It could also be evidence that the bombs didn't really do their job. Either they missed or the bunkers were fortified enough and/or deep enough to avoid even the bunker busters.


In Iran's defense, there was credible OSINT[0] warning of the B-2s taking off 12+ hours in advance of the strike. Iran knows what a GBU-57 is, the writing was pretty clearly on the wall that a strike was imminent.

It's possible (though not guaranteed) that they simply relocated the enriched uranium before the attack.

[0] https://x.com/thenewarea51/status/1936391071430308207


Yes, that too. It was in newspapers hours before the attack, so there's no way Iran didn't know it was coming.


> "there was credible OSINT[0] warning"

That one was counterintelligence deliberately placed in OSINT to confuse Iran. The B-2s that flew west over CONUS in daylight, in plain view, were decoys flying a false timeline.

https://www.wsj.com/politics/national-security/the-u-s-strik... ("U.S. Strike on Iran Began With a Ruse")


Not necessarily. It could be one for loop running on tens of thousands of compromised IoT devices, with the only thing distributed being the command that starts the loops.


Sounds like you've never managed tens of thousands of nodes in a distributed system. It's not trivial.


What would make building a C&C server for a botnet hard? It's not like you need to carefully coordinate all those clients to hit precise timings; you just tell them who to target and let them rip, don't you?


Nothing. I did it with IRC servers in the late 90s when I was a dumb kid in high school.


Coordinating a botnet to launch a DDoS is commodity software at this point. You could argue that the engineering that went into the coordination software is good, which may or may not be true, but simply launching a botnet is well within the capabilities of a script kiddie and not something that shows sophistication on the part of the attacker.


(elixir / otp says "hold my beer")


Right. So why does the fact that they targeted 34,500 ports show it was a well-engineered attack? By itself it's just evidence that they know how to iterate over ports. Coupled with the throughput (7.3 Tbps) we know they had an enormous botnet. None of this points to a well-engineered attack; it just means that lousy IoT security has made botnets incredibly cheap.
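
For a rough sense of scale (the device count here is purely illustrative, not from the article):

    7.3 Tbps = 7,300,000 Mbps
    7,300,000 Mbps / 100,000 devices ≈ 73 Mbps per device

That's on the order of one home broadband uplink per device: a statement about scale, not about engineering.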

A well-engineered attack would not draw headlines for its scale because it would take down its target without breaking any records.


> A well-engineered attack would not draw headlines for its scale because it would take down its target without breaking any records.

You don't hear much about DDoS attacks that are comparable in size or that actually bring down their targets. How do you explain why this one made the news in spite of not having met your arbitrary and personal bar?


Like I said: it broke records for data throughput. It doesn't hurt that Cloudflare has an interest in publicizing the size of the DDoS attacks it fights off.

> in spite of not having met your arbitrary and personal bar?

I'm not sure what you mean by this. I didn't establish any sort of bar for what sorts of DDoS should get headlines, I'm just agreeing with OP that that line in the article doesn't make any sense. There may be other reasons to believe this attack was well-engineered but the article doesn't get into them.


Yep. The number of ports is a useless metric to indicate sophistication of an attack. It’s like saying someone is a genius because they can write the numbers 1 through 10 on a sheet of paper, which is about the equivalent complexity.


Video games, I feel, reverse this general trend, though. Unless they have a major story component (and sometimes even if they do), many games get iteratively 'better' (better for the purposes of making sales, if not of making original fans happy) for various reasons: improvements to the core game loop, polish that makes the game more appealing to new audiences, and, most importantly, graphics.

Story-based content is what struggles with sequels, because it's really hard to capture the feeling of the original well enough to satisfy existing fans while also telling a new story that's interesting in its own right. Being derivative without being too derivative.


At least for a while, technology got consistently better at a high rate for video games. Today I'm not so sure.


That would help, but who defines which sites are required for the task? If it's the LLM you haven't solved prompt injection because the LLM can be persuaded to open other sites that the user didn't intend.
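
To make that concrete: the allowlist only helps if it's enforced outside the model. A minimal sketch, in Python (every name here is mine, not from any real product):

    import urllib.request
    from urllib.parse import urlparse

    # Hosts the *user* approved up front; the LLM never gets to edit this set.
    APPROVED_HOSTS = {"news.ycombinator.com", "example.com"}

    def fetch_page(url: str) -> bytes:
        host = urlparse(url).hostname or ""
        if host not in APPROVED_HOSTS:
            raise PermissionError(f"blocked: {host} is not on the user's allowlist")
        with urllib.request.urlopen(url) as resp:
            return resp.read()

If instead the model decides which hosts are "required for the task", an injected page can simply argue its way onto the list.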


> If your browser behaves, it's not going to be excluded in robots.txt.

No, it's common practice to allow Googlebot and deny all other crawlers by default [0].

This is within their rights when it comes to true scrapers, but it's part of why I'm very uncomfortable with the idea of applying robots.txt to what are clearly user agents. It sets a precedent where it's not inconceivable that we have websites curating allowlists of user agents like they already do for scrapers, which would be very bad for the web.

[0] As just one example: https://www.404media.co/google-is-the-only-search-engine-tha...


>clearly user agents

I am not sure I agree that an AI-aided browser that will scrape sites and aggregate that information is "clearly" a user agent.

If this browser were to gain traction and end up being abusive to the web, that would be bad too.

Where do you draw the line of crawler vs. automated "user agent"? Is it a certain number of web requests per minute? How are you defining "true scraper"?


I draw the line where robotstxt.org (the semi-official home of robots.txt) draws the line [0]:

> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

To me "recursive" is key—it transforms the traffic pattern from one that strongly resembles that of a human to one that touches every page on the site, breaks caching by visiting pages humans wouldn't typically, and produces not just a little bit more but orders of magnitude more traffic.

I was persuaded in another subthread that Nxtscape should respect robots.txt if a user issues a recursive request. I don't think it should if the request is "open these 5 subreddits and summarize the most popular links posted since yesterday", because the resulting traffic pattern is nearly identical to what I'd have done by hand (especially if the browser implements proper rate limiting, which I believe it should).
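
Concretely, the recursive-path check could be as small as this sketch, using Python's stdlib parser (the agent string here is hypothetical):

    import urllib.robotparser
    from urllib.parse import urljoin

    def allowed_to_crawl(url: str, agent: str = "NxtscapeBot") -> bool:
        # Only consulted when a request fans out into a crawl.
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(urljoin(url, "/robots.txt"))
        rp.read()
        return rp.can_fetch(agent, url)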

[0] https://www.robotstxt.org/faq/what.html


robotstxt.org [0] is pretty specific in what constitutes a robot for the purposes of robots.txt:

> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

This is absolutely not what you are doing, which means what you have here is not a robot. What you have here is a user agent, so you don't need to pay attention to robots.txt.

If what you are doing here counted as robotic traffic, then so would:

* Speculative loading (algorithm guesses what you're going to load next and grabs it for you in advance for faster load times).

* Reader mode (algorithm transforms the website to strip out tons of content that you don't want and present you only with the minimum set of content you wanted to read).

* Terminal-based browsers (do not render images or JavaScript, thus bypassing advertising, which by some people's reasoning makes them robots because they bypass monetization).

The fact is that the web is designed to be navigated by a diverse array of different user agents that behave differently. I'd seriously consider imposing rate limits on how frequently your browser acts so you don't knock over a server—that's just good citizenship—but robots.txt is not designed for you and if we act like it is then a lot of dominoes will fall.
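
By rate limits I mean nothing fancier than a per-host minimum interval between requests; a sketch (mine, not anything the project ships):

    import time

    class PerHostThrottle:
        def __init__(self, min_interval: float = 1.0):
            self.min_interval = min_interval  # seconds between hits to one host
            self.last: dict[str, float] = {}

        def wait(self, host: str) -> None:
            # Sleep just long enough to allow at most one request per interval.
            now = time.monotonic()
            ready_at = self.last.get(host, 0.0) + self.min_interval
            if now < ready_at:
                time.sleep(ready_at - now)
            self.last[host] = time.monotonic()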

[0] https://www.robotstxt.org/faq/what.html


> only reading the content the user would otherwise have gone through.

Why? My user agent is configured to make things easier for me and allow me to access content that I wouldn't otherwise choose to access. Dark mode allows me to read late at night. Reader mode allows me to read content that would otherwise be unbearably cluttered. I can zoom in on small text to better see it.

Should my reader mode or dark mode or zoom feature have to respect robots.txt because otherwise they'd allow me to access content that I would otherwise have chosen to leave alone?


Yeah no, none of that helps you bypass the ads on their website*, but scraping and summarizing does, so it's wildly different for monetization purposes, and in most cases monetization determines the maintainability and survival of any given website.

I know it's not completely true: reader mode can help you bypass the ads _after_ you've already had a peek at the cluttered version, but if you need to go to the next page or something like that you have to toggle reader mode again, and so on, so it's a very granular form of ad blocking, while many AI use cases are about bypassing human viewing entirely. The other thing is that reader mode is not very popular, so it's not a significant threat.

*or other links on their websites, or informative banners, etc


> I know it's not completely true: reader mode can help you bypass the ads _after_ you've already had a peek at the cluttered version

What about reader mode that is auto-configured to turn on immediately on landing on specific domains? Is that a robot for the purposes of robots.txt?

https://addons.mozilla.org/en-US/firefox/addon/automatic-rea...

And also, just to confirm, I'm to understand that if I'm navigating the internet with an ad blocker then you believe that I should respect robots.txt because my user agent is now a robot by virtue of using an ad blocker?

Is that also true if I browse with a terminal-based browser that simply doesn't render JavaScript or images?


If you are using an ad blocker, then by definition you are intentionally breaking the behavior intended by the creator of any given website (for personal gain); in that context, any discussion about robots.txt or any other behavior the creator expects is a moot point.

Autoconfig of reader mode and the like is so uncommon that it's not even on the radar of most websites; if it were, browser developers would probably try to create a solution that satisfies both parties, like requiring ads to be text-only and placed at the end, along with other guidelines. But it's not popular. The same goes for terminal-based browsers: a lot of the most visited websites in the world don't even work without JS enabled.

On the other hand, this AI stuff seems to envision a larger userbase, so it could become a real concern, and therefore the role of robots.txt or other anti-bot measures could have practical consequences.


> If you are using an ad blocker, then by definition you are intentionally breaking the behavior intended by the creator of any given website (for personal gain); in that context, any discussion about robots.txt or any other behavior the creator expects is a moot point.

I'm not asking if you believe ad blocking is ethical, I got that you don't. I'm asking if it turns my browser into a scraper that should be treated as such, which is an orthogonal question to the ethics of the tool in the first place.

I strongly disagree that user agents of the sort shown in the demo should count as robots. Robots.txt is designed for bots that produce tons of traffic to discourage them from hitting expensive endpoints (or to politely ask them to not scrape at all). I've responded to incidents caused by scraper traffic and this tool will never produce traffic in the same order of magnitude as a problematic scraper.

If we count this as a robot for the purposes of robots.txt we're heading down a path that will end the user agent freedom we've hitherto enjoyed. I cannot endorse that path.

For me the line is simple, and it's the one defined by robotstxt.org [0]: "A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced. ... Normal Web browsers are not robots, because they are operated by a human, and don't automatically retrieve referenced documents (other than inline images)."

If the user agent is acting on my instructions and accessing a specific and limited subset of the site that I asked it to, it's not a web scraper and should not be treated as such. The defining feature of a robot is amount of traffic produced, not what my user agent does with the information it pulls.

[0] https://www.robotstxt.org/faq/what.html


robots.txt is not there to protect your ad-based business model. It's meant for automated scrapers that recursively retrieve all pages on your website, which this browser is not doing at all. What a user does with a page after it has entered their browser is their own prerogative.


>It's meant for automated scrapers that recursively retrieve all pages on your website, _which this browser is not doing at all_

AFAIK this is false, and this browser can do things like "summarize all the cooking recipes linked in this page" and therefore act exactly like a scraper (even if at a smaller scale than most scrapers).

If tomorrow, magically, all phones and all computers had an ad-blocking browser installed (and set as the default browser), a big chunk of the economy would collapse. So while I can see the philosophical value of "what a user does with a page after it has entered their browser is their own prerogative", the pragmatic in me knows that if all users cared about that and acted on it, it would have grave repercussions for the livelihoods of many.


https://www.robotstxt.org/faq/what.html

> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

There's nothing recursive about "summarize all the cooking recipes linked on this page". That's a single-level iterative loop.
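
The difference is easy to see in code. A sketch (the link extraction is deliberately crude):

    import re
    import urllib.request

    def fetch(url: str) -> str:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def extract_links(html: str) -> list[str]:
        return re.findall(r'href="(https?://[^"]+)"', html)

    # One level, bounded by what the user's page links to: a user agent.
    def fetch_linked_pages(page_url: str) -> list[str]:
        return [fetch(u) for u in extract_links(fetch(page_url))]

    # Recursive and unbounded: the traffic pattern robots.txt was written for.
    def crawl(url: str, seen: set[str] | None = None) -> None:
        seen = set() if seen is None else seen
        if url in seen:
            return
        seen.add(url)
        for u in extract_links(fetch(url)):
            crawl(u, seen)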

I will grant that I should alter my original statement: if OP wanted the browser to respect robots.txt when it receives a request that should be interpreted as an instruction to recursively fetch pages, then I'd think that's an appropriate use of robots.txt, because that's not materially different from implementing a web crawler by hand in code.

But that represents a tiny subset of the queries that will go through a tool like this and respecting robots.txt for non-recursive requests would lead to silly outcomes like the browser refusing to load reddit.com [0].

[0] https://www.reddit.com/robots.txt


The concept of robots.txt was created in a different time, when nobody envisioned that users would one day interact with websites through commands written in plain English sentences (including commands that span multiple pages). So arguing about whether AI browsers should respect it is somewhat beside the point; if this kind of usage takes off, it would probably make more sense to create a new standard for these use cases, something like AI-browsers.txt, to make the intent of blocking (or not blocking) AI browsing capabilities explicit.


Alright, I think we can agree on that. I'll see you over in that new standardization discussion fighting fiercely for protections to make sure companies don't abuse it to compromise the open web.


So is Chrome. Very artificial. It's still not a robot for the purposes of robots.txt.

What coherent definition of robot excludes Chrome but includes this?


The meatsack at the end of the technology chain.

No meatsack in the loop making decisions and pushing the button? Robots.txt applies.

