
Endpoint Detection and Response. Basically a new term for antivirus/antimalware, but one that reports back to defenders and helps them respond to malicious software that may be on the device.


so it's malware.


"antivirus/antimalware" has gotten such a bad rap that it needed a makeover: EDR


A declaration of reputational bankruptcy, but where's the concomitant effort to restructure the reputational debt that necessitated it?


"I'd rather have ED than EDR."


never worked in an environment with hard security requirements?

tell me, if your responsibility was to prevent, identify, and respond to breaches, what policies and technologies would you utilise to achieve this goal?


The comments on this site are really something after having worked for an engineering corp that was actively targeted for industrial espionage. You guys really don't wanna monitor what processes on your boxes are doing? Hopefully your servers don't do anything of consequence lol.


Do these actually work?

We've got one of those at work, and the most visible effect is it makes me feel like driving around with the handbrake on.

Then, every so often, it'll flag the code I'm working on as "malicious". It's pretty basic glue stuff, and launching the executable in their sandbox usually turns up nothing. Sure, I can add an exception for what I'm working on and my tools so it doesn't scan rustc every time it runs. But exceptions can only be paths. Aren't we lucky that bad guys would never ever overwrite the files I've excluded.

When we first started deploying it, I wrote a quick and dirty cryptolocker: read files and rewrite their contents encrypted with AES. It didn't take any evasive action, it just traversed directories and fetched all the files. I even went out of my way to do it multi-threaded, so I wouldn't have to wait too long while testing. Sure enough, it flagged my test-crypto.exe as suspicious. But I guess I'm not enough of a threat, since I tried renaming it to meh.exe and, wouldn't you know it, I could happily encrypt my own home folder without any bother.

So I'm still not fully convinced these aren't just like the antivirus of old, only with a different name.


Yes, I have operated Carbon Black, Huntress, and CrowdStrike and they all work very well at stopping real attacks. You are always going to have edge cases, but there's a lot of power in being able to roll back anything even if it wasn't initially blocked. Within a few minutes of badstuff.exe being flagged I can have a graph of everything it's ever touched, how it got there, say with certainty if consumer data was impacted, and know everything that was exfiltrated. We can go back to patient zero and see everything that it branches out to and freeze every iteration of it out of the network instantly. And it's easy; you used to be down for weeks and have to hire a DFIR firm to puzzle it out. Now it's a button.


> there's a lot of power in being able to roll back anything even if it wasn't initially blocked. Within a few minutes of badstuff.exe being flagged I can have a graph of everything it's ever touched, how it got there, say with certainty if consumer data was impacted, and know everything that was exfiltrated.

I can certainly see the value in that.

But does that work when the threat is actually "new"? Say, some badstuff.exe managed to run and do its thing without being flagged by the EDR. Somehow you found out about it, say on another box. Can you investigate a posteriori how it got on the initial box and what it did there?


Oh, I fully understand why it's needed, and I have experience working with EDR software - which is why I stand by my statement that I'd rather deal with ED than EDR because at least there's a remedy for the former :P


SolarWinds.

Oh wait! It keeps happening!


First step, get rid of windows. :)

- if something requires windows, then we don't need that something.


Fire everyone.


Like Advertising (surveillance and dossier creation)


no, it's that the capabilities have evolved so far beyond traditional antivirus that it's simply inaccurate to describe it as such.


The only difference between malware and security software is the intent of its author. Functionally they are equivalent however.


Well antivirus is also software that has to:

- be in a privileged position on the system

- open up all kinds of files for analysis without the user's interaction

Now if you want a way to create a juicy target for malware authors and increase the attack surface of your system, this is one way to do it.


I partially agree with you after seeing the behavior of some "security" software that really puts the "intent of its author" in question.


Well, the intent is usually the same: extract money from the user, either by outright stealing it or by scaring them and getting paid for "protection".


what part of EDR software seems malicious to you?


The company I work for recently had the beautiful experience of having Windows Defender delete our program from many of our customers' computers during the weekend, with the consequent support calls the next day about "your program does not run and I'm losing money!" and the headache of having to find out why the exe is magically gone, since the antivirus going crazy is the last thing you think of.

"Thankfully" it seems they did a progressive rollout of whatever version of Defender that detects our software so we didn't get every customer angry at once, which would come pretty close to a business ending event.

So yeah, malware seems an adequate word to me. Especially since there's no way to find out what heuristic we're tripping and no one to ask for help, so there's no guarantee that this won't happen again in a few weeks.


The malicious mindset is right in the name. It redefines my computer to exist only in context of another thing. My hardware is now an """endpoint""" and not a standalone system.


I'm trying to see your point, but it doesn't really track; a re-definition based on modern context isn't malicious.

Threats are not simply viruses, and network detection / response is objectively different.

You also probably connect your "standalone system" to a network.


It's not something that you're going to install on personal machines. It's something that the CISO wants installed on company machines for compliance reasons. And before you claim that you don't want your activity monitored on the company laptop, the laptop belongs to the company. There's no expectation of privacy.


In a corporate setting (where this kind of software is often used), "your" computer is not really yours and does in fact only exist in context of another thing (the corporation).


> new term

friend, the current term is XDR (eXtended Detection and Response - although that was a year or two ago and might be old in the market by now!)


XDR is a marketing term for a service that bundles or aggregates EDR with other types of enterprise level security monitoring. The endpoint part is still called EDR.


i know that, but the discussion was about the latest buzzwords in the endpoint domain and EDR is definitely very mature technology at this point


Quick, change it again so I can't know what you're talking about!


Some EDR examples for those wondering

* CrowdStrike

* SentinelOne

* Heimdal


There's one on there that's $28.58 - $8,000.00 per hour that's especially hilarious. Assuming 40 hours a week for 52 weeks that's roughly $60k - $16.6 million for a salary range. Obviously bogus.
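For anyone checking the arithmetic, a quick sketch (assuming the same 40 hours/week, 52 weeks/year):

  hours_per_year = 40 * 52            # 2080 hours
  print(28.58 * hours_per_year)       # 59,446.40  -> roughly $60k
  print(8000.00 * hours_per_year)     # 16,640,000 -> roughly $16.6 million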


See the Howey Test: https://www.investopedia.com/terms/h/howey-test.asp

Airline Miles don't grant you access to a portion of the profits from the airline. They aren't an investment. They are a currency for future travel.


They're a liability of the airline, and they're an asset of the customer, but they are not an investment, because they can only go down in value, not up.

Same as gift cards.


Sometimes the dollar price of a flight goes up but the points price stays the same = points went up.


That you can make money through arbitrage is not the definition of a security. See my link above.


Does the CC companies' practice of buying these and then granting them to card holders as a reward qualify or affect the test at all?


Your example is George Floyd.

* He was accused of passing a counterfeit $20 bill; we still don't know if Floyd created the bill or even knew it was counterfeit. For all we know he might have been an innocent victim of someone else. https://web.archive.org/web/20220409101419/https://www.nytim...

* Floyd did not die as the result of a fentanyl overdose: https://www.usatoday.com/story/news/factcheck/2021/04/16/fac...

* He died because a police officer knelt on his neck for 9 minutes and 29 seconds after he was already in handcuffs. https://web.archive.org/web/20210410114811/https://www.start...

* They had him in the car but then pulled him onto the ground and started kneeling on him. There were 4 officers present at that point. They did this despite calling for EMS. They did this despite Floyd saying he couldn't breathe. They did this despite bystanders pointing out that he couldn't breathe. The position continued even after he was clearly unconscious. They only got off his neck when EMS arrived and told them to. With EMS on site and asking for Fire Department help, the police didn't direct the Fire Department to Floyd, delaying their help for 5 minutes. https://www.youtube.com/watch?v=vksEJR9EPQ8

This behavior was so shocking that these officers were convicted of crimes for their behavior. It is VERY rare for police officers to be indicted, let alone tried and convicted for killing someone. I seriously doubt that they would have been tried and convicted if it weren't for the shocking video showing them kneeling on his neck for a long period of time.

I find it impossible to reconcile police refusing to do their jobs with a VERY rare conviction for misconduct that is so utterly shocking. If you don't want what happened to those officers to happen to you, simply don't kneel on someone's neck for a long period of time. Carrying out a search warrant for a laptop is unlikely to result in that sort of situation. Simply because there's not really a good explanation for any of the behavior that resulted in Floyd's death on the part of the police.

Maybe they choose not to do their jobs for the other reasons you gave. But the facts of the George Floyd case do not support your conclusion.


You must watch the actual, full, uncut video of his death - you will realize very quickly that you have been lied to.

> Floyd did not die of the results of fentanyl overdose

I don't care what USA Today says - I suggest not placing much credence on claims mid-tier tabloids make. I am a former EMT, and I've seen people go into drug-induced respiratory failure. Even if you haven't, it's pretty obvious in this case. I implore you to actually watch the entire video of his death - it makes my claims so obvious.

> They had him in the car but then pulled him onto the ground and started kneeling on him.

Because he escaped the car and was spazzing out, trying to escape, high as hell. Please watch the video.

> They did this despite Floyd saying he couldn't breath.

He was complaining he couldn't breathe before he was even on the ground (including when he was in the car), because he was experiencing drug-induced respiratory failure. Please watch the video.

> They did this despite bystanders pointing out that he couldn't breath.

Yes, this is what bystanders do in every ghetto neighborhood when the cops arrest someone. I've seen it 50 times. "He didn't do nothing, why are you arresting him, you're hurting him" - the eternal refrain of the ghetto bystander, regardless of the situation. Whatever the retarded bystanders were yelling conveys zero information.

> This behavior was so shocking that these officers were convicted of crimes for their behavior.

They were indicted and convicted as a sacrifice to avoid political backlash and rioting, not because they were guilty.

> a VERY rare conviction for misconduct

Doesn't matter how rare it is - it's rationally going to affect everyone's behavior anyway. Cf https://en.wikipedia.org/wiki/Chilling_effect

> But the facts of the George Floyd case do not support your conclusion.

I guarantee you, if you watch the full, uncut video of the entire interaction (which, incidentally, are some of the only facts in play here, unlike claims by mid-tier tabloids), you will change your mind. You have been fed a false narrative, and it's not even difficult to prove this, but people for some reason refuse to spend like 15 minutes to watch the actual source of truth (presumably partially because venerable institutions like USA Today are doing their best not to show it to you).


You do not even have to talk to AWS to remove the MFA from the root account. You simply need access to the phone number on the account (though there are ways around the phone number, see below) and the email address for the root account.

It's been a little over a year since I've done it, but as I recall this is how it goes. You receive an email with a link that takes you to a site that starts a verification process via the phone. You get a number from the site that you are prompted to enter when they call you on the phone. Once that's done you can log into the account without the MFA device and then even remove the MFA device entirely.

The email address, I believe, can only be changed by AWS (and, at least the last time this was an issue for me, it can't ever be reused for a new AWS account).

The phone number can be changed by anyone with aws-portal:ModifyAccount, which probably means someone with admin access. It is NOT restricted to being modified by the root account.

So if you have working access to an account with that permission and access to the email, you can change the phone number to one you have access to and go through the whole process. Meaning if you have the above permission, you really only need access to the email.

Link to the documentation for this flow: https://aws.amazon.com/blogs/security/reset-your-aws-root-ac...
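Not part of the AWS flow above, but a rough sketch (Python with boto3) of how you might audit which IAM users could actually call aws-portal:ModifyAccount. It assumes credentials that can list users and run the IAM policy simulator, and that your account still honors the legacy aws-portal action namespace:

  import boto3

  iam = boto3.client("iam")

  # Ask the IAM policy simulator, for each user, whether aws-portal:ModifyAccount
  # would be allowed. Requires iam:ListUsers and iam:SimulatePrincipalPolicy.
  for page in iam.get_paginator("list_users").paginate():
      for user in page["Users"]:
          result = iam.simulate_principal_policy(
              PolicySourceArn=user["Arn"],
              ActionNames=["aws-portal:ModifyAccount"],
          )
          for evaluation in result["EvaluationResults"]:
              if evaluation["EvalDecision"] == "allowed":
                  print(user["UserName"], "can change the account phone number")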


Ok, that's not trivial to hack, but it's in no way more secure than accepting a few more backup tokens.

Both email and phone numbers have widely known and exploited vulnerabilities that won't ever be fixed (worse if the phone part is only SMS). Requiring both at the same time is OK-ish, but it's hardly exemplary security.


For what it's worth the phone portion is a voice call where you have to enter a number with touchtone.


It's possible that even though we are not using GovCloud they had additional precautions enabled for us (this was a few years back). My coworker vividly remembers having to wait for the notary to show up.


How excellent the ASN.1 tooling is depends on which subset of ASN.1 you're using. Some of the tooling supports only one iteration of ASN.1 or the other, to the degree that the IETF had to write a document on how to deal with this, since some of the standards use the older ASN.1 and some use the newer ASN.1: https://tools.ietf.org/id/draft-ietf-pkix-asn1-translation-0...

Interoperability with ASN.1 is very fragile at best.


BTW, that I-D is now RFC 6025 [0].

There's also RFC 5912 [1], which adds x.681/x.682/x.683 constraints to PKIX modules. I use this to great effect in Heimdal[2]. One function call can decode everything in a certificate, and a second can pretty print it in JSON; one command can pretty-print a certificate in all its glory in JSON.

  [0] https://datatracker.ietf.org/doc/html/rfc6025
  [1] https://datatracker.ietf.org/doc/html/rfc5912
  [2] https://github.com/heimdal/heimdal
      https://github.com/heimdal/heimdal/tree/master/lib/asn1
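
For anyone without Heimdal handy, a very rough Python analogue (using the cryptography package; not RFC 5912-driven, and only a hand-picked set of fields rather than "everything in all its glory"):

  import json
  from cryptography import x509

  # Load a PEM certificate from disk; "cert.pem" is a placeholder path.
  with open("cert.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())

  print(json.dumps({
      "subject": cert.subject.rfc4514_string(),
      "issuer": cert.issuer.rfc4514_string(),
      "serial": cert.serial_number,
      "not_before": cert.not_valid_before.isoformat(),
      "not_after": cert.not_valid_after.isoformat(),
      "extensions": [ext.oid.dotted_string for ext in cert.extensions],
  }, indent=2))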


We have tons of interoperable PKIX implementations (OpenSSL and derivatives, NSS, OpenJDK's, GnuTLS, wolfSSL, Heimdal, and many many more), and a bunch of interoperable Kerberos implementations (MIT Kerberos, Heimdal, Windows / AD, OpenJDK's, the IBM Java's, GNU Shishi, there's a python implementation).


I know of at least one problem with ASN.1. The string encodings other than UTF-8 are terrible. Most of the string encodings are very limited and weird subsets of ASCII that nobody actually uses anymore. ASN.1 itself doesn't define the encodings and just refers to other standards.

The problem is probably most notable with the T.61 encoding, which changed over the years; since ASN.1 just references other standards, nobody is quite sure exactly what you have to support for T.61 to actually work right.

Within X.509 certificates, though, nobody bothers to actually implement T.61; they just use the T.61 string type to carry ISO-8859-1.
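
A tiny sketch of that de facto behavior (the byte string is made up; real T.61 would encode 'ü' differently, which is the whole problem):

  # Hypothetical TeletexString contents from a sloppy CA.
  raw = b"Gr\xfcn"
  # The common "just treat it as ISO-8859-1" reading gives "Grün".
  print(raw.decode("latin-1"))
  # A strict T.61 decoder would treat 0xFC as something else entirely (or reject
  # it), which is why nobody can agree on what these strings "really" contain.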

There are a bunch of gory details around this mess in this (now quite old) write-up here: https://www.cs.auckland.ac.nz/~pgut001/pubs/x509guide.txt

Since that write-up, I believe UTF-8 has become pretty much the expected character encoding for X.509.

I documented some of the quirks around 6 years ago when I took an existing X.509 parser and improved it for use in certificate trust management in Subversion: http://svn.apache.org/viewvc/subversion/trunk/subversion/lib...

Basically ASN.1 wasn't well defined and it only works well when people agreed to only use certain features or to interpret things in a particular way when ambiguous.

It's also notoriously difficult to parse well. It's very easy to have bugs in your parser, even if you're only implementing the subset that's needed for X.509, especially if you're doing so in a non-memory-safe language.

I can't speak for why Google invented Protobufs, but I can't imagine anyone sane picking up ASN.1 for anything modern and deciding that this is what they want to use.


For the string encoding thing, however, it does have UTF-8 and you should not use anything else to express arbitrary human text anyway.

PKIX actually leverages the weird encoding restriction to our benefit. It defines two kinds of names which things might have on the Internet (you can and should stop trying to name things which are actually on the Internet some other way), DnsNames and IpAddresses. IpAddresses, since they're either 32-bit or 128-bit arbitrary bit values, are just represented as either 32-bit or 128-bit arbitrary bit values. So you cannot express the erroneous IPv4 address 100.200.300.400 as an IpAddress, which means you can't trip up somebody's parser with that nonsense address. DnsNames use a deliberately sub-ASCII encoding from ASN.1 which can express all the legal DNS names (all A-labels and the ASCII dot . are permissible) but can't express lots of other goofy things including most Unicode. So a certificate issuer, even if they're completely incompetent, cannot write a valid DnsName that expresses some garbage IDN as Unicode. Hopefully they read the documentation and find out they need to use A-labels (Punycode) but if not they're prevented from emitting some ambiguous gibberish.
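
A small Python illustration of those two properties (this just mimics the constraints; it doesn't produce actual ASN.1):

  import ipaddress

  # iPAddress SANs are just the raw 4- or 16-byte values.
  print(ipaddress.ip_address("192.0.2.1").packed)    # 4 raw bytes
  print(ipaddress.ip_address("2001:db8::1").packed)  # 16 raw bytes

  # The bogus address from above can't even be constructed.
  try:
      ipaddress.ip_address("100.200.300.400")
  except ValueError as err:
      print("rejected:", err)

  # dNSName is ASCII-only; Unicode names have to become A-labels first.
  print("bücher.example".encode("idna"))             # b'xn--bcher-kva.example'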

Even in forums where you'd once have expected pushback, "Just use UTF-8" is becoming more widespread. Microsoft, for example: once upon a time you'd get at least some token resistance; today they're likely to agree "Just use UTF-8". So ASN.1 ends up no worse off for having half a dozen bad ways to write text you shouldn't use, compared to, say, XML, HTML, and so on.


Agree, although the right thing to do helps in specific applications but not so much in the general case. You're very often stuck with other people's MIBs / specs and encoders, trying to make sense of what a) they're allowed to put on the wire and b) what they actually do and under what circumstances.


A couple of years ago I ran into the same confusion over the "TeletexString"/"T61String" data type in ASN.1. After going down the rabbit hole of what T.61 is and trying to map it to Unicode, I reread the ASN.1 (X.690) spec and realized that the authors never actually referenced T.61. Ever since the first edition of ASN.1 in 1988, those strings have not used T.61. They use a character set that is easily mapped to Unicode - https://www.itscj-ipsj.jp/ir/102.pdf, a subset of US ASCII.

Not to say the rest of the spec is notably better. If fully implemented, it requires supporting escape codes in strings to change character sets. I've never seen valid escape codes in real-world data, but they probably exist somewhere.

As the original article shows, ASN.1 has lots of other challenges and complexity. Trying to write a code generator that supports all the complexity is no trivial task and the only open source one I've seen only generates C code. Protobuf has the advantage of having modern language support (including multiple type safe and memory safe languages).


Eh... It does have a transitive normative reference to T.61, but only by way of special restrictions on the use of three characters.

T61String is defined in terms of ISO 2022, with the default G0 (graphic) character set set to ISO-IR-102 (as you linked). ISO-IR-102 defines the set of graphical characters, but also places a condition on the use of 3 of them by reference to T.61. It also requires that the control character set C0 be set to ISO-IR-106 by default, and ISO-IR-107 for C1.

The net effect is that the default character set of T61String is almost the T.61 character set, except that to get the T.61 character set you need to include the escape sequence that sets G1 to ISO-IR-103 (ESC 2/9 7/6).

A conforming T61String implementation does need to support the escape sequences and resulting encodings from ISO-IR-6, ISO-IR-87, ISO-IR-102, ISO-IR-103, ISO-IR-106, ISO-IR-107, ISO-IR-126, ISO-IR-144, ISO-IR-150, ISO-IR-153, ISO-IR-156, ISO-IR-164, ISO-IR-165, ISO-IR-168.

Since the control character sets include shift prefixes etc, properly parsing T61Strings into Unicode is non-trivial.

This is actually a pretty good reflection of the complexity in ASN.1. Technically the ASN.1 spec proper only requires that a T61 string support exactly the set of characters specified in the above registrations. It does not mandate any particular format for them. It is the BER encoding that requires that ISO 2022 be used to encode these. A different encoding could specify that all strings are encoded as UTF-8, and the different types are just various subsets of allowed characters.


Heimdal's ASN.1 compiler generates C code. It also generates bytecode with C bindings. Two options.

Also, I've made it generate JSON dumps of the ASN.1 modules. My goal is to eventually replace the C-coded backends that generate C / bytecode with jq-coded backends that can generate C, Java, Rust, etc.


> Basically ASN.1 wasn't well defined and it only works well when people agreed to only use certain features or to interpret things in a particular way when ambiguous.

ASN.1 has always been as well defined as, or better defined than, its competition. The ITU-T specs for it are a thing of beauty not often equaled outside the ITU-T.

That said, for a long time the ASN.1 specs were non-free, and that hurt a lot. Also, the BER family of encoding rules stunted development of open source tooling for ASN.1.


> I can't imagine anyone sane picking up ASN.1 for anything modern and deciding that this is what they want to use.

Part of my curiosity stems from Apple using it as part of their bootable file-format: https://www.theiphonewiki.com/wiki/IMG4_File_Format

But as you say, I have to assume they're using it in a very constrained way.


> Part of my curiosity stems from Apple using it as part of their bootable file-format: https://www.theiphonewiki.com/wiki/IMG4_File_Format

I could only speculate, but I wonder if part of the reason is that DER is completely unambiguous and therefore suitable for cryptographic services. It's also very easy to decode without a specification (TLV format). Apple are almost certainly using ASN.1 compilers for their mobile devices and security layers (even if they ship FOSS implementations, I'd be surprised if they aren't checking their work with commercial compilers), so there's overlap there. Rolling your own format in that case would be unnecessary and just another potential failure point, when it could instead be rolled into a single unit.
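
To illustrate the "easy to decode without a specification" point, here's a minimal DER TLV walker sketch in Python. It assumes definite-length encodings and single-byte tags, which is enough for certificates and, as far as I can tell, IMG4-style payloads:

  def walk(der, offset=0, end=None, depth=0):
      """Print the tag/length structure of a DER blob with no schema at all."""
      end = len(der) if end is None else end
      while offset < end:
          tag = der[offset]
          length, pos = der[offset + 1], offset + 2
          if length & 0x80:                     # long-form length
              n = length & 0x7F
              length = int.from_bytes(der[pos:pos + n], "big")
              pos += n
          print("  " * depth + f"tag 0x{tag:02x}, {length} bytes")
          if tag & 0x20:                        # constructed type: recurse
              walk(der, pos, pos + length, depth + 1)
          offset = pos + length

Point it at any DER file, e.g. walk(open("cert.der", "rb").read()), and the nesting falls right out.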


One should not design cryptographic protocols so that they require canonical encodings.

Instead one should write tooling that produces decoders that preserve the original encoding of signed data.


> Instead one should write tooling that produces decoders that preserve the original encoding of signed data.

That's an interesting idea. How do you evaluate the tradeoffs in this design? I.e., what does it buy you compared to saying that you need to sort in tag order, for example? (Assume that you have something like an automatic tagging environment for sake of argument.)


Say you have a certificate, and it's supposed to be encoded in DER, which is canonical, but for some reason the issuing CA has a crappy encoder and produced something slightly not-DER-but-still-BER. Well, because certificates are supposed to be DER you can just reject it. But if you wanted to accept it you couldn't validate the signature if you simply tried to re-encode the `tbsCertificate` field -- you'd come up with DER encoding that doesn't match the original. So instead you want your codec to preserve the original encoding of the `tbsCertificate` even as it returns to you the decoded `tbsCertificate`, and now you can validate the signature. This is easier said than done because the encoding of the `tbsCertificate` is buried in the encoding of the Certificate, so you can't easily get at that encoding without writing a partial decoder, or without having support from the ASN.1 tooling.

This is what Heimdal's ASN.1 compiler does: it lets you request that for `TBSCertificate` you get a `_save` field that has the original encoding of that value, and just that value (not the outer `Certificate`).

The only trade-off is that you're wasting memory for a while, as you now keep around both the decoded value and its original encoding. But after you're done validating the signature, you can release the memory used for tracking the original encoding.

Sorting by tag is not involved here, and neither is automatic tagging.
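
As a hand-rolled illustration of the "get at the original encoding" problem (not Heimdal's _save mechanism; just a sketch assuming definite-length encoding and that the outer SEQUENCE's first child is the TBSCertificate):

  def _tlv(buf, offset):
      """Return (value_start, value_len) for the TLV at offset (definite lengths only)."""
      length, pos = buf[offset + 1], offset + 2
      if length & 0x80:
          n = length & 0x7F
          length = int.from_bytes(buf[pos:pos + n], "big")
          pos += n
      return pos, length

  def raw_tbs_certificate(cert_der):
      # Enter the outer Certificate SEQUENCE; its first child is the TBSCertificate.
      start, _ = _tlv(cert_der, 0)
      value, length = _tlv(cert_der, start)
      # Header plus content, exactly as the CA encoded it.
      return cert_der[start:value + length]

A signature check over those raw bytes then works even when the issuer's encoder emitted slightly-off-DER-but-still-BER.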


> The string encodings other than UTF-8 are terrible.

Well, yes, because ASN.1 predates Unicode.


At least on iPhones, though, they have a way to activate a mode that prevents the use of TouchID and FaceID. If I press the power button on my phone 5 times in a row, that turns them off.

Yes I still run the risk of my device being unlocked against my will if I'm caught by surprise. But I'm able to disable this functionality in places where I think the risk of that may be higher, e.g. while traveling.

I'll still take the trade-off of a longer password (not just a few digits) on my phone while using a biometric check for normal access.

Of course not everyone may have the same threats to consider and others may make different choices. Doesn't make either of our choices wrong.


On modern FaceID phones you need to hold the power and volume-down keys to bring up the Reset/Power Off screen and then cancel. Just clicking multiple times will bring up wallet, siri, or do nothing.


> Just clicking multiple times will bring up wallet, siri, or do nothing.

On my iPhone 13, just now, I rapidly clicked the power button 5x.

The phone immediately made a loud sound and put up a screen that said "Emergency SOS". There was an option to "cancel" it, but I assume that the phone would have contacted 911 in short order unless I quickly cancelled.

So the correct description is probably "it depends".


There is an option in the settings to make it not actually call unless you press a further UI button on the iPhone.

In Settings, go to Emergency SOS and turn off Auto Call.


My Android phone did exactly the same. Very loudly and unexpectedly.


Just hit the power button 5 times on my iPhone 13 Pro, and it locked down FaceID as I'd expect (while bringing up the Reset/Power Off screen). You've described an alternative method, not the only.


Which OS are you updated to (no please don't post it)? not 15.1.x? Have you disabled wallet, Siri, and SOS? It doesn't work on any of the 5x 12s and 13s (pro and not) I just tried. It did work on an 11, which was not updated to 15.

You also risk the accidental activation of an SOS call.


I think you need to turn on the click-the-power-button-5-times feature by turning on the "Call with Side Button" option in Settings under Emergency SOS. I'd also suggest turning off Auto Call if you want to use it like this.

Yes, this is kinda buried, and it's not clear at all that this function also disables FaceID, but it does.


No the car does not require a remote connection to work. See my detailed explanation here: https://news.ycombinator.com/item?id=29283262


There are 4 ways to unlock a Tesla.

* Via Bluetooth from your phone. Model 3 and Y used this as the primary way of unlocking. This does not require a Tesla server. It's just local communication between the car and the phone. The car and the phone are paired.

* Via a key card. Model 3 and Y use this when you first get the car, until you pair it with your phone, and it's what you use if you want to give someone temporary access to the car (e.g. service, valets).

* Via a key fob. Model S and X used this as the primary method in the past (not sure if the latest refresh changed this, but older S and X vehicles didn't support Bluetooth or key cards).

* Remotely via the phone app. As in, you make an API call to Tesla with your Tesla credentials and Tesla sends a remote command to the car. This requires Internet access for the device making the request and for the car in order to receive the command. This last bit is what's broken. Given the requirements, this has never been very reliable and nobody would want to use it on a day-to-day basis.

So I seriously doubt very many people are locked out of their cars. I am able to get into my 2015 Model S via the key fob and my 2018 Model 3 via Bluetooth from my phone.


FWIW: my understanding is that the current generation of fobs are just bluetooth devices speaking the same protocol as the app.


Could be. I don't believe the old fobs were bluetooth.

For what it's worth, I also didn't mention that at least the old fobs also had an RFID chip in them, so if the battery was dead you could just hold them up to certain spots on the car to open/drive it.

