m3047's comments | Hacker News

I'm sorry for your loss, but this sounds like an antipattern. Hundreds of emails between co-workers and it was all contemporaneously related to work in progress or cat pictures of your own cats, didn't contain PII or proprietary information of your employer or unaware third parties? And you want it back? From far enough away (that I might as well be in orbit) this seems preferable to an unencrypted drive ending up in somebody's hands for "refurbishment" (cough printers with hard drives).

No one is innocent. I refuse to use Let's Encrypt and operate my own CA instead, and as a consequence of scareware browser warnings I publish http: links instead of https: (if anyone cares, you know to add the "s", don't you?). I run my own mailserver which opportunistically encrypts, and at least when it gets to me it's on hardware which I own and somebody needs a search warrant to access... as opposed to y'all and your gmail accounts. I do have a PGP key, but I don't include it in the first email to every new correspondent, because too many times it's been flagged as a "virus" or "malicious".

Clearly we live in a world only barely removed from crystals and ouija boards.


> Hundreds of emails between co-workers and it was all contemporaneously related to work in progress or cat pictures of your own cats, didn't contain PII or proprietary information of your employer or unaware third parties?

You're merely defining away the problem. You have no idea what was in those emails.


Whatever was in those emails wasn't important enough for them to decrypt and store in a durable fashion, or to put the keys in a safe with the gold bars.

We call this the "scream test" in BOFH land.


Who knew I’d need to do this? I’d never needed to do it with my emails in the decades prior.

You’ve also got no idea what was in those emails. Could be valuable knowledge, or logs of some crazy rare bug or scenario, that would be useful to review today.

We just turned on S/MIME by default, to “be secure”, whatever that means. There was no warning in the email client about losing access to the email if you lost your keys.

Citing BOFH is all well and good inside certain circles. In the real world, people don’t like spending time or effort on poorly thought out and implemented solutions.


The keys aren't in the backups you still have?

IOW: who owns the backups owns the data... until proven otherwise. My default presumption from space is that 1) there are document management policies and 2) document management policies apply.


It wasn't important enough at the time to the BOFH.

As they say in physics: color and charm may change, but up and down are forever.

Ummm... Google, Amazon, eBay, and PayPal... Facebook, Airbnb, Uber, and the offspring of Y Combinator... doesn't look like a particularly virtuous trajectory to me.

One of the more interesting ways of detecting rail damage, and subsidence in general, is optically detecting noise / distortion in fiber optic cables. It's an applied case of observables underlying one evaluative (the "signal") being used originally to diagnose possible maintenance issues, and then someone going "hey there, wait a sec, there's a different evaluative we can produce from this exhaust and sell".

https://fibersense.com/

http://www.focus-sensors.com/
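
Conceptually the mining is simple, even if the optics aren't. A toy sketch in Python (not any vendor's algorithm; the data, thresholds, and segment model are all invented): establish a per-segment noise baseline, then flag segments whose recent noise departs from it.

  # Toy illustration: per-segment noise baseline + z-score flagging.
  # Synthetic data and thresholds; real DAS processing is far richer.
  import random
  import statistics

  BASELINE = 100     # samples used to establish each segment's baseline
  Z_LIMIT = 4.0      # departures beyond this many sigmas get flagged

  def flag_segments(readings):
      """readings: {segment_id: [noise samples]} -> suspect segment ids."""
      suspects = []
      for segment, samples in readings.items():
          mu = statistics.mean(samples[:BASELINE])
          sigma = statistics.stdev(samples[:BASELINE]) or 1e-9
          recent = statistics.mean(samples[BASELINE:])
          if abs(recent - mu) / sigma > Z_LIMIT:
              suspects.append(segment)
      return suspects

  # Demo: segment 17 develops excess noise after the baseline window.
  readings = {s: [random.gauss(0, 1) for _ in range(200)] for s in range(32)}
  readings[17][BASELINE:] = [random.gauss(8, 1) for _ in range(100)]
  print(flag_segments(readings))   # expect [17]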


  Location: Tacoma, WA, USA
  Remote: Sure, that's an option.
  Willing to relocate: No, but willing to drive around the greater Pacific Northwest.
  Technologies: Python, Redis, C, DNS, Ignition SCADA / HMI, SQL, Bash, Linux, TCP/UDP,
                old school ML & numerical methods, threat hunting, electronics, handy
                with a wrench, dangerous with acetylene.
  Résumé/CV: https://github.com/m3047 https://www.linkedin.com/in/fred-morris-03b6952/
  Email: [email protected]
Let's have a conversation; I like working with people. Not looking for the same old thing. I've had a business license since 1984; prior to 2000 about a third of my work was firm bid (why did that go out of fashion?). Technologist and problem solver. Cloud skeptic, but I use what works. I prefer contracts to promises. W2 / contract: it depends; let's choose the correct vehicle for the scenario, and risk has a lot to do with it. Part time, seasonal, campaign... it's all good.


Some great links to thinky things, if you like those.


It's been over 20 years, but I used to build batch processing pipelines using SMTP (not Outlook). Biggest "choose your own adventure" aspect is accounting / auditing (making sure you didn't lose batches). I still joke about it as an option; although if somebody took me up on it I'd write up at least a semi-serious proposal.
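
If anyone wants the flavor of that semi-serious proposal, here's a minimal sketch of the accounting piece (hosts and addresses are placeholders): stamp each batch with a sequence number in a header, and have the consumer look for gaps.

  # Minimal sketch of SMTP-as-batch-transport with sequence accounting.
  # Hosts and addresses are placeholders; the audit trail is a header.
  import smtplib
  from email.message import EmailMessage

  def send_batch(seq, payload, mta="localhost"):
      msg = EmailMessage()
      msg["From"] = "producer@pipeline.example"
      msg["To"] = "consumer@pipeline.example"
      msg["Subject"] = "batch %d" % seq
      msg["X-Batch-Seq"] = str(seq)          # accounting lives here
      msg.set_content(payload)
      with smtplib.SMTP(mta) as smtp:
          smtp.send_message(msg)

  def audit(messages):
      """messages: received EmailMessages -> sorted list of missing seqs."""
      seen = sorted(int(m["X-Batch-Seq"]) for m in messages)
      if not seen:
          return []
      return sorted(set(range(seen[0], seen[-1] + 1)) - set(seen))

The consumer side pops a mailbox (POP3/IMAP) and runs audit() over what arrived; anything in the gap list gets re-requested.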

In the middle are Mule, Rabbit, Kafka, ZMQ.

At the other end is UDP multicast, and I still use that for VM hosts or where I can trust the switch.
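
For that end, the receiver is a few lines of stock socket code (group and port here are arbitrary examples from the administratively scoped range):

  # Bare-bones UDP multicast receiver; group/port are arbitrary examples
  # from the administratively scoped range (RFC 2365). Trusted LAN only.
  import socket
  import struct

  GROUP, PORT = "239.1.2.3", 5005

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sock.bind(("", PORT))
  # Join the group on all interfaces (0.0.0.0 == INADDR_ANY).
  mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
  sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

  while True:
      data, addr = sock.recvfrom(65535)
      print(addr, data[:64])

The sending side is just sendto() to the same group/port, with IP_MULTICAST_TTL set low enough to keep traffic on the local segment.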


Not intended as an endorsement, but Access / FileMaker / 4D were all great solutions for low-contention, low-user count low / no code CRUD apps.


DNS is utilized for many things besides looking up web sites (and consequently ads on web sites). DNS was used for many things etcd was invented to solve, and still is by many. Adblocking is kidstuff; the bearded, motorcycle riding, gun-shooting, jumping out of airplanes and hanging off of rocks jackals use a "DNS firewall" (just posted this the other day): https://www.dnsrpz.info/ and Dnstap for application-level DNS logging.
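
For the curious: RPZ policy rides in an ordinary DNS zone that the resolver treats as rewrite rules. A made-up minimal example (the names are placeholders; the CNAME . and rpz-passthru. idioms are standard RPZ), parsed with dnspython just to show it's plain zone data:

  # RPZ policy is ordinary zone data; this made-up zone blocks a domain
  # and its subdomains, with one exempted name. Parsed with dnspython
  # (dns.zone) purely to show there's no magic involved.
  import textwrap

  import dns.zone

  RPZ_TEXT = textwrap.dedent("""\
      $TTL 300
      @                  IN SOA localhost. hostmaster.localhost. 1 3600 600 86400 300
      @                  IN NS  localhost.
      ; CNAME . rewrites answers for the blocked name to NXDOMAIN:
      ads.example.com    IN CNAME .
      *.ads.example.com  IN CNAME .
      ; rpz-passthru. exempts a name the wildcard would otherwise catch:
      ok.ads.example.com IN CNAME rpz-passthru.
      """)

  zone = dns.zone.from_text(RPZ_TEXT, origin="rpz.local.")
  for name, node in zone.nodes.items():
      print(node.to_text(name))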


Absolutely — DNS goes way beyond just resolving websites. It’s been used for service discovery and coordination long before tools like etcd came along, and still is in many systems today. Adblocking is one use case, but DNS firewalls (like RPZ) and logging frameworks such as Dnstap show how powerful DNS can be at the infrastructure level. Thanks for sharing the link — it’s a great reminder that benchmarking speed is only one piece of the bigger DNS picture.


Things built with asyncio and dnspython are close to my heart. ;-)

So, my impression from the doc (and a quick browse of the code) is that this is a tool for monitoring DNS caching / recursing resolver (RD) performance, not authoritative. If performance really matters to you, you should be running your own resolver(s). [0] Granted, you will quickly realize that some outfits running auth servers seem to understand that they're dependent on caching / recursing resolvers, and some are oblivious. Large public servers (recursing and auth) tend to "spread the pain", so most people don't feel the bumps; but when they fall over, they fall over large. They bring principles (and thereby create "vulnerabilities") at odds with what the DNS was architected for, and throw the mitigation onto other operators, including operators who never accepted those self-anointed principles to begin with.

I have a hard time understanding how DNS is adding 300ms to every one of your API requests... unless DNS is both the API and transport, or you're using negative TTLs /s.

Good doc, by the way.

[0] Actual resolvers. Not forwarders.
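
To put numbers on it, a quick sketch with dnspython (the resolver address is a placeholder; point it at your own) comparing cold vs. warm cache for the same name:

  # Cold vs. warm cache timing against one caching resolver.
  # The nameserver address is a placeholder; substitute your own.
  import time

  import dns.resolver

  res = dns.resolver.Resolver(configure=False)
  res.nameservers = ["192.0.2.53"]           # hypothetical local resolver

  def timed(name):
      t0 = time.monotonic()
      answer = res.resolve(name, "A")
      return (time.monotonic() - t0) * 1000.0, answer.rrset.ttl

  cold_ms, ttl = timed("example.com")
  warm_ms, _ = timed("example.com")          # answered from cache this time
  print("cold %.1f ms, warm %.1f ms, ttl %ds" % (cold_ms, warm_ms, ttl))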


Thanks for the thoughtful read — and yes, the tool is focused on caching / recursing resolver performance, not authoritative. The asyncio + dnspython stack makes it easy to script and monitor those behaviors over time. Running your own resolver is definitely the gold standard if performance and control really matter, but benchmarking public ones helps surface the trade‑offs users face in practice. The 300ms example was more about illustrating how ads and systemic factors can dwarf raw resolver speed, not a claim about per‑request DNS overhead. Appreciate the detailed perspective and glad the doc came across clearly.

