In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
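As a rough sketch of what that allowlist could look like with a proxy like Squid (the comment doesn't name a proxy; the domains, methods, and path below are made-up examples, not a drop-in config):

    # /etc/squid/squid.conf -- hypothetical allowlist
    acl allowed_dst dstdomain .ubuntu.com .pypi.org
    acl allowed_methods method GET HEAD CONNECT
    http_access allow allowed_dst allowed_methods
    http_access deny all

Filtering by full URL and method inside TLS additionally needs ssl_bump, which is where the trusted CA certificate mentioned above comes in.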
The key itself appears to have no validity period; the validity period applies only to the certificate made for the key. Maybe you could create a CSR for the key/identity and then sign it with your own CA (or self-sign with openssl) for whatever validity period you like. Then `sc_auth import-ctk-certificate`.
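Assuming you can get a CSR for the key at all (that part may need extra tooling for Secure Enclave-backed keys), the CA side is plain openssl; a rough sketch with made-up filenames:

    # one-time: create your own CA
    openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt \
        -days 3650 -subj "/CN=My Local CA"
    # sign the identity's CSR with whatever validity you like
    openssl x509 -req -in identity.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 3650 -out identity.crt
    # then import: sc_auth import-ctk-certificate (see its man page for arguments)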
The npm team is, frankly, a bunch of idiots for saying that. It has been obvious for TEN YEARS that the bar for publishing npm packages is far too low. That’s what made npm what it is, but it’s no longer needed. They should put on their big boy pants.
Welcome to the web side. Everything’s bonkers. Hard-earned software engineering truths get tossed out, because hey, wtf, I’ll just do some stuff and yippee. Feels like everyone’s stuck at year three of software engineering, and every three years the people get swapped out.
That's because they are being "replaced", in a sense!
When an industry doubles every 5 years, as web dev did for a long time, it follows almost by definition that the median developer has less than 5 years of experience: at any moment, more than half the field arrived within the last doubling period. Sure, the old guard eventually get to 10 or 15 years of experience, but they're simply outnumbered by an exponentially growing influx of total neophytes.
Hence the childish attitude and behaviour with everything to do with JavaScript.
I never saw it as a problem for nginx to just serve web content and let certbot handle cert renewals. Whatever happened to doing one thing well and making it composable? Fat tools that try to do everything inevitably suck at some important part.
Having distinct tools for serving content and handling certs is not a problem, and nothing about that changes here. Besides, the module won't cover every need anyway.
BTW, certbot is itself rather a "fat tool" compared to other ACME clients like lego. I've had bad experiences with certbot in the past because it tried to do too much automatically and was hard to diagnose – though I think it has been rewritten since then, since it no longer depends on Python's zope.
It's kind of annoying to set up. Last I remember, certbot could try to configure things for you automatically, but unless you had the most default setup it wouldn't work. Just having nginx do everything for you seems like a better solution.
Certbot can just as easily work with a directory you have nginx set up to point .well-known/acme-challenge/ to. No automatic configuration magic needed.
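Something like this, for example (domain and webroot path are placeholders):

    # nginx: serve challenge files from a fixed directory
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # certbot: drop challenge files there, never touch nginx config
    certbot certonly --webroot -w /var/www/letsencrypt -d example.com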
I wonder about the same thing. I've come to the conclusion that it's driven largely by the management-ideal definition of devops: developers who end up doing ops without sufficient knowledge or experience to do it well.
Nginx with certbot is annoying to set up, especially with the HTTP challenge, mostly because of a circular dependency: you need nginx up to clear the challenge, and once certbot gets a cert you need to reload nginx.
I switched to Lego because it has out of the box support for my domain registrar so I could use DNS instead of HTTP challenge. It’s also a single go binary which is much simpler to install than certbot.
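For reference, a DNS-challenge run with lego looks roughly like this (the provider, token variable, and domain are placeholders for whatever your registrar needs):

    CLOUDFLARE_DNS_API_TOKEN=... \
    lego --email you@example.com --dns cloudflare --domains example.com run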
There is no circular dependency since the HTTP challenge uses unencrypted port 80 and not HTTPS. Reloading nginx config after cert updates is also not a problem as nginx can do that without any downtime.
There’s a dependency in the nginx config: you have to specify where your certs are. So you need a working config before you start nginx, then you have to get the certs and update the config with the cert/key locations before you can HUP nginx. This is extremely brittle, especially on a new box or in a setup where you regularly bring up clean nodes – that’s when all sorts of unexpected things happen. It’s much less brittle when you already have a cert and a working config and are just renewing the certificate, but not all setups are like that. I can’t even confidently say that most are.
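One way around the bootstrap problem is a two-phase config (paths and domain below are placeholders): start nginx with an HTTP-only server block that can answer the challenge, obtain the first cert, then enable the TLS block and reload:

    # phase 1: HTTP-only, enough to clear the ACME challenge
    server {
        listen 80;
        server_name example.com;
        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;
        }
    }

    # phase 2: added once the first cert exists, then `nginx -s reload`
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }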
I was excited to try this, since I'm a bit tired of selecting the input manually multiple times per day. Unfortunately, connecting AirPods automatically switches the input to them, regardless of the previously selected input device, whether it's an aggregate device or not.
Hm, let me double check on this tomorrow! It works with my Sony headphones (which also cause macOS to go into bad-audio mode when you e.g. launch Shazam), but I’m not sure I have tried the same with AirPods. Unless I did something else to lock it to that device and I’ve forgotten… anyway, I’ll check on my work machine tomorrow.
OK, so the defusedxml.lxml submodule is deprecated and one should use the other APIs from defusedxml instead. That does not mean that defusedxml in its entirety is useless.
If you’re trying to use it for lxml, then yes: it was only ever experimental and has been deprecated (it also failed to define some interfaces correctly, causing issues).
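For what it's worth, the non-deprecated path is a drop-in for the stdlib ElementTree API; a minimal sketch:

    # defusedxml.ElementTree mirrors xml.etree.ElementTree with safer defaults
    from defusedxml.ElementTree import fromstring

    root = fromstring("<root><item>hi</item></root>")
    print(root.find("item").text)  # -> hi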