
The client will be renamed and moved to the EFF soon: https://letsencrypt.org/2016/03/09/le-client-new-home.html


Any plans to make the official client based on Go? I wasn't too happy about having to download a bunch of Python stuff on my server just to get an SSL cert. Reminded me of the days of yore when you had to fiddle with Perl modules just to run basic scripts.


There are a bunch of great unofficial clients, several written in Go (I like acmetool): https://www.metachris.com/2015/12/comparison-of-10-acme-lets...


Unfortunately, lots of Go code on GitHub has significant oversights, this library included. I remember reporting a DoS bug in a different Go ACME library that was identical to the one I found in acmetool in less than 60 seconds:

https://github.com/hlandau/acme/blob/master/acmeapi/ocsp.go#...

In case it is not obvious: anyone at a privileged point on the network can feed resb enough data that the program runs out of memory and crashes. ioutil.ReadAll really needs a big warning in its docs, because I have seen this pattern far too often.


Yeah, serious +1 to this. I'm amazed by the usage of ioutil.ReadAll in popular Go libraries and tools.


I'm not sure if, conceptually, the term "official client" is still appropriate after the project is moved to the EFF and the rename is done. It's basically a move to ensure a vibrant client ecosystem which encourages users to pick the client that best fits their needs.

If you're looking for a Go client, lego[1] is awesome.

[1]: https://github.com/xenolf/lego


It seems like it'd be more accurate to call it a "reference implementation" than an official client.

At the very least, it'd be nice if people stopped referring to other implementations of the LE spec as "unofficial clients".


Doesn't have to be written in Go to produce a single statically-linked binary.


Why go? I can't think of any reason to prefer Go over any other language for this project. I'd prefer a security-oriented program to be written in a safer language, actually.


Could you explain what you mean by "safer"? If you mean memory safe or free from undefined behavior, Go is exactly that. If you mean a language that has excellent native crypto libraries rather than wrappers over OpenSSL, Go provides that too. To answer your specific question, Go makes more sense for an LE client compared to Python because you'd simply need to run a binary instead of fiddling around with the source on your server.


Go is not memory safe. It admits null pointers and a whole host of incompleteness bugs.

Rust and Haskell are both examples of safe languages. These languages admit very few bug classes. Both also compile to binaries; I'm not sure why you're touting that as a feature of Go.

That's not even a useful feature in this case. Running a python program is just as easy as running a binary from the user's perspective.


“simply running a binary”:

  - download letsencrypt-auto  
  - ./letsencrypt-auto
“fiddling around with the source”:

  - download letsencrypt.tar.gz
  - extract letsencrypt.tar.gz
  - ./letsencrypt-auto
(and there might even be a package available!)


This is assuming you have the correct version of python installed, right? What if you were on CentOS and the python version is 2.6? Or on Alpine and you simply didn't have python at all?


What if you download a binary and a dynamic library is missing? (This is what happens with GHC on Alpine. Binaries will expect glibc. Packages fix this problem, but they also fix the Python problems.)

Another example: I recently wanted to run IDA on Arch Linux, but there are no 32-bit Qt5 packages. Compiling Qt5 is more painful than installing Python.


I don't know about an "official" client, but Caddy has support built in, so you could probably extract the portions you need from that?

https://caddyserver.com/


The unofficial clients work well enough nowadays.



I'm curious: why doesn't webroot work for your setup?


A dynamic script is handling all requests, so there is no "webroot" directory where you can put stuff for them to appear under /


You could quite easily add a location /.well-known rule to the server, right?


Oh yeah, I didn't know about this option. A static dir for /.well-known is a much more elegant solution than shutting down nginx. Thanks for the pointer.
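For reference, the location rule suggested above might look roughly like this (a sketch; the domain, alias path, and upstream port are placeholders, and the directory just needs to be wherever your ACME client writes its challenge files):

```nginx
server {
    listen 80;
    server_name example.com;  # hypothetical domain

    # Serve ACME challenge files from a static directory, so certificate
    # issuance works without shutting down nginx or touching the dynamic app.
    location /.well-known/acme-challenge/ {
        alias /var/www/letsencrypt/.well-known/acme-challenge/;
    }

    # Everything else still goes to the dynamic script.
    location / {
        proxy_pass http://127.0.0.1:8000;  # hypothetical upstream
    }
}
```

With this in place you can use the webroot authenticator against /var/www/letsencrypt while the site keeps serving traffic.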



It's worth pointing out that StartCom claims the verification email address in the article in question was only accepted because it matched the WHOIS record of said domain[1], which is permitted according to CA/B Baseline Requirements and their CPS. Definitely not a good choice by the researcher either way.

[1]: https://www.startssl.com/NewsDetails?date=20160322


That is really confusing; wouldn't that imply one party is outright lying?

Blog claims: In the last step of the validation process is where you can modify the email address and replace it with any regular email address

StartSSL claims: The email address used to verify the domain name is listed in the WHOIS records..


It could be that, or the researcher really didn't think to try it with an address that's completely unrelated to the domain.

Personally, I find it hard to believe that an audited CA has a system where the web frontend can make a decision as to what would be an allowed verification email address. I'm leaning towards believing their story, and would assume they have a backend system which is responsible for checking that input (and which happened to be out of sync with the options offered by the frontend). That's a reasonable explanation for the complete lack of validation in their frontend code.

Then again, some CAs have had a terrible track record, so I guess we'll never know for sure now that they fixed the issue (whatever the issue actually was).


Honestly, while StartSSL's front-end is awful, their practices always seem to far exceed those of other CAs, especially around verification.

I don't enjoy the website, or the verification procedure, but ultimately I generally trust them pretty highly - they operate in a way which shows me they care about security.


We've been generating certificates in direct violation of their TOS for over six years. Every few years they pretend to find out, we do another blatantly non-compliant verification, fork over 120 dollars, and they let us keep printing certificates.


Went through the same headaches for a few years. Their atrociously unfriendly and unintuitive interface finally just pushed me over to using a cheap alternative that is much less painful (RapidSSL in our case).


Until Let's Encrypt came around, we heavily depended on wildcard certificates (several domains with 100+ customer-facing subdomains), so any other alternative would have been massively more expensive.

But with LE allowing scripted certificate generation, we're just moving to that instead.


How do you plan to get around LE's 5 subdomains per 7 day period limit? You can only get about 60 subdomains in theory, and that only if you stagger the registrations out carefully over three months and never make any mistakes.


If appropriate for your use case, you can get your domain added to the public suffix list [1]. Then the restrictions no longer apply.

This has side-effects with browsers and cookies so you wouldn't want to do it on a domain without understanding the impact.

[1]: https://github.com/publicsuffix/list/blob/master/public_suff...

P.S. In the unlikely event that someone involved is reading this, PLEASE make this a DNS attribute that is set on the top-level domain instead, in a TXT record perhaps. It's silly that we have to have a globally coordinated and distributed list for this data.


> P.S. In the unlikely event that someone involved is reading this, PLEASE make this a DNS attribute that is set on the top-level domain instead, in a TXT record perhaps. It's silly that we have to have a globally coordinated and distributed list for this data.

The Dbound WG[1] was working on this, but sadly didn't seem to get anywhere.

[1]: https://tools.ietf.org/wg/dbound/


You can get up to 100 SANs on one certificate, which will only increase your rate limit counter by one.

Works nicely if you have a (mostly) fixed list of subdomains, but becomes hard or impossible to manage if subdomains are dynamic.


You can get 100 subdomains per certificate, you're only limited to 5 certificates per domain per week.

That's largely sufficient for our use case, but we're still staggering renewal for certificates on our main domains. So far it's no problem because renewal is fully automated and we're leaving buffers.
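To make the arithmetic in this subthread concrete, using the limits as stated above (5 certificates per domain per week, up to 100 SANs per certificate), the weekly upper bound on distinct names is the product of the two, i.e. 500 rather than the ~60 estimated earlier:

```go
package main

import "fmt"

// maxNamesPerWeek returns an upper bound on distinct hostnames coverable per
// week for one registered domain: certsPerWeek certificates, each carrying
// up to sansPerCert subject alternative names.
func maxNamesPerWeek(certsPerWeek, sansPerCert int) int {
	return certsPerWeek * sansPerCert
}

func main() {
	// Limits as described in this thread; the real values may change over time.
	fmt.Println(maxNamesPerWeek(5, 100)) // prints 500
}
```

In practice you'd stay well under this bound to leave room for renewals and mistakes, as the comment above describes.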


That's interesting. If you don't mind, what rules are you violating?


You're only ever allowed to use your account with the person you validated with. You cannot share an account, e.g. between employees of the same company; if you want to transfer an account to another person over your vacation, you have to create a new account, re-validate, and recreate all certificates on the new account.

Obviously, we said f*k that and just registered everything on the CEO's name and have him do the phone verification.


Oh right, yeah. I'm pretty sure every company violates that particular rule :P


Every company that does this loses all their auditing capabilities on the systems that use these accounts. Not good.


> the vulnerability was reported and fixed

If this was not really a vuln, then they wouldn't have told the researcher it was fixed.

OTOH maybe it wasn't exploitable because the backend checks it, but they still considered it a vulnerability and fixed the ability to put a bad email in at all.


Sure, it's a vulnerability in the sense that they didn't want to allow WHOIS-based verification from their web frontend (for whatever reason. Maybe it wasn't even a conscious decision and they just forgot to include it during some rewrite.)

It's not a vulnerability in the sense that it's not allowed in their CPS or by CA/B.


Reminds me of Symantec's "we really like CT too, and have spontaneously decided to use it for all our certs" after Google ripped them a new one and demanded they do it: https://security.googleblog.com/2015/10/sustaining-digital-c...


Is it possible to do the back end connection to S3 securely instead of over insecure HTTP?


Their latest controller (v4?) removed the Flash requirement for both video (playback) and main (maps) IIRC. Do they still have leftover areas that require Flash?


v4 is a long way from being a stable release


Ah, I was under the impression that they've officially moved it out of the 'beta' naming


wtf are you complaining for? You don't like flash and they are removing it.


UBNT has a habit of not finishing what they start. AirControl 2 is not finished and they're already talking about AirControl 3. AirVision has been rewritten 3 times in 3 years.

For all I know they're doing UniFi 5 in pure Flash. I wouldn't be surprised.


All my frequent porn sites support HTML5 video now


Great! I hope we can stop using 'SSL' (or even 'SSL/TLS') everywhere and start using just 'TLS' now. :)


I doubt that will happen anytime soon. Myself and many engineers I know still use the term "SSL" even when we mean TLS (i.e., 1.0+) exclusively, in part because some people don't "know" TLS (but they do know what SSL is).

Old habits die hard, I guess.


Indeed. I used to do the same, but I've since switched to only using TLS and taking a couple of seconds to say 'a newer version of SSL' to any confused looking faces in the discussion. After a while of doing this I don't see nearly as many confused faces and others have even taken up the procedure! :)


Heck, my cellular provider was tracking the HTTP connections of their customers by default to sell profiles to marketing companies. (You could opt out, but I believe the fine print was something along the lines of 'we won't sell your information anymore but we will still collect it for later'). Other Internet providers have offered a cheaper plan to opt-in to traffic snooping for marketing profile building/selling. Tor exit nodes and my residential ISPs are on a similar level of distrust for me.

I've since started using a 'whole premises VPN' (all traffic is routed through an encrypted tunnel to a VPS). I have more confidence in my VPS provider than I do in my residential ISPs; at least the VPS company probably won't use my connection data for marketing profiles.


Hi dang, this is unrelated (feel free to remove this reply), but it seems that my post at https://news.ycombinator.com/item?id=9774037 has disappeared? Could you look into it?


Comments by new accounts are autokilled if they trip certain filters that we have in place because of past abuse by spammers and trolls. I've unkilled your comment and marked your account legit so it won't happen again.

Soon, everyone will be able to contribute to unkilling good comments, so fixing these will no longer block on moderator attention. In the future, though, everyone: please send issues like this to [email protected]. It's too random whether we'll see it on HN itself! Not to mention off topic.

