Hacker News | past | comments | ask | show | jobs | submit | vicbun's comments

We at https://simulify.com are building a UI for React-based static sites. Please join our alpha.


Looks like Instagram took down his account. So no more free food.


In TFA, he says that he created other accounts like this. So if his OPSEC is good enough, he's still getting that free food. This was arguably just a sacrifice to get some publicity. Indeed, I'm guessing that he's selling these bots. Or at least, offering consulting services.


I agree that clients normally don't know the full spec, because nobody actually works on one.

One way I find very helpful in mitigating this is getting a detailed prototype of the system made, which goes a long way toward clarifying the spec for the client and helping me understand the scope.

I found transitioning to non-hourly project billing extremely satisfying.


I found transitioning to full project billing extremely satisfying.

The main downsides are unclear project requirements, scope creep, and "you said / I said" disputes.

One thing I've found very helpful in mitigating this is getting a prototype of the UI built, with some notes about what isn't visible.

Getting an "idea to prototype" step done is not expensive in the context of any 25k+ project, and the gains are huge on both sides.


> non-secured HTTP still makes sense for example during development

To make it easier & cleaner in development, it's best to make yourself a Certificate Authority (CA) and issue local certificates. This avoids browser warnings and makes for a better development experience.

A post from my colleague on how to do it https://reactpaths.com/how-to-get-https-working-in-localhost...
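For reference, here's a minimal sketch of that setup using plain openssl. All file names and the subject names are invented for the example; the SAN line is needed because modern browsers ignore the CN:

```shell
# 1. Create a local CA (self-signed): ca.key must be kept private,
#    ca.crt is what you import into your browser/OS trust store
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=Local Dev CA"

# 2. Create a key and signing request for the dev site
openssl req -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.csr \
  -subj "/CN=localhost"

# 3. Sign the request with the CA, adding the Subject Alternative Name
printf "subjectAltName=DNS:localhost,IP:127.0.0.1\n" > san.ext
openssl x509 -req -in localhost.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out localhost.crt -days 365 -extfile san.ext
```

Point your dev server at localhost.key / localhost.crt and the browser shows no warnings once ca.crt is trusted.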


> To make it easier & cleaner in development, it's best to make yourself a Certificate Authority (CA) and issue local certificates

Sure. That's what I do on sites where I absolutely need SSL even during development. But it's still a hassle and, I would argue, an overall security downgrade.

If somehow that CA certificate gets out (committed to GitHub by accident, extracted via malware, etc.), I become MitM-able unless I manually check every certificate of every site I visit.

I know this isn't a very likely attack vector, but if I had to target a developer friend, it's something I'd look into, because it has the potential to work across machines, whereas traditional malware-based attacks only work on machines I can compromise.

If you have unique CAs per development machine, this would of course be moot, but at that point, you might as well just run with a self-signed cert to begin with and allow the exception once.


Certificates are public documents, it isn't a problem if everybody sees them. Committing the certificate is, at worst, a minor inconvenience for someone who doesn't want your certificate because they don't trust your CA.

A _Private Key_ is the thing that needs to be kept private. On Microsoft Windows it's common to bundle this together with a corresponding Certificate in a single file, often with the filename extension .PFX. I assume it's more "convenient" this way; shame about the security.
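To make the distinction concrete, here's a sketch of what a .PFX actually bundles (all file names invented, throwaway self-signed cert just for the demo):

```shell
# Make a throwaway key + self-signed certificate for demonstration
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.crt -days 1 -subj "/CN=demo"

# A PKCS#12 (.pfx) file contains the certificate AND the private key,
# which is why a leaked .pfx is a real compromise, not just a leaked cert
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -out demo.pfx -passout pass:changeit

# The certificate on its own is a public document, safe to share
openssl x509 -in demo.crt -noout -subject
```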

If you do this a LOT (maybe you're a team of fifty developers at a software company that mostly builds web apps but has a strict policy of not using "real" certificates or wildcards) you should mimic the hierarchy required of "real" public Certificate Authorities:

Mint a "root CA". You'll end up with a certificate and a private key, let's call them root-C and root-P. Every developer (and QA and so on) system needs to trust root-C, you can use Group Policy or similar tools to arrange that.

Use the root CA to issue a sub-CA, the "issuing CA". Write _constraints_ in the certificate used to issue this sub-CA. A constraint is a rule about what things the Subject is an authority about in the opinion of the Issuer. So for example if you set a DNS constraint of just example.com, certs for example.org, or google.com from this Issuing CA don't work. If you later acquire example.org, you can "just" replace the Issuing CA. No updates to employee systems. Think about the Constraints you write, so that they block "mistakes" like MITMing google.com, but not anything you expect to actually need. You can also constrain the types of certs issued. Will you only issue TLS server certs? Then write a constraint saying so.

Now, lock away root-P. A serious CA keeps it in a special hardware device to protect it, but a USB key in a locked desk drawer is a good first step.

The Issuing CA credentials will probably need to stay live on someone's computer, they're valuable of course, but because of the Constraints they now aren't an unlimited black hole in your security.
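The two-tier hierarchy described above can be sketched with openssl like so. The names (root-P, root-C, "Example Corp", example.com) follow the comment's own labels but are otherwise invented; the key part is the nameConstraints extension on the issuing CA:

```shell
# Root CA: root-P.key gets locked away, root-C.crt is what every
# developer machine trusts (e.g. pushed out via Group Policy)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout root-P.key -out root-C.crt -days 3650 \
  -subj "/CN=Example Corp Root CA"

# Issuing CA: key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout issuing.key -out issuing.csr \
  -subj "/CN=Example Corp Issuing CA"

# Constraints: it IS a CA, but only for example.com and below,
# and it cannot mint further sub-CAs (pathlen:0)
cat > issuing.ext <<'EOF'
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:example.com
EOF

# Root signs the constrained issuing CA
openssl x509 -req -in issuing.csr \
  -CA root-C.crt -CAkey root-P.key -CAcreateserial \
  -out issuing.crt -days 1825 -extfile issuing.ext
```

With this in place, a leaf cert for google.com signed by issuing.key fails validation on any client that enforces name constraints, which is exactly the "not an unlimited black hole" property.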


I agree with everything you say.

But are you telling me that each developer should do this for their personal webpage project just because Chrome might decide to put HTTP-only pages behind a modal dialog?


Individual developers can often test with localhost (use the numeric 127.0.0.1 or ::1 for best browser compatibility). All the browser vendors agree that although http://127.0.0.1/ isn't HTTPS, it is a secure context: all the fancy new secure-only features of web pages should work (if any don't, that's a browser bug), there is no security interstitial, and so on.

If you're a bit bigger, so that you need to put webpage projects on a separate machine, so maybe QA can try them, or you can run a demo, you can and should set aside one or more chunks of public DNS hierarchy for the test systems. For example, a cert for *.test.example.com can allow a number of test systems at Example Inc to do HTTPS. If you're happy to pay a commercial CA, they'll mint such a certificate for a fair price upon proof that you control example.com. If you want to use Let's Encrypt, they'll need you to have records for test.example.com in public DNS so they can run proof of control automatically. Provisioning the private key and copies of (periodically updated) certificates to the test boxes is not exactly a difficult problem for a working test environment.

Only if you (perhaps out of paranoia) refuse to do that, should you try what I suggested with a private CA hierarchy for web pages.

If you're building something quite different (not TLS, or genuinely not on the Internet, although I'm not sure what the point of things other than the Internet is), then you can and should roll your own PKI. But that's a huge undertaking; you should already have hired at least one person who knows MORE about this than me, or you're already in deep trouble.


20paths.com - REMOTE - Node.js

20paths is a four-person collaboration. All of us are remote, from the US to India and the UK, and we sync on Slack & GitHub.

We are aiming to change how SPAs provide tech help/support to their users. Check out our working example on Gmail.

Looking for a Node.js person who is passionate about changing how things are done. You will be #5 on our team, in at ground zero as we polish our app.

Also looking for a growth hacker who finds the service interesting & exciting.

Drop me a mail at [email protected] and let's chat on Skype.

Check us out at https://20paths.com

