Good on you for having a list of known weaknesses, but here's the one that really makes this completely unsafe:
The server dictates what's run on the page, and thus can access the plaintext data in any way it sees fit. The trust model is fundamentally broken in client-side crypto of this nature.
Edit with two more thoughts: 1) Even if you trust the person running the service, how much do you trust the other users (who may be using stored XSS to compromise your data)? How much do you trust the hosting service behind it, if there is one? 2) Am I the only one that finds it massively irresponsible to not have a huge flashing "DO NOT TRUST THIS UNTIL IT'S BATTLE HARDENED" sign over it? This goes for just about every project of this nature.
Edit with a final thought: Most of the time, we think in terms of "relative goodness". A good car is better than a bad car, but a bad car is still better than no car at all. This logic completely breaks down when it comes to crypto. Simply put, bad crypto (and bad implementations), when released into the world, puts lives at risk. This should be taken seriously.
It's one thing to build a project for learning (and please, please do!) but cover every friggin' surface you can with disclaimers.
Do you believe there is zero value to client-side JavaScript crypto?
There's a spectrum between "unencrypted webpage loaded over cafe WiFi" and "locally compiled open-source application". You don't always need the latter.
EDIT: Say you're using a web app that encrypts your data on the client. All the executable assets (HTML, JS, CSS) are static and hosted on a CDN over HTTPS. The app stores your encrypted data on another server. Even if that server is compromised, it's still impossible to access your data without also compromising the CDN (assuming a correct implementation of the crypto and no XSS vulnerabilities, of course).
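To make that concrete, here's a minimal sketch of the client-side step (assuming a browser with the Web Crypto API; the function name and key handling are just illustrative):

    // Encrypt the note locally, so only ciphertext ever leaves the page.
    // The ciphertext and IV go to the storage server; the key stays on the
    // client (e.g. exported into the URL fragment, which browsers don't send).
    async function encryptNote(plaintext) {
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv: iv },
        key,
        new TextEncoder().encode(plaintext));
      return { key, iv, ciphertext };
    }

The point being: if the static assets served from the CDN are what they claim to be, the storage server never sees anything it can read.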
EDIT2: It would be interesting if you could sign a bundle of static assets that the browser could verify. Maybe as part of the HTML5 app cache. Like a lightweight version of a Chrome app. Add on a restrictive Content Security Policy and it sounds pretty good to me.
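The "restrictive Content Security Policy" part is already expressible as a response header; something along these lines (directives purely illustrative) would block inline scripts and anything not served from the app's own origin:

    Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; connect-src 'self'; img-src 'self'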
Client-side crypto in JS has value if and only if the following conditions are all met:
1) The application receives no HTML, CSS, JavaScript, or code interpreted by plugins from a server in the same context (e.g. not an iframe) as the crypto code.
2) All assets that are run in the context of the crypto code are fixed and signed. No unsigned JavaScript code (generated at runtime, passed from a server or input to the app, whatever) is permitted to run in any way in that context.
3) The code as signed is completely transparent. It must be possible to audit the code as it would be run, and be able to 'quarantine' updates.
Without all of these things being met, client-side crypto in the browser is completely untrustable. These conditions have never been met.
If you're that paranoid, you shouldn't be using a hosted service at all. There could be a guy right outside your house intercepting your cable line and filtering all traffic to and from cryptonote.org and routing it to his own instance of the app. That doesn't mean that cryptonote is responsible for that happening.
This is everything that is wrong with cryptography and privacy. Simply put, if your data is important enough to use cryptography on it, why would you want to use crypto that is broken by design?
The whole point of a well-designed, well-implemented cryptosystem is that even if there is a dude sitting outside my house, he's not getting a single bit of my data. This is just as true for government secrets as it is for sending a recipe to my mother. This service is broken, as all such services are. Bad crypto is never, ever, ever excusable.
No, he can't. Not without a valid SSL certificate for cryptonote.org. Sure, there are problems with CAs, but it would nevertheless be very difficult to obtain such a certificate.
He can put his root CA into the browser: if he's already intercepting the line, he can tamper with the browser when it's first downloaded and installed, and perhaps with the next update. (Are automatic browser updates encrypted?)
But this is beside the point of in-browser crypto. The interesting thing is, you need a reliable delivery platform for your crypto code, which implies you have a TLS connection. So either a third party can break your TLS and modify the crypto code, or your connection is secure in the first place. The scenario you are then dealing with is that the server is potentially malicious, but a malicious server just serves broken crypto.
If you are the recipient of the link, SSL can't be stripped: the link points directly at an https URL, so the browser never makes the plain-HTTP request an attacker could downgrade.
Even if you are an author, assuming you have visited the site over SSL at least once, it can't be stripped on future visits either, since the site seems to use HSTS.
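For reference, HSTS is just a response header sent over HTTPS that tells the browser to automatically upgrade any plain-HTTP request to that site for the stated period, e.g.:

    Strict-Transport-Security: max-age=31536000; includeSubDomains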
Thanks, and that is absolutely true. That's why it's open source (https://github.com/alainmeier/cryptonote): I want people to host their own version if they don't trust me.
But if you need to trust the server (and you do), then the client-side encryption is 100% pointless. You might as well encrypt on the server with safe, sane, battle-hardened code.
At the end of the day, XSS, rogue hosts, etc can own this even if the person "running the show" doesn't want it to happen.
Hm, yes, added a warning message, good idea. As for your other thoughts, I completely agree. It's a work-in-progress and I figured the best way to figure out what's wrong is to learn from others.
Edit: Also added a link to that discussion next to Nadim's post.
I made something similar (except the one-time view part) not too long ago, just to experiment with storing the base64-encoded message in the URL. It also has an option to add a key, which uses a JavaScript implementation of Blowfish.
Since the message is stored within the URL, there's no backend needed, though that means the message needs to be short: most modern browsers only support up to about 2,000 characters in a URL, and the messages can generate a long base64 string rather quickly.
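Roughly, the idea looks like this (a sketch, assuming the fragment is used so the message text never reaches any server; the function names are made up):

    // Build a shareable link with the message embedded in the URL fragment.
    // The fragment (everything after '#') is never sent in HTTP requests.
    function makeLink(message) {
      // btoa handles ASCII; a real version would UTF-8-encode first and
      // encrypt with the optional Blowfish key before base64-encoding.
      return location.origin + location.pathname + "#" + btoa(message);
    }

    // Recover the message on the receiving side.
    function readMessage() {
      return atob(location.hash.slice(1));
    }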
For learning purposes, why not? It taught me a lot more about crypto than I would have learned by simply reading about it. Of course, it wasn't meant for real-world use. :)
> How comfortable did you feel with AES when you were done with the project? Could you try to put into a sentence or two what you (i) didn't know about using AES before you did the project, and (ii) knew at the end of the project?
Frankly, I didn't know much about AES (or encryption in general) before I started working on the project. My only direct encounter with encryption was ROT13 in IRC channels. I felt that without doing at least some "difficult" task myself, the application would have been a lot easier, and there would have been nothing new in it for me to learn. So I read up on how AES works and created a small implementation myself.
Afterwards, I had a bit more understanding of all the moving parts of AES, and I gained a huge appreciation for the algorithm and for security in general. Now that I think about it, I guess you don't really have to implement it yourself in order to understand it, but I did it anyway, because, well, I felt like it.
It was just a toy project tbh. I certainly don't consider myself an expert on the algorithm, and will still trust tried and tested library implementations over my own. But now I know what's going on under the hood.
This is interesting. Thanks for playing along with me.
Can I ask another question? Now that you've implemented the algorithm, would you feel more comfortable employing AES encryption on a future project, or less comfortable?
Is your question specific to AES itself, or just my own implementation? If it's the latter, I'm hesitant to say more comfortable. It's not something you can just implement in a couple days from memory. There's always a chance something might go wrong.
If you mean will I be more comfortable in using the algorithm itself, then definitely yes. At least for the moment. IIRC, there have been some partially successful attacks against AES, but nothing that has managed to break it fully.
How comfortable did you feel with AES when you were done with the project? Could you try to put into a sentence or two what you (i) didn't know about using AES before you did the project, and (ii) knew at the end of the project?
This whole thread is like a textbook example of why people like me (breakers) have itchy trigger fingers when it comes to people building cryptography features.
I'm glad if this has been a good learning experience for you (may I suggest another?†), but real secure systems aren't, to steal a phrase from Richard Stallman, "debugged into existence": they start from a foundation of a secure, well-considered design and are verified piece by piece as the system is assembled.
Why is the password stored in plain text "for now"? What is so hard about running bcrypt or PBKDF2 against the password before storing it in the database?
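For reference, a minimal sketch of the sort of thing being asked for (Node-style, using the built-in crypto module; the parameters and storage format are just illustrative):

    // Derive a salted hash from the password before anything touches the database.
    const crypto = require("crypto");

    function hashPassword(password) {
      const salt = crypto.randomBytes(16);
      // PBKDF2-SHA-256 with a large iteration count; store salt + derived key,
      // never the plaintext password.
      const derived = crypto.pbkdf2Sync(password, salt, 100000, 32, "sha256");
      return salt.toString("hex") + ":" + derived.toString("hex");
    }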
There are a lot of single-view self-destruct sites, but I wanted to make a new one because I wanted to let people host their own instead of relying on the site provider.