Cute. Here's the short version: "So for the sale of an item costing X, the buyer would put up 2X (1X for payment and 1X for deposit) and the seller 1X so a total of 3X would be locked in escrow."
If the buyer approves the transaction stuck in escrow, they get their 1x deposit back, and the seller gets their 1x deposit plus the 1x payment.
If the seller gives a refund, everybody gets back what they put in.
If neither party does anything, the buyer is out 2x and the seller is out 1x. This encourages the buyer to pay up and the seller to deliver.
This is bolted onto Bitcoin, but it could be bolted onto any payment system - Visa, PayPal/eBay, Venmo, Snapcash, MMORPG in-game currencies, etc. It would probably be more useful on Venmo than on Bitcoin.
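To make the three outcomes concrete, here's a minimal sketch of the payoff table (Python; the function and names are my own illustration, not code from the project):

  # Hypothetical sketch of the 2X/1X escrow described above.
  # x is the item price; the buyer locks 2x (payment + deposit),
  # the seller locks 1x (deposit), so 3x sits in escrow.

  def escrow_outcome(x: float, action: str) -> dict:
      """Net funds released to each party for a given resolution."""
      if action == "buyer_approves":
          # Buyer recovers the 1x deposit; seller gets deposit + payment.
          return {"buyer": 1 * x, "seller": 2 * x}
      if action == "seller_refunds":
          # Everybody gets back exactly what they put in.
          return {"buyer": 2 * x, "seller": 1 * x}
      if action == "timeout":
          # Nobody acts: all 3x stays locked forever.
          return {"buyer": 0.0, "seller": 0.0}
      raise ValueError(f"unknown action: {action}")

  for action in ("buyer_approves", "seller_refunds", "timeout"):
      print(action, escrow_outcome(1.0, action))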
This method of escrow comes up fairly regularly in the Bitcoin world, with each person ignoring the reasons the previous version stopped being promoted.
Say I am a malicious seller offering a brand new Xbox for 1BTC. You, the purchaser, start the order and put up your 2BTC; I put up my 1BTC and send you a used Xbox worth only 0.75BTC. Now you as the buyer have two choices: you can either (a) refuse to confirm, in which case you'll be out 1.25BTC, or (b) accept your loss, confirm the order, and be out 0.25BTC. Very few people will choose option (a), to the point that it would almost certainly be profitable for the malicious seller.
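Spelling out that arithmetic (my own illustration, same 2X/1X rules as above):

  x = 1.0            # agreed price of the Xbox
  item_value = 0.75  # what the malicious seller actually ships

  # (a) Refuse to confirm: your 2 BTC stay locked; you keep the 0.75 item.
  loss_refuse = 2 * x - item_value           # 1.25 BTC

  # (b) Confirm anyway: you recover the 1 BTC deposit, but paid 1 BTC for 0.75.
  loss_confirm = 2 * x - 1 * x - item_value  # 0.25 BTC

  print(loss_refuse, loss_confirm)

Since (b) always loses less, a rational buyer confirms, and the malicious seller pockets the 0.25BTC difference.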
The fallback is reputation systems, which require centralization both to avoid gaming and to have enough data points to be useful.
Even if I put up 2BTC and get nothing, I still have an incentive to approve it to get my 1BTC back.
The profitability for the malicious seller is a function of the probability that a given buyer in the system will choose to punish the seller rather than recoup some of their money.
Also, is there a time limit? For example, what if the seller ships and the buyer receives the item, but is lazy / not incentivized by the deposit?
The challenge here is that this is an attempt to create a "Justice Algorithm".
I think a reputation-based system is probably the solution here, and it doesn't need to be centralized. You could have anonymous reputation nodes, which would become valuable in the system over time as they build trust. Trust seems to be the seller's responsibility to manufacture, by delivering goods as promised. It would be less useful for one-off transactions with individuals.
You'd also want to calculate some trust score based on volume and value, and you'd face the non-trivial challenge of preventing manufactured trust via buying/selling to yourself.
In any distributed trust system you can really only calculate based on volume and value, and both of those things are easily manipulated, since there is no way to identify individuals.
You also then have the problem that the nodes themselves require an additional layer of trust on the network, to determine which nodes are to be trusted.
You can't even do things like lower the trust ratings of people who trust people who end up being dishonest because you don't know that they were dishonest in their trust of them.
Reputation systems don't work with anonymity. They rarely even work without it.
What might work well would be if the amount put up for escrow were determined in part by the reputation of the entity doing the trade. The entities could still be pseudonymous (it's got an identity, but no one knows who is behind it).
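One possible shape for that rule (purely illustrative; the linear curve, the floor, and the numbers are all made up):

  # Hypothetical: scale the required escrow deposit by reputation.
  # A new pseudonym posts the full 1X deposit; an established one posts
  # less, with a floor so nobody ever escrows nothing.

  def required_deposit(price: float, reputation: float, floor: float = 0.25) -> float:
      """reputation in [0, 1]; returns the deposit this entity must lock."""
      return price * max(floor, 1.0 - reputation)

  print(required_deposit(1.0, 0.0))  # unknown entity: 1.0
  print(required_deposit(1.0, 0.9))  # established entity: 0.25 (floored)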
There is nothing there to stop sock-puppet reputation gaming, though, which is going to be the main attack vector for any distributed anonymous/pseudonymous reputation network.
If you can create new fake identities cheaply, reputation systems can be spammed. Google tried that in 2011, weighting inputs from social networks heavily. They were spammed, badly. A whole ecosystem emerged for spamming social search. Not only were there "Google +1" suppliers and "Bulk Likes" for sale; services offering fake phone numbers for phone-number verification appeared as well.
Reputation systems do not need centralization to have enough data points; what they need is traction. That can very well be achieved with a decentralized system.
But most importantly, the idea of making "trust" graphs using a weak form of transitivity needs to die. Reputation networks should encode signed statements of facts, from which trust can be estimated by participants using a variety of approaches - in particular, non-local approaches which aren't fooled by cliques.
Any ideas for trust approaches that cannot be gamed by cliques in a distributed network? Anything I can think of would break the functionality of the network/remove all usefulness from it or open up other attack vectors.
You have to differentiate the actual network that holds the data (possibly as a DHT) from the network implicitly described by the statements.
The way you avoid cliques is by staying within your own clique. That is, requiring that the entities you extend trust to are themselves trusted by people close to you, who in turn trust each other.
But it's not just cliques. Say you have a system like eBay, based on buyer feedback. eBay can be fooled by playing a long game, and selling a lot of cheap items for a while before selling a big item.
If instead of eBay you have a distributed store of transaction reports, you can constantly refine the trust algorithm. For instance, you might pay attention to the price of the items sold, or to the trustworthiness of the reviewers themselves, etc.
The more data goes in, the better your inference. Truth is coherent and entangled. The more data there is, the harder it is to convincingly lie (but the harder it is to remain anonymous as well).
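As a toy example of the kind of refinement meant here (the weights and data are invented for illustration):

  # Toy trust estimate over a shared log of transaction reports.
  # Each report: (seller, price, positive?, reviewer_trust in [0, 1]).
  # Weighting by price and reviewer trust blunts the eBay long game.
  from collections import defaultdict

  reports = [
      ("alice", 0.05, True, 0.9),   # many cheap positive sales...
      ("alice", 0.05, True, 0.9),
      ("alice", 5.00, False, 0.8),  # ...then one big failure
  ]

  def trust(reports):
      score, weight = defaultdict(float), defaultdict(float)
      for seller, price, positive, rev_trust in reports:
          w = price * rev_trust  # big, well-vouched reports count more
          score[seller] += w if positive else -w
          weight[seller] += w
      return {s: score[s] / weight[s] for s in score}

  print(trust(reports))  # alice lands far below neutral despite 2 of 3 positives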
If you stay within your clique you severely limit the number of possible merchants, and if you trust them through personal relationships or foaf, why go through the hassle of an escrow system like this?
>For instance, you might pay attention to the price of the items sold, or to the trustworthiness of the reviewers themselves, etc.
Ok so how about I just sell a bunch of high value "items" between my own accounts? Now my account has participated in a number of valid high value transactions. Since it is distributed there is no way to ensure the transaction is real.
>trustworthiness of the reviewers themselves
Who build their trustworthiness the same way. This is easy to game; look at PageRank for an example of a similar rating system.
>The more data there is, the harder it is to convincingly lie
This isn't actually true, though, because you don't verify the data in any way (nor can you), so an attacker can craft the input to tell any story they want.
> If you stay within your clique you severely limit the number of possible merchants, and if you trust them through personal relationships or foaf, why go through the hassle of an escrow system like this?
Because as you increase the radius of your search in a small-world network, you reach a very large number of real people while being minimally impacted by Sybil cliques. Check out the work on SybilGuard or SybilInfer, for instance.
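Roughly, the intuition behind those papers, on a toy graph (this is not their actual algorithm, just the shape of the argument):

  # Honest nodes form a well-connected region; a Sybil clique, however
  # large, hangs off few "attack edges", so a short-radius search from an
  # honest node reaches many real people and few Sybils.
  from collections import deque

  edges = {
      "me": ["a", "b"], "a": ["me", "b", "c"], "b": ["me", "a", "c"],
      "c": ["a", "b", "d"], "d": ["c", "s1"],        # single attack edge: d-s1
      "s1": ["d", "s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"],
  }

  def within_radius(graph, start, radius):
      seen, frontier = {start}, deque([(start, 0)])
      while frontier:
          node, dist = frontier.popleft()
          if dist == radius:
              continue
          for nbr in graph[node]:
              if nbr not in seen:
                  seen.add(nbr)
                  frontier.append((nbr, dist + 1))
      return seen

  print(within_radius(edges, "me", 2))  # {'me', 'a', 'b', 'c'}: no Sybils yet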
>Ok so how about I just sell a bunch of high value "items" between my own accounts? Now my account has participated in a number of valid high value transactions. Since it is distributed there is no way to ensure the transaction is real.
Ah, but you're looking for transactions reported by people who are themselves close to your circles of trust.
> Who build their trustworthiness the same way. This is easy to game; look at PageRank for an example of a similar rating system.
A random surfer model is a naive model. With a better probabilistic model, you become more resilient to gaming.
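For reference, the "random surfer" model being called naive here is classic PageRank; a minimal sketch shows how a self-voting clique games it:

  # Minimal random-surfer PageRank. One spammed link into a clique that
  # only votes for itself siphons off most of the rank - the gaming
  # problem mentioned above.

  def pagerank(links, damping=0.85, iters=50):
      nodes = list(links)
      rank = {n: 1.0 / len(nodes) for n in nodes}
      for _ in range(iters):
          new = {n: (1.0 - damping) / len(nodes) for n in nodes}
          for n, outs in links.items():
              share = damping * rank[n] / len(outs)
              for m in outs:
                  new[m] += share
          rank = new
      return rank

  links = {
      "honest1": ["honest2", "spam1"],  # one bought link into the clique
      "honest2": ["honest1"],
      "spam1": ["spam2"], "spam2": ["spam3"], "spam3": ["spam1"],
  }
  print(pagerank(links))  # the spam cycle ends up with most of the rank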
> This isn't actually true, though, because you don't verify the data in any way (nor can you), so an attacker can craft the input to tell any story they want.
This isn't arbitrary data; it's data connected to you in some way.
Not sure that is actually counter-evidence. It says in most cases people will accept unfair splits as long as they aren't too unfair (which is why, in my example, I send a less valuable replacement instead of no item at all).
> If neither party does anything, the buyer is out 2x and the seller is out 1x. This encourages the buyer to pay up and the seller to deliver.
If the seller doesn't mail the goods and cannot be contacted, surely the buyer will have a choice to either (a) leave the payment in escrow, getting nothing back (but giving nothing to the seller), or (b) approve the transaction, getting 1X back (but giving 2X to the fraudulent seller).
Wouldn't buyers choose (b) because getting back 1X is better than nothing?
And if you're going to release payments even when you're the victim of fraud, you might as well not bother with the escrow?
This is the situation that no one who releases a system based on this method (and I can think of a few over the past year) thinks of. They assume the buyer will always act in their social rather than their financial best interest in the event of a scam.
In short: The protocol is not even remotely thought through. It is incomplete and cannot work the way it is described.
I wonder if it is theoretically impossible to create a protocol that punishes cheating when the cheating occurs outside the system (when the physical goods change hands).
And even if it isn't, you would also somehow have to account for the fact that things get lost or damaged in the mail, and assign blame to one party at the protocol level.
You simply don't. If those are your last 2X bitcoins, you probably should not be trading on an anonymous p2p marketplace in the first place. Ask your relatives and friends to help you with whatever problem you are trying to solve.
> This is bolted onto Bitcoin, but it could be bolted onto any payment system - Visa, PayPal/eBay, Venmo, Snapcash, MMORPG in-game currencies, etc. It would probably be more useful on Venmo than on Bitcoin.
It could be bolted onto any payment system that has scriptable multisig transactions. Visa, Paypal, and friends would not work without a trusted third party escrow.
Sorry, it does not make any sense to "bolt onto Visa/PayPal/Venmo" a two-party escrow like that. The whole point of two-party escrow is to solve the problem where you cannot (or don't want to) rely on any third party. E.g. PayPal can block/revert funds whenever they want to.
Full two-party escrow ensures that only these two parties can decide what to do with the funds and that no one else can decide for them. You must use Bitcoin to ensure that. If you still have some third guy who decides, you don't need such a setup in the first place.
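In Bitcoin terms this is typically built as a 2-of-2 multisig output; the release rule it enforces boils down to this (an illustrative stand-in, not real transaction code):

  # The 3X escrow output can only be spent by a transaction carrying BOTH
  # signatures - neither party, and no third party, can move the funds alone.

  def can_release(sig_buyer_valid: bool, sig_seller_valid: bool) -> bool:
      """Stand-in for a 2-of-2 OP_CHECKMULTISIG over the two parties' keys."""
      return sig_buyer_valid and sig_seller_valid

  print(can_release(True, False))  # one party alone: funds stay locked
  print(can_release(True, True))   # both sign a settlement: funds move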
The idea is that everyone locks up more value than they can potentially win by cheating. So each side is egoistically interested in getting their money back, and they have to find a compromise. Trolling people at your own expense is possible, but I expect it to happen infrequently enough not to be a big concern (just as vandalism does not happen everywhere all the time, but only somewhere, some of the time; plus in this case vandalism is quite pricey).
Many people discuss the extortion scenario, so let me reply to all of you at once. Extortion is limited when both parties are anonymous or know little about each other. If I try to play dirty with you, how do I know that you are in a desperate position and will submit? Maybe you will turn out to be the better player in this game of poker? Or maybe you will simply send me a message saying "no, please pay as we agreed initially, period" and we won't be able to discuss it any longer.
This Nash equilibrium approach is particularly great for automated agents. E.g. two apps paying each other for a measurable service, where the owners of both apps know perfectly well how the other app will behave - there's no one to extort. Either you follow the contract, or you don't. So you can only troll someone by losing your own funds, all alone in a sad silence.
Please do not run this on Tor alone. The p2p aspect will definitely help, as it will be harder to locate a single server with traffic analysis, but it isn't enough. Run this on Bitmessage (or a fork thereof), which broadcasts messages to all peers, making traffic analysis futile for an attacker.
This runs on Bitmessage over Tor. Tor is just another layer, to make it more difficult for a merchant to recognize your location.
Think of it this way:
1) Bitmessage ensures that the FBI has to catch merchants or buyers individually, instead of catching them all at one single server as with SR.
2) Tor makes it harder for them to locate a party by tracing where a message came from.
Tor is not perfect, but Bitmessage is not better - it is worse than Tor in terms of tracing the origin of a message (because BM does not use onion routing with layers of encryption).
How can Bitmessage possibly be worse than Tor? All that Bitmessage indicates is that you're taking part in the network, not what you're actually saying or who you're talking to.
When Alice and Bob communicate over Bitmessage, Eve can only see garbage being propagated through the network.
But when Alice sends a message to Bob (who is an undercover TLA agent), Bob can monitor traffic and statistically figure out that a certain IP address sends messages addressed to him earlier than other nodes do. So he goes and busts the owner of that IP address.
Onion routing attempts to make this discovery harder for Bob, because Bob will always receive messages from different IPs that do not belong to the original sender. Those IPs have little value to him because they belong to relaying nodes, not to initial senders. Bob would have to bust too many IPs before finding the sender, which could be too expensive.
Onion routing begins to fail when Bob runs a lot of his own nodes in the Tor network, so that for a large enough number of messages he can trace the route back to the sender's physical address.
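For readers who haven't met the term: onion routing wraps the message in one encryption layer per relay, so each relay strips only its own layer and learns only the next hop. A toy sketch (requires the cryptography package; real Tor negotiates circuit keys rather than pre-sharing them like this):

  # Toy onion: encrypt for the last relay first, wrapping outward, so the
  # first relay peels the outermost layer and sees only ciphertext inside.
  from cryptography.fernet import Fernet

  relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

  def wrap(message: bytes, keys) -> bytes:
      for key in reversed(keys):
          message = Fernet(key).encrypt(message)
      return message

  onion = wrap(b"meet at the usual place", relay_keys)
  for i, key in enumerate(relay_keys):  # each relay strips one layer
      onion = Fernet(key).decrypt(onion)
      print(f"relay {i} peeled its layer")
  print(onion)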
To achieve real anonymity when chatting with strangers (e.g. black-market merchants), one needs to use a combination of these factors:
1) Bitmessage or the like, to avoid eavesdropping.
2) Tor, to make it harder for the recipient to find the location of the sender.
3) High-latency mixing, to make statistical analysis less efficient: every relaying node (both Tor and Bitmessage) should delay rebroadcasting messages randomly (see the sketch below).
4) Infrequent communication, so it takes time for the recipient to gather data. (This is a variant of #3.)
5) Change physical locations frequently and randomly, and rarely reuse them. E.g. connect from various free wi-fi points in cafes, parks, shops, Apple Stores, etc.
6) Never reuse an identity between the people you communicate with. Merchants must have a separate Bitmessage and Bitcoin address per invoice (once an item is sold, post the next item under a different identity). Buyers must use a different Bitmessage and Bitcoin address for each purchase. This way the amount of information available to an adversary is strictly limited to a single deal. And that deal will be limited to one unique location and a few exchanged messages that hopefully won't be enough to locate the person. And even if that happens, the person couldn't be charged with more than one sin.
If you communicate with people you trust (friends, family members), you only need #1 and that would be enough.
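The random delay in point 3 can be as simple as this on each relaying node (a toy sketch; the parameters are invented):

  # Hold each message a random time before rebroadcasting, so arrival
  # order at observers stops correlating with send order.
  import random, time

  def relay(message: bytes, broadcast, max_delay: float = 5.0) -> None:
      time.sleep(random.uniform(0.0, max_delay))  # decorrelate timing
      broadcast(message)

  relay(b"hi", broadcast=lambda m: print("rebroadcast", m), max_delay=0.5)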
Very nice! And as a voluntaryist myself, I love the voluntary.net branding! It would be nice if they made these tools cross-platform, with Linux and Windows builds.
You should mention which blocking service/blacklist is being consulted, so that if it's an error, it can be corrected. (If it's not an error, moving the content will just get another domain/IP blocked... so it could make more sense for you to circumvent your minders locally.)
It uses BitcoinJ for the Bitcoin parts and a Bitmessage library; those are multi-platform already. Anyone could fork the code and build an alternative UI using these components for their platform.