Welcome to our AMA on Klaus' recently published research paper on Fairness!
Klaus' paper examines the concept of fairness in blockchain-based trading systems and whether they can guarantee that order book access is handled transparently and fairly, i.e., defending against frontrunning, rushing, denial of service, or other attempts to manipulate the sequencing of trading instructions.
His research explores adding a module to blockchains that supports a concept of "relative fairness", so that competing transactions are sequenced under a known and understood protocol rather than at a validator's discretion. This keeps transaction speed high whilst maintaining a good approximation of fairness.
Have a quick read, or check out Rebecca's summary, then let's pick Klaus' big brain apart, one question at a time!
The AMA will go live on Tuesday 7th July 2020, 17.00-18.00 BST
I was wondering whether you've done any modelling / simulations for "Probabilistic Relative Block Fairness" (and for the more advanced definitions / protocols) of how many requests would end up scheduled unfairly under reasonable assumptions on, say, latency, implying a relatively low r_max.
Hello and welcome to our AMA on @klaus' "Fairness" paper!
This AMA will run from now until ~6pm BST. Klaus will be present and answering your questions as they come in. I will make an announcement 5 mins before the end as we begin to wrap up the discussion!
I'm looking forward to seeing what everyone comes up with and how Klaus meets any challenges to his research!
Hi David,
there are no simulations so far; that would be a good next step. The results I would expect depend quite a bit on the network architecture: if there is high variance in message delivery times and a lot of messages, the probability of unfair schedules for the probabilistic protocol goes up. The other open question (which is hard to model) is the level of network control an attacker has; from what I see now, an attacker would need a fair amount of network control, but not an impossible level. On a well-behaved network with low-ability attackers, I would say the probability that the probabilistic protocol creates some unfairness is very low, so it would be acceptable for some markets (also, not every unfair ordering creates a monetary loss). It would be good, though, if we could quantify this to give marketmakers more guidance on which fairness version they need for a particular market.
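The kind of simulation discussed above could start very simply. The sketch below is a toy Monte Carlo estimate (not the paper's protocol): two transactions are sent a small gap apart, each validator observes them with Gaussian-distributed latency, and we count how often a majority of validators sees them in the wrong order. All parameter names and values here are invented for illustration.

```python
import random

def simulate(n_validators=10, gap=0.005, latency_mean=0.05,
             latency_sd=0.02, trials=10_000, seed=42):
    """Toy estimate of how often a majority of validators receives
    transaction b (sent `gap` seconds after a) before transaction a."""
    rng = random.Random(seed)
    flipped = 0
    for _ in range(trials):
        b_first = 0
        for _ in range(n_validators):
            # independent per-validator delivery delays, truncated at 0
            arrive_a = max(0.0, rng.gauss(latency_mean, latency_sd))
            arrive_b = gap + max(0.0, rng.gauss(latency_mean, latency_sd))
            if arrive_b < arrive_a:
                b_first += 1
        # a "flip" when most validators observe the later send first
        if b_first > n_validators // 2:
            flipped += 1
    return flipped / trials
```

As the answer suggests, the interesting knobs are the latency variance and the message rate: with zero variance the flip probability drops to zero, and it rises as delivery times get noisier relative to the send gap.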
Hi Klaus,
I was curious how you account for the increase in latency necessary to ensure fairness? This seems especially key for high-frequency trading systems, no?
Best,
Seb
Hi Sebastian,
there are two places where latency comes in. One is that the protocol adds a message roundtrip to collect votes from the validators (plus some additional time until enough validators are aware of the transaction, which can be up to another roundtrip depending on the architecture). The other factor is that some messages cannot be scheduled right away, as the protocol does not yet have enough information to determine whether it can do so fairly. This added latency depends on the fairness definition we use (for example, the timed one has a very low delay, while the probabilistic one depends on the maximum block size one wants to allow) and on how badly behaved the network is (which would be another point for simulations). We mitigate this in the protocols by having a breakpoint where something happens one way or the other, which is configurable by the system, so there is a configurable upper bound on message delay.
One way we reduce the impact is that fairness is only required for messages that affect the same market, i.e., some markets can choose that their transactions don't need to be fair (these are then completely unaffected), and messages from different markets don't need to wait for each other (this also means every marketmaker can set their own parameters for maximum latency).
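The per-market scoping described above can be sketched as a simple partitioning step. This is only a hypothetical illustration of the idea (fairness here is reduced to a within-market sort by arrival time, which is a stand-in for the actual voting protocol); all names are made up.

```python
from collections import defaultdict

def schedule(txs, fair_markets):
    """txs: list of (market, tx_id, arrival_time) tuples.
    Transactions in fairness-enabled markets are ordered by arrival
    within their own market only; opted-out markets pass through
    untouched, and markets never wait for each other."""
    by_market = defaultdict(list)
    for tx in txs:
        by_market[tx[0]].append(tx)
    out = []
    for market, group in by_market.items():
        if market in fair_markets:
            # fairness constraint applies only within this market
            group = sorted(group, key=lambda t: t[2])
        out.extend(group)
    return out
```

The point of the partition is exactly what the answer states: an unfair ordering between transactions of different markets is irrelevant, so cross-market messages never add to each other's latency.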
The hardness comes from two factors. One is that it's quite difficult to find a good definition of what fair actually means in the first place (which is why we have three to choose from, and we are trying to add more in discussions with traders). The second is that some of the more intuitive definitions of fairness can be proven to be impossible. The first one I came up with (if all good validators see a before b, then a must end up before b) cannot be enforced, because the fair schedule then depends on who is good and who isn't, which is generally not known to the system. So, as with many things on the consensus layer, this is a wild ride of circumventing impossibilities and finding the right place to compromise.
I think the worst thing I've seen was some game on Ethereum where the last person to bid before a timer expired would get a large pool of money, and someone did go and spend all the gas necessary to lock out all the other bidders at the end (don't nail me on the details, I would need to look up the references again). That would definitely have been avoided. I don't have numbers for order book fraud; can anyone else jump in here?
Hi Ti8*,
it's in between. The protocol is good enough that it could be implemented now (i.e., if we use the optimised protocol that's the last one in the paper, I don't expect things to get substantially more performant, and it is already optimised to integrate into a blockchain with minimal integration pain). There are a couple of smaller things we've come up with since the writeup, though, so what gets implemented in the end is likely a version 2 of the protocol.
there are probably more, but there are definitely orders I can make that don't compete with yours (e.g., if you're selling while I'm buying, and I'm offering less than you wanted). In those cases, if I unfairly get in front of you, it shouldn't affect you at all.
@klaus Is there any consideration for punishing/removing adversarial validators from the wendy-set, or is that left to the corresponding consensus layer?
Would it be possible to design a protocol that reshuffles the order of transactions in the block (so the consensus layer would effectively only determine the constituents of the block, but not their order) in a way that's not a priori determinable by participants? Would this require an oracle to provide the entropy, or could it somehow be internal to the protocol? Also, would such an approach fit into the fairness taxonomy that you've put forward in your paper?
I do realise that random block ordering may not be a desirable characteristic (especially in financial applications); it's more of an abstract question.