things only can't be fair for some definition of fairness. Relative fairness refers to something else, as the term fairness is starting to be overdefined in the consensus literature. Traditionally (i.e., since ca. 2000), fairness means that any request that is sent to the validators ends up on the chain within a limited time/number of blocks. This doesn't say anything about the relationship of transactions with respect to each other, and is a bit like censorship resilience with performance guarantees. To distinguish from that, we use the term relative fairness to show that we mean fairness of messages relative to each other, rather than giving an absolute time guarantee.
Hi Daigan,
it is designed to be blockchain agnostic, so it could be picked up by other projects (it could even run on Ethereum as a pre-protocol, so that Casper the friendly finality gadget finally gets reunited with his girlfriend). Which does mean we have to keep improving to stay ahead with Vega
good question, and it depends on the consensus layer. If we use Tendermint, the fairness protocol and the chain are very similar in terms of assumptions, so it would make a lot of sense to use the same punishment (and staking scheme etc); given that there are different risk models, this could be done differently though, and (if we want to be extreme) every market maker could choose their own fairness validators and their own punishment/staking/whatever scheme. If we run on top of Ethereum, the models would be very different, so we'd need to bring our own punishments (just like Casper does).
for the orders that are in the same block, this is fairly doable - we do know how to generate shared randomness cheaply, so we could generate a random number after the block is scheduled and use that to sort the orders inside that block. Being somewhat randomized about which block a transaction ends up in is a bit trickier to do in a way that can't be manipulated, but should be possible (within reason).
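To illustrate the first part, here is a minimal sketch of deterministic in-block shuffling. It assumes the validators already agree on a shared random seed for the block (e.g., from a randomness beacon); the function names and seed encoding are illustrative, not Wendy's actual implementation:

```python
import hashlib
import random

def order_block(transactions, shared_seed: bytes):
    """Shuffle a block's transactions using a shared random seed.

    Every honest validator that holds the same seed computes the
    exact same ordering, so no single party controls where a
    transaction lands inside the block.
    """
    # Derive a deterministic RNG from the shared seed (hypothetical
    # encoding choice, for illustration only).
    seed = int.from_bytes(hashlib.sha256(shared_seed).digest(), "big")
    rng = random.Random(seed)
    shuffled = list(transactions)
    rng.shuffle(shuffled)
    return shuffled

txs = ["tx-a", "tx-b", "tx-c", "tx-d"]
# Two validators with the same seed agree on the order:
assert order_block(txs, b"beacon-round-42") == order_block(txs, b"beacon-round-42")
```

The key property is that the seed is only revealed after the block's contents are fixed, so nobody can position their order relative to others by predicting the shuffle.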
If I understand correctly this mitigates the ability to front-run but has high enough throughput. What sort of blocktime and trades per second can we expect? And what other benefits does implementing Wendy give?
I didn't say that! Until now I thought Wendy was based on an acronym or really complex math reference… and I've just put 2 and 2 together thanks to @klaus's context clues.
we haven't measured the impact yet, so a (hopefully) competent guess:
Throughput should be fairly close to that of the underlying blockchain protocol - Wendy does add some latency, but not a lot of load (there's some additional signature generation/verification and a few more multicasts/gossips, but I expect the impact to be limited). There are some scenarios where the impact is bigger - for example, if we use Ethereum 2 as the blockchain and want to use Wendy with 10000 validators, she will probably be a bottleneck. For most realistic settings, this wouldn't be the case though
Latency will be more impacted, as there are additional steps before a transaction can be processed as well as some potential waiting time (there was a question before on that point).
There is no test run yet to confirm that, but for Vega I do expect the blocktime to stay in the range where we count blocks per second rather than seconds per block (and in either case don't need to count far)
as for other benefits - Wendy eliminates a number of issues related to frontrunning, but also rushing (e.g., fairness during a market crash). The other benefit is the flexibility - Wendy works with every blockchain, can be combined with other widgets such as commit&reveal, and can differ between different markets - even on the same chain, different orderbooks can all be individually fair without blocking each other, and each can have its own parameters and fairness definitions. One thing we're working on now is to collect more requirements from traders and integrate them (for example, one market could require provably trusted clocks and then go by the time a transaction is sent by the trader rather than seen by the validator, and so on)
Thank you all for attending and challenging @klaus with a wealth of questions! My particular favourite was the clarity Klaus provided with respect to "Wendy" and the added latency she may cause to the underlying network. It turns out that this isn't such a large problem provided we have a suitable upper bound on message delay!
Thank you Klaus for taking the time to answer in such depth. I think we've all learned something new today and I look forward to future research from Klaus and the discussions that will inevitably follow!