Building a purpose-built blockchain for trading - ask me anything

The position resolution algorithm collects margin shortfall from traders with profitable open positions as a back-stop of last resort. Might the risk of this occurring deter market participants, and could this behaviour be exploited in any way by malicious actors to destabilise the system?


Link please! :smiley:

Hi NovL, first of all, we don't have pure proof of stake; we'll include other factors that ensure some spread of validators over various organizations. Also, the voting rights of a single validator will be capped (e.g., no matter how much stake you have, no validator can have more than 5% of voting rights). This could of course be circumvented by setting up a number of strawman validators, but it makes a takeover much harder. Thirdly, there will be some reconfigurability through on-chain votes - if the validators think the current parameters aren't good enough for the value they represent, they can change them quite rapidly. Some future work might also be to ensure there are at least some validators with a high level of security (e.g., using hardware security modules), but that's still a bit ahead.

Does that answer the question, or did you mean something different by 'cheap to attack'?

The exact staking model is still to be defined. On a protocol level, the plan is to go for generalized adversary structures, which allow us as much flexibility in the staking as possible - the example I gave in the talk was to divide the world into regions and require 2/3 of the stake of each of 2/3 of the regions, but this is not necessarily the final model (also, it is possible to change the model on the fly through an on-chain vote). On the more market-related side, the current plan is to cap voting rights initially somewhere around 5% to prevent whales, and to require some minimum stake of, say, 0.5% to avoid depending on some validator running on their Raspberry Pi Zero, nudging those people towards delegating instead (just as on the real stock market, IBM wouldn't want every shareholder to turn up at the shareholders' meeting, even though everyone has the right to do so).
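To make the example concrete, here is a toy sketch (entirely my own illustration, not the actual implementation) of the quorum rule described above: a coalition counts as a quorum if it holds at least 2/3 of the capped stake in at least 2/3 of the regions. The 5% cap and 0.5% minimum are the example numbers from above, not final parameters.

```python
# Toy quorum check for the example adversary structure above.
# All parameter values are illustrative, not the protocol's final numbers.
from collections import defaultdict

VOTE_CAP = 0.05    # no validator counts for more than 5% of total stake
MIN_STAKE = 0.005  # validators below 0.5% are expected to delegate instead

def effective_stake(validators: dict[str, tuple[str, float]]) -> dict[str, float]:
    """Map validator -> capped stake fraction; tiny validators are dropped."""
    total = sum(stake for _, stake in validators.values())
    return {v: min(stake / total, VOTE_CAP)
            for v, (_, stake) in validators.items()
            if stake / total >= MIN_STAKE}

def is_quorum(validators: dict[str, tuple[str, float]], coalition: set[str]) -> bool:
    """True if the coalition holds >= 2/3 of capped stake in >= 2/3 of regions."""
    weights = effective_stake(validators)
    region_total = defaultdict(float)
    region_coalition = defaultdict(float)
    for v, w in weights.items():
        region = validators[v][0]  # each validator is (region, stake)
        region_total[region] += w
        if v in coalition:
            region_coalition[region] += w
    regions_won = sum(1 for r in region_total
                      if region_coalition[r] >= 2 / 3 * region_total[r])
    return regions_won >= 2 / 3 * len(region_total)

# Six equal validators in three regions: covering EU and US fully is enough.
vals = {"a": ("EU", 10), "b": ("EU", 10), "c": ("US", 10),
        "d": ("US", 10), "e": ("APAC", 10), "f": ("APAC", 10)}
print(is_quorum(vals, {"a", "b", "c", "d"}))  # -> True
```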
Market orders will need some relative order. Unfortunately, we can prove that some desirable fairness properties aren't possible (e.g., 'if all honest validators see A before B, then A is scheduled before B'). This is still work in progress, but the best we can achieve (so far) is: if there is some point t in time such that all honest validators saw A before t and B after t, then A is scheduled before B. (Fingers crossed, as I haven't done the formal proof yet that it works.)
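For intuition only, here is one way (my own sketch, not the team's still-unproven algorithm) that the weaker property could be realised: order transactions by a trimmed median of the validators' local receive timestamps. With n >= 3f+1 reports and at most f Byzantine validators, dropping the f lowest and f highest reports keeps the result inside the honest range, so if every honest validator saw A before t and B after t, A sorts first.

```python
# Illustrative ordering by trimmed median of local receive timestamps.
import statistics

def trimmed_median(timestamps: list[float], f: int) -> float:
    """Median after discarding the f lowest and f highest reports, so up to
    f Byzantine reports cannot push the result outside the honest range."""
    assert len(timestamps) >= 3 * f + 1, "need n >= 3f+1 reports"
    trimmed = sorted(timestamps)[f:len(timestamps) - f]
    return statistics.median(trimmed)

def schedule(tx_reports: dict[str, list[float]], f: int) -> list[str]:
    """tx_reports maps a transaction id to every validator's receive time."""
    return sorted(tx_reports, key=lambda tx: trimmed_median(tx_reports[tx], f))

# 4 validators, at most 1 Byzantine: the outlier reports are discarded.
print(schedule({"A": [1.0, 1.1, 1.2, 9.9], "B": [2.0, 2.1, 2.2, 0.0]}, f=1))
# -> ['A', 'B']
```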

@SpenceTapi, great questions (I'm a trader by background, so this is also close to my heart). Our margin calculations are designed to ensure that this happens very, very rarely (the risk profile of the system is set to a very risk-averse level).

With regards to malicious actors, we are currently conducting research to investigate how these kinds of gaming attacks can happen and design mitigations accordingly. It will be an area of ongoing research and we expect to continuously improve the protocol in this area even after launch.


To some extent, yes, though there is a structural issue: to do the market matching (and determine prices), it needs to be more or less known what trades are going on (one could probably build a zero-knowledge protocol that hides the trades and only shows the end prices, but it might be relatively simple to work out what happened by watching the market prices). So my gut feeling for now is that it's possible within some limits, but the existing zero-knowledge features may need some adaptation.

Hi Noki, given that we are in a (mostly) asynchronous system, there is no real notion of 'in time'. If someone launches a major denial-of-service attack, every distributed protocol will at best slow down. What we can do is provide a protocol that goes for low latency wherever possible, and use protocol agility to adapt to changing environments (e.g., if you have 20 validators, it's fine for everyone to send messages to everyone; if you have 1,000, you might want to use a gossip protocol). There will be other precautions, e.g., traders can mark that a transaction shouldn't happen unless it's scheduled before a certain time - if the network is DoSed then, the damage is somewhat limited.
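As a small illustration of that last precaution, here is what such a guard could look like (the field name `valid_before` is my invention; the real transaction format may differ):

```python
# Sketch of a trader-chosen scheduling deadline: if a DoS attack delays
# scheduling past the deadline, the order is simply dropped.
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    payload: str
    valid_before: float  # scheduling deadline chosen by the trader

def apply_if_timely(order: Order, scheduled_at: float) -> bool:
    """Validators run this once consensus has fixed the scheduling time."""
    return scheduled_at < order.valid_before  # expired orders never execute

print(apply_if_timely(Order("alice", "BUY 10 @ 5.00", valid_before=100.0), 250.0))
# -> False: the network was slow, so the order is ignored, limiting the damage
```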

For your other question, I do see a lot of potential - we are aiming for financial transactions (and optimize for those), but low latency and high security can be useful for a lot of trusted services. (When we started this in 1998, we were thinking of replicating the DNS service, which is also something people want fast. The issue back then was that there never was a business model that would motivate the validators.)


Also, in addition to being low latency, we want to be highly adaptive (e.g., by being able to change the consensus protocol on the fly). I do see a lot of potential for this architecture in other applications as well.


I wouldn't necessarily pin it only on the number of nodes. If I have 10,000 nodes and they all run the exact same software and sit in the same basement, that is clearly insufficient. What we need is a decent spread over geographies (requiring, say, at least 4 jurisdictions to work together to achieve anything), different operators, and ideally different implementations. We also might not want too many validators - for voting-based protocols, a validator that has no idea how to keep their system secure is worse than no validator, and too many slow validators can slow everyone down. Right now, my feeling is that around 20 validators are sufficient if we take measures to promote diversity, but this will not be hard-coded (we'll add some economic nudge to get to those numbers), and the parameters can be changed through an on-chain vote if the traders/validators feel another number is better.


If anyone feels the question isn’t fully answered, please do drill deeper

My favourite “impossibility proof” is Gödel’s incompleteness theorems:

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
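Stated a little more formally (standard textbook phrasing, not part of the quote above):

```latex
\text{If } T \text{ is a consistent, effectively axiomatized theory containing arithmetic, then:}\\
\exists\, G_T:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T \qquad \text{(first theorem)}\\
T \nvdash \operatorname{Con}(T) \qquad \text{(second theorem)}
```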


Yep, that's a nice one. For anyone with too much time, the book Gödel, Escher, Bach is a great read around those things.


How are you going to prevent frontrunning?

There are a number of ways to implement a commit-and-reveal scheme, i.e., the transaction is encrypted (we might even encrypt the sender of the transaction, but that makes it harder to prevent spamming attacks). Only once the order of transactions is fixed (or, depending on the algorithm, mostly fixed) are the transactions decrypted and their content made available to the validators. If we use verifiable secret sharing or threshold encryption, the sender of the transaction is not required for the reveal, so they cannot block anything.
This might mean that it is possible to schedule an invalid transaction that will make it into the blockchain, but then fail the post-scheduling validity test and thus be ignored.
One thing we are working on is how to integrate this in a way that doesn't add latency (we only consider a transaction scheduled, in that sense, once it is known) - that's where the 'mostly' fixed comes in.
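For intuition, here is a toy sketch of that reveal step using plain Shamir secret sharing (my illustration only; the real protocol would use verifiable secret sharing or threshold encryption, and this toy XOR cipher is not secure for messages longer than the pad):

```python
# Toy commit-then-reveal: the trader encrypts an order under a one-time key
# and shares the key among validators; once the order's position in the
# schedule is fixed, any t validators can decrypt WITHOUT the trader.
import hashlib
import secrets

PRIME = 2**127 - 1  # field modulus for the toy scheme

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Trader side: encrypt the order and hand out key shares before ordering.
order = b"BUY 100 @ 42.00"
key = secrets.randbelow(PRIME)
pad = hashlib.sha256(key.to_bytes(16, "big")).digest()
ciphertext = bytes(a ^ b for a, b in zip(order.ljust(len(pad)), pad))
shares = split(key, n=7, t=5)  # 7 validators, any 5 can decrypt

# Validator side: fix the ciphertext's slot in the schedule, THEN pool any
# 5 shares to decrypt -- the trader cannot block or delay the reveal.
recovered = reconstruct(shares[:5])
pad2 = hashlib.sha256(recovered.to_bytes(16, "big")).digest()
print(bytes(a ^ b for a, b in zip(ciphertext, pad2)).rstrip())  # b'BUY 100 @ 42.00'
```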

Great, so in essence it will be dynamically self-correcting? Perhaps it could still be tricky in a market that combines a tendency towards natural illiquidity with low volume.


You are correct. The liquidity pricing mechanism will act as a discovery mechanism. It is possible that a market gets created but doesn't attract sufficient market-making stake, so fees are high, no one wants to trade, and it never attracts liquidity. Such a market will either naturally expire, or be forcibly wound down with all participants compensated.

But the viable markets will flourish by attracting more market-making stake, driving down fees.

Thanks @klaus! Are there any drawbacks to including lots of specific requirements on top?

Thanks everyone for the great questions. And indeed, nothing is free; there is additional work to implement specific measures (though we try to minimise the impact by keeping the consensus layer modular), and sometimes the requirements contradict each other - for example, protecting fairness comes at the price of more latency, which we are trying to reduce in the first place.


And the more requirements research adds, the more it might hurt our popularity with the people who have to implement them.

Thanks Tamlyn. Yes, I expect that research will gather even more momentum when you start collecting data in the wild after beta/launch.
