State of Cryptocurrencies Scaling

Written by yotamyachmoorgafni | Published 2018/02/11


Anyone getting even toe-deep into the world of cryptocurrencies understands there is a scalability crisis. 2017 made it evident to users of the major platforms, resulting in

  • High commissions when trading bitcoin
  • High gas prices on the Ethereum platform

The scalability issue, if inherent to the platform and concept, can be deadly to its future.

Most current altcoins that brag about low commissions only have them because they are not yet at scale; they face the same inherent obstacles bitcoin does, which makes the bragging laughable.

I will examine the areas where scaling issues appear in blockchain, and the solutions that were suggested to tackle them.

Throughput — The blockchain as proposed by Satoshi Nakamoto has two limitations that prevent it from scaling: a constant block size, and a difficulty adjustment that keeps the average time to find a valid block constant. The combination of the two at the current settings yields ~3 transactions per second (tps), far less than centralized alternatives such as PayPal or Visa achieve. What's worse, lowering the average block time would result in many forks and network splits, so it is not a viable solution, and removing the block size limit would enable DOS attacks, as very large invalid blocks could be sent to nodes and congest their network traffic. Thus protocol changes of some kind are required to tackle these issues, as the limitations in the original design seem to be there for a reason.
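For intuition, here is the back-of-the-envelope arithmetic behind the ~3 tps figure; the average transaction size is my assumption, not a protocol constant:

```python
# Rough throughput estimate for Bitcoin-like parameters.
BLOCK_SIZE_BYTES = 1_000_000     # ~1 MB block size limit
AVG_TX_SIZE_BYTES = 500          # typical transaction size (assumed)
BLOCK_INTERVAL_SEC = 600         # ~10 minutes between blocks

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SEC
print(f"~{tps:.1f} tps")         # ~3.3 tps under these assumptions
```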

The reason the tps problem is so important, besides limiting the viability of cryptocurrencies as full-blown replacements for existing online payment solutions at scale, is its effect on commissions: commissions rise when many people wish to transact, since demand is high while supply is limited for architectural reasons. The same goes for Ethereum storage prices: only so much data can be written to the blockchain per second, so costs are very high.

Validation latency — In bitcoin a transaction is never fully final. As the blockchain is defined as the longest chain of valid blocks, an extremely strong miner may appear at any point and rewrite history by creating a fork from an earlier block that becomes longer than the existing chain. This is very hard to do, though, so in bitcoin the accepted wait is 6 blocks, after which it is believed no one will be able to pull off such a trick. With block times averaging 10 minutes, a merchant receiving funds has to wait about an hour to know the payment is trustworthy enough. Centralized payments settle in a few seconds, and one can think of a bundle of use cases where this difference is critical.
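The 6-block rule of thumb comes from the double-spend analysis in Satoshi's white paper; here is a quick sketch of that calculation, where the attacker's share of total hash power q is a free parameter:

```python
import math

def double_spend_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the hash power
    catches up after z confirmations (Bitcoin white paper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)   # expected attacker progress while we wait z blocks
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

# With a 10% attacker, 6 confirmations push the odds below 0.1%.
for z in (0, 1, 2, 6):
    print(z, f"{double_spend_probability(0.10, z):.6f}")
```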

Security — To prevent Sybil attacks and double-spending attacks, blockchain uses Proof of Work (PoW). Leaving aside its environmental effects, PoW has a peculiar economic effect on the coin, which I will now analyze. One way or another, the overall energy costs miners take on themselves have to percolate to the coin's users. The way it actually happens is that when the coin's price rises due to high demand, the equilibrium of how many new miners join in to take advantage and generate more coins is skewed. Prices thus rise more than they would under normal circumstances, where many more miners would join the game, making the currency unstable as a medium of exchange. In other words, assuming rational miners, the following inequality holds:

C + P * S >= M

with:

  • C — total commissions paid to date
  • P — current price of bitcoin
  • S — current supply of bitcoins
  • M — total mining costs to date

This means one of the following undesired consequences -

  • Unstoppable coin deflation, leading it to be unable to serve as a reliable medium of exchange.
  • Commission prices spiking
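To make the inequality concrete, here is a toy check with entirely made-up numbers; none of these are real figures:

```python
# Toy check of the miner-rationality inequality C + P * S >= M.
C = 1.0e9      # total commissions paid to date, USD (made up)
P = 8_000.0    # current price of one bitcoin, USD (made up)
S = 17.0e6     # current supply of bitcoins (made up)
M = 5.0e9      # total mining costs to date, USD (made up)

holds = C + P * S >= M
print(holds)   # True: 1e9 + 8000 * 17e6 ~= 1.4e11 >= 5e9
```

If mining costs M keep accumulating, the inequality has to be maintained either by P rising or by C rising, which is exactly the pair of consequences listed above.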

I am planning a future blog post concerning coin price stability, where I will discuss this specific point more.

My point of view is that of the three, throughput is the worst and most concerning issue. Validation times are important for many use cases but not all, and currently most of the lag in bitcoin exchanges occurs because of the throughput issue; throughput creates a latency problem, while the reverse does not happen. The security scaling issue, though it is the one dealt with the most and the most hyped for some reason, is the least concerning. I don't see it having very consequential effects on the future of the technology or its current state, it has many suggested alternatives, and I will not discuss it here.

I will now go through the existing proposed solutions.

Bitcoin-NG

Instead of the hash riddle being solved and submitted together with the validated block, it is solved beforehand in a special key block, which entitles the solver to issue as many transactions as he likes in 'micro-blocks', until another solver appears on the chain.

Thus, bitcoin-ng brings two new ideas to the table -

  1. There is no block size limit. The miner can add as many transactions as he can process. Even if some throttling is imposed, the effective block size will be much higher than in today's blockchain.
  2. The miner can use the time he dedicates to Proof of Work to propagate blocks through the network. This saves much time and prevents unnecessary forks. (A minimal sketch of the resulting structure follows this list.)
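Here is that sketch, an assumed toy data model rather than code from the Bitcoin-NG paper:

```python
from dataclasses import dataclass

@dataclass
class KeyBlock:
    miner_pubkey: str        # the leader elected for this epoch
    pow_nonce: int           # solution to the hash riddle
    prev_hash: str

@dataclass
class MicroBlock:
    transactions: list       # no protocol-level size limit
    prev_hash: str
    leader_signature: str    # signed by the current leader, no PoW needed

chain = [KeyBlock("miner_A", 123456, prev_hash="...")]
# miner_A now streams micro-blocks without further PoW:
chain.append(MicroBlock(["tx1", "tx2"], prev_hash="...", leader_signature="sig_A"))
chain.append(MicroBlock(["tx3"], prev_hash="...", leader_signature="sig_A"))
# ...until another miner solves the next key block and takes over.
```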

Analyzing these two ideas, I would say the first seems like a good feature but depends on the second; otherwise, as mentioned before, it would be equivalent to forking bitcoin to remove the block size limit, which results in the problems described above.

The second idea is undoubtedly beneficial but introduces new risks that are not addressed in the Bitcoin-NG white paper. Essentially, the identity of the current validator propagates through the network while he is still actively validating transactions, rather than hand in hand with the final set of transactions he has validated. This is valuable information that may enable behaviors such as -

DOS — Adversaries send invalid transactions to the leader node, in quantities sufficient to keep it from processing valid transactions. This is hard to mount as an attack on all nodes simultaneously, but much easier when an isolated node becomes the single point of failure for the entire network. To perform the attack, an adversary needs to link a blockchain address to a mining node, and this may be achievable given that the node sends a huge bulk of 'implicating' micro-blocks, in a way that exposes it to all first-tier nodes.

Bribes — The following scenario becomes much easier to implement: the chosen validator publishes a log of pending transactions and asks for commissions to include them in his micro-blocks. Say that on Black Friday, Amazon wishes to deny service to small merchants so that all deals flow through Amazon. It could then offer the validator commissions not to process those deals, in an interactive bidding process that would be impossible to run against a network with an innumerable number of possible future validators.

Iota

Iota is probably the most innovative protocol in how far it departs from the original blockchain conventions and way of thinking. In a few sentences: instead of a chain of blocks there is a DAG, a Directed Acyclic Graph of transactions, where anyone can add his own transactions using PoW, pointing to two previous transactions. Double spending is tackled by a very strong herd-mentality incentive for the platform's users. You want your transaction to link to the main branch of the DAG, so if at some point it is very well connected (causing it to be approved by the merchant), it will be very hard to make a double-spending transaction better connected than it in the future. This is the basic intuition; you can read more in the iota white paper and the tangle equilibrium paper.
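A toy version of that tip-selection intuition is below: a random walk over the DAG, biased toward well-connected branches. This is simplified relative to the white paper, which uses an exponential weighting parameter:

```python
import random

# DAG as adjacency: tx -> transactions that directly approve it.
approvers = {
    "genesis": ["a", "b"],
    "a": ["c"], "b": ["c", "d"],
    "c": [], "d": [],            # c and d are the current tips
}

def cumulative_weight(tx):
    """1 + number of transactions approving tx, directly or indirectly."""
    seen, stack = set(), [tx]
    while stack:
        for child in approvers[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return 1 + len(seen)

def select_tip(start="genesis"):
    """Walk toward the tips, preferring heavier (better-connected) branches."""
    cur = start
    while approvers[cur]:
        options = approvers[cur]
        cur = random.choices(options, weights=[cumulative_weight(o) for o in options])[0]
    return cur

# A new transaction would approve two tips chosen by such walks.
print(select_tip(), select_tip())
```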

Still, it seems that:

  • The scenario described in the bitcoin-ng discussion, where an entity prevents other entities from using the blockchain, is even easier to achieve in Iota, using a variation of Aviv Zohar's splitting attack. It requires no bribing of a single point of failure, just a small amount of computational capability to keep a transaction from ever acquiring enough validity.
  • In general, the white paper's response to the splitting attack is a collection of heuristics that do not amount to anything verifiable, some requiring deus ex machinas, such as, quote: "Another effective method for defending against a splitting attack would be for a sufficiently powerful entity to instantaneously publish a large number of transactions on one branch, thus rapidly changing the power balance and making it difficult for the attacker to deal with this change"
  • This informal attitude is unique to iota, and it is highly disturbing because their protocol is so different from the others and many other flaws seem possible in it. Everything is presumably verified with their own undisclosed simulations, not with proofs or thorough logical discussion. This is enough for me not to consider them a real future scaling solution at the moment, though I believe they introduced very interesting probabilistic elements into the cryptocurrency building blocks.

SPECTRE

I recommend first watching Aviv Zohar’s video lecture explaining the SPECTRE architecture here.

In a few sentences: SPECTRE uses a DAG where each user submits his blocks using PoW, pointing to all visible tips of the graph. Double spending is tackled by a pairwise order between blocks, so that if block A > block B, a node will accept contradicting transactions according to A. It is highly likely that if block A > block B at some point, it will stay that way, due to the way the ordering is calculated by SPECTRE's voting mechanism. In fact, there is a great similarity between SPECTRE's voting mechanism and Iota's random-walk equilibrium; they even both suffer from splitting attacks, though in SPECTRE it is mitigated. If it could be proven that they are essentially the same, it would be a huge thing for both Iota and SPECTRE, addressing the voting complexity of SPECTRE and the lack of formal proof in Iota.

In the voting mechanism, to determine whether block A > block B, all blocks in the DAG vote on which of the two they 'prefer'. The more a block is linked to and from, the more votes it gets. This means future blocks only amplify well-linked blocks, so double spending is highly unlikely once a transaction has been accepted.
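A heavily simplified illustration of that majority idea is below. The real SPECTRE rule is recursive and handles the tie cases; this only shows the basic 'blocks vote by what they see in their past' intuition, on an assumed toy DAG:

```python
# Each block votes for whichever of A, B lies in its past cone;
# the majority decides the pairwise order (ties omitted for brevity).
past = {               # block -> set of blocks in its past cone (toy data)
    "A": set(), "B": set(),
    "x": {"A"}, "y": {"A", "B"}, "z": {"A"},
}

def vote(block, a, b):
    sees_a, sees_b = a in past[block], b in past[block]
    if sees_a and not sees_b:
        return a
    if sees_b and not sees_a:
        return b
    return None        # this is where the real, recursive rule kicks in

votes = [vote(blk, "A", "B") for blk in past if blk not in ("A", "B")]
print("A" if votes.count("A") > votes.count("B") else "B")   # -> A
```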

The catch with this voting is that it needs to be done for each small block, over every pair of other small blocks. As the DAG grows larger and transactions become more frequent, this could reach trillions of computations per second and beyond, just to keep verifying everything, let alone generating PoW for submitting anything to the DAG.

Say you have a rate of 100 transactions per second: new blocks need to vote on the ordering of ~N² pairs of blocks in the DAG, where N at this rate can reach ~3B blocks within a year. Without going into implementation details and optimizations, on its face this looks like a challenge today's strongest supercomputer can't deal with.
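The arithmetic behind that estimate, assuming for simplicity one transaction per block:

```python
# Orders of magnitude only.
TPS = 100
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

n_blocks = TPS * SECONDS_PER_YEAR          # ~3.15e9 blocks after one year
n_pairs = n_blocks * (n_blocks - 1) // 2   # ~5e18 pairwise orderings
print(f"{n_blocks:.2e} blocks, {n_pairs:.2e} pairs")
```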

Aviv Zohar claims it's not a big deal, and the guys at DagLabs who are working on implementing the protocol obviously think it's not a major flaw that will hold them back. I have not seen a concise discussion of the matter. There do seem to be many possible paths for optimizing a naive implementation of the voting, so it might be solvable, but for me it's the biggest cloud hanging over SPECTRE at the moment.

Bitcoin cash

Raising the block size gradually over time, which seems to be the solution advocated by bitcoin cash, will work, but it will depend heavily on the integrity of the community for years and years to come. The community will need to balance two factors, the demand for blockchain transactions against the computational and network capability of modern computers, so that DOS does not become a threat. While this will work, it is not a leap forward in scaling of the kind that would enable Ethereum's vision of Dapps or Iota's vision of IoT micro-transactions, and it might even face future constraints on raising the block size for native bitcoin transactions, if at some point demand rises far beyond what viable block sizes enable.
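To see why on-chain scaling eventually strains node capability, here is what roughly Visa-scale demand would imply, using the same assumed transaction size as before:

```python
# Assumed figures, for illustration only.
TARGET_TPS = 2_000            # roughly Visa-scale throughput (assumed)
AVG_TX_SIZE_BYTES = 500      # assumed
BLOCK_INTERVAL_SEC = 600      # keeping the 10-minute interval

block_size = TARGET_TPS * AVG_TX_SIZE_BYTES * BLOCK_INTERVAL_SEC
print(f"{block_size / 1e6:.0f} MB per block")   # ~600 MB every 10 minutes
```

Every full node would have to receive, validate and store such blocks continuously, which is exactly the capability constraint mentioned above.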

Also worth noting & To Summarize

Iota and SPECTRE also solve the latency issue, bringing it to a few seconds.

As for implementation, bitcoin-ng is currently running in beta on the waves-ng platform. Iota is one of the dominant altcoins, though it is not clear how closely the network follows the white paper and how many hacks and centrally controlled elements are still incorporated into it. You could say it's still in beta, with a ~$4.8B market cap as of this post's publication. The insane world of cryptocurrencies.

SPECTRE is being implemented by Jerusalem-based DagLabs and is expected to reach beta by Q4 2018.

Bitcoin cash is running as the second most dominant fork of bitcoin — the first being bitcoin of course.

Only time will tell what comes out of cryptocurrency scaling. It doesn't seem to me like anyone has pulled the chestnuts out of the fire here, such that scaling is now just a matter of waiting for some protocol to be implemented. My favorite is SPECTRE, if the voting indeed proves to be a non-issue. Otherwise, it may be that the most simplistic attitude, the 'scaling as it comes' advocated by the bitcoin-cash guys, goes for the win. Iota and bitcoin-ng are very innovative, but I'm not sure how they will survive the test of time.

To Be Briefly Mentioned

Ethereum sharding — a good idea, but it mainly helps small-scale applications.

Phantom — very recently published by the authors of SPECTRE. I have not been able to review it yet, but I will add the link under further reading.

Ripple — the main advantage it brings to the table is reduced latency. I have yet to figure out whether it's a heuristic solution that compromises security or a viable, error-free one.

Lightning networks — though advocated as a huge thing for Bitcoin, it's not a real protocol-level solution to these issues, but rather a mitigation that raises the question of why not just use third-party services and be done with it.

I want to thank Michael Mirkin for discussing these issues together in a great process that I believe will produce many results in the future. He’s not to be held responsible though for any mistakes or wild claims I have made here.

Further reading:

