Moore’s Observation

Written by stopanddecrypt | Published 2018/04/22
Tech Story Tags: bitcoin | bitcoincash | segwit | blockchain | blocksize-increase

…and how it’s irrelevant to scaling Bitcoin

Foreword

This article is a segment of a much larger piece I’ll be publishing in the near future, and I’ll provide that link at the bottom when I do. I wanted this to also be a standalone piece that could easily be linked when someone tries to make this argument in the future. Please bookmark this if you find that useful.

Moore’s law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. — Wikipedia

It’s become commonplace to cite Moore’s Law in discussions about raising Bitcoin’s block size, as justification for why the network can continue scaling this way. In short, the mantra goes something like this:

  • Bitcoin has always scaled this way in the past when needed.
  • Computers get more powerful all the time, just look at Moore’s Law.
  • You don’t need to run a node, only miners should decide what code is run.

Notice how the third one has nothing to do with the first two? That’s because once you dismiss the claims about Moore’s Law, they fall back on the claim that you just don’t need to run a node, and since you don’t need to run one, it’s okay if they take away your ability to run one. So let’s dismiss Moore’s Law, and then make a solid argument for why being able to run a Bitcoin node is of the utmost importance.

  1. Moore’s Law is a measure of integrated-circuit growth, which averages out to roughly 60% annually. It’s not a measure of average available bandwidth (which is more important).
  2. Bandwidth growth rates are slower. Check out Nielsen’s Law. Starting from a 1:1 ratio (no bottleneck between hardware and bandwidth), at 50% annual growth, 10 years of compounding results in a ~1:2 ratio. This means bandwidth falls behind by a factor of two in 10 years, 4 times in 20 years, 8 times in 40 years, and so on… (It actually compounds much worse than this, but I’m keeping it simple and it still looks really bad; the sketch after this list works out the exact numbers.)
  3. Network latency scales slower than bandwidth. This means that as the average bandwidth speeds increase among nodes on the network, block & data propagation speeds do not scale at the same rate.
  4. Larger blocks demand better data propagation (latency) to counter node centralization.
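To put numbers on points 1 and 2, here’s a quick back-of-the-envelope sketch in Python. It assumes steady 60% and 50% annual growth rates, which real hardware and bandwidth curves only loosely follow:

```python
# Back-of-the-envelope: how far bandwidth falls behind hardware when
# hardware compounds at 60%/year (Moore) and bandwidth at 50%/year (Nielsen).

HARDWARE_GROWTH = 1.60   # 60% annual growth in compute
BANDWIDTH_GROWTH = 1.50  # 50% annual growth in bandwidth

for years in (10, 20, 30, 40):
    gap = (HARDWARE_GROWTH / BANDWIDTH_GROWTH) ** years
    print(f"After {years} years, hardware is ~{gap:.1f}x ahead of bandwidth")

# After 10 years, hardware is ~1.9x ahead of bandwidth
# After 20 years, hardware is ~3.6x ahead of bandwidth
# After 30 years, hardware is ~6.9x ahead of bandwidth
# After 40 years, hardware is ~13.2x ahead of bandwidth
```

Note the 40-year figure: exact compounding gives ~13x, already worse than the simplified “8 times” above, which is what the parenthetical in point 2 is getting at.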

https://twitter.com/ELEProbtc/status/963845795140292609

This is not new information, either; the author of the tweet above has been spouting this misconception for years. He knows it’s not relevant. Even people who hopped on the BCash bandwagon after knocking Bitcoin non-stop knew years ago that latency is the issue, but none of this stops the argument from being brought up over and over again. These tweets are from years ago (’15/’16), and the following Reddit post is from April of this year (2018):

https://twitter.com/adam3us/status/693847158693433344 /// https://twitter.com/el33th4xor/status/638399125474684931

https://www.reddit.com/r/btc/comments/8e88xu/satoshis_original_whitepaper_talks_about/

It’s not about storing those transactions. It’s about full nodes being able to receive a transaction, check the UTXO set to verify that the information in the transaction is correct, and consider it valid before sending it off to the next node to do the same. This takes time. It delays propagation. Then at some point a miner successfully finds a valid hash for a block and sends that block out to the network, where it must propagate to the nodes and get validated just like all the transactions were. There are ways to shortcut this by checking the block against transactions you’ve already validated, but again it still takes time, and increasing the block size directly affects this process. If you want to increase the block size, put in work that helps offset this issue.
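To make that concrete, here’s a toy store-and-forward model in Python, where every hop validates a block before relaying it onward. Every number in it is invented for illustration, and real propagation (compact-block relay, parallel forwarding) behaves very differently; the point is only that the validation term scales with block size at every hop:

```python
# Toy model: per-hop block propagation time grows with block size,
# because each node validates before relaying. All constants are
# made-up illustrative values, not measurements of any real network.

VALIDATION_MS_PER_MB = 200.0  # assumed validation cost per megabyte of block data
LATENCY_MS_PER_HOP = 50.0     # assumed network latency per hop
HOPS = 6                      # assumed hops to reach most of the network

def propagation_ms(block_mb: float) -> float:
    # Store-and-forward: each hop waits for full validation before relaying.
    per_hop = LATENCY_MS_PER_HOP + block_mb * VALIDATION_MS_PER_MB
    return per_hop * HOPS

for size_mb in (1, 8, 32, 128):
    print(f"{size_mb:>3} MB block: ~{propagation_ms(size_mb) / 1000:.1f} s to cross the network")

#   1 MB block: ~1.5 s to cross the network
#   8 MB block: ~9.9 s to cross the network
#  32 MB block: ~38.7 s to cross the network
# 128 MB block: ~153.9 s to cross the network
```

The shortcut mentioned above (checking a block against transactions you’ve already validated) shrinks the validation term, but it doesn’t remove it.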

This is real scaling:

https://www.reddit.com/r/Bitcoin/comments/7x4psl/advances_in_block_propagation_greg_maxwell/

So we’ve established (again…) that all of this is important because of the centralizing effects on full-nodes. Good, now that we’ve completely dismissed that argument, we can forget it even happened. Why? All that work you put into arguing with someone online over this doesn’t matter because:

You don’t need to run a node, only miners should decide what code is run.

There are only two sides to this debate: either “non-mining” nodes matter, or they don’t. Anyone arguing a position in between is either missing the bigger picture, or knows that arguing the middle ground helps the side they’re actually on win the tug of war.

https://twitter.com/VinnyLingham/status/936271705298812928

…Let’s discuss why that’s a misunderstanding of the bigger picture, but first let’s go over some simple concepts we should all be able to agree on.

  • It’s better to overshoot security than it is to undershoot security.
  • If a change in consensus results in an increase in the number of individuals with the ability to operate a full node, ceteris paribus, the network decentralizes by some value.
  • If a change in consensus results in a decrease in the number of individuals with the ability to operate a full node, ceteris paribus, the network centralizes by some value.
  • Soft-forks are inclusive. Hard-forks are exclusive. Inclusiveness builds a healthy single network, exclusion divides.

Any change you propose that results in someone no longer being able to run their node, with the code they agreed to when they connected to the network, is a bad thing to do. You’ve just created an enemy with a financial incentive to oppose your change, and you’ve downsized the network by some degree. You’ve also established a precedent that disincentivizes others from getting involved, because they may fear being disconnected themselves. (Tangentially, this precedent is one reason why a Proof-of-Work change is also a bad idea. You’re cutting off nodes and hashpower.)

Additionally, you run the risk of breaking infrastructure that is already in place. Right now that includes services that depend on running nodes, current and future services that make use of the Lightning Network, and anything else down the line that gets built on top of this network. Lightning is just the beginning. I wouldn’t even say the network is secure right now, because there’s still too much risk of change, and I don’t think it will be secure until there is sufficient layering and real-world negative consequences for even attempting such a change to the underlying rules.

What do I mean by attempting?

The following is a “TCP segment”. You don’t have to know what that means, and I don’t know much about it beyond what I’m about to tell you.

A TCP segment consists of a segment header and a data section. The TCP header contains 10 mandatory fields, and an optional extension field. The data section follows the header. Its contents are the payload data carried for the application. The length of the data section is not specified in the TCP segment header. It can be calculated by subtracting the combined length of the TCP header and the encapsulating IP header from the total IP datagram length (specified in the IP header). — Wikipedia
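To give a feel for how rigid those rules are, here’s a minimal Python sketch that unpacks the fixed 20-byte TCP header laid out in RFC 793 (simplified: it lumps the data offset, reserved bits, and flags into one field, and ignores the optional extension field):

```python
import struct

# The fixed 20-byte TCP header, per RFC 793. A segment that doesn't
# follow this exact layout simply isn't TCP to the rest of the Internet.

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_and_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_and_flags >> 12) * 4  # data offset is in 32-bit words
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_len": header_len,
        "flags": offset_and_flags & 0x01FF,  # NS/CWR/ECE/URG/ACK/PSH/RST/SYN/FIN
        "window": window,
        "checksum": checksum,
        "urgent": urgent,
        "payload": segment[header_len:],
    }
```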

Those are consensus rules. The Internet’s consensus rules.

If you change those rules and try to send that data across the Internet using your new rules, nothing happens; it’s unrecognized. It’s invalid.

If you implement network code or hardware that uses different rules, and then connect two computers together with those new rules, you’ve just created a new network that is completely incompatible with the rest of the Internet.

If you do this with a datacenter full of new hardware and software, network them all together, and then try to connect to the outside world, nothing happens, and you just wasted a lot of money and time.

If you’re the AT&T CEO and you conspire with Verizon to roll out 4G LTE as a hard-fork, you have a few options:

  • Surprise everyone, nobody has 4G phones, everyone switches to Sprint, and you lose so much money your mother rejects you as her child.
  • Announce a hard-fork date. Everyone laughs at you. You scrub your idea.

In either scenario, if you were the CEO and wanted to do this, you wouldn’t even be able to. If the entire Board of Directors voted yes, you still wouldn’t be able to get it done: from your lawyer calling you all idiots, down to the engineering group being told that in 6 months they need to take all of their switches offline globally, and then laughing at the memo you just sent them. While just a sample of a much broader analogy I won’t be making here, this is what enforced consensus really is.

My 3G phone enforces this consensus just by existing alongside millions of others, and asking the following question just sounds absurd, doesn’t it?

What’s the minimum number of phones worldwide to ensure that you have sufficient decentralization?

Conclusion

Hard-forks don’t happen in real life. There are just too many layers to the entire system; going against the grain and refusing to cooperate with existing infrastructure only results in you getting cut off from it. Try driving on the opposite side of the road as a hard rule. I’ve done it on an empty road, I’ve done it to pass slow cars, but actually as a hard rule? Never going back to the normal side? See how long that works out for you. Get back to me with the results.

Bitcoin’s consensus security at full scale doesn’t come from the amount of “users” running “nodes”, it comes from an overall inability to change any old rules from the mining or the client side. How we get to that point or when we get to that point I couldn’t tell you, but we will never get to that point with a “hard-forks are okay” mentality, and we will never get to that point with a “miners can dictate consensus freely” mentality, as if the miners ever even had decentralization on their priority list…

Afterword

The full article this segment is from is below:

That’s not Bitcoin, that’s BCash: Or, There and Back Again, a Full-Node’s tale (medium.com)

🅂🅃🄾🄿 (@StopAndDecrypt) | Twitter (twitter.com)


Written by stopanddecrypt | Byzantine Fault Tolerance Abstractionist
Published by HackerNoon on 2018/04/22