What does “scalable” really mean in the world of blockchain?

Scalability is one of the most important problems in blockchain. Ever since Bitcoin appeared, it has been a center of attention for practitioners from industry and academia alike. Zhijie Ren and Peter Zhou investigate blockchain scalability at VeChain and compare different blockchains, analyzing their pros and cons. Their goal is to give both the cryptocurrency community and the general public a deeper understanding of the current state of blockchain development. I think it will be quite interesting. From here on, the text is in the first person.

Whether you research blockchain in academia or just follow what is happening in the crypto world, you have probably heard the terms “scalability” or “scalable blockchain”. There is as much noise around them as there is substance. In most cases, however, a “scalable” blockchain simply means a regular blockchain that achieves a high TPS (transactions per second). Sometimes the true meaning of “scalability” is distorted or even deliberately twisted to lead people astray and gain an unfair advantage. On the other hand, we have seen many reports and articles written by research institutes, companies or the media that try to objectively compare the scalability of different blockchains. Yet hardly any of them manage to distinguish false claims from well-grounded ones.

Although the concept of scalability is well defined in many scientific fields, in the blockchain world it has taken on many meanings, as you will see later. We want to show you the latest developments in blockchain scalability, both from blockchain practitioners and, more importantly, from academic researchers. We believe the community badly needs a better understanding of this issue. Then the industry will grow healthier and faster.

For most computer systems, such as databases or search engines, “scalability” means the ability of a system to handle a growing amount of work, that is, to scale. A system scales badly, in other words has poor scalability, if instead of simply using more resources (for example, adding computing power, servers or bandwidth) it requires additional effort to modify the system so that it can cope with the increased workload.

And yet, in blockchain the word “scalability” has a much wider range of meanings. What can you say: even the term “blockchain” itself still lacks a good academic definition. For example, when talking about Bitcoin, many people still call any improvement in throughput, latency, initial bootstrap time or transaction cost “scaling”.

Today there are many different blockchain systems that can be considered “scalable”, yet their throughputs differ widely. Keep in mind that in blockchain the word “scalable” is a comparative term. When a system is called scalable, it means it reaches a higher TPS than other existing systems by changing its consensus mechanism and/or tuning some system parameters.

In fact, we can classify scalable blockchains into four types:

  • 1. Scaling Bitcoin: solutions that increase Bitcoin’s throughput by increasing the block size or reducing the block interval, without changing the POW consensus algorithm
  • 2. Scaling POW: solutions that still fit within Satoshi Nakamoto’s consensus structure but achieve higher throughput than Bitcoin’s POW algorithm by changing the algorithm itself
  • 3. Scaling Byzantine fault tolerance (BFT) algorithms: solutions based on BFT algorithms, but with simpler messaging than Practical Byzantine Fault Tolerance (PBFT)
  • 4. Horizontally scalable (scale-out) blockchains: solutions that relax the requirement that validating/mining nodes must know the entire transaction history. Thanks to this, system throughput can grow with network size, so these systems scale better than the previous three types


Scaling Bitcoin

We all know that Bitcoin scales badly. The design of POW (proof-of-work) underlying Bitcoin makes anything else impossible. In Bitcoin, POW serves as a random method of determining the next valid block: all nodes “work” (produce proof of work) for a certain time to determine a winner. Moreover, the new block must be synchronized across the entire network so that every node can (more or less) compete fairly in the race for the next block. In effect, Bitcoin’s POW has a cascade structure, as shown below.

In the cascade structure of Bitcoin’s POW algorithm, consensus starts only after all nodes finish receiving and verifying all previous blocks.
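To make the lottery nature of POW concrete, here is a minimal Python sketch, assuming a toy header and a simplified difficulty encoding (real Bitcoin headers, difficulty targets and serialization are more involved):

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> tuple[int, str]:
    """Search for a nonce whose double-SHA256 hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        ).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # this node wins the round
        nonce += 1

nonce, digest = mine(b"prev-hash|merkle-root|timestamp", difficulty_bits=16)
print(f"nonce={nonce}, hash={digest}")
```

Every node runs this same search; whoever finds a valid nonce first broadcasts the block, and everyone else must receive and verify it before the next round can start fairly.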

A synchronization time of about 1 minute when a POW round lasts 10 minutes (as in Bitcoin) is fine. But Bitcoin would no longer be honest and safe if the synchronization time became comparable to the POW cycle, which is what happens if the block size is increased or the block interval is significantly reduced, say to 1 minute. In that case the network would see many forks, which would eventually lead to very long confirmation times and reduced security.

In other words, an obvious limitation of Bitcoin is that each round of the consensus algorithm must be much longer than the synchronization period. How much time synchronization takes depends not only on the design of the consensus algorithm, but also, to a large extent, on the characteristics of the underlying network: bandwidth, latency, topology, level of decentralization. The paper ‘On Scaling Decentralized Blockchains’ estimated that Bitcoin could provide no more than 27 transactions per second on the Bitcoin network of 2016. This limit may not apply to a particular altcoin that uses the same POW consensus algorithm, or even to today’s Bitcoin, since the networks differ in size and level of decentralization. However, the restriction itself remains in force. Therefore, “naive” approaches that increase the block size (hello, BCH; editor’s note) or reduce the interval between blocks can “scale” Bitcoin only slightly.
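A back-of-the-envelope calculation shows why these “naive” knobs buy so little; the block size, transaction size and interval below are illustrative round numbers, not exact Bitcoin parameters:

```python
block_size_bytes = 1_000_000   # ~1 MB block
avg_tx_size_bytes = 250        # rough average transaction size
block_interval_s = 600         # 10-minute target interval

tps = block_size_bytes / avg_tx_size_bytes / block_interval_s
print(f"~{tps:.1f} TPS")       # ~6.7 TPS

# Doubling the block size or halving the interval only doubles TPS,
# while pushing the synchronization time ever closer to the consensus period.
```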

Scaling POW

To solve the problems described above, new POW schemes have appeared in which the security of the system does not depend on the synchronization of new blocks, as shown in the figure below. In other words, the consensus period no longer has to be much longer than the synchronization time; it can be approximately or exactly the same. For example, in Bitcoin-NG consensus is used only to determine the leader of a round rather than the whole set of transactions. Thus, transaction synchronization can run in parallel, and a larger block size can be used. Other similar blockchains in this category are Hybrid Consensus, ByzCoin and GHOST.

In scalable POW, synchronization and consensus are pipelined, so the entire bandwidth can be used for transmitting messages.
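A toy sketch of this decoupling, with made-up names and a random stand-in for the POW race (Bitcoin-NG’s actual key-block/microblock protocol is considerably richer):

```python
import random

def elect_leader_by_pow(candidates: list[str], seed: int) -> str:
    """Stand-in for the POW race: one node wins the rare leader-election block."""
    return random.Random(seed).choice(candidates)

def make_microblocks(mempool: list[str], per_block: int) -> list[list[str]]:
    """The elected leader streams transaction batches without further POW,
    in parallel with the next leader-election race."""
    return [mempool[i:i + per_block] for i in range(0, len(mempool), per_block)]

leader = elect_leader_by_pow(["node-1", "node-2", "node-3"], seed=7)
print(leader, make_microblocks([f"tx{i}" for i in range(6)], per_block=2))
```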

POS (proof-of-stake)

From the point of view of scalability, we can include some new POS schemes in the scalable POW category. That is because in such systems consensus is reached through leader-election mechanisms based on random number generators, which do not need much time to make a fair choice. They are therefore free of the limitation that “the consensus period must be much longer than the synchronization time”, and can go straight to large block sizes, just like the scalable POW solutions. Among the well-known projects: Ouroboros, Snow White, Dfinity and Algorand.
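The core of such schemes can be sketched as stake-weighted random leader election. The sketch below uses an ordinary seeded PRNG and a made-up stake table, whereas real protocols such as Ouroboros or Algorand derive their randomness from verifiable random functions or multi-party coin tossing:

```python
import hashlib
import random

stakes = {"alice": 50, "bob": 30, "carol": 20}  # hypothetical stake distribution

def elect_leader(stakes: dict[str, int], epoch_seed: bytes) -> str:
    """Pick a leader with probability proportional to stake."""
    rng = random.Random(hashlib.sha256(epoch_seed).digest())
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

print(elect_leader(stakes, b"epoch-42"))  # alice wins ~50% of epochs
```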

Scaling BFT

Byzantine fault tolerance (BFT) algorithms are a family of consensus algorithms that tolerate arbitrary behavior of untrusted nodes, allowing honest nodes to reach consensus over an unreliable network. It all began with the Byzantine Generals Problem, proposed by Leslie Lamport in the early 1980s. However, for lack of “real” applications, a practical version of BFT appeared only in 1999 and was called Practical Byzantine Fault Tolerance (PBFT).

PBFT is an algorithm with message complexity O(N²), as shown in the figure below, where N is the total number of validating/mining nodes in the network. The illustration shows the five steps in each round of consensus, with each arrow representing a message sent from one node to another. You can see that, to reach consensus on a single message, the message must first be transmitted to all nodes in the network, and then every node must notify every other node about it.

One of the main disadvantages of PBFT is that it scales badly with network size because of its O(N²) message complexity. It is easy to see that the number of messages sent between nodes per transaction grows quadratically with the number of validating nodes. And since total bandwidth can grow only in proportion to the number of nodes, throughput decreases as the network grows; in practice PBFT cannot be used in networks with more than, say, 50 nodes.
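To make the quadratic growth tangible, here is a rough message count per consensus decision. The formula only counts the pre-prepare, prepare and commit broadcasts and ignores request/reply traffic, so treat the numbers as an approximation:

```python
def pbft_message_estimate(n: int) -> int:
    """Pre-prepare (~n-1 messages) plus two all-to-all rounds (~2*n*(n-1))."""
    return (n - 1) + 2 * n * (n - 1)

for n in (4, 10, 50, 100):
    print(f"{n} nodes -> ~{pbft_message_estimate(n)} messages")
# 4 -> 27, 10 -> 189, 50 -> 4949, 100 -> 19899
```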

Several ideas have been proposed to solve this problem by scaling the classical BFT algorithms. The first is called speculative (or optimistic) BFT. The idea is very simple: the nodes first assume that the network is in good condition and the environment is trustworthy, and use a simpler, more efficient scheme to reach consensus. If that attempt fails, they fall back to the more “expensive” PBFT. This amounts to trading worst-case latency for best-case throughput. Note that this type of BFT, for example Zyzzyva, existed before the concept of the blockchain. As the scalability problem grew more and more important, the idea of speculative Byzantine fault tolerance was revisited and adopted by blockchain practitioners and researchers to build systems such as ByzCoin, Algorand and Thunderella.

The speculative Zyzzyva uses a messaging scheme of complexity O(N) to reach consensus.

The second idea is to deliberately remove redundancy from the BFT process using the information-theoretic tool of erasure coding, which improves bandwidth efficiency. HoneyBadgerBFT, for example, falls into this category.

The third idea is to introduce randomness into the data exchange between nodes: after receiving a message, instead of hearing it from all other peers for validation, each node listens only to a randomly selected set of nodes and decides accordingly. Theoretically, a node will make the right decision with high probability if the sample size is chosen correctly and the selection process is truly random. The Avalanche consensus algorithm uses this idea to achieve better scalability.
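The flavor of such sampling can be shown in a toy loop. The sample size k, threshold alpha and round count below are arbitrary illustration values, not Avalanche’s parameters, and in the real protocol the peers would be updating their own preferences concurrently:

```python
import random

def sample_consensus(peer_prefs: list[bool], k: int = 10, alpha: float = 0.8,
                     rounds: int = 20) -> bool:
    """Each round, query k random peers and adopt any supermajority's value."""
    my_value = random.choice([True, False])
    for _ in range(rounds):
        votes = random.sample(peer_prefs, k)
        yes = sum(votes)
        if yes >= alpha * k:
            my_value = True
        elif (k - yes) >= alpha * k:
            my_value = False
    return my_value

network = [True] * 70 + [False] * 30  # 70% of peers currently prefer True
print(sample_consensus(network))      # converges to True with high probability
```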

Which is better: scalable POW (POS) or scalable BFT?

Although the scalable POW (POS) and scalable BFT schemes described above differ in form and concept, they can deliver similar performance in terms of throughput. Ideally, both approaches make maximum use of the bandwidth for message transmission and achieve a smooth message complexity of O(N). 100-1000 transactions per second (TPS) in a network with hundreds of nodes is a rough approximation of the throughput of scalable POW (POS) or scalable BFT. In other words, when you see the term “scalable blockchain”, it most likely refers to one of these two types of “scalability”.

Directed acyclic graphs (DAGs)

Many will be surprised that DAG-based consensus algorithms also fall into this category, since many believe they can scale horizontally. But the fact is that most DAG designs, whether academic proposals such as Phantom, Conflux or Avalanche, or industrial projects such as IOTA and Hedera Hashgraph, require that all messages become known to every node. Phantom, Conflux and IOTA can be considered improved versions of GHOST (scalable POW) that provide better parallelization of consensus and synchronization. Avalanche and Hedera Hashgraph can be viewed as speculative BFT algorithms that give high throughput under less stringent BFT assumptions.

Horizontally scalable (scale-out) blockchains

This concept is closer to the original definition of “scalable” in distributed systems, in the sense that both a scale-out blockchain and a scalable distributed system offer higher throughput as the network grows. The fundamental difference is that scalability in distributed systems implies linear growth of system performance with the number of servers (nodes), which is essentially unattainable for blockchains because of decentralization.

So blockchain researchers lowered the bar: let network throughput grow, if only sublinearly, with network size. The result is the schemes that today are called “horizontally scalable blockchains”. You may not have heard of horizontal scaling (scale-out) as such, but you have surely heard of “sharding”, the Lightning Network and Ethereum’s Plasma. All of them can be considered horizontal approaches to the blockchain scalability problem.

In a horizontally scalable blockchain, some messages may never reach some nodes. By “nodes” we mean those that participate in validation and consensus. In the context of Bitcoin, this would mean that miners do not have to know and confirm all transactions. A major consequence of this property is an increased risk of double spending: coins spent in one transaction can be spent again at nodes that do not know about that transaction. To prevent double spending while preserving this property, some nodes must have their transactions checked by others, which effectively brings some level of centralization back into the system. The result threatens security or decentralization. This problem is called the “blockchain scalability trilemma”. Because of the trilemma, there have been disputes about whether we should use horizontally scalable schemes at all.

The blockchain scalability trilemma

As we have already mentioned, there are two popular strategies for designing and implementing a horizontally scalable blockchain: one is sharding, the other is off-chain schemes, that is, schemes that operate outside the main blockchain.

Sharding is the division of the whole network into subnets, “shards” (segments), where the nodes in each subnet maintain a local ledger, that is, a local chain of blocks. Ideally, each node needs to know, verify and store only the messages within its own shard, not all of them. You can think of sharding as splitting the original blockchain into smaller blockchains that are individually less secure, because fewer nodes validate transactions and participate in consensus.
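A minimal sketch of one common assignment rule, hashing an account identifier into a shard index (the shard count and the rule itself are illustrative; real designs also reshuffle validators between shards to keep each one safe):

```python
import hashlib

NUM_SHARDS = 4  # illustrative

def shard_of(account: str) -> int:
    """Deterministically map an account to a shard by hashing its identifier."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

for account in ("alice", "bob", "carol"):
    print(account, "-> shard", shard_of(account))
```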

Thus, the biggest problems with the sharding strategy are: 1) how to secure each shard; 2) how shards can interact efficiently and securely to process cross-shard transactions. For example, if a cryptocurrency is moved from shard A to shard B, the recipient in shard B needs to query many nodes of shard A about the validity of the coins so as not to fall for a scam. Many solutions have been proposed for these two problems, to name a few: Omniledger, Chainspace, Rchain and sharding for Ethereum; we will cover them in another article.

Off-chain schemes, that is, external add-on solutions, are largely based on the ideas of the Lightning Network, which uses some clever techniques to open an individual off-chain channel between two nodes for fast transfers, without the need to register every transaction between them on the Bitcoin blockchain. However, this convenience comes at a cost: both parties must make a deposit on the blockchain to open an off-chain channel between them. Since then, many such off-chain schemes offering fast payments have been proposed. In particular, the parties are allowed to interact through other types of messages, such as multi-party transactions, conditional payment transactions and smart contract transactions. What remains is the task of designing and efficiently deploying such off-chain mechanisms, with on-chain enforcement, for different message types. Among the discussed projects: Plasma, Polkadot, Liquidity.
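The essence of a payment channel can be sketched as a sequence of balance updates, of which only the last needs to touch the chain. The model below omits signatures, timeouts and the on-chain enforcement machinery, and all names are made up:

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    balance_a: int
    balance_b: int
    seq: int = 0  # a higher sequence number supersedes older states

def pay(state: ChannelState, amount: int, a_to_b: bool = True) -> ChannelState:
    """Produce the next off-chain state; only the final one is settled on-chain."""
    sender = state.balance_a if a_to_b else state.balance_b
    if amount > sender:
        raise ValueError("insufficient channel balance")
    delta = amount if a_to_b else -amount
    return ChannelState(state.balance_a - delta, state.balance_b + delta,
                        state.seq + 1)

s = ChannelState(balance_a=100, balance_b=100)  # both sides deposited 100
s = pay(s, 30)                   # A pays B, no blockchain transaction needed
s = pay(s, 10, a_to_b=False)     # B pays A back
print(s)  # ChannelState(balance_a=80, balance_b=120, seq=2)
```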

Which is better: sharding or off-chain payments?

Oddly enough, it is actually quite difficult to pin down the difference between sharding and off-chain schemes. Some sharding schemes also include a main blockchain or a global consensus among all shards, and some off-chain schemes also divide nodes into groups. Here the differences are more theoretical.

In fact, the notion of “consensus” consists of two properties: consistency (agreement) and liveness (availability). The first means that two honest nodes must not disagree about the content of a message. The second means that if an honest node knows a message, all other honest nodes will eventually learn about it too. In both sharding and off-chain schemes, liveness is compromised, because some messages will not become known to all honest nodes. The difference between them is how they achieve consistency. In particular, sharding guarantees consistency within a shard, at the cost of some insecurity. Off-chain solutions, on the other hand, do not give strong consistency guarantees. Instead, consistency rests on economic enforcement, such as a deposit on the main chain and a penalty mechanism for anyone who misbehaves off-chain.

VAPOR

In addition to the sharding and off-chain approaches, we recently proposed another solution for horizontal scaling: VAPOR. The system is based on an important assumption called “rationality” that we observe in existing blockchain systems. In particular, we note that most blockchain systems treat transactions as a special type of message, and most of them implicitly assume that blockchain participants are rational with respect to transactions. For example, if Alice is rational and wants to buy something from Bob, then after she makes a payment transaction to Bob, she will need to prove the validity of that transaction to him. And Bob, if he is reasonable and rational, will hand over his goods only after checking that the transaction is indeed confirmed and valid. We call this “rationality in value transfer”. VAPOR uses this “rationality” in a value-transfer system to scale out without compromising security or decentralization. In other words, VAPOR can be used as a fully secure and decentralized value-transfer system, that is, as a cryptocurrency, without requiring every node to know, confirm and store all transactions. However, the system is limited in functionality: it can be used only for transferring value as “money”, where the “rationality” assumption holds.

Discussion

We hope the concept of blockchain scalability has become clearer to you. The most important takeaway is that the label “scalable blockchain” by itself tells you nothing about a system’s genuine scalability unless you know what it is being compared with: Bitcoin, Bitcoin’s POW, classical BFT or non-scale-out blockchains.

Criteria for assessing the scalability of a blockchain

It is very difficult to judge the “scalability” of a blockchain system without theoretical knowledge and experience in the area. Nevertheless, I think the following three criteria can be used to assess whether a particular blockchain system has any of the three types of scalability we discussed:

Does the blockchain use Bitcoin’s POW as its consensus type? If yes, is there a restriction that nodes must always be in sync with the latest blocks, or else their hashing power is wasted? If so, it is not scalable POW.

Does the blockchain use Byzantine fault tolerance (BFT) as its consensus type? If yes, is there a clever trick that reduces its message complexity? If not, it is not scalable BFT.

Does every validating node/miner need to know every message? A node here means a node that participates in consensus, that is, one that can generate blocks, for example a miner in the context of cryptocurrencies. If so, it is not a horizontally scalable blockchain.

So how many transactions per second can a blockchain handle?

Now let me give a slightly more concrete sense of scale in terms of TPS, transactions per second. As we all know, if a blockchain does not scale horizontally, every node participating in consensus must receive all messages. System throughput is therefore limited by the least capable node in the network. Hence the capacity of a home personal computer, roughly 100-1000 TPS, is a reasonable expectation of the maximum TPS a fully decentralized blockchain can achieve. In other words, if a non-scale-out blockchain claims a throughput of 10 thousand TPS, that tells you the system must be fairly centralized, because nodes with lower bandwidth will not be able to join it. On the other hand, if a blockchain scales horizontally, its capacity is in theory unlimited. However, one should be wary of the trade-offs between security, decentralization and functionality, since it is impossible to satisfy them all at once.
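The 100-1000 TPS figure follows from a simple bandwidth argument; the uplink speed, transaction size and overhead factor below are assumptions chosen for illustration:

```python
node_bandwidth_bps = 10_000_000  # 10 Mbit/s consumer uplink
avg_tx_size_bytes = 250
overhead_factor = 5              # consensus traffic, relaying, validation time

raw_tps = node_bandwidth_bps / 8 / avg_tx_size_bytes  # ~5000 TPS on the wire
practical_tps = raw_tps / overhead_factor             # ~1000 TPS, optimistic end
print(f"raw ~{raw_tps:.0f} TPS, practical ~{practical_tps:.0f} TPS")
```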

Layer 1 or layer 2?

“Which is the better solution for scaling the blockchain, layer 1 or layer 2?” This question causes so much controversy that we could not leave it out of a discussion of “scalability”. However, we will not pick a winner, because strict definitions of layer 1 and layer 2 do not exist. We will limit ourselves to a brief description.

In general, “layer 1” refers to scaling the blockchain itself by modifying existing consensus algorithms or introducing new ones; it covers all the approaches described in this article except the off-chain schemes. However, as we have already explained, the “scalability” they achieve varies greatly. “Layer 2” approaches, on the other hand, are mostly off-chain schemes. It would be inappropriate to compare “layer 1” and “layer 2” in terms of scalability, since only one category of “layer 1”, namely sharding, comes close to the level of “scalability” of “layer 2”.

To date, blockchain scalability remains an open problem without a perfect solution. In theory, all existing schemes have their pros and cons, and none of them scales well in all situations. Moreover, the security of some schemes is either unproven or proven only under certain theoretical conditions. What is more, none of the scalable schemes, especially those with strict security proofs, has yet been successfully implemented and tested in real life, because of the difficulties of implementation.

And since scalability as a problem is not even close to being solved, in the future we will certainly see new scalable blockchain systems.

