“We calculated what happens when we have bigger blocks. Problems occur from a certain value.”

Christian Decker is probably the first scientist worldwide to write a doctoral thesis on Bitcoin. His topic is scalability. In the interview, he explains where the bottlenecks are and how the Bitcoin blockchain can be scaled anyway.

There are cases in which a hobby pays off. Christian Decker is one of them. The Swiss began working on Bitcoin a few years ago: during his master's degree in distributed systems at ETH Zurich, he dealt with consensus protocols, came across Bitcoin, and started mining in 2009. That earned him a few bitcoins, which he still holds today.

More valuable to him, however, is that he engaged with Bitcoin scientifically very early on. Now, a few years later, he is probably the world's first researcher to have written a doctoral thesis on Bitcoin. That looks good on the job market: “I didn't have to look for a job. At one point there were 13 or 14 companies in the running, and others asked whether I would fly out to them at their expense. I told them to save the money and the time.”

Christian Decker has now made his decision. His future employer is an international company, that much he reveals, and he can continue working directly on the topic he has dealt with in recent years: the scalability of blockchains. In the interview, we talked at length about this topic, which has been agitating the Bitcoin scene for more than a year.

Natural limits

Together with other scientists, you have written papers on scalability. How did that come about?

I dealt with scalability early on. After I started researching Bitcoin and understood the protocol, the question arose: everyone stores everything and verifies everything; surely that doesn't scale, does it?

When it came to finding a topic for my doctoral thesis, I thought, OK, I will simply ask whether and how we can scale Bitcoin. In recent years I have written several papers about scalability. I have dealt with other topics as well, but I find scalability the most exciting.

You write that “Bitcoin does not scale.” Can't you just increase the size of the blocks to let more transactions through?

That works, but only up to a certain size. You hit limits relatively quickly, and some of those limits are laws of nature. Beyond a certain block size, you simply cannot pump the data through the network anymore.

Ideally, you would have no such limit at all. Perfect scalability means that you can take on new users and process the amount of data they generate.

The block size is certainly a parameter you can tune. How to do that was discussed for a long time without us reaching a solution, which caused a lot of frustration. But in the end you run into natural limits.

Where are these natural limits?

My very first paper was about how information spreads in the Bitcoin network. We calculated what happens when we have bigger blocks. Beyond a certain value, problems occur.

To understand that, you have to think about what happens when a miner finds a block: the miner starts distributing the block in the network. While the block spreads from node to node, there are two groups in the network: one that knows the block exists, and one that does not yet know it.

A miner sitting in the ignorant part of the network will keep mining on the old block, even though a new one has already been found and distributed by someone else. While the block moves through the network, the ignorant part keeps shrinking, because nodes that receive the block pass it on, and the part of the network that knows it keeps growing.

The larger a block, the longer it takes to reach everyone in the network. Beyond a certain size, the part that does not yet know the block shrinks so slowly that the competition has already found a new block before the miners in the ignorant part have learned of the old one. In such a situation you get many forks, which fragment the network into many small islands.
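The connection between propagation time and fork risk can be sketched with a simple Poisson model; this is an illustration, not the exact model from the paper. While a block needs some seconds to reach the rest of the network, competitors keep finding blocks at an average rate of one per 600 seconds:

```python
import math

def fork_probability(propagation_seconds: float,
                     block_interval: float = 600.0) -> float:
    """Chance that at least one competing block is found elsewhere
    while this block is still propagating (Poisson block arrivals)."""
    return 1.0 - math.exp(-propagation_seconds / block_interval)

# The longer a block takes to propagate, the likelier a fork becomes:
# 9 s of propagation gives roughly a 1.5 % fork chance, while a block
# that needs as long as the block interval itself forks most of the time.
```

The assumed 600-second interval is Bitcoin's target block time; the propagation times plugged in are illustrative, not measured values.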

At what block size would that happen?

When we calculated that in 2012, the limit was 13.5 MB. Today it is probably higher, because the network has become more efficient: we have the relay network, the mining pools work more efficiently, and the number of nodes has dropped. One would have to estimate the exact figure.

But that is not a black-and-white question. As a miner, you have to assume that when you find a block, a competing miner has already found the same block with a certain probability. The expectation value says how many competing blocks are probably in circulation. It is currently 0.015. When we calculated it at the time, it rose above 1 at 13.5 MB, which means the network falls apart on us. For 1 GB blocks, the expectation value would be about 50; that is, a miner would have to assume that the block it has just found is already circulating in the network 50 times over.
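The expectation value mentioned here can be illustrated with the same Poisson assumption: with one block every 600 seconds on average, the expected number of competing blocks found during propagation is simply the propagation time divided by the block interval. The concrete propagation times below are assumed for illustration:

```python
BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's average time between blocks

def expected_competing_blocks(propagation_seconds: float) -> float:
    """Expected number of competing blocks mined elsewhere while a
    freshly found block is still making its way through the network."""
    return propagation_seconds / BLOCK_INTERVAL

# 0.015, the figure quoted above, corresponds to about 9 seconds of
# propagation; an expectation of 1 is reached once propagation takes
# as long as the block interval itself.
```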

Bottlenecks & hops

But that is just one of the bottlenecks. In your paper you described other factors on which scaling depends. What are they?

It was primarily about the capacity of the nodes. The CPU validates transactions, the hard drive stores the blockchain, the bandwidth forwards blocks. We assumed ordinary laptops and calculated how far you can increase the block size without pushing an average node out of the network.

However, these node-specific limits are not that important. Much more important is what results from the interconnection of the nodes. There we run into limits which show that the effective throughput of the network is significantly lower than what an individual node can handle, because we have to communicate over several hops.

Imagine I tell you: I have a block. You check whether you already know it, and if you don't, you tell me you are interested. Then I send you the block, but by that point we have already communicated twice. Next you start verifying the block, which needs the hard drive and the CPU, and only then can you pass it on to your neighbors. All of this adds up to latency.
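The per-hop cost just described (announce, request, transfer, validate) can be summed up in a toy calculation. All concrete numbers here are assumptions for illustration, not measurements:

```python
def per_hop_seconds(rtt: float, block_bytes: int,
                    bandwidth_bps: float, validation_s: float) -> float:
    """Latency of one gossip hop: the announce/request exchange costs
    one round trip, then the block is transferred, then validated
    locally before the node can announce it to its own neighbors."""
    announce = rtt                          # "I have a block" / "send it to me"
    transfer = block_bytes / bandwidth_bps  # the actual block download
    return announce + transfer + validation_s

# Assumed example: 100 ms round trip, 1 MB block, 1 MB/s bandwidth,
# 200 ms validation -- about 1.3 s per hop, and hops add up in series.
hop = per_hop_seconds(0.1, 1_000_000, 1_000_000.0, 0.2)
```

Since every hop must finish validating before forwarding, the total delay grows with the hop count, which is why the effective network throughput ends up well below what a single node could process.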

We calculated that again for another paper and arrived at a limit of about 27 transactions per second.

Sounds like the number of hops is the decisive factor in scalability. Do you know how many there are?

Yes, you could say that. I have a website, bitcoinstats.com, where I show how long blocks and transactions take to propagate. From that you may also be able to infer the hops.

On bitcoinstats.com we connect to a large number of nodes in the network and measure how long it takes to reach 50 or 90 percent of the network. How long does it take until you tell us, hey, I have this block, do you want it too? This shows us the general latency of the network. It looks good at 50 percent, but reaching 90 percent often takes much longer. We have a long tail.

It is difficult to estimate how many hops there are. With 100 percent coverage there can be chains of up to 20 hops, but I think 90 percent of the network can be reached in four to five hops.
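That a few hops suffice to reach most of the network follows from the exponential nature of gossip: the informed set grows roughly by the fan-out factor each round. A toy simulation on a random graph (node count, fan-out, and seed are all assumed parameters) illustrates this, including the long tail for the last stragglers:

```python
import random

def hops_to_reach(fraction: float, n_nodes: int = 2000,
                  fanout: int = 8, seed: int = 1) -> int:
    """Count gossip rounds until `fraction` of all nodes know the block.
    Each round, every newly informed node announces to `fanout` random peers."""
    rng = random.Random(seed)
    informed = {0}   # node 0 mined the block
    frontier = {0}   # nodes that still have to announce it
    hops = 0
    while len(informed) < fraction * n_nodes and frontier:
        new = set()
        for _ in frontier:
            new.update(rng.randrange(n_nodes) for _ in range(fanout))
        frontier = new - informed
        informed |= frontier
        hops += 1
    return hops
```

With these assumed parameters, 90 percent of 2000 nodes is typically reached within a handful of rounds, while full coverage takes noticeably more, mirroring the 50-percent-versus-90-percent gap measured on the site.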

This means that with the network as it is now, we will never reach the capacity that we would like to have…

Yes, we can still turn the block size knob a little. I think 2, 4, maybe even 8 MB should be feasible, even if that probably comes with compromises in the security of confirmations, since larger blocks lead to more forks. But even then we are still at a double-digit number of transactions per second. At the moment we have 3 to 7; even if we reach eight, that is still very little compared to Visa or PayPal. The potential people envision today cannot be realized that way, for example micropayments to pay for content. That does not work on-chain.
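These throughput figures follow directly from block size, average transaction size, and block interval. A back-of-the-envelope calculation (the 250-byte average transaction size is an assumption commonly used for such estimates):

```python
def onchain_tps(block_mb: float, avg_tx_bytes: int = 250,
                block_interval_s: int = 600) -> float:
    """Rough on-chain throughput: transactions that fit in one block,
    spread over the average time between blocks."""
    txs_per_block = block_mb * 1_000_000 / avg_tx_bytes
    return txs_per_block / block_interval_s

# 1 MB blocks give roughly 6-7 transactions per second; even 8 MB blocks
# stay at a double-digit rate, far below card networks.
```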

You mean not on the Bitcoin blockchain, or not on-chain in general? There have been proposals on how to improve the network…

Yes, Emin Gün Sirer from Cornell University proposed Bitcoin-NG. You can certainly achieve a lot with it, and at ETH we developed our own protocol, PeerCensus. Both systems ensure that blocks are no longer used to confirm transactions, but only to limit the influence of the individual participants in the network. That lets you increase capacity by a factor of 10, but both systems are still based on keeping the global state, meaning that all nodes have to store all transactions. So you still hit a limit quite quickly.

Duplex and Lightning

Are there also ways to build blockchains in such a way that not everyone has to store and validate everything?

There are various ideas, but none that is mature yet. They all suffer from the problem that individual participants can drop out and then hand the part they manage exclusively back to the network in manipulated form. People are still looking for ways to build it so that no participant can put themselves in such a position, but so far every proposal has problems. We do not even know whether it is feasible at all.

That is why the trend is to try it off-chain. You also built a system, the duplex micropayment channel…

Yes, our duplex micropayment channels work similarly to Lightning. The timing was a bit unfortunate: because Lightning did not have to go through scientific peer review, it was public sooner than the duplex micropayment channels.

The concept of the payment channel is that we lock money up, move the transactions off-chain, and later settle them on the chain. The whole idea rests on structuring the negotiations we do on-chain in such a way that cheating can be prevented automatically.

If I now say I am sending you 0.1 bitcoin, I also give you the tools to enforce that decision on the Bitcoin blockchain, even if I refuse or disappear.
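The mechanism can be sketched as a sequence of mutually signed balance updates. In the real protocol each state is a signed Bitcoin transaction that either party can broadcast; this Python sketch (all names hypothetical) only illustrates the bookkeeping:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelState:
    """One off-chain state of a two-party payment channel. A higher
    sequence number supersedes older states when settling on-chain."""
    balance_a: float
    balance_b: float
    seq: int

def pay(state: ChannelState, amount: float) -> ChannelState:
    """A pays B off-chain: both parties agree on (and would sign) a new
    state; the latest state is what either side can enforce on-chain."""
    if amount <= 0 or amount > state.balance_a:
        raise ValueError("invalid payment amount")
    return ChannelState(state.balance_a - amount,
                        state.balance_b + amount,
                        state.seq + 1)

# Lock up 1.0 BTC on A's side, then pay 0.1 off-chain:
s0 = ChannelState(balance_a=1.0, balance_b=0.0, seq=0)
s1 = pay(s0, 0.1)  # A: 0.9, B: 0.1 -- enforceable even if A disappears
```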

What is the difference from the Lightning Network?

We have the same goals and a similar setup, but there are small structural differences. The two systems are also compatible; you could probably build a network that connects both. Since both Lightning and the duplex channels use hashed timelock contracts, transactions from one system should also be recognized by the other.
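A hashed timelock contract, the building block both systems share, can be sketched in a few lines: the payment is claimable with the secret preimage before a deadline, and refundable to the sender afterwards. This is a simplified illustration in Python, not Bitcoin Script:

```python
import hashlib

class HTLC:
    """Simplified hashed timelock contract: claimable with the preimage
    of `payment_hash` before `timeout`, refundable to the sender after."""
    def __init__(self, payment_hash: str, timeout: int):
        self.payment_hash = payment_hash
        self.timeout = timeout  # e.g. a block height or Unix timestamp

    def can_claim(self, preimage: bytes, now: int) -> bool:
        return (now < self.timeout and
                hashlib.sha256(preimage).hexdigest() == self.payment_hash)

    def can_refund(self, now: int) -> bool:
        return now >= self.timeout

secret = b"my-secret-preimage"
htlc = HTLC(hashlib.sha256(secret).hexdigest(), timeout=1000)
assert htlc.can_claim(secret, now=500)        # receiver reveals the secret in time
assert not htlc.can_claim(b"wrong", now=500)  # wrong preimage fails
assert htlc.can_refund(now=1000)              # sender recovers funds after timeout
```

Because the same hash locks the payment in both systems, revealing the preimage in one channel would let a payment be routed across both, which is why the two networks could interoperate.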

How long do you think it will take until these networks exist?

There are already a few prototypes, and besides the Lightning people, Rusty Russell from Blockstream and I are working on implementations. I already have a half-finished channel prototype that runs. The protocols are complicated, but people are working on implementing them. That is not the problem.

Right now it is more important that we do the groundwork, for example getting Segregated Witness in. Both networks need it, because they rely on chaining off-chain transactions, and for that we have to fix transaction malleability. That is exactly what Segregated Witness does.
