This was a tweet by Daniel Connolly, lead developer of Bitcoin SV, on 15 November 2019.
@Brentgunning reacted with some brainstorming about how it could be done.
I will continue the brainstorming here. Disclaimer: I am not a blockchain developer, just an enthusiast who can't help thinking about the inner mechanisms of the system.
Right now a mining node receives transactions into its memory pool (mempool). It takes a snapshot of those transactions and builds a Merkle tree out of them; this is the block template. Then it starts trying to find a correct nonce for it. In the meantime, new transactions keep coming in and are added to the mempool, which, as you can imagine, grows rapidly. The process is interrupted when the miner itself or a competing miner finds a block, and everything starts over; the mempool partly clears.
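To make the "Merkle tree out of a snapshot" step concrete, here is a minimal sketch of how a Bitcoin-style Merkle root is computed from a list of transaction hashes. The function names are my own; the double-SHA-256 and duplicate-last-node rules are the standard Bitcoin construction.

```python
import hashlib


def double_sha256(data: bytes) -> bytes:
    """Bitcoin's standard hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def merkle_root(tx_hashes: list[bytes]) -> bytes:
    """Compute a Bitcoin-style Merkle root from transaction hashes.

    At each level, pairs of nodes are concatenated and hashed; if a
    level has an odd number of nodes, the last node is paired with
    itself (Bitcoin's duplicate-last rule).
    """
    if not tx_hashes:
        raise ValueError("cannot build a Merkle tree from zero transactions")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash
        level = [double_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

The root of this tree goes into the block header, and the nonce search (hashing) happens over that header.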
The way I imagine the new upcoming process is as follows.
The miner takes a snapshot of the first N transactions, builds a block template for them and starts hashing. As the next N transactions come in, a second block template of 2 × N transactions is built and hashed upon, so the second template also contains all transactions of the first. We want to maximize profit in fees, right? And so it continues as each next batch of N transactions comes in: every new block template is bigger than the previous one and contains almost all pending transactions. My guess is that each new block template does not need to be built from scratch; the new transactions and Merkle paths can simply be appended to the side of the previous Merkle tree, and only the newly added transactions need to be validated. It is like building a bigger pyramid by adding a chunk of building blocks to its side. Googling around, I found a description of the way I think it could work, albeit in a different implementation:
https://www.certificate-transparency.org/log-proofs-work (see figure 3 and 4)
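A toy sketch of that append-to-the-side idea (class and method names are my own invention, not from any real node software): keep every level of the tree in memory, and when a new transaction arrives, recompute only the rightmost path up to the root, which is O(log n) hashes instead of a full rebuild.

```python
import hashlib


def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


class AppendOnlyMerkleTree:
    """Merkle tree that supports cheap appends.

    All levels are stored (levels[0] holds the leaf hashes).  Appending
    a leaf only recomputes the rightmost parent on each level, following
    Bitcoin's rule of pairing an odd last node with itself.
    """

    def __init__(self) -> None:
        self.levels: list[list[bytes]] = [[]]

    def append(self, leaf: bytes) -> None:
        self.levels[0].append(leaf)
        i = 0
        # Walk upward, fixing only the rightmost parent at each level.
        while len(self.levels[i]) > 1:
            if i + 1 == len(self.levels):
                self.levels.append([])
            nodes = self.levels[i]
            last = len(nodes) - 1
            # Even index means an odd count: the last node pairs with itself.
            left = nodes[last - 1] if last % 2 == 1 else nodes[last]
            right = nodes[last]
            parent = double_sha256(left + right)
            parents = self.levels[i + 1]
            parent_idx = last // 2
            if parent_idx == len(parents):
                parents.append(parent)   # tree grew a new parent node
            else:
                parents[parent_idx] = parent  # update the existing one
            i += 1

    def root(self) -> bytes:
        return self.levels[-1][0]
```

With this structure, each successive (larger) block template can reuse the tree of the previous one and just append the newly arrived, newly validated transactions.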
Now, when a new block is found by the miner itself or by a competitor, all block templates are destroyed except the last one, since it contains all the transactions. The transactions of that last template are filtered against the found block, and a new first template is built from what remains. New templates are then built and hashed in parallel again.
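The filtering step is straightforward to sketch (again, names are mine): keep only the txids from the largest template that did not make it into the found block, and use those to seed the next first template.

```python
def remaining_transactions(last_template_txids: list[str],
                           found_block_txids: list[str]) -> list[str]:
    """Return txids from the largest template that are NOT in the newly
    found block; these seed the first template of the next round."""
    mined = set(found_block_txids)  # set membership test is O(1) per txid
    return [txid for txid in last_template_txids if txid not in mined]
```

For example, if the last template held txids `["a", "b", "c", "d"]` and the found block confirmed `["b", "d"]`, the next round starts from `["a", "c"]`.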
So what happens when there are no more cores available for new block templates? We delete the first template, the one with the fewest transactions, and use that core for the next template. It acts as a block template carousel.
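The carousel could be modeled as a fixed-size queue, one slot per core (a sketch under my own naming, not any real miner's API): since templates are created in order of growing size, the oldest template in the queue is always the one with the fewest transactions, so evicting from the front frees its core for the newest, largest template.

```python
from collections import deque


class TemplateCarousel:
    """One slot per hashing core. When all cores are busy, the oldest
    (smallest) template is evicted to free a core for the newest one."""

    def __init__(self, num_cores: int) -> None:
        self.num_cores = num_cores
        self.templates: deque = deque()  # oldest/smallest at the left

    def add(self, template) -> None:
        if len(self.templates) == self.num_cores:
            self.templates.popleft()  # evict the smallest template
        self.templates.append(template)
```

So with, say, 4 cores, adding a fifth template silently drops the first one and the carousel keeps turning.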
So now we have a multitude of block templates being hashed in parallel, keeping the miner's hardware fully utilized (every core is at work) and raising the chances of finding a block. The new competitive factor will be the number of cores a miner can employ.
I imagine this indeed demonstrates the ability to scale drastically, since adding cores to the hardware is relatively easy.
Thanks for reading!