Comments
  earned 25.0¢
> The LevelDB database is known to be fragile and to get corrupted. This can cause dramatic waiting times while the database is re-created.
This has hit me many times over, even on an archival node where I just want to play with the blockchain data. It must be a serious issue for production nodes.
   6mo ago
  earned 0.0¢
@TomZ Could you link to or elaborate on why a key-value store does not work well in this case? On the surface, it looks like a perfect use case.
   6mo ago
  earned 10.0¢
@emergent_reasons I would not go so far as to say a key-value store "does not work". It has served us well for some time. It just leaves room for improvement based on the immutability of the records.
   6mo ago
  earned 25.0¢
pffft.... Bcash
.
.
.
I'M KIDDING! Bitcoin Cash is great ;-)
   6mo ago
  earned 0.0¢
Why not optimize the protocol itself too? Why does it only have to be the validation?
   6mo ago
  earned 0.0¢
So basically you are going to build multiple buffer layers, where the older the transactions you have to deal with, the slower the layer they live on. Is my statement correct?
   6mo ago
  earned 25.0¢
Great article. Although conventional wisdom is to never create a new database, it does seem appropriate for Bitcoin over the long run due to unique properties like the immutability of the records that you mention. Awesome to see your work in this direction!
   6mo ago
  earned 0.0¢
This reminds me of the Iguana project. Anything useful in there?
   6mo ago
  earned 50.0¢
@Ryan In Bitcoin we moved from CPUs to GPUs to hardware specific to Bitcoin. It was just a matter of time before we moved to usage-specific databases ;)
@Kain_niaK We indeed get always-fast access to medium-old transactions, at the cost of slightly slower access to really old transactions.
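The layered lookup being discussed could be sketched roughly as below. This is a minimal Python illustration of the idea, not the article's actual implementation; all names (`TieredUtxoStore`, `rotate`, etc.) are mine.

```python
class TieredUtxoStore:
    """Sketch of tiered buffer layers: newest records sit in the
    fastest layer, older ones in progressively slower layers."""

    def __init__(self):
        # layers[0] is the newest (fastest) buffer; later entries
        # hold progressively older records (on disk in a real node).
        self.layers = [{}]

    def add(self, outpoint, coin):
        # New outputs always land in the newest layer.
        self.layers[0][outpoint] = coin

    def rotate(self):
        # Freeze the current newest layer (e.g. on a size threshold)
        # and start a fresh empty one in front of it.
        self.layers.insert(0, {})

    def find(self, outpoint):
        # Newer layers are searched first, so recent and medium-old
        # outputs are found quickly; really old outputs pay the cost
        # of scanning more layers.
        for layer in self.layers:
            if outpoint in layer:
                return layer[outpoint]
        return None
```

Since most spends consume relatively recent outputs, most lookups terminate in the first layer or two.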
   6mo ago
  earned 0.0¢
How does it compare to LMDB? I know Monero uses it to good effect.
   6mo ago
  earned 0.0¢
<3
   6mo ago
  earned 0.0¢
Great work! I didn't realize Bitcoin uses a relational database for the UTXO store. Has that been the case since its birth? Will it require a hard fork to change the database?
The UTXO store can be efficiently managed as a memory-mapped file or time-segmented files as you do. Instead of purging a UTXO record every time it is spent, the store can function as append-only with lazy delete using a flag. Periodically, or when triggered by some threshold of deleted records, spent entries can be permanently purged. Miners actually have ample time to do the purging between blocks.
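The append-only / lazy-delete scheme described above might look something like this in miniature. This is a hypothetical Python sketch under my own assumptions (names, the 50% purge threshold, and the list-backed log are all illustrative):

```python
class AppendOnlyUtxoStore:
    """Sketch of an append-only UTXO log: spending only flags a
    record; flagged records are purged in bulk later."""

    def __init__(self, purge_threshold=0.5):
        self.records = []          # append-only log: [outpoint, coin, spent_flag]
        self.index = {}            # outpoint -> position in the log
        self.spent = 0             # count of flagged (lazily deleted) records
        self.purge_threshold = purge_threshold

    def add(self, outpoint, coin):
        self.index[outpoint] = len(self.records)
        self.records.append([outpoint, coin, False])

    def spend(self, outpoint):
        # Lazy delete: just set the flag, don't rewrite the file.
        pos = self.index.pop(outpoint)
        self.records[pos][2] = True
        self.spent += 1
        # Compact once enough of the log is dead weight.
        if self.spent / len(self.records) >= self.purge_threshold:
            self.compact()

    def compact(self):
        # Permanently purge flagged records and rebuild the index;
        # a miner could schedule this between blocks.
        self.records = [r for r in self.records if not r[2]]
        self.index = {r[0]: i for i, r in enumerate(self.records)}
        self.spent = 0
```

The point of the design is that the hot path (`spend`) is a single flag write, while the expensive rewrite is deferred and batched.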
   6mo ago