# Dgenr8’s Difficulty Adjustment Algorithm Explained

In the last few weeks, several new candidates for Bitcoin Cash’s new difficulty adjustment algorithm have emerged. Some of these proposed algorithms have been incorporated into Kyuupichan’s model [1], which attempts to simulate miners moving hash power based on their economic incentives.

Deadalnix has proposed a simpler version of his algorithm [2]. It still uses chain work to calculate a difficulty based on estimated hash rate, but his new proposal replaces the two targeting windows with one 144-block window. It is labelled as “cw-144” in Kyuupichan’s model.

One of the problems with using a simple fixed sample window, as cw-144 does, is that the calculated difficulty becomes sensitive to the particular blocks at the endpoints of the sample window. This has the potential to accentuate oscillations: an unusually long (or short) block interval perturbs the calculation once when it is produced, and again when it exits the sample window 144 blocks later.
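To see the endpoint effect concretely, here is a toy illustration (a plain moving average of inter-block times, not cw-144's actual chain-work calculation): a single outlier block shifts the estimate once when it enters the fixed window, and shifts it back when it leaves.

```python
# Toy illustration (not cw-144 itself): a plain moving average of
# inter-block times over a fixed 144-block window jumps when an
# outlier interval enters the window, and jumps again when it leaves.
WINDOW = 144
times = [600] * 400       # steady 600-second inter-block times...
times[200] = 3600         # ...with one unusually long block

def window_avg(times, end):
    """Average inter-block time over the WINDOW blocks ending at `end`."""
    return sum(times[end - WINDOW:end]) / WINDOW

before = window_avg(times, 200)   # outlier not yet in the window -> 600.0
inside = window_avg(times, 250)   # outlier inside the window -> elevated
after  = window_avg(times, 360)   # outlier has exited -> back to 600.0
```

A difficulty derived from this average would step down when the outlier enters and step back up when it leaves, even though hash rate never changed.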

A new proposal from dgenr8 (Tom Harding, maintainer of Bitcoin XT) attempts to solve this problem. The algorithm is called wt-144 [3]. As the name suggests, it shares some similarities with deadalnix’s updated proposal, including a 144-block sample window. But it also has some important and interesting differences.

# Design Approach

The “wt” in the name stands for weighted time. The calculated difficulty is based on individual inter-block times; many such times are used because each individual one is subject to variability. The basic concept for converting these inter-block times into a difficulty target is similar to the current Bitcoin difficulty calculation: it takes the difficulty of the last block in the sample window as a basis, and adjusts it up or down based on the difference between the desired and observed inter-block times. This differs from cw-144, which calculates a difficulty target “from scratch” based on estimated hash rate, without direct reference to the difficulty of the last block.
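The basic “scale the last target by observed versus expected time” idea can be sketched as follows (a simplified illustration with hypothetical names, not the actual consensus code):

```python
def simple_retarget(last_target, observed_timespan, num_blocks,
                    block_interval=600):
    """Scale the last difficulty target by observed vs. expected time.

    If blocks arrived slower than desired, the target rises (difficulty
    falls); if faster, the target falls (difficulty rises).
    Illustrative sketch only.
    """
    expected_timespan = num_blocks * block_interval
    return last_target * observed_timespan // expected_timespan
```

For example, if blocks took twice as long as desired over the sample, the target doubles, halving the difficulty.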

The interesting part of wt-144, is that these inter-block times are not all treated equally. They are weighted by two factors:

- The inter-block time is weighted by the target of the block that produced it. This means that the inter-block times of blocks mined at lower difficulty are multiplied by a larger number (since lower difficulty corresponds to a higher “target”). This makes intuitive sense: we can imagine the times being “normalized” to the difficulty of the last block. It is interesting to note that this calculation is strikingly similar to deadalnix’s chain work formula: it uses the same terms, but sums them differently. It would be interesting to analyze the theory behind the similarities and differences of the two approaches.
- The times are weighted by recency. The most recent block is weighted highest, with weights decreasing linearly back to the start of the 144-block sample window. This recency weighting makes the targeting responsive, while also providing some stability based on block history. It also means there is no sharp boundary to the sample window; such a boundary causes sudden changes as blocks enter or leave the window, which can lead to oscillations.
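As a small sanity check on the linear recency weights (illustrative Python, not the reference implementation): the weights run from 1 for the oldest block to 144 for the newest, so their average is (144 + 1) / 2. This is the bias that must later be divided out of the weighted sum.

```python
# Linear recency weights for a 144-block window: 1 (oldest) .. 144 (newest).
N = 144
weights = list(range(1, N + 1))

# The average weight is (N + 1) / 2 = 72.5. Dividing the weighted sum by
# this average -- equivalently, multiplying by 2 and dividing by N + 1 --
# removes the bias the weights would otherwise add to the total timespan.
average_weight = sum(weights) / N
```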

Because wt-144 uses every timestamp in the interval for its calculation, manipulated timestamps can affect the target. But since each individual timestamp has only a small effect on the calculated target, the impact is limited.

# Specification

The difficulty target is recalculated for each new block. The calculation is based on a “timespan” value computed over the 144-block window. To start, set timespan = 0, set “prior_timestamp” equal to the timestamp of the block just prior to the window, and set last_target to the difficulty target of the last block. Then loop through the previous 144 blocks, starting at the first (oldest) block, and accumulate timespan as follows:

- If the timestamp of the block is less than prior_timestamp, set timestamp equal to prior_timestamp (used to deal with negative inter-block times).
- Set time_i equal to timestamp - prior_timestamp (this is the inter-block time for this block).
- Set prior_timestamp equal to timestamp (for the next time through the loop).
- Multiply the inter-block time by the ratio of the current block’s difficulty target to last_target.
- Multiply this number by the recency weight. The recency weight starts at 1 and is incremented by 1 each time through the loop.
- Add the weighted inter-block time to “timespan”.
- End the loop when it reaches the most recent block.

After looping through all 144 blocks, normalize timespan by multiplying it by 2 and dividing by the number of blocks in the sample (144) plus 1.

Then the new difficulty target is calculated as last_target, multiplied by “timespan” and divided by the number of blocks in the sample times 600 seconds (which is the “expected” timespan for the sample blocks).
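Putting the steps above together, here is a sketch of the whole calculation in Python. The naming is illustrative and floating-point arithmetic is used for readability; a real consensus implementation would use carefully ordered integer arithmetic.

```python
def wt144_next_target(timestamps, targets, block_interval=600):
    """Sketch of the wt-144 calculation described above.

    timestamps: 145 entries -- the block just before the window,
                then the 144 window blocks, oldest first.
    targets:    144 difficulty targets for the window blocks.
    """
    n = len(targets)                      # 144 in the proposal
    last_target = targets[-1]
    prior_timestamp = timestamps[0]
    timespan = 0.0

    for i in range(n):
        # Clamp negative inter-block times to zero.
        ts = max(timestamps[i + 1], prior_timestamp)
        time_i = ts - prior_timestamp
        prior_timestamp = ts
        # Weight by the block's own target relative to the last target,
        # then linearly by recency (1 for the oldest block, n for the newest).
        timespan += time_i * (targets[i] / last_target) * (i + 1)

    # Normalize out the average recency weight, (n + 1) / 2.
    timespan = timespan * 2 / (n + 1)

    # Scale the last target by observed vs. expected timespan.
    return last_target * timespan / (n * block_interval)
```

With steady 600-second blocks at a constant target, the calculation returns the same target unchanged; if blocks arrive twice as slowly, the returned target doubles (difficulty halves).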

# Conclusion

Based on various simulations [4], dgenr8’s wt-144 algorithm seems to combine the best characteristics of the other proposals: the rapid responsiveness of deadalnix’s cw-144 with the stability of Kyuupichan’s k-1 algorithm.

Although it is very important to fix Bitcoin Cash's difficulty algorithm quickly, it should be replaced with the best algorithm possible. It should be an algorithm that performs well in varied and difficult circumstances, so that it can serve Bitcoin Cash for the long term. Perhaps Dgenr8's algorithm has the characteristics needed to fill that role.

# References

[1] Kyuupichan’s model: https://github.com/kyuupichan/difficulty

[2] cw-144 implementation: https://reviews.bitcoinabc.org/D601

[3] wt-144 implementation: https://reviews.bitcoinabc.org/D622

[4] worst case simulation: http://www.pvv.ntnu.no/~dagurval/daa/


**Comments**

I asked the following question in Tomas Zander’s post about simulating different difficulty adjustment algorithms, but I think it’s appropriate to ask it here too:

Theoretical question: would lowering the time between blocks make it easier to respond to changes in hash rate? Intuitively, I would guess that a shorter time between blocks would make the difficulty algorithm respond faster to changes in hash rate, because the measuring points, the blocks, come more often and you get a more granular picture of reality.

Has this been simulated?