- Each hash (32 bytes, written as a 64-digit hex string) can be interpreted as a number in the interval $[0,1)$.
- For example the hash `FFFF...FFFF` would correspond to $1-2^{-256}$, which is almost 1, and the hash `0000...0000` would correspond to 0.
- Note: A hash cannot correspond to exactly 1, only to values slightly below 1.
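This interpretation can be sketched in Python. Exact rational arithmetic is used here because a 64-bit float would round $1-2^{-256}$ up to exactly 1:

```python
from fractions import Fraction

def hash_to_number(hex_hash):
    """Interpret a 32-byte hash (64 hex digits) as an exact number in [0, 1)."""
    return Fraction(int(hex_hash, 16), 2**256)

hash_to_number("00" * 32)  # 0
hash_to_number("ff" * 32)  # 1 - 2**-256, almost 1 but strictly below 1
```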
- For a block header $h$ we denote
  - the Verushash v2.1 hash interpreted as a number in $[0,1)$ by $X(h)$ and
  - the triple Sha256 hash (Sha256t) interpreted as a number in $[0,1)$ by $Y(h)$.
- We define the Janushash number $J(h) = X(h)Y(h)^{0.7}$.
- Similar to above, where we converted a hash to a number, we can do the reverse, i.e. interpret $J(h)$ as a hash. This hash is the Janushash, but we will never compute it or work with it; instead we will solely consider the number representation $J(h)$. All theory works with numbers, so an implementation only needs to convert hashes to numbers, never numbers to hashes. For convenience we will also call the Janushash number simply the Janushash.
- Note: To represent a small number as a hash, one might require more than 32 bytes, and there exist numbers (e.g. irrational ones) which would even require infinitely many digits to be represented exactly. However this is not of interest for us.
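A minimal Python sketch of the number computations: Sha256t (triple Sha256) is available via the standard library, while Verushash v2.1 has no stdlib implementation, so here the Verushash value `x` is assumed to be an already-computed number:

```python
import hashlib

def sha256t(data: bytes) -> bytes:
    """Triple Sha256: apply SHA-256 three times."""
    for _ in range(3):
        data = hashlib.sha256(data).digest()
    return data

def to_unit(h: bytes) -> float:
    """Interpret a 32-byte hash as a number in [0, 1)."""
    return int.from_bytes(h, "big") / 2**256

def janus_number(x: float, y: float) -> float:
    """Janushash number J = X * Y^0.7."""
    return x * y**0.7

y = to_unit(sha256t(b"example header"))
```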
For a given target $\tau$, a block header $h$ solves a block if both of the following hold:
- The Sha256t must not be too small: $Y(h) \ge c$ for the constant $c = 0.005$.
- The Janushash must be below the target: $J(h) < \tau$.
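The two conditions can be sketched as a small Python check, with `x` and `y` denoting the numbers $X(h)$ and $Y(h)$:

```python
C = 0.005  # the constant c from the text

def solves_block(x: float, y: float, tau: float) -> bool:
    """A header solves a block iff its Sha256t value is not too small
    and its Janushash number is below the target."""
    return y >= C and x * y**0.7 < tau
```

Note that a header with $Y(h) < c$ never solves a block, no matter how small $X(h)$ is.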
An equivalent formulation is to require that the pair $(X(h), Y(h))$ falls into the acceptance region $\{(x,y) \in [0,1)^2 : xy^{0.7} < \tau \text{ and } y \ge c\}$.
The target controls the difficulty: if the target is decreased, the condition to solve a block becomes harder to satisfy.
If we apply the logarithmic transformation to the acceptance region, the condition $X(h)Y(h)^{0.7} < \tau$ becomes $\log X(h) + 0.7 \log Y(h) < \log \tau$ and the condition $Y(h) \ge c$ becomes $\log Y(h) \ge \log c$. Recall that $X(h), Y(h) \in [0,1)$, so all these logarithms are negative. The following figure depicts the situation in log scale: the acceptance region is bounded by the line defined by $\log x + 0.7 \log y = \log \tau$ and the line defined by $\log y = \log c$.
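As a quick numeric sanity check (a sketch, not part of the original derivation), the log-scale condition accepts exactly the same pairs as the original one:

```python
import random
from math import log

random.seed(1)
c, tau = 0.005, 0.001

def accept(x, y):
    """Original acceptance condition."""
    return y >= c and x * y**0.7 < tau

def accept_log(x, y):
    """Log-scale acceptance condition."""
    return log(y) >= log(c) and log(x) + 0.7 * log(y) < log(tau)

pairs = [(random.random(), random.random()) for _ in range(10_000)]
agree = all(accept(x, y) == accept_log(x, y) for x, y in pairs)
```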
A proper hash function should be random in the sense that each output bit cannot be predicted from the input and also cannot be predicted from other bits in its output. Therefore, with the interpretation of a hash as a number in $[0,1)$, the hash of a random header can be modeled as a random variable distributed uniformly on $[0,1)$.
We therefore define the random vector $(X, Y)$ with independent components, each uniformly distributed on $[0,1)$, modeling the pair $(X(h), Y(h))$ for a random header $h$.
On the log scale we consider the transformed vector $(-\log X, -\log Y)$. Since the logarithm is a Borel-measurable map, this is again a well-defined random vector; in fact $-\log X$ and $-\log Y$ are independent and exponentially distributed with rate 1. With this info we can do probability-theoretic calculations on the log scale.
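For intuition, the standard fact that $-\log X$ of a uniform $X$ is exponentially distributed with rate 1 (and hence has mean 1) can be checked by a quick Monte Carlo sketch:

```python
import random
from math import log

random.seed(0)
n = 200_000
# -log of a uniform(0,1) sample is Exp(1); the sample mean should be close to 1
mean = sum(-log(random.random()) for _ in range(n)) / n
```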
Recall that a block is rejected if the Sha256t hash of its header is too small, i.e. if $Y(h) < c$.
Now consider a specific mining setting. We denote the Verushash v2.1 hashrate by $\mathfrak{h}_X$ and the Sha256t hashrate by $\mathfrak{h}_Y$.
To match the CPU hashrate, hashes computed on GPU must be filtered, and from the discussion above a reasonable filter condition is to compute Verushash v2.1 only on headers $h$ whose Sha256t value satisfies $Y(h) \in [c, c+1/a)$, where $a = \mathfrak{h}_Y/\mathfrak{h}_X$ denotes the hashrate ratio. This way we would select fraction $1/a$ of GPU hashes to check Verushash v2.1 on the corresponding headers. The fraction $1/a$ is chosen such that the rate $\mathfrak{h}_Y/a$ of headers passing the filter matches the Verushash v2.1 hashrate $\mathfrak{h}_X$. Note that this is only true for $a \ge 1$, i.e. when the Sha256t hashrate is at least the Verushash v2.1 hashrate.
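A minimal sketch of such a filter in Python, under the assumption that the filter interval is $[c, c+1/a)$ with $a$ the hashrate ratio:

```python
c = 0.005

def passes_filter(y: float, a: float) -> bool:
    """Keep a GPU-hashed header for CPU Verushash evaluation only if its
    Sha256t value y lies in [c, c + 1/a); a fraction 1/a of headers pass."""
    return c <= y < c + 1/a
```

For example with $a = 25$ only Sha256t values in $[0.005, 0.045)$ are forwarded to the CPU.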
For some number $y \ge c$, consider a header $h$ with $Y(h) = y$. We denote the conditional probability to mine a block for such a header by

$$P\left[J(h) < \tau \mid Y(h) = y\right] = P\left[X(h) < \tau y^{-0.7}\right] = \tau y^{-0.7}$$

(valid for $\tau \le y^{0.7}$). We observe that the probability to mine a block given that its Sha256t is filtered to be in the interval $[c, c+1/a)$ is

$$a \int_c^{c+1/a} \tau y^{-0.7}\, dy = \frac{a\tau}{0.3}\left((c+1/a)^{0.3} - c^{0.3}\right).$$
To express the effect of the hashrate ratio $a = \mathfrak{h}_Y/\mathfrak{h}_X$ on mining efficiency we define

$$f(a) = a\left((c+1/a)^{1-\beta} - c^{1-\beta}\right)$$

for $\beta = 0.7$, and we consider the relative boost $\gamma(a) = f(a)/f(1)$.
Julia code to plot this function:

```julia
using Plots

c = 0.005
beta = 0.7
f(a) = a*((c+1/a)^(1-beta)-c^(1-beta))
g(x) = f(x)/f(1)

p = plot(g, xlim=[1,200], label = "\$\\gamma\$")
```
There is a limit on the hashrate ratio boost:

$$\lim_{a\to\infty} \gamma(a) = \lim_{a\to\infty} \frac{a\left((c+1/a)^{0.3} - c^{0.3}\right)}{f(1)} = \lim_{t\searrow 0} \frac{(c+t)^{0.3} - c^{0.3}}{t\, f(1)} = \frac{0.3\, c^{-0.7}}{f(1)} \approx 15.35,$$

where we used L'Hôpital's rule in the third step and finally plugged in the constant $c = 0.005$.
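The limit can be verified numerically (a small sketch; the value $\approx 15.35$ follows from plugging $c = 0.005$ into $0.3 c^{-0.7}/f(1)$):

```python
c, beta = 0.005, 0.7

def f(a):
    return a * ((c + 1/a)**(1 - beta) - c**(1 - beta))

limit = 0.3 * c**-0.7 / f(1)   # analytic limit of gamma(a) = f(a)/f(1)
ratio = f(1e9) / f(1)          # gamma evaluated at a huge hashrate ratio
```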
This is intended and protects Warthog against ASICs applied to Sha256t. Furthermore, at the moment, while there does not yet exist an optimized miner, it protects against exploitation of the algorithm by closed-source miners that reach a higher Sha256t hashrate. Such mining behavior will suffer heavily from being bottlenecked by the CPU hashrate.
We define the Janusscore

$$S(\mathfrak{h}_X, \mathfrak{h}_Y) = \frac{\mathfrak{h}_Y}{0.3}\left(\left(c + \frac{\mathfrak{h}_X}{\mathfrak{h}_Y}\right)^{0.3} - c^{0.3}\right).$$

With this definition we can express the expected number of mined blocks with target $\tau$ in time $t$ as $\tau\, t\, S(\mathfrak{h}_X, \mathfrak{h}_Y)$.
- For every target $\tau$ the expected number of mined blocks in time $t$ is proportional to $S(\mathfrak{h}_X,\mathfrak{h}_Y)$, and therefore $S(\mathfrak{h}_X,\mathfrak{h}_Y)$ describes the mining efficiency.
- $S(\mathfrak{h}_X,\mathfrak{h}_Y)$ takes the role of a hashrate. For example one can estimate the Janusscore $S$ by dividing the number of mined blocks by the time and the target (this can be used in pools to estimate the Janusscore based on number of shares, time and difficulty).
The Janusscore has the unit "hashes per second" and can be interpreted as an equivalent hashrate to compare different setups.
Increasing one of the hashrates $\mathfrak{h}_X$ or $\mathfrak{h}_Y$ while keeping the other fixed will always increase the Janusscore.
Python code to compute the Janusscore:

```python
# define Janusscore function
c = 0.005
S = lambda hx, hy: hy * 10 * ((c + hx/hy)**0.3 - c**0.3)/3

# example usage with 10 MH/s Verushash hashrate and 250 MH/s Sha256t hashrate
S(10000000, 250000000)
```
Julia code to compute the Janusscore:

```julia
# define Janusscore function
c = 0.005
S(hx, hy) = hy * 10 * ((c + hx/hy)^0.3 - c^0.3)/3

# example usage with 10 MH/s Verushash hashrate and 250 MH/s Sha256t hashrate
S(10000000, 250000000)
```
The conditional density of the Sha256t value $Y(h)$ of a mined block is proportional to $y^{-0.7}$ for $y \in [c, c+1/a)$ (and zero otherwise), because headers with a smaller Sha256t value are more likely to satisfy the Janushash condition. If we have observed the Sha256t values $y_1,\dots,y_n$ of $n$ mined blocks (or shares), the method of moments can be used to get an estimate $\hat a$ of the hashrate ratio: we equate the sample mean $\bar y = \frac{1}{n}\sum_{i=1}^n y_i$ with the theoretical mean

$$E\left[Y \mid \text{block mined}\right] = \frac{\int_c^{c+1/a} y\, y^{-0.7}\, dy}{\int_c^{c+1/a} y^{-0.7}\, dy} = \frac{0.3}{1.3}\, \frac{(c+1/a)^{1.3} - c^{1.3}}{(c+1/a)^{0.3} - c^{0.3}}$$

and solve for $a$. Unfortunately this equation can only be solved numerically; we cannot express the solution analytically. Since the hashrate ratio is bounded in practice, we just estimate it within the interval $[1, 100000]$ and clamp at the boundaries.
Python code to estimate the hashrate ratio:

```python
from scipy.optimize import brentq

def get_miningratio(sha256t_list):
    """Function to determine the hashrate ratio
    from a list of observed sha256t values

    :sha256t_list: list of numbers in [0,1] corresponding to sha256t hashes
    :returns: estimate of the hashrate ratio, clamped to [1, 100000]
    """
    y_avg = sum(sha256t_list) / len(sha256t_list)
    c = 0.005
    # theoretical mean Sha256t value of a mined block for filter width w = 1/a
    p = lambda w: 0.3/1.3 * ((c + w)**1.3 - c**1.3) / ((c + w)**0.3 - c**0.3)
    if y_avg < p(1/100000):  # clamp: ratio larger than we care to resolve
        return 100000
    if y_avg > p(1):         # clamp: ratio below 1
        return 1
    # p is increasing in w, so the root is bracketed in [1/100000, 1]
    w = brentq(lambda w: p(w) - y_avg, 1/100000, 1)
    return 1/w

# example usage
sha256t_list = [0.025, 0.014, 0.032]
get_miningratio(sha256t_list)
```
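As a plausibility check for the moment equation (a self-contained sketch with a hypothetical true ratio $a = 20$), one can sample Sha256t values of mined blocks from the density proportional to $y^{-0.7}$ on $[c, c+1/a)$ via inverse-CDF sampling and compare the sample mean against the theoretical moment:

```python
import random

random.seed(42)
c = 0.005
a_true = 20       # hypothetical hashrate ratio
w = 1 / a_true    # filter interval width

def sample_y():
    """Inverse-CDF sample from the density proportional to y**-0.7 on [c, c+w)."""
    u = random.random()
    lo, hi = c**0.3, (c + w)**0.3
    return (lo + u * (hi - lo))**(1 / 0.3)

ys = [sample_y() for _ in range(100_000)]
y_avg = sum(ys) / len(ys)

# theoretical mean Sha256t value of a mined block for filter width w
p = lambda w: 0.3/1.3 * ((c + w)**1.3 - c**1.3) / ((c + w)**0.3 - c**0.3)
```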