Just a quickie. Today we installed Julia 0.3.0 rc3 and re-ran a couple of benchmarks of the Chain Ladder algorithm from a previous blog entry. We updated the functions so that the matrix is copied internally rather than overwritten (this didn't seem to make any difference to the benchmarks); the updated code is on GitHub.
The code in this blog entry was run on Ubuntu 14.04 with a Core i7 processor and 16 GB RAM, using Julia version 0.3.0. The scripts are available on GitHub.
If you remember, we created a faster version of the chain ladder function after a brief exchange with one of Julia's creators:
function GetChainSquare(mTri)
    mTri = mTri[:,:] # copy the matrix rather than overwrite the input
    p = size(mTri)[2] # the size of the triangle
    dFactors = [GetFactor(i, mTri) for i = 1:(p - 1)] # the chain ladder factors
    for i = 2:p # iterate over the rows
        for j = (p - i + 2):p # iterate over the columns from the "antidiagonal"
            mTri[i, j] = mTri[i, j-1]*dFactors[j - 1]
        end
    end
    return mTri
end
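If you want to follow the logic outside Julia, here is a minimal Python sketch of the same row-by-row fill (NumPy). Note that the post doesn't show the `GetFactor` helper, so the volume-weighted development factor below is an assumption standing in for it:

```python
import numpy as np

def get_factor(j, tri):
    """Volume-weighted development factor from column j to j + 1
    (an assumed stand-in for the post's GetFactor helper), using
    only the rows where both columns are observed."""
    n = tri.shape[0] - 1 - j          # number of rows with data in column j + 1
    return tri[:n, j + 1].sum() / tri[:n, j].sum()

def chain_square(tri):
    """Fill the lower-right part of a cumulative run-off triangle,
    mirroring the Julia loop above (0-based indices)."""
    tri = tri.copy()                  # copy rather than overwrite, as in the Julia version
    p = tri.shape[1]
    factors = [get_factor(j, tri) for j in range(p - 1)]
    for i in range(1, p):             # iterate over the rows below the first
        for j in range(p - i, p):     # columns past the antidiagonal
            tri[i, j] = tri[i, j - 1] * factors[j - 1]
    return tri
```

Each unknown cell is simply the cell to its left scaled by the development factor for that column pair, which is why the fill proceeds left to right within each row.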
The timings (in microseconds) are given below:
bench(1000)
1x6 DataFrame
| Row | min    | lq     | median | uq      | max    | neval |
|-----|--------|--------|--------|---------|--------|-------|
| 1   | 12.166 | 13.786 | 14.187 | 23.8387 | 72.488 | 1000  |
The previous median for this in Julia 0.2.1 was about 25 microseconds, so roughly 10 microseconds have been shaved off. But that's not all: the timings for the original function
function GetChainSquare(mTri)
    mTri = mTri[:,:] # copy the matrix rather than overwrite the input
    nRow = size(mTri)[1]
    nCol = size(mTri)[2]
    dFactors = [GetFactor(i, mTri) for i = 1:(nCol - 1)] # the chain ladder factors
    dAntiDiag = diag(mTri[:, reverse(1:nRow)])[2:nCol] # the latest diagonal, skipping the first row
    for index = 1:length(dAntiDiag)
        mTri[index + 1, (nCol - index + 1):nCol] = dAntiDiag[index]*cumprod(dFactors[(nCol - index):(nCol - 1)])
    end
    mTri
end
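The cumprod trick is easier to see in a short Python sketch (NumPy): each row is filled in one shot, taking the latest observed value on the antidiagonal and multiplying it by the cumulative product of the remaining factors. As before, the volume-weighted factors are an assumed stand-in for the `GetFactor` helper, which the post doesn't show:

```python
import numpy as np

def dev_factors(tri):
    """Volume-weighted development factors, one per adjacent column pair
    (an assumed stand-in for the post's GetFactor helper)."""
    p = tri.shape[1]
    return np.array([tri[:p - 1 - j, j + 1].sum() / tri[:p - 1 - j, j].sum()
                     for j in range(p - 1)])

def chain_square_cumprod(tri):
    """Fill each row of a cumulative run-off triangle in one vectorised step,
    mirroring the original Julia function (0-based indices)."""
    tri = tri.copy()                        # copy rather than overwrite the input
    p = tri.shape[1]
    factors = dev_factors(tri)
    anti = np.diag(tri[:, ::-1])[1:]        # latest diagonal, skipping the first row
    for idx, latest in enumerate(anti, start=1):
        tri[idx, p - idx:] = latest * np.cumprod(factors[p - idx - 1:])
    return tri
```

This version trades the inner loop for a slice assignment per row, which is exactly what the Julia original does with `cumprod` over `dFactors`.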
is almost the same as for the faster version above:
bench(1000)
1x6 DataFrame
| Row | min    | lq     | median  | uq     | max     | neval |
|-----|--------|--------|---------|--------|---------|-------|
| 1   | 15.028 | 15.506 | 15.8465 | 16.413 | 24214.2 | 1000  |
That's down from a median of about 51 microseconds. Of course this is not a thorough benchmark, but yeah, I'd call that a nice improvement. Very nice indeed.