Nim is a statically typed, compiled programming language that produces C or C++ code for further compilation, or JavaScript code for interpretation. It is relatively new (created in 2008); its syntax is inspired by languages such as Python, Pascal, and Ada, it has strong metaprogramming capabilities, and it even supports multiple dispatch. Nim not only supports UFCS (uniform function call syntax) but also allows functions to be called with or without brackets, giving the user a lot of flexibility in their programming style. This article takes a quick look at the language by implementing a kernel matrix calculation function that allows a variety of kernels to be passed to the main calculating function.
nim --version
Nim Compiler Version 1.4.0 [Linux: amd64]
Compiled at 2020-10-18
Copyright (c) 2006-2020 by Andreas Rumpf
git hash: bdcd87afca238a0a7b2c70971827cf9172817b12
active boot switches: -d:release
The matrix type used in this article was implemented as:
type
  Matrix*[T] = ref object
    data: seq[T]
    dim: array[0..1, int64]
Nim is whitespace-sensitive: blocks are delimited by indentation rather than brackets, like one of its influences, Python. It does not use end statements, and semicolons at the end of lines are allowed but not required.
Breaking down the above code: it creates a type named Matrix. The asterisk * indicates that the type should be exported for use outside the declaring module. The keyword ref indicates that this is a reference object. In Nim, arrays are static, declared with specific lengths; the dimensions dim of the matrix are declared as a static array with int64 element type, indexed from 0 to 1 inclusive. We could equivalently have declared it as dim: array[2, int64]. The data member is declared as a sequence (seq[T]) with element type T, which is equivalent to a dynamic array. In Nim, arrays and sequences are value types. Nim has great flexibility in the kinds of types that can be created, and these are covered in the tutorial in the learn section of the official website.
A constructor is just a function (called a procedure in Nim), created with the keyword proc. For example, a possible constructor for the Matrix object could be:
proc newMatrix*[T](data: seq[T], dim: array[0..1, int64]): Matrix[T] =
  assert(data.len == dim[0]*dim[1], "Number of elements of data is not equal to nrows*ncols")
  assert(dim.len == 2, "Length of dimension array is not equal to 2")
  return Matrix[T](data: data, dim: dim)
For brevity, the implementations of the rest of the Matrix operators, such as $, column selection with [i], and so on, are omitted from the article.
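Since ncol and column selection with [i] are used later by the kernel matrix function, one possible sketch of these omitted helpers (assuming the column-major storage described below) is:

```nim
proc ncol*[T](m: Matrix[T]): int64 =
  # number of columns of the matrix
  return m.dim[1]

proc `[]`*[T](m: Matrix[T], j: int64): seq[T] =
  # select column j as a sequence (column-major storage)
  result = newSeq[T](m.dim[0])
  for i in 0..<m.dim[0]:
    result[i] = m.data[m.dim[0]*j + i]
  return result
```

The actual article implementations may differ; this is only one way to fill in the omitted pieces.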
One strength of Nim’s syntax is that it is easy to create operators. For example, element indexing for the (column-major) matrix we just created is:
proc `[]`*[T](m: Matrix[T], i: int64, j: int64): T =
  return m.data[m.dim[0]*j + i]
while insertions are simply:
proc `[]=`*[T](m: Matrix[T], i: int64, j: int64, val: T): void =
  m.data[m.dim[0]*j + i] = val
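Putting the constructor and the two operators together, a brief usage sketch (assuming the Matrix type and procedures defined above):

```nim
# a 3 x 2 column-major matrix: columns are [1, 2, 3] and [4, 5, 6]
let m = newMatrix[float64](@[1.0, 2, 3, 4, 5, 6], [3'i64, 2'i64])
echo m[1, 0]   # element in row 1, column 0 -> 2.0
m[1, 0] = 9.5  # insertion compiles even though m is a let binding,
               # because Matrix is a ref object (the reference is
               # immutable, the data it points to is not)
echo m[1, 0]   # 9.5
```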
Consider a naive implementation of a .* operator that multiplies two sequences element-wise:
proc `.*`*[T](x, y: seq[T]): seq[T] =
  let n = x.len
  result = newSeq[T](n)
  for i in 0..<n:
    result[i] = x[i] * y[i]
  return result
In the above, you may notice that the variable result was not declared, merely used. This is because every Nim procedure with a return type has a result variable implicitly declared for use.
Usage:
echo "sequence multiply: ", @[1.0, 2, 3] .* @[4.0, 5, 6]
gives us:
sequence multiply: @[4.0, 10.0, 18.0]
which is syntactically convenient. In the above code, sequences are created using the @[] notation; the equivalent for arrays is [].
Two methods of declaring variables appear in the code above. The let statement creates immutable variables: once assigned, they cannot be changed. The var statement creates mutable variables.
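A minimal illustration of the difference:

```nim
let fixed = 10
var counter = 0
counter += fixed   # fine: counter is mutable
# fixed += 1       # compile-time error: a let binding cannot be reassigned
echo counter       # 10
```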
The random module in the standard library allows a function that generates a random matrix to be implemented easily:
import std/random
proc rand*[T](nrow: int64, ncol: int64): Matrix[T] =
  # allocate the backing sequence and pass it to the constructor defined above
  result = newMatrix[T](newSeq[T](nrow*ncol), [nrow, ncol])
  for j in 0..<ncol:
    for i in 0..<nrow:
      # std/random's rand overload for a float upper bound;
      # this assumes T is a floating-point type
      result[i, j] = rand(1.0)
  return result
Kernel matrix calculations are the basis of kernel methods in machine learning applications, and they will be used here to demonstrate possible calculation methodology in Nim. Consider the data matrix \(X\) consisting of \(n\) (column) vectors \(x_{1}, x_{2}, \ldots, x_{n}\) each of size \(m\). The kernel matrix \(\mathbf{K}\) is symmetric and calculated using kernel functions of the form \(K(x_{i}, x_{j})\) which form element \(K_{i, j}\) of the kernel matrix. Below are three common kernel functions:
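The three kernel functions can be written as follows (reconstructed here to match the dot, gaussian, and polynomial procedures implemented below, with \(\theta\), \(c\), and \(d\) supplied through the params argument):

```latex
\begin{align}
K_{\text{dot}}(x, y) &= x^{T} y \\
K_{\text{gaussian}}(x, y) &= \exp\!\left(-\frac{\lVert x - y \rVert^{2}}{\theta}\right) \\
K_{\text{polynomial}}(x, y) &= \left(x^{T} y + c\right)^{d}
\end{align}
```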
More information on kernel matrices and kernel functions can be found in the references.
The kernel functions given in the above equations are implemented below:
proc dot*[T](x: seq[T], y: seq[T], params: varargs[T]): T {.inline.} =
  result = T(0)
  let n = x.len
  for i in 0..<n:
    result += x[i]*y[i]
  return result
import std/math  # for exp and pow

proc gaussian*[T](x: seq[T], y: seq[T], params: varargs[T]): T {.inline.} =
  var num: T = T(0)
  let n = x.len
  for i in 0..<n:
    var tmp = x[i] - y[i]
    num -= tmp*tmp
  return exp(num/params[0])
proc polynomial*[T](x: seq[T], y: seq[T], params: varargs[T]): T {.inline.} =
  var num: T = T(0)
  let n = x.len
  for i in 0..<n:
    num += x[i] * y[i]
  return pow(num + params[0], params[1])
The pragma {.inline.} added to the functions above indicates to the compiler that they should be inlined. There is also a varargs parameter, params: varargs[T], that allows extra parameters to be passed to the respective kernel functions.
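As a small, self-contained illustration of how varargs collects trailing arguments (sumAll is an illustrative name, not part of the article's code), this is also how 2.0 reaches params[0] in the gaussian call later:

```nim
proc sumAll(xs: varargs[float]): float =
  # xs behaves like an array of however many arguments were passed
  for x in xs:
    result += x

echo sumAll(1.0, 2.0, 3.0)   # 6.0
echo sumAll()                # 0.0 -- varargs may also be empty
```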
The function to calculate the kernel matrix can thus be given as:
proc calculateKernelMatrix*[T](kernel: proc (x: seq[T], y: seq[T],
    params: varargs[T]): T {.inline.}, data: Matrix[T], params: varargs[T]): Matrix[T] =
  let n = int64(ncol(data))
  result = Matrix[T](data: newSeq[T](n*n), dim: [n, n])
  for j in 0..<n:
    for i in j..<n:
      var tmp: T = kernel(data[i], data[j], params)
      result[i, j] = tmp
      result[j, i] = tmp
  return result
Note that we could have implemented calculateKernelMatrix
with something like this:
proc calculateKernelMatrix*[K, T](kernel: K, data: Matrix[T], params: varargs[T]): Matrix[T] =
  # ... rest of the code as above
The first implementation, however, gives us some control over the form of the kernel function argument. Usage of calculateKernelMatrix is given below.
let data = rand[float64](5, 4)
var mat = calculateKernelMatrix(dot[float64], data)
echo "dot mat: ", mat
output:
dot mat:
Matrix(4 x 4):
2.018334280223875 1.457488488464762 1.591879335753977 1.035991735383235
1.457488488464762 1.352625681173456 0.9796408613779417 0.8769877752438128
1.591879335753977 0.9796408613779417 1.649942675478743 0.6769463394423462
1.035991735383235 0.8769877752438128 0.6769463394423462 0.9119602424660425
echo "mat[0..2, 0..2]: ", mat[0..2, 0..2]
mat = calculateKernelMatrix(gaussian[float64], data, 2.0)
echo "gaussian mat: ", mat
output:
gaussian mat:
Matrix(4 x 4):
1.0 0.7961310331032529 0.7848527628052563 0.6510586647816368
0.7961310331032529 1.0 0.5935443645867173 0.7746800399321748
0.7848527628052563 0.5935443645867173 1.0 0.5466179757602834
0.6510586647816368 0.7746800399321748 0.5466179757602834 1.0
mat = calculateKernelMatrix(polynomial[float64], data, 1.0, 2.0)
echo "polynomial mat: ", mat
output:
polynomial mat:
Matrix(4 x 4):
9.110341827174576 6.039249670936819 6.717838491108478 4.145262346548838
6.039249670936819 5.534847595716871 3.918977940037199 3.523083108414718
6.717838491108478 3.918977940037199 7.022196183323436 2.812149025369085
4.145262346548838 3.523083108414718 2.812149025369085 3.655591968770808
A basic compile-to-C and run can be done using the command nim c -r -d:danger script.nim. Compiler options can be seen with the help directive, e.g. nim --help. The -d:danger flag turns off all the runtime checks and turns on the optimizer, which can improve the performance of numeric code. Further information on compiler flags can be found in the compiler documentation.
Nim is an innovative and exciting language with a novel approach, and it trusts the programmer with a lot of power and freedom, both semantically and syntactically. Its choice of backend design gives it the flexibility to integrate easily with C, C++, and JavaScript. For example, calling functions from C libraries and passing the respective compiler flags is very easy, which is important from a data science/technical computing point of view. The macro system (not covered here) gives programmers a lot of power to define compile-time behavior.
Nim has a good forum where queries can be raised and resolved. It has good IDE support for certain popular environments, such as Visual Studio Code, in which the code in this article was developed. Nim’s VS Code extension provides very good syntax support and spots bugs in your code before you compile it, and it does this without feeling intrusive, which is important for programmers who dislike being distracted or overwhelmed by IDE features.
From my experience, Nim does have a few downsides. The first is documentation. Recently, however, the documentation for the language itself has improved, and there are now more and more articles, and a recommended book, where information can be found. The official documentation of the standard library, however, is not as complete as it could be. The standard library is still nascent and needs more time and effort invested to mature. Though there is a standard library implementation of concurrency and an interface to OpenMP, it is still not as user-friendly, mature, or robust as in alternative languages such as D, Chapel, or Julia, though libraries such as Arraymancer mitigate this shortfall for multidimensional arrays. Nim also has no in-language feature support for SIMD, while the aforementioned languages do: you can call SIMD C functions in Nim, as you can in the other languages, but the Nim compiler has no mechanism for automatically converting your code to take advantage of SIMD-enabled CPUs, whereas the other mentioned languages do.
Overall, however, Nim shows a lot of promise as a flexible and powerful language, both for systems programming and for building a wide array of applications. Hopefully, it will mature quickly and become a popular choice for developers.