Implementing Givens QR in D

Author: Dr Chibisi Chima-Okereke Created: November 18, 2020 04:06:30 GMT Published: November 18, 2020 04:06:30 GMT

Introduction

In a previous article, Householder and Gram-Schmidt QR decompositions were implemented in D. This article presents an implementation of the Givens QR decomposition. The QR decomposition \(A = QR\) is used in innumerable applications. One fundamental use is solving the linear system \(Ax = b\) for \(x\): the matrix \(A\) is transformed into an upper triangular matrix \(R\), which we know how to solve (using back substitution). The problem is rewritten as \(Rx = Q^{T}b\), that is, \(Rx = y\); since \(y\) is known, it is easy to solve for \(x\).
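Back substitution itself is worth recalling: starting from the last row of \(Rx = y\), each unknown is solved using those already found. The `backSubstitution!(Upper)` routine used later comes from the previous article; the sketch below is an illustrative stand-alone version using a jagged array in place of the `Matrix` type.

```d
import std.stdio: writeln;

/* Illustrative back substitution for an upper triangular system Rx = y.
   A sketch only: R is a jagged array here, not the article's Matrix type. */
T[] backSubSketch(T)(T[][] R, T[] y)
{
  auto x = y.dup;
  foreach_reverse(i; 0 .. x.length)
  {
    foreach(j; (i + 1) .. x.length)
      x[i] -= R[i][j]*x[j]; /* subtract the already-solved terms */
    x[i] /= R[i][i];        /* divide by the diagonal entry */
  }
  return x;
}

void main()
{
  double[][] R = [[2.0, 1.0], [0.0, 4.0]];
  double[] y = [5.0, 8.0];
  writeln(backSubSketch(R, y)); /* [1.5, 2] */
}
```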

The \(Q\) matrix is simply a transformation matrix; in Householder (reflection) and Givens (rotation) QR, it is given by a series of transformations \(Q = Q_1 Q_2 \ldots Q_n\). The following references were useful in writing this article:

Implementing Givens QR Solve

What is a Givens rotation?

A Givens rotation is a transformation induced by multiplying by a Givens matrix so as to introduce a zero at a particular location of our matrix. The Givens matrix is

$$ G(i, k, \theta) = \begin{bmatrix} 1 & \ldots & 0 & \ldots & 0 & \ldots & 0 \\
\vdots & \ddots & \vdots & \quad & \vdots & \quad & \vdots \\
0 & \ldots & c & \ldots & s & \ldots & 0 \\
\vdots & \quad & \vdots & \ddots & \vdots & \quad & \vdots \\
0 & \ldots & -s & \ldots & c & \ldots & 0 \\
\vdots & \quad & \vdots & \quad & \vdots & \ddots & \vdots \\
0 & \ldots & 0 & \ldots & 0 & \ldots & 1 \\
\end{bmatrix} $$

with the \(c\) entries at positions \((i, i)\) and \((k, k)\), \(s\) at position \((i, k)\), and \(-s\) at position \((k, i)\),

where \(c = \cos{\theta}\) and \(s = \sin{\theta}\) for some angle \(\theta\), though we do not usually evaluate \(\theta\) explicitly. Essentially, the Givens matrix is an identity matrix with two non-unity \(\cos{\theta}\) values on the diagonal and two complementary off-diagonal \(\sin{\theta}\) values, one of which is negative. So

$$ g_{jj} = 1 \qquad \text{for } j \neq i, k\\
g_{ii} = g_{kk} = c\\
g_{ik} = -g_{ki} = s\\
$$

As the notation \(G(i, k, \theta)\) suggests, the Givens rotation matrix is constructed from the \(i\)-th and \(k\)-th elements of the respective row or column vector \(x\), and \(c\) and \(s\) can be obtained by:

$$ c = \frac{x_{i}}{\sqrt{x_{i}^{2} + x_{k}^{2}}}, \qquad s = \frac{-x_{k}}{\sqrt{x_{i}^{2} + x_{k}^{2}}} $$

So we can immediately implement a Givens transformation matrix object:

import std.math: hypot;
import std.traits: isFloatingPoint;

struct GivensObj(T)
if(isFloatingPoint!(T))
{
  T c;         // cos(theta)
  T s;         // sin(theta)
  size_t i;    // first affected index
  size_t k;    // second affected index (the entry to be zeroed)
  size_t size;
  this(T xi, T xk, size_t i, size_t k, size_t size)
  {
    // hypot avoids overflow/underflow in sqrt(xi^2 + xk^2)
    T denom = hypot(xi, xk);
    this.c = xi/denom;
    this.s = -xk/denom;
    this.i = i;
    this.k = k;
    this.size = size;
  }
  this(T[] x, size_t[] idx)
  {
    this(x[0], x[1], idx[0], idx[1], idx[2]);
  }
}

A simple example of inducing a zero into a vector is the situation below:

$$ \begin{bmatrix} c & -s \\
s & c \\
\end{bmatrix} \begin{bmatrix} a \\
b \\
\end{bmatrix} = \begin{bmatrix} r \\
0 \\
\end{bmatrix} $$

which gives:

$$ ac - sb = r \\
as + cb = 0 $$

and

$$ \frac{s}{c} = -\frac{b}{a} $$

then by the identity \(\tan^2{\theta} + 1 = \sec^2{\theta}\) (or whichever method suits):

$$ \cos{\theta} = \frac{a}{\sqrt{a^2 + b^2}}, \qquad \sin{\theta} = -\frac{b}{\sqrt{a^2 + b^2}} $$

so

$$ r = \sqrt{a^2 + b^2} $$
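As a concrete check of these formulas (the numbers are my own, not from the article): take \(a = 3\), \(b = 4\), so \(c = 3/5\), \(s = -4/5\), and

$$ \begin{bmatrix} c & -s \\
s & c \\
\end{bmatrix} \begin{bmatrix} 3 \\
4 \\
\end{bmatrix} = \begin{bmatrix} 3/5 & 4/5 \\
-4/5 & 3/5 \\
\end{bmatrix} \begin{bmatrix} 3 \\
4 \\
\end{bmatrix} = \begin{bmatrix} 5 \\
0 \\
\end{bmatrix} $$

with \(r = \sqrt{3^2 + 4^2} = 5\), as expected.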

A Givens transformation affects only two rows (or columns) of a matrix. The transformation \(G(i, k, \theta)^{T} A\) is given by:

auto mult(T)(GivensObj!(T) g, ref Matrix!(T) A)
{
  immutable auto n = A.ncol;
  // rotate rows i and k of A in place
  for(long j = 0; j < n; ++j)
  {
    T t1 = A[g.i, j];
    T t2 = A[g.k, j];
    A[g.i, j] = g.c*t1 - g.s*t2;
    A[g.k, j] = g.s*t1 + g.c*t2;
  }
  return A;
}

and the complementary transformation \(A \, G(i, k, \theta)\) is given by:

auto mult(T)(ref Matrix!(T) A, GivensObj!(T) g)
{
  immutable auto m = A.nrow;
  // rotate columns i and k of A in place
  for(long j = 0; j < m; ++j)
  {
    T t1 = A[j, g.i];
    T t2 = A[j, g.k];
    A[j, g.i] = g.c*t1 - g.s*t2;
    A[j, g.k] = g.s*t1 + g.c*t2;
  }
  return A;
}
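The solver below also applies a rotation to the right-hand-side vector via `g.mult(y)`, where `y` is a plain `T[]`. That overload is not shown here; a minimal sketch consistent with the matrix versions above might be:

```d
/* Sketch of a vector overload: applies G(i, k, theta)^T to x in place.
   Assumes the GivensObj fields defined earlier; not from the article. */
auto mult(T)(GivensObj!(T) g, ref T[] x)
{
  T t1 = x[g.i];
  T t2 = x[g.k];
  x[g.i] = g.c*t1 - g.s*t2;
  x[g.k] = g.s*t1 + g.c*t2;
  return x;
}
```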

and the code for the Givens QR solver is given by:

auto qrSolver(QRAlgorithm algo, T)(Matrix!(T) A, T[] b)
if(isFloatingPoint!(T) && (algo == Givens))
{
  immutable auto m = A.nrow;
  immutable auto n = A.ncol;
  auto Q = eye!(T)(m);
  auto Rp = A.dup;
  auto y = b.dup;
  // zero out each column j from the bottom up, rotating adjacent rows
  for(long j = 0; j < n; ++j)
  {
    for(long i = m - 1; i > j; --i)
    {
      auto g = GivensObj!(T)([Rp[i - 1, j], Rp[i, j]], 
                              [i - 1, i, m]);
      Q.mult(g);  // accumulate the transformations into Q
      g.mult(y);  // apply the rotation to the right-hand side
      g.mult(Rp); // apply the rotation to R
    }
  }
  
  // copy the upper triangular n x n block of Rp into R
  auto R = zeros!(T)(n, n);
  for(long i = 0; i < n; ++i)
  {
    for(long j = i; j < n; ++j)
    {
      R[i, j] = Rp[i, j];
    }
  }
  Q = Q[0..n];
  y = backSubstitution!(Upper)(R, y[0..n]);
  return QRSolve!(T)(Q, R, y);
}

Running this solver on the data from the previous article:

writeln("qr solve for Givens: ", qrSolver!(Givens)(A, b));

gives the output:

qr solve for Givens: QRSolve!double(Matrix(10 x 5)
0.3757     0.1337     0.4163     -0.07128   -0.02322   
0.3884     -0.4844    0.0277     0.002577   -0.2394    
0.3562     -0.1569    -0.6226    0.1604     -0.01731   
0.3248     0.432      0.1625     -0.3244    -0.1595    
0.3511     0.2785     0.2983     0.2819     0.4118     
0.02568    0.3369     -0.374     0.3135     0.4796     
0.3167     -0.1565    0.1411     0.4385     0.0182     
0.2352     0.2019     -0.1087    0.3636     -0.565     
0.3209     -0.4392    0.01931    -0.315     0.4424     
0.3052     0.2951     -0.3886    -0.5123    -0.03973   
, Matrix(5 x 5)
2.288      1.517      1.607      1.892      1.183      
0          1.105      0.7235     0.07972    0.07877    
0          0          0.6674     0.299      -0.4158    
0          0          0          0.4826     0.6031     
0          0          0          0          0.9661     
, [0.344246, 0.427925, 0.356672, 0.979463, 0.213499])

which is the same output as for the other QR decomposition implementations.

That’s it! Thanks for reading!