# TNT


## Latest revision as of 17:14, 20 September 2009

The Template Numerical Toolkit (TNT) is a series of C++ templates that implement matrices and vectors. You can get the code and documentation from http://math.nist.gov/tnt. The most useful basic data type is `TNT::Array2D`, which is the preferred matrix data type. `TNT::Array1D` is occasionally useful for vectors that you are using with TNT code. Operators and `[]`s are overloaded for matrices as you would expect, and the templates handle memory allocation and sizing. The associated JAMA templates (same URL) provide Cholesky, eigenvalue, LU, QR, and SVD decompositions.

The nice things about this library are that it is simple, free, and implemented entirely as templates (so you just add -I/path/to/tnt to your compiler flags and include the headers). On the downside, it's a bit limited in what it provides.

Oddly, the library doesn't transpose matrices. It also doesn't invert them. Here's the simplest matrix inversion (and determinant) code:

```cpp
#include <assert.h>
#include <tnt_array1d.h>
#include <tnt_array2d.h>
#include <jama_lu.h>

template<class T>
TNT::Array2D<T> invert(const TNT::Array2D<T> &M)
{
    assert(M.dim1() == M.dim2()); // square matrices only please

    // solve for inverse with LU decomposition
    JAMA::LU<T> lu(M);

    // create identity matrix
    TNT::Array2D<T> id(M.dim1(), M.dim2(), (T)0);
    for (int i = 0; i < M.dim1(); i++) id[i][i] = 1;

    // solves A * A_inv = Identity
    return lu.solve(id);
}

template<class T>
TNT::Array2D<T> transpose(const TNT::Array2D<T> &M)
{
    TNT::Array2D<T> tran(M.dim2(), M.dim1());
    for (int r = 0; r < M.dim1(); ++r)
        for (int c = 0; c < M.dim2(); ++c)
            tran[c][r] = M[r][c];
    return tran;
}
```

Howdy:

I am not sure I should be able to edit this, but I can. Did you folks end up rolling your own transpose? I have recently done a very short survey of C++ matrix packages for SVD, with the notable exception of blast. "newmat" was all but useless for anything but positive definite matrices, and I was only using symmetric square matrices of size ~600x600.

While I'm here, one more gotcha with TNT is that you need to use the "matmult" methods instead of naively doing C = A * B. I convinced myself of this with a little toy SVD on 3x3 random numbers.

--John

Hey, it is indeed surprising that the library does not have such basic operations. I have taken the liberty of fixing a few small bugs in the above code and putting it into the template style of the rest of the library. I also posted the transpose function, although it is not complicated. John, you can edit the header and add the following operator overload for multiplication (it uses the same code as matmult). Unfortunately, this entire library was written using some very bad practices; someone should probably undertake the task of fixing it.

```cpp
template<class T>
Array2D<T> Array2D<T>::operator*(const Array2D<T> &B) const
{
    if (dim2() != B.dim1())
        return Array2D<T>();

    int M = dim1();
    int N = dim2();
    int K = B.dim2();

    Array2D<T> C(M, K);

    for (int i = 0; i < M; i++)
        for (int j = 0; j < K; j++)
        {
            T sum = 0;
            for (int k = 0; k < N; k++)
                sum += (*this)[i][k] * B[k][j];
            C[i][j] = sum;
        }

    return C;
}
```

Note: As far as I know, the JAMA routines are fairly straight ports from LAPACK and EISPACK, which goes a long way toward explaining the coding style.