Department of Mathematics
http://hdl.handle.net/2104/4773
2016-05-06T11:09:13Z
Applications of full rank factorization to solving matrix equations
http://hdl.handle.net/2104/9527
In the study of matrices, we are always searching for tools which allow us to simplify our investigations. Because full rank factorizations exist for all matrices and their properties often help to simplify arguments, their uses are abundant. There exist many matrix equations for which solutions are otherwise quite difficult to find. Full rank factorizations and generalized inverses allow us to easily find solutions to many such equations. Their properties can also be used to study the diagonalization of non-square matrices and to develop conditions under which matrices are simultaneously diagonalizable. Finally, the full rank factorization can be used to derive canonical forms and other factorizations such as the singular value decomposition.
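As a concrete illustration (not taken from the thesis — the SVD-based construction and the test matrix are illustrative choices), a full rank factorization A = FG can be computed from the singular value decomposition, and the Moore-Penrose inverse then recovered from F and G via the standard identity A⁺ = Gᵀ(GGᵀ)⁻¹(FᵀF)⁻¹Fᵀ:

```python
import numpy as np

def full_rank_factorization(A, tol=1e-10):
    """One construction of a full rank factorization A = F @ G via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))       # numerical rank
    F = U[:, :r] * s[:r]           # n x r, full column rank
    G = Vt[:r, :]                  # r x m, full row rank
    return F, G

# Rank-2 example: the second row is twice the first
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
F, G = full_rank_factorization(A)
assert np.allclose(F @ G, A)

# Moore-Penrose inverse from the factorization:
# A+ = G^T (G G^T)^{-1} (F^T F)^{-1} F^T
A_pinv = G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T
assert np.allclose(A_pinv, np.linalg.pinv(A))
```

The inner matrices GGᵀ and FᵀF are r×r and invertible precisely because F and G have full column and row rank, which is what makes this pseudoinverse formula work for any matrix.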
1992-12-01T00:00:00Z
A multigrid Krylov method for eigenvalue problems.
http://hdl.handle.net/2104/9514
We are interested in computing eigenvalues and eigenvectors of matrices derived from differential equations. These are often large sparse matrices, both symmetric and nonsymmetric. Restarted Arnoldi methods are iterative methods for eigenvalue problems based on Krylov subspaces. Multigrid methods solve differential equations by taking advantage of a hierarchy of discretizations. We propose a multigrid Krylov method that combines Arnoldi and multigrid methods, compare the new approach with other methods, and explore the theory that explains its efficiency.
2015-07-31T00:00:00Z
Krylov methods for solving a sequence of large systems of linear equations.
http://hdl.handle.net/2104/9511
Consider solving a sequence of linear systems A^(i) x^(i) = b^(i), i = 1, 2, ..., where A^(i) ∈ ℂ^(n×n) and b^(i) ∈ ℂ^n, using variants of Krylov subspace methods such as GMRES. For a single system Ax = b, it is well known that the eigenvectors of the coefficient matrix A can be used to speed up the convergence of GMRES by deflating the corresponding eigenvalues. In this dissertation, we propose a deflation-based algorithm that uses the eigenvalue and eigenvector information obtained from one system to improve the convergence of GMRES on subsequent systems. When the change in the system is small enough, the algorithm will REUSE the eigenvectors from the previous system to deflate the small eigenvalues of the new system via a projection, speeding up convergence. When the change is significant enough that the projection loses effectiveness, the algorithm will RECYCLE the eigenvectors from the previous system by adding them to the new Krylov subspace, improving them until they are once again suitable candidates for deflation. If the system has changed too much, or the new system is completely unrelated to the previous one, the algorithm will REGENERATE a new set of eigenvectors to help with deflation.
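A minimal numerical sketch of why deflating small eigenvalues helps (this is an illustration of the general principle, not the dissertation's algorithm; the test matrix, projector, and tolerances are assumptions): removing a few tiny eigenvalues from a symmetric operator's spectrum dramatically shrinks the effective condition number that governs Krylov convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Symmetric test matrix with three tiny eigenvalues that would slow Krylov convergence
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-5, 1e-4, 1e-3], np.linspace(1.0, 2.0, n - 3)])
A = Q @ np.diag(eigs) @ Q.T

Z = Q[:, :3]                 # eigenvectors of the tiny eigenvalues (the "REUSE" data)
P = np.eye(n) - Z @ Z.T      # orthogonal projector away from span(Z)

# Spectrum of the deflated operator, restricted to the complement of span(Z)
vals = np.sort(np.abs(np.linalg.eigvalsh(P @ A @ P)))
nonzero = vals[vals > 1e-8]  # discard the 3 deflated directions (eigenvalue 0)

kappa_full = eigs.max() / eigs.min()        # ~2e5
kappa_defl = nonzero.max() / nonzero.min()  # ~2
assert kappa_defl < kappa_full / 1e3
```

Here Z holds exact eigenvectors, so the projection removes the small eigenvalues cleanly; the REUSE/RECYCLE/REGENERATE distinction in the abstract is about what to do when Z comes from a *previous* system and is only approximately invariant for the current one.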
2015-07-22T00:00:00Z
Boundary condition dependence of spectral zeta functions.
http://hdl.handle.net/2104/9459
In this work, we provide the analytic continuation of the spectral zeta function associated with the one-dimensional regular Sturm-Liouville problem and the two-dimensional Laplacian on the annulus. In the one-dimensional setting, we consider general separated and coupled boundary conditions, and on the annulus we restrict our work to Dirichlet-Robin boundary conditions. In both cases, we use our results to calculate the coefficients of the asymptotic expansion of the associated heat kernel. In the one-dimensional case, we additionally use the analytically continued spectral zeta function to compute the determinant of the Sturm-Liouville operator.
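For context, the three objects the abstract relates (a standard set of identities in generic notation, not the thesis's own symbols) are tied together by the Mellin transform: for a positive operator P with eigenvalues λₙ,

```latex
\zeta_P(s) \;=\; \sum_{n} \lambda_n^{-s}
          \;=\; \frac{1}{\Gamma(s)} \int_0^\infty t^{\,s-1}\,
                \operatorname{Tr} e^{-tP}\, \mathrm{d}t,
\qquad
\operatorname{Tr} e^{-tP} \;\sim\; \sum_{k \ge 0} a_k\, t^{(k-d)/2}
\quad (t \to 0^+),
\qquad
\det P \;=\; e^{-\zeta_P'(0)}.
```

The small-t expansion produces the heat kernel coefficients aₖ, and the analytic continuation of ζ_P(s) beyond its half-plane of convergence is what makes the derivative ζ_P′(0), and hence the determinant, well defined.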
2015-07-14T00:00:00Z