Alternating direction implicit (ADI) methods are computationally efficient and numerically effective tools for computing low-rank solutions of large-scale matrix equations. It is known in the literature that the low-rank ADI method for Lyapunov equations is a Petrov-Galerkin projection algorithm that implicitly performs model order reduction. It recursively enforces interpolation at the mirror images of the ADI shifts and places the poles of the reduced-order models at the ADI shifts. In this paper, we show that the low-rank ADI methods for Sylvester and Riccati equations are also Petrov-Galerkin projection algorithms that implicitly perform model order reduction. These methods likewise enforce interpolation at the mirror images of the ADI shifts; unlike the Lyapunov case, however, they do not place the poles at the mirror images of the interpolation points, that is, at the ADI shifts. Instead, their pole placement ensures that the projected Sylvester and Riccati equations they implicitly solve admit unique solutions.
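For reference, one common residual-based formulation of the low-rank ADI iteration for the Lyapunov equation $AX + XA^{\top} = -BB^{\top}$ (with $A$ stable, real shifts $\mu_k$ in the open left half-plane, $W_0 = B$, and $Z_0$ empty; this notation is assumed here rather than taken from the paper) reads
\[
V_k = (A + \mu_k I)^{-1} W_{k-1}, \qquad
W_k = W_{k-1} - 2\,\operatorname{Re}(\mu_k)\, V_k, \qquad
Z_k = \bigl[\, Z_{k-1},\ \sqrt{-2\,\operatorname{Re}(\mu_k)}\, V_k \,\bigr],
\]
where $X_k = Z_k Z_k^{\top} \approx X$ is the low-rank approximation and the residual satisfies $AX_k + X_k A^{\top} + BB^{\top} = W_k W_k^{\top}$. The shifted solve in the first step is the expensive operation referred to above; the remaining updates are cheap.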
By observing that the ADI methods for Lyapunov, Sylvester, and Riccati equations differ only in pole placement and not in their interpolatory nature, we show that the shifted linear solves, which constitute the bulk of the computational cost, can be shared among them. The pole-placement step involves only small-scale operations and is therefore inexpensive. We propose a unified ADI framework that requires only two shifted linear solves per iteration to simultaneously solve six Lyapunov equations, one Sylvester equation, and ten Riccati equations, thus substantially increasing the return on the computational investment in the linear solves. All operations needed to extract the individual solutions from these shared linear solves are small-scale and inexpensive. Three numerical examples are presented to demonstrate the effectiveness of the proposed ADI framework.
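To make the sharing idea concrete, the following minimal sketch reuses a single sparse factorization of a shifted matrix for the right-hand sides of several equations. It is an illustration under assumed data only: the matrix A, the shift mu, and the residual factors W_lyap and W_ricc are hypothetical and do not come from the paper, whose actual framework and bookkeeping are more involved.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Minimal sketch of the sharing idea (not the paper's algorithm): one sparse
# factorization of (A + mu*I) is applied to residual factors of several
# matrix equations that share the coefficient matrix A.
n = 1000
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")  # stable test matrix
mu = -1.5  # an ADI shift in the open left half-plane (assumed real here)

lu = spla.splu((A + mu * sp.identity(n, format="csc")).tocsc())  # factor once

# Hypothetical low-rank residual factors of different equations that all
# require a solve with the same shifted matrix in the current iteration.
rng = np.random.default_rng(0)
W_lyap = rng.standard_normal((n, 2))   # e.g. a Lyapunov residual factor
W_ricc = rng.standard_normal((n, 3))   # e.g. a Riccati residual factor

# One factorization, several right-hand sides: the expensive step is shared,
# and the per-equation updates that follow are small-scale.
V = lu.solve(np.hstack([W_lyap, W_ricc]))
V_lyap, V_ricc = V[:, :2], V[:, 2:]
```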