List of common linear algebra operations in PyTorch:
- torch.det(a) – Computes the determinant of a square matrix a.
- torch.inverse(a) – Computes the inverse of a square matrix a.
- torch.lu(a) – Computes the LU factorization of a matrix.
- torch.qr(a) – Computes the QR decomposition of a matrix.
- torch.cholesky(a) – Computes the Cholesky decomposition of a symmetric positive-definite matrix.
- torch.svd(a) – Computes the singular value decomposition of a matrix. Returns U, S, V such that a = U @ S.diag() @ V.t().
- torch.eig(a, eigenvectors=True) – Computes the eigenvalues and eigenvectors of a square matrix a.
- torch.linalg.eig(a) – Modern version to compute eigenvalues and eigenvectors.
- torch.norm(a, p='fro') – Frobenius or other norms of matrix a.
- torch.linalg.norm(a, ord) – Computes vector norms with specific orders.
- torch.trace(a) – Computes the sum of elements on the diagonal of matrix a.
- torch.ger(a, b) – Computes the outer product of vectors a and b.
- torch.dot(a, b) – Computes the dot product of two 1-D tensors a and b.
- torch.solve(b, A) – Solves the linear system of equations AX = B.
- torch.linalg.solve(a, b) – Modern version to solve the linear system AX = B.
- torch.matrix_rank(a) – Computes the rank of matrix a.
- torch.pinverse(a) – Computes the pseudo-inverse of a matrix a.
Here are examples of these linear algebra operations in PyTorch, with both code and expected output. Note that several legacy functions used below (such as torch.eig, torch.solve, and torch.lstsq) have been deprecated or removed in recent PyTorch releases in favor of their torch.linalg counterparts, so those examples may require an older PyTorch version:
import torch
- Determinant (torch.det):
a = torch.tensor([[1., 2.], [3., 4.]])
result = torch.det(a)
print(result)
Output:
tensor(-2.)
- Inverse (torch.inverse):
result = torch.inverse(a)
print(result)
Output:
tensor([[-2.0000, 1.0000],
[ 1.5000, -0.5000]])
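As a quick sanity check (not part of the original snippet), the inverse should multiply back to the identity up to float32 round-off:
print(torch.allclose(a @ result, torch.eye(2), atol=1e-5))  # a @ inverse(a) ≈ I, prints True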
- LU Decomposition (torch.lu):
A_LU, pivots = torch.lu(a)
P, L, U = torch.lu_unpack(A_LU, pivots)
print(P)
print(L)
print(U)
Output:
tensor([[0., 1.],
[1., 0.]])
tensor([[1.0000, 0.0000],
[0.3333, 1.0000]])
tensor([[3.0000, 4.0000],
[0.0000, 0.6667]])
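The unpacked factors satisfy a = P @ L @ U, which can be confirmed directly (a quick check, not part of the original listing):
print(torch.allclose(P @ L @ U, a))  # True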
- QR Decomposition (torch.qr):
a = torch.tensor([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
Q, R = torch.qr(a)
print(Q)
print(R)
Output:
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
tensor([[-14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
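Q and R should multiply back to a (individual column/row signs can differ between builds, but the product is the same); a quick check with a loose tolerance for float32 round-off, not part of the original listing:
print(torch.allclose(Q @ R, a, atol=1e-3))  # True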
- Cholesky Decomposition (torch.cholesky):
a = torch.tensor([[4., 12.], [12., 37.]])
result = torch.cholesky(a)
print(result)
Output:
tensor([[2.0000, 0.0000],
[6.0000, 1.0000]])
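Here result is the lower-triangular factor L, so L @ L.t() reproduces a; a quick check (not part of the original listing):
print(torch.allclose(result @ result.t(), a))  # True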
- SVD (torch.svd):
a = torch.tensor([[1., 2.], [3., 4.]])
U, S, V = torch.svd(a)
print(U)
print(S)
print(V)
Output:
tensor([[-0.4046, -0.9145],
[-0.9145, 0.4046]])
tensor([5.4649, 0.3657])
tensor([[-0.5760, -0.8174],
[ 0.8174, -0.5760]])
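To illustrate the relation a = U @ S.diag() @ V.t() noted above, the factors can be multiplied back together (a quick check, not part of the original listing):
print(torch.allclose(U @ torch.diag(S) @ V.t(), a))  # True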
- Eigenvalues and Eigenvectors (torch.eig):
a = torch.tensor([[0., 1.], [-2., -3.]])
eigenvalues, eigenvectors = torch.eig(a, eigenvectors=True)
print(eigenvalues)
print(eigenvectors)
Output:
tensor([[-1., 0.],
[-2., 0.]])
tensor([[ 0.7071, -0.4472],
[-0.7071,  0.8944]])
- Eigenvalues and Eigenvectors (torch.linalg.eig):
eigenvalues, eigenvectors = torch.linalg.eig(a)
print(eigenvalues)
print(eigenvectors)
Output:
tensor([-1.+0.j, -2.+0.j])
tensor([[ 0.7071+0.j, -0.4472+0.j],
[-0.7071+0.j,  0.8944+0.j]])
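The defining relation A V = V diag(λ) can be checked directly; since torch.linalg.eig returns complex tensors, a is cast to the same dtype first (a quick check, not part of the original listing):
print(torch.allclose(a.to(eigenvalues.dtype) @ eigenvectors, eigenvectors @ torch.diag(eigenvalues)))  # True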
- Matrix Norm (torch.norm):
a = torch.tensor([[0., 1.], [-2., -3.]])
result = torch.norm(a, p='fro')
print(result)
Output:
tensor(3.7417)
- Vector Norm (torch.linalg.norm):
a = torch.tensor([1., -2., 3.])
result = torch.linalg.norm(a, ord=2)
print(result)
Output:
tensor(3.7417)
- Trace of a Matrix (torch.trace):
a = torch.tensor([[1, 2], [3, 4]])
result = torch.trace(a)
print(result)
Output:
tensor(5)
- Outer Product (torch.ger):
a = torch.tensor([1, 2])
b = torch.tensor([3, 4])
result = torch.ger(a, b)
print(result)
Output:
tensor([[3, 4],
[6, 8]])
- Inner Product (torch.dot):
result = torch.dot(a, b)
print(result)
Output:
tensor(11)
- Solving Linear Systems (torch.solve):
A = torch.tensor([[3., 2.], [1., 2.]])
B = torch.tensor([[2.], [0.]])
solution, _ = torch.solve(B, A)
print(solution)
Output:
tensor([[ 1.0000],
[-0.5000]])
- Solving Linear Systems (torch.linalg.solve):
solution = torch.linalg.solve(A, B)
print(solution)
Output:
tensor([[ 1.0000],
[-0.5000]])
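Substituting the solution back into the system confirms it (a quick check, not part of the original listing; the loose atol allows for float32 round-off):
print(torch.allclose(A @ solution, B, atol=1e-6))  # True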
- Cross Product (torch.cross):
a = torch.tensor([1, 0, 0])
b = torch.tensor([0, 1, 0])
result = torch.cross(a, b)
print(result)
Output:
tensor([0, 0, 1])
- Matrix Rank (torch.matrix_rank):
a = torch.tensor([[1., 2.], [2., 4.]])  # floating-point dtype; matrix_rank uses SVD internally
result = torch.matrix_rank(a)
print(result)
Output:
tensor(1)
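The rank is 1 because the second row is exactly twice the first; for contrast, the full-rank matrix from the determinant example gives 2 (a quick illustration, not part of the original listing):
print(torch.matrix_rank(torch.tensor([[1., 2.], [3., 4.]])))  # tensor(2)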
- Least-Squares Solution (torch.lstsq):
A = torch.tensor([[2., 3.], [1., 2.]])
B = torch.tensor([[1.], [1.]])
solution, _ = torch.lstsq(B, A)
print(solution)
Output:
tensor([[-1.0000],
[ 1.0000]])
- Pseudo-Inverse (torch.pinverse):
a = torch.tensor([[1., 2.], [3., 4.]])
result = torch.pinverse(a)
print(result)
Output:
tensor([[-2.0000, 1.0000],
[ 1.5000, -0.5000]])
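Since this a is invertible, the pseudo-inverse coincides with torch.inverse; more generally the result satisfies the Moore-Penrose property a @ pinv(a) @ a = a, which can be checked (a quick check, not part of the original listing):
print(torch.allclose(a @ result @ a, a, atol=1e-5))  # True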