Tensor Reshaping:
tensor.view(shape) – Returns a new view of the tensor with the specified shape (no data copy).
tensor.reshape(shape) – Reshapes the tensor while preserving its data.
tensor.transpose(dim0, dim1) – Swaps two dimensions of the tensor.
tensor.squeeze() – Removes dimensions of size 1.
tensor.unsqueeze(dim) – Adds a dimension of size 1 at the specified position.
Tensor Type and Device Management:
tensor.to(device) – Moves the tensor to a specific device (cpu or cuda).
tensor.type(dtype) – Casts the tensor to a different data type.
tensor.is_cuda – Checks whether the tensor is on a CUDA device.
tensor.cpu() – Moves the tensor to the CPU.
tensor.cuda() – Moves the tensor to the GPU.
Here are examples of the Tensor Reshaping and Tensor Type and Device Management functions in PyTorch:
Tensor Reshaping
tensor.view(shape)
(Reshape a tensor to a new shape while keeping the same number of elements)
import torch
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
result = tensor.view(3, 2) # Reshaping to 3 rows and 2 columns
print(result)
Output:
tensor([[1, 2],
[3, 4],
[5, 6]])
tensor.reshape(shape)
(Similar to view, but it can also handle non-contiguous tensors – see the contiguity sketch after the transpose example below)
result = tensor.reshape(3, 2) # Reshape to 3 rows and 2 columns
print(result)
Output:
tensor([[1, 2],
[3, 4],
[5, 6]])
tensor.transpose(dim0, dim1)
(Swap two dimensions)
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
result = tensor.transpose(0, 1) # Swap rows and columns (transpose)
print(result)
Output:
tensor([[1, 4],
[2, 5],
[3, 6]])
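The swapped result above also shows why reshape() exists alongside view(): transpose() returns a non-contiguous view of the same underlying storage, and view() refuses to operate on it, while reshape() (or calling contiguous() first) works. A minimal sketch of that behaviour:
import torch
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
swapped = t.transpose(0, 1)       # Non-contiguous view of the same storage
print(swapped.is_contiguous())    # False
# swapped.view(6) would raise a RuntimeError here because the memory layout is incompatible
flat = swapped.reshape(6)         # reshape() copies the data when it has to
print(flat)                       # tensor([1, 4, 2, 5, 3, 6])
flat2 = swapped.contiguous().view(6)  # Equivalent workaround: make it contiguous, then view
print(flat2)                      # tensor([1, 4, 2, 5, 3, 6])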
tensor.squeeze()
(Remove dimensions of size 1)
tensor = torch.tensor([[[1], [2], [3]]])
result = tensor.squeeze() # Remove the size-1 dimensions
print(result)
Output:
tensor([1, 2, 3])
ten = torch.tensor([[[1, 2], [2, 2], [2, 3]]])
print(ten.squeeze())  # Only the leading size-1 dimension is removed
Output:
tensor([[1, 2],
[2, 2],
[2, 3]])
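squeeze() also accepts an optional dim argument, so you can remove a single size-1 dimension instead of all of them. A quick sketch:
tensor = torch.ones(1, 3, 1)      # Shape: (1, 3, 1)
print(tensor.squeeze().shape)     # torch.Size([3]) – all size-1 dimensions removed
print(tensor.squeeze(0).shape)    # torch.Size([3, 1]) – only dim 0 removed
print(tensor.squeeze(1).shape)    # torch.Size([1, 3, 1]) – dim 1 has size 3, so nothing changes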
tensor.unsqueeze(dim)
(Insert a dimension of size 1 at a given position)
tensor = torch.tensor([1, 2, 3])
result = tensor.unsqueeze(1) # Add a new dimension at dim 1
print(result)
Output:
tensor([[1],
[2],
[3]])
print(tensor.unsqueeze(0)) # Add a new dimension at dim 0
Output:
tensor([[1, 2, 3]])
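A common reason to unsqueeze is broadcasting: inserting size-1 dimensions lets two tensors line up element-wise. A small sketch computing an outer product this way:
a = torch.tensor([1, 2, 3])
b = torch.tensor([10, 20])
outer = a.unsqueeze(1) * b.unsqueeze(0)  # Shapes (3, 1) and (1, 2) broadcast to (3, 2)
print(outer)
Output:
tensor([[10, 20],
[20, 40],
[30, 60]])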
Tensor Type and Device Management
tensor.to(device)
(Move tensor to a specific device, e.g., cpu or cuda)
tensor = torch.tensor([1.0, 2.0, 3.0])
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor = tensor.to(device) # Move tensor to GPU (if available)
print(tensor)
Output:
tensor([1., 2., 3.], device='cuda:0') # If CUDA is available
tensor.type(dtype)
(Cast tensor to a different data type)
tensor = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32)
result = tensor.type(torch.int32) # Change type from float to int
print(result)
Output:
tensor([1, 2, 3], dtype=torch.int32)
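tensor.to() also accepts a dtype, and can take a device and dtype together, which is the more common spelling for casts in current PyTorch; tensor.type() still works as shown above. A quick sketch:
tensor = torch.tensor([1.0, 2.0, 3.0])
print(tensor.to(torch.int32))     # tensor([1, 2, 3], dtype=torch.int32)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(tensor.to(device, torch.float64))  # Cast and move in a single call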
tensor.is_cuda
(Check if tensor is on a CUDA-enabled GPU)
tensor = torch.tensor([1.0, 2.0, 3.0]).cuda() # Move tensor to GPU
print(tensor.is_cuda) # Check if tensor is on GPU
Output:
True
tensor.cpu()
(Move tensor to CPU)
tensor = torch.tensor([1.0, 2.0, 3.0]).cuda() # Move tensor to GPU
tensor_cpu = tensor.cpu() # Move tensor back to CPU
print(tensor_cpu)
Output:
tensor([1., 2., 3.])
tensor.cuda()
(Move tensor to GPU)
tensor = torch.tensor([1.0, 2.0, 3.0])
tensor_cuda = tensor.cuda() # Move tensor to GPU
print(tensor_cuda)
Output:
tensor([1., 2., 3.], device='cuda:0')
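Note that the .cuda() calls above raise a RuntimeError on a machine without a CUDA-capable GPU (or with a CPU-only build of PyTorch). A device-agnostic pattern that degrades gracefully:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor = torch.tensor([1.0, 2.0, 3.0]).to(device)  # GPU if available, otherwise CPU
print(tensor.device)              # e.g. cuda:0, or cpu on a CPU-only machine
print(tensor.cpu())               # .cpu() is always safe, even if the tensor is already on the CPU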