
Added docs/content/pytorch/concepts/using-functional-api/using-functional-api.md #5902

Merged
23 commits merged on Jan 22, 2025
Commits
84c9cbd
Added docs/content/pytorch/concepts/using-functional-api/using-functi…
ericgeiger1 Dec 28, 2024
13953cb
Merge branch 'main' into codecademy-project
ericgeiger1 Jan 6, 2025
a9e0da0
Merge branch 'main' into codecademy-project
PragatiVerma18 Jan 10, 2025
602c3ae
Merge branch 'main' into codecademy-project
PragatiVerma18 Jan 11, 2025
4d17562
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
2353aba
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
ab9eda0
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
edfe001
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
731c3ed
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
08e497d
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
c1f02fa
Update using-functional-api.md
PragatiVerma18 Jan 11, 2025
be910d5
Merge branch 'main' into codecademy-project
ericgeiger1 Jan 15, 2025
b5e0c60
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
90b7fc8
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
7617dc9
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
2127c65
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
4d69cde
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
ad3b517
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
9977879
Update using-functional-api.md
ericgeiger1 Jan 18, 2025
d85a723
Rename using-functional-api.md to using-functional-apis.md
PragatiVerma18 Jan 22, 2025
cbe15aa
Fix file path
PragatiVerma18 Jan 22, 2025
3ce0caa
Merge branch 'main' into codecademy-project
Radhika-okhade Jan 22, 2025
b8d3477
Merge branch 'main' into codecademy-project
Radhika-okhade Jan 22, 2025
170 changes: 170 additions & 0 deletions content/pytorch/concepts/using-functional-api/using-functional-api.md
@@ -0,0 +1,170 @@
---
Title: 'Using Functional API'
Description: 'Functional API in PyTorch provides a flexible way to define and manipulate neural networks using functions rather than object-oriented classes.'
Subjects:
- 'Data Science'
- 'Machine Learning'
Tags:
- 'Deep Learning'
- 'Neural Networks'
- 'PyTorch'
CatalogContent:
- 'learn-python'
- 'paths/data-science'
---

**Functional API** in PyTorch provides a flexible and powerful way to define and manipulate neural networks. Unlike the `torch.nn.Module` class, which uses an object-oriented approach, the functional API lets you define models with plain functions.

This is particularly useful for building complex models, experimenting with new architectures, or gaining finer control over the forward pass.

The functional API offers several advantages and unique features that set it apart. Below are the essential concepts for understanding how it works and how to use it effectively.

## Functional Layers

Functional layers are stateless and do not store parameters. Instead, parameters are passed explicitly. This can be useful for creating custom layers or reusing the same layer with different parameters.

The syntax to generate functional layers is as follows:

```pseudo
import torch
import torch.nn.functional as F

# General syntax for functional layers
output = F.layer_name(input, *parameters, **kwargs)
```

- `input`: The tensor to which the functional layer is applied. This is the primary input to the function, representing data or intermediate computations in the neural network.
- `*parameters`: The tensors required by the functional layer, passed explicitly because functional layers do not store parameters internally. Examples include:
  - Weights: required for layers like `F.linear` and `F.conv2d`.
  - Bias: an optional bias term can also be passed.
- `**kwargs`: Additional keyword arguments that modify the behavior of the functional layer, as shown in the sketch after this list. Examples include:
  - `padding`: adds padding in convolutional layers.
  - `stride`: specifies the step size for moving the kernel in convolution.
  - `dilation`: controls the spacing between kernel elements in convolution.
  - `dim`: some functions, such as `F.softmax`, take the dimension along which to operate.
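
For instance, a convolution can be applied functionally by creating the kernel weights by hand and passing keyword arguments such as `stride` and `padding` explicitly. This is a minimal sketch; the tensor shapes are arbitrary and chosen only for illustration:

```py
import torch
import torch.nn.functional as F

# Stateless convolution: the parameters live outside the layer
x = torch.randn(1, 3, 32, 32)     # a batch of one 3-channel 32x32 image
weight = torch.randn(8, 3, 3, 3)  # 8 output channels, 3x3 kernels
bias = torch.randn(8)

# Weights and bias are passed explicitly; stride and padding are keyword arguments
output = F.conv2d(x, weight, bias, stride=1, padding=1)
print(output.shape)  # torch.Size([1, 8, 32, 32])
```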

## Custom Loss Functions

Using functional APIs, you can easily define custom loss functions. This is useful when the predefined loss functions in PyTorch do not meet your requirements.

The syntax to define a custom loss function is as follows:

```pseudo
import torch

def custom_loss(output, target):
    # Define the loss calculation
    loss = some_loss_function(output, target)
    return loss
```

- `output`: Represents the predicted values generated by the model. It is typically the output of the last layer of the model, such as logits, probabilities, or regression outputs.
- `target`: Represents the true or ground-truth values corresponding to the predictions. It is used to compute the discrepancy between predictions and the actual values.
- `loss`: The result of the loss computation. It quantifies the error or difference between the predicted (`output`) and actual (`target`) values. Common loss functions in PyTorch include the following, each of which also has a functional counterpart (see the sketch after this list):
- Mean Squared Error (MSE)
- Cross-Entropy Loss
- Binary Cross-Entropy Loss
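
The built-in functional versions of these losses live in `torch.nn.functional` and follow the same call pattern. A minimal sketch with randomly generated tensors, used only for illustration:

```py
import torch
import torch.nn.functional as F

predictions = torch.randn(10, 5)            # model outputs (e.g., logits)
targets = torch.randn(10, 5)                # ground-truth regression values
class_targets = torch.randint(0, 5, (10,))  # ground-truth class indices

mse = F.mse_loss(predictions, targets)            # Mean Squared Error
ce = F.cross_entropy(predictions, class_targets)  # Cross-Entropy (expects raw logits)
bce = F.binary_cross_entropy_with_logits(predictions, targets.clamp(0, 1))  # Binary Cross-Entropy

print(mse, ce, bce)
```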

## Activation Functions

PyTorch provides various activation functions in the `torch.nn.functional` module. These can be used to add non-linearity to your models.

The syntax to use a `ReLU` activation function is as follows:

```pseudo
import torch
import torch.nn.functional as F

# Syntax for ReLU activation
output = F.relu(input)
```

- `input`: A tensor to which the ReLU activation will be applied. Negative values in the tensor will be replaced with zero.
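
Other activations in `torch.nn.functional` follow the same pattern, with any extra behavior controlled through keyword arguments. A minimal sketch (the tensor shape is arbitrary):

```py
import torch
import torch.nn.functional as F

x = torch.randn(4, 3)

relu_out = F.relu(x)                              # negative values become zero
leaky_out = F.leaky_relu(x, negative_slope=0.01)  # negative values are scaled, not zeroed
probs = F.softmax(x, dim=-1)                      # each row sums to 1 along the last dimension

print(probs.sum(dim=-1))  # a tensor of ones
```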

## Examples

### Using Linear Transformation

```python
import torch
import torch.nn.functional as F

# Define input tensor
x = torch.randn(10, 3)

# Define weights and bias
weight = torch.randn(5, 3, requires_grad=True)
bias = torch.randn(5, requires_grad=True)

# Apply linear transformation
output = F.linear(x, weight, bias)

# Print the transformed tensor
print(output)
```

The code prints a tensor of shape `(10, 5)`, the result of applying the linear transformation to each of the 10 input rows.

The output might look like this:

```shell
tensor([[-1.1570, -1.0890, -0.4154, -0.1795, 1.6989],
[-0.5629, -0.3360, -0.3411, -0.2352, 1.0300],
[-0.3185, 0.2398, 0.5389, 0.2491, 1.0749],
...
[ 0.0665, 0.4579, -0.1494, -0.5361, 0.7465],
[-0.5970, 0.3147, 0.1569, 0.1582, 0.5355],
[-0.4481, -0.5795, 0.4445, -0.0623, 0.7024]])
```

> **Note**: The output values are random because `torch.randn` generates random values.

### Using Custom Loss Functions

```py
import torch

def custom_loss(output, target):
    # Mean squared error computed manually
    loss = torch.mean((output - target) ** 2)
    return loss

output = torch.randn(10, 5)
target = torch.randn(10, 5)
loss = custom_loss(output, target)
print("Custom MSE Loss:", loss)
```

The output will be a scalar tensor representing the _mean squared error (MSE)_ loss between the `output` and `target` tensors. For example:

```shell
Custom MSE Loss: tensor(0.8423)
```

> **Note**: The exact value will differ on each run due to the random initialization of `output` and `target`.

### Using ReLU Activation

```py
import torch
import torch.nn.functional as F

x = torch.randn(10, 5) # Generates a 10x5 tensor with random values
output = F.relu(x) # Applies the ReLU activation function

print(output)
```

The `F.relu(x)` function replaces all negative values in the tensor with zero while retaining positive values.

A sample output might look like this:

```shell
tensor([[0.0000, 0.2345, 1.4567, 0.0000, 0.9876],
[0.7654, 0.0000, 0.0000, 2.3456, 0.0000],
[0.0000, 0.0000, 1.1234, 0.5678, 0.0000],
...,
[0.3456, 0.0000, 0.8765, 0.0000, 0.0000]])
```

> **Note**: Each run generates random values for `x`, so the exact output will vary.

## Advantages of Using Functional APIs

1. **Flexibility**: The functional API provides more control over the forward pass and allows for easy experimentation with different architectures.
2. **Reusability**: Since functional layers are stateless, they can be reused with different parameters without any side effects, as the sketch below shows.
3. **Customizability**: Easily define custom layers, loss functions, and activation functions to suit your specific needs.
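
As a quick illustration of the reusability point, the same stateless `F.linear` call can be reused with different parameter tensors; the shapes below are arbitrary:

```py
import torch
import torch.nn.functional as F

x = torch.randn(4, 3)

# Two independent parameter sets; the same stateless function handles both
w1, b1 = torch.randn(6, 3), torch.randn(6)
w2, b2 = torch.randn(2, 3), torch.randn(2)

out_a = F.linear(x, w1, b1)  # shape (4, 6)
out_b = F.linear(x, w2, b2)  # shape (4, 2)
```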