Commit f58923e

minors

johnsaveus committed Nov 11, 2024
1 parent 8255a19 commit f58923e
Showing 3 changed files with 7 additions and 8 deletions.
@@ -5,20 +5,19 @@ sidebar_position: 2
# Create a Model
This document demonstrates how to create a Graph Neural Network model using JaqpotPy with a specific architecture. The example assumes you have already preprocessed your data and have a dataset ready to use.

-## Import Required Libraries
+## Basic Setup

We first import the necessary libraries:

```python
import torch
from torch_geometric.loader import DataLoader
from jaqpotpy.descriptors.graph import SmilesGraphFeaturizer
from jaqpotpy.datasets import SmilesGraphDataset
from jaqpotpy.models.torch_geometric_models.graph_neural_network import GraphSageNetwork, GraphAttentionNetwork
```

## Define the Graph Neural Network Architecture
-In this example, we use a GraphSageNetwork architecture. The network takes node features from a featurizer and creates a model with specified input dimensions, hidden layers, and output dimensions. Currently Jaqpotpy library support the following architectures:
+In this example, we use a GraphSageNetwork architecture. The network takes node features from a featurizer and creates a model with specified input dimensions, hidden layers, and output dimensions. Currently, the Jaqpotpy library supports the following architectures:

- GraphConvolutionNetwork
- GraphSageNetwork
@@ -45,7 +44,7 @@ model = GraphSageNetwork(
- input_dim: Number of input neurons, determined by the number of node features.
- hidden_layers: Number of hidden layers in the network.
- hidden_dim: Number of neurons in each hidden layer.
-- output_dim: Number of output neurons (default set to 1 for binary classification).
+- output_dim: Number of output neurons (default set to 1 for binary classification or regression).
- activation: Activation function, specified using PyTorch (e.g., ReLU).
- pooling: Graph pooling method (options: mean, add, max).
- dropout_proba: Dropout probability for regularization.
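Putting these parameters together, a minimal sketch of the model instantiation might look like the following. The argument values are illustrative, and `get_num_node_features()` is an assumed helper on the featurizer rather than a verified part of its API:

```python
import torch

from jaqpotpy.descriptors.graph import SmilesGraphFeaturizer
from jaqpotpy.models.torch_geometric_models.graph_neural_network import GraphSageNetwork

featurizer = SmilesGraphFeaturizer()

# input_dim must match the number of node features the featurizer produces;
# get_num_node_features() is an assumption here — check the featurizer's API.
model = GraphSageNetwork(
    input_dim=featurizer.get_num_node_features(),
    hidden_layers=2,                 # number of hidden layers
    hidden_dim=16,                   # neurons per hidden layer
    output_dim=1,                    # 1 output neuron (binary classification or regression)
    activation=torch.nn.ReLU(),      # PyTorch activation function
    pooling="mean",                  # graph pooling: mean, add, or max
    dropout_proba=0.1,               # dropout probability for regularization
)
```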
@@ -1,8 +1,8 @@
---
-sidebar_position: 4
+sidebar_position: 3
---

-# Train and evaluate a model
+# Train and Evaluate a Model

In this section, we continue building on the previous example by defining the optimizer, loss function, and training procedure for our graph neural network (GNN) model using `jaqpotpy`. We utilize PyTorch-based components to configure the training process.

@@ -16,7 +16,7 @@ from torch_geometric.loader import DataLoader
from jaqpotpy.descriptors.graph import SmilesGraphFeaturizer
from jaqpotpy.datasets import SmilesGraphDataset
from jaqpotpy.models.torch_geometric_models.graph_neural_network import GraphSageNetwork, GraphAttentionNetwork
-from jaqpotpy.models.trainers.graph_trainers import BinaryGraphModelTrainer, RegressionModelTrainer
+from jaqpotpy.models.trainers.graph_trainers import BinaryGraphModelTrainer, RegressionGraphModelTrainer
```

## Binary Classification
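As a rough illustration of how these pieces could be wired together for binary classification, a sketch follows. This is not the library's verbatim example: the trainer's constructor arguments and `train` method signature are assumptions, and `model`, `train_dataset`, and `val_dataset` are taken from the earlier steps:

```python
import torch
from torch_geometric.loader import DataLoader
from jaqpotpy.models.trainers.graph_trainers import BinaryGraphModelTrainer

# train_dataset and val_dataset are SmilesGraphDataset objects prepared beforehand
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits

# Constructor and method names below are assumptions — consult the trainer's documentation.
trainer = BinaryGraphModelTrainer(
    model=model,
    n_epochs=50,
    optimizer=optimizer,
    loss_fn=loss_fn,
)
trainer.train(train_loader, val_loader)
```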
@@ -1,5 +1,5 @@
---
-sidebar_position: 3
+sidebar_position: 4
---

# Upload a Model
