
Commit 7f44992

Trying to resolve conflict

2 parents 5d69719 + 3f88711


71 files changed: +2609 −12 lines changed


bin/concept-of-the-week.txt

+1 −1

@@ -1 +1 @@
- content/general/concepts/machine-code/machine-code.md
+ content/typescript/concepts/interfaces/interfaces.md
@@ -0,0 +1,103 @@
---
Title: 'Classification'
Description: 'Classification is a supervised technique in machine learning used to categorize data into predefined classes or labels.'
Subjects:
  - 'AI'
  - 'Machine Learning'
Tags:
  - 'AI'
  - 'Machine Learning'
  - 'Supervised Learning'
CatalogContent:
  - 'learn-python-3'
  - 'paths/intermediate-machine-learning-skill-path'
---

**Classification** is a supervised machine learning technique used to categorize data into predefined classes or labels. It involves training a model on labeled data, then using that model to predict the labels of new, unseen data. Common applications include spam detection, sentiment analysis, and medical diagnosis.

## Classification Process

The general process for performing classification involves the following steps:

```pseudo
1. Import the necessary libraries
2. Load and preprocess the dataset
3. Split the dataset into training and testing sets
4. Initialize the classifier (e.g., Logistic Regression, Decision Tree, SVM)
5. Fit the model on the training set
6. Make predictions on the test set
7. Evaluate the model using metrics such as accuracy, precision, recall, and F1-score
```

## Example

[Python](https://www.codecademy.com/resources/docs/python) provides several libraries for performing classification, such as [Scikit-learn](https://www.codecademy.com/resources/docs/sklearn).

Here is an example that demonstrates how to perform classification using Logistic Regression in Scikit-learn:

```py
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Initialize the classifier (Logistic Regression); max_iter is raised from
# the default so the solver converges on the unscaled Iris features
model = LogisticRegression(max_iter=200)

# Fit the model on the training set
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
```

The above code produces the following output:

```shell
Accuracy: 1.00
```
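
Step 7 of the process also mentions precision, recall, and F1-score. As a minimal sketch, continuing from the variables in the example above, Scikit-learn's `classification_report` reports all three per class:

```py
from sklearn.metrics import classification_report

# Per-class precision, recall, and F1-score for the predictions above
print(classification_report(y_test, y_pred, target_names=iris.target_names))
```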
## Codebyte Example

The following codebyte example demonstrates how to perform classification using a [Decision Tree](https://www.codecademy.com/resources/docs/sklearn/decision-trees) in Scikit-learn:

```codebyte/python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Initialize the classifier (Decision Tree)
model = DecisionTreeClassifier()

# Fit the model on the training set
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
```
@@ -0,0 +1,114 @@
---
Title: 'Vanishing Gradient Problem'
Description: 'When gradients become very small during backpropagation, slowing or halting the training process.'
Subjects:
  - 'AI'
  - 'Data Science'
Tags:
  - 'Machine Learning'
  - 'Deep Learning'
CatalogContent:
  - 'learn-python-3'
  - 'paths/computer-science'
---

The **vanishing gradient problem** occurs when gradients shrink as they propagate backward through a deep neural network. Updates to the early layers become extremely small, which slows or stalls training. It typically appears in networks that use saturating activation functions, such as sigmoid or hyperbolic tangent, or in networks with many layers.

## How Does It Occur?

- **Deep Architectures**: Deeper networks have more layers that repeatedly multiply small gradient values together.
- **Sigmoid or Tanh Activations**: These functions squash inputs into a narrow range, shrinking the gradient at every layer (see the sketch after this list).
- **Poor Weight Initialization**: Badly scaled initial weights can cause gradients to vanish from the start.

## How to Fix It

- **Use ReLU or Related Activations**: ReLU-family functions do not squash positive inputs, so gradients pass through the early layers largely intact.
- **Proper Initialization**: Techniques like Xavier or He initialization keep gradient magnitudes stable across layers (a sketch follows this list).
- **Batch Normalization**: Normalizing layer inputs stabilizes gradient flow.
- **Skip Connections**: Shortcut paths reduce the effective depth that gradients must traverse.
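
As a minimal sketch of the initialization fix, assuming an existing PyTorch `nn.Module` named `model` built from `nn.Linear` layers (a hypothetical placeholder, not part of the example below), He initialization can be applied with `torch.nn.init`:

```py
import torch.nn as nn

def init_he(module):
    # Kaiming (He) initialization is designed to keep activation and
    # gradient variance stable in ReLU networks
    if isinstance(module, nn.Linear):
        nn.init.kaiming_normal_(module.weight, nonlinearity='relu')
        nn.init.zeros_(module.bias)

model.apply(init_he)  # `model` is assumed to be defined elsewhere
```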

## Example: Demonstrating and Addressing the Vanishing Gradient Problem

The following PyTorch example builds a simple deep network with sigmoid activations, where the gradients reaching the earliest layers may become very small and slow training. Switching to ReLU in the second network demonstrates a potential fix:
34+
```py
35+
import torch
36+
import torch.nn as nn
37+
import torch.optim as optim
38+
39+
# Deep feedforward network with Sigmoid
40+
class DeepSigmoidNet(nn.Module):
41+
def __init__(self):
42+
super().__init__()
43+
self.layers = nn.Sequential(
44+
nn.Linear(100, 128),
45+
nn.Sigmoid(),
46+
nn.Linear(128, 128),
47+
nn.Sigmoid(),
48+
nn.Linear(128, 128),
49+
nn.Sigmoid(),
50+
nn.Linear(128, 10)
51+
)
52+
53+
def forward(self, x):
54+
return self.layers(x)
55+
56+
# Create random data
57+
x = torch.randn(32, 100) # batch of 32
58+
y = torch.randint(0, 10, (32,)) # target classes
59+
60+
model = DeepSigmoidNet()
61+
criterion = nn.CrossEntropyLoss()
62+
optimizer = optim.SGD(model.parameters(), lr=0.01)
63+
64+
# Forward pass
65+
outputs = model(x)
66+
loss = criterion(outputs, y)
67+
68+
# Backward pass
69+
loss.backward()
70+
71+
# Check the gradient norm of the first layer
72+
grad_norm = model.layers[0].weight.grad.norm().item()
73+
print(f"Gradient norm (Sigmoid net, first layer): {grad_norm:.6f}")
74+
75+
# Potential fix: Using ReLU
76+
class DeepReLUNet(nn.Module):
77+
def __init__(self):
78+
super().__init__()
79+
self.layers = nn.Sequential(
80+
nn.Linear(100, 128),
81+
nn.ReLU(),
82+
nn.Linear(128, 128),
83+
nn.ReLU(),
84+
nn.Linear(128, 128),
85+
nn.ReLU(),
86+
nn.Linear(128, 10)
87+
)
88+
89+
def forward(self, x):
90+
return self.layers(x)
91+
92+
model_relu = DeepReLUNet()
93+
optimizer = optim.SGD(model_relu.parameters(), lr=0.01)
94+
95+
outputs_relu = model_relu(x)
96+
loss_relu = criterion(outputs_relu, y)
97+
98+
loss_relu.backward()
99+
grad_norm_relu = model_relu.layers[0].weight.grad.norm().item()
100+
print(f"Gradient norm (ReLU net, first layer): {grad_norm_relu:.6f}")
101+
```

The above code returns output similar to the following (exact values vary from run to run because the inputs and weights are randomly initialized):

```shell
Gradient norm (Sigmoid net, first layer): 0.004324
Gradient norm (ReLU net, first layer): 0.118170
```

1. **DeepSigmoidNet**: A fully connected network with several sigmoid-activated layers. The gradient often shrinks as it propagates back through each layer.
2. **Gradient Norm**: The code checks the gradient norm of the first layer. A very small value indicates that those parameters receive negligible updates.
3. **DeepReLUNet**: Switching to ReLU reduces the vanishing effect, which shows up as a larger gradient norm in the first layer.

Using suitable activations, proper initialization, or techniques like batch normalization and skip connections makes the vanishing gradient problem less severe, leading to faster and more reliable training.
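
As a minimal sketch of the batch normalization option (an illustration under the same 128-unit layer sizes as above, not part of the example itself), a hidden block could be written as:

```py
import torch.nn as nn

# A hidden block with batch normalization: normalizing the layer's inputs
# keeps activations in a range where gradients flow well
block = nn.Sequential(
    nn.Linear(128, 128),
    nn.BatchNorm1d(128),
    nn.ReLU()
)
```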

content/c-sharp/concepts/data-types/data-types.md

+3 −3

@@ -24,15 +24,15 @@ Value types are data types that are built-in to C#. The available types and thei
| --------- | ----------------------- | ------------ |
| `bool` | Boolean | 1 byte |
| `byte` | Byte | 1 byte |
- | `sbyte` | Short Byte | 1 byte |
+ | `sbyte` | Signed Byte | 1 byte |
| `char` | Character | 2 bytes |
| `decimal` | Decimal | 16 bytes |
| `double` | Double | 8 bytes |
| `float` | Float | 4 bytes |
| `int` | Integer | 4 bytes |
| `uint` | Unsigned Integer | 4 bytes |
| `nint` | Native Integer | 4 or 8 bytes |
- | `unint` | Unsigned Native Integer | 4 or 8 bytes |
+ | `nuint` | Unsigned Native Integer | 4 or 8 bytes |
| `long` | Long | 8 bytes |
| `ulong` | Unsigned Long | 8 bytes |
| `short` | Short | 2 bytes |
@@ -51,7 +51,7 @@ float heightOfGiraffe = 908.32f;
int seaLevel = -24;
uint year = 2023u;
nint pagesInBook = 412;
- unint milesToNewYork = 2597;
+ nuint milesToNewYork = 2597;
long circumferenceOfEarth = 25000l;
ulong depthOfOcean = 28000ul;
short tableHeight = 4;
