feat: Implement additional optimization algorithms by Prof. R.V. Rao
Add five new optimization algorithms to the library:
- Jaya Algorithm: Parameter-free algorithm moving toward best and away from worst solutions
- Rao-1: Algorithm using best solution and solution comparison
- Rao-2: Algorithm using best, worst, and average fitness values
- Rao-3: Algorithm using best solution and phase factor
- TLBO: Teaching-Learning-Based Optimization with Teacher and Learner phases
Additional changes:
- Create comprehensive documentation for each new algorithm
- Update API reference documentation
- Add test cases with appropriate thresholds for stochastic algorithms
- Update README with examples for all new algorithms
- Ensure compatibility with CI/CD workflows
All algorithms support both constrained and unconstrained optimization problems.
These algorithms are designed to solve both **constrained** and **unconstrained** optimization problems without relying on metaphors or algorithm-specific parameters. The BMR and BWR algorithms are based on the paper:

**Ravipudi Venkata Rao and Ravikumar Shah (2024)**, "BMR and BWR: Two simple metaphor-free optimization algorithms for solving real-life non-convex constrained and unconstrained problems." [arXiv:2407.11149v2](https://arxiv.org/abs/2407.11149).
## Features
- **Metaphor-Free**: No reliance on nature-inspired metaphors.
- **Simple**: Most algorithms have no algorithm-specific parameters to tune.
- **Flexible**: Handles both constrained and unconstrained optimization problems.
- **Versatile**: Includes a variety of algorithms suitable for different types of optimization problems.
## Installation
### Example: Jaya Algorithm
```python
import numpy as np
from rao_algorithms import Jaya_algorithm, objective_function

# 2-D problem; the call below assumes Jaya_algorithm mirrors BMR_algorithm's signature:
# (bounds, num_iterations, population_size, num_variables, objective_func)
bounds = np.array([[-100, 100]] * 2)
best_solution, best_scores = Jaya_algorithm(bounds, 100, 50, 2, objective_function)
print(f"Jaya Best solution found: {best_solution}")
```
print(f"Rao-3 Best solution found: {best_solution_rao3}")
94
124
```
95
125
96
126
### Unit Testing
### BWR Algorithm

The BWR (Best-Worst-Random) algorithm updates solutions by considering the best solution, the worst solution, and a random solution from the population.
- **Paper Citation**: R. V. Rao, R. Shah, *BMR and BWR: Two simple metaphor-free optimization algorithms*. [arXiv:2407.11149v2](https://arxiv.org/abs/2407.11149).
### Jaya Algorithm
The Jaya algorithm is a parameter-free algorithm that always tries to move toward the best solution and away from the worst solution. The name "Jaya" means "victory" in Sanskrit.
- **Paper Citation**: R. V. Rao, "Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems", International Journal of Industrial Engineering Computations, 7(1), 2016, 19-34.
### Rao Algorithms (Rao-1, Rao-2, Rao-3)
The Rao algorithms are a family of three metaphor-less optimization algorithms. Each algorithm uses a different strategy to guide the search process:
- **Rao-1**: Uses the best solution and solution comparison
- **Rao-2**: Uses the best, worst, and average fitness
- **Rao-3**: Uses the best solution and a phase factor

- **Paper Citation**: R. V. Rao, "Rao algorithms: Three metaphor-less simple algorithms for solving optimization problems", International Journal of Industrial Engineering Computations, 11(2), 2020, 193-212.
### TLBO (Teaching-Learning-Based Optimization)
TLBO is a parameter-free algorithm inspired by the teaching-learning process in a classroom. It consists of two phases: Teacher Phase and Learner Phase.
- **Paper Citation**: R. V. Rao, V. J. Savsani, D. P. Vakharia, "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", Information Sciences, 183(1), 2012, 1-15.
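For intuition, here is a minimal NumPy sketch of the TLBO Teacher Phase as described in the paper above (the Learner Phase, which pairs learners at random, is omitted); again, this is an illustration, not the package's implementation.

```python
import numpy as np

def tlbo_teacher_phase(population, teacher, rng=None):
    """Apply one Teacher-Phase update to every learner (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    mean = population.mean(axis=0)   # mean result of the class
    TF = rng.integers(1, 3)          # teaching factor, randomly 1 or 2
    r = rng.random(population.shape)
    # Each learner moves toward the teacher (best solution), relative to the class mean
    return population + r * (teacher - TF * mean)
```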
## Docker Support
You can use the included `Dockerfile` to build and test the package quickly. To build and run the package in Docker:
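The exact commands are not shown here, but a typical invocation would look like the following; the image tag is an arbitrary choice, and the container is assumed to run the package's tests by default:

```bash
# Build the image from the included Dockerfile (tag name is arbitrary)
docker build -t rao-algorithms .

# Run the container; the Dockerfile's default command is assumed to run the tests
docker run --rm rao-algorithms
```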
## License

This package is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## References
1. Ravipudi Venkata Rao, Ravikumar Shah, "BMR and BWR: Two simple metaphor-free optimization algorithms for solving real-life non-convex constrained and unconstrained problems," [arXiv:2407.11149v2](https://arxiv.org/abs/2407.11149).
2. Ravipudi Venkata Rao, "Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems", International Journal of Industrial Engineering Computations, 7(1), 2016, 19-34.
3. Ravipudi Venkata Rao, "Rao algorithms: Three metaphor-less simple algorithms for solving optimization problems", International Journal of Industrial Engineering Computations, 11(2), 2020, 193-212.
4. Ravipudi Venkata Rao, V. J. Savsani, D. P. Vakharia, "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", Information Sciences, 183(1), 2012, 1-15.
# BMR Algorithm

The BMR (Best-Mean-Random) algorithm is a simple, metaphor-free optimization algorithm designed to solve both constrained and unconstrained optimization problems. It uses the best solution, mean solution, and a random solution from the population to guide the search process.
## Algorithm Workflow
```mermaid
flowchart TD
    A[Initialize Population] --> B[Evaluate Fitness]
    B --> C[Identify Best Solution]
    C --> D[Calculate Mean Solution]
    D --> E[Update Solutions]
    E --> F{Termination Criteria Met?}
    F -->|No| B
    F -->|Yes| G[Return Best Solution]

    subgraph "Update Rule"
        E1[For each solution] --> E2{Random r4 > 0.5?}
        E2 -->|Yes| E3[Update using Best and Mean]
        E2 -->|No| E4[Random Exploration]
        E3 --> E5[Clip to Bounds]
        E4 --> E5
    end
```
## Mathematical Formulation
The BMR algorithm updates solutions based on the following rules:
For each solution $X_i$ in the population:
1. Generate random numbers $r_1, r_2, r_3, r_4 \in [0,1]$
2. Randomly select $T \in \{1, 2\}$
3. Select a random solution $X_{rand}$ from the population
4. Update the solution:
   - If $r_4 > 0.5$: $X_{new} = X_i + r_1 (X_{best} - T \cdot X_{mean}) + r_2 (X_{best} - X_{rand})$
   - Otherwise: $X_{new} = X_{upper} - (X_{upper} - X_{lower}) \cdot r_3$
5. Clip the solution to ensure it stays within bounds
Where:
- $X_{best}$ is the best solution in the population
- $X_{mean}$ is the mean of all solutions in the population
- $X_{upper}$ and $X_{lower}$ are the upper and lower bounds
## Pseudocode
```
function BMR_algorithm(bounds, num_iterations, population_size, num_variables, objective_func, constraints):
    Initialize population randomly within bounds

    for iteration = 1 to num_iterations:
        Evaluate fitness of each solution (with penalty for constraints if applicable)
        Identify best solution and calculate mean solution

        for each solution in population:
            Generate random numbers r1, r2, r3, r4
            Randomly select T ∈ {1, 2}
            Select a random solution from the population

            if r4 > 0.5:
                Update solution using best and mean solutions
            else:
                Perform random exploration

            Clip solution to stay within bounds

    Return best solution and convergence history
```
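For readers who prefer working code, the following is a minimal NumPy sketch of a single BMR update step, written directly from the update rules above; it is illustrative only (greedy acceptance of improved solutions is omitted) and is not the library's internal implementation.

```python
import numpy as np

def bmr_update_step(population, best, bounds, rng=None):
    """Apply one BMR update to every solution in the population (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    lower, upper = bounds[:, 0], bounds[:, 1]
    mean = population.mean(axis=0)
    new_population = np.empty_like(population)
    for i, x in enumerate(population):
        r1, r2, r3, r4 = rng.random(4)
        T = rng.choice([1, 2])
        x_rand = population[rng.integers(len(population))]
        if r4 > 0.5:
            # Exploit: move toward the best solution, guided by the mean and a random solution
            new_x = x + r1 * (best - T * mean) + r2 * (best - x_rand)
        else:
            # Explore: re-sample within the bounds
            new_x = upper - (upper - lower) * r3
        new_population[i] = np.clip(new_x, lower, upper)  # keep solutions within bounds
    return new_population
```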
## Implementation Details
The BMR algorithm is implemented in the `algorithms.py` file. Here's a breakdown of the key components:
1. **Initialization**: Population is initialized randomly within the specified bounds
2. **Fitness Evaluation**: Each solution is evaluated using the objective function (with penalty for constraints if applicable)
3. **Solution Update**: Solutions are updated based on the best solution, mean solution, and a random solution
4. **Exploration vs. Exploitation**: The algorithm balances exploration and exploitation through its update rules
5. **Constraint Handling**: Constraints are handled using a penalty function approach
## Parameters
| Parameter | Description |
|-----------|-------------|
| `bounds` | Lower and upper bounds for each variable |
| `num_iterations` | Maximum number of iterations |
| `population_size` | Number of solutions in the population |
| `num_variables` | Dimensionality of the problem |
| `objective_func` | Function to be optimized |
| `constraints` | List of constraint functions (optional) |
## Constrained Optimization
For constrained optimization problems, the BMR algorithm uses a penalty function approach:
```mermaid
flowchart LR
    A[Objective Function] --> C[Combined Fitness]
    B[Penalty Function] --> C
    D[Constraints] --> B

    style C fill:#bbf,stroke:#333,stroke-width:2px
```
The penalty function adds a penalty term to the objective function value based on the degree of constraint violation.
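As an illustration of this idea (not the package's exact implementation), a penalized fitness can be computed as below; the quadratic penalty form and the weight value are assumptions, and each constraint is assumed to be a callable `g(x)` that should satisfy `g(x) <= 0`.

```python
def penalized_fitness(x, objective_func, constraints=None, penalty_weight=1e6):
    """Objective value plus a quadratic penalty for violated constraints (illustrative sketch only)."""
    fitness = objective_func(x)
    if constraints:
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        fitness += penalty_weight * violation
    return fitness
```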
## Example Usage
```python
import numpy as np
from rao_algorithms import BMR_algorithm, objective_function

# Define the bounds for a 2D problem
bounds = np.array([[-100, 100]] * 2)

# Set parameters
num_iterations = 100
population_size = 50
num_variables = 2

# Run the BMR algorithm
best_solution, best_scores = BMR_algorithm(
    bounds,
    num_iterations,
    population_size,
    num_variables,
    objective_function
)
print(f"Best solution found: {best_solution}")
```
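For constrained problems, the parameters table above lists an optional `constraints` argument. A hedged sketch of such a call is shown below; the constraint function itself is a made-up example, and passing the list as the sixth positional argument follows the parameter order in the table.

```python
import numpy as np
from rao_algorithms import BMR_algorithm, objective_function

# Hypothetical constraint: require x[0] + x[1] <= 10, expressed as g(x) <= 0
def sum_constraint(x):
    return x[0] + x[1] - 10

bounds = np.array([[-100, 100]] * 2)

# Constraints passed as a list of callables (assumed interface)
best_solution, best_scores = BMR_algorithm(
    bounds, 100, 50, 2, objective_function, [sum_constraint]
)
print(f"Constrained BMR Best solution: {best_solution}")
```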
## Performance Characteristics
- **Convergence**: The BMR algorithm typically exhibits fast convergence due to its use of the mean solution
- **Exploration**: Random exploration is ensured through the random solution component and the random exploration step
- **Exploitation**: Exploitation is achieved through the use of the best solution and mean solution
- **Balance**: The algorithm maintains a good balance between exploration and exploitation
## References
1. Ravipudi Venkata Rao and Ravikumar Shah (2024), "BMR and BWR: Two simple metaphor-free optimization algorithms for solving real-life non-convex constrained and unconstrained problems." [arXiv:2407.11149v2](https://arxiv.org/abs/2407.11149).