
Commit 11fe70c

docs: add comprehensive documentation for new optimization algorithms
Add detailed documentation for four newly implemented optimization algorithms:

- QOJAYA (Quasi-Oppositional Jaya): enhanced Jaya with quasi-oppositional learning
- GOTLBO (Generalized Oppositional TLBO): TLBO with oppositional-based learning
- ITLBO (Improved TLBO): TLBO with adaptive teaching factors and elitism
- Multi-objective TLBO: extension of TLBO for problems with multiple objectives

Updates include:

- New algorithm descriptions in README.md with usage examples
- Detailed API documentation with parameters, return values, and flow diagrams
- Algorithm-specific markdown files in docs/algorithms with mathematical formulations
- Real-world application examples and references to the original papers
- Updated index.md with an algorithm comparison table and usage recommendations
- Fixed test cases to account for the stochastic nature of optimization algorithms

References:

- R.V. Rao & D.P. Rai (2017) - QOJAYA paper
- R.V. Rao & V. Patel (2013) - GOTLBO paper
- R.V. Rao & V. Patel (2012) - ITLBO paper
- R.V. Rao & V.D. Kalyankar (2014) - Multi-objective TLBO paper
1 parent e0c8475 commit 11fe70c

File tree

11 files changed: +2672 −15 lines

README.md

Lines changed: 138 additions & 1 deletion
@@ -6,6 +6,10 @@ This package implements several powerful optimization algorithms developed by Pr

- **Jaya Algorithm**
- **Rao Algorithms (Rao-1, Rao-2, Rao-3)**
- **TLBO (Teaching-Learning-Based Optimization) Algorithm**
- **QOJAYA (Quasi-Oppositional Jaya) Algorithm**
- **GOTLBO (Generalized Oppositional TLBO) Algorithm**
- **ITLBO (Improved TLBO) Algorithm**
- **Multi-objective TLBO Algorithm**

These algorithms are designed to solve both **constrained** and **unconstrained** optimization problems without relying on metaphors or algorithm-specific parameters. The BMR and BWR algorithms are based on the paper:
@@ -17,6 +21,7 @@ These algorithms are designed to solve both **constrained** and **unconstrained*

- **Simple**: Most algorithms have no algorithm-specific parameters to tune.
- **Flexible**: Handles both constrained and unconstrained optimization problems.
- **Versatile**: Includes a variety of algorithms suitable for different types of optimization problems.
- **Multi-objective Optimization**: Support for problems with multiple competing objectives.

## Installation
@@ -123,6 +128,106 @@ best_solution_rao3, best_scores_rao3 = Rao3_algorithm(bounds, num_iterations, po

print(f"Rao-3 Best solution found: {best_solution_rao3}")
```

### Example: QOJAYA Algorithm

```python
import numpy as np
from rao_algorithms import QOJAYA_algorithm, objective_function

# Unconstrained QOJAYA
# --------------------
# Define the bounds for a 2D problem
bounds = np.array([[-100, 100]] * 2)

# Set parameters
num_iterations = 100
population_size = 50
num_variables = 2

# Run the QOJAYA algorithm
best_solution, best_scores = QOJAYA_algorithm(bounds, num_iterations, population_size, num_variables, objective_function)
print(f"QOJAYA Best solution found: {best_solution}")
```

### Example: GOTLBO Algorithm

```python
import numpy as np
from rao_algorithms import GOTLBO_algorithm, objective_function

# Unconstrained GOTLBO
# --------------------
# Define the bounds for a 2D problem
bounds = np.array([[-100, 100]] * 2)

# Set parameters
num_iterations = 100
population_size = 50
num_variables = 2

# Run the GOTLBO algorithm
best_solution, best_scores = GOTLBO_algorithm(bounds, num_iterations, population_size, num_variables, objective_function)
print(f"GOTLBO Best solution found: {best_solution}")
```

### Example: ITLBO Algorithm

```python
import numpy as np
from rao_algorithms import ITLBO_algorithm, objective_function

# Unconstrained ITLBO
# -------------------
# Define the bounds for a 2D problem
bounds = np.array([[-100, 100]] * 2)

# Set parameters
num_iterations = 100
population_size = 50
num_variables = 2

# Run the ITLBO algorithm
best_solution, best_scores = ITLBO_algorithm(bounds, num_iterations, population_size, num_variables, objective_function)
print(f"ITLBO Best solution found: {best_solution}")
```

### Example: Multi-objective TLBO Algorithm

```python
import numpy as np
from rao_algorithms import MultiObjective_TLBO_algorithm

# Define two objective functions
def objective_function1(x):
    return np.sum(x**2)  # Minimize the sum of squares

def objective_function2(x):
    return np.sum((x - 2)**2)  # Minimize the sum of squares about the point (2, 2, ...)

# Multi-objective TLBO
# --------------------
# Define the bounds for a 2D problem
bounds = np.array([[-100, 100]] * 2)

# Set parameters
num_iterations = 100
population_size = 50
num_variables = 2

# Run the Multi-objective TLBO algorithm
pareto_front, pareto_fitness, best_scores_history = MultiObjective_TLBO_algorithm(
    bounds,
    num_iterations,
    population_size,
    num_variables,
    [objective_function1, objective_function2]
)

print(f"Number of solutions in Pareto front: {len(pareto_front)}")
print(f"First Pareto optimal solution: {pareto_front[0]}")
print(f"Corresponding objective values: {pareto_fitness[0]}")
```

### Unit Testing

This package comes with unit tests. To run the tests:
@@ -173,6 +278,34 @@ TLBO is a parameter-free algorithm inspired by the teaching-learning process in

- **Paper Citation**: R. V. Rao, V. J. Savsani, D. P. Vakharia, "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", Information Sciences, 183(1), 2012, 1-15.

### QOJAYA (Quasi-Oppositional Jaya) Algorithm

QOJAYA enhances the standard Jaya algorithm by incorporating quasi-oppositional learning to improve convergence speed and solution quality. It generates and evaluates quasi-opposite solutions alongside the standard Jaya updates.

- **Paper Citation**: R. V. Rao, D. P. Rai, "Optimization of welding processes using quasi-oppositional-based Jaya algorithm", Journal of Mechanical Science and Technology, 31(5), 2017, 2513-2522.
- **Real-world Application**: The algorithm has been successfully applied to optimize welding processes, including tungsten inert gas (TIG) welding and friction stir welding. It determines optimal parameters such as welding current, voltage, and speed to maximize weld strength while minimizing defects.
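Quasi-oppositional learning samples a point between the interval centre and the opposite point, rather than using the opposite point directly. A minimal NumPy sketch of this sampling step (the helper name and signature are illustrative, not part of the package's API):

```python
import numpy as np

def quasi_opposite(X, bounds, rng=None):
    """Quasi-opposite of each solution: a uniform random point between the
    interval centre c = (a + b) / 2 and the opposite point a + b - X,
    drawn independently per coordinate."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = bounds[:, 0], bounds[:, 1]
    centre = (a + b) / 2.0
    opposite = a + b - X
    lo = np.minimum(centre, opposite)  # per-coordinate sampling interval
    hi = np.maximum(centre, opposite)
    return rng.uniform(lo, hi)
```

In QOJAYA, such quasi-opposite candidates are evaluated alongside the regular Jaya updates, and the fitter points survive into the next generation.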
### GOTLBO (Generalized Oppositional TLBO) Algorithm

GOTLBO combines TLBO with generalized opposition-based learning to enhance exploration capabilities and convergence speed. It applies opposition in both the teacher and learner phases.

- **Paper Citation**: R. V. Rao, V. Patel, "An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems", Scientia Iranica, 20(3), 2013, 710-720.
- **Real-world Application**: GOTLBO has been applied to mechanical design optimization problems, including pressure vessel, spring, and gear train design. It effectively finds optimal dimensions and parameters that minimize weight while satisfying safety constraints.
### ITLBO (Improved TLBO) Algorithm

ITLBO enhances the standard TLBO algorithm with an adaptive teaching factor, elite solution influence, and three-way interaction in the learner phase.

- **Paper Citation**: R. V. Rao, V. Patel, "An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems", International Journal of Industrial Engineering Computations, 3(4), 2012, 535-560.
- **Real-world Application**: ITLBO has been successfully applied to optimize heat exchangers, finding design parameters that maximize heat transfer while minimizing pressure drop and material costs. It has also been used for power system optimization to minimize generation costs and transmission losses.
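The elitism ingredient can be illustrated with a generic replace-the-worst step, in which the best solutions found so far overwrite the worst population members after each generation. This is a sketch of the general technique under a minimization assumption, not the package's internal implementation:

```python
import numpy as np

def apply_elitism(population, fitness, elite_X, elite_f):
    """Overwrite the worst members of the population with stored elite
    solutions (minimization), so the best solutions found so far persist."""
    worst = np.argsort(fitness)[-len(elite_X):]  # indices of the worst members
    population = population.copy()
    fitness = fitness.copy()
    population[worst] = elite_X
    fitness[worst] = elite_f
    return population, fitness
```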
### Multi-objective TLBO Algorithm

Multi-objective TLBO extends TLBO to handle multiple competing objectives, using Pareto dominance and crowding distance for selection. It returns a set of non-dominated solutions (the Pareto front).

- **Paper Citation**: R. V. Rao, V. D. Kalyankar, "Multi-objective TLBO algorithm for optimization of modern machining processes", Advances in Intelligent Systems and Computing, 236, 2014, 21-31.
- **Real-world Application**: The algorithm has been applied to optimize machining processes such as turning, milling, and grinding. It simultaneously optimizes multiple objectives such as surface roughness, material removal rate, and tool wear, helping manufacturers achieve high-quality parts with efficient production.
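The Pareto-dominance test behind this selection can be sketched with generic helpers for minimization (the function names are illustrative, not the package's API):

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    f_a is no worse in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def non_dominated_indices(fitness):
    """Indices of the non-dominated rows of an (n_solutions, n_objectives) array."""
    n = len(fitness)
    return [i for i in range(n)
            if not any(dominates(fitness[j], fitness[i]) for j in range(n) if j != i)]
```

For example, `non_dominated_indices(np.array([[1, 2], [2, 1], [3, 3]]))` returns `[0, 1]`: the first two points trade off against each other, while `[3, 3]` is dominated by both.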
## Docker Support

You can use the included `Dockerfile` to build and test the package quickly. To build and run the package in Docker:
@@ -191,4 +324,8 @@ This package is licensed under the MIT License. See the [LICENSE](LICENSE) file

1. Ravipudi Venkata Rao, Ravikumar Shah, "BMR and BWR: Two simple metaphor-free optimization algorithms for solving real-life non-convex constrained and unconstrained problems," [arXiv:2407.11149v2](https://arxiv.org/abs/2407.11149).
2. Ravipudi Venkata Rao, "Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems", International Journal of Industrial Engineering Computations, 7(1), 2016, 19-34.
3. Ravipudi Venkata Rao, "Rao algorithms: Three metaphor-less simple algorithms for solving optimization problems", International Journal of Industrial Engineering Computations, 11(2), 2020, 193-212.
4. Ravipudi Venkata Rao, V. J. Savsani, D. P. Vakharia, "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", Information Sciences, 183(1), 2012, 1-15.
5. Ravipudi Venkata Rao, D. P. Rai, "Optimization of welding processes using quasi-oppositional-based Jaya algorithm", Journal of Mechanical Science and Technology, 31(5), 2017, 2513-2522.
6. Ravipudi Venkata Rao, V. Patel, "An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems", Scientia Iranica, 20(3), 2013, 710-720.
7. Ravipudi Venkata Rao, V. Patel, "An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems", International Journal of Industrial Engineering Computations, 3(4), 2012, 535-560.
8. Ravipudi Venkata Rao, V. D. Kalyankar, "Multi-objective TLBO algorithm for optimization of modern machining processes", Advances in Intelligent Systems and Computing, 236, 2014, 21-31.

docs/algorithms/gotlbo.md

Lines changed: 143 additions & 0 deletions
@@ -0,0 +1,143 @@
# Generalized Oppositional Teaching-Learning-Based Optimization (GOTLBO)

## Overview

The Generalized Oppositional Teaching-Learning-Based Optimization (GOTLBO) algorithm is an enhanced version of the standard TLBO algorithm developed by Prof. R.V. Rao. It incorporates oppositional-based learning to improve convergence speed and solution quality. The algorithm maintains the two-phase approach of the original TLBO (Teacher Phase and Learner Phase) while adding the ability to explore more of the search space through opposition.

## Key Features

- **Enhanced exploration**: Uses oppositional-based learning to explore more of the search space.
- **Parameter-free**: Like the original TLBO, GOTLBO doesn't require any algorithm-specific parameters.
- **Improved convergence**: Often converges faster than the standard TLBO algorithm.
- **Two-phase approach**: Maintains the Teacher Phase and Learner Phase from the original TLBO.
- **Handles constraints**: Effectively handles both constrained and unconstrained optimization problems.

## Algorithm Workflow

```mermaid
graph TD
    A[Initialize Population] --> B[Evaluate Fitness]
    B --> C[Identify Best Solution as Teacher]
    C --> D[Teacher Phase: Update Solutions Based on Teacher]
    D --> E[Generate Opposition-Based Solutions in Teacher Phase]
    E --> F[Select Better Solutions]
    F --> G[Learner Phase: Update Solutions Based on Peer Learning]
    G --> H[Generate Opposition-Based Solutions in Learner Phase]
    H --> I[Select Better Solutions]
    I --> J{Termination Criteria Met?}
    J -->|No| B
    J -->|Yes| K[Return Best Solution]
```
## Mathematical Formulation

### Teacher Phase with Opposition

For each student (solution) $X_i$ in the population at iteration $t$:

1. Generate a new solution using the standard TLBO teacher phase:

$$X_{i,new}^{t} = X_{i}^{t} + r \times (X_{teacher}^{t} - T_F \times M^{t})$$

Where:
- $X_{i}^{t}$ is the $i$-th student (solution) at iteration $t$
- $X_{teacher}^{t}$ is the best student (solution) at iteration $t$, acting as the teacher
- $M^{t}$ is the mean of all students (solutions) at iteration $t$
- $T_F$ is the teaching factor, which can be either 1 or 2 (decided randomly)
- $r$ is a random number in the range $[0, 1]$

2. Generate an opposite solution:

$$O_{i}^{t} = a + b - X_{i,new}^{t}$$

Where:
- $a$ and $b$ are the lower and upper bounds of the search space

3. Select the better of $X_{i,new}^{t}$ and $O_{i}^{t}$ based on their fitness values.

### Learner Phase with Opposition

For each student (solution) $X_i$ in the population:

1. Randomly select another student $X_j$ where $j \neq i$.
2. Generate a new solution using the standard TLBO learner phase:

If $f(X_i) < f(X_j)$ (i.e., $X_i$ is better than $X_j$):

$$X_{i,new}^{t} = X_{i}^{t} + r \times (X_{i}^{t} - X_{j}^{t})$$

If $f(X_i) \geq f(X_j)$ (i.e., $X_i$ is worse than or equal to $X_j$):

$$X_{i,new}^{t} = X_{i}^{t} + r \times (X_{j}^{t} - X_{i}^{t})$$

Where $r$ is a random number in the range $[0, 1]$.

3. Generate an opposite solution:

$$O_{i}^{t} = a + b - X_{i,new}^{t}$$

4. Select the better of $X_{i,new}^{t}$ and $O_{i}^{t}$ based on their fitness values.
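The select-the-better step shared by both phases can be sketched in NumPy as follows. This is an illustrative helper assuming a minimization objective and a vectorized population, not the package's internal API:

```python
import numpy as np

def opposition_select(X_new, bounds, objective):
    """For each candidate row of X_new, form the opposite point
    O = a + b - X and keep whichever of the pair has the lower objective."""
    a, b = bounds[:, 0], bounds[:, 1]
    O = a + b - X_new                               # opposite solutions, per coordinate
    f_X = np.apply_along_axis(objective, 1, X_new)  # fitness of the candidates
    f_O = np.apply_along_axis(objective, 1, O)      # fitness of the opposites
    keep_opposite = f_O < f_X                       # minimization: smaller is better
    selected = np.where(keep_opposite[:, None], O, X_new)
    return selected, np.minimum(f_X, f_O)
```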
## Example Usage

```python
import numpy as np
from rao_algorithms import GOTLBO_algorithm

# Define the objective function (to be minimized)
def sphere_function(x):
    return np.sum(x**2)

# Define problem parameters
bounds = np.array([[-10, 10]] * 10)  # 10-D problem with bounds [-10, 10] for each dimension
num_iterations = 100
population_size = 50
num_variables = 10

# Run the GOTLBO algorithm
best_solution, convergence_curve = GOTLBO_algorithm(
    bounds,
    num_iterations,
    population_size,
    num_variables,
    sphere_function
)

print("Best solution found:", best_solution)
print("Best fitness value:", sphere_function(best_solution))
```
## Advantages

1. **Improved exploration**: Oppositional-based learning helps the algorithm explore more of the search space.
2. **Faster convergence**: Often converges faster than the standard TLBO algorithm.
3. **No algorithm-specific parameters**: Maintains the parameter-free nature of the original TLBO algorithm.
4. **Good for multimodal problems**: The enhanced exploration capability makes it effective for problems with multiple local optima.
5. **Effective for large-scale problems**: Like TLBO, GOTLBO performs well on high-dimensional optimization problems.

## Applications

GOTLBO has been successfully applied to various real-world problems, including:

- Mechanical design optimization
- Structural optimization
- Thermal system design
- Electrical power systems optimization
- Manufacturing process optimization
- Machine learning hyperparameter tuning
## Real-world Application: Mechanical Design Optimization

GOTLBO has been applied to mechanical design optimization problems, including pressure vessel, spring, and gear train design. It effectively finds optimal dimensions and parameters that minimize weight while satisfying safety constraints.

In a typical pressure vessel design problem:

- **Decision variables**: Thickness of the shell, thickness of the head, inner radius, and length of the cylindrical section
- **Objective**: Minimize the total cost of the pressure vessel
- **Constraints**: Stress constraints, geometric constraints, and minimum thickness requirements

GOTLBO efficiently navigates this complex parameter space to find optimal designs that minimize cost while meeting all safety requirements.
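To make this concrete, the classic four-variable pressure vessel cost function can be written with a simple static penalty for the constraints. This uses the well-known benchmark formulation as an illustration; the penalty weight and variable bounds are assumptions, not the package's own benchmark code:

```python
import numpy as np

def pressure_vessel_cost(x):
    """Classic pressure vessel design benchmark with a static penalty.
    Variable order: [shell thickness, head thickness, inner radius, cylinder length]."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                      # minimum shell thickness
        -x2 + 0.00954 * x3,                                     # minimum head thickness
        -np.pi * x3**2 * x4 - (4 / 3) * np.pi * x3**3 + 1296000,  # minimum volume
        x4 - 240,                                               # maximum length
    ]
    penalty = sum(max(0.0, gi) ** 2 for gi in g)  # feasible points incur no penalty
    return cost + 1e6 * penalty
```

Any of the single-objective routines above (e.g. `GOTLBO_algorithm`) could then minimize `pressure_vessel_cost` over suitable bounds for the four variables.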
## References

- R. V. Rao, V. Patel, "An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems", Scientia Iranica, 20(3), 2013, 710-720.
- R. V. Rao, V. J. Savsani, D. P. Vakharia, "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", Information Sciences, 183(1), 2012, 1-15.

docs/algorithms/index.md

Lines changed: 13 additions & 1 deletion
@@ -9,6 +9,10 @@ This section provides detailed documentation for all the optimization algorithms

- [Jaya Algorithm](jaya.md): A parameter-free algorithm that always tries to move toward the best solution and away from the worst solution.
- [Rao Algorithms (Rao-1, Rao-2, Rao-3)](rao.md): Three metaphor-less algorithms that use different strategies to guide the search process.
- [TLBO (Teaching-Learning-Based Optimization)](tlbo.md): A parameter-free algorithm inspired by the teaching-learning process in a classroom.
- [QOJAYA (Quasi-Oppositional Jaya)](qojaya.md): An enhanced version of Jaya that incorporates quasi-oppositional learning for improved convergence.
- [GOTLBO (Generalized Oppositional TLBO)](gotlbo.md): An enhanced version of TLBO that incorporates oppositional-based learning to improve convergence.
- [ITLBO (Improved TLBO)](itlbo.md): An enhanced version of TLBO with adaptive teaching factors and elitism for better performance.
- [Multi-objective TLBO](multiobjective_tlbo.md): An extension of TLBO for solving problems with multiple competing objectives.

## Algorithm Comparison
@@ -23,6 +27,10 @@ The following table provides a comparison of the key features of the implemented

| Rao-2 | Yes | Best, Worst, and Average fitness | Uses fitness comparison with average |
| Rao-3 | Yes | Best solution and phase factor | Decreasing influence of best solution over time |
| TLBO | Yes | Teacher-Student learning process | Two-phase approach with good performance on large-scale problems |
| QOJAYA | Yes | Jaya with quasi-oppositional learning | Enhanced exploration with improved convergence |
| GOTLBO | Yes | TLBO with oppositional-based learning | Better exploration and faster convergence |
| ITLBO | Yes | TLBO with adaptive teaching factors | Improved convergence with elite influence |
| MO-TLBO | Yes | TLBO with Pareto dominance | Handles multiple competing objectives |

## Convergence Comparison
@@ -46,5 +54,9 @@ The convergence speed and solution quality of these algorithms can vary dependin

- **Rao-2**: Useful when the average fitness of the population provides meaningful guidance.
- **Rao-3**: Effective when you want a decreasing influence of the best solution over time.
- **TLBO**: Excellent for large-scale problems and when you want a two-phase approach to optimization.
- **QOJAYA**: When you need better exploration capabilities than standard Jaya, especially for multimodal problems.
- **GOTLBO**: When you need faster convergence than standard TLBO for complex problems.
- **ITLBO**: When you need better solution quality than standard TLBO, especially for constrained problems.
- **MO-TLBO**: When you have multiple competing objectives and need a set of trade-off solutions.

-For most problems, it is recommended to start with Jaya or TLBO due to their parameter-free nature and good general performance, then try the other algorithms if needed.
+For most single-objective problems, it is recommended to start with Jaya, TLBO, or their enhanced versions (QOJAYA, GOTLBO, ITLBO) due to their parameter-free nature and good general performance. For multi-objective problems, MO-TLBO is the recommended choice.
