From 3d161b77b76d3106a4e80f3fa51696e51509f9ce Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar Date: Wed, 7 Aug 2024 01:19:37 +0530 Subject: [PATCH 1/5] New file has been added. --- .../dart/concepts/user-input/user-input.md | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 content/dart/concepts/user-input/user-input.md diff --git a/content/dart/concepts/user-input/user-input.md b/content/dart/concepts/user-input/user-input.md new file mode 100644 index 00000000000..10c685392cd --- /dev/null +++ b/content/dart/concepts/user-input/user-input.md @@ -0,0 +1,56 @@ +--- +Title: 'user-input' +Description: 'User input in Dart is used to read the data from console and interact with the web applications.' +Subjects: + - 'Code Foundations' + - 'Computer Science' + - 'Mobilr Development' +Tags: + - 'Dart' + - 'Input' + - 'Flutter' + - 'Console' + - 'Forms' +CatalogContent: + - 'learn-dart' + - 'paths/computer-science' +--- + +**User input** is the most basic aspect of intearction between the user and the software. In Dart it varied depending on the type of application being used. It can be a web application, a Flutter or a console application. Dart provides varitey of tools and libraried to manage the user input effectively. + +## Syntax + +```pseudo +import 'dart:io'; + +void main(){ + String? input = stdin.readLineSync(); +} +``` + + +## Example + +The below code shows the implementation of user input in dart. It inputs a value as an age from the user and generates it as output: + +```dart +import 'dart:io'; +void main(){ + stdout.write('Enter your age: '); + String? input=stdin.readLineSync(); + + if(input !=null){ + int age=int.parse(input); + print('Dear Friend, You are $age years old.'); + }else{ + print('Invalid input.'); + } +} +``` + +The above code produces the following output: + +```shell +Enter your age: 21 +Dear Friend, You are 21 years old. 
+``` \ No newline at end of file From 5a211ba5d71aa65c6e98a37185912f4dc02f63a7 Mon Sep 17 00:00:00 2001 From: shantanu <56212958+cigar-galaxy82@users.noreply.github.com> Date: Sun, 8 Sep 2024 06:40:43 +0530 Subject: [PATCH 2/5] Update user-input.md --- .../dart/concepts/user-input/user-input.md | 31 +++++++++---------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/content/dart/concepts/user-input/user-input.md b/content/dart/concepts/user-input/user-input.md index 10c685392cd..7c460739ab7 100644 --- a/content/dart/concepts/user-input/user-input.md +++ b/content/dart/concepts/user-input/user-input.md @@ -1,23 +1,22 @@ --- -Title: 'user-input' -Description: 'User input in Dart is used to read the data from console and interact with the web applications.' +Title: 'User Input' +Description: 'User input in Dart is used to read the data from the console and interact with the web applications.' Subjects: - 'Code Foundations' - 'Computer Science' - - 'Mobilr Development' + - 'Mobile Development' Tags: - 'Dart' - 'Input' - 'Flutter' - 'Console' - - 'Forms' + - 'Form' CatalogContent: - 'learn-dart' - 'paths/computer-science' --- -**User input** is the most basic aspect of intearction between the user and the software. In Dart it varied depending on the type of application being used. It can be a web application, a Flutter or a console application. Dart provides varitey of tools and libraried to manage the user input effectively. - +**User input** is a fundamental aspect of interaction between users and software. In Dart, it varies depending on the type of application being used. It can be a web application, a Flutter application, or a console application. Dart provides a variety of tools and libraries to manage user input effectively. ## Syntax ```pseudo @@ -28,22 +27,20 @@ void main(){ } ``` - ## Example -The below code shows the implementation of user input in dart. 
It inputs a value as an age from the user and generates it as output: +The below code shows the implementation of user input in Dart. It inputs a value as an age from the user and generates an output: ```dart import 'dart:io'; -void main(){ +void main() { stdout.write('Enter your age: '); - String? input=stdin.readLineSync(); - - if(input !=null){ - int age=int.parse(input); - print('Dear Friend, You are $age years old.'); - }else{ - print('Invalid input.'); + String? input = stdin.readLineSync(); + if (input != null) { + int age = int.parse(input); + print('Dear Friend, You are $age years old.'); + } else { + print('Invalid input.'); } } ``` @@ -53,4 +50,4 @@ The above code produces the following output: ```shell Enter your age: 21 Dear Friend, You are 21 years old. -``` \ No newline at end of file +``` From 5923e4e82717287d92c33239edac66e03d84edb4 Mon Sep 17 00:00:00 2001 From: shantanu <56212958+cigar-galaxy82@users.noreply.github.com> Date: Sun, 8 Sep 2024 06:42:57 +0530 Subject: [PATCH 3/5] Update user-input.md --- content/dart/concepts/user-input/user-input.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/dart/concepts/user-input/user-input.md b/content/dart/concepts/user-input/user-input.md index 7c460739ab7..fbaf8a79327 100644 --- a/content/dart/concepts/user-input/user-input.md +++ b/content/dart/concepts/user-input/user-input.md @@ -17,6 +17,7 @@ CatalogContent: --- **User input** is a fundamental aspect of interaction between users and software. In Dart, it varies depending on the type of application being used. It can be a web application, a Flutter application, or a console application. Dart provides a variety of tools and libraries to manage user input effectively. + ## Syntax ```pseudo From c167502be0ec5461c0e1cefc1d6f8cc339900ec0 Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar Date: Fri, 27 Dec 2024 18:23:39 +0530 Subject: [PATCH 4/5] File has been added. 
--- .../sklearn/concepts/ensembles/ensembles.md | 133 ++++++++++++++++++ 1 file changed, 133 insertions(+) create mode 100644 content/sklearn/concepts/ensembles/ensembles.md diff --git a/content/sklearn/concepts/ensembles/ensembles.md b/content/sklearn/concepts/ensembles/ensembles.md new file mode 100644 index 00000000000..abe17a2eb95 --- /dev/null +++ b/content/sklearn/concepts/ensembles/ensembles.md @@ -0,0 +1,133 @@ +--- +Title: 'Ensembles' +Description: 'A machine learning approach that combines predictions from multiple models for enhancing accuracy and reliability.' +Subjects: + - 'AI' + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Classification' + - 'Data' + - 'Machine Learning' + - 'Scikit-learn' +CatalogContent: + - 'paths/intermediate-machine-learning-skill-path' + - 'paths/data-science' +--- + +**Ensembles** are machine learning techniques that combine the predictions from multiple models in order to increase accuracy, robustness, and reliability in classification and regression tasks. Scikit-learn provides tools to build these sophisticated predictive systems effectively. +Some of the ensemble techniques include Bagging and Boosting. + +## Bagging (Bootstrap Aggregating) + +Bagging refers to training multiple models in parallel on different subsets of the data generated using bootstrapping or random sampling with replacement. The predictions from the models are combined. +This approach reduces the variance and prevents overfitting. Some popular algorithms that can be classified under bagging are `Random Forest` and `Bagging Classifier`. + +## Boosting + +Boosting creates models sequentially, where each new model corrects the mistakes of the previous one by focusing on the harder instances that the former model failed to predict. Well-known boosting algorithms include `AdaBoost`, `Gradient Boosting` and `XGBoost`. 
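The boosting idea above can be sketched with scikit-learn's `GradientBoostingClassifier`. This is a minimal, illustrative example; the Iris dataset and the hyperparameter values are arbitrary choices, not a prescription:

```python
# Illustrative sketch of boosting with GradientBoostingClassifier;
# the dataset and hyperparameter values below are arbitrary choices
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# 100 shallow trees are fitted one after another, each trained to
# correct the errors of the ensemble built so far
gb_clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=2, random_state=42
)
gb_clf.fit(X_train, y_train)

print(f"Boosting accuracy: {accuracy_score(y_test, gb_clf.predict(X_test)):.2f}")
```

Lowering `learning_rate` generally calls for a larger `n_estimators`, since each tree then contributes a smaller correction.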
+ +## Syntax + +Sklearn offers the `BaggingClassifier` for performing classification tasks: + +```pseudo +BaggingClassifier(estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0) +``` + +- `estimator` (`None, default: None`): The base estimator to fit on random subsets of the dataset. If `None`, the algorithm uses a decision tree as the default estimator. +- `n_estimators` (`int, default=10`): Number of `estimators` in the ensemble. +- `max_samples` (`float, default=1.0`): The fraction of samples for fitting each estimator, must be between `0` and `1`. +- `max_features` (`float, default=1.0`): The fraction of samples for fitting each estimator, must be between `0` and `1`. +- `bootstrap` (`bool, default=True`): Whether to use bootstrap sampling (sampling with replacement) for creating datasets for each estimator. +- `bootstrap_features` (`bool, default=False`): Determines whether to sample features with replacement for each estimator. +- `oob_score` (`bool, default=False`): Determines whether to use out-of-bag samples to estimate the generalization error. +- `warm_start` (`bool, default=False`): If `True`, the fit method adds more estimators to the existing ensemble instead of starting from scratch. +- `n_jobs` (`int, default=None`): The number of jobs to run in parallel for fitting the base estimators `None` means using `1` core, `-1` uses all available cores. +- `random_state` (`int, default=None`): Controls the randomness of the estimator fitting process, ensuring reproducibility. +- `verbose` (`int, default=0`): Controls the verbosity level of the fitting process, with higher values that produces more detailed output. 
+ +## Example + +This example code demonstrates the use of the `BaggingClassifier` to build an ensemble of `decision trees` and examine its performance on the `iris` dataset: + +```py +#import all the necessary libraries +from sklearn.ensemble import BaggingClassifier +from sklearn.tree import DecisionTreeClassifier +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +data = load_iris() +X = data.data +y = data.target + +# Split the dataset into training and testing sets +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Initialize a BaggingClassifier with a DecisionTreeClassifier as the base estimator +bagging_clf = BaggingClassifier(estimator=DecisionTreeClassifier(), + n_estimators=50, + max_samples=0.8, + max_features=0.8, + bootstrap=True, + random_state=42) + +# Train the BaggingClassifier +bagging_clf.fit(X_train, y_train) + +# Predict on the test set +y_pred = bagging_clf.predict(X_test) + +# Evaluate accuracy +accuracy = accuracy_score(y_test, y_pred) +print(f"Accuracy: {accuracy:.2f}") +``` + +The code results in the following output: + +```shell +Accuracy: 1.00 +``` + +## Codebyte Example + +This is an example that demonstrates the use of a `VotingClassifier` to combine multiple classifiers (`Decision Tree`, `Support Vector Classifier`, and `K-Nearest Neighbors`) for a classification task on the `iris` dataset: + +```codebyte/python +from sklearn.ensemble import VotingClassifier +from sklearn.tree import DecisionTreeClassifier +from sklearn.svm import SVC +from sklearn.neighbors import KNeighborsClassifier +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +data = load_iris() +X = data.data +y = data.target + +# Split the dataset into training and testing sets +X_train, X_test, y_train, 
y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Initialize individual classifiers +dt_clf = DecisionTreeClassifier(random_state=42) +svc_clf = SVC(probability=True, random_state=42) +knn_clf = KNeighborsClassifier() + +# Create a VotingClassifier that combines the classifiers +voting_clf = VotingClassifier(estimators=[('dt', dt_clf), ('svc', svc_clf), ('knn', knn_clf)], voting='soft') + +# Train the VotingClassifier +voting_clf.fit(X_train, y_train) + +# Predict on the test set +y_pred = voting_clf.predict(X_test) + +# Evaluate accuracy +accuracy = accuracy_score(y_test, y_pred) +print(f"VotingClassifier Accuracy: {accuracy:.2f}") +``` From 69350c7be2f5d9adac8f52ac67d8ad459c2f453c Mon Sep 17 00:00:00 2001 From: Sriparno Roy Date: Thu, 9 Jan 2025 12:13:35 +0530 Subject: [PATCH 5/5] Minor changes --- .../sklearn/concepts/ensembles/ensembles.md | 37 ++++++++++--------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/content/sklearn/concepts/ensembles/ensembles.md b/content/sklearn/concepts/ensembles/ensembles.md index abe17a2eb95..0880b40f7e9 100644 --- a/content/sklearn/concepts/ensembles/ensembles.md +++ b/content/sklearn/concepts/ensembles/ensembles.md @@ -1,10 +1,9 @@ --- Title: 'Ensembles' -Description: 'A machine learning approach that combines predictions from multiple models for enhancing accuracy and reliability.' +Description: 'A machine learning technique that incorporates predictions from multiple models to enhance accuracy and reliability.' Subjects: - 'AI' - 'Data Science' - - 'Machine Learning' Tags: - 'Classification' - 'Data' @@ -16,16 +15,17 @@ CatalogContent: --- **Ensembles** are machine learning techniques that combine the predictions from multiple models in order to increase accuracy, robustness, and reliability in classification and regression tasks. Scikit-learn provides tools to build these sophisticated predictive systems effectively. -Some of the ensemble techniques include Bagging and Boosting. 
+Some of the ensemble techniques include _Bagging_ and _Boosting_. ## Bagging (Bootstrap Aggregating) Bagging refers to training multiple models in parallel on different subsets of the data generated using bootstrapping or random sampling with replacement. The predictions from the models are combined. + This approach reduces the variance and prevents overfitting. Some popular algorithms that can be classified under bagging are `Random Forest` and `Bagging Classifier`. ## Boosting -Boosting creates models sequentially, where each new model corrects the mistakes of the previous one by focusing on the harder instances that the former model failed to predict. Well-known boosting algorithms include `AdaBoost`, `Gradient Boosting` and `XGBoost`. +Boosting creates models sequentially, where each new model corrects the mistakes of the previous one by focusing on the harder instances that the former model failed to predict. Well-known boosting algorithms include `AdaBoost`, `Gradient Boosting`, and `XGBoost`. ## Syntax @@ -35,24 +35,24 @@ Sklearn offers the `BaggingClassifier` for performing classification tasks: BaggingClassifier(estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0) ``` -- `estimator` (`None, default: None`): The base estimator to fit on random subsets of the dataset. If `None`, the algorithm uses a decision tree as the default estimator. -- `n_estimators` (`int, default=10`): Number of `estimators` in the ensemble. -- `max_samples` (`float, default=1.0`): The fraction of samples for fitting each estimator, must be between `0` and `1`. -- `max_features` (`float, default=1.0`): The fraction of samples for fitting each estimator, must be between `0` and `1`. -- `bootstrap` (`bool, default=True`): Whether to use bootstrap sampling (sampling with replacement) for creating datasets for each estimator. 
-- `bootstrap_features` (`bool, default=False`): Determines whether to sample features with replacement for each estimator. -- `oob_score` (`bool, default=False`): Determines whether to use out-of-bag samples to estimate the generalization error. -- `warm_start` (`bool, default=False`): If `True`, the fit method adds more estimators to the existing ensemble instead of starting from scratch. -- `n_jobs` (`int, default=None`): The number of jobs to run in parallel for fitting the base estimators `None` means using `1` core, `-1` uses all available cores. -- `random_state` (`int, default=None`): Controls the randomness of the estimator fitting process, ensuring reproducibility. -- `verbose` (`int, default=0`): Controls the verbosity level of the fitting process, with higher values that produces more detailed output. +- `estimator` (`None`, default=`None`): The base estimator to fit on random subsets of the dataset. If `None`, the algorithm uses a decision tree as the default estimator. +- `n_estimators` (int, default=`10`): Number of estimators in the ensemble. +- `max_samples` (float, default=`1.0`): The fraction of samples for fitting each estimator, must be between `0` and `1`. +- `max_features` (float, default=`1.0`): The fraction of features for fitting each estimator, must be between `0` and `1`. +- `bootstrap` (bool, default=`True`): Whether to use bootstrap sampling (sampling with replacement) for creating datasets for each estimator. +- `bootstrap_features` (bool, default=`False`): Determines whether to sample features with replacement for each estimator. +- `oob_score` (bool, default=`False`): Determines whether to use out-of-bag samples for estimating the generalization error. +- `warm_start` (bool, default=`False`): If `True`, the fit method adds more estimators to the existing ensemble instead of starting from scratch. +- `n_jobs` (int, default=`None`): The number of jobs to run in parallel for fitting the base estimators. 
`None` means using `1` core, `-1` uses all available cores. +- `random_state` (int, default=`None`): Controls the randomness of the estimator fitting process, ensuring reproducibility. +- `verbose` (int, default=`0`): Controls the verbosity level of the fitting process, with higher values producing more detailed output. ## Example -This example code demonstrates the use of the `BaggingClassifier` to build an ensemble of `decision trees` and examine its performance on the `iris` dataset: +This example code demonstrates the use of the `BaggingClassifier` to build an ensemble of `Decision Trees` and examine its performance on the Iris dataset: ```py -#import all the necessary libraries +# Import all the necessary libraries from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.datasets import load_iris @@ -94,9 +94,10 @@ Accuracy: 1.00 ## Codebyte Example -This is an example that demonstrates the use of a `VotingClassifier` to combine multiple classifiers (`Decision Tree`, `Support Vector Classifier`, and `K-Nearest Neighbors`) for a classification task on the `iris` dataset: +This is an example that demonstrates the use of a `VotingClassifier` to combine multiple classifiers (`Decision Tree`, `Support Vector Classifier`, and `K-Nearest Neighbors`) for a classification task on the Iris dataset: ```codebyte/python +# Import all the necessary libraries from sklearn.ensemble import VotingClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC