What are the common Distance Measures in Clustering?

Distance measures (or similarity measures, depending on the context) play a crucial role in clustering algorithms, as they determine the similarity or dissimilarity between data points. The choice of distance measure depends on the nature of your data and the specific requirements of your clustering task. Here are some common distance measures used in clustering: …
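As a brief illustration (the excerpt's own list is truncated above), here is a minimal sketch of three widely used measures, Euclidean, Manhattan, and cosine distance, computed with SciPy's scipy.spatial.distance helpers; the sample vectors are illustrative only:

# A minimal sketch of three common distance measures, using SciPy.
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, cosine

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])

print(euclidean(a, b))   # Euclidean (L2) distance: sqrt(sum((a - b)**2))
print(cityblock(a, b))   # Manhattan (L1) distance: sum(|a - b|)
print(cosine(a, b))      # Cosine distance: 1 - cosine similarity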

Choosing the right estimator

Often the hardest part of solving a machine learning problem can be finding the right estimator for the job. Different estimators are better suited for different types of data and different problems. The flowchart below is designed to give users a bit of a rough guide on how to approach problems with regard to which…

What is Logistic Regression?

Logistic Regression is a statistical method used for binary classification tasks, where the outcome variable is categorical and has two classes. Despite its name, it is used for classification rather than regression. The logistic regression algorithm models the probability that a given input belongs to a particular class. The logistic regression model applies the logistic…
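To make the idea concrete, here is a minimal sketch of binary classification with scikit-learn's LogisticRegression; the synthetic dataset and split below are illustrative assumptions, not part of the original post:

# A minimal sketch of binary classification with LogisticRegression.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# predict_proba returns the modelled probability of each class per sample
print(clf.predict_proba(X_test[:3]))
print(clf.score(X_test, y_test))  # accuracy on the held-out split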

Parameter cv in GridSearchCV

In scikit-learn’s GridSearchCV (Grid Search Cross Validation), the parameter cv stands for “cross-validation.” It determines the cross-validation splitting strategy to be used when evaluating the performance of a machine learning model. When cv is set to an integer (e.g., cv=5), it represents the number of folds in a (Stratified) K-Fold cross-validation. For example, cv=5 means…
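A minimal sketch of the cv parameter in action follows; the estimator and the parameter grid are illustrative assumptions. Because the estimator is a classifier, cv=5 here means 5-fold stratified cross-validation:

# A minimal sketch of GridSearchCV with cv=5 (5-fold cross-validation).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1.0, 10.0]}
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
grid.fit(X, y)  # each candidate C is scored on 5 stratified folds

print(grid.best_params_, grid.best_score_)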

Pre-pruning Decision Tree – GridSearch for Hyperparameter tuning

Grid search is a tuning technique that attempts to compute the optimum values of hyperparameters. It is an exhaustive search performed over specified parameter values of a model. The parameters of the estimator/model are optimized by cross-validated grid search over a parameter grid.
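Here is a minimal sketch of pre-pruning a decision tree via grid search; the particular pruning parameters and value ranges below are illustrative assumptions:

# A minimal sketch of cross-validated grid search over pruning parameters.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

param_grid = {
    "max_depth": [2, 3, 4, 5],
    "min_samples_leaf": [1, 5, 10],
    "min_samples_split": [2, 10, 20],
}
grid = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
grid.fit(X, y)

print(grid.best_params_)  # pruning settings that scored best across folds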

Pre-pruning Decision Tree – depth restricted

In general, the deeper you allow your tree to grow, the more complex your model becomes: each additional split captures more information about the training data, and this is one of the root causes of overfitting. We can limit the growth of the tree with its max_depth parameter:
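A minimal sketch of depth-restricted pre-pruning follows; max_depth=3 and the iris data are illustrative choices, not recommendations from the original post:

# A minimal sketch comparing a depth-restricted tree with an unrestricted one.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

shallow = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
deep = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)  # unrestricted

# A large gap between train and test accuracy hints at overfitting
print("shallow:", shallow.score(X_train, y_train), shallow.score(X_test, y_test))
print("deep:   ", deep.score(X_train, y_train), deep.score(X_test, y_test))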

Feature Importance in Decision Tree

In scikit-learn, the feature_importances_ attribute is associated with tree-based models, such as Decision Trees, Random Forests, and Gradient Boosted Trees. This attribute provides a way to assess the importance of each feature (or variable) in making predictions with the trained model. When you train a tree-based model, the algorithm makes decisions at each node based…
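As a short illustration, here is a minimal sketch of reading feature_importances_ from a fitted decision tree; the iris data is an illustrative stand-in for your own features:

# A minimal sketch of inspecting feature_importances_ on a fitted tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(random_state=42).fit(data.data, data.target)

# One importance score per feature; the scores sum to 1.0
for name, score in zip(data.feature_names, clf.feature_importances_):
    print(f"{name}: {score:.3f}")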

Visualizing the Decision Tree

To visualize a decision tree in scikit-learn, you can use the plot_tree function from the sklearn.tree module. This function allows you to generate a visual representation of the decision tree. Here’s a simple example: To show the decision tree as text in scikit-learn, you can use the export_text function from the sklearn.tree module. This function…
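A minimal sketch of both visualization routes mentioned above, plot_tree for a figure and export_text for a plain-text rendering, follows; the dataset and max_depth=3 are illustrative assumptions:

# A minimal sketch of visualizing a decision tree two ways.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text, plot_tree

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=42).fit(data.data, data.target)

# Graphical view of the fitted tree
plt.figure(figsize=(10, 6))
plot_tree(clf, feature_names=data.feature_names,
          class_names=list(data.target_names), filled=True)
plt.show()

# Plain-text view of the same tree
print(export_text(clf, feature_names=list(data.feature_names)))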