What is Mahalanobis Distance

The Mahalanobis distance is a measure of the distance between a point and a distribution, taking into account the correlation between variables. It is often used in statistics and machine learning to identify outliers and to assess the dissimilarity between a data point and a distribution. The Mahalanobis distance is defined for a point (x)…
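As a sketch, the distance of a point x from a sample with mean μ and covariance Σ is sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)); a minimal NumPy version, using made-up data purely for illustration, might look like this:

```python
import numpy as np

# Hypothetical 2-D sample standing in for a real dataset
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))

mean = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))  # inverse covariance matrix

def mahalanobis(x, mean, cov_inv):
    """D_M(x) = sqrt((x - mean)^T * cov_inv * (x - mean))."""
    diff = x - mean
    return np.sqrt(diff @ cov_inv @ diff)

point = np.array([1.5, -0.5])
print(mahalanobis(point, mean, cov_inv))
```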

What is Jaccard Distance

Jaccard distance is a measure of dissimilarity between two sets. It is calculated as the complement of the Jaccard similarity coefficient and is particularly useful when dealing with binary data or sets. The Jaccard similarity coefficient measures the proportion of shared elements between two sets, and the Jaccard distance is essentially the complement of this…
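As a sketch, with Jaccard similarity |A ∩ B| / |A ∪ B|, the distance is simply 1 minus that ratio; a small Python version with hypothetical sets might look like this:

```python
def jaccard_distance(a: set, b: set) -> float:
    """Jaccard distance = 1 - |A ∩ B| / |A ∪ B|."""
    union = a | b
    if not union:
        return 0.0  # both sets empty: treat the distance as 0
    return 1 - len(a & b) / len(union)

# 2 shared elements out of 4 total -> similarity 0.5, distance 0.5
print(jaccard_distance({"a", "b", "c"}, {"b", "c", "d"}))
```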

Choosing the right estimator

Often the hardest part of solving a machine learning problem can be finding the right estimator for the job. Different estimators are better suited to different types of data and different problems. The flowchart below is designed to give users a rough guide on how to approach problems with regard to which…

What is Logistic Regression?

Logistic Regression is a statistical method used for binary classification tasks, where the outcome variable is categorical and has two classes. Despite its name, it is used for classification rather than regression. The logistic regression algorithm models the probability that a given input belongs to a particular class. The logistic regression model applies the logistic…
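A minimal sketch with scikit-learn's LogisticRegression, using the built-in breast-cancer dataset purely as a stand-in for real binary-labelled data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=5000)  # raise max_iter so the solver converges
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))      # accuracy on held-out data
print(clf.predict_proba(X_test[:3]))  # per-class probabilities from the logistic function
```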

NumPy function argmax

np.argmax is a NumPy function that returns the indices of the maximum values along a specified axis in an array. If the input array is multi-dimensional, you can specify the axis along which the maximum values are computed. Here’s a simple example: in it, np.argmax(arr) returns the index (position) of the maximum value…
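A minimal sketch of such an example (the array values here are hypothetical):

```python
import numpy as np

arr = np.array([3, 7, 2, 9, 4])
print(np.argmax(arr))          # 3 -> index of the maximum value (9)

mat = np.array([[1, 8],
                [6, 2]])
print(np.argmax(mat, axis=0))  # [1 0] -> row index of the max in each column
print(np.argmax(mat, axis=1))  # [1 0] -> column index of the max in each row
```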

NumPy function argsort

np.argsort is a NumPy function that returns the indices that would sort an array along a specified axis. It performs an indirect sort on the input array and returns an array of indices that represent the sorted order of the elements. The returned indices can be used to construct a sorted version of the input…
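A minimal sketch with hypothetical values, showing both the index array and the sorted array built from it:

```python
import numpy as np

arr = np.array([3, 7, 2, 9, 4])
order = np.argsort(arr)

print(order)       # [2 0 4 1 3] -> indices that would sort the array
print(arr[order])  # [2 3 4 7 9] -> sorted copy constructed from those indices
```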

Pre-pruning Decision Tree – GridSearch for Hyperparameter tuning

Grid search is a tuning technique that searches for the optimal values of hyperparameters. It is an exhaustive search over specified parameter values of a model: the parameters of the estimator are optimized by cross-validated grid search over a parameter grid.
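A minimal sketch with scikit-learn's GridSearchCV wrapped around a DecisionTreeClassifier; the dataset and the candidate parameter values are illustrative only:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Candidate pre-pruning hyperparameters (illustrative values)
param_grid = {
    "max_depth": [2, 3, 4, 5],
    "min_samples_leaf": [1, 5, 10],
}

grid = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)            # best combination found by cross-validation
print(grid.score(X_test, y_test))   # accuracy of the refit best model on held-out data
```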

Pre-pruning Decision Tree – depth restricted

In general, the deeper you allow your tree to grow, the more complex your model becomes: it has more splits and captures more information about the data, which is one of the root causes of overfitting. We can limit this by restricting the max_depth of the tree:
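A minimal sketch, reusing the train/test split from the grid-search example above; max_depth=3 is an illustrative choice, not a recommendation:

```python
from sklearn.tree import DecisionTreeClassifier

# Pre-pruned tree: splitting stops once the tree is 3 levels deep
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)          # X_train / y_train assumed from the split above

print(tree.get_depth())             # never exceeds 3
print(tree.score(X_test, y_test))   # accuracy on held-out data
```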

Get a random sample from your dataset

To grab a random sample from a dataset in Python, you can use the pandas library. Assuming your dataset is stored in a pandas DataFrame, you can use the sample method to randomly select rows. Here’s an example: in it, n=5 specifies the number of rows to sample, and random_state is set to ensure reproducibility.
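A minimal sketch with a small, made-up DataFrame standing in for your dataset:

```python
import pandas as pd

# Hypothetical DataFrame standing in for a real dataset
df = pd.DataFrame({"x": range(100), "y": range(100, 200)})

sample = df.sample(n=5, random_state=42)  # 5 random rows, reproducible across runs
print(sample)
```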