**Regression** is a typical supervised learning task. It is used in situations where the value to be predicted is continuous. For example, we use regression to predict a target numeric value, such as a car's price, given a set of features or

**predictors** (mileage, brand, age). We train the system with many examples of cars, including both the predictors and the corresponding price of the car (labels).

**Types of Regression Models:**

**Simple Linear Regression** is a linear regression model that estimates the relationship between one independent variable and one dependent variable using a straight line. Example: Salary = a0 + a1*Experience ( y = a0 + a1x form ).
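A minimal sketch of fitting such a line with ordinary least squares (the closed-form solution for one predictor); the salary data below is made up for illustration:

```python
def simple_linear_fit(x, y):
    """Least-squares fit of y = a0 + a1*x; returns (a0, a1)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope = covariance(x, y) / variance(x)
    a1 = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
          / sum((xi - mean_x) ** 2 for xi in x))
    a0 = mean_y - a1 * mean_x  # intercept passes through the means
    return a0, a1

# Toy data lying exactly on Salary = 30 + 5 * Experience
experience = [1, 2, 3, 4]
salary = [35, 40, 45, 50]
a0, a1 = simple_linear_fit(experience, salary)
print(a0, a1)  # 30.0 5.0
```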

**Multiple Linear Regression** is a linear regression model that estimates the relationship between several independent variables (features) and one dependent variable. Example: Car Price = a0 + a1*Mileage + a2*Brand + a3*Age ( y = a0 + a1x1 + a2x2 + ... + anxn form )
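With several predictors the least-squares coefficients come from the normal equations (XᵀX)a = Xᵀy. A sketch using plain Gaussian elimination on toy data (the data and helper names are illustrative, not from any library):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def multiple_linear_fit(X, y):
    """Fit y = a0 + a1*x1 + ... via the normal equations (X^T X) a = X^T y."""
    Xb = [[1.0] + list(row) for row in X]  # prepend intercept column
    n, m = len(Xb), len(Xb[0])
    XtX = [[sum(Xb[r][i] * Xb[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    Xty = [sum(Xb[r][i] * y[r] for r in range(n)) for i in range(m)]
    return gauss_solve(XtX, Xty)

# Toy data lying exactly on y = 2 + 3*x1 - 1*x2
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]]
y = [2, 5, 1, 4, 7]
coef = multiple_linear_fit(X, y)  # ~[2.0, 3.0, -1.0]
```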

**Polynomial Regression** is a special case of multiple linear regression. The relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Linear regression cannot fit non-linear data (underfitting), so we increase the model's complexity and use polynomial regression, which fits such data better. ( y = a0 + a1x1 + a2x1^2 + ... + anx1^n form )
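The key idea is that polynomial regression is just linear regression on expanded features: map x to [x, x², ..., xⁿ] and fit those columns linearly. A small sketch with assumed coefficients (a0=1, a1=2, a2=3, chosen only for illustration):

```python
def polynomial_features(x, degree):
    """Expand a scalar x into [x, x^2, ..., x^degree]."""
    return [x ** d for d in range(1, degree + 1)]

def predict(coeffs, feats):
    """Linear prediction a0 + a1*f1 + a2*f2 + ... over expanded features."""
    return coeffs[0] + sum(c * f for c, f in zip(coeffs[1:], feats))

coeffs = [1.0, 2.0, 3.0]                # y = 1 + 2x + 3x^2 (illustrative)
feats = polynomial_features(2.0, 2)     # [2.0, 4.0]
yhat = predict(coeffs, feats)           # 1 + 2*2 + 3*4 = 17.0
```

Any multiple-linear-regression fitter can then be run on these expanded features; the model stays linear in its coefficients even though it is non-linear in x.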

**Support Vector Regression** is a regression model in which we try to fit the error within a certain threshold (unlike the previous cases, where we minimized the error directly). SVR can work for linear as well as non-linear problems depending on the kernel we choose. There is an implicit relationship between the variables, unlike the previous models, where the relationship was defined explicitly by an equation (there, the coefficients are enough to absorb differences in the scales of the variables). Therefore, feature scaling is required here.
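The "threshold" idea is captured by SVR's ε-insensitive loss: predictions within an ε tube of the target cost nothing, and only errors beyond ε are penalized. A minimal sketch of the loss alone (the full kernelized optimizer is omitted; ε=0.5 is an arbitrary choice for illustration):

```python
def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.5):
    """Sum of errors beyond the epsilon tube; errors inside cost zero."""
    return sum(max(0.0, abs(t - p) - epsilon)
               for t, p in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.2, 2.9, 3.1]   # absolute errors: 0.2, 0.9, 0.1
loss = epsilon_insensitive_loss(y_true, y_pred)
print(loss)  # 0.4 — only the 0.9 error exceeds the 0.5 tube
```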

**Decision Tree Regression** builds a regression model in the form of a tree structure. The dataset is broken down into smaller subsets while an associated decision tree is developed incrementally. For a point in the test set, we predict the value using the decision tree built.
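A single split is the building block of such a tree: choose the threshold that minimizes squared error, and let each side predict the mean of its subset. A sketch of this one-split "stump" (a full tree applies it recursively):

```python
def fit_stump(x, y):
    """Fit a one-split regression tree: each leaf predicts its subset's mean."""
    best = None
    for thr in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= thr]
        right = [yi for xi, yi in zip(x, y) if xi > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda q: ml if q <= thr else mr  # leaf prediction per side

x = [1, 2, 8, 9]
y = [10.0, 10.0, 50.0, 50.0]
tree = fit_stump(x, y)          # best split lands between 2 and 8
low, high = tree(1.5), tree(8.5)  # 10.0 and 50.0
```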

**Random Forest Regression –** Here, we take k data points out of the training set and build a decision tree. We repeat this for different sets of k points. We have to decide the number of decision trees to be built in this manner; let that number be n. We predict the value using all n trees and take their average to obtain the final predicted value for a point in the test set.
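The sampling-and-averaging step above can be sketched as follows. To keep the example short, each "tree" is a depth-0 leaf-mean predictor standing in for a full decision tree; the data and tree count are made up for illustration:

```python
import random

def bootstrap(data, rng):
    """Resample the dataset with replacement (one bootstrap sample)."""
    return [rng.choice(data) for _ in data]

def fit_leaf_mean(sample):
    """A stand-in 'tree': always predicts the sample's mean target."""
    mean = sum(y for _, y in sample) / len(sample)
    return lambda q: mean

def forest_predict(trees, q):
    """Forest prediction = average of the individual tree predictions."""
    return sum(t(q) for t in trees) / len(trees)

rng = random.Random(0)  # fixed seed for reproducibility
data = [(1, 10.0), (2, 12.0), (3, 14.0)]
trees = [fit_leaf_mean(bootstrap(data, rng)) for _ in range(50)]
yhat = forest_predict(trees, 2)  # averaged over 50 bootstrapped trees
```

Averaging over many trees trained on different resamples is what reduces the variance of a single, easily-overfit decision tree.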

**How do we pick the right regression model for a given problem?** Considering factors such as the type of relationship between the dependent variable and the independent variables (linear or non-linear), the pros and cons of choosing a particular regression model for the problem, and the adjusted R² intuition, we pick the regression model that is most apt for the problem to be solved.
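The adjusted R² mentioned above can be computed directly: plain R² measures explained variance, and the adjustment penalizes it for each extra predictor, so adding uninformative features does not inflate the score. A small sketch on made-up predictions (n = 4 samples, p = 1 predictor):

```python
def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_residual / SS_total."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def adjusted_r2(r2, n, p):
    """Penalize R^2 for the number of predictors p given n samples."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]
r2 = r2_score(y_true, y_pred)        # 0.995
adj = adjusted_r2(r2, n=4, p=1)      # 0.9925
```

Comparing candidate models by adjusted R² (rather than raw R²) keeps the comparison fair when the models use different numbers of features.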