In inductive machine learning, we start with input samples x and their corresponding outputs f(x). Our aim is to estimate the function f.
In simple words, we learn by example: we generalize from the samples so we can predict the output for new inputs.
E.g. let's say we want to classify two types of fruit: apples and watermelons. We can measure the height and weight of the fruits in our training data set and label each one as an apple or a watermelon.
Then we give this data to our model. The model can learn to approximate that heavier, taller objects are watermelons and lighter objects are apples. This is an example of inductive learning.
In deductive learning, we start from an established rule and apply it to data. Suppose we already know the rule that watermelons are heavier than apples. We apply this rule to a new observation and deduce that, since it is a lighter object, it must be an apple.
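The inductive side of the fruit example can be sketched in a few lines of Python. This is a minimal illustration, not a real classifier: the weights are made-up numbers, and the "learning" step simply induces a weight cutoff from the labeled examples.

```python
def learn_threshold(samples):
    """Induce a weight cutoff (in grams) midway between the
    heaviest apple and the lightest watermelon seen in training."""
    apple_weights = [w for w, label in samples if label == "apple"]
    melon_weights = [w for w, label in samples if label == "watermelon"]
    return (max(apple_weights) + min(melon_weights)) / 2

def classify(weight, threshold):
    # Apply the induced rule to a new, unseen fruit.
    return "watermelon" if weight > threshold else "apple"

# Hypothetical training data: (weight in grams, label).
training = [(150, "apple"), (180, "apple"), (200, "apple"),
            (3000, "watermelon"), (4500, "watermelon")]

cutoff = learn_threshold(training)
print(classify(170, cutoff))    # a light, unseen fruit
print(classify(5000, cutoff))   # a heavy, unseen fruit
```

The key point is that the rule (the cutoff) is not given to the program; it is generalized from the examples, which is what makes this inductive rather than deductive.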
Disease diagnosis is a common application of inductive machine learning: based on certain symptoms, a model can predict whether a disease is present.
Some popular uses of inductive machine learning are as follows:
Disease diagnosis: let's say x is a patient's symptoms.
Then f(x) is the disease the patient is suffering from. We can derive the function f by inductive machine learning.
Credit risk analysis: let's say x is a person's key financial indicators, e.g. Social Security records, credit score, etc.
The function f(x) gives the result of whether credit is approved for this person. In this case, f is the machine learning model.
Self-driving car: in this case, the images from a car's different cameras are x.
Then f(x) is the angle by which the steering wheel should be turned to follow the path.
Face recognition: images of different people form a dataset.
We can build a model f(x) that returns the name of the person shown in an image x.
Logistic regression is a classification algorithm. Although the name suggests a regression technique, it is used for classification problems.
We can use it to determine the class of a data point. It is typically a binary class, such as 0/1 or yes/no.
Logistic regression is based on probability: the model outputs a probability between 0 and 1. It models the log-odds of the outcome as a linear function of the inputs, and the logistic (sigmoid) function maps this linear score back to a probability. Applying a threshold to that probability (commonly 0.5) then distributes the values between the two classes.
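The sigmoid mapping described above can be seen directly in a short sketch. Only the standard library is used; the input z stands for the linear score the model computes from its features.

```python
import math

def sigmoid(z):
    # Logistic (sigmoid) function: maps any real number into (0, 1),
    # so the result can be read as a probability.
    return 1 / (1 + math.exp(-z))

print(sigmoid(0))    # 0.5 -- the decision boundary
print(sigmoid(4))    # close to 1 (strongly positive score)
print(sigmoid(-4))   # close to 0 (strongly negative score)
```

A score of exactly zero gives a probability of 0.5, which is why 0.5 is the natural default threshold between the two classes.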
E.g. let's say we want to predict whether a day will be rainy based on its temperature. We have to give one of two answers:
true for a rainy day and false for a sunny day. We can use the temperature in our model to predict the probability of rain.
Then we apply a threshold to this probability to divide the days into two classes, 0 or 1. We can use 0 for a sunny day and 1 for a rainy day.
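The rain example can be sketched end to end. The coefficients w and b here are assumed values chosen for illustration; in practice they would be fitted to historical weather data (e.g. by gradient descent).

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Hypothetical model parameters: cooler days get a higher rain
# probability under this made-up model.
w, b = -0.5, 10.0

def predict_rain(temperature_c, threshold=0.5):
    p = sigmoid(w * temperature_c + b)   # probability of rain
    return 1 if p >= threshold else 0    # 1 = rainy day, 0 = sunny day

print(predict_rain(10))   # cool day
print(predict_rain(30))   # hot day
```

Note that the model itself outputs a probability; the hard 0/1 label only appears once we compare that probability against the threshold.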