Loss Functions In ML – With Logistic Regression Example

Loss functions in ML (Machine Learning) are mathematical functions evaluated during training.

These functions evaluate the price paid for inaccurate predictions in classification problems.

Therefore, their output should be minimized.

The Loss Function used in Logistic Regression is the cross-entropy function:
L(ŷ, y) = -(y * log(ŷ) + (1 - y) * log(1 - ŷ))

Where y is the ground-truth outcome observed for a given training example, and ŷ is the predicted probability for that same example, calculated via the Logistic Regression function.
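
As a quick illustration, here is a minimal Python sketch of this per-example cross-entropy loss. The function name cross_entropy_loss and the small epsilon used to avoid log(0) are my own illustrative choices, not part of the formula above.

```python
import math

def cross_entropy_loss(y_hat, y, eps=1e-12):
    """Per-example cross-entropy loss L(y_hat, y) for a binary label.

    y_hat -- predicted probability in (0, 1) from Logistic Regression
    y     -- ground-truth label, 0 or 1
    eps   -- small constant to avoid log(0) (an implementation choice)
    """
    # Clamp the prediction away from exactly 0 or 1 so log() stays finite.
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# A confident correct prediction costs little; a confident wrong one costs a lot.
print(cross_entropy_loss(0.9, 1))   # ~0.105
print(cross_entropy_loss(0.1, 1))   # ~2.303
```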

The Loss Function above is convex and therefore has only one minimum, which makes finding the minimal-error point straightforward. A Mean Squared Error function, applied to the output of Logistic Regression, can create a non-convex shape with multiple local minima, which makes finding the minimal-error point harder.
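
To show what minimizing toward that single minimum looks like in practice, here is a hedged Python sketch of a one-feature Logistic Regression trained by gradient descent on the average cross-entropy loss. The toy data, learning rate, and epoch count are illustrative assumptions, not values from the text above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic_regression(xs, ys, lr=0.1, epochs=1000):
    """Fit a one-feature Logistic Regression by gradient descent on the
    average cross-entropy loss. Because the loss is convex, the descent
    heads toward the single minimum regardless of the starting weights."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in zip(xs, ys):
            y_hat = sigmoid(w * x + b)
            # Gradient of the cross-entropy loss with respect to w and b.
            dw += (y_hat - y) * x
            db += (y_hat - y)
        w -= lr * dw / n
        b -= lr * db / n
    return w, b

# Toy, linearly separable data (illustrative assumption).
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic_regression(xs, ys)
print(sigmoid(w * 2.0 + b))    # high probability for a positive example
print(sigmoid(w * -2.0 + b))   # low probability for a negative example
```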