Posts

Showing posts from July, 2020

Sigmoid Activation Function

The sigmoid (or logistic) activation function translates its output into the range (0, 1). For small values (below about -5) sigmoid returns a value close to zero, and for large values (above about 5) the result of the function gets close to 1. It is not zero-centered. A minimal code sketch is shown below. Advantages: it is mostly used in the output layer for binary classification, although other activation functions often perform more effectively than sigmoid. Disadvantages: the exp() function is computationally expensive; it suffers from the vanishing-gradient problem; and it is not useful for regression tasks either.
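
The original post's code is not included in this excerpt; what follows is a minimal sketch of the sigmoid function in Python, assuming NumPy is available (the function name and sample values are illustrative, not the post's original code).

import numpy as np

def sigmoid(x):
    # Sigmoid (logistic) function: squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Values well below zero map close to 0; values well above zero map close to 1.
x = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
print(sigmoid(x))  # approximately [0.00005, 0.0067, 0.5, 0.9933, 0.99995]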

Activation Functions

What is an Activation Function? Activation functions are mathematical equations that determine the output of a neural network. An activation function is applied to every neuron and determines when each neuron should be activated or deactivated. For example, if hot water is poured on your left hand, you suddenly pull your left hand away but not your right hand, because only the neurons of your left hand are activated while the rest remain deactivated. These functions also normalize the output of each neuron to a range between 0 and 1 or between -1 and 1 (a small sketch follows this summary). Why Activation Functions? We provide a set of training samples to a computer, and the computer adjusts the weights and biases so as to minimize the error in the output. While the computer trains on image samples, we would like the neural network to change its weights and biases accordingly so that fewer images are recognized wrongly. But our neural network will not change the weights and biases little by little; it jumps …
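
As a small illustration of the normalization described above, the sketch below (illustrative Python, not from the post) passes one neuron's raw weighted sum through sigmoid and tanh, which bound the output to (0, 1) and (-1, 1) respectively.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes output into (0, 1)

# Raw pre-activation of a single neuron: weighted sum of its inputs plus a bias.
weights = np.array([0.4, -0.7, 0.2])
inputs = np.array([1.0, 2.0, 3.0])
bias = 0.1
z = np.dot(weights, inputs) + bias    # unbounded raw value (-0.3 here)

print(sigmoid(z))   # ~0.43, normalized into (0, 1)
print(np.tanh(z))   # ~-0.29, normalized into (-1, 1)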

Deep Learning

What is Deep Learning? Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning from data that is unstructured or unlabeled. It is also known as deep neural learning or a deep neural network, and its structure is loosely modeled on the human brain. Do you know the first neuron created? The perceptron was the first neural network unit; it performs simple computations to detect features in the input data. It was introduced by Frank Rosenblatt in 1957 (a small perceptron sketch follows this summary). Types of deep learning networks include the Artificial Neural Network, the Deep Belief Network, the Convolutional Neural Network, and the Recurrent Neural Network. In a neural network, the first layer is the input layer, next come the hidden layers, and the final one is the output layer; there can be any number of hidden layers in the network. How does a Neural Network work? The neural network we create works similarly to the human brain: we give features th…
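
Since the excerpt mentions Rosenblatt's perceptron, here is a minimal sketch of a perceptron trained with the classic update rule, written in Python; the toy AND dataset and all names are illustrative assumptions, not taken from the post.

import numpy as np

def step(z):
    # Threshold activation used by the original perceptron.
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    weights = np.zeros(X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(np.dot(weights, xi) + bias)
            error = target - pred
            weights += lr * error * xi   # nudge weights toward the target
            bias += lr * error           # nudge bias toward the target
    return weights, bias

# Toy example: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([step(np.dot(w, xi) + b) for xi in X])  # expected: [0, 0, 0, 1]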