11. What would be a correct option for the weights W = [W0, W1, W2], so that the following sigmoid unit functions as an AND gate?
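A weight vector along the lines of W = [-30, 20, 20] makes the unit behave as an AND gate, since the sigmoid output is close to 1 only when both inputs are 1. Below is a minimal Python sketch that checks such a choice against the truth table; the specific values are illustrative, not the quiz key.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Candidate weights: w0 is the bias, w1 and w2 weight the two binary inputs.
# These particular numbers are an illustrative assumption.
w0, w1, w2 = -30.0, 20.0, 20.0

for x1 in (0, 1):
    for x2 in (0, 1):
        out = sigmoid(w0 + w1 * x1 + w2 * x2)
        print(x1, x2, round(out, 4))  # output is ~1 only when both inputs are 1
```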
12. Which of the following is true?
In batch gradient descent we update the weights and biases of the neural network after a forward pass over each training example.
In batch gradient descent we update the weights and biases of our neural network after a forward pass over all the training examples.
Each step of stochastic gradient descent takes more time than each step of batch gradient descent.
None of these three options is correct.
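The distinction these options test is when the parameters are updated: batch gradient descent updates once per pass over all the training examples, while stochastic gradient descent updates after every individual example (so a single SGD step is cheaper, not more expensive). A minimal Python sketch of both update rules on toy data; all data and constants are illustrative.

```python
import numpy as np

# Toy regression data: y = 2x + 1 (values are illustrative).
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
lr = 0.05

def batch_gd(w, b, epochs=500):
    # One parameter update per pass over ALL training examples.
    for _ in range(epochs):
        err = w * X + b - y
        w -= lr * np.mean(err * X)
        b -= lr * np.mean(err)
    return w, b

def stochastic_gd(w, b, epochs=500):
    # One parameter update per individual training example.
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = w * xi + b - yi
            w -= lr * err * xi
            b -= lr * err
    return w, b

print(batch_gd(0.0, 0.0))       # both end up near w = 2, b = 1
print(stochastic_gd(0.0, 0.0))
```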
13. In a neural network, which one of the following techniques is NOT useful to reduce overfitting?
Adding more layers
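Techniques that do help reduce overfitting include dropout, weight decay (L2 regularization), early stopping, and data augmentation; adding more layers increases model capacity and, if anything, makes overfitting worse. A short sketch of adding dropout to a small network, written with PyTorch; the layer sizes and dropout rate are illustrative assumptions.

```python
import torch.nn as nn

# Same hidden sizes with and without regularization (sizes are illustrative).
plain = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

regularized = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly zeroes activations during training
    nn.Linear(64, 10),
)
# L2 weight decay would be added via the optimizer, e.g.
# torch.optim.Adam(regularized.parameters(), weight_decay=1e-4)
```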
14. For an image recognition problem (such as recognizing a cat in a photo), which neural network architecture has been found to be better suited for the task?
Multi-layer perceptron
Recurrent neural network
Convolutional neural network
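Convolutional neural networks suit image recognition because their convolution layers share weights across spatial positions and pooling makes the learned features tolerant to small shifts. A minimal sketch of such an architecture in PyTorch; the layer sizes, input resolution, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyCatClassifier(nn.Module):
    """Illustrative convolutional network for small RGB images (e.g. 32x32)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # weights shared over the image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCatClassifier()
print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 2])
```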
15. In training a neural network in batch mode, after running the first few epochs, you notice that the loss does not decrease. The reasons for this could be:
1. The learning rate is low.
2. The neural net is stuck in a local minimum.
3. The neural net has too many units in the hidden layer.
1 or 2
1 or 3
2 or 3
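Of the listed reasons, a learning rate that is too low or a net stuck in a local minimum can both leave the loss flat, while the number of hidden units by itself does not stop the loss from decreasing. A toy Python sketch of the learning-rate effect; the loss function and constants are illustrative.

```python
def train(lr, steps=50):
    """Minimize a simple 1-D quadratic loss (w - 3)^2 with gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)
        w -= lr * grad
    return (w - 3.0) ** 2  # final loss

print("lr = 0.1    ->", train(0.1))    # loss drops to ~0
print("lr = 0.0001 ->", train(1e-4))   # loss barely moves: looks like it is "not decreasing"
```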