# Deep Learning | NPTEL | Week 4 answers

This set of MCQs (multiple choice questions) focuses on the Deep Learning NPTEL Week 4 answers.


NOTE: You can check your answer immediately by clicking the Show Answer button. This set of “Deep Learning NPTEL Week 4 answers” contains 10 questions.

Now, start attempting the quiz.

### Deep Learning NPTEL 2023 Week 4 Quiz Solutions

Q1. Which of the following cannot be realized with a single-layer perceptron (only input and output layer)?

a) AND
b) OR
c) NAND
d) XOR
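A quick brute-force check illustrates why XOR stands out here: a single-layer perceptron computes a linear threshold on its inputs, so it can only realize linearly separable Boolean functions. The sketch below (an illustration, assuming a coarse step-0.5 weight grid in [−2, 2] is enough for these gates) searches for a separating line:

```python
from itertools import product

# A single-layer perceptron computes sign(w1*x1 + w2*x2 + b), so it can only
# realize linearly separable Boolean functions.
def separable(table):
    # Brute-force search over a coarse grid of weights and biases
    # (assumption: a step-0.5 grid in [-2, 2] suffices for these gates).
    grid = [i / 2 for i in range(-4, 5)]
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in table.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(separable(AND))  # True  -- a separating line exists
print(separable(XOR))  # False -- XOR is not linearly separable
```

AND, OR, and NAND all pass this check; XOR never does, no matter how fine the grid, which is why it needs a hidden layer.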

Q2. For a function f(θ0, θ1), if θ0 and θ1 are initialized at a local minimum, what will be the values of θ0 and θ1 after a single iteration of gradient descent?

a) θ0 and θ1 will update as per gradient descent rule
b) θ0 and θ1 will remain same
c) Depends on the values of θ0 and θ1
d) Depends on the learning rate

Q3. Choose the correct option:
i) Inability of a model to obtain sufficiently low training error is termed as overfitting
ii) Inability of a model to reduce the large margin between training and testing error is termed as overfitting
iii) Inability of a model to obtain sufficiently low training error is termed as underfitting
iv) Inability of a model to reduce the large margin between training and testing error is termed as underfitting

a) Only option (i) is correct
b) Both options (ii) and (iii) are correct
c) Both options (ii) and (iv) are correct
d) Only option (iv) is correct

Deep Learning NPTEL week 4 Assignment Solutions

Q4. Suppose for a cost function J(θ) = 0.25θ² as shown in the graph below (θ is plotted along the horizontal axis), refer to the graph and choose the correct option regarding the statements given below.

a) Only Statement i is true
b) Only Statement ii is true
c) Both statement i and ii are true
d) None of them are true

Q5. Choose the correct option. The gradient of a continuous and differentiable function:
i) is zero at a minimum
ii) is non-zero at a maximum
iii) is zero at a saddle point
iv) magnitude decreases as you get closer to the minimum

a) Only option (i) is correct
b) Options (i), (iii) and (iv) are correct
c) Options (i) and (iv) are correct
d) Only option (iii) is correct

Q6. The input to the SoftMax activation function is [3, 1, 2]. What will be the output?

a) [0.58, 0.11, 0.31]
b) [0.43, 0.24, 0.33]
c) [0.60, 0.10, 0.30]
d) [0.67, 0.09, 0.24]
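Q6 can be checked numerically with a minimal softmax sketch in Python (subtracting the maximum before exponentiating is a standard stability trick and does not change the result):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability; the result is unchanged
    # because the common factor cancels in the ratio.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print([round(p, 2) for p in softmax([3, 1, 2])])  # [0.67, 0.09, 0.24]
```

The rounded output matches option (d), and the unrounded values sum to 1, as softmax outputs must.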


Q7. If the SoftMax of xi is denoted as σ(xi), where xi is the ith element of the n-dimensional vector x, i.e., x = [x1, …, xi, …, xn], then the derivative of σ(xi) w.r.t. xi, i.e., ∂σ(xi)/∂xi, is given by,
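For reference, applying the quotient rule to σ(xi) = e^{xi} / Σk e^{xk} gives the standard result:

```latex
\frac{\partial \sigma(x_i)}{\partial x_i}
  = \frac{e^{x_i}\sum_{k} e^{x_k} - e^{x_i}\,e^{x_i}}{\bigl(\sum_{k} e^{x_k}\bigr)^{2}}
  = \sigma(x_i)\bigl(1 - \sigma(x_i)\bigr)
```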

Q8. Which of the following options is true?

a) In Stochastic Gradient Descent, a small batch of samples is selected randomly instead of the whole data set for each iteration. Frequent updates of the weight values lead to faster convergence.
b) In Stochastic Gradient Descent, the whole data set is processed together for update in each iteration.
d) Stochastic Gradient Descent is a non-iterative process

Q9. What are the steps for using a gradient descent algorithm?
1. Calculate the error between the actual value and the predicted value
2. Re-iterate until you find the best weights of the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weights and biases
5. Go to each neuron that contributes to the error and change its respective values to reduce the error

a) 1, 2, 3, 4, 5
b) 5, 4, 3, 2, 1
c) 3, 2, 1, 5, 4
d) 4, 3, 1, 5, 2
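The ordering in Q9 (4 → 3 → 1 → 5 → 2) can be sketched as a minimal single-neuron training loop; the toy data fitting y = 2x and the learning rate are assumptions for illustration only:

```python
import random

# Minimal single-neuron sketch of the Q9 ordering: (4) initialize,
# (3) forward pass, (1) error, (5) update contributing parameters, (2) repeat.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # toy data for y = 2x (assumed)
w, b = random.random(), random.random()       # step 4: random weight and bias
lr = 0.1
for _ in range(500):                          # step 2: re-iterate
    for x, y in data:
        pred = w * x + b                      # step 3: forward pass
        err = pred - y                        # step 1: error
        w -= lr * err * x                     # step 5: adjust each parameter
        b -= lr * err                         #         by its error gradient
print(round(w, 2), round(b, 2))
```

After training, w approaches 2 and b approaches 0, recovering the underlying line.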


Q10. J(θ) = 2θ² − 2θ + 2 is a given cost function. Find the correct weight update rule for gradient descent optimization at step t+1. Consider α = 0.01 to be the learning rate.

a) θt+1 = θt – 0.01(2θ – 1)
b) θt+1 = θt + 0.01(2θ – 1)
c) θt+1 = θt – (2θ – 1)
d) θt+1 = θt – 0.02(2θ – 1)
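A quick numeric check for Q10: since dJ/dθ = 4θ − 2, the update θ − α(4θ − 2) with α = 0.01 simplifies to θ − 0.02(2θ − 1):

```python
# Check the update rule for J(θ) = 2θ² − 2θ + 2 with learning rate α = 0.01.
alpha = 0.01

def step(theta):
    grad = 4 * theta - 2          # derivative of 2θ² − 2θ + 2
    return theta - alpha * grad

theta = 5.0
print(step(theta))                        # gradient-descent update
print(theta - 0.02 * (2 * theta - 1))     # option (d), same value
```

Both expressions give the same next value (4.82 for θ = 5), confirming option (d).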

### Deep Learning NPTEL 2023 Week 4 answers

Q1.

Q2. Which of the following activation functions leads to sparse activation maps?

a) Sigmoid
b) Tanh
c) Linear
d) ReLU

Q3.

Q4. Which logic function cannot be performed using a single-layered Neural Network?

a) AND
b) OR
c) XOR
d) All

Q5. Which of the following options closely relates to the following graph? Green crosses are samples of Class-A, mustard rings are samples of Class-B, and the red line is the separating line between the two classes.

a) High Bias
b) Zero Bias
c) Zero Bias and High Variance
d) Zero Bias and Zero Variance

Deep Learning NPTEL Week 4 Answers

Q6. Which of the following statements is true?

a) L2 regularization leads to sparse activation maps
b) L1 regularization leads to sparse activation maps
c) Some of the weights are squashed to zero in L2 regularization
d) L2 regularization is also known as Lasso

Q7. Which among the following options gives the range of the tanh function?

a) -1 to 1
b) -1 to 0
c) 0 to 1
d) 0 to infinity


Q8.

Q9. When is the gradient descent algorithm certain to find the global minimum?

a) For convex cost plot
b) For concave cost plot
c) For union of 2 convex cost plot
d) For union of 2 concave cost plot

Q10. Let X = [-1, 0, 3, 5] be the input to the ith layer of a neural network. On this, we want to apply the softmax function. What should be its output?

a) [0.368, 1, 20.09, 148.41]
b) [0.002, 0.006, 0.118, 0.874]
c) [0.3, 0.05, 0.6, 0.05]
d) [0.04, 0, 0.06, 0.9]
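A small sketch separates the two tempting options here: option (a) lists the raw exponentials, while softmax divides them by their sum so the outputs form a probability distribution:

```python
import math

xs = [-1, 0, 3, 5]
exps = [math.exp(x) for x in xs]        # raw exponentials, as in option (a)
probs = [e / sum(exps) for e in exps]   # softmax normalizes them to sum to 1
print([round(e, 2) for e in exps])      # [0.37, 1.0, 20.09, 148.41]
print([round(p, 3) for p in probs])     # [0.002, 0.006, 0.118, 0.874]
```

The normalized values match option (b); option (a) is the unnormalized intermediate step.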

<< Prev: Deep Learning NPTEL Week 3 Answers

>> Next: Deep Learning NPTEL Week 5 Answers

For discussion about any question, join the comment section below to get your query resolved. Also, feel free to share your thoughts about the topics covered in this quiz.

### 1 thought on “Deep Learning | NPTEL | Week 4 answers”

1. Are Q. no. 5 & 6 correct?