Deep Learning | NPTEL 2023 | Week 3 answers

This set of MCQs (multiple choice questions) covers the Deep Learning NPTEL Week 3 answers.

Course layout


Week 1: Assignment Answers
Week 2: Assignment Answers
Week 3: Assignment Answers
Week 4: Assignment Answers
Week 5: Assignment Answers
Week 6: Assignment Answers
Week 7: Assignment Answers
Week 8: Assignment Answers
Week 9: Assignment Answers
Week 10: Assignment Answers
Week 11: Assignment Answers
Week 12: Assignment Answers

NOTE: You can check your answer immediately by clicking the Show Answer button. This set of "Deep Learning NPTEL Week 3 answers" contains 10 questions.

Now, start attempting the quiz.

Deep Learning NPTEL 2023 Week 3 Quiz Solutions

Q1. Which of the following statements about backpropagation is true?

a) It is used to optimize the weights in a neural network
b) It is used to compute the output of a neural network
c) It is used to initialize the weights in a neural network
d) It is used to regularize the weights in a neural network

Answer: a)

Q2. Let y be the true class label and p be the predicted probability of the true class label in a binary classification problem. Which of the following is the correct formula for binary cross entropy?

a) ylog(p) + (1 – y)log(1 – p)
b) -(ylog(p) + (1 – y)log(1 – p))
c) p
d) ylog(p)

Answer: b)
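As a sanity check, the chosen formula can be computed directly; a minimal Python sketch (the small eps clamp is our implementation detail, not part of the question):

```python
import math

def binary_cross_entropy(y, p, eps=1e-12):
    """BCE = -(y*log(p) + (1-y)*log(1-p)); eps guards against log(0)."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction gives a small loss,
# a confident wrong prediction a large one.
print(binary_cross_entropy(1, 0.9))  # ~0.105
print(binary_cross_entropy(1, 0.1))  # ~2.303
```

Note the leading minus sign: without it, a confident correct prediction would produce a negative "loss", which is why option a) is wrong.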

Q3. Let yi be the true class label of the i-th instance and pi be the predicted probability of the true class in a multi-class classification problem. Write down the formula for the multi-class cross-entropy loss.

Answer: a)
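The answer options for this question are not reproduced here, but the standard multi-class cross-entropy over N instances averages -log(pi), the negative log of the probability assigned to each instance's true class; a minimal sketch of that formula:

```python
import math

def multiclass_cross_entropy(p_true):
    """Mean of -log(p_i), where p_i is the predicted probability
    assigned to the true class of the i-th instance."""
    return -sum(math.log(p) for p in p_true) / len(p_true)

print(multiclass_cross_entropy([0.7, 0.9, 0.5]))  # ~0.385
```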

Deep Learning NPTEL week 3 Assignment Solutions

Q4. Can cross-entropy loss be negative between two probability distributions?

a) Yes
b) No

Answer: b)

Q5. Let p and q be two probability distributions. Under what conditions will the cross entropy between p and q be minimized?

a) p=q
b) All the values in p are lower than corresponding values in q
c) All the values in p are higher than the corresponding values in q
d) p = 0 [0 is a vector]

Answer: a)

Q6. Which of the following is false about cross-entropy loss between two probability distributions?

a) It is always in the range (0, 1)
b) It can be negative
c) It is always positive
d) It can be 1

Answer: a), b), c)


Q7. The probability of all the events x1, x2, …, xn in a system is equal (n > 1). What can you say about the entropy H(X) of that system? (base of log is 2)

a) H(X) <= 1
b) H(X) = 1
c) H(X) >= 1
d) We can’t say anything conclusive with the provided information

Answer: c)
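For n equally likely events, H(X) = log2(n), which equals 1 only in the special case n = 2 and exceeds 1 for every larger n; a quick numeric check:

```python
import math

def entropy_uniform(n):
    """Entropy (base 2) of a uniform distribution over n events: log2(n)."""
    p = 1.0 / n
    return -sum(p * math.log2(p) for _ in range(n))

for n in (2, 4, 8):
    print(n, entropy_uniform(n))  # 1.0, 2.0, 3.0
```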

Q8. Suppose we have a problem where data x and label y are related by y = x^4 + 1. Which of the following is not a good choice for the activation function in the hidden layer if the activation function at the output layer is linear?

a) Linear
b) ReLU
c) Sigmoid
d) tan^-1(x)

Answer: a)

Q9. We are given that the probability of Event A happening is 0.95 and the probability of Event B happening is 0.05. Which of the following statements is True?

a) Event A has a high information content
b) Event B has a low information content
c) Event A has a low information content
d) Event B has a high information content

Answer: c), d)
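The reasoning here is self-information, I(x) = -log2 p(x): likely events carry little information, rare events a lot. A short sketch:

```python
import math

def information_content(p):
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)

print(information_content(0.95))  # ~0.074 bits: likely event, low information
print(information_content(0.05))  # ~4.32 bits: rare event, high information
```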


Q10. Which of the following activation functions can only give positive outputs greater than 0?

a) Sigmoid
b) ReLU
c) Tanh
d) Linear

Answer: a)
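A quick check of the output ranges: sigmoid is strictly positive for every input, while ReLU outputs exactly 0 (not greater than 0) for any non-positive input, and tanh and linear can be negative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

# Sigmoid stays strictly inside (0, 1); ReLU hits exactly 0.
print(sigmoid(-100) > 0)  # True
print(relu(-3))           # 0.0
```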

Deep Learning NPTEL 2023 Week 3 Quiz Solutions

Q1. A data point with 5 dimensions [27, 40, -15, 30, 38] obtains a score [18, 20, -5, -15, 19]. Find the hinge loss incurred by the second class (class-2) with a margin (Δ) of 5.

a) 37
b) 7
c) 3
d) 120

Answer: b)
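Assuming class-1 (score 18) is the true class, which the question as printed does not state explicitly, the class-2 contribution to the multi-class hinge loss is max(0, s_2 - s_true + Δ) = max(0, 20 - 18 + 5) = 7; a sketch:

```python
def hinge_term(scores, true_idx, j, delta):
    """Contribution of class j to the multi-class (Crammer-Singer style)
    hinge loss: max(0, s_j - s_true + delta)."""
    return max(0.0, scores[j] - scores[true_idx] + delta)

scores = [18, 20, -5, -15, 19]
# Class-2 term, assuming class-1 (index 0) is the true class:
print(hinge_term(scores, 0, 1, 5))  # max(0, 20 - 18 + 5) = 7
```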

Q2. What is the shape of the loss landscape during optimization of SVM?

a) Linear
b) Paraboloid
c) Ellipsoidal
d) Non-convex with multiple possible local minima

Answer: b)

Q3. How many local minima can be encountered while solving the optimization for maximizing the margin for SVM?

a) 1
b) 2
c) infinite
d) 0

Answer: a)


Q4. Which of the following classifiers can be replaced by a linear SVM?

a) Logistic Regression
b) Neural Networks
c) Decision Trees
d) None of the above

Answer: a)

Q5. Consider a 2-class [y = {-1, 1}] classification problem with 2-dimensional feature vectors. The support vectors with their class labels and Lagrange multipliers are provided. Find the value of the SVM weight vector W.

a) (-1, 3)
b) (2, 0)
c) (-2, 4)
d) (-2, 2)

Answer: b)

Q6. For a 2-class problem, what is the minimum possible number of support vectors? Assume there are more than 4 examples from each class.

a) 4
b) 1
c) 2
d) 8

Answer: c)


Q7. A Support Vector Machine is defined by W^T X + b = 0, with support vectors xi, corresponding Lagrange multipliers ai, and class labels yi. Which of the following is true?

Answer: d)

Q8. Suppose we have one feature x ∈ R and a binary class y. The dataset consists of 3 points: p1: (x1, y1) = (-1, -1), p2: (x2, y2) = (1, 1), p3: (x3, y3) = (3, 1). Which of the following is true with respect to SVM?

a) Maximum margin will increase if we remove the point p2 from the training set.
b) Maximum margin will increase if we remove the point p3 from the training set.
c) Maximum margin will remain same if we remove the point p2 from the training set.
d) None of the above

Answer: a)

Q9. If we employ an SVM to realize two-input logic gates, then which of the following will be true?

a) The weight vector for the AND gate and the OR gate will be the same
b) The margin of the AND gate and the OR gate will be the same
c) Both the margin and the weight vector will be the same for the AND gate and the OR gate
d) Neither the weight vector nor the margin will be the same for the AND gate and the OR gate

Answer: b)


Q10. The values of the Lagrange multipliers corresponding to the support vectors can be:

a) Less than zero
b) Greater than zero
c) Any real number
d) Any non zero number

Answer: b)

Deep Learning NPTEL 2022 Week 3 answers

Q1. Find the distance of the 3D point P = (-3, 1, 3) from the plane defined by 2x + 2y + 5z + 9 = 0.

a) 3.1
b) 4.6
c) 0
d) ∞ (infinity)

Answer: b) 4.6
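The formula behind this question is distance = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2) for the plane ax + by + cz + d = 0. A sketch with illustrative numbers of our own (not the quiz's values):

```python
import math

def point_plane_distance(point, plane):
    """Distance from (x0, y0, z0) to the plane ax + by + cz + d = 0."""
    a, b, c, d = plane
    x0, y0, z0 = point
    return abs(a * x0 + b * y0 + c * z0 + d) / math.sqrt(a * a + b * b + c * c)

# Point (1, 2, 3), plane x + 2y + 2z + 3 = 0: |1 + 4 + 6 + 3| / 3 = 14/3
print(point_plane_distance((1, 2, 3), (1, 2, 2, 3)))  # ~4.667
```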

Q2. What is the shape of the loss landscape during optimization of SVM?

a) Linear
b) Paraboloid
c) Ellipsoidal
d) Non-convex with multiple possible local minima

Answer: b) Paraboloid

Q3. How many local minima can be encountered while solving the optimization for maximizing the margin for SVM?

a) 1
b) 2
c) ∞ (infinite)
d) 0

Answer: a) 1

Q4. Which of the following classifiers can be replaced by a linear SVM?

a) Logistic Regression
b) Neural Networks
c) Decision Trees
d) None of the above

Answer: a) Logistic Regression

Q5. Find the scalar projection of vector b = <-2, 3> onto vector a = <1, 2>?

a) 0
b) 4/√5
c) 2/√17
d) -2/17

Answer: b)
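The scalar projection of b onto a is (a · b) / |a|, which here gives (-2 + 6) / √5 = 4/√5; a quick check:

```python
import math

def scalar_projection(b, a):
    """Scalar projection of vector b onto vector a: (a . b) / |a|."""
    dot = sum(bi * ai for bi, ai in zip(b, a))
    return dot / math.sqrt(sum(ai * ai for ai in a))

print(scalar_projection((-2, 3), (1, 2)))  # 4/sqrt(5) ~ 1.789
```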


Q6. For a 2-class problem, what is the minimum possible number of support vectors? Assume there are more than 4 examples from each class.

a) 4
b) 1
c) 2
d) 8

Answer: c) 2

Q7. Which one of the following is a valid representation of the hinge loss (margin = 1) for a two-class problem?
y = class label (+1 or -1),
p = predicted (raw, not normalized to denote any probability) value for a class.

a) L(y, p) = max(0, 1-yp)
b) L(y, p) = min(0, 1-yp)
c) L(y, p) = max(0, 1+yp)
d) None of the above

Answer: a) L(y, p) = max(0, 1-yp)
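The chosen form max(0, 1 - yp) can be exercised directly: it is zero when the prediction is on the correct side and outside the margin, and grows linearly otherwise. A minimal sketch:

```python
def hinge_loss(y, p, margin=1.0):
    """Two-class hinge loss with labels y in {-1, +1} and raw score p."""
    return max(0.0, margin - y * p)

print(hinge_loss(1, 2.0))   # correct side, outside margin -> 0.0
print(hinge_loss(1, 0.5))   # correct side, inside margin -> 0.5
print(hinge_loss(-1, 0.5))  # wrong side -> 1.5
```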


Q8. Suppose we have one feature x ∈ R and a binary class y. The dataset consists of 3 points: p1: (x1, y1) = (-1, -1), p2: (x2, y2) = (1, 1), p3: (x3, y3) = (3, 1). Which of the following is true with respect to SVM?

a) Maximum margin will increase if we remove the point p2 from the training set.
b) Maximum margin will increase if we remove the point p3 from the training set.
c) Maximum margin will remain same if we remove the point p2 from the training set.
d) None of the above

Answer: a)
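For this 1-D dataset the margin can be computed by hand: the boundary sits midway between the closest negative and positive points. With p2 present the closest pair is -1 and 1 (margin 1); removing p2 leaves -1 and 3 (margin 2). A sketch (the helper max_margin_1d is ours, not from the course):

```python
def max_margin_1d(neg, pos):
    """For separable 1-D data with all negatives left of all positives,
    the max-margin boundary sits midway between the closest negative and
    positive points; the margin is half that gap."""
    n, p = max(neg), min(pos)
    return (p - n) / 2.0

print(max_margin_1d([-1], [1, 3]))  # 1.0 with p2 present
print(max_margin_1d([-1], [3]))     # 2.0 after removing p2
```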

Q9. If we employ an SVM to realize two-input logic gates, then which of the following will be true?

a) The weight vector for the AND gate and the OR gate will be the same
b) The margin for the AND gate and the OR gate will be the same
c) Both the margin and the weight vector will be the same for the AND gate and the OR gate
d) Neither the weight vector nor the margin will be the same for the AND gate and the OR gate

Answer: b)

Q10. What will happen to the margin length of a max-margin linear SVM if one of the non-support-vector training examples is removed?

a) Margin will be scaled down by the magnitude of that vector
b) Margin will be scaled up by the magnitude of that vector
c) Margin will be unaltered
d) Cannot be determined from the information provided

Answer: c)


<< Prev: Deep Learning NPTEL Week 2 Answers

>> Next: Deep Learning NPTEL Week 4 Answers


The above question set contains all the correct answers. If you find any typographical, grammatical, or other error on our site, kindly inform us. Don’t forget to provide the appropriate URL along with the error description, so that we can correct it easily.

Thanks in advance.

For discussion about any question, use the comment section below to get your query resolved. Also, feel free to share your thoughts about the topics covered in this quiz.
