This set of MCQs (multiple choice questions) focuses on the **Introduction to Machine Learning NPTEL Week 4 Solutions** for **NPTEL 2023**.

With the increased availability of data from varied sources, there has been increasing attention paid to data-driven disciplines such as analytics and machine learning. In this course, we intend to introduce some of the basic concepts of machine learning from a mathematically well-motivated perspective. We will cover the different learning paradigms and some of the more popular algorithms and architectures used in each of these paradigms.

### Course layout

*Answers COMING SOON! Kindly Wait!*

Week 1: **Assignment answers**

Week 2: **Assignment answers**

Week 3: **Assignment answers**

Week 4: **Assignment answers**

Week 5: **Assignment answers**

Week 6: **Assignment answers**

Week 7: **Assignment answers**

Week 8: **Assignment answers**

Week 9: **Assignment answers**

Week 10: **Assignment answers**

Week 11: **Assignment answers**

Week 12: **Assignment answers**

**NOTE:** You can check your answer immediately by clicking the show answer button. The "**Introduction to Machine Learning NPTEL Week 4 Solutions** Assignment Solution" contains 7 questions.

Now, start attempting the quiz.

**Introduction to Machine Learning** NPTEL 2023 Week 4 Solutions


**Q1.** Consider a Boolean function in three variables that returns True if two or more of the three variables are True, and False otherwise. Can this function be implemented using the perceptron algorithm?

a. no

b. yes

Answer: b
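The function in Q1 is the 3-input majority function, which is linearly separable, so a single perceptron can realize it. A minimal sketch (with weights and threshold chosen by hand rather than learned):

```python
from itertools import product

# Majority-of-three is linearly separable: weights (1, 1, 1), threshold 1.5.
# A perceptron fires when the weighted input sum exceeds the threshold.
def majority_perceptron(x1, x2, x3):
    return x1 + x2 + x3 > 1.5

# Verify against the full truth table: True iff two or more inputs are True.
for bits in product([0, 1], repeat=3):
    assert majority_perceptron(*bits) == (sum(bits) >= 2)
print("majority function realized by a perceptron")
```

The perceptron training algorithm is guaranteed to converge to some such separating weights, since a separating hyperplane exists.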

**Q2.** For a support vector machine model, let x_i be an input instance with label y_i. If y_i(β̂_0 + x_i^T β̂) > 1, where β̂_0 and β̂ are the estimated parameters of the model, then

a. x_i is not a support vector

b. x_i is a support vector

c. x_i is either an outlier or a support vector

d. Depending upon other data points, x_i may or may not be a support vector.

Answer: a
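To see why a margin strictly greater than 1 rules out being a support vector, here is a small sketch on a hand-made separable dataset (hypothetical, not the course data): support vectors satisfy y_i·f(x_i) ≈ 1 (or less, for margin violators), while points with y_i·f(x_i) > 1 lie strictly outside the margin and do not appear among sklearn's support vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data (hypothetical, not the course dataset).
X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
margins = y * clf.decision_function(X)        # y_i (beta_0 + x_i^T beta)

# Any point whose margin exceeds 1 is not among the support vectors.
for i, m in enumerate(margins):
    if m > 1 + 1e-6:
        assert i not in clf.support_
print(sorted(clf.support_.tolist()))          # indices of the support vectors
```

Here the inner points (1, 1) and (3, 3) sit on the margin and are the support vectors; the outer points have margins greater than 1 and are not.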

**Q3.** Suppose we use a linear-kernel SVM to build a classifier for a 2-class problem where the training data points are linearly separable. In general, will the classifier trained in this manner always be the same as the classifier trained using the perceptron training algorithm on the same training data?

a. yes

b. no

Answer: b

**Q4.** Train a linear regression model (without regularization) on the above dataset. Report the coefficients of the best-fit model in the following format: β_0, β_1, β_2, β_3. (You can round off the values to 2 decimal places.)

a. -1.2, 2.1, 2.2, 1

b. 1, 1.2, 2.1, 2.2

c. -1, 1.2, 2.1, 2.2

d. 1, -1.2, 2.1, 2.2

e. 1, 1.2, -2.1, -2.2

Answer: d

**Q5.** Train an ℓ2-regularized linear regression model on the above dataset. Vary the regularization parameter from 1 to 10. As you increase the regularization parameter, the absolute values of the coefficients (excluding the intercept) of the model:

a. increase

b. first increase then decrease

c. decrease

d. first decrease then increase

Answer: c
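The shrinkage effect in Q5 can be checked empirically. A sketch on synthetic data (the actual course dataset is not reproduced here, so the coefficients below are made up): as the ridge penalty alpha grows from 1 to 10, the summed absolute values of the coefficients shrink.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in for the course dataset (hypothetical coefficients).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 1.0 + X @ np.array([1.2, 2.1, 2.2]) + rng.normal(scale=0.1, size=100)

norms = []
for alpha in range(1, 11):                      # regularization parameter 1..10
    model = Ridge(alpha=alpha).fit(X, y)
    norms.append(np.abs(model.coef_).sum())     # intercept excluded

# Stronger regularization -> smaller coefficients (option c).
assert all(a > b for a, b in zip(norms, norms[1:]))
print(round(norms[0], 3), round(norms[-1], 3))
```

The ℓ2 penalty biases the solution toward zero, so larger alpha monotonically shrinks the coefficient magnitudes (the intercept is not penalized).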

**Q6.** Train an ℓ2-regularized logistic regression classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features for your model. We encourage you to explore the impact of varying different hyperparameters of the model. Kindly note that the C parameter mentioned below is the inverse of the regularization parameter λ. As part of the assignment, train a model with the following hyperparameters:

Model: logistic regression with one-vs-rest classifier, C = 1e4

For the above set of hyperparameters, report the best classification accuracy.

a. 0.88

b. 0.86

c. 0.98

d. 0.68

Answer: b
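A hedged sketch of the Q6 setup using sklearn on the standard iris dataset. The quiz uses a *modified* iris dataset and a specific train/test split that are not given here, so the accuracy this prints may not match the listed options; the 70/30 split below is an assumption.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
X = X[:, :2]                                   # first two features only

# Assumed 70/30 split; the course's exact split is not specified here.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# C = 1e4 is the inverse of the regularization parameter lambda,
# so this is a very weakly regularized model.
clf = OneVsRestClassifier(LogisticRegression(C=1e4, max_iter=1000))
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

With only two of the four iris features, the classes overlap, which is why the accuracy stays well below what the full feature set would give.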

**Q7.** Train an SVM classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features for your model. We encourage you to explore the impact of varying different hyperparameters of the model; specifically, try different kernels and the associated hyperparameters. As part of the assignment, train models with the following set of hyperparameters:

RBF kernel, gamma = 0.5, one-vs-rest classifier, no feature normalization. Try C = 0.01, 1, 10. For the above set of hyperparameters, report the best classification accuracy along with the total number of support vectors on the test data.

a. 0.92, 69

b. 0.88, 40

c. 0.88, 69

d. 0.98, 41

Answer: c
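Similarly for Q7, a sketch of the SVM sweep on the standard iris dataset (again, the course's modified dataset and split will give different numbers, so the values printed here are not the quiz answer). Note that sklearn's SVC is one-vs-one internally; `decision_function_shape="ovr"` only reshapes the decision values.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X = X[:, :2]                     # first two features, no feature normalization

# Assumed 70/30 split; the course's exact split is not specified here.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

best_acc, best_n_sv = -1.0, 0
for C in (0.01, 1, 10):
    clf = SVC(kernel="rbf", gamma=0.5, C=C,
              decision_function_shape="ovr").fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    if acc > best_acc:
        best_acc, best_n_sv = acc, clf.support_vectors_.shape[0]

print(best_acc, best_n_sv)       # best accuracy and its support-vector count
```

Small C regularizes heavily (many support vectors, poor fit), while larger C tightens the fit; sweeping C and comparing accuracies mirrors what the assignment asks for.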

**<< Previous- Introduction to Machine Learning Week 3 Assignment Solutions**

**>> Next- Introduction to Machine Learning Week 5 Assignment Solutions**

DISCLAIMER: Use these answers for reference only. Quizermania doesn't claim these answers to be 100% correct. So, make sure you submit your assignments on the basis of your own knowledge.

*For discussion about any question, join the comment section below to get your query resolved.* Also, feel free to share your thoughts about the topics covered in this quiz.

Checkout for more NPTEL Courses: *Click Here!*