This set of MCQs (multiple choice questions) focuses on the **Introduction to Machine Learning Week 7 Solutions**.

With the increased availability of data from varied sources there has been increasing attention paid to the various data driven disciplines such as analytics and machine learning. In this course we intend to introduce some of the basic concepts of machine learning from a mathematically well motivated perspective. We will cover the different learning paradigms and some of the more popular algorithms and architectures used in each of these paradigms.

### Course layout


Week 1: **Assignment answers**

Week 2: **Assignment answers**

Week 3: **Assignment answers**

Week 4: **Assignment answers**

Week 5: **Assignment answers**

Week 6: **Assignment answers**

Week 7: **Assignment answers**

Week 8: **Assignment answers**

Week 9: **Assignment answers**

Week 10: **Assignment answers**

Week 11: **Assignment answers**

Week 12: **Assignment answers**

**NOTE:** You can check your answer immediately by clicking the Show Answer button. "**Introduction to Machine Learning Week 7 Solutions**" contains 15 questions.

Now, start attempting the quiz.

**Introduction to Machine Learning** Week 7 Solutions


For Q1-2 with the given data:

**Q1.** Find the most specific concept using Find-S algorithm.

a) <Red, Round, Big, ?>

b) <Red, 0, Big, 0>

c) <Red, Round, ?, Soft>

d) <Red, ?, Big, ?>

Answer: a)
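Find-S keeps the most specific hypothesis consistent with the positive examples, generalizing an attribute to `?` only when positives disagree on it. A minimal sketch, using hypothetical training data (the original table from Q1 is not reproduced here; the attribute values below are illustrative assumptions chosen to be consistent with answer a):

```python
def find_s(examples):
    """Return the most specific hypothesis consistent with the positive examples.
    Convention: '0' = matches nothing (most specific), '?' = matches anything."""
    positives = [x for x, label in examples if label]
    if not positives:
        return None
    h = list(positives[0])            # initialize with the first positive example
    for x in positives[1:]:
        for i, value in enumerate(x):
            if h[i] != value:         # attributes disagree -> generalize to '?'
                h[i] = "?"
    return h

# Hypothetical training examples (negatives are ignored by Find-S):
data = [
    (("Red", "Round", "Big", "Soft"), True),
    (("Red", "Round", "Big", "Hard"), True),
    (("Blue", "Flat", "Small", "Soft"), False),
]
print(find_s(data))  # ['Red', 'Round', 'Big', '?']
```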

**Q2.** Find the number of instances possible in X using the values that can be seen in the table in Q1.

a) 12

b) 48

c) 36

d) 24

Answer: b) 48
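The size of the instance space X is the product of the number of distinct values each attribute can take. The per-attribute counts below are an assumption (the original table is not reproduced here); any counts whose product is 48 would match answer b):

```python
from math import prod

# Assumed value counts for the four attributes, e.g. Color, Shape, Size, Texture.
values_per_attribute = [3, 2, 2, 4]

# |X| = product of the number of possible values of each attribute.
print(prod(values_per_attribute))  # 48
```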

For Q3-4 with the given data:

Suppose the instance space X is the set of real numbers R, and let H be the set of intervals on the real number line. Each hypothesis in H is of the form a < x < b, where a and b are real constants.

**Q3.** Find VC(H). [VC stands for Vapnik-Chervonenkis dimension]

a) 2

b) 3

c) 5

d) 4

Answer: a) 2

**Q4.** Can VC dimension of H be 3?

a) Yes

b) No

Answer: b) No. For any three points x1 < x2 < x3, the labelling (+, -, +) would require an interval containing x1 and x3 but not x2, which is impossible, so no set of three points can be shattered.
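The two claims (VC(H) = 2, and VC(H) cannot be 3) can be checked by brute force: enumerate every labelling of a point set and test whether some interval realizes it. A small sketch, assuming it suffices to try endpoints just outside the points and midway between neighbours:

```python
from itertools import product

def shatterable(points):
    """Can hypotheses of the form a < x < b realize every labelling of the points?"""
    xs = sorted(points)
    # Candidate endpoints: outside the point set and midway between neighbours.
    candidates = ([xs[0] - 2, xs[0] - 1]
                  + [(p + q) / 2 for p, q in zip(xs, xs[1:])]
                  + [xs[-1] + 1, xs[-1] + 2])
    for labels in product([True, False], repeat=len(points)):
        realizable = any(
            all((a < x < b) == want for x, want in zip(points, labels))
            for a in candidates for b in candidates if a < b
        )
        if not realizable:
            return False
    return True

print(shatterable([1.0, 2.0]))       # True:  two points can be shattered, so VC(H) >= 2
print(shatterable([1.0, 2.0, 3.0]))  # False: the labelling (+, -, +) fails
```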

For Q5-6 with the given data:

Suppose you have trained three classifiers, each of which returns either 1 or -1, and tested their accuracies to find the following:

| Classifier | Accuracy |
| --- | --- |
| c1 | 0.6 |
| c2 | 0.55 |
| c3 | 0.45 |

**Q5.** Let C be the classifier that returns a majority vote of the three classifiers. Assuming the errors of the ci are independent, what is the probability that C(x) will be correct on a new test example x?

a) 0.1815

b) 0.1215

c) 0.5505

d) 0.099

Answer: c) 0.5505
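The majority vote is correct whenever at least two of the three classifiers are correct. Under the stated independence assumption, summing the probabilities of those outcome patterns gives the answer:

```python
from itertools import product

acc = [0.60, 0.55, 0.45]  # accuracies of c1, c2, c3

# P(majority correct) = sum over patterns where >= 2 classifiers are correct,
# assuming their errors are independent.
p_correct = 0.0
for pattern in product([True, False], repeat=3):
    if sum(pattern) >= 2:
        p = 1.0
        for ok, a in zip(pattern, acc):
            p *= a if ok else (1 - a)
        p_correct += p

# 0.1485 (all three) + 0.1815 + 0.1215 + 0.099 (exactly two) = 0.5505
print(round(p_correct, 4))  # 0.5505
```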

**Q6.** Suppose you have run AdaBoost on a training set for three boosting iterations. The results are classifiers h1, h2, and h3, with coefficients α1 = 0.2, α2 = -0.3, and α3 = -0.2. You find that the classifiers' results on a test example x are h1(x) = 1, h2(x) = 1, and h3(x) = -1. What is the class returned by the AdaBoost ensemble classifier H on test example x?

a) 1

b) -1

Answer: a) 1
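The AdaBoost ensemble returns the sign of the coefficient-weighted sum of the weak learners' predictions, H(x) = sign(Σ αᵢ hᵢ(x)). Plugging in the values from the question:

```python
alphas = [0.2, -0.3, -0.2]   # coefficients from the question
preds = [1, 1, -1]           # h1(x), h2(x), h3(x)

# Weighted vote: 0.2*1 + (-0.3)*1 + (-0.2)*(-1) = 0.1 > 0
score = sum(a * h for a, h in zip(alphas, preds))
H = 1 if score > 0 else -1
print(H)  # 1
```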

**Q7.** Bagging is done to ______________

a) increase bias

b) decrease bias

c) increase variance

d) decrease variance

Answer: d) decrease variance

**Q8.** Weak learners are the ones used as classifiers in boosting algorithms. They are called weak learners because ______________

a) Error rate greater than 0.5

b) Error rate less than 0.5

c) No error

Answer: b) Error rate less than 0.5

**Q9.** Dropout is used as a regularization technique in Neural Networks where many different models are trained on different subsets of the data. In ensemble learning, dropout techniques would be similar to _____________________.

a) Bagging

b) Boosting

c) None of the above

Answer: a) Bagging

**Q10.** Which of the following option is/are correct regarding the benefits of ensemble model?

1. Better performance

2. More generalized model

3. Better interpretability

a) 1 and 3

b) 2 and 3

c) 1 and 2

d) 1, 2 and 3

Answer: c) 1 and 2

**Q11.** Considering the AdaBoost algorithm, which among the following statements is/are true?

a) In each stage, we try to train a classifier which makes accurate predictions on any subset of the data points where the subset size is at least half the size of the data set.

b) In each stage, we try to train a classifier which makes accurate predictions on a subset of the data points where the subset contains more of the data points which were misclassified in earlier stages.

c) The weight assigned to an individual classifier depends upon the number of data points correctly classified by the classifier.

d) The weight assigned to an individual classifier depends upon the weighted sum error of misclassified points for that classifier.

Answer: b), d)

**Q12.** The VC dimension of hypothesis space H1 is larger than the VC dimension of hypothesis space H2. Which of the following can be inferred from this?

a) The number of examples required for learning a hypothesis in H1 is larger than the number of examples required for H2.

b) The number of examples required for learning a hypothesis in H1 is smaller than the number of examples required for H2.

c) No relation to number of samples required for PAC learning.

Answer: a)

**Q13.** For a particular learning task, if the required error parameter ε changes from 0.2 to 0.01, then how many more samples will be required for PAC learning?

a) Same

b) 2 times

c) 20 times

d) 200 times

Answer: c) 20 times
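This follows from the standard sample-complexity bound for a consistent learner over a finite hypothesis space, which scales inversely with the error parameter ε (a sketch; the bound shown is the usual one with confidence parameter δ):

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right),
\qquad
\frac{m_{\text{new}}}{m_{\text{old}}} \approx \frac{\epsilon_{\text{old}}}{\epsilon_{\text{new}}} = \frac{0.2}{0.01} = 20.
```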

**Q14.** In boosting, which data points are assigned higher weights during the training of subsequent models?

a) Data points that are classified correctly by the previous models.

b) Data points that are misclassified by the previous models.

c) Data points that are randomly selected from the training data.

d) Data points that are ignored during training.

Answer: b) Data points that are misclassified by the previous models.

**Q15.** In AdaBoost, how are the individual weak learners combined to form the final strong ensemble model’s prediction?

a) By taking the majority vote for all weak learner’s predictions.

b) By averaging the predictions of all weak learners.

c) By weighting the predictions of weak learners based on their accuracy.

d) By selecting the prediction of the weak learner with the highest accuracy.

Answer: c) By weighting the predictions of weak learners based on their accuracy.

**<< Prev- Introduction to Machine Learning Week 6 Assignment Solutions**

**>> Next- Introduction to Machine Learning Week 8 Assignment Solutions**

DISCLAIMER: Use these answers for reference purposes only. Quizermania doesn't claim these answers to be 100% correct. So, make sure you submit your assignments on the basis of your own knowledge.

*For discussion about any question, join the comment section below to get your query resolved.* Also, feel free to share your thoughts about the topics covered in this quiz.

Check out more NPTEL courses: *Click Here!*