Introduction to Machine Learning Week 8 Solutions NPTEL

This set of MCQs (multiple choice questions) focuses on the Introduction to Machine Learning Week 8 solutions.

With the increased availability of data from varied sources, increasing attention has been paid to data-driven disciplines such as analytics and machine learning. In this course we intend to introduce some of the basic concepts of machine learning from a mathematically well-motivated perspective. We will cover the different learning paradigms and some of the more popular algorithms and architectures used in each of these paradigms.

Course layout


Week 1: Assignment answers
Week 2: Assignment answers
Week 3: Assignment answers
Week 4: Assignment answers
Week 5: Assignment answers
Week 6: Assignment answers
Week 7: Assignment answers
Week 8: Assignment answers
Week 9: Assignment answers
Week 10: Assignment answers
Week 11: Assignment answers
Week 12: Assignment answers

NOTE: "Introduction to Machine Learning Week 8 Solutions" contains 15 questions; the answer is given below each question.

Now, start attempting the quiz.

Introduction to Machine Learning Week 8 Solutions

Q1. Which of the following statements are true about K-Means clustering?
1. K-means is extremely sensitive to cluster-center initialization.
2. Bad initialization can lead to poor convergence speed.
3. Bad initialization can lead to a bad overall clustering.

a) 1 and 2
b) 1 and 3
c) All of the above
d) 2 and 3

Answer: c) All of the above
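A minimal sketch (assuming scikit-learn and a toy blob dataset) that illustrates all three statements: with the multi-restart safeguard disabled (n_init=1), different random initializations of K-means can converge to different local optima with different final inertia values.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Toy dataset with 4 well-separated groups.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# n_init=1 disables scikit-learn's usual multi-restart safeguard,
# exposing the sensitivity to the initial cluster centers.
for seed in range(3):
    km = KMeans(n_clusters=4, init="random", n_init=1, random_state=seed).fit(X)
    print(f"seed={seed}  inertia={km.inertia_:.1f}")

# Different seeds can yield different inertia values, i.e. different
# (sometimes poor) clusterings and different convergence behaviour.
```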


Q2. In which of the following cases will K-Means clustering fail to give good results? (Mark all that apply)

a) Data points with outliers
b) Data points with round shapes
c) Data points with non-convex shapes
d) Data points with different densities

Answer: a), c), d)
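A short sketch (assuming scikit-learn) of the non-convex case from option c): on the classic two-moons dataset, K-means' spherical-cluster assumption splits each moon rather than separating them, which shows up as an adjusted Rand index well below 1.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Two interleaving, non-convex half-moon shapes.
X, y_true = make_moons(n_samples=300, noise=0.05, random_state=0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Typically a low score here, reflecting the poor fit on non-convex shapes.
print("ARI vs. true moons:", round(adjusted_rand_score(y_true, labels), 2))
```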

Q3. Which of the following clustering algorithms suffers from the problem of convergence at local optima? (Mark all that apply)

a) K-Means clustering algorithm
b) Agglomerative clustering algorithm
c) Expectation-Maximization clustering algorithm
d) Diverse clustering algorithm

Answer: a), c)


Q4. In the figure below, if you draw a horizontal line at y = 2, what will be the number of clusters formed?

a) 1
b) 2
c) 3
d) 4

Answer: b) 2
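The dendrogram from the original question is not reproduced here, but the mechanics behind it can be sketched with SciPy on hypothetical 1-D data: cutting the dendrogram with a horizontal line at height t returns the flat clusters whose merges happened below that height.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical points: two tight pairs far apart from each other.
X = np.array([[0.0], [0.5], [4.0], [4.5]])

Z = linkage(X, method="single")                   # build the dendrogram
labels = fcluster(Z, t=2, criterion="distance")   # cut it at y = 2

print(labels)  # two flat clusters, e.g. [1 1 2 2]
```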


Q5. Assume you want to cluster 7 observations into 3 clusters using the K-Means clustering algorithm. After the first iteration, clusters C1, C2, and C3 have the following observations:
C1: {(1, 1), (4, 4), (7, 7)}
C2: {(0, 4), (4, 0)}
C3: {(5, 5), (9, 9)}
What will be the cluster centroids if you want to proceed to the second iteration?

a) C1: (4, 4), C2: (2, 2), C3: (7, 7)
b) C1: (2, 2), C2: (0, 0), C3: (5, 5)
c) C1: (6, 6), C2: (4, 4), C3: (9, 9)
d) None of these

Answer: a) C1: (4, 4), C2: (2, 2), C3: (7, 7)
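A quick check in plain NumPy: each new centroid is simply the mean of the points currently assigned to that cluster.

```python
import numpy as np

C1 = np.array([(1, 1), (4, 4), (7, 7)])
C2 = np.array([(0, 4), (4, 0)])
C3 = np.array([(5, 5), (9, 9)])

# Mean over the points in each cluster gives the next centroids.
for name, c in [("C1", C1), ("C2", C2), ("C3", C3)]:
    print(name, c.mean(axis=0))

# C1 -> [4. 4.]   C2 -> [2. 2.]   C3 -> [7. 7.]   i.e. option (a)
```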

Q6. Following Question 5, what will be the Manhattan distance of observation (9, 9) from cluster centroid C1 in the second iteration?

a) 10
b) 5
c) 6
d) 7

Answer: a) 10
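Verifying the Manhattan (L1) distance from (9, 9) to the new C1 centroid (4, 4): |9 - 4| + |9 - 4| = 10.

```python
point, centroid = (9, 9), (4, 4)

# Manhattan distance: sum of absolute coordinate differences.
d = sum(abs(p - c) for p, c in zip(point, centroid))
print(d)  # 10
```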


Q7. Which of the following is not a clustering approach?

a) Hierarchical
b) Partitioning
c) Bagging
d) Density-Based

Answer: c) Bagging


Q8. Which one of the following is correct?

a) Complete linkage clustering is computationally cheaper compared to single linkage.
b) Single linkage clustering is computationally cheaper compared to K-means clustering.
c) K-Means clustering is computationally cheaper compared to single linkage clustering.
d) None of the above

Answer: c) K-Means clustering is computationally cheaper compared to single linkage clustering.
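A rough sketch (assuming scikit-learn) of why this holds: Lloyd's algorithm for K-means costs roughly O(n·k·d) per iteration, while single-linkage agglomerative clustering needs the O(n²) pairwise-distance structure, so it scales much worse with the number of points. Exact timings depend on the machine, but the gap grows with n.

```python
import time
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering

X, _ = make_blobs(n_samples=5000, centers=3, random_state=0)

t0 = time.perf_counter()
KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
t1 = time.perf_counter()
AgglomerativeClustering(n_clusters=3, linkage="single").fit(X)
t2 = time.perf_counter()

print(f"K-means: {t1 - t0:.2f}s   single-link: {t2 - t1:.2f}s")
```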

Q9. Considering single-link and complete-link hierarchical clustering, is it possible for a point to be closer to points in other clusters than to points in its own cluster? If so, in which approach will this tend to be observed?

a) No
b) Yes, single-link clustering
c) Yes, complete-link clustering
d) Yes, both single-link and complete-link clustering

Answer: d) Yes, both single-link and complete-link clustering


Q10. After performing K-Means Clustering analysis on a dataset, you observed the following dendrogram. Which of the following conclusions can be drawn from the dendrogram?

a) There were 28 data points in the clustering analysis
b) The best number of clusters for the analyzed data points is 4
c) The proximity function used is Average-link clustering
d) The above dendrogram interpretation is not possible for K-Means clustering analysis

Answer: d) The above dendrogram interpretation is not possible for K-Means clustering analysis


Q11. Feature scaling is an important step before applying the K-Means algorithm. What is the reason behind this?

a) In the distance calculation, it gives the same weight to all features
b) You always get the same clusters if you use or don’t use feature scaling
c) In Manhattan distance it is an important step but in Euclidean it is not
d) None of these

Answer: a) In the distance calculation, it gives the same weight to all features
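A small numeric illustration in plain NumPy (with made-up values): without scaling, the feature with the larger range dominates the Euclidean distance; after standardization, both features contribute on comparable scales.

```python
import numpy as np

# Two hypothetical (height, age) observations.
a = np.array([5.0, 20.0])
b = np.array([6.0, 80.0])
print(np.linalg.norm(a - b))  # ~60.0 -- the age difference dominates

# Standardize each feature (z-score) using sample statistics.
X = np.array([[5.0, 20.0], [6.0, 80.0], [5.5, 50.0]])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Now both features are weighted comparably in the distance.
print(np.linalg.norm(Xz[0] - Xz[1]))
```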

Q12. Which of the following options is a measure of internal evaluation of a clustering algorithm?

a) Rand Index
b) Jaccard Index
c) Davies-Bouldin Index
d) F-score

Answer: c) Davies-Bouldin Index
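A minimal sketch (assuming scikit-learn) of what makes this an internal measure: the Davies-Bouldin index needs only the data and the predicted labels, not any ground-truth classes, and lower values indicate better-separated clusters.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Internal evaluation: only X and the cluster labels are required.
print(davies_bouldin_score(X, labels))
```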


Q13. Given A = {0, 1, 2, 5, 6} and B = {0, 2, 3, 4, 5, 7, 9}, calculate the Jaccard Index of these two sets.

a) 0.50
b) 0.25
c) 0.33
d) 0.41

Answer: c) 0.33
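Checking the Jaccard index with Python sets: |A ∩ B| / |A ∪ B|. Here the intersection is {0, 2, 5} (size 3) and the union has 9 elements.

```python
A = {0, 1, 2, 5, 6}
B = {0, 2, 3, 4, 5, 7, 9}

# Jaccard index = |intersection| / |union|
print(len(A & B) / len(A | B))  # 3 / 9 = 0.333...
```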

Q14. Suppose you run the K-means clustering algorithm on a given dataset. What are the factors on which the final clusters depend?
I. The value of K
II. The initial cluster seeds chosen
III. The distance function used

a) I only
b) II only
c) I and II only
d) I, II and III

Answer: d) I, II and III


Q15. Consider a training dataset with two numerical features, namely the height of a person and the age of the person. Height varies from 4 to 8 and age varies from 1 to 100. We wish to perform K-Means clustering on the dataset. Which of the following options is correct?

a) We should use Feature-scaling for K-Means Algorithm.
b) Feature scaling cannot be used for the K-Means Algorithm.
c) You always get the same clusters if you use or don't use feature scaling.
d) None of these

Answer: a) We should use Feature-scaling for K-Means Algorithm.
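A minimal sketch (assuming scikit-learn, with hypothetical height/age rows for illustration only) of the recommended setup: standardize both features before K-means so that age (range 1-100) does not dominate height (range 4-8) in the distance computation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical (height, age) observations.
X = np.array([[4.5, 10], [5.0, 12], [6.2, 65], [6.0, 70], [5.8, 40]])

# Scaling happens inside the pipeline, before clustering.
model = make_pipeline(
    StandardScaler(),
    KMeans(n_clusters=2, n_init=10, random_state=0),
)
print(model.fit_predict(X))
```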

<< Prev- Introduction to Machine Learning Week 7 Assignment Solutions

>> Next- Introduction to Machine Learning Week 9 Assignment Solutions


DISCLAIMER: Use these answers for reference purposes only. Quizermania doesn't claim these answers to be 100% correct, so make sure you submit your assignments on the basis of your own knowledge.

For discussion about any question, join the comment section below to get your query resolved, and share your thoughts about the topics covered in this quiz.

Check out more NPTEL courses: Click Here!
