I've been going through an online tutorial:
from sklearn.decomposition import PCA
from sklearn import datasets
import matplotlib.pyplot as plt
import time
digits = datasets.load_digits()
randomized_pca = PCA(n_components=2, svd_solver='randomized')
# a numpy array with shape (1797, 2)
reduced_data_rpca = randomized_pca.fit_transform(digits.data)
# make a scatter plot
colors = ['black', 'blue', 'purple', 'yellow', 'pink', 'red', 'lime', 'cyan',
'orange', 'gray']
start = time.time()
# Time taken for this loop = 9.5 seconds
# for i in range(len(reduced_data_rpca)):
#     x = reduced_data_rpca[i][0]
#     y = reduced_data_rpca[i][1]
#     plt.scatter(x, y, c=colors[digits.target[i]])
# Alternative way, time taken = 0.2 sec:
# plots all the points (x, y) with colors[i] in the ith iteration
for i in range(len(colors)):
    # selects all the points (by x and y) whose label (0-9) equals i -- am I
    # correct? Does this mean it iterates over the whole array again on every
    # pass to check for equality?
    x = reduced_data_rpca[:, 0][digits.target == i]
    y = reduced_data_rpca[:, 1][digits.target == i]
    plt.scatter(x, y, c=colors[i])
end = time.time()
print("Time taken", end - start, "secs")
My question is: although both the commented-out loop and the second loop perform the same operation, I cannot understand how the second loop works, or why it performs so much better than the first one.
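From what I can tell so far, the second loop relies on NumPy boolean mask indexing: `digits.target == i` compares every label against `i` at once and yields a boolean array, and indexing with that array keeps only the matching rows. Here is a minimal sketch of my understanding, using toy arrays of my own (not the tutorial's data) standing in for `reduced_data_rpca` and `digits.target`:

```python
import numpy as np

# toy stand-ins for reduced_data_rpca and digits.target
points = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]])
labels = np.array([0, 1, 0, 1])

mask = labels == 0       # boolean array: [True, False, True, False]
x = points[:, 0][mask]   # keeps entries where mask is True -> [0.0, 4.0]
y = points[:, 1][mask]   # -> [1.0, 5.0]
print(x, y)
```

So, if this is right, each of the 10 iterations selects a whole class of points in one vectorized pass and calls `plt.scatter` once, instead of calling it once per point (~1797 times) as the commented-out loop does; is that why it is faster?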