I am analyzing a dataset with 9 features, and I used Sparse PCA to reduce its dimensionality to 3. I also tried a second approach: first standardizing the dataset to mean 0 and variance 1, then applying Sparse PCA. The two approaches give different results, and I want to know which of the two is more likely to capture the relationships between the variables.
import pandas as pd
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

# Sparse PCA on the raw (unscaled) data
transformer = SparsePCA(n_components=3, random_state=0, alpha=5)
transformer.fit(data)
X_transformed = transformer.transform(data)
print(pd.DataFrame(X_transformed))
# Sparse PCA on the standardized data (mean 0, variance 1)
scaler = StandardScaler()
scaler.fit(data)
normalized_data = scaler.transform(data)

transformer = SparsePCA(n_components=3, random_state=0, alpha=5)
transformer.fit(normalized_data)
X_transformed = transformer.transform(normalized_data)
print(pd.DataFrame(X_transformed))
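To see where the two results diverge, one option is to inspect the sparse loadings in `components_` rather than the transformed scores, since the loadings show which original variables each component draws on. Below is a minimal, self-contained sketch; the synthetic `data` is an assumption standing in for the real dataset, which isn't shown above:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for `data` (assumption -- the real dataset is not
# shown): 9 features, three of them on a much larger scale.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 9)) * np.array([1, 1, 1, 1, 1, 1, 100, 100, 100])

raw = SparsePCA(n_components=3, random_state=0, alpha=5).fit(data)
std = SparsePCA(n_components=3, random_state=0, alpha=5).fit(
    StandardScaler().fit_transform(data)
)

# components_ is a (3, 9) array of sparse loadings: one row per
# component, one column per original feature. Comparing the two
# tables shows how scaling changes which variables dominate.
print(pd.DataFrame(raw.components_))
print(pd.DataFrame(std.components_))
```

On unscaled data, features with large variances tend to dominate the loadings, so comparing the two `components_` tables side by side is a direct way to see what standardization changed.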