I have a supervised classification problem with 4 numeric class labels (0, 1, 2, 3) and roughly 100 trials of 38 features as the input.
After feeding this data into an SVC classifier in both Python and Matlab (specifically the Classification Learner App) and matching the hyperparameters (C = 1, quadratic SVM, multi-class method = one-vs-one, standardised data, no PCA), the reported accuracies differ drastically:
- Matlab = 86.7 %
- Python = 45.0 %
Has anyone come across this before, or does anyone have ideas about what I could do to work out which result is correct?
Matlab input: set up in the Classification Learner App with the settings listed above (quadratic SVM, C = 1, one-vs-one, standardised data, no PCA).

Python input:
import numpy as np
from sklearn import datasets, linear_model, metrics, svm, preprocessing
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC
symptom = input("What symptom would you like to analyse? \n")
cross_validation = input("With cross validation? \n")
if cross_validation == "Yes":
    no_cvfolds = int(input("Number of folds? \n"))

# symptomDF, feature and name are defined earlier in the script
x = symptomDF[feature]
y = symptomDF.loc[:, 'updrs_class'].values

# Standardise the features, then run k-fold cross-validation on a
# quadratic (degree-2 polynomial) SVM with one-vs-one multi-class handling
x_new = StandardScaler().fit_transform(x)
scores = cross_val_score(SVC(kernel='poly', degree=2, C=1.0, decision_function_shape='ovo'),
                         x_new, y, cv=no_cvfolds)
print(name + " Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
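For reference, one check I'm considering (a minimal sketch that reuses x_new, y and no_cvfolds from the script above): passing an explicitly shuffled, stratified splitter to cross_val_score and printing the per-fold accuracies. As far as I can tell, an integer cv uses unshuffled folds, whereas the Classification Learner App partitions the data randomly, so this might show whether the fold assignment alone explains part of the gap:

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# x_new, y and no_cvfolds are taken from the script above
skf = StratifiedKFold(n_splits=no_cvfolds, shuffle=True, random_state=0)
clf = SVC(kernel='poly', degree=2, C=1.0, decision_function_shape='ovo')
scores = cross_val_score(clf, x_new, y, cv=skf)
print("Per-fold accuracies:", np.round(scores, 3))
print("Mean accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))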