
This question has probably been asked many times in different forms, but my problem is that when I use XGBClassifier() with production-like data, I get a feature name mismatch error. I am hoping someone can tell me what I am doing wrong. Here is my code (BTW, the data is completely made up):

import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.metrics import accuracy_score
import xgboost as xgb

data = {"Age":[44,27,30,38,40,35,70,48,50,37],
        "BMI":["25-29","35-39","30-35","40-45","45-49","20-25","<19",">70","50-55","55-59"],
        "BP":["<140/90",">140/90",">140/90",">140/90","<140/90","<140/90","<140/90",">140/90",">140/90","<140/90"],
        "Risk":["No","Yes","Yes","Yes","No","No","No","Yes","Yes","No"]}

df = pd.DataFrame(data)

X = df.iloc[:, :-1]
y = df.iloc[:, -1]

labelencoder = LabelEncoder()

def encoder_X(columns):
    for i in columns:
        X.iloc[:, i] = labelencoder.fit_transform(X.iloc[:, i])

encoder_X([1,2])

y = labelencoder.fit_transform(y)

onehotencoder = OneHotEncoder(categorical_features = [1, 2])  # indices of the categorical columns
X = onehotencoder.fit_transform(X).toarray()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 13)

model = xgb.XGBClassifier()
model.fit(X_train, y_train, verbose = True)

y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]

accuracy = accuracy_score(y_test, predictions)
print("Accuracy: {0}%".format(accuracy*100))

So far so good, no error. The accuracy score is 100%, but that is because the data set is made up, so I am not worried about that.

When I try to classify a new data set with the model, I get a "feature_names mismatch" error:

proddata = {"Age":[65,50,37],
        "BMI":["25-29","35-39","30-35"],
        "BP":["<140/90",">140/90",">140/90"]}

prod_df = pd.DataFrame(proddata)

def encoder_prod(columns):
    for i in columns:
        prod_df.iloc[:, i] = labelencoder.fit_transform(prod_df.iloc[:, i])

encoder_prod([1,2])

onehotencoder = OneHotEncoder(categorical_features = [1, 2])
prod_df = onehotencoder.fit_transform(prod_df).toarray()

predictions = model.predict(prod_df)

After this I get the error below:

predictions = model.predict(prod_df)
Traceback (most recent call last):

  File "<ipython-input-24-456b5626e711>", line 1, in <module>
    predictions = model.predict(prod_df)

  File "c:\users\sozdemir\appdata\local\programs\python\python35\lib\site-packages\xgboost\sklearn.py", line 526, in predict
    ntree_limit=ntree_limit)

  File "c:\users\sozdemir\appdata\local\programs\python\python35\lib\site-packages\xgboost\core.py", line 1044, in predict
    self._validate_features(data)

  File "c:\users\sozdemir\appdata\local\programs\python\python35\lib\site-packages\xgboost\core.py", line 1288, in _validate_features
    data.feature_names))

ValueError: feature_names mismatch: ['f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12'] ['f0', 'f1', 'f2', 'f3', 'f4', 'f5']
expected f6, f11, f12, f9, f7, f8, f10 in input data

I know this is happening as a result of OneHotEncoding when I fit and transform the prod data to an array, but I might be wrong. (The training matrix ends up with 13 columns: 10 BMI dummies + 2 BP dummies + Age, while the prod matrix has only 6: 3 + 2 + 1, which matches the 13-vs-6 feature lists in the error.)

If this is a result of OneHotEncoding, can I just skip OneHotEncoding, since LabelEncoder() already encodes the categorical values?
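
To illustrate the suspected cause (a minimal sketch with made-up integer codes, not my real columns): an encoder refit on the prod data only sees the categories present there, so it emits fewer columns than the encoder fitted on the training data:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

train_col = np.array([[0], [1], [2], [3], [4]])  # 5 categories seen at training time
prod_col = np.array([[0], [1], [2]])             # only 3 categories appear in prod

enc = OneHotEncoder()
print(enc.fit_transform(train_col).shape)  # (5, 5)
print(enc.fit_transform(prod_col).shape)   # (3, 3) -> fewer columns than the model expects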

Thank you so much for any help and feedback.

PS: The version of XGBoost is 0.7

xgboost.__version__
Out[37]: '0.7'
  • Can you please post the stack trace so that we know where the code is facing the error. Commented Aug 16, 2018 at 14:06
  • @UpasanaMittal - Sure, I'll edit my question to show where the error occurs. It is right after the model.predict(prod_df) line. Although, I might have found the answer, which I posted earlier today; I am just waiting for more feedback. Thanks Commented Aug 16, 2018 at 14:16

1 Answer

It seems the encoder needs to be saved after it is fitted, and then reused to transform the production data. I used joblib from sklearn. Jason from https://machinelearningmastery.com/ gave me the idea of saving the encoder. Below is an edited version:

import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.metrics import accuracy_score
from sklearn.externals import joblib
import xgboost as xgb

data = {"Age":[44,27,30,38,40,35,70,48,50,37],
        "BMI":["25-29","35-39","30-35","40-45","45-49","20-25","<19",">70","50-55","55-59"],
        "BP":["<140/90",">140/90",">140/90",">140/90","<140/90","<140/90","<140/90",">140/90",">140/90","<140/90"],
        "Risk":["No","Yes","Yes","Yes","No","No","No","Yes","Yes","No"]}

df = pd.DataFrame(data)

X = df.iloc[:, :-1]
y = df.iloc[:, -1]

labelencoder = LabelEncoder()

def encoder_X(columns):
    for i in columns:
        X.iloc[:, i] = labelencoder.fit_transform(X.iloc[:, i])

encoder_X([1,2])

y = labelencoder.fit_transform(y)

onehotencoder = OneHotEncoder(categorical_features = [1, 2])
onehotencoder.fit(X)
joblib.dump(onehotencoder, "encoder.pkl")  # save the fitted encoder for reuse on the prod data
X = onehotencoder.transform(X).toarray()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 13)

model = xgb.XGBClassifier()
model.fit(X_train, y_train, verbose = True)

y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]

accuracy = accuracy_score(y_test, predictions)
print("Accuracy: {0}%".format(accuracy*100))

And now, we can use the fitted encoder to transform the prod data:

proddata = {"Age":[65,50,37],
        "BMI":["25-29","35-39","30-35"],
        "BP":["<140/90",">140/90",">140/90"]}

prod_df = pd.DataFrame(proddata)

def encoder_prod(columns):
    for i in columns:
        # NB: this refits the LabelEncoder on the prod data; the integer codes
        # only match training if the prod categories sort the same way, so
        # strictly the fitted LabelEncoders should be saved and reused as well
        prod_df.iloc[:, i] = labelencoder.fit_transform(prod_df.iloc[:, i])

encoder_prod([1,2])

enc = joblib.load("encoder.pkl")  # reload the encoder fitted on the training data
prod_df = enc.transform(prod_df).toarray()

predictions = model.predict(prod_df)
results = [round(val) for val in predictions]

It seems to be working for this example, and I'll try this method at work on a larger data set. Please let me know what you think.
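
As a side note, the same joblib pattern can also persist the trained model, so the production script does not retrain anything. A minimal sketch (the file name is made up):

from sklearn.externals import joblib  # plain `import joblib` on newer sklearn versions

joblib.dump(model, "xgb_model.pkl")   # persist the trained classifier
model = joblib.load("xgb_model.pkl")  # reload it in the production script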

Thanks


1 Comment

So I applied this technique to a data-set at work and it worked.
