
I am working on a research project on human emotion detection using OpenCV and Python. I am following a tutorial that uses the CK+ dataset for training, but when I run the training code it throws an OutOfMemory error. How can I solve this problem? I am a beginner with OpenCV and Python. I have put the error output and source code below.

OpenCV Error: Insufficient memory (Failed to allocate 495880000 bytes) in cv::OutOfMemoryError, file C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp, line 55
OpenCV Error: Assertion failed (u != 0) in cv::Mat::create, file C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp, line 436
Traceback (most recent call last):
  File "D:/Documents/Private/Pycharm/EmotionDetection/training.py", line 71, in <module>
    correct = run_recognizer()
  File "D:/Documents/Private/Pycharm/EmotionDetection/training.py", line 49, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
cv2.error: C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp:436: error: (-215) u != 0 in function cv::Mat::create
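
For scale: OpenCV's EigenFaces recognizer copies the whole training set into one double-precision matrix before running PCA, so the training buffer alone is roughly images × width × height × 8 bytes, and PCA needs further working buffers on top of that. A minimal back-of-the-envelope sketch (the image count and dimensions below are assumptions for illustration, not values taken from the traceback):

    # Rough memory estimate for EigenFaces training (illustrative numbers).
    num_images = 200          # assumed training set size
    width, height = 640, 490  # assumed CK+ frame dimensions
    bytes_per_pixel = 8       # EigenFaces trains on a 64-bit float matrix

    data_matrix_bytes = num_images * width * height * bytes_per_pixel
    print("training matrix alone: %.0f MB" % (data_matrix_bytes / 1024 ** 2))
    # A 32-bit process can address at most 2-4 GB, and a single contiguous
    # allocation of this size can easily fail there.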

This is the source code:

    import cv2
    import glob
    import random
    import numpy as np

    emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]  # Emotion list
    fishface = cv2.face.EigenFaceRecognizer_create()  # Initialize the recognizer (note: this creates an EigenFace, not fisher face, recognizer despite the variable name)

    data = {}


    def get_files(emotion):  # Define function to get file list, randomly shuffle it and split 80/20
        files = glob.glob("dataset\\%s\\*" % emotion)
        random.shuffle(files)
        training = files[:int(len(files) * 0.8)]  # get first 80% of file list
        prediction = files[-int(len(files) * 0.2):]  # get last 20% of file list
        return training, prediction


    def make_sets():
        training_data = []
        training_labels = []
        prediction_data = []
        prediction_labels = []
        for emotion in emotions:
            training, prediction = get_files(emotion)
            # Append data to training and prediction list, and generate labels 0-7
            for item in training:
                image = cv2.imread(item)  # open image
                gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert to grayscale
                training_data.append(gray)  # append image array to training data list
                training_labels.append(emotions.index(emotion))

            for item in prediction:  # repeat above process for prediction set
                image = cv2.imread(item)
                gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
                prediction_data.append(gray)
                prediction_labels.append(emotions.index(emotion))

        return training_data, training_labels, prediction_data, prediction_labels


    def run_recognizer():
        training_data, training_labels, prediction_data, prediction_labels = make_sets()

        print("training fisher face classifier")
        print("size of training set is:", len(training_labels), "images")

        fishface.train(training_data, np.asarray(training_labels))

        print("predicting classification set")

        cnt = 0
        correct = 0
        incorrect = 0
        for image in prediction_data:
            pred, conf = fishface.predict(image)
            if pred == prediction_labels[cnt]:
                correct += 1
                cnt += 1
            else:
                cv2.imwrite("difficult\\%s_%s_%s.jpg" % (emotions[prediction_labels[cnt]], emotions[pred], cnt), image)  # <-- this one is new
                incorrect += 1
                cnt += 1
        return (100 * correct) / (correct + incorrect)


    # Now run it
    meta_score = []
    for i in range(0, 10):
        correct = run_recognizer()
        print("got", correct, "percent correct!")
        meta_score.append(correct)

    print("\n\nend score:", np.mean(meta_score), "percent correct!")
Comments:

  • Maybe the code is right, but your dataset is too large for your computer's memory. (1) Add memory; (2) cut down the dataset; (3) switch to another algorithm, such as LBPH (a sketch of this swap follows below). Eigenfaces is not really a good choice. Commented Oct 11, 2017 at 13:58
  • Maybe you have a 32-bit Python running; upgrading to 64-bit might help. Commented Oct 11, 2017 at 14:10
  • @Silencer Thank you, it worked. I reduced the dataset by half and it works fine. Commented Oct 11, 2017 at 15:29
  • @uphill Thank you, I will try it. Commented Oct 11, 2017 at 15:29
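
Following up on the LBPH suggestion in the comments: cv2.face.LBPHFaceRecognizer_create() exposes the same train()/predict() interface as the eigenface recognizer, but stores compact per-image histograms instead of one large double-precision matrix, so it is far less memory-hungry. A minimal sketch of the swap, assuming the make_sets() function from the question's script is available:

    import cv2
    import numpy as np

    # Requires the opencv-contrib-python package for the cv2.face module.
    recognizer = cv2.face.LBPHFaceRecognizer_create()

    # make_sets() is the function defined in the question's script.
    training_data, training_labels, prediction_data, prediction_labels = make_sets()

    recognizer.train(training_data, np.asarray(training_labels))

    # predict() returns a (label, confidence) pair, as in the question's code.
    correct = sum(
        recognizer.predict(img)[0] == lab
        for img, lab in zip(prediction_data, prediction_labels)
    )
    print("accuracy: %.1f%%" % (100.0 * correct / len(prediction_labels)))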

1 Answer


If you are using a 32-bit system, this is simply not possible: a 32-bit process cannot address enough memory, and your images are too big for it. If your operating system itself is 32-bit, upgrade to a 64-bit OS; if the OS is already 64-bit, you are probably running a 32-bit Python build and should switch to 64-bit tools.
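
A quick way to verify which interpreter is actually running (the OS can be 64-bit while the Python build is still 32-bit) is to check the pointer size:

    import struct

    # 32 means a 32-bit interpreter with a 2-4 GB address-space ceiling;
    # 64 means a 64-bit build.
    print(struct.calcsize("P") * 8, "bit Python")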
