
While using the unittest framework from Python, I noticed a behaviour that causes some problems in my case. To demonstrate it, have a look at the following code:

import unittest
import time

class TC_Memory(unittest.TestCase):

    def setUp(self):
        unittest.TestCase.setUp(self)
        self.__result = False

    def test_unittest_mem1(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem2(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem3(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem4(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem5(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem6(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem7(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem8(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

    def test_unittest_mem9(self):
        list1 = [9876543210] * 2048*2048*9
        time.sleep(1)

        self.assertTrue(self.__result, "Failed")

if __name__ == "__main__":
    unittest.main()

These test methods all do the same thing: generate a huge list, wait one second, and either pass or fail depending on the __result variable.

Now, when a test passes, nothing much happens, but when a test fails, the memory of the list does not seem to be freed. This causes huge memory consumption, as each failing test apparently holds on to its memory. Only at the very end, after every test has run and the results have been printed, is the memory freed and everything back to normal.

While the code above exaggerates, the real case contains 200+ tests, each of which uses about 20-30 MB of memory. If those are not freed, I run into a memory shortage.

It seems like unittest holds on to the test methods' variables so it can report their values if the test fails, or at least offer such reporting in that case. I don't know; maybe I'm overlooking something here.
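That suspicion can be demonstrated directly: a traceback object keeps every frame of the failed call chain alive, and each frame keeps its local variables alive. A minimal sketch of the mechanism (with a small list standing in for the huge one, and a hand-rolled `allocate_and_fail` as a stand-in for a test method):

```python
import sys

def allocate_and_fail():
    big = [9876543210] * 10  # stands in for the huge list
    raise AssertionError("Failed")

try:
    allocate_and_fail()
except AssertionError:
    tb = sys.exc_info()[2]  # hold on to the traceback, as a result class might

# Each frame referenced by the traceback still exposes its locals,
# so "big" stays reachable for as long as "tb" is referenced.
found = []
t = tb
while t is not None:
    found.extend(t.tb_frame.f_locals.keys())
    t = t.tb_next

print("big" in found)  # → True: the large object is still alive
```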

However, I need to get rid of this excess memory. So far my options are:

  • Calling del on any variable I don't need anymore. However, this somewhat ruins the nice part about having a garbage collector and "not having to worry about memory stuff".
  • Getting more RAM.
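For illustration, the first option would look roughly like this (a sketch with a small stand-in list and a hypothetical TC_Del test case; the gc.collect() call is optional, since CPython frees the list as soon as its reference count drops to zero):

```python
import gc
import unittest

class TC_Del(unittest.TestCase):
    def test_unittest_mem1(self):
        list1 = [9876543210] * 1024  # small stand-in for the real allocation
        length = len(list1)
        del list1      # drop the only reference *before* an assertion can fail
        gc.collect()   # optional: only needed to break reference cycles
        self.assertEqual(length, 1024)
```

Since list1 is gone by the time the assertion runs, no traceback can keep it alive even if the test fails.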

I'd love to hear that there is some sort of flag I can set. Even more, I'd love to hear someone pointing out an obvious error I made, or telling me it doesn't happen with version x.y of Python or unittest.

As for the versions used: it's Python 3.3.5 final, 64-bit.

So, if there are any more questions, I'll gladly answer them. If you have any idea, or even a shot in the dark, let me hear it and I will try it out.

Thanks in advance.

2 Comments
  • Maybe the --failfast option would work in your case: see docs Commented Oct 9, 2015 at 9:12
  • Unfortunately, --failfast is not really an option. A test may fail because of external hardware. I've thought about monitoring the process memory and killing it once it goes over a threshold, but that sounds hacky. Commented Oct 9, 2015 at 9:27

1 Answer


The problem is probably that the test runner (or the result class) retains the thrown exception, which contains references to the frames that refer to the large objects. What you might want to do is write a custom runner that does not show this behaviour. Something like (sorry for the Python 2, but it's what I've got at the moment):

import sys
import unittest
from unittest import TextTestResult, TextTestRunner

class CustomTestResult(TextTestResult):
    def addError(self, test, err):
        tp, vl, tb = err
        # Swap the real traceback for a tiny placeholder so the result
        # object does not keep the failing frames (and their locals) alive.
        super(CustomTestResult, self).addError(test, (tp, vl, placeholder))

    def addFailure(self, test, err):
        tp, vl, tb = err
        super(CustomTestResult, self).addFailure(test, (tp, vl, placeholder))

class CustomTestRunner(TextTestRunner):
    resultclass = CustomTestResult

if __name__ == "__main__":
    # Build a one-frame placeholder traceback to substitute for the real one.
    try:
        raise Exception
    except Exception:
        placeholder = sys.exc_info()[2]
    unittest.main(testRunner=CustomTestRunner)

There may be some room for improvement here, though. You could, for example, examine the traceback recursively and decide whether it is large enough to warrant being replaced (or perhaps even remove just the offending objects from the frames). This matters especially when the code under test itself raises an exception, in which case you might be interested in the real traceback rather than a placeholder.

Another solution might be to not do the allocation in the stack frame where the failure occurs, since the traceback created by the failing test only contains the frames that are still active at that point. Like:

def mem1(self):
    list1 = [9876543210] * 2048*2048*9
    time.sleep(1)

def test_unittest_mem1(self):
    self.mem1()

    self.assertTrue(self.__result, "Failed")

1 Comment

I guess I will take the first route; it sounds better and prevents me from having to update all the other tests. I'll update once I've been able to set everything up :)
