I've been trying to gain a better understanding of how Python allocates memory for lists when a list grows through appending. This question covers the basics well and explains how the memory increments grow as the length of the list increases. This article is an explanation of the source code, which can be found here.
I want to ask about this explanation:
/* This over-allocates proportional to the list size, making room
 * for additional growth.  The over-allocation is mild, but is
 * enough to give linear-time amortized behavior over a long
 * sequence of appends() in the presence of a poorly-performing
 * system realloc().
 * Add padding to make the allocated size multiple of 4.
 * The growth pattern is:  0, 4, 8, 16, 24, 32, 40, 52, 64, 76, ...
 * Note: new_allocated won't overflow because the largest possible value
 *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
 */
new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3;
/* Do not overallocate if the new size is closer to overallocated size
 * than to the old size.
 */
And specifically, this calculation:
new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3;
My understanding of this calculation is that the new memory allocation will be equal to newsize (the size that triggered the resize), plus newsize shifted right by three bits (an effective floor division by eight), plus 6. That sum is then ANDed with the one's complement of 3, which forces the last two bits to zero and makes the value divisible by 4.
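To sanity-check that reading, here is a small Python sketch of the formula (my own re-creation, not the actual CPython code; new_allocated() is just my name for it). Simulating one append at a time with it reproduces the growth pattern quoted in the comment:

# My own sketch of the formula in Python (not the actual CPython code);
# newsize is a number of elements and the result is the number of
# element slots to allocate.
def new_allocated(newsize):
    return (newsize + (newsize >> 3) + 6) & ~3

# Simulating one append at a time reproduces the growth pattern from the
# comment: 0, 4, 8, 16, 24, 32, 40, 52, 64, 76, ...
allocated = 0
pattern = [allocated]
for newsize in range(1, 70):
    if newsize > allocated:      # the list is full, so it has to grow
        allocated = new_allocated(newsize)
        pattern.append(allocated)
print(pattern)  # [0, 4, 8, 16, 24, 32, 40, 52, 64, 76]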
I used this code to generate my lists and report the sizes:
import sys

a = [i for i in range(108)]
print(sys.getsizeof(a)) # 920 bytes
b = [i for i in range(109)]
print(sys.getsizeof(b)) # 1080 bytes
At 109 elements the resize is triggered and newsize now equals 928 bytes.
The calculation above should look like this:

     1110100000  (928)
  +     1110100  (928 >> 3 = 116)
  +         110  (6)
  = 10000011010  (1050)
  & 11111111100  (~3, forcing the last two bits to 0)
  = 10000011000  (1048)

1048 bytes is less than the reported size of 1080 bytes.
The documentation states that this process may not work well with small lists, so I tried a bigger list. I won't reproduce this one in binary.
a = [i for i in range(10640)]
print(sys.getsizeof(a)) # 85176 bytes
b = [i for i in range(10641)]
print(sys.getsizeof(b)) # 95864 bytes
85184 + (85184 >> 3) + 6 = 95838 bytes
This will drop to 95836 when the "& ~3" is applied. Again, short of the 95864 reported.
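Running the same arithmetic through Python for both of the byte values I used gives the figures I quoted:

for newsize in (928, 85184):
    print((newsize + (newsize >> 3) + 6) & ~3)  # 1048, then 95836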
Why is the reported resize greater than the calculated resize?
Have you tried this with append or the likes? Two list comprehensions aren't really a great comparison (plus there is probably more code related to the list comprehension that's not in the file you are looking at).
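For what it's worth, a rough sketch of that kind of test might look like this (just illustrating the suggestion, not a definitive benchmark): append one element at a time and print sys.getsizeof() whenever the reported size changes.

import sys

a = []
last = sys.getsizeof(a)
print(len(a), last)
for i in range(100):
    a.append(i)
    size = sys.getsizeof(a)
    if size != last:  # the reported size only changes when the list is resized
        print(len(a), size)
        last = size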