You cannot speed it up using threading, because of the Global Interpreter Lock (GIL): certain internal state of the Python interpreter is protected by that lock, which prevents threads that need to modify that state from running concurrently.
You could speed it up by spawning actual processes using multiprocessing. Each process runs in its own interpreter with its own GIL, which circumvents the threading limitation. With multiprocessing, you can either use shared memory, or give each process its own copy/partition of the data.
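A minimal sketch of the copy-per-process variant using a pool (`process_image` and the paths are placeholders for whatever your actual work is):

```python
from multiprocessing import Pool

def process_image(path):
    # Hypothetical worker: load the image at `path`, process it, and
    # return a (small) result. Runs in a separate interpreter process.
    return len(path)  # stand-in for real work

if __name__ == '__main__':
    paths = ['a.png', 'b.png', 'c.png']
    with Pool(processes=4) as pool:
        results = pool.map(process_image, paths)  # one task per image
    print(results)
```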
Depending on your task, you can either parallelize the processing of a single image by partitioning it, or you can parallelize the processing of a list of images (the latter is easy with a pool, as in the sketch above). If you want the former, you might want to store the image in an Array that can be accessed as shared memory, but you'd still have to solve the problem of where to write the results (writing to shared memory can hurt performance badly); see the sketch below. Also note that certain kinds of communication between processes (Queues, Pipes, or the parameter/return-value passing of functions in the module) require the data to be serialized using Pickle. This imposes some limitations on the data and can create significant performance overhead, especially if you have many small tasks.
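Here is a rough sketch of the shared-memory variant: the image lives in one shared buffer, and each process works in place on its own disjoint block of rows (the threshold is a made-up stand-in for the real per-partition work):

```python
import multiprocessing as mp
import numpy as np

def worker(shared, shape, row_start, row_end):
    # Re-wrap the shared buffer as a numpy array; no data is copied.
    img = np.frombuffer(shared, dtype=np.uint8).reshape(shape)
    # Made-up per-partition work: threshold the assigned rows in place.
    # Partitions are disjoint, so no locking is needed.
    img[row_start:row_end] = np.where(img[row_start:row_end] > 128, 255, 0)

if __name__ == '__main__':
    h, w = 1024, 1024
    shared = mp.RawArray('B', h * w)  # shared, lock-free byte buffer
    img = np.frombuffer(shared, dtype=np.uint8).reshape(h, w)
    img[:] = np.random.randint(0, 256, (h, w), dtype=np.uint8)

    n = 4
    bounds = np.linspace(0, h, n + 1, dtype=int)
    procs = [mp.Process(target=worker,
                        args=(shared, (h, w), bounds[i], bounds[i + 1]))
             for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Writing the results back into the same shared buffer, as done here, avoids pickling them back to the parent, but concurrent writes to nearby memory can still cost you cache performance.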
Another way to improve the performance of such operations is to write them in Cython, which has its own support for parallelization using OpenMP - I have never used that myself, though, so I don't know how much it would help.
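Going by the Cython documentation, a parallel loop would look roughly like this (the threshold kernel is again a made-up example, and the module has to be compiled with OpenMP enabled):

```cython
# cython: boundscheck=False, wraparound=False
# fastloop.pyx - compile with OpenMP, e.g.
#   extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp']
import numpy as np
from cython.parallel import prange

def threshold(unsigned char[:, :] img):
    cdef Py_ssize_t i, j
    cdef Py_ssize_t h = img.shape[0]
    cdef Py_ssize_t w = img.shape[1]
    out_arr = np.empty((h, w), dtype=np.uint8)
    cdef unsigned char[:, :] out = out_arr
    # prange releases the GIL and spreads the rows over OpenMP threads
    for i in prange(h, nogil=True):
        for j in range(w):
            out[i, j] = 255 if img[i, j] > 128 else 0
    return out_arr
```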
processPixel could probably be "numpy-ified", in which case you'll see an immense speedup over your current approach.
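For example, if processPixel applied something like a gamma curve to each pixel (a made-up stand-in, since its body isn't shown), the per-pixel Python loop collapses into a single array expression that runs in compiled code:

```python
import numpy as np

img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)

# Instead of looping: [[processPixel(p) for p in row] for row in img]
# express the same operation on the whole array at once:
result = (255 * (img / 255.0) ** 2.2).astype(np.uint8)  # one vectorized pass
```

This is usually the first thing to try: it often beats the multiprocessing approaches above while keeping everything in a single process.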