I think your core issue is a misplaced right parenthesis. Consider the following code (I've tested the equivalent in Scala, but it should work the same way in pySpark):
pair_rdd = rdd.flatMap(lambda kv: [(kv[0], x) for x in kv[1].split(',')])
The value is split into a list of strings, each element of that list is mapped to a (key, string) tuple, and flatMap then flattens the resulting list out into multiple rows in the RDD. With the extra right parenthesis after the split, you were throwing away the key, since you returned only a list of strings.
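For concreteness, here's a minimal runnable sketch of that fix (the sample data and names here are just for illustration, not from your dataset):

    from pyspark import SparkContext

    sc = SparkContext("local", "flatmap-example")

    # Hypothetical input: (key, comma-separated string) pairs.
    rdd = sc.parallelize([(1, "a,b,c"), (2, "d,e")])

    # flatMap emits one (key, element) row per split element, keeping the key.
    pair_rdd = rdd.flatMap(lambda kv: [(kv[0], x) for x in kv[1].split(',')])

    print(pair_rdd.collect())
    # [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'd'), (2, 'e')]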
Are the key values unique in the original dataset? If so, and you want a list of tuples, then use map instead of flatMap and you'll get what you want without a shuffle. If you do want to combine multiple rows from the original dataset, then groupByKey is called for, not reduceByKey; see the sketch below.
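Building on the sketch above (again with hypothetical data), the two cases look like this:

    # Keys unique: map keeps one (key, list-of-strings) row per key; no shuffle.
    tuple_rdd = rdd.map(lambda kv: (kv[0], kv[1].split(',')))

    # Keys repeated: groupByKey gathers all values for each key (this shuffles).
    grouped = pair_rdd.groupByKey().mapValues(list)
    print(grouped.collect())
    # e.g. [(1, ['a', 'b', 'c']), (2, ['d', 'e'])]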
I'm also curious whether the split is necessary at all: is your tuple (Int, String) or (Int, List[String])?