Pandas UDFs should be faster in most cases, primarily because they use Apache Arrow for more efficient data exchange between the Spark JVM and the Python process, so it's recommended to use Pandas UDFs whenever possible.
"Normal" UDFs can be used in cases where Pandas UDFs can't be; for example, right now Pandas UDFs don't support MapType, arrays of TimestampType, or nested StructType.
P.S. When using PySpark, it may also make sense to evaluate Koalas. In my own tests, Koalas was roughly 2x faster than similar code that used Pandas UDFs, although carefully written PySpark code was still faster.