I would use a similar idea as @wwnde - the transform function. transform takes an array, applies the provided function to each of its elements, and returns an array of the same size with the transformed elements. Exactly what you need.
However, starting from the same idea, I would probably implement it differently. Two options:
from pyspark.sql import functions as F
df = spark.createDataFrame(
[(["test.a", "random.ac"],),
(["test.41", "random.23", "test.123"],)],
['c1']
)
# Option 1: split on the literal dot and take the first segment
df = df.withColumn('c2', F.transform('c1', lambda x: F.element_at(F.split(x, r'\.'), 1)))
# Option 2: extract everything before the last dot with a regex
df = df.withColumn('c3', F.transform('c1', lambda x: F.regexp_extract(x, r'(.+)\.', 1)))
df.show()
# +--------------------+--------------------+--------------------+
# | c1| c2| c3|
# +--------------------+--------------------+--------------------+
# | [test.a, random.ac]| [test, random]| [test, random]|
# |[test.41, random....|[test, random, test]|[test, random, test]|
# +--------------------+--------------------+--------------------+
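Note that F.transform accepting a Python lambda was added in Spark 3.1. If you're on an older version, the transform higher-order function itself has been available in Spark SQL since 2.4, so a minimal sketch of the first option via F.expr (assuming Spark >= 2.4) would be:

# Same as Option 1, but expressed in SQL so it works on Spark 2.4-3.0;
# '\\.' inside the SQL string becomes the regex \. (a literal dot) for split
df = df.withColumn('c2', F.expr(r"transform(c1, x -> element_at(split(x, '\\.'), 1))"))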