The code below produces output similar to the one you specified, but it uses maps since the names need to be dynamic. Please note that if you are only ever going to have a small, finite set of IDs that you know beforehand (id1, id2, id3), then I might approach this slightly differently (a rough sketch of one alternative is at the end of this answer). Also note the output is slightly different from what you specified: if an ID has only one value, you still get a list with one item. I am not sure it is even possible to have it exactly the way you specified, because you would be asking for two different "types" as values (a list if there is more than one value, a string if there is only one), which would cause problems anyway.
I could have done this in fewer steps, but wanted to show you the thought process and walk you through it.
from pyspark.sql import functions as F
df = spark.createDataFrame([("id1:xxx7xxx", ), ("id2:777l777", ), ("id1:xxx4xxx||id2:555x555||id1:xxx5xxx", )], ["Value"])
# Split the string on "||" and then explode.
# The reason I am keeping Value is that we will group by it later to get the
# split_val rows back onto their original rows - if there are other columns
# you can group by instead, you do not need to keep Value here.
df = df.select("Value", F.explode(F.split(F.col("Value"), "\|\|")).alias('split_val'))
df = df.withColumn("id_num", F.split(F.col("split_val"), ":").getItem(0)) \
.withColumn("id_val", F.split(F.col("split_val"), ":").getItem(1))
df = df.groupBy(["Value", "id_num"]).agg(F.collect_list("id_val").alias("id_val_list")) \
.withColumn("idMap", F.create_map(F.col("id_num"), F.col("id_val_list")))
# Now group by the original Value to get this back to one row per Value
df = df.groupBy(["Value"]).agg(F.collect_list("idMap").alias("ValueList"))
# If you don't want Value anymore, you can just select ValueList and rename it to Value
df = df.select(F.col("ValueList").alias("Value"))