The substring function from pyspark.sql.functions only accepts a fixed starting position and length, so you can't pass it a column-dependent start directly. Your approach does work, however, if you move it into a SQL expression via F.expr, where instr can supply the position:
import pyspark.sql.functions as F

d = [{'POINT': 'The quick # brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog'},
     {'POINT': 'The quick brown fox jumps over the lazy dog.# The quick brown fox jumps over the lazy dog.'}]
df = spark.createDataFrame(d)

# instr returns the 1-based position of '#'; substring then takes 30 characters from there
df.withColumn('POINT', F.expr("substring(POINT, instr(POINT, '#'), 30)")).show(2, False)
+------------------------------+
|POINT |
+------------------------------+
|# brown fox jumps over the laz|
|# The quick brown fox jumps ov|
+------------------------------+
As for your other attempts: ''.join(string.split("#")[1:]) works on a plain Python string, but not on a Column. And

filtered_df = filtered_df.withColumn('POINT', split(filtered_df['POINT'], "#")[1:])

raises startPos and length must be the same type. Got <class 'int'> and <class 'NoneType'>, respectively. because slicing a Column with [1:] is translated to Column.substr(1, None), and the missing stop value becomes the None length. If you want the split-based version, take an element instead of a slice, e.g. split(filtered_df['POINT'], '#').getItem(1) for the part after the first '#'.
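If you'd rather keep everything after the '#' than cap it at 30 characters, Spark SQL's two-argument form substring(str, pos) runs to the end of the string, so F.expr("substring(POINT, instr(POINT, '#'))") should also work. As a sketch of the semantics the expression relies on (1-based positions, instr returning 0 on no match), mirrored in plain Python with hypothetical helper names:

```python
def spark_substring(s, pos, length=None):
    # mirrors Spark SQL substring: pos is 1-based; with no length it runs to the end
    start = pos - 1
    return s[start:] if length is None else s[start:start + length]

def spark_instr(s, sub):
    # mirrors Spark SQL instr: 1-based position of the first match, 0 if absent
    return s.find(sub) + 1

s = 'The quick # brown fox jumps over the lazy dog.'
print(spark_substring(s, spark_instr(s, '#'), 30))  # the fixed-length version shown above
print(spark_substring(s, spark_instr(s, '#')))      # everything from '#' to the end
```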