Define your case class and use it as the "source" of the schema for your datasets.
case class Point(x: Double, y: Double)
import spark.implicits._ // for the toDF and toDS conversions
val points = Seq(Point(0, 0), Point(0, 1)).toDF
scala> points.show
+---+---+
| x| y|
+---+---+
|0.0|0.0|
|0.0|1.0|
+---+---+
As you may have noticed, the case class merely provides the schema (i.e. the structure) of your dataset: the rows of the resulting DataFrame are generic Row objects, not Point instances. In other words, you cannot write a user-defined function that accepts Point objects while processing such a dataset.
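For contrast, a UDF over this DataFrame has to take the columns apart and operate on plain values. The following is a minimal sketch; the diffUdf name and its y - x body (mirroring the distance function below) are illustrative only:

import org.apache.spark.sql.functions.udf

// A UDF receives column values, never Point instances,
// so the fields are passed in one by one.
val diffUdf = udf((x: Double, y: Double) => y - x)
points.withColumn("diff", diffUdf($"x", $"y")).show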
A possible solution is to avoid user-defined functions altogether: use a typed Dataset and apply a regular Scala function (or method) instead of registering a UDF.
scala> val points = Seq(Point(0,0), Point(0,1)).toDS
points: org.apache.spark.sql.Dataset[Point] = [x: double, y: double]
// a regular Scala function (or method), not a UDF
def distance(x: Double, y: Double) = y - x
val myFn = (p: Point) => distance(p.x, p.y)
scala> points.map(myFn).show
+-----+
|value|
+-----+
| 0.0|
| 1.0|
+-----+
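Since map returns a Dataset[Double], Spark names the single column value by default. If a descriptive name is preferred, a follow-up toDF call can rename it (distance here is just an assumed name):

points.map(myFn).toDF("distance").show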