That's an interesting question, in the sense that I don't see a reason why one would want it.
How can I create a Dataset using StructType?
I'd then ask a very similar question...
Why would you like to "trade" a case class with a StructType? What would that give you that a case class could not?
The reason you use a case class is that it offers you two things at once:
1. It describes your schema quickly, nicely, and type-safely.
2. Working with your data becomes type-safe.
Regarding 1., as a Scala developer you will define business objects that describe your data. You have to do that anyway (unless you like tuples and _1 and the like).
Regarding type safety (in both 1. and 2.): it is about transforming your data in a way that lets the Scala compiler catch places where you expect a String but have an Int. With StructType, that check happens only at runtime, not at compile time.
With all that said, the answer to your question is yes: you can create a Dataset whose schema is described by a StructType.
scala> val personDS = Seq(("Max", 33), ("Adam", 32), ("Muller", 62)).toDS
personDS: org.apache.spark.sql.Dataset[(String, Int)] = [_1: string, _2: int]
scala> personDS.show
+------+---+
| _1| _2|
+------+---+
| Max| 33|
| Adam| 32|
|Muller| 62|
+------+---+
You may be wondering why you don't see proper column names. That's exactly what a case class gives you: not only the types, but also the names of the columns.
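For comparison, here is a minimal sketch of the case-class route. The session setup is my illustration (in spark-shell a `SparkSession` named `spark` already exists); the `Person` name and fields are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session; in spark-shell, `spark` is already defined.
val spark = SparkSession.builder.master("local[*]").appName("case-class-ds").getOrCreate()
import spark.implicits._

// A case class gives both the column names and the column types,
// and the Scala compiler checks them at compile time.
case class Person(name: String, age: Int)

val people = Seq(Person("Max", 33), Person("Adam", 32), Person("Muller", 62)).toDS
people.show()
```

With this, `people` is a `Dataset[Person]`, and a misuse such as `people.map(_.age.toUpperCase)` fails to compile instead of failing at runtime.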
There is one trick you can use, however, to avoid dealing with case classes if you don't like them.
scala> val withNames = personDS.toDF("name", "age").as[(String, Int)]
scala> withNames.show
+------+---+
| name|age|
+------+---+
| Max| 33|
| Adam| 32|
|Muller| 62|
+------+---+
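And if you really do want to start from a StructType, here is a minimal sketch. The session setup, the field names, and the nullability flags are my illustration; the key point is that the schema is built and checked at runtime:

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical local session; in spark-shell, `spark` is already defined.
val spark = SparkSession.builder.master("local[*]").appName("structtype-ds").getOrCreate()
import spark.implicits._

// Describe the schema at runtime with StructType...
val schema = StructType(Seq(
  StructField("name", StringType, nullable = false),
  StructField("age", IntegerType, nullable = false)))

// ...build an untyped DataFrame from Rows matching that schema...
val rows = Seq(Row("Max", 33), Row("Adam", 32), Row("Muller", 62))
val df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)

// ...and only then attach types; a mismatch here surfaces at runtime,
// not at compile time, which is the trade-off discussed above.
val personDS = df.as[(String, Int)]
```

Note that the `StructType` only ever produces a `DataFrame` (`Dataset[Row]`); the `.as[(String, Int)]` step is what turns it into a typed Dataset, and that step is exactly where a case class (or tuple type) comes back in.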