In Spark, as with any SQL left outer join, the result will contain more rows than the left table whenever the right table has duplicate values in the join key: each left row is matched against every right row with the same key.
You could first drop the duplicates on the right table before performing the join, as follows.

myDF.dropDuplicates("myJoinkey")
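Here is a minimal runnable sketch of that approach; the DataFrame names (left, right) and the sample data are placeholders I made up for illustration, and note that dropDuplicates keeps an arbitrary row per key:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("DedupJoin").master("local[*]").getOrCreate()
import spark.implicits._

val left  = Seq((1, "a"), (2, "b")).toDF("myJoinkey", "leftVal")
val right = Seq((1, "x"), (1, "y"), (2, "z")).toDF("myJoinkey", "rightVal")

// Joining directly multiplies left rows: key 1 matches two right rows,
// so the result has 3 rows even though the left table has only 2.
left.join(right, Seq("myJoinkey"), "left_outer").show()

// Dropping duplicate keys on the right first keeps one row per key,
// so the joined result has exactly one row per left row.
val rightDeduped = right.dropDuplicates("myJoinkey")
left.join(rightDeduped, Seq("myJoinkey"), "left_outer").show()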
Or you could do a groupBy on the join key and aggregate the right table down to one row per key, as sketched below.
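Continuing with the same left and right DataFrames from the sketch above, one way to do this (first() is just one possible aggregate; pick whichever suits your data):

import org.apache.spark.sql.functions.first

// Collapse the right table to one row per join key before joining.
val rightAggregated = right
  .groupBy("myJoinkey")
  .agg(first("rightVal").as("rightVal"))

left.join(rightAggregated, Seq("myJoinkey"), "left_outer").show()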
For a fuller walkthrough, take a look at this dedup example:
https://github.com/spirom/LearningSpark/blob/master/src/main/scala/dataframe/DropDuplicates.scala