For those who are curious, here is the ANTLR grammar specification for Spark SQL. It is an adaptation of Presto's presto-parser/src/main/antlr4/com/facebook/presto/sql/parser/SqlBase.g4 grammar.
In Spark, as with any SQL left outer join, the result will contain more rows than the left table if the right table has duplicate values in the join key.
You could first drop the duplicates on the right table before performing the join, for example:
myDF.dropDuplicates("myJoinkey")
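A minimal sketch of that approach, assuming two illustrative DataFrames (leftDF and rightDF) joined on a hypothetical "myJoinkey" column; none of these names come from the original:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("DedupBeforeJoin").getOrCreate()
import spark.implicits._

// Illustrative data: rightDF has a duplicate join key (1 appears twice)
val leftDF  = Seq((1, "a"), (2, "b")).toDF("myJoinkey", "leftVal")
val rightDF = Seq((1, "x"), (1, "y"), (3, "z")).toDF("myJoinkey", "rightVal")

// Remove duplicate keys on the right side so the left outer join
// cannot multiply rows coming from the left table
val dedupedRight = rightDF.dropDuplicates("myJoinkey")

val joined = leftDF.join(dedupedRight, Seq("myJoinkey"), "left_outer")
joined.show()

Keep in mind that dropDuplicates keeps an arbitrary row per duplicate key, so this is only appropriate when any one of the duplicate right-side rows is acceptable.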
Alternatively, you could do a groupBy and aggregate on the join key.
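A rough sketch of the groupBy/aggregate variant, reusing the assumed leftDF and rightDF from the previous sketch and keeping one rightVal per key via first (which value is kept is an arbitrary choice here):

import org.apache.spark.sql.functions.first

// Collapse the right side to one row per join key before joining
val aggregatedRight = rightDF
  .groupBy("myJoinkey")
  .agg(first("rightVal").as("rightVal"))

val joinedAgg = leftDF.join(aggregatedRight, Seq("myJoinkey"), "left_outer")
joinedAgg.show()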
Take a look at this dedup example:
https://github.com/spirom/LearningSpark/blob/master/src/main/scala/dataframe/DropDuplicates.scala