
How to Use a Scala UDF in PySpark?

I want to be able to use a Scala function as a UDF in PySpark:

package com.test

object ScalaPySparkUDFs extends Serializable {
    def testFunction1(x: Int): Int = { x * 2 }
    def testUDFFunction1 = udf { x: Int => testFunction1(x) }
}

Solution 1:

I agree with @user6910411: you have to call the apply method directly on the function. So your code would be:

UDF in Scala:

package com.test

import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions._

object ScalaPySparkUDFs {

    def testFunction1(x: Int): Int = { x * 2 }

    def getFun(): UserDefinedFunction = udf(testFunction1 _)
}
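Before either snippet can be called from Python, the compiled class has to be visible to the JVM behind the PySpark session. A minimal sketch, assuming the Scala object above is packaged into a jar; the jar name and path here are placeholders, not something defined in the answer:

from pyspark.sql import SparkSession

# Point the session at the jar that contains com.test.ScalaPySparkUDFs.
# "scala-pyspark-udfs.jar" is a hypothetical file name; substitute your own build output.
spark = (SparkSession.builder
         .config("spark.jars", "/path/to/scala-pyspark-udfs.jar")
         .getOrCreate())

Passing the jar with spark-submit --jars achieves the same thing.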

PySpark code:

from pyspark.sql import Row
from pyspark.sql.column import Column, _to_java_column, _to_seq

# `spark` / `sc` are the SparkSession and SparkContext of the running PySpark session,
# and the jar containing com.test.ScalaPySparkUDFs must be on the classpath.

def test_udf(col):
    sc = spark.sparkContext
    # Fetch the Scala UserDefinedFunction through the Py4J gateway and apply it
    # to the column, converted to a Java column sequence.
    _test_udf = sc._jvm.com.test.ScalaPySparkUDFs.getFun()
    return Column(_test_udf.apply(_to_seq(sc, [col], _to_java_column)))


row = Row("Value")
numbers = sc.parallelize([1, 2, 3, 4]).map(row).toDF()
numbers.withColumn("Result", test_udf(numbers['Value']))
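To verify the wrapper, materialize the result. Since testFunction1 doubles its input, each row should show Result = 2 * Value (assuming the jar is on the classpath as sketched above):

numbers.withColumn("Result", test_udf(numbers['Value'])).show()
# +-----+------+
# |Value|Result|
# +-----+------+
# |    1|     2|
# |    2|     4|
# |    3|     6|
# |    4|     8|
# +-----+------+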

Solution 2:

The question you've linked uses a Scala object. A Scala object is a singleton, so you can call the apply method directly.

Here you use a nullary function which returns an object of the UserDefinedFunction class, so you have to call the function first:

_f = sc._jvm.com.test.ScalaPySparkUDFs.testUDFFunction1() # Note () at the end
Column(_f.apply(_to_seq(sc, [col], _to_java_column)))
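Wrapped as a helper, mirroring Solution 1, this looks like the sketch below; it assumes the Scala object defines testUDFFunction1 as shown in the question:

from pyspark.sql.column import Column, _to_java_column, _to_seq

def test_udf1(col):
    sc = spark.sparkContext
    # testUDFFunction1 is nullary, so call it first to obtain the UserDefinedFunction,
    # then apply it to the column converted to a Java column sequence.
    _f = sc._jvm.com.test.ScalaPySparkUDFs.testUDFFunction1()
    return Column(_f.apply(_to_seq(sc, [col], _to_java_column)))

numbers.withColumn("Result", test_udf1(numbers['Value']))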
