Is There A Way To Write Pyspark Dataframe To Azure Cache For Redis?
I have a PySpark dataframe with 2 columns, and I have created an Azure Cache for Redis instance. I would like to write the PySpark dataframe to Redis, using the first column of the dataframe as the key.
Solution 1:
You need to leverage this library: https://github.com/RedisLabs/spark-redis, along with the associated JARs it needs (depending on which version of Spark and Scala you are using).
In my case I have installed 3 JARs on the Spark cluster (Scala 2.12, latest Spark); an alternative way of attaching them is sketched right after this list:
- spark-redis_2.12-2.6.0.jar
- commons-pool2-2.10.0.jar
- jedis-3.6.0.jar
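If you would rather pull the dependency from Maven than upload the JAR files by hand, a minimal sketch (assuming the same Scala 2.12 / spark-redis 2.6.0 versions; jedis and commons-pool2 come in as transitive dependencies) is to set spark.jars.packages before the session starts:

from pyspark.sql import SparkSession

# spark.jars.packages only takes effect if it is set before the driver JVM starts,
# so on a managed cluster (e.g. Databricks) attach the library through the cluster UI instead
spark = (
    SparkSession.builder
    .appName("redis-write")
    .config("spark.jars.packages", "com.redislabs:spark-redis_2.12:2.6.0")
    .getOrCreate()
)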
Along with the configuration for connecting to Redis:
Cluster conf setup:
spark.redis.auth  PASSWORD
spark.redis.port  6379
spark.redis.host  xxxx.xxx.cache.windows.net
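These are plain Spark configuration properties, so if you are not setting them at the cluster level, a minimal sketch of supplying them on the SparkSession builder would be (host and password are placeholders):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("redis-write")
    # substitute your cache host name and access key; the non-TLS port 6379
    # has to be enabled on the Azure cache for this to work
    .config("spark.redis.host", "xxxx.xxx.cache.windows.net")
    .config("spark.redis.port", "6379")
    .config("spark.redis.auth", "PASSWORD")
    .getOrCreate()
)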
Make sure your Azure Cache for Redis instance is on Redis 4.0; the library might have issues with 6.0. Sample code to push data:
from pyspark.sql.types import StructType, StructField, StringType

# "id" will serve as the Redis key column
schema = StructType([
    StructField("id", StringType(), True),
    StructField("colA", StringType(), True),
    StructField("colB", StringType(), True)
])

data = [
    ['1', '8', '2'],
    ['2', '5', '3'],
    ['3', '3', '1'],
    ['4', '7', '2']
]

df = spark.createDataFrame(data, schema=schema)
df.show()
Then write the dataframe to Redis, using the id column as the key:
(
    df.write
    .format("org.apache.spark.sql.redis")
    .option("table", "mytable")
    .option("key.column", "id")
    .save()
)
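Not part of the original answer, but as a quick sanity check the same data can be read back through the spark-redis data source; "mytable" and "id" are just the names used above. Each row is stored in Redis as a hash whose key has the form mytable:<id>.

# read the table back from Redis, mapping the hash key back into the "id" column
df_back = (
    spark.read
    .format("org.apache.spark.sql.redis")
    .option("table", "mytable")
    .option("key.column", "id")
    .load()
)
df_back.show()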