How can I pass a Python dictionary value into a DataFrame where clause in PySpark?
I have a Python dictionary as below:
column_dict = {'email': 'customer_email_addr',
               'addr_bill': 'crq_st_addr',
               'addr_ship': 'ship_to_addr',
               'zip_bill': 'crq_zip_cd',
               'zip_ship': 'ship_to_zip',
               'phone_bill': 'crq_cm_phone',
               'phone_ship': 'ship_to_phone'}
I have a Spark DataFrame with around 3 billion records, built as follows:
source_sql = ("select cust_id, customer_email_addr, crq_st_addr, ship_to_addr, "
              "crq_zip_cd, ship_to_zip, crq_cm_phone, ship_to_phone "
              "from odl.cust_master "
              "where trans_dt >= '{}' and trans_dt <= '{}'").format('2017-11-01', '2018-10-31')
cust_id_m = hiveCtx.sql(source_sql)
cust_id_m.cache()  # cached because the DataFrame is reused for every dictionary key
My intention is to find the distinct valid customers for email, address, zip, and phone by looping over the dictionary keys above (see the sketch after the traceback below). When I test a single key value in the spark shell as below ...
>>> cust_id_risk_m = cust_id_m.selectExpr("cust_id").where(
...     ("cust_id_m.'{}'").format(column_dict['email']) != '').distinct()
I'm getting the error below and need an expert's assistance in resolving it.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/mapr/spark/spark-2.1.0/python/pyspark/sql/dataframe.py", line 1026, in filter
raise TypeError("condition should be string or Column")
TypeError: condition should be string or Column
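From the error message I suspect the problem is that my ("cust_id_m.'{}'").format(...) != '' comparison is evaluated by plain Python before Spark ever sees it, so where() receives a Python bool rather than a string or Column. What I'm ultimately aiming for is something like the sketch below, using pyspark.sql.functions.col to build a Column expression for each dictionary entry. This is untested and may not be the right approach, which is why I'm asking:

from pyspark.sql.functions import col

# Untested sketch of the loop I have in mind: for each friendly key,
# look up the real column name and keep the distinct customer ids
# whose value in that column is non-empty.
cust_id_risk = {}
for key, col_name in column_dict.items():
    cust_id_risk[key] = (cust_id_m
                         .where(col(col_name) != '')  # Column expression, not a Python bool
                         .select('cust_id')
                         .distinct())

Note that in this sketch I filter before selecting cust_id, since filtering on a column that has already been projected away would fail to resolve. Is this the right way to pass the dictionary values into the where clause?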