I am trying to list the files, their column counts, and their column names from each sub-directory inside a directory.
Directory: dbfs:/mnt/adls/ib/har/
    Sub-directory 2021-01-01
        File A.csv
        File B.csv
    Sub-directory 2021-01-02
        File A1.csv
        File B1.csv
With the code below I am getting the error 'PosixPath' object is not iterable in the second for loop. Could someone help me out, please?
files = dbutils.fs.ls("dbfs:/mnt/adls/ib/har/")
for fi in files:
    il = fi.path
    print(il)
    ill = Path(il)
    for fii in ill:
        if ".csv" in fii.path:
            df2 = spark.read.option("header", "true").option("sep", ";").option("escape", "\"").csv(f"{fii.path}")
            m = df2.columns
            l = len(df2.columns)
            print(f"{fii.path} has, {l} columns, {m}")
            cols[fii.path] = l
maxkey = max(cols, key=cols.get)
maxvalue = cols.get(maxkey)