
My csv file is on this link:

https://drive.google.com/file/d/1Pac9-YLAtc7iaN0qEuiBOpYYf9ZPDDaL/view?usp=sharing

I want to remove duplicates from the CSV by comparing the length of the genres list for each artist ID. If an artist has two records in the CSV (e.g., Ed Sheeran's ID 6eUKZXaKkcviH0Ku9w2n3V appears twice: one record has 1 genre while row #5 has 5 genres), I want to keep the row with the longest genres list.

I'm using this script for now:

import pandas
import ast


df = pandas.read_csv('39K.csv', encoding='latin-1')

df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(str(x))))
print(df['lst_len'][0])

df = df.sort_values('lst_len', ascending=False)

# Drop duplicates, preserving first (longest) list by ID
df = df.drop_duplicates(subset='ID')


# Remove extra column that we introduced, write to file
df = df.drop('lst_len', axis=1)
df.to_csv('clean_39K.csv', index=False)

This script works fine on a 500-record file (maybe I'm imagining that the number of records matters), but when I run it on my largest file, 39K.csv, I get this error:

Traceback (most recent call last):
  ... line 5, in <module>
    df['lst_len'] = df['genres'].map(lambda x: len(list(x)))
TypeError: 'float' object is not iterable

Please point out where I'm going wrong? Thanks.

1 Answer


You have bad data at (at least) line 16553 of your input csv file:

52lUXCmpmAIVsgNd1uADOy,Moosh & Twist,NULL

pandas interprets NULL as nan when it reads the file; nan is of type float and is not iterable. There are a few other NULL entries in there too, so you could either manually remove or fix them (preferred), or handle this case in your code.

For example, if you actually want to pretend that NULL should be interpreted as an empty list, you can preprocess the data like this (just after reading the csv):

df.loc[df['genres'].isnull(),['genres']] = df.loc[df['genres'].isnull(),'genres'].apply(lambda x: [])

Or more elegantly, switch to reading the csv using na_filter=False:

df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)

which will prevent pandas from replacing these values with nan in the first place.
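To illustrate the difference, here is a minimal standalone sketch using an in-memory one-row CSV (the column name genres is taken from the question; the ID value is made up):

```python
import io

import pandas

sample = "ID,genres\nabc123,NULL\n"  # hypothetical one-row CSV

# Default parsing: the string "NULL" is treated as missing and becomes nan (a float)
default_df = pandas.read_csv(io.StringIO(sample))
print(type(default_df['genres'].iloc[0]))  # <class 'float'>

# With na_filter=False, the literal string "NULL" survives parsing
raw_df = pandas.read_csv(io.StringIO(sample), na_filter=False)
print(raw_df['genres'].iloc[0])  # NULL
```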

Finally, the code in your traceback (len(list(x))) doesn't quite do what we want, because it counts the number of characters in the string representation of the list rather than the number of genres. The solution is to preprocess the NULL values into strings representing empty lists, then use ast.literal_eval to turn each string back into a real list:

import pandas
import ast

df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)

# Replace the literal string "NULL" with a string representing an empty list
df.replace(to_replace="NULL", value="[]", inplace=True)

# Optional sanity check: every entry should now parse as a list
for item in df['genres']:
    print(str(item))
    print(ast.literal_eval(item))

# Count genres per row by parsing the string back into a list
df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(x)))
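For completeness, here is a sketch of how this fix combines with the de-duplication logic from the question. It uses a small in-memory CSV with made-up sample rows so it can run standalone; swap that for '39K.csv' to process the real data:

```python
import ast
import io

import pandas

# Small in-memory stand-in for 39K.csv (hypothetical sample rows)
csv_data = io.StringIO(
    "ID,artist,genres\n"
    "6eUKZXaKkcviH0Ku9w2n3V,Ed Sheeran,\"['pop']\"\n"
    "6eUKZXaKkcviH0Ku9w2n3V,Ed Sheeran,"
    "\"['pop', 'uk pop', 'folk-pop', 'singer-songwriter', 'acoustic pop']\"\n"
    "52lUXCmpmAIVsgNd1uADOy,Moosh & Twist,NULL\n"
)

# na_filter=False keeps NULL as the literal string "NULL" instead of nan
df = pandas.read_csv(csv_data, na_filter=False)
df.replace(to_replace="NULL", value="[]", inplace=True)

# Parse each string into a real list and count its elements
df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(x)))

# Keep the row with the most genres per artist ID
df = df.sort_values('lst_len', ascending=False)
df = df.drop_duplicates(subset='ID')
df = df.drop('lst_len', axis=1)

print(len(df))  # two unique IDs remain
```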

11 Comments

Or maybe it's worth pre-processing with DataFrame.fillna("0"), or filling with empty values across the dataframe.
@pygo I thought that, but I'm not sure that will work, because fillna doesn't accept a list as its argument, and we explicitly want an empty list because we will later be calculating its length. Using fillna("0") definitely doesn't work (tested) without further processing.
Hmm, how about df = df.fillna(''), which will fill NA's (e.g. NaN's) with '' i.e. empty strings, or alternatively pandas.read_csv(path, na_filter=False), which will treat those fields as empty strings instead of missing values.
@pygo using fillna('') still didn't work, those values still ended up being nan. But your idea about na_filter=False worked beautifully, thanks, I've edited it into the answer.
@RobBricheno, awesome !
