I have a dataframe of sentences and a dictionary whose terms are grouped into topics, and I want to count the number of term matches for each topic in each sentence.
import pandas as pd
terms = {'animals': ['fox', 'deer', 'eagle'],
         'people': ['John', 'Rob', 'Steve'],
         'games': ['basketball', 'football', 'hockey']
         }
df = pd.DataFrame({
    'Score': [4, 6, 2, 7, 8],
    'Foo': ['The quick brown fox was playing basketball today',
            'John and Rob visited the eagles nest, the foxes ran away',
            'Bill smells like a wet dog',
            'Steve threw the football at a deer. But the football missed',
            'Sheriff John does not like hockey']
})
So far I have created a column for each topic and marked it with 1 if any of that topic's terms is present, by iterating over the dictionary:
df = pd.concat([df, pd.DataFrame(columns=list(terms.keys()))])
for k, v in terms.items():
    for val in v:
        df.loc[df.Foo.str.contains(val), k] = 1
print(df)
and I get:
                                                 Foo  Score animals games people
0   The quick brown fox was playing basketball today      4       1     1    NaN
1  John and Rob visited the eagles nest, the foxe...      6       1   NaN      1
2                         Bill smells like a wet dog      2     NaN   NaN    NaN
3  Steve threw the football at a deer. But the fo...      7       1     1      1
4                  Sheriff John does not like hockey      8     NaN     1      1
What is the best way to count the number of terms from each topic that appear in each sentence? And is there a more efficient way than looping over the dictionary, short of using Cython?
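For what it's worth, here is one possible sketch (not necessarily the best way): join each topic's terms into a regex alternation and use `Series.str.count`, which replaces the inner loop over terms with a single vectorized call per topic. Note that, like `str.contains` above, this matches substrings ("eagles" counts as a match for "eagle"); wrap each term in `r'\b...\b'` if whole-word matching is needed.

```python
import pandas as pd

terms = {'animals': ['fox', 'deer', 'eagle'],
         'people': ['John', 'Rob', 'Steve'],
         'games': ['basketball', 'football', 'hockey']}

df = pd.DataFrame({
    'Score': [4, 6, 2, 7, 8],
    'Foo': ['The quick brown fox was playing basketball today',
            'John and Rob visited the eagles nest, the foxes ran away',
            'Bill smells like a wet dog',
            'Steve threw the football at a deer. But the football missed',
            'Sheriff John does not like hockey'],
})

# One vectorized call per topic: '|'.join builds e.g. 'fox|deer|eagle',
# and str.count tallies every occurrence of any alternative in each row.
for topic, words in terms.items():
    df[topic] = df['Foo'].str.count('|'.join(words))

print(df)
# e.g. row 1 gets animals=2 ('eagles' and 'foxes') and people=2 (John, Rob)
```

This yields integer counts (0 instead of NaN for no match), so the 0/1 indicator columns from before can be recovered with `df[topic].clip(upper=1)` if needed.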