
I have a df that has thousands of links like the ones below, for different users, in a column labeled url:

https://www.google.com/something
https://mail.google.com/anohtersomething
https://calendar.google.com/somethingelse
https://www.amazon.com/yetanotherthing

I have the following code:

import urlparse

df['protocol'] = ''
df['domain'] = ''
df['path'] = ''
df['query'] = ''
df['fragment'] = ''
unique_urls = df.url.unique()
l = len(unique_urls)
i=0
for url in unique_urls:
    i+=1
    print "\r%d / %d" %(i, l),
    split = urlparse.urlsplit(url)
    row_index = df.url == url
    df.loc[row_index, 'protocol'] = split.scheme
    df.loc[row_index, 'domain'] = split.netloc
    df.loc[row_index, 'path'] = split.path
    df.loc[row_index, 'query'] = split.query
    df.loc[row_index, 'fragment'] = split.fragment

The code is able to parse and split the URLs correctly, but it is slow since I am iterating over each row of the df. Is there a more efficient way to parse the URLs?

2 Answers

You can use Series.map to accomplish the same in one line:

df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlparse.urlsplit))
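
For context on why zip(*...) works here (a small illustration of my own, not part of the original answer): urlsplit returns a SplitResult, which is a namedtuple, so each result unpacks like a plain 5-tuple of (scheme, netloc, path, query, fragment). The snippet below uses Python 3's urllib.parse; on Python 2 the same function lives in the urlparse module used above.

from urllib.parse import urlsplit  # Python 2: from urlparse import urlsplit

scheme, netloc, path, query, fragment = urlsplit('https://www.google.com/something?q=1#top')
print(scheme, netloc, path, query, fragment)
# https www.google.com /something q=1 top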

Using timeit, the one-liner ran in 2.31 ms per loop instead of 179 ms per loop as in the original method, when run on 186 urls. (Note, however, that the code is not optimized for duplicates and will run the same urls through urlparse multiple times.)

Full Code:

import pandas
import urlparse

urls = ['https://www.google.com/something','https://mail.google.com/anohtersomething','https://www.amazon.com/yetanotherthing'] # tested with a list of 186 urls instead
df = pandas.DataFrame(urls, columns=['url'])
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlparse.urlsplit))
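
As the note above mentions, the one-liner re-parses duplicated URLs. A minimal sketch of one way around that (my own addition, not the answer author's code, written for Python 3, where urlparse became urllib.parse): parse each unique URL once, cache the results in a dict, and map that dict back onto the full column.

from urllib.parse import urlsplit

import pandas as pd

df = pd.DataFrame({'url': [
    'https://www.google.com/something',
    'https://mail.google.com/anohtersomething',
    'https://www.google.com/something',        # duplicate on purpose
    'https://www.amazon.com/yetanotherthing',
]})

# Parse each distinct URL exactly once and cache the SplitResult.
parts = {u: urlsplit(u) for u in df['url'].unique()}

# Series.map accepts a dict, so duplicate URLs reuse the cached result.
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(parts))

With heavy duplication this keeps the parsing cost proportional to the number of unique URLs rather than the number of rows.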

1 Comment

urlsplit used as a tuple. Great answer! Thanks.

I think there are too many lookups happening when you're writing back to the df. Each df.loc[row_index, ...] needs to check as many rows as there are URLs in total (the size of df.url). That means you first look at every row at least once to find the unique URLs, then for each URL you scan again to find the matching rows, then again for each write. So, assuming unique takes only one full scan, you're scanning the table roughly 1+N+(5N/2) times on average. You should only need one pass, really.

Unless you've got a huge number of repetitions, you could just ignore the duplicates, traverse df row by row, and make sure you're using an integer index (.iloc) for each iteration. If you're not storing other data in the row, you can also assign all fields at once:

df.iloc[idx] = {'protocol': ..., 'domain': ..., ...}
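
A minimal runnable sketch of this row-by-row idea (my own illustration, not the answer author's exact code; written for Python 3, where urlparse lives in urllib.parse). It writes the five URL columns by position in a single assignment per row, so any other columns in the row are left untouched.

from urllib.parse import urlsplit

import pandas as pd

df = pd.DataFrame({'url': [
    'https://www.google.com/something',
    'https://mail.google.com/anohtersomething',
    'https://www.amazon.com/yetanotherthing',
]})

cols = ['protocol', 'domain', 'path', 'query', 'fragment']
for col in cols:
    df[col] = ''
col_pos = [df.columns.get_loc(c) for c in cols]  # integer positions for .iloc

# One pass over the frame: one parse and one positional write per row,
# with no df.url == url boolean scan.
for pos, url in enumerate(df['url']):
    df.iloc[pos, col_pos] = list(urlsplit(url))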

1 Comment

Thank you. I do have a huge amount of duplicates, and I am storing other data in the row.
