
I'm trying to scrape data from the public site asx.com.au

The page http://www.asx.com.au/asx/research/company.do#!/ACB/details contains a div with class 'view-content', which has the information I need:

[screenshot: the 'view-content' div rendered in the browser]

But when I try to view this page via Python's urllib2.urlopen that div is empty:

import urllib2
from bs4 import BeautifulSoup

url = 'http://www.asx.com.au/asx/research/company.do#!/ACB/details'
page = urllib2.urlopen(url).read()
soup = BeautifulSoup(page, "html.parser")
contentDiv = soup.find("div", {"class": "view-content"})
print(contentDiv)

# the result is an empty div:
# <div class="view-content" ui-view=""></div>

Is it possible to access the contents of that div programmatically?

Edit: as per the comment it appears that the content is rendered via Angular.js. Is it possible to trigger the rendering of that content via Python?

  • I see ng-scope - it is a name used by the AngularJS framework (or a similar one), so this page is generated by JavaScript. Commented Jan 28, 2016 at 0:28
  • @furas given that, perhaps this is a duplicate of stackoverflow.com/questions/30673447/… and I need to use Selenium or similar? Commented Jan 28, 2016 at 0:38
  •
    you don't need Selenium, you already have the url in my answer and you can get the data using urllib and json :) I'm working on a code example. Commented Jan 28, 2016 at 0:40
  • @furas You can't use ng.probe when a site is in production mode Commented May 27, 2020 at 15:41

1 Answer


This page uses JavaScript to read data from the server and fill the page.

I see you use the developer tools in Chrome - look in the Network tab at the XHR or JS requests.

I found this url:

http://data.asx.com.au/data/1/company/ACB?fields=primary_share,latest_annual_reports,last_dividend,primary_share.indices&callback=angular.callbacks._0

This url returns all the data in almost-JSON format (JSONP: JSON wrapped in a callback call).

But if you request this link without &callback=angular.callbacks._0 then you get the data in pure JSON format, and you can use the json module to convert it to a Python dictionary.
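If you do keep the callback parameter, the response is JSONP: the JSON wrapped in a call like angular.callbacks._0({...});. A small sketch of stripping such a wrapper before parsing (the sample payload below is illustrative, not the real ASX response):

```python
import json
import re

def strip_jsonp(text):
    """Remove a JSONP wrapper like callback({...}); and parse the JSON inside."""
    match = re.fullmatch(r'\s*[\w.$]+\s*\((.*)\)\s*;?\s*', text, re.S)
    if match:
        text = match.group(1)
    return json.loads(text)

# illustrative payload, not the real ASX response
wrapped = 'angular.callbacks._0({"code": "ACB", "fields": ["primary_share"]});'
print(strip_jsonp(wrapped))
```

Plain JSON (no wrapper) passes through the regex untouched and is parsed directly, so the same function handles both cases.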


EDIT: working code

import urllib2
import json

# new url      
url = 'http://data.asx.com.au/data/1/company/ACB?fields=primary_share,latest_annual_reports,last_dividend,primary_share.indices'

# read all data
page = urllib2.urlopen(url).read()

# convert json text to python dictionary
data = json.loads(page)

print(data['principal_activities'])

Output:

Mineral exploration in Botswana, China and Australia.

EDIT (2020.12.23)

This answer is almost 5 years old and was written for Python 2. In Python 3 it would need urllib.request.urlopen() or requests.get(), but the real problem is that over those 5 years the page changed its structure and technology. The urls (in the question and the answer) don't exist any more. The page would need a new analysis and a new method.

The question used the url

http://www.asx.com.au/asx/research/company.do#!/ACB/details

but the page currently uses the url

https://www2.asx.com.au/markets/company/acb

And it uses different urls for AJAX/XHR requests:

https://asx.api.markitdigital.com/asx-research/1.0/companies/acb/about
https://asx.api.markitdigital.com/asx-research/1.0/companies/acb/announcements
https://asx.api.markitdigital.com/asx-research/1.0/companies/acb/key-statistics
etc.
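These urls all follow the same pattern, so a small helper can build the address for any ticker and section (a sketch; the pattern is only assumed from the examples above):

```python
BASE = "https://asx.api.markitdigital.com/asx-research/1.0/companies"

def endpoint(ticker, section):
    """Build one of the XHR urls seen in DevTools for a given company."""
    return "{}/{}/{}".format(BASE, ticker.lower(), section)

print(endpoint("ACB", "about"))
# https://asx.api.markitdigital.com/asx-research/1.0/companies/acb/about
```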

You can find more urls using DevTools in Chrome/Firefox (tab: Network, filter: XHR)

import urllib.request
import json

# new url      
url = 'https://asx.api.markitdigital.com/asx-research/1.0/companies/acb/about'

# read all data
page = urllib.request.urlopen(url).read()

# convert json text to python dictionary
data = json.loads(page)

print(data['data']['description'])

Output:

Minerals exploration & development

8 Comments

Many thanks for the fast and detailed response! This is awesome.
in the new url you have to put another firm's code in place of ACB to get the data for that firm
That XHR comment saved me. Amazing. Good job.
@furas what if the website being scraped has a login/pw, how would urllib2 handle that?
@Raj currently I use requests for scraping. Generally you have to send a POST request with the login/password and get a cookie which you then use with the other requests. The page may also send other values for security reasons, so first you may have to get the page with the login form and read its hidden values. I always start with DevTools in Chrome/Firefox to see all the values in all the requests. Sometimes the page uses JavaScript to generate values and then it is easier to do everything with Selenium, which can control a web browser.
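To illustrate the "hidden values" part of the comment above: a login form usually carries hidden inputs (e.g. a CSRF token) that must be sent back with the POST. A sketch using only the standard library to collect them (the form HTML here is made up):

```python
from html.parser import HTMLParser

class HiddenInputParser(HTMLParser):
    """Collect name/value pairs from <input type="hidden"> fields."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden":
            self.fields[attrs.get("name")] = attrs.get("value", "")

# made-up login form; a real one would come from the first GET request
login_page = '<form><input type="hidden" name="csrf_token" value="abc123"></form>'
parser = HiddenInputParser()
parser.feed(login_page)
print(parser.fields)
```

The collected fields would then be merged with the username/password and POSTed, e.g. via requests.Session(), so the session cookie is reused on later requests.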
