Competitive Business Intelligence: web scraping with Oracle.

January 6, 2009

In my opinion, one of the trends for Business Intelligence in 2009 (and the years to come) will be the integration of externally available data (data not found within the organisation itself, e.g. data in magazines, on the web, in libraries etc.) into the data warehouse and into an organisation’s business processes. Using BI to monitor the external environment that an organisation operates in will grow in importance for decision making.
“Decision makers […] need information about what is going on outside the organization as well as inside.[…] Macroenvironmental analysis […] examines the economic, political, social, and technological events that influence an industry”.
From: Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing, and Sales p.4.
However, this is not fully understood by the wider Business Intelligence community, as can be seen from the quote below, taken from an article on BI in one of the local business weeklies here in Dublin:
“BI tools are fundamentally about using data which an organisation already has – whether in databases, CRM systems, financial and accounting packages, ERP systems or elsewhere”.
This perspective is too narrow. While it is fundamental to use BI to mine and analyse data that an organisation owns, it is as important to integrate data from external sources such as the web to optimize the internal decision-making process. Organisations that understand this requirement will have the edge over their competitors. For executives to make informed decisions they need to be able to look at intra-organisational events as well as the competitive environment.
“Strategic management is the art and science of directing companies in light of events both inside and outside the organization. In addition to understanding their own operations, managers must understand the rest of the industry. For example, should a company try to be a low-cost producer or a best-cost producer? How can a company differentiate its product line? Should the focus be on the entire market or on a niche? Without understanding what others are doing, making decisions about these types of issues leads to unexpected results.”
From: Document Warehousing and Text Mining: Techniques for Improving Business Operations, Marketing, and Sales.
Web mining, data mining and text mining techniques will be of fundamental importance to implement this new breed of BI.
In this series we will have a look at all three areas. In today’s article I will show you how we can implement web mining techniques with Oracle. In part two of this series we will look at how we can use data mining techniques in general, and survival analysis in particular, to analyse macro-environmental data from the web. Finally, in the third part we will look at how we can use text mining to classify and cluster the extracted data.
So, what we will do today is harvest macro-environmental business intelligence from real estate data. I thought it might be interesting to look at property related data because of the recent bursting of the property bubble. The site we will extract data from is property.ie.
The information we harvest can be used, amongst other things, to:
– Identify areas where houses sell the quickest (have a short survival time).
– Identify features of houses that sell the quickest.
– Find properties that are near other properties.
– Create a taxonomy/classification to browse properties by features.
– Monitor price increases or decreases.
– Use a combination of all of the above.
In the case study that follows I am using Oracle 11.1.0.6.

1. Create a user and assign the relevant permissions

Let’s log on as a DBA user, e.g. SYS, and execute the following statements:

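Something along these lines will do (real_estate is just a placeholder password; pick a proper one):

CREATE USER real_estate IDENTIFIED BY real_estate;  -- placeholder password
GRANT connect, resource TO real_estate;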
This will give us user real_estate with connect and resource grants.
Next we need to create an Access Control List (ACL) for this user. The ACL will allow us access to the property.ie website, but prevents access to any other website. ACLs are new in Oracle 11. If you are using Oracle 10 you need to adapt the permissions accordingly.

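A minimal sketch; the ACL file name and description are placeholders:

BEGIN
  dbms_network_acl_admin.create_acl (
    acl         => 'property_ie.xml',  -- placeholder file name
    description => 'Allow connections to property.ie',
    principal   => 'REAL_ESTATE',
    is_grant    => TRUE,
    privilege   => 'connect');
  COMMIT;
END;
/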
On line 5 the principal needs to be in capital letters. Otherwise Oracle will return an error.
Next we assign the property.ie site to the ACL:

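Again a sketch, using the same placeholder ACL name as above:

BEGIN
  dbms_network_acl_admin.assign_acl (
    acl  => 'property_ie.xml',  -- same placeholder name as above
    host => 'www.property.ie');
  COMMIT;
END;
/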
Finally, we grant execute permissions on utl_http and dbms_lock:

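Still connected as the DBA user:

GRANT EXECUTE ON utl_http TO real_estate;
GRANT EXECUTE ON dbms_lock TO real_estate;
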
2. Create Tables

Next we need to create the tables to store the extracted information.

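A minimal sketch of the schema as it is used by the procedures further down; the column names follow the INSERT and MERGE statements below, the types are educated guesses:

-- the raw HTML snippet of the frontpage area dropdown
CREATE TABLE seed_html (
  html CLOB );

-- one row per area, with the number of properties in it
CREATE TABLE seed (
  area             VARCHAR2(100),
  no_of_properties NUMBER,
  part_of_link     VARCHAR2(100) );

-- the raw HTML of each master property page
CREATE TABLE property_html (
  part_of_link VARCHAR2(100),
  occurence    NUMBER,
  html         CLOB );

-- the property attributes parsed from the current extract batch
CREATE TABLE property_attributes (
  link      VARCHAR2(500),
  prop_code VARCHAR2(20),
  price     NUMBER,
  address   VARCHAR2(500),
  rooms     VARCHAR2(100),
  area      VARCHAR2(100) );

-- staging table for the delta between two extract batches
CREATE TABLE property_helper (
  link       VARCHAR2(500),
  prop_code  VARCHAR2(20),
  price      NUMBER,
  address    VARCHAR2(500),
  rooms      VARCHAR2(100),
  area       VARCHAR2(100),
  delete_ind NUMBER(1) );

-- the audit-trailed property table and its surrogate key
CREATE TABLE property (
  id              NUMBER,
  link            VARCHAR2(500),
  prop_code       VARCHAR2(20),
  price           NUMBER,
  address         VARCHAR2(500),
  rooms           VARCHAR2(100),
  area            VARCHAR2(100),
  valid_from_date DATE,
  valid_to_date   DATE,
  date_removed    DATE,
  valid_ind       NUMBER(1),
  delete_ind      NUMBER(1) );

CREATE SEQUENCE seq_property;

-- assumed name: holds the description, coordinates and extract date (step 6)
CREATE TABLE property_description (
  prop_code   VARCHAR2(20),
  description CLOB,
  date_added  DATE,
  latitude    VARCHAR2(20),
  longitude   VARCHAR2(20) );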
Because we will be dealing with very little data initially, I have not added any indexes to these tables. Once the volume of data grows and we have a better understanding of the query patterns, we should add the relevant indexes.

3. Extract the property seed

Before we get stuck into things I recommend you get familiar with the functionality, navigation etc. of the property.ie website. This will make it easier to understand what we will be dealing with in the next couple of sections. For the purpose of this exercise we will limit the extract process to properties in county Dublin, as we don’t want to put too much pressure on the property.ie web servers. At the same time, though, we want to gather enough information to perform some proper analysis: we will include all areas in Dublin in our extract process. If you have a look at the frontpage of the property.ie website you will see that each area also lists the number of properties available in that area. This information will become relevant in the later stages of our extract exercise.
The procedure below extracts the HTML part of the property.ie frontpage which contains the areas and the number of properties in each area.

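The sketch below gives the idea; extract_seed is a placeholder name and the <select> markers are assumptions about the frontpage markup:

CREATE OR REPLACE PROCEDURE extract_seed
AS
  v_html CLOB;
BEGIN
  -- start with a clean slate for this extract batch
  DELETE FROM seed_html;

  -- if you are using a proxy or want to anonymize your
  -- requests, remove the comment below and fill in your
  -- proxy info (username, password, host, and port)
  -- utl_http.set_proxy('http://user:password@proxyhost:port');

  -- retrieve the frontpage HTML; the <select> markers are
  -- assumptions about how the area dropdown is marked up
  v_html := HTTPURITYPE('http://www.property.ie').getclob();
  INSERT INTO seed_html
  VALUES (SUBSTR(v_html, INSTR(v_html, '<select'),
          INSTR(v_html, '</select>') + 9 - INSTR(v_html, '<select')));
  COMMIT;
END extract_seed;
/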
On line 11 I have commented out the use of a proxy server. If you are using a proxy or want to anonymize your requests, remove the comment and fill in your proxy info such as username, password, host, and port.
On line 15 we are using HTTPURITYPE to retrieve the HTML code of the property.ie frontpage and extract the HTML content of the property area dropdown. HTTPURITYPE uses the utl_http package under the hood.

We will now strip this piece of information of any HTML noise.

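A sketch of the parsing step (parse_seed is a placeholder name; the <option> markup and the bracketed property counts are assumptions about the page structure):

CREATE OR REPLACE PROCEDURE parse_seed
AS
BEGIN
  DELETE FROM seed;

  INSERT INTO seed (area, no_of_properties, part_of_link)
  SELECT
    -- the area name: strip the HTML tags (the <option> pattern is an assumption)
    TRIM(REGEXP_REPLACE(
      REGEXP_SUBSTR(html, '<option[^>]*>[^<(]*', 1, occurence), '<[^>]+>')),
    -- the property count: assumed to be the digits in brackets behind the area
    TO_NUMBER(REGEXP_SUBSTR(
      REGEXP_SUBSTR(html, '\([0-9]+\)', 1, occurence), '[0-9]+')),
    -- the URL fragment: the lower-cased area name, spaces turned into dashes
    REPLACE(LOWER(TRIM(REGEXP_REPLACE(
      REGEXP_SUBSTR(html, '<option[^>]*>[^<(]*', 1, occurence), '<[^>]+>'))), ' ', '-')
  FROM seed_html
  CROSS JOIN (
    SELECT level occurence FROM dual
    CONNECT BY level <= 190 )
  WHERE REGEXP_SUBSTR(html, '<option[^>]*>[^<(]*', 1, occurence) IS NOT NULL;

  COMMIT;
END parse_seed;
/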
On lines 18-20, we do a cross join between our seed_html table and an inline view that returns the numbers 1 to 190. This is done using the CONNECT BY clause. We have chosen 190 as the upper limit because there will never be more than 190 areas in county Dublin.
The inline view returns the following.
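In abridged form:

SELECT level occurence FROM dual CONNECT BY level <= 190;

 OCCURENCE
----------
         1
         2
         3
       ...
       190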


We then use regular expressions to parse each occurrence of an area and the number of properties in this area, step by step. At the end of this article there are a couple of links to regular expressions tutorials. This is the first time that I have used them myself, so I am sure the above could have been done in a more elegant and more performant way.
In our seed table, we should now have the following information:
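The property counts obviously change from day to day, so the values below only indicate the shape of the data:

AREA         NO_OF_PROPERTIES  PART_OF_LINK
-----------  ----------------  ------------
Balbriggan                ...  balbriggan
Ballsbridge               ...  ballsbridge
...                       ...  ...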

4. Extract HTML for property master pages

Each property area has one or more property master pages. On each master property page there are no more than 10 properties listed. Users of the property.ie site can page through these master pages. By clicking on a property on the master page they get to the details page for this property.
The URL template for the master page is http://www.property.ie/property-for-sale/dublin/<area>/p_<page number>/, e.g. http://www.property.ie/property-for-sale/dublin/balbriggan/p_2/
With the information from the seed table, we will iterate over the master property pages in our next procedure and parse the information that we are interested in from these pages. What we will do first, though, is introduce an error-handling procedure. This is necessary to handle errors in case we lose connectivity.

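A minimal version might look like this:

CREATE OR REPLACE PROCEDURE raise_err (
  p_msg IN VARCHAR2 )
AS
BEGIN
  -- re-raise whatever went wrong during the extract
  -- as an application error
  raise_application_error(-20001, p_msg);
END raise_err;
/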
Procedure raise_err raises any error that occurs during the extract. But let’s move on to actually extracting the HTML for the master property pages via our seed table.

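In condensed form, the procedure looks something like this (extract_property_html is a placeholder name, and the line numbers quoted below relate to the full version, which also cuts the retrieved HTML down to the part that lists the properties before storing it):

CREATE OR REPLACE PROCEDURE extract_property_html (  -- placeholder name
  p_area IN VARCHAR2 )
AS
  -- passing in NULL makes the cursor return every area;
  -- otherwise it returns just the area asked for (COALESCE)
  CURSOR c_area (cp_area VARCHAR2) IS
    SELECT part_of_link,
           -- no more than 10 properties per master page
           CEIL(no_of_properties / 10) AS no_of_pages
    FROM   seed
    WHERE  part_of_link = COALESCE(cp_area, part_of_link);
BEGIN
  FOR r_area IN c_area (p_area) LOOP
    BEGIN
      -- retrieve all master pages of the area in one go:
      -- the CONNECT BY inline view generates the page numbers
      INSERT INTO property_html (part_of_link, occurence, html)
      SELECT r_area.part_of_link,
             page_no,
             HTTPURITYPE('http://www.property.ie/property-for-sale/dublin/'
               || r_area.part_of_link || '/p_' || page_no || '/').getclob()
      FROM  (SELECT level page_no FROM dual
             CONNECT BY level <= r_area.no_of_pages);
      COMMIT;
      -- pause for a second to reduce the load on the web server
      dbms_lock.sleep(1);
    EXCEPTION
      WHEN OTHERS THEN
        -- remove the partially extracted area so that a re-run
        -- can pick up from where the extract stopped
        DELETE FROM property_html
        WHERE  part_of_link = r_area.part_of_link;
        COMMIT;
        raise_err(SQLERRM);
    END;
  END LOOP;
END extract_property_html;
/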
On lines 7-16 we define a cursor that will use the information from the seed table to browse the master property pages. This cursor allows us to either iterate over everything in the seed table (if we pass in NULL as a parameter to the procedure) or just a particular area. This is achieved via the COALESCE function.
On lines 42-57 we do the main work. We extract the html of all of the master property pages on an area by area basis. Again we use our cross join and CONNECT BY technique from earlier on to retrieve all master property pages for an area in one go. The results of this cross join, for just one area, would look similar to the following:

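For the Balbriggan area, for example, the generated master page URLs would be:

http://www.property.ie/property-for-sale/dublin/balbriggan/p_1/
http://www.property.ie/property-for-sale/dublin/balbriggan/p_2/
http://www.property.ie/property-for-sale/dublin/balbriggan/p_3/
...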
We store the piece of html that contains the property attributes in our property_html table. Later on we will use this piece of HTML to parse the property attributes we are interested in.
On line 52 we pause for exactly one second to reduce the load on the property.ie web server before moving on to the next area.
On lines 58-63 we do some error handling in case we lose connectivity. If we lose connectivity we delete any entries for the area we were extracting at the moment the error occurred. This will allow us to pick up from where the extract stopped when we re-execute the procedure.

5. Extract and merge property attributes

We now have the relevant HTML from the master property pages to extract and merge the property attributes.

...
16       …(SUBSTR(prop_info, instr(prop_info,'…',1)+4, instr(prop_info,'…',1)-4)) AS rooms,
17       part_of_link AS area
18     FROM ( SELECT
19       SUBSTR(html, instr(html,'…',1,occurence+3), instr(html,'…',1,occurence+3) - instr(html,'…',1,occurence+3)) AS prop_info,
20       occurence, part_of_link FROM property_html
21       CROSS JOIN (
22         SELECT level occurence FROM dual CONNECT BY level <= 10) );
23
24
25   COMMIT;
26
27   -- Get the properties that were updated or newly inserted
28
29   INSERT INTO property_helper
30   SELECT
31     link,
32     prop_code,
33     price,
34     address,
35     rooms,
36     area,
37     0
38   FROM
39     property_attributes
40   WHERE prop_code IS NOT NULL
41   MINUS
42   SELECT
43     link,
44     prop_code,
45     price,
46     address,
47     rooms,
48     area,
49     delete_ind
50   FROM
51     property;
52
53   COMMIT;
54
55   -- Get the property codes that were deleted
56
57   INSERT INTO property_helper
58   SELECT
59     '-',
60     prop_code,
61     -1,
62     '-',
63     '-',
64     '-',
65     1
66   FROM
67     property
68   WHERE delete_ind <> 1
69   MINUS
70   SELECT
71     '-',
72     prop_code,
73     -1,
74     '-',
75     '-',
76     '-',
77     1
78   FROM
79     property_attributes;
80
81
82   COMMIT;
83
84   -- Update the updated and deleted records
85
86   MERGE INTO property a USING (
87     SELECT
88       link,
89       prop_code,
90       price,
91       address,
92       rooms,
93       area,
94       delete_ind
95     FROM
96       property_helper
97   ) b ON (a.prop_code = b.prop_code)
98   WHEN MATCHED THEN UPDATE SET
99     a.valid_to_date = CASE WHEN a.valid_ind = 1 THEN SYSDATE ELSE a.valid_to_date END,
100    a.date_removed = CASE
101      WHEN a.delete_ind = 1 THEN a.date_removed -- It has been removed previously
102      ELSE
103        CASE
104          WHEN b.delete_ind = 1 THEN SYSDATE
105          ELSE a.date_removed
106        END
107    END,
108    a.valid_ind = 0,
109    a.delete_ind = CASE WHEN b.delete_ind = 1 THEN 1 ELSE a.delete_ind END;
110
111
112  COMMIT;
113
114  -- Create the updated and newly inserted records. Updated records get a new record to audit changes
115
116  INSERT INTO property
117  SELECT
118    seq_property.nextval,
119    link,
120    prop_code,
121    price,
122    address,
123    rooms,
124    area,
125    SYSDATE,
126    TO_DATE('31/12/9999','DD/MM/YYYY'),
127    TO_DATE('31/12/9999','DD/MM/YYYY'),
128    1,
129    0
130  FROM (
131    SELECT
132      link,
133      prop_code,
134      price,
135      address,
136      rooms,
137      area,
138      delete_ind
139    FROM
140      property_helper
141    MINUS
142    SELECT
143      link,
144      prop_code,
145      price,
146      address,
147      rooms,
148      area,
149      delete_ind
150    FROM
151      property
152  )
153  WHERE
154    delete_ind <> 1;
155
156  COMMIT;
157
158 END merge_prop;
159 /

Procedure created.

The above procedure consists of five parts.
On lines 11-22 we parse the relevant attributes from the HTML piece we extracted in the previous step. This includes the link to the property’s details page, the prop_code (the unique identifier for the property), the price, the address, and the room details. Again we are using regular expressions to achieve this.
On lines 29-51 we store properties that were either updated or added since our last extract batch in a helper table (property_helper). We have to do a full comparison between all our previously extracted properties in the property table and the properties we have just extracted. We do this via the MINUS operator.
Note: For a large volume of records and depending on our hardware, we might run into performance issues doing a full diff between the two result sets. Anything below 1M records should not be a problem though.
On lines 57-79 we store properties that were deleted since our last extract job in the property_helper table. Again the only option we have here is to do a full comparison between the records we have extracted previously and those we have extracted in our current batch cycle.
On lines 86-109 we merge records that were updated or deleted with previously extracted property records. For each record that was updated we update its valid period and set the valid_ind to 0, i.e. the valid indicator is set to false and as a result we have marked this record as invalid. For each record that was deleted we also update its valid period and valid_ind field. In addition, we update the record’s delete_ind field to 1, i.e. its delete indicator is set to true and as a result we have marked this record as deleted at source.
On lines 116-154 we insert the new records we came across in our current extract batch. We also create a new record for updated records (similar to a Slowly Changing Dimension Type 2). This will give us an audit trail for any updates that were made to records, e.g. when the price is increased or decreased.

6. Extract property details

As part of the previous step we extracted the link to the property’s details page. In this step we will use this link as part of an HTTP GET request and scrape the information we are interested in from this page.

...
27       REPLACE(REGEXP_REPLACE(REGEXP_SUBSTR(html,'….*…'),'<[^>]+>'),'-->',''),
28       TRUNC(SYSDATE),
29       REGEXP_SUBSTR(TO_CHAR(REGEXP_SUBSTR(html,'show_map.*')),'(-|[0-9])[0-9].[0-9]{2,8}',1,1),
30       REGEXP_SUBSTR(TO_CHAR(REGEXP_SUBSTR(html,'show_map.*')),'-[0-9].[0-9]{2,8}',1,1)
31     FROM (
32       SELECT
33         HTTPURITYPE(r_prop_desc.link).getclob() AS html
34       FROM dual );
35
36     COMMIT;
37
38     dbms_lock.sleep(1);
39
40
41   END LOOP;
42
43 END insert_prop_desc;
44 /

On lines 7-16 we define a cursor that returns those properties for which no description has been added yet.
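The cursor might look something like this (property_description is the assumed name of the table that holds the extracted details):

CURSOR c_prop_desc IS
  SELECT p.link,
         p.prop_code
  FROM   property p
  WHERE  p.valid_ind = 1
  AND    NOT EXISTS (
           -- property_description is an assumed table name
           SELECT 1
           FROM   property_description d
           WHERE  d.prop_code = p.prop_code );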
On lines 23-34 we iterate over the cursor and parse the description, the longitude, and the latitude from the HTML. We will use longitude and latitude in part 2 of this series to calculate distance between properties.

7. Bringing it all together

In a last step we bring all the individual procedures together in a master procedure.

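A sketch of the master procedure; the names of the procedures from steps 3 and 4 are the placeholder names used earlier:

CREATE OR REPLACE PROCEDURE extract_master
AS
BEGIN
  -- remove the data from our previous extract batch
  DELETE FROM seed_html;
  DELETE FROM seed;
  DELETE FROM property_html;
  DELETE FROM property_attributes;
  DELETE FROM property_helper;
  COMMIT;

  -- then execute each extract procedure, step by step
  extract_seed;                  -- placeholder name (step 3)
  parse_seed;                    -- placeholder name (step 3)
  extract_property_html(NULL);   -- placeholder name (step 4)
  merge_prop;
  insert_prop_desc;
END extract_master;
/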
On lines 7-15 we remove the data from our previous extract batch and then, step by step, execute each extract procedure.
As a last step we need to add error handling and code instrumentation to our solution. However, this is out of scope for this article.