Often, the data you need from a website isn’t all in one place. It’s spread across multiple pages and organized into different categories. In this tutorial, you’ll learn how to efficiently collect data from every category, ensuring you capture everything you need without missing a page.
To show how this works in Octoparse, we'll use this webpage as an example: https://www.uline.com/product/BrowseWebClass.htm
This webpage is a bit tricky to handle. We need to click through each category page to access the product page. For some products, you can open the product page directly and extract the data, while for others, you may need to navigate through two subpages before reaching the data. In such cases, we’ll set up multiple tasks to scrape all the data effectively.
The main steps are listed in the menu on the right, and you can access the sample task here.
1. Create a Go to Web Page - to open the target website
Enter the page URL into the search box
Click Start to create a new task
2. Loop Click Each URL to Enter the Subpage
Click the first two items.
Select Loop Click Each URL
Click No
3. Modify the XPath of the Loop Item
Click the Loop Item
Select Variable List
Modify the XPath to //ul[contains(@class, "bwc")]/li/a
Click Apply to save the settings.
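If you want to check an XPath like this outside Octoparse, you can test it against the page HTML with a library such as lxml. The sketch below runs the same XPath on a made-up HTML fragment; the fragment, class name, and links are assumptions for illustration only, not the real Uline markup.

```python
from lxml import html

# Hypothetical HTML fragment mimicking the category list structure
# (a <ul> whose class contains "bwc", with one link per <li>).
fragment = """
<div>
  <ul class="bwc-list">
    <li><a href="/Grp_100/Boxes">Boxes</a></li>
    <li><a href="/Grp_200/Tape">Tape</a></li>
  </ul>
  <a href="/other">Unrelated link</a>
</div>
"""

tree = html.fromstring(fragment)

# Same XPath as in the Loop Item settings: every <a> inside an <li>
# of any <ul> whose class attribute contains "bwc".
links = tree.xpath('//ul[contains(@class, "bwc")]/li/a')
for a in links:
    print(a.text_content(), a.get("href"))
```

The unrelated link outside the list is not matched, which is the point of anchoring the loop on the "bwc" list rather than on all links.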
4. Extract the Webpage URL of Each Subpage
Select Click Item to enter the category page
Click the first two items
Select Text+Link to extract the title and URL of each subpage
Click the Loop Item
Modify the XPath to include all the items in the loop: //div//a[contains(@href, 'Grp') or contains(@href, 'BL') or contains(@class, 'cssa')]
Click Apply to save the settings
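This second XPath is broader: it accepts a link if its href contains "Grp" or "BL", or its class contains "cssa". A quick way to sanity-check it is again with lxml; the HTML fragment below is invented for illustration and only mirrors those three patterns.

```python
from lxml import html

# Hypothetical category-page fragment: the first three links should be
# matched (href contains "BL", href contains "Grp", class contains
# "cssa"), while the plain "/about" link should be skipped.
fragment = """
<div>
  <a href="/BL_110/Detail">Shipping Boxes</a>
  <a href="/Grp_52/Cls_10">Poly Bags</a>
  <a class="cssa-item" href="/promo">Featured</a>
  <a href="/about">About Us</a>
</div>
"""

tree = html.fromstring(fragment)

# Same XPath as in the modified Loop Item settings.
subpage_links = tree.xpath(
    "//div//a[contains(@href, 'Grp') or contains(@href, 'BL')"
    " or contains(@class, 'cssa')]"
)
for a in subpage_links:
    print(a.text_content(), a.get("href"))
```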
5. Click Save and Run the Task - to get the subpage URLs
Then you can get the URLs of all the subpages from the run's output.
6. Set Up Two Tasks to Extract Product Information
By observing the subpage URLs, you'll notice some differences.
For webpages whose URLs contain "BL_" and "Detail", we can navigate directly to the product details page.
Take this webpage URL for example:
For such webpages, we can check this tutorial to create a task with a list of URLs to extract product details.
But for webpages whose URLs contain "Grp" and "Cls", we need to click through each link to enter the product details page.
Take this webpage URL for example:
For such webpages, we need to create a task with a list of URLs first. Then we click each link in the list and scrape data from the new pages.
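The split between the two tasks comes down to substrings in the collected URLs. As a sanity check outside Octoparse, you could sort the exported URL list with a small script like the one below; the sample URLs are made up, and only the "BL_"/"Detail" versus "Grp"/"Cls" patterns come from the tutorial.

```python
def classify(url: str) -> str:
    """Route a collected subpage URL to the right follow-up task.

    URLs containing "BL_" and "Detail" open the product details page
    directly, so they go to the URL-list task; URLs containing "Grp"
    and "Cls" need an extra click-through, so they go to the
    click-each-link task.
    """
    if "BL_" in url and "Detail" in url:
        return "direct-details task"
    if "Grp" in url and "Cls" in url:
        return "click-through task"
    return "unclassified"

# Hypothetical example URLs for illustration.
urls = [
    "https://www.uline.com/BL_1187/Detail",
    "https://www.uline.com/Grp_52/Cls_10",
]
for u in urls:
    print(u, "->", classify(u))
```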