You are browsing a tutorial guide for the latest Octoparse version. If you are running an older version of Octoparse, we strongly recommend you upgrade, as the latest version is faster, easier, and more robust! Download and upgrade here if you haven't already done so!
Walmart is a large retail corporation in the United States. In this tutorial, we are going to show you how to scrape product data from Walmart.com.
You can also go to "Task Templates" on the main screen of the Octoparse scraping tool and start with the ready-to-use Walmart template directly to save time. With this feature, there is no need to configure scraping tasks. For further details, you may check it out here: Task Templates
If you would like to know how to build the task from scratch, you may continue reading the following tutorial.
Suppose we want to scrape specific information about headphones. We can start from the home page (https://www.walmart.com/) to create our crawler. We will scrape data such as the product title, price, product ID, and reviews from the product details page with Octoparse.
Here are the main steps in this tutorial: [Download demo task file here]
1. Open the target web page
Enter the URL on the home page and click Start
Click the search box and then click Enter text on the Tips panel
Type "Headphone" and confirm
Click on the Enter Text action and set it to hit the Enter/Return key after typing, then click "Apply" to confirm
2. Create a Pagination - to scrape from multiple pages
Click on the Next Page button, select Loop click single element, and set the AJAX timeout to 10s
The auto-generated XPath for Pagination does not always work in this case, so we need to modify the XPath to make it scrape all the pages.
Click on Pagination
Input the XPath //a[@aria-label="Next Page"] in the Matching XPath box
Click Apply to confirm
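If you want to sanity-check this XPath outside Octoparse, here is a minimal sketch using Python's lxml library. The HTML snippet below is a simplified, hypothetical stand-in for Walmart's pagination markup (the real page is far more complex); it only illustrates that the XPath keys on the aria-label attribute of the link:

```python
from lxml import html

# Hypothetical pagination markup -- not Walmart's actual HTML.
# The aria-label attribute is what the XPath matches on.
snippet = """
<nav>
  <a aria-label="Previous Page" href="/search?page=1">Prev</a>
  <a aria-label="Next Page" href="/search?page=3">Next</a>
</nav>
"""

tree = html.fromstring(snippet)
# Same XPath as entered in the Matching XPath box
next_links = tree.xpath('//a[@aria-label="Next Page"]')
print(len(next_links))            # 1 -- only the "Next Page" link matches
print(next_links[0].get("href"))  # /search?page=3
```

Matching on a stable attribute like aria-label is generally more robust than a positional, auto-generated XPath, which is why the tutorial swaps it in here.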
3. Scrape data from the product list
Select the first product (be sure to include the whole product section)
Choose Select all sub-elements
Choose Select all
Choose Extract Data
Now, a Loop Item with Extract Data will be created in the workflow
Double click the field name to rename it or click ... to delete unwanted fields
If all the data you want can be scraped from the listing page, you can jump to step 6. Run extraction - run your task and get data
4. Click into each product link to scrape data - to get data from product pages
Some information like product descriptions can only be grabbed from the product detail page. We need to click on each product link to get the data.
Click on the first product link
Choose Click URL
A click item will be created in the workflow:
5. Extract data from the detail page
Select the data you want
Click Extract the text of the element or Extract the URL of the selected image
Double click the field name to rename it or click ... to delete fields
Set up wait time for Extract Data action
The auto-generated XPaths of the data fields may stop working after the web page updates. We will need to modify the XPaths of the fields. In this case, we have prepared some useful XPaths for this website.
Switch Data Preview to Vertical View
Double click on the XPath to modify it
Replace the XPaths with the ones below
Product name: //h1
Product details: //h2[text()='Product details']/../following-sibling::div
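As a quick way to verify what these two XPaths select, the sketch below runs them with Python's lxml against a made-up detail-page fragment. The element contents are assumptions for illustration; only the structure the XPaths rely on (an h1 title, and a "Product details" h2 whose parent element is followed by a sibling div) mirrors the tutorial:

```python
from lxml import html

# Made-up product detail markup -- not Walmart's actual HTML.
snippet = """
<main>
  <h1>Acme Wireless Headphone</h1>
  <section>
    <h2>Product details</h2>
  </section>
  <div>Bluetooth 5.0, 40-hour battery life</div>
</main>
"""

tree = html.fromstring(snippet)
# Product name: the page's single <h1>
name = tree.xpath("//h1/text()")[0]
# Product details: step up from the "Product details" heading to its
# parent, then take the parent's following sibling <div>
details = tree.xpath(
    "//h2[text()='Product details']/../following-sibling::div/text()"
)
print(name)        # Acme Wireless Headphone
print(details[0])  # Bluetooth 5.0, 40-hour battery life
```

The second XPath anchors on the visible heading text rather than on class names, which tends to survive cosmetic redesigns of the page better.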
6. Run extraction - run your task and get data
Click Run on the upper left side
Select Run task on your device to run the task on your computer
Note: Walmart tasks cannot be run in the Cloud due to CAPTCHA issues. You can only run it on your device for now.
Here is the sample output.