Scrape posts from LinkedIn
In this tutorial, we will show you how to scrape posts from LinkedIn.com.
To follow along, you can use this URL in the tutorial:
https://www.linkedin.com/search/results/content/?keywords=octoparse&origin=SWITCH_SEARCH_VERTICAL
Here are the main steps in this tutorial: [Download task file here]
- "Go To Web Page" - open the target web page
- Dealing with infinite scrolling – get more data from the list page
- Create a "Loop Item" - loop extract each post
- Extract data – select the data you need to scrape
- Start data extraction – run your task and get data
1. "Go To Web Page" - open the target web page
- Click "+ Task" to start a new task with "Advanced Mode"
- Paste the URL into the "Input URL" box
- Click "Save URL" to move on
This website requires us to log in first, so we need to input our username and password to log in before accessing the data we want. Please check out the details in this tutorial: Extract Data behind a login.
Tips! Advanced Mode is a highly flexible and powerful web scraping mode. For people who want to scrape data from websites with complex structures, like Amazon.com, we strongly recommend using Advanced Mode to start your data extraction project.
2. Dealing with infinite scrolling
In this case, pagination is not available for loading more content, so we need to scroll down to the bottom of the page continuously to fully load all the content.
- Select "Scroll down to bottom of the page when finished loading" under "Advanced Options"
- Set "Scroll times" and "Internal" you need
- Select "Scroll down to bottom of the page" as "Scroll way"
- Click "OK" to save
Tips! 1. Make sure that you input "Scroll times"; otherwise, Octoparse will not perform the "scroll down" action. We suggest setting a relatively high value of "Scroll times" if you need more data. 2. Most social media websites use scroll-down-to-refresh to show more data. Click here to learn more about dealing with infinite scrolling.
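To see what the "Scroll times" and "Interval" settings do behind the scenes, here is a minimal Python sketch of the scroll loop. `FeedPage` is a made-up stand-in for an infinite-scroll page (it is not part of Octoparse or any real library); each scroll loads one more batch of posts until the feed is exhausted.

```python
import time

# Hypothetical stand-in for an infinite-scroll page: each scroll-to-bottom
# loads another batch of posts, up to a fixed total.
class FeedPage:
    def __init__(self, total_posts=50, batch=10):
        self.total_posts = total_posts
        self.batch = batch
        self.loaded = batch  # the first batch is loaded when the page opens

    def scroll_to_bottom(self):
        # Loading stops once every post has been fetched.
        self.loaded = min(self.loaded + self.batch, self.total_posts)

def load_with_scrolling(page, scroll_times, interval=0.01):
    """Scroll `scroll_times` times, pausing `interval` seconds between
    scrolls so newly requested content has time to load."""
    for _ in range(scroll_times):
        page.scroll_to_bottom()
        time.sleep(interval)
    return page.loaded

page = FeedPage(total_posts=50, batch=10)
print(load_with_scrolling(page, scroll_times=3))  # 10 + 3 * 10 = 40 posts
```

This also illustrates why a higher "Scroll times" yields more data: each extra scroll loads one more batch until the feed has nothing left to serve.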
3. Create a "Loop Item" - loop extract each post
- Scroll down and select the 1st post in the built-in browser
We need to make sure the whole block of the first post is covered in blue when you hover your mouse over it. Only then will the whole post block be highlighted in green after clicking, covering all the other information such as author, title, content, etc.
- Click the second whole post
Octoparse will automatically recognize the other similar blocks and highlight them in green
- Click " Extract text of the selected element " on the "Action Tips" panel
Tips! Normally we can just click "Select all sub-elements" on the "Action Tips" panel, but under certain circumstances (like this case), Octoparse fails to generate that option. Thus, we create the loop first and then manually select the data of each post to extract in the next step.
4. Extract data - select data you need to scrape
- Select the unwanted data fields
- Click the icon of "Delete Data Field"
- Click "Yes”
- Click the data you want to scrape in the 1st item block
- Select "Extract text of the selected element" on the "Action Tips" panel
- Rename the "Field name" column from predefined name list or inputting on your own, if necessary
Tips! How can we check if the XPath of the Loop Item is right? Octoparse automatically generates the XPath of the loop item. Since the layout of this web page is pretty simple, the XPath should be correct. Still, we can confirm this by scrolling down the page to load more content and then checking whether the number of items in the loop increases. As we can see, when we scroll down the page manually, the newly loaded posts are located successfully in the loop. If you want to learn more about XPath and how to generate it, here are some related tutorials you might need:
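The check described in the tip above can be sketched in a few lines of Python using the standard library's limited XPath support. The `//div[@class='post']` XPath and the markup are hypothetical examples, not LinkedIn's actual structure: if newly loaded blocks still match the loop's XPath, the item count grows, which is the signal that the XPath is right.

```python
import xml.etree.ElementTree as ET

def count_loop_items(html, xpath=".//div[@class='post']"):
    """Count how many elements the loop's XPath matches on the page."""
    root = ET.fromstring(html)
    return len(root.findall(xpath))

# Page before scrolling: two post blocks match the loop XPath.
before = "<body><div class='post'>a</div><div class='post'>b</div></body>"
# After scrolling, one more post block is appended with the same class,
# so the same XPath now matches three items.
after = ("<body><div class='post'>a</div><div class='post'>b</div>"
         "<div class='post'>c</div></body>")

print(count_loop_items(before), count_loop_items(after))  # 2 3
```

If the count stayed the same after scrolling, the loop's XPath would be matching only the initially loaded posts and would need revising.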
5. Start extraction – run your task and get data
- Click "Save"
- Click "Start Extraction" on the upper left side
- Select "Local Extraction" to run the task on your computer, or select "Cloud Extraction" to run the task in the Cloud (for premium users only)
For premium users, Cloud Extraction is highly recommended.
Below is a sample of the output:
Related Articles:
Scrape job data from Glassdoor
Scrape job information from Indeed
Scrape information from Craigslist
Author: Vanny
Editor: Fergus
Was this article helpful? Contact us any time if you need our help!