Why am I getting blank fields from cloud extractions?
In some cases, local extraction works perfectly, but the same task returns blank fields in Cloud Extraction. This can happen in the following situations:
1. The task run with cloud extraction is splittable and works so fast that some elements are skipped.
Tasks using the "Fixed List", "List of URLs", or "Text List" loop mode are splittable. The main task is split into sub-tasks that are executed on multiple cloud servers simultaneously. Each step therefore runs very quickly, so some pages may not be fully loaded before the task moves on to the next step.
2. The target website is multi-regional.
A multi-regional website may serve different page structures or content to visitors from different countries. When a task is set to run in the cloud, it is executed with our IP addresses based in the United States. For tasks targeting websites intended for visitors outside the United States, some data may therefore be skipped because it cannot be found on the version of the website opened in the cloud.
3. Both situations 1 and 2 apply to the task.
Here are some common solutions for dealing with blank fields in cloud extraction.
1) To ensure the web page is loaded completely in the cloud, you can try the following:
1. Increase the timeout for the "Go To Web Page" step
2. Set up "Wait before action"
Every step in the workflow can be configured with a wait time. We suggest setting a wait time for the Extract Data action.
3. Set up an anchor element to find before the action runs
This setting guarantees that extraction starts only after a certain element has been found. You can use the XPath of any element among the fields you want to extract (a conceptual code sketch of this wait-for-element idea follows after the XPath tip below).
First, click the "Extract Data" step. Second, fill in the element's XPath and set "Wait before action" to "30s".
How to get the XPath of a certain element on the page?
- Click the "Extract Data"
- Switch to the vertical view, you will see all the relative Xpaths for each field
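Outside Octoparse, the same "wait for an anchor element before extracting" idea can be expressed with a browser-automation library. The sketch below uses Selenium in Python purely for illustration; the URL, XPath, and timeout values are hypothetical placeholders, not settings from this article.

```python
# Conceptual sketch (not Octoparse internals): wait for an anchor element
# before extracting, so a slow-loading page does not yield blank fields.
# Assumptions: Selenium and a Chrome driver are installed; the URL and
# XPath below are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.set_page_load_timeout(60)          # ~ "increase timeout for Go To Web Page"
    driver.get("https://example.com/products")

    # ~ "anchor element to find before action": block until one of the
    # fields we plan to extract is actually present, up to 30 seconds.
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.XPATH, "//div[@class='price']"))
    )

    # Only now read the fields; they should no longer come back blank.
    prices = [e.text for e in driver.find_elements(By.XPATH, "//div[@class='price']")]
    print(prices)
finally:
    driver.quit()
```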
2) To identify whether the website is multi-regional, you can
1. Test the task with local extraction. If no data is missing locally while it is missing in cloud extraction, the website is most likely multi-regional. In this case, since the targeted content can only be found when the website is opened with your own IP, we suggest using Local Extraction to get the data instead.
2. Extract the outer HTML of the whole page. By checking the extracted HTML, you can find out what caused the data to go missing, for example a prompt in the source code such as "Access denied" (a small sketch for scanning the saved HTML follows below).
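If you save the extracted outer HTML to a file, a quick scan for common blocking messages can confirm whether the cloud IP was rejected. The snippet below is a minimal sketch assuming the HTML was exported to a local file named page.html; the file name and phrase list are illustrative assumptions.

```python
# Minimal sketch: scan exported outer HTML for signs that the cloud IP
# was blocked or redirected. The file name and phrases are assumptions.
from pathlib import Path

html = Path("page.html").read_text(encoding="utf-8", errors="ignore").lower()

blocking_phrases = ["access denied", "captcha", "not available in your region", "403 forbidden"]
hits = [p for p in blocking_phrases if p in html]

if hits:
    print("Possible blocking detected:", hits)
else:
    print("No obvious blocking message found; the page may simply differ by region.")
```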
Here is a related tutorial for checking errors in the Cloud: Why does the task get no data in the Cloud but work well when running in the local?
If you still cannot figure out what is happening with your task, feel free to leave us a message.
Author: Joy
Editor: Yina