It can be frustrating when a task scrapes nothing after you have spent a long time building the workflow, only to end up with an error:
In this article, we are going to show you some tips to troubleshoot your task when it stops shortly after you run it.
*When the cloud extraction completes but no data is extracted, please refer to "Why does the task get no data in the Cloud but work well when running in the local?"
When the local extraction completes but no data is extracted, it may be due to the following reasons:
1) The webpage provided does not load completely, takes too long to open, or even times out.
1. Check your internet connection and make sure the webpage can be opened in a normal browser.
2. Increase the timeout for the “Go To Web Page” step to ensure the webpage loads completely before it moves on to the next step.
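Octoparse exposes the timeout as a setting on the "Go To Web Page" step, but the underlying idea can be sketched in plain Python: retry the page load with progressively longer timeouts. Here `fetch_page` is a hypothetical stand-in for the browser's page load, not part of any real API.

```python
# Sketch of "increase the timeout and retry", assuming a hypothetical
# fetch_page(url, timeout) that raises TimeoutError when the page does
# not finish loading in time.

def load_with_retries(fetch_page, url, timeouts=(10, 30, 60)):
    """Try loading the page with progressively longer timeouts."""
    for timeout in timeouts:
        try:
            return fetch_page(url, timeout)
        except TimeoutError:
            continue  # page took too long; retry with a longer timeout
    raise TimeoutError(f"{url} did not load within {timeouts[-1]}s")

# Fake fetcher simulating a slow page that only finishes loading
# when given at least 30 seconds.
def slow_fetch(url, timeout):
    if timeout < 30:
        raise TimeoutError
    return "<html>page content</html>"

print(load_with_retries(slow_fetch, "https://example.com"))
```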
2) The information to be scraped does not load as soon as the page opens.
1. Set a wait time for the action following "Go to Web Page", or set up "Wait until a designated element appears".
2. Add scrolling to the "Go to Web Page" step. Some information only loads when the page is scrolled down.
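The "Wait until a designated element appears" option is essentially a polling loop: check for the element, sleep briefly, and repeat until it shows up or a timeout expires. A minimal sketch in Python, where `element_present` is a hypothetical callable standing in for the check Octoparse performs against the live page:

```python
import time

def wait_for_element(element_present, timeout=10.0, interval=0.5):
    """Poll until element_present() returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if element_present():
            return True  # the designated element has appeared
        time.sleep(interval)
    return False  # timed out without seeing the element

# Simulate an element that only appears on the third check.
calls = {"n": 0}
def appears_on_third_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_element(appears_on_third_check, timeout=5, interval=0.01))  # True
```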
3) AJAX is not set up for the loop clicks or pagination clicks.
Some websites use the AJAX technique to load new content.
1. Try setting up "AJAX load" for "Click Item" or "Click to Paginate".
See more about the AJAX setting:
4) Elements for the “Loop Items” are not selected properly.
1. Go back to the workflow, click through the steps from the top down, and make sure the elements for the "Loop Item" are selected properly. If not (the Loop Item shows "Cannot find any element"), you will need to rebuild the workflow and make sure every step is set up correctly.
2. If rebuilding the "Loop Item" still does not work, you may need to modify the XPath for the "Loop Item" manually. See more about Customize element XPath.
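To see why a hand-written XPath can succeed where an auto-generated one fails, here is a small illustration using Python's standard library. The HTML snippet and class names are made up for the example; a relative XPath keyed on a stable attribute keeps matching even when surrounding page structure shifts, whereas an overly specific absolute path breaks.

```python
import xml.etree.ElementTree as ET

# Made-up page snippet: each product lives in a <div class="item">,
# alongside an ad block we do not want in the loop.
page = """
<html><body>
  <div class="item"><span class="name">Alpha</span></div>
  <div class="item"><span class="name">Beta</span></div>
  <div class="ad"><span class="name">Sponsored</span></div>
</body></html>
"""

root = ET.fromstring(page)

# ElementTree supports a limited XPath subset, including attribute
# predicates, which is enough to select only the loop items.
items = root.findall(".//div[@class='item']/span[@class='name']")
print([span.text for span in items])  # ['Alpha', 'Beta']
```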
Should you have any questions, feel free to leave us a message.