Requires Python 3.6+
A simple, QtWebEngine-powered web browser with built-in functionality for testing scrapy spider code.
Also includes an addon to enable a GUI for use with the scrapy shell.
You can install the package from PyPI using `pip install scrapy_gui`.
Then you can import it into a shell using `import scrapy_gui`.
The standalone UI can be opened by calling `scrapy_gui.open_browser()` from a Python shell. It consists of a web browser and a set of tools for analysing its contents.
Enter any URL into the search bar and hit return or press the Go button. When the loading animation finishes, the page is ready to parse in the Tools tab.
The Tools tab contains various sections for parsing the content of the page. The purpose of this tab is to make it easy to test queries and code for use in a scrapy spider.
NOTE: This uses the initial HTML response. If additional requests, JavaScript, etc. alter the page later, those changes are not taken into account.
It loads the initial HTML with an additional request using the `requests` package. When you run a query, it creates a selector object using `Selector` from the parsel package.
The query box lets you use parsel-compatible CSS and XPath queries to extract data from the page. It returns results as though `selection.css/xpath('YOUR QUERY').getall()` was called.
If there are no results, or there is an error in the query, a dialog will pop up informing you of the issue.
This box lets you add a regular expression pattern that is applied in addition to the query above.
It returns results as though `selection.css/xpath('YOUR QUERY').re(r'YOUR REGEX')` was called. This means that if you use groups, only the content within the parentheses is returned.
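The group behaviour can be illustrated with Python's own `re` module, whose `findall`-style semantics mirror how `.re()` treats groups. The sample text below is invented for the example:

```python
import re

text = "Price: 10 USD, 20 EUR"

# Without a group, the full match is returned:
full = re.findall(r"\d+ [A-Z]{3}", text)       # ['10 USD', '20 EUR']

# With a group, only the parenthesised part comes back:
grouped = re.findall(r"(\d+) [A-Z]{3}", text)  # ['10', '20']
```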
This box lets you define additional Python code to run on the results of your query and regex. The code can be as long and complex as you want, including additional functions, classes, imports, etc. The only requirement is that it must include a function called `user_fun(results, selector)` that returns a list.
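For instance, a function box entry might clean up the extracted strings. The helper below is a hypothetical example of such code; the `selector` argument is unused here but required by the signature:

```python
def user_fun(results, selector):
    # Strip whitespace, drop empty strings, and deduplicate
    # while preserving the original order of the results.
    seen = set()
    cleaned = []
    for item in results:
        text = item.strip()
        if text and text not in seen:
            seen.add(text)
            cleaned.append(text)
    return cleaned  # must return a list
```

Running the query would then show the cleaned list in the results table.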
This table lists all the results, passed through the regex and function if they are defined.
This tab contains the HTML source used in the Tools tab. You can use the text box to search for specific content. Searches are case-insensitive.
This is just a plain text box. Its content is not saved when you exit the app.
It is possible to integrate this tool with the scrapy shell. This lets you use it on responses that have passed through your middlewares, and gives you access to more complex requests and more specific selectors.
To use it in your shell, import the `load_selector` function using `from scrapy_gui import load_selector`. Then you can call `load_selector(YOUR_SELECTOR)` to open a window with your selector loaded into it.
For example, `load_selector(response)` will load your response into the UI.
When you run the code, a window named Scrapy GUI will open, containing the Tools, Source, and Notes tabs from the standalone window described above.