Allow spider attr #15

Open · wants to merge 7 commits into base: master
31 changes: 31 additions & 0 deletions example/scrashtest/spiders/dmoz_two.py
@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
from urlparse import urljoin
import json

import scrapy
from scrapy.contrib.linkextractors import LinkExtractor


class DmozSpider(scrapy.Spider):
name = "js_spider"
    start_urls = ['http://www.isjavascriptenabled.com/']
Member

-1 to adding tests which fetch remote URLs

Author

Well, it's not really a test, just an extra spider alongside the existing dmoz spider. I used it for development; it can be removed, no problem.

splash = {'args': {'har': 1, 'html': 1}}

def parse(self, response):
is_js = response.xpath("//h1/text()").extract()
if "".join(is_js).lower() == "yes":
self.log("JS enabled!")
else:
self.log("Error! JS disabled!", scrapy.log.ERROR)
le = LinkExtractor()

for link in le.extract_links(response):
url = urljoin(response.url, link.url)
yield scrapy.Request(url, self.parse_link)
break

def parse_link(self, response):
title = response.xpath("//title").extract()
yes = response.xpath("//h1").extract()
self.log("response is: {}".format(repr(response)))
self.log(u"Html in response contains {} {}".format("".join(title), "".join(yes)))
37 changes: 30 additions & 7 deletions scrapyjs/middleware.py
@@ -6,6 +6,7 @@
from scrapy.exceptions import NotConfigured

from scrapy import log
from scrapy.http.response.html import HtmlResponse


looks like unused import

Author

good point, this is fixed now

from scrapy.http.headers import Headers


@@ -32,6 +33,14 @@ def __init__(self, crawler, splash_base_url, slot_policy):
self.splash_base_url = splash_base_url
self.slot_policy = slot_policy

def get_splash_options(self, request, spider):
if request.meta.get("dont_proxy"):


After sleeping on it, I think it's a bad idea to reuse the 'dont_proxy' key here, because it's unexpected for developers using the splash and crawlera middlewares together in a project - they would expect it to work only with the crawlera mw. They might enable crawlera for the whole spider (all requests) and then, for some specific requests, want to disable crawlera using 'dont_proxy' while adding 'splash' arguments.

I think an explicit 'dont_splash' would be better here.

Author

> I think an explicit 'dont_splash' would be better here.

I'm not sure. It seems like a question of terminology: to my ears "dont_proxy" sounds like a general signal - 'don't use any kind of proxy mechanism for this request' - so I thought we could reuse it here. On the other hand, Splash is not used as a proxy here, so that may be confusing, and 'dont_splash' would be more specific and explicit. So maybe 'dont_splash' would be better.

return

spider_options = getattr(spider, "splash", {})
request_options = request.meta.get("splash")
return request_options or spider_options
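The precedence implemented above (per-request meta first, then the spider-level attribute, with an opt-out key short-circuiting both) can be sketched without Scrapy. `FakeRequest` and `FakeSpider` are stand-in objects for illustration only; the resolution logic mirrors `get_splash_options` from the diff:

```python
# Sketch of the get_splash_options() precedence, using plain stand-in
# objects (FakeRequest / FakeSpider are hypothetical, for illustration).

class FakeRequest(object):
    def __init__(self, meta=None):
        self.meta = meta or {}

class FakeSpider(object):
    splash = {"args": {"html": 1}}  # spider-level default options

def get_splash_options(request, spider):
    if request.meta.get("dont_proxy"):  # the opt-out key discussed above
        return None
    spider_options = getattr(spider, "splash", {})
    request_options = request.meta.get("splash")
    # per-request options win over the spider attribute
    return request_options or spider_options

spider = FakeSpider()
# no per-request options: the spider attribute applies
assert get_splash_options(FakeRequest(), spider) == {"args": {"html": 1}}
# per-request options override the spider attribute
assert get_splash_options(
    FakeRequest({"splash": {"args": {"har": 1}}}), spider
) == {"args": {"har": 1}}
# opt-out key disables Splash entirely for this request
assert get_splash_options(FakeRequest({"dont_proxy": True}), spider) is None
```

Note that an empty per-request dict would also fall through to the spider options because of the `or`; that edge case is not addressed in the PR.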

@classmethod
def from_crawler(cls, crawler):
splash_base_url = crawler.settings.get('SPLASH_URL', cls.default_splash_url)
@@ -43,24 +52,26 @@ def from_crawler(cls, crawler):
return cls(crawler, splash_base_url, slot_policy)

def process_request(self, request, spider):
splash_options = request.meta.get('splash')
splash_options = self.get_splash_options(request, spider)
if not splash_options:
return


@pawelmhm why did you change this? The key replacement looked good enough.

Author

This is related to https://github.com/scrapinghub/scrapyjs/pull/15/files#r28138976 (avoiding generating multiple splash requests). If the splash options are in the request meta under the 'splash' key, they can be deleted; when the request is processed by the middleware a second time, process_request will return None. But if the splash options are set per spider, you can't delete them, which means the first check on line 55 will always return True when the options come from the spider. So we need another check, and I redefined the _splash_processed key, changing its content from a dictionary of splash options to a Boolean flag. (edited, updated comment link)


you added the check if request.meta.get("_splash_processed"): return above - isn't that enough?

  1. request processed first time, 'splash' moved to '_splash_processed'
  2. response processed first time - treat '_splash_processed' as dict of splash options
  3. request processed second time (any reason like retry) - '_splash_processed' is already in meta so request will be ignored because of the check request.meta.get("_splash_processed")

but after your explanation I think the way it's implemented in this PR is better, because someone might expect to have the 'splash' key in response.meta, and with the current implementation it would be missing. @kmike are you okay with that?

Member

It turns out I applied similar changes here: 95f0079.

Member

But maybe your approach is better


elif request.meta.get("_splash_processed"):
return


is this for retries?

Author

this is because if you return a new request from a middleware it goes through the whole middleware chain again, so we need to prevent an infinite loop from occurring


got it.

note - I think it would be a bit safer to use if here, not elif, as both statements (#15 (diff) and #15 (diff)) lead to a return - it wouldn't change the logic but would be safer for future code changes - who knows how this code will change in the future

Author

sounds good
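The loop-prevention idea discussed above can be sketched without Scrapy: a downloader middleware that returns a new request sends that request back through the middleware chain, so a marker in meta must short-circuit the second pass. `Req`, `process_request`, and `run_chain` below are simplified stand-ins for illustration, not the PR's actual code:

```python
# Sketch: why the `_splash_processed` flag is needed. Returning a new
# request from process_request() re-enters the middleware chain, so
# without a marker the rewrite would repeat forever.

class Req(object):  # minimal stand-in for scrapy.Request
    def __init__(self, url, meta=None):
        self.url = url
        self.meta = meta or {}

def process_request(request):
    if request.meta.get("_splash_processed"):
        return None  # already rewritten; let it pass through unchanged
    meta = dict(request.meta)
    meta["_splash_processed"] = True
    # rewrite the request to point at the Splash endpoint (URL illustrative)
    return Req("http://splash.example/render.json", meta)

def run_chain(request, max_hops=10):
    """Simulate the chain re-entry triggered by returning a new request."""
    hops = 0
    while hops < max_hops:
        new = process_request(request)
        if new is None:
            return request, hops
        request = new
        hops += 1
    raise RuntimeError("infinite loop")

final, hops = run_chain(Req("http://example.com/"))
assert hops == 1  # rewritten exactly once, then passed through
assert final.meta["_splash_processed"] is True
```

Removing the flag check turns `run_chain` into the infinite loop the comment describes, which is exactly what the early-return guard prevents.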


if request.method != 'GET':
log.msg("Currently only GET requests are supported by SplashMiddleware; %s "
"will be handled without Splash" % request, logging.WARNING)
return request

meta = request.meta
del meta['splash']
meta['_splash_processed'] = splash_options

slot_policy = splash_options.get('slot_policy', self.slot_policy)
self._set_download_slot(request, meta, slot_policy)

args = splash_options.setdefault('args', {})
args.setdefault('url', request.url)
args['url'] = request.url


@kmike are you okay with this change? What was the use case of .setdefault() here? Does it make sense to create a request with one URL and specify another URL in meta?

Member

Maybe I was thinking of allowing the user not to pass a 'url' argument at all. This argument is not required for Splash scripts - e.g. you can render a chunk of HTML using splash:set_content. But the meta syntax doesn't make much sense in this case; something along the lines of #12 could be a better fit.


body = json.dumps(args, ensure_ascii=False)

if 'timeout' in args:
@@ -86,6 +97,7 @@ def process_request(self, request, spider):
endpoint = splash_options.setdefault('endpoint', self.default_endpoint)
splash_base_url = splash_options.get('splash_url', self.splash_base_url)
splash_url = urljoin(splash_base_url, endpoint)
meta['_splash_processed'] = True

req_rep = request.replace(
url=splash_url,
@@ -96,20 +108,31 @@
# are not respected.
headers=Headers({'Content-Type': 'application/json'}),
)

self.crawler.stats.inc_value('splash/%s/request_count' % endpoint)
return req_rep

def process_response(self, request, response, spider):
splash_options = request.meta.get("_splash_processed")
splash_options = self.get_splash_options(request, spider)
if splash_options:
endpoint = splash_options['endpoint']
self.crawler.stats.inc_value(
'splash/%s/response_count/%s' % (endpoint, response.status)
)

response = self.html_response(response, request)
Member

I think we shouldn't do it. #12 is a better way; basic splash requests should be as barebone as possible.


I also think that by default the response should be returned without changes, but an option to enable HTML responses and use scrapyjs transparently - i.e. getting the rendered HTML without handling it in the callback - would be very convenient: you could use response.css(), response.xpath() and response.body right away, just as if the request had been made to a plain HTML page.

I don't know if you have already spent time working on #12, and the description doesn't provide much information. How would it work? You say it would be like FormRequest, but that has nothing to do with responses. Can you share your thoughts? One idea that came to mind is SplashRequest and SplashHtmlRequest classes, where SplashRequest returns the plain response from Splash and SplashHtmlRequest returns the rendered HTML along with the headers and cookies returned by the target site. But I'm not sure if my understanding is correct.

If you insist, I think this part can be removed; we can override scrapyjs and add this inside the project. @pawelmhm?

Author

> I also think that by default response should be returned without changes, but if we will have option to enable html response and use scrapyjs transparently i.e. getting rendered html without handling this in callback

sure, we can make it optional and conditional on the existence of some key, either in meta or in the spider attributes. What do you think @kmike?

IMO rendering HTML will be a very common thing for people to do, so without handling it in the base middleware most users would have to write code to generate an HTML response themselves.

return response

def html_response(self, response, request):
"""Give the user the nice HTML response they probably expect."""
data = json.loads(response.body)
html = data.get("html")


maybe html = data.pop('html', None)? Is there any reason to keep html in meta in this case?

also, it might be a good idea to have an option in 'splash': {} to enable/disable this behavior, like 'splash': {'html_response': True}, or 'transparent', or any better name.

Author

yes, having an option to enable/disable this behavior sounds like a good idea

> Is there any reason to keep html in meta in this case?

I think not.

> maybe html = data.pop('html', None)?

yes sounds good

if not html:
return response

return HtmlResponse(data["url"], body=html, encoding='utf8',
status=response.status, request=request)
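The opt-in behaviour discussed in the thread - a flag to enable the HTML unwrapping, plus `data.pop('html', None)` - could look roughly like this. The `html_response` flag name follows the reviewer's suggestion and is not part of the code in this PR; `maybe_html_response` is an illustrative helper, not the middleware's actual method:

```python
import json

def maybe_html_response(splash_options, body):
    """Sketch: only unwrap the rendered HTML when the user opted in via
    an 'html_response' flag in the splash options (flag name as suggested
    in the review thread; illustrative only)."""
    if not splash_options.get("html_response"):
        return None  # default: leave the raw Splash JSON response alone
    data = json.loads(body)
    # pop rather than get: no reason to keep the html in the dict
    html = data.pop("html", None)
    return html

body = json.dumps({"url": "http://example.com/", "html": "<h1>YES</h1>"})
assert maybe_html_response({}, body) is None            # opt-out by default
assert maybe_html_response({"html_response": True}, body) == "<h1>YES</h1>"
```

In the real middleware the returned HTML would then be wrapped in an HtmlResponse, as `html_response` in the diff already does.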

def _set_download_slot(self, request, meta, slot_policy):
if slot_policy == SlotPolicy.PER_DOMAIN:
# Use the same download slot to (sort of) respect download