Allow spider attr #15

Open · wants to merge 7 commits into master · Changes from 1 commit
[middleware] allow enabling splash per spider
pawelmhm committed Apr 3, 2015
commit 39740cb3623ee2ec79c124c28bcb928d1e26cd28
19 changes: 15 additions & 4 deletions scrapyjs/middleware.py
@@ -32,6 +32,14 @@ def __init__(self, crawler, splash_base_url, slot_policy):
         self.splash_base_url = splash_base_url
         self.slot_policy = slot_policy

+    def get_splash_options(self, request, spider):
+        if request.meta.get("dont_proxy"):

After sleeping on it, I think it's a bad idea to reuse the 'dont_proxy' key here, because it would surprise developers who use the splash and crawlera middlewares together in one project. They would expect 'dont_proxy' to affect only the crawlera middleware, so they could enable crawlera for the whole spider (all requests) and, for some specific requests, disable crawlera with 'dont_proxy' while still adding 'splash' arguments.

I think explicit 'dont_splash' would be better here.

Author:

> I think explicit 'dont_splash' would be better here.

I'm not sure; it seems like this is mostly a question of terminology. To my ears, 'dont_proxy' sounds like a general signal ('don't use any kind of proxy mechanism for this request'), so I thought we could reuse it here. On the other hand, Splash is not used as a proxy here, so that may be confusing, and 'dont_splash' would be more specific and explicit. So maybe 'dont_splash' would be better.
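To make the alternative concrete, here is a minimal sketch of the lookup with an explicit 'dont_splash' opt-out key. This is the variant being discussed, not the PR code, and it is simplified to take the request's meta dict directly:

```python
def get_splash_options(request_meta, spider):
    # Sketch of the alternative discussed above: an explicit
    # 'dont_splash' opt-out instead of reusing 'dont_proxy'.
    if request_meta.get("dont_splash"):
        return None
    # Per-request options win over the spider-wide attribute.
    spider_options = getattr(spider, "splash", {})
    request_options = request_meta.get("splash")
    return request_options or spider_options
```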

+            return
+
+        spider_options = getattr(spider, "splash", {})
+        request_options = request.meta.get("splash")
+        return request_options or spider_options
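For reference, a hypothetical usage sketch of the feature this hunk adds: enabling Splash for a whole spider through a class attribute. The spider name and args are illustrative, and in a real project the class would subclass scrapy.Spider:

```python
class ExampleSpider:
    # In a real project this would subclass scrapy.Spider; the 'splash'
    # class attribute is what get_splash_options() reads via getattr().
    name = "example"
    splash = {"args": {"wait": 0.5}}  # applied to every request of this spider

# What the middleware sees for a request that carries no per-request options:
options = getattr(ExampleSpider(), "splash", {})
```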

     @classmethod
     def from_crawler(cls, crawler):
         splash_base_url = crawler.settings.get('SPLASH_URL', cls.default_splash_url)
@@ -43,24 +51,26 @@ def from_crawler(cls, crawler):
         return cls(crawler, splash_base_url, slot_policy)

     def process_request(self, request, spider):
-        splash_options = request.meta.get('splash')
+        splash_options = self.get_splash_options(request, spider)
         if not splash_options:
             return


@pawelmhm why did you change this? The key replacement looked good enough.

Author:

This is related to https://github.com/scrapinghub/scrapyjs/pull/15/files#r28138976 (avoiding generating multiple splash requests). If the splash options live in request meta under the 'splash' key, they can be deleted; when the request is processed by the middleware a second time, process_request will return None. But if the splash options are set per spider, you can't delete them, so the first check on line 55 will always pass when the options come from the spider. That means we need another check, so I redefined the _splash_processed key, changing its content from the dictionary of splash options to a Boolean flag. (edited, updated comment link)


You added the check if request.meta.get("_splash_processed"): return above; isn't that enough?

  1. request processed the first time: 'splash' is moved to '_splash_processed'
  2. response processed the first time: '_splash_processed' is treated as the dict of splash options
  3. request processed a second time (for any reason, e.g. a retry): '_splash_processed' is already in meta, so the request is ignored by the request.meta.get("_splash_processed") check

But after your explanation I think the way it's implemented in this PR is better, because someone might expect the 'splash' key to be present in response.meta, and with the current implementation it goes missing. @kmike are you okay with that?
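The loop-guard behavior under discussion can be sketched as follows, with names simplified from the middleware above (the 'wrapped-request' return value is a stand-in for the replaced Splash request):

```python
def process_request(meta, spider_splash_options):
    # With per-spider options the 'splash' key cannot be deleted from the
    # spider object, so a Boolean '_splash_processed' flag in meta marks
    # requests that were already wrapped.
    splash_options = meta.get("splash") or spider_splash_options
    if not splash_options:
        return None
    if meta.get("_splash_processed"):
        return None  # already wrapped; don't wrap it again
    meta["_splash_processed"] = True  # Boolean flag, not the options dict
    return "wrapped-request"
```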

Member:

It turns out I applied similar changes here: 95f0079.

Member:

But maybe your approach is better.

+        elif request.meta.get("_splash_processed"):
+            return


is this for retries?

Author:

This is because if you return a new request from a middleware, it goes through the whole middleware chain again, so we need to prevent an infinite loop.


got it.

Note: I think it would be a bit safer to use if here, not elif, since both statements (#15 (diff) and #15 (diff)) lead to a return. It wouldn't change the logic, but it would be safer for future code changes; who knows how this code will evolve.

Author:

sounds good


         if request.method != 'GET':
             log.msg("Currently only GET requests are supported by SplashMiddleware; %s "
                     "will be handled without Splash" % request, logging.WARNING)
             return request

         meta = request.meta
-        del meta['splash']
-        meta['_splash_processed'] = splash_options

         slot_policy = splash_options.get('slot_policy', self.slot_policy)
         self._set_download_slot(request, meta, slot_policy)

         args = splash_options.setdefault('args', {})
-        args.setdefault('url', request.url)
+        args['url'] = request.url


@kmike are you okay with this change? What was the use case for .setdefault() here? Does it make sense to create a request with one URL and specify another URL in meta?

Member:

Maybe I was thinking of allowing the user not to use the 'url' argument at all. This argument is not required for Splash scripts; e.g. you can render a chunk of HTML using splash:set_content. But the meta syntax doesn't make much sense in this case; something along the lines of #12 could be a better fit.
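As an illustration of the no-'url' case mentioned above, here is a hedged sketch of Splash args that carry only a Lua script rendering inline HTML via splash:set_content. The script body and the args dict are illustrative, not from the PR:

```python
# Illustrative Lua script: renders a chunk of HTML without fetching any URL.
lua_source = """
function main(splash)
    splash:set_content("<html><body><h1>hello</h1></body></html>")
    return splash:html()
end
"""

# Hypothetical args for Splash's script endpoint: note there is no 'url' key.
splash_args = {"lua_source": lua_source}
```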


         body = json.dumps(args, ensure_ascii=False)

         if 'timeout' in args:
@@ -86,6 +96,7 @@ def process_request(self, request, spider):
         endpoint = splash_options.setdefault('endpoint', self.default_endpoint)
         splash_base_url = splash_options.get('splash_url', self.splash_base_url)
         splash_url = urljoin(splash_base_url, endpoint)
+        meta['_splash_processed'] = True

         req_rep = request.replace(
             url=splash_url,