
Consider support for an optional 'fallback query'. #51840

Open
jtibshirani opened this issue Feb 4, 2020 · 18 comments
Labels
>enhancement :Search/Search Search-related issues that do not fall into other categories Team:Search Meta label for search team team-discuss

Comments

@jtibshirani
Contributor

jtibshirani commented Feb 4, 2020

In some cases, it can be helpful to fall back to a looser or 'expanded' query when the original query matches too few results. This strategy avoids showing a search page with no or very few results.

While there are already ways that users can incorporate the results of an expanded query, we could consider adding direct support:

{
  "query": {
    "match" : {
      "product_description" : {
        "query" : "hiking pants"
      }
    }
  },
  "fallback_query": {
    "match" : {
      "product_description" : {
        "query" : "outdoor clothing"
      }
    }
  }
}

The fallback_query would only be run if the original query returned zero (or too few) results. The hits from fallback_query would always be listed after those of the original query.

The advantages over issuing a single query that contains both the original and the fallback (with the fallback boosted less highly than the original):

  • The results from the expanded query may be quite noisy, and this way we only include them if the original query had no (or very few) results. This gives a low-risk way to incorporate query expansion.
  • If the fallback is expensive, we could avoid running it altogether if the original query is successful. This may be more of an issue if we were to introduce an advanced query expansion strategy based on a statistical model.

The advantages over issuing multiple search requests:

  • It is more convenient to configure, not requiring custom application logic.
  • We could nicely handle the case where query only returns a small number of results. In this case the fallback results would always appear after the original small result set.
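The proposed semantics could be sketched client-side like this. This is purely illustrative: `search` stands in for any function that takes a query and returns hits, and none of the names below come from the Elasticsearch API.

```python
def search_with_fallback(search, query, fallback_query, min_hits=1):
    """Run `query`; if it returns fewer than `min_hits` hits, run
    `fallback_query` and append its hits after the original ones."""
    hits = search(query)
    if len(hits) >= min_hits:
        return hits  # original query was good enough; fallback never runs
    return hits + search(fallback_query)
```

Note that the fallback is only executed when needed, which is the main cost saving the proposal is after.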

Although the idea of a fallback query has come up in discuss forums and our own conversations around query expansion, I'm really not convinced that it would provide enough value over the alternative of issuing multiple search requests. We already closed the similar issue #6491 for example. I'm raising this issue to document our thinking and gather feedback from others.

@jtibshirani jtibshirani added >feature :Search/Search Search-related issues that do not fall into other categories labels Feb 4, 2020
@elasticmachine
Collaborator

Pinging @elastic/es-search (:Search/Search)

@cbuescher
Member

Some questions around this:

  • I assume that in order to make the integration of this into the "_search" endpoint seamless, the results from the fallback_query would need to be seamlessly integrated into the SearchHits on the ES side, meaning they are part of the same results iterator? I think it would be important to be able to differentiate whether results come from the original query or the fallback in that case.
  • Also, what if there are overlaps between the original query results and the fallback results? E.g. if 20 results are requested and the original query only returns 6, those 6 could also turn up in the fallback results. This would require some de-duplication logic on top of retrieval. What about paging in these cases?
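One way the de-duplication concern above could be handled, sketched client-side. Hits are modeled as dicts carrying an Elasticsearch-style `_id` field; the function name and shape are invented for illustration.

```python
def merge_hits(original_hits, fallback_hits, size):
    """Append fallback hits after the original ones, dropping any
    fallback hit whose _id already appeared, and trim to `size`."""
    seen = {h["_id"] for h in original_hits}
    merged = list(original_hits)
    for hit in fallback_hits:
        if hit["_id"] not in seen:
            seen.add(hit["_id"])
            merged.append(hit)
    return merged[:size]
```

Paging remains the harder problem: a from/size offset would need to know how many de-duplicated hits each query contributes.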

@markharwood
Contributor

markharwood commented Feb 4, 2020

Presumably this rewrite happens on the coordinating node once all results are returned, rather than at the shard level as part of the initial search.

This would require some de-duplication logic on top of retrieval. What about paging in these cases?

I assumed the rewrite would essentially re-run the query as effectively [tight] OR [loose], meaning the docs that match both query branches would score higher.

@jtibshirani
Contributor Author

jtibshirani commented Feb 4, 2020

I tried to clarify the example + description. As @cbuescher mentions, the idea is that the hits from fallback_query would always be listed after those of the original query. I agree that de-duplication and paging would be important concerns -- I wonder if we could take inspiration from @markharwood's approach to pinned queries (#44074)?

@mayya-sharipova
Contributor

The hits from fallback_query would always be listed after those of the original query.

I am wondering how we can ensure this, as scores from fallback_query could potentially be higher than scores from the original query. Or are we confident the scores from fallback_query are always lower?


Potentially something relevant: we plan to implement a compound query that allows combining scores from different queries. One of the combining strategies is first, where a document gets its score from the first matching query. If we design queries in a way that ensures the first query always produces higher scores than the second, this would be a way to implement a fallback strategy as well. The downside is that we always execute all queries (at least for the matching part).

@jtibshirani
Contributor Author

jtibshirani commented Feb 5, 2020

I am wondering how we can ensure this, as scores from fallback_query could potentially be higher than scores from the original query?

I'm not suggesting that the original scores from fallback_query would always be lower; we would need special logic to place the hits from fallback_query after those of query (perhaps through score manipulation, as we do for pinned queries?).

This part of the feature description may prove controversial or not be worth the complexity. We could certainly consider a simpler alternative like 'only run the fallback if there are too few results, and just return hits from the fallback query'. (However, this alternative gives less of an advantage over just running multiple search requests.)
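The score-manipulation idea could look roughly like this: shift fallback scores so the best fallback hit still ranks below the worst original hit, loosely inspired by how pinned queries force an ordering. All names here are invented for the sketch.

```python
def rescore_fallback(original_hits, fallback_hits):
    """Return fallback hits with scores offset below every original score."""
    if not original_hits or not fallback_hits:
        return fallback_hits
    floor = min(h["_score"] for h in original_hits)
    ceiling = max(h["_score"] for h in fallback_hits)
    offset = ceiling - floor + 1.0  # guarantees every fallback score < floor
    return [{**h, "_score": h["_score"] - offset} for h in fallback_hits]
```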

@srchulo

srchulo commented Feb 27, 2020

I think rather than one fallback query, it would be nice to be able to have an array of fallback queries. I think if there was only one, people would still end up issuing their own fallback queries if the first two didn't return enough results.

If fallback_query did take an array, another thought is that query could support an array of queries, although maybe this would be too fundamental of a change or confusing.
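The array-of-fallbacks idea amounts to trying queries in order, strictest first, until enough hits accumulate. A minimal client-side sketch (hypothetical names; `search` is any query-to-hits function):

```python
def search_with_fallbacks(search, queries, min_hits=1):
    """Try each query in turn, collecting hits, until `min_hits` are found."""
    hits = []
    for query in queries:
        hits += search(query)
        if len(hits) >= min_hits:
            break  # later, sloppier queries are never executed
    return hits
```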

@rjernst rjernst added the Team:Search Meta label for search team label May 4, 2020
@telendt
Contributor

telendt commented May 12, 2020

I accidentally found this issue when searching for "elasticsearch fallback query", as I just implemented this in my application (well, still a prototype) and thought about putting my 2 cents here.

My use-case is similar to what @jtibshirani described - I trigger the second, "loosened" fallback query when the initial query brings no results. I could run the fallback query to "backfill" the results from the initial query if I get fewer results than requested, but:

  • most of the time, those few results contain exactly what the user wanted (very specific query)
  • the "loosened" fallback query is an order of magnitude slower (larger candidate set) and I would rather not trade low latency for more results (I'm building a search-as-you-type system).

That said, I can imagine some people wanting to run the fallback query (or multiple queries) if they get fewer results than requested.

Speaking of why it might be beneficial to add this functionality to Elasticsearch:

  • triggering a sequence of searches from the app to ES might add some additional round-trip latency (this is a weak argument IMO)
  • more importantly, there are numerous ranking evaluation tools (like your Ranking Evaluation API or Rated Ranking Evaluator) that play nice with query templates but don't play nice with search systems that combine/rearrange search results they get from ES (and not everyone has the capacity to build their own ranking evaluation system from scratch).

There are probably many cons of adding this functionality to ES that you are more aware of than I am, but I thought it might be worth sharing the pros with you.

Speaking of possible implementations, here are a few ideas:

  • ideally there is a sequence of fallback queries (I agree with @srchulo on that)
  • there is a way to define a condition under which fallback queries are triggered (e.g. fallback_hits_lt: 10 would trigger fallback queries in sequence, one after another, until the user gets 10 results or more)
  • each query would get a sequence number - e.g. the initial query would get query_seq=1, the first fallback would get query_seq=2, the second fallback 3, and so on
  • each fallback query could be paired with a rule to exclude previously collected results (must_not with terms on a selected field)
  • sorting would happen on [query_seq ASC, secondary_sort] (where the secondary sort could be _score DESC)
  • each result would contain these sort values (so that we know we could skip execution of the initial query and first fallback if the first value of search_after is 3)
  • from/size based pagination is a bit more tricky, but could be used if hits.total mentioned how many hits each query brings
  • excluded docs could be passed along by the user when paginating (this would work for small fallback_hits_lt values)
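The [query_seq ASC, _score DESC] sort described above can be sketched for hits that record which query produced them (field names invented here):

```python
def sort_hits(hits):
    """Order hits by the query that found them first, then by score."""
    return sorted(hits, key=lambda h: (h["query_seq"], -h["_score"]))
```

Because query_seq is the primary key, even a very high-scoring fallback hit can never outrank a hit from the initial query.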

@telendt
Contributor

telendt commented May 12, 2020

@mayya-sharipova

[...] If we design queries in a way that ensures that first query always produces scores higher than the second query, this would be a way to implement fallback strategy as well. The downside to this is that we always execute all queries (at least for a matching part).

That's tricky, because one would need to make sure that even the lowest score from the first (initial) query is higher than the largest score from the second (fallback) one.
Moreover, the fallback query is usually very loose and supposed to be used as a "last resort" attempt to bring results when the initial query fails to do so. Such a query is oftentimes far too slow to execute and brings far too many false positives under normal conditions (it frequently uses techniques like relaxed minimum_should_match, fuzzy/phonetic matches, ambiguous synonym mappings, etc.).

@markharwood
Contributor

It may be worth considering the logic for the two query-adjustment strategies:

  1. Improving recall by relaxing tight queries
  2. Improving precision by tightening loose queries

This issue is focused on strategy 1, switching query choices based on a straight doc count (measuring recall).
Strategy 2 is another mode of switching queries but is based on a harder measure (quantifying the precision of results). I touched on an approach to quantifying precision here.

Rather than focusing purely on optimising recall, we could consider some controls that help maintain precision as sloppier queries are introduced. Picking a sweet spot along the Phrase -> AND -> OR -> fuzzy continuum of sloppiness is likely not driven by a one-size-fits-all recall threshold. Topics vary in popularity, e.g. many washing machine products but only two "gone with the wind" DVDs, so it's hard to pick a single recall threshold that works for all queries. A query that, when relaxed, matches products in every single department is by definition imprecise and probably a relaxation step too far. That explosion in department values is potentially a useful precision-threshold check.

@markharwood
Contributor

markharwood commented Oct 28, 2020

One approach might be to send "sloppy" and "strict" queries as two separate searches in an msearch.

The strict query is the one we pin our hopes on, e.g. a phrase query, and the sloppy query is the fallback we rely on if the strict query has no matches. The trick to avoiding wasted compute on the sloppy query is to introduce a new fail-if query type that would take the strict query as a nested element.

"query" : {
     "fail" : {
           "if": { "phrase_query" : "foo bar" },
           "else": { "fuzzy" : "foo bar" }
     }
}

This new query type would early-terminate the whole request if there was even one document matching the nested "if" query. To fail the responses from other shards too, the allow_partial_search_results parameter should be set to false.

This approach would allow one client request and also minimise the execution overhead of running the strict and sloppy queries. The client would use whichever response had the non-zero matches.

While it may be seen as a disadvantage to have to define any aggregations twice (once in the sloppy search, once in the strict search), it is also an opportunity. A strict query might be certain of the subject matter and therefore want to use terms aggregations, whereas a sloppy query might produce some good hits but a very long tail of junk, so a sampler aggregation and significant_terms aggs would be more appropriate to return only relevant facets.
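With this msearch approach, the client's job reduces to a selection step: use the strict response if it matched anything, otherwise the sloppy one. Responses are modeled below as simple dicts; the real msearch response shape differs, so this is only a sketch.

```python
def pick_response(strict_response, sloppy_response):
    """Choose between the strict and sloppy msearch responses."""
    if strict_response.get("hits"):
        return strict_response
    return sloppy_response  # strict search found nothing; fall back
```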

@mayya-sharipova
Contributor

mayya-sharipova commented Oct 28, 2020

Interesting idea.

define any aggregations twice

an idea with different aggregations seems neat.

two separate searches in an msearch

But then we would need to execute the strict query twice: first as a usual query, and again as part of the new fail query. I wonder if we can avoid the double execution. Maybe it is a special case of a sequential msearch, where the second part is executed after the first has failed.

if there was even one document

Maybe the number of documents could be user-provided.

@markharwood
Contributor

markharwood commented Oct 28, 2020

But then, we would need to execute strict query twice,

It wouldn't run until completion though - it only needs to match one document and then we abort the whole query, avoiding any further disk reads and collection. Throwing exceptions isn't particularly performant, so we might need to avoid that way of exiting the search process early.

Maybe the number of documents could be user-provided

The challenge with that is that users will want to provide a global total, but internally we would need a shard-local limit to early-terminate any local collection.

Maybe we should tie this into the can_match phase by offering the option of an explicit "can_match" part of the _search body? Any query that is placed inside that and produces fewer than a required number of results would not run, e.g.

{
    "can_match": {
        "exclude_if": {
            "query": { ... },  // strict query
            "min_doc_count": 3
        }
    },
    "query": {
        ... // fallback sloppy query
    }
}

Maybe it is a special case of a sequential msearch, where the second part is executed after the first has failed.

That sounds reasonable - we'd need a condition (probably a minimum doc_count) that has to be passed before the next approach is tried. The overall search latency might be worse, though, if there are multiple levels of fallback that need to be worked through in sequence. With my suggestion, based on the existing msearch, searches are parallelized, so the strict and sloppy variations would run concurrently.

@markharwood
Contributor

I created a proof-of-concept change to msearch to support query relaxation.

A new recall_goal can be set to a required number of docs, and if it is > 0 the passed requests are processed sequentially until one meets the required goal. Queries should be ordered from strict to sloppy (most precise to least).

@jpountz
Contributor

jpountz commented Apr 21, 2021

Although the idea of a fallback query has come up in discuss forums and our own conversations around query expansion, I'm really not convinced that it would provide enough value over the alternative of issuing multiple search requests. We already closed the similar issue #6491 for example. I'm raising this issue to document our thinking and gather feedback from others.

+1 I'm not convinced either.

Furthermore, I think the actual decision tree that users want will often be much more complex than just falling back to a query that can be known in advance. For instance, I'm sure many users would like to run the query against a spell checker and return hits for the corrected query if it has more hits than the original query. This sort of logic is best left to the client side, in my opinion.

@elasticsearchmachine
Collaborator

Pinging @elastic/es-search (Team:Search)

@nemphys

nemphys commented May 20, 2024

Any news about this?

Our use case involves fuzzy search: specifically, there is an option to only use fuzziness if the non-fuzzy search produces 0 results (in which case we issue a second query with fuzziness enabled).

Being able to do this in a single query would be neat!

@benwtrent
Member

@nemphys no movement on this at the moment.

The best way to do this still is via two search requests.
