
Token limit change after apollo-parser 0.5.3 #610

Closed
SimonSapin opened this issue Aug 7, 2023 · 2 comments · Fixed by #619
Labels: apollo-parser, bug (Something isn't working)

Comments

@SimonSapin (Contributor)

Description

Apollo Router exposes the token limit in its own configuration. The test for this config fails when upgrading apollo-parser past 0.5.3.

Steps to reproduce

Modify the existing token_limit test in crates/apollo-parser/src/lexer/mod.rs:

    #[test]
    fn token_limit() {
        const TEST_INPUT: &str = "type Query { a a a a a a a a a }";
        const TEST_INPUT_TOKEN_COUNT: usize = 25;

        let lexer = Lexer::new(TEST_INPUT);
        let (tokens, errors) = lexer.lex();
        assert_eq!(errors, &[]);
        // +1 for the EOF pseudo-token
        assert_eq!(tokens.len(), TEST_INPUT_TOKEN_COUNT + 1);

        // EOF is not counted in the limit
        let lexer = Lexer::new(TEST_INPUT).with_limit(TEST_INPUT_TOKEN_COUNT);
        let (tokens, errors) = lexer.lex();
        assert_eq!(errors, &[]);
        assert_eq!(tokens.len(), TEST_INPUT_TOKEN_COUNT + 1);

        let lexer = Lexer::new(TEST_INPUT).with_limit(TEST_INPUT_TOKEN_COUNT - 1);
        let (tokens, errors) = lexer.lex();
        assert_eq!(
            errors,
            &[Error::limit("token limit reached, aborting lexing", 31)]
        );
        assert_eq!(tokens.len(), TEST_INPUT_TOKEN_COUNT - 1);

        let lexer = Lexer::new(TEST_INPUT).with_limit(10);
        let (tokens, errors) = lexer.lex();
        // Error happens earlier, then lexing stops
        assert_eq!(
            errors,
            &[Error::limit("token limit reached, aborting lexing", 17)]
        );
        assert_eq!(tokens.len(), 10);
    }

Expected result

Passes in both 0.5.3 and current main 126816f.

Actual result

Passes in 0.5.3. On main:

thread 'lexer::test::token_limit' panicked at 'assertion failed: `(left == right)`
  left: `[ERROR@31:31 "token limit reached, aborting lexing" ]`,
 right: `[]`', crates/apollo-parser/src/lexer/mod.rs:646:9

The panic location points to the .with_limit(TEST_INPUT_TOKEN_COUNT) case. Commenting out the failing assert produces:

thread 'lexer::test::token_limit' panicked at 'assertion failed: `(left == right)`
  left: `25`,
 right: `26`', crates/apollo-parser/src/lexer/mod.rs:647:9

Still in the .with_limit(TEST_INPUT_TOKEN_COUNT) case. Commenting out that one as well makes the test pass. So it doesn’t look like an off-by-one change, but rather something that happens specifically around the EOF token. I don’t fully understand it just by looking at the code, though, since TokenKind::Eof is (surprisingly) now created in two places.
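For illustration, the two counting behaviors can be sketched as follows. This is a hypothetical model, not apollo-parser's actual code: `Token`, `lex_eof_exempt` (the 0.5.3 behavior, where EOF is not counted against the limit), and `lex_eof_counted` (the main-branch behavior, where EOF goes through the same counter) are all made-up names.

```rust
#[derive(Debug)]
enum Token {
    Word(&'static str),
    Eof,
}

// 0.5.3-style: the limit applies only to real tokens, so a document
// with exactly `limit` tokens still gets its EOF appended afterwards.
fn lex_eof_exempt(words: &[&'static str], limit: usize) -> Result<Vec<Token>, String> {
    let mut out = Vec::new();
    for &w in words {
        if out.len() >= limit {
            return Err("token limit reached, aborting lexing".into());
        }
        out.push(Token::Word(w));
    }
    out.push(Token::Eof);
    Ok(out)
}

// main-branch-style: the EOF pseudo-token is counted like any other,
// so a document with exactly `limit` tokens errors when EOF does not fit.
fn lex_eof_counted(words: &[&'static str], limit: usize) -> Result<Vec<Token>, String> {
    let mut out = Vec::new();
    for t in words.iter().map(|&w| Token::Word(w)).chain(std::iter::once(Token::Eof)) {
        if out.len() >= limit {
            return Err("token limit reached, aborting lexing".into());
        }
        out.push(t);
    }
    Ok(out)
}

fn main() {
    let words = ["type", "Query", "{", "a", "}"]; // 5 real tokens
    // 0.5.3: limit 5 passes; the output has 6 tokens including EOF.
    assert_eq!(lex_eof_exempt(&words, 5).unwrap().len(), 6);
    // main: limit 5 fails, because EOF would be the 6th token.
    assert!(lex_eof_counted(&words, 5).is_err());
    // main: limit 6 passes.
    assert_eq!(lex_eof_counted(&words, 6).unwrap().len(), 6);
    println!("ok");
}
```

Under this model, the failing assertion above is exactly the `limit == real token count` case: 0.5.3 accepts it, main rejects it because EOF no longer fits.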

Environment

  • apollo-rs crate: apollo-parser
  • Crate version: 126816f
@SimonSapin SimonSapin added bug Something isn't working apollo-parser triage labels Aug 7, 2023
@goto-bus-stop goto-bus-stop self-assigned this Aug 17, 2023
@goto-bus-stop (Member)

I might call this intentional: previously you would get one more token than the limit, as the EOF token was not counted against it. Now, when you .collect() the lexer's output into a Vec, it really will not exceed token_limit. Will call it out in the changelog.
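The invariant described here can be demonstrated with a small self-contained sketch. `lex_with_limit` is a hypothetical stand-in for collecting the lexer's output, not apollo-parser's API; it models the new behavior where the EOF pseudo-token counts against the limit, so the collected output can never exceed it.

```rust
// Illustrative only: models a lexer whose EOF pseudo-token goes
// through the same counter as real tokens.
fn lex_with_limit(real_tokens: &[&'static str], limit: usize) -> Vec<&'static str> {
    let mut out = Vec::new();
    // "<EOF>" stands in for the EOF pseudo-token.
    for t in real_tokens.iter().copied().chain(["<EOF>"]) {
        if out.len() >= limit {
            break; // token limit reached, aborting lexing
        }
        out.push(t);
    }
    out
}

fn main() {
    // For any limit, the collected output never exceeds it.
    for limit in 0..6 {
        let tokens = lex_with_limit(&["type", "Query", "{", "}"], limit);
        assert!(tokens.len() <= limit);
    }
    // A generous limit yields all real tokens plus EOF.
    assert_eq!(lex_with_limit(&["a"], 5), vec!["a", "<EOF>"]);
    println!("ok");
}
```

The trade-off is the one this issue reports: the `tokens.len() <= limit` guarantee is only possible if EOF can itself trip the limit.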

@goto-bus-stop (Member)

Admittedly it does feel a bit odd to have to abort just because the EOF pseudo-token doesn't fit. But you're not really meant to be close to the limit in normal use.
