Columns passed in RAG seem ignored in generated SQL #187

Closed
pierreoberholzer opened this issue Jan 25, 2024 · 13 comments
Comments

@pierreoberholzer

pierreoberholzer commented Jan 25, 2024

Hi,

I'm trying to create a SQL query that makes use of valid column names passed to the RAG via INFORMATION_SCHEMA.COLUMNS. However, the generated query does not reference any of the real columns.

from vanna.openai.openai_chat import OpenAI_Chat
from vanna.chromadb.chromadb_vector import ChromaDB_VectorStore

# Globals

PROJECT_ID = "my_gcp_project"
DATASET_ID = "my_dataset"
TABLE_NAME = "my_table"
OPENAI_API_KEY = "sk-xxxxx"

# Class instantiation

class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, config=config)

vn = MyVanna(config={'api_key': OPENAI_API_KEY, 'model': 'gpt-4'})

vn.connect_to_bigquery(project_id=PROJECT_ID)

# The query below works in the BigQuery console.

METADATA_QUERY = f"""
SELECT * FROM `{PROJECT_ID}.{DATASET_ID}.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = '{TABLE_NAME}'
"""

df_information_schema = vn.run_sql(METADATA_QUERY)

# This will break up the information schema into bite-sized chunks that can be referenced by the LLM
plan = vn.get_training_plan_generic(df_information_schema)

# If you like the plan, run this to train
vn.train(plan=plan)

vn.ask(question="Where does John Deere live?")

# Obtained query

"""
SELECT residence 
FROM Authors 
WHERE name = 'John Deere';
"""

Neither the columns residence and name nor the table Authors exist in my dataset.

I also get the following error, by the way:

Couldn't run sql:  exceptions must derive from BaseException

Thanks for your help!

@pierreoberholzer pierreoberholzer changed the title Columns passed in RAG seem ignored Columns passed in RAG seem ignored in generated SQL Jan 25, 2024
@mike-niemand

mike-niemand commented Jan 25, 2024 via email

@pierreoberholzer

I tried the DDL as you suggest: same issues as in the first attempt above.

df_information_schema = vn.run_sql(METADATA_QUERY)

# Build "column_name data_type" pairs from the information schema
res = []
for _, row in df_information_schema.iterrows():
    col_name = row["column_name"]
    col_type = row["data_type"]
    res.append(f"{col_name} {col_type}")

res_string = "(" + ", ".join(res) + ")"

DDL = f"""CREATE TABLE {TABLE_NAME} {res_string}"""

vn.train(ddl=DDL)

@zainhoda

@pierreoberholzer did the training data make it in? Do you get results when you do vn.get_training_data()?

@pierreoberholzer

pierreoberholzer commented Jan 26, 2024

Good idea. It seems it at least received meaningful info (only showing the first 6 columns here):

vn.get_training_data()

[Screenshot: first rows of the vn.get_training_data() DataFrame]

@zainhoda

@pierreoberholzer are those the actual names of your columns? If so, there's no way the LLM would be able to associate a column name with what it means semantically.

With ambiguous or nonexistent column names, the best method is going to be training on example SQL statements, because the database schema doesn't carry enough information. If a human couldn't figure out how to translate "Where does John Deere live?" into SQL based on the information in that image, then the LLM won't be able to either.

Try this: train on 3-4 sample SQL queries that you know work. So do:
vn.train(sql=...)
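For instance, a minimal sketch of that call; the column names here are hypothetical stand-ins, so substitute a query you know runs correctly against your table:

# Hypothetical example; replace the columns and filter with ones
# that actually exist in your table.
vn.train(sql=f"""
SELECT city
FROM `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}`
WHERE full_name = 'John Deere'
""")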

Then try asking questions that are related to those queries.

If that works, then since you're using BigQuery you can extract your query history and loop over it, calling vn.train(sql=...) on each query, as in the sketch below.
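A minimal sketch of that loop, assuming the standard BigQuery INFORMATION_SCHEMA.JOBS_BY_PROJECT view; adjust the region qualifier to your project's location:

# Pull recent successful SELECT statements from the project's job history
HISTORY_QUERY = f"""
SELECT query
FROM `{PROJECT_ID}.region-us.INFORMATION_SCHEMA.JOBS_BY_PROJECT`
WHERE statement_type = 'SELECT'
  AND state = 'DONE'
  AND error_result IS NULL
LIMIT 50
"""

df_history = vn.run_sql(HISTORY_QUERY)

# Train on each query from the history
for sql in df_history["query"]:
    vn.train(sql=sql)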

@pierreoberholzer

pierreoberholzer commented Jan 26, 2024

Those are dummy column names; the real ones have some semantic meaning.
Still, I would expect the generated SQL to be constrained to existing column names in any case (even when the semantic match is poor). Once that is solved, passing examples is surely a good addition. Thanks.

@pierreoberholzer

pierreoberholzer commented Jan 26, 2024

Still investigating, but it seems I am hitting some limit:

  1. Is the entire metadata context passed to OpenAI?
  2. Am I supposed to purge Chroma DB?

Indeed, the question itself is very short: "Where does John Deere live?"
Thanks!

Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 10898 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

@zainhoda

@pierreoberholzer it picks the 10 most relevant pieces of each type of training data (DDL, documentation, question/SQL pairs) and adds them to the context. It uses a simple token-count heuristic to try not to overfill the context window.

However, it seems like your individual pieces of training data may be quite large? What's the string length of the content?
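One quick way to check, assuming get_training_data() returns a DataFrame with a content column (as in the screenshot above):

training_data = vn.get_training_data()

# Distribution of string lengths across the stored training items
print(training_data["content"].str.len().describe())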

Which OpenAI model are you using, by the way? You likely won't reach context token limits if you use gpt-4-turbo-preview.

@pierreoberholzer

Thanks, using gpt-4-turbo-preview helps.
How can I reset the Chroma state and start again with a fresh context?
It seems I'm currently accumulating context across runs.

@zainhoda

@pierreoberholzer you can either delete the SQLite database that Chroma creates and start again, or you can call vn.get_training_data(), loop through all the ids, and call vn.remove_training_data(id=...) for each one, as sketched below.
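A minimal sketch of the second option, assuming the DataFrame returned by get_training_data() has an id column:

training_data = vn.get_training_data()

# Remove every stored item so the collection starts empty
for training_id in training_data["id"]:
    vn.remove_training_data(id=training_id)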

@pierreoberholzer

pierreoberholzer commented Jan 26, 2024

Cool, this helps define the experiment better.
My observation so far is that among metadata, DDL, and SQL, only the latter leads to meaningful generated queries.
While this seems promising and I need to investigate more, it is now harder to assess how much the approach might "overfit" to the training queries, compared to a more "neutral" case where only metadata is passed during training (see my first questions). Needs more testing. Thanks for your help!

@zainhoda

@pierreoberholzer if you're doing a formal test, would you be able to kindly share the results?

@pierreoberholzer

Sure, if I get to that point.

@vanna-ai vanna-ai locked and limited conversation to collaborators Jan 27, 2024
@zainhoda zainhoda converted this issue into discussion #194 Jan 27, 2024
