DBA-738: Index optimization by LLM #64
Conversation
src/postgres_mcp/dta/llm_opt.py
Outdated
response = client.chat.completions.create(
    model="gpt-4o",
    response_model=IndexingAlternative,
    temperature=1.2,
Are you sure you want an above-average "temperature"? It may make the output too "creative".
There is no way to know what the right value is without testing. My inclination is to err on the creative side so that we explore the space of indexing recommendations. I have occasionally seen recommendations that refer to tables that don't exist or are otherwise invalid. They get rejected, and my guess is that having some of that is healthy.
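A minimal sketch of how such invalid recommendations might be rejected, assuming IndexingAlternative exposes a table attribute and that the set of real table names has been read from the catalog (both assumptions; neither appears in this diff):

def reject_invalid(alternatives, known_tables):
    # Keep only recommendations that reference tables that actually exist;
    # `alt.table` and `known_tables` are hypothetical names for illustration.
    return [alt for alt in alternatives if alt.table in known_tables]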
src/postgres_mcp/dta/llm_opt.py
Outdated
else:
    remaining_attempts_prompt = ""

response = client.chat.completions.create(
Consider litellm so the model isn't hardcoded: https://docs.litellm.ai/#basic-usage. It maps various APIs to the OpenAI completion format, so the rest of the code shouldn't need to change, and it pulls the API key from standard environment variables (per provider).
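A minimal sketch of that suggestion, using litellm's documented completion() call; the structured response_model from the diff above would need a separate integration (e.g. instructor) that is not shown here, and `prompt` stands in for the prompt built in llm_opt.py:

import litellm

# litellm maps many providers onto the OpenAI completion format and reads
# the API key from each provider's standard environment variable.
response = litellm.completion(
    model="gpt-4o",  # any litellm-supported model string works here
    messages=[{"role": "user", "content": prompt}],
    temperature=1.2,
)
print(response.choices[0].message.content)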
We still need to put in code to switch to the desired model, right?
I.e., somewhere we need to see what is available, say via environment variables, and then use that model. I don't think LiteLLM will do that for us.
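A hypothetical sketch of that switching logic; the variable name LLM_OPT_MODEL is invented for illustration and is not defined by litellm or this PR:

import os
import litellm

# Pick the model from an environment variable, falling back to a default;
# LLM_OPT_MODEL is a hypothetical name used only in this sketch.
model = os.environ.get("LLM_OPT_MODEL", "gpt-4o")
response = litellm.completion(
    model=model,
    messages=[{"role": "user", "content": prompt}],
)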
@jssmith Thanks. LGTM.
This is a draft implementation of optimization by LLM.