Support models compatible with OpenAI format #21


Merged · 3 commits merged into lastmile-ai:main on Feb 6, 2025

Conversation

@StreetLamb (Collaborator) commented on Feb 6, 2025:

Expose openai's base_url in the mcp_agent.config.yaml file so that providers that use the OpenAI API format, such as Ollama, LiteLLM, Gemini (in preview), and Deepseek, can be used without writing a custom class for each of them. Included an example to demonstrate its usage.

Closes #19
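
For instance, pointing the existing OpenAI client at a local Ollama server becomes a config-only change. A minimal sketch (the model tag and api_key value are illustrative; Ollama serves an OpenAI-compatible API under /v1 on its default port):

openai:
  base_url: "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
  default_model: "llama3.2"              # illustrative; use any model pulled locally
  api_key: "ollama"                      # Ollama ignores the key, but the client expects a value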

@saqadri (Collaborator) left a comment:

Amazing!!!

@saqadri merged commit 30be3dd into lastmile-ai:main on Feb 6, 2025
@StreetLamb deleted the openai-baseurl-support branch on Feb 7, 2025
@tihomir-kit commented on Apr 8, 2025:

Sorry to parachute in, but would this be the correct way to use this with the Gemini 2.5 Pro Preview?

mcp_agent.config.yaml

openai:
  base_url: "https://generativelanguage.googleapis.com/v1beta/openai/"
  default_model: "gemini-2.5-pro-exp-03-25"
  api_key: "<my_api_key>"

What did you use for base_url?

Then I imagine you would simply use this:

llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

And it should just work? Or am I misunderstanding something?
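
For what it's worth, here is a fuller sketch of how that attach_llm call might fit into a script, modeled on the repo's own examples. The import paths, app/agent names, server name, and the generate_str call are assumptions drawn from mcp-agent's documented usage at the time, not verified against this PR:

import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

# Hypothetical app name; settings are loaded from mcp_agent.config.yaml.
app = MCPApp(name="finder_example")

async def main():
    async with app.run():
        # "fetch" is an example MCP server name from the repo's samples.
        finder_agent = Agent(
            name="finder",
            instruction="Fetch pages and answer questions about them.",
            server_names=["fetch"],
        )
        async with finder_agent:
            # OpenAIAugmentedLLM picks up base_url, api_key, and
            # default_model from the openai section of the config,
            # so no provider-specific class is needed.
            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)
            result = await llm.generate_str("Summarize https://example.com")
            print(result)

if __name__ == "__main__":
    asyncio.run(main())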

Linked issue: ollama support (#19)