Add OVHcloud as an inference provider #1303
Conversation
Hi @fabienric, we are currently finishing a refactoring of the Inference Providers integration code in #1315. It should be merged soon, but we will need to rewrite part of your implementation (it should be even simpler to integrate). We will ping you again after it's been merged.
Hi @fabienric,

1 - Register the provider in the `PROVIDERS` record:

```ts
import * as OvhCloud from "../providers/ovhcloud";
// ...
export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask, TaskProviderHelper>>> = {
	// ...
	"ovhcloud": {
		"conversational": new OvhCloud.OvhCloudConversationalTask(),
	},
	// ...
};
```

2 - Update the provider module:

```ts
import { BaseConversationalTask, BaseTextGenerationTask } from "./providerHelper";

export class OvhCloudConversationalTask extends BaseConversationalTask {
	constructor() {
		super("ovhcloud", "https://oai.endpoints.kepler.ai.cloud.ovh.net");
	}
}
```

and that's it :) Let us know if you need any help! You can find more details in the documentation: https://huggingface.co/docs/inference-providers/register-as-a-provider#2-js-client-integration
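The provider-helper pattern described in these steps can be sketched end to end with a minimal, self-contained mock. The class and method names below are simplified assumptions for illustration, not the actual `@huggingface/inference` internals:

```typescript
// Minimal sketch of the provider-helper pattern: a base task builds the
// request URL, and an OVHcloud subclass only supplies its base URL.
// Class and method names are simplified assumptions, not library internals.
class BaseConversationalTask {
	constructor(
		readonly provider: string,
		readonly baseUrl: string
	) {}

	// Chat-completion providers expose an OpenAI-compatible route.
	makeRoute(): string {
		return "v1/chat/completions";
	}

	makeUrl(): string {
		return `${this.baseUrl}/${this.makeRoute()}`;
	}
}

class OvhCloudConversationalTask extends BaseConversationalTask {
	constructor() {
		super("ovhcloud", "https://oai.endpoints.kepler.ai.cloud.ovh.net");
	}
}

const task = new OvhCloudConversationalTask();
console.log(task.makeUrl());
// https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/chat/completions
```

This mirrors why the integration is so small: everything task-specific lives in the base class, and a new provider mostly contributes its identifier and base URL.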
(sorry for the moving parts @fabienric – we can help move this PR over the finish line if needed)
- ovhcloud inference provider: use new base tasks and provider helpers, fix issues with inference parameters, add support for text generation task
Hi @hanouticelina and @julien-c, thank you for the feedback, the refactoring, and the updated documentation. I've implemented our provider; it required more work than I expected to get the payload right for an OpenAI-compatible endpoint (make sure that the …). I've also implemented the text generation task, but I've found that the streaming case is not covered by the base task (the …). Available to discuss the matter further if required.
Actually, most (if not all) providers are implemented only for the …
- fix tests
…into ovhcloud-inference-provider
Hi @Wauplin, and thanks for your feedback. Actually, my remark on the OpenAI-compatible parameters also applies to the …. I agree with you that the priority on our side is to get the …. Let me know when you're ready to merge.
Thank you @fabienric for the PR, I left some comments.
```ts
import type { BodyParams } from "../types";
import { omit } from "../utils/omit";

const OVHCLOUD_API_BASE_URL = "https://oai.endpoints.kepler.ai.cloud.ovh.net";
```
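The imports above hint at how the request body is assembled: client-only parameters are stripped before the payload is sent to the OpenAI-compatible endpoint. A hedged sketch of the idea, with `omit` re-implemented inline so the example is self-contained (the argument names and model id are assumptions for illustration, not the PR's actual code):

```typescript
// Sketch: build an OpenAI-compatible body by dropping client-only fields
// (e.g. the access token) and forwarding model + user parameters as-is.
function omit<T extends object, K extends keyof T>(obj: T, keys: K[]): Omit<T, K> {
	const result: Partial<T> = { ...obj };
	for (const key of keys) {
		delete result[key]; // safe: every property of Partial<T> is optional
	}
	return result as Omit<T, K>;
}

const args = {
	model: "meta-llama/Llama-3.1-8B-Instruct", // hypothetical model id
	messages: [{ role: "user", content: "Hello!" }],
	accessToken: "secret-token", // client-side credential, must never reach the body
};

const body = omit(args, ["accessToken"]);
console.log(Object.keys(body)); // [ 'model', 'messages' ]
```

Getting this separation right is exactly the kind of "payload" work mentioned earlier in the thread: the endpoint rejects or ignores fields that are not part of the OpenAI schema.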
How should we proceed to test the Inference Endpoints? Do we need to set up an OVHcloud account?
Thank you @hanouticelina and @SBrandeis for your feedback and suggestions.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Looks good @fabienric, thank you for the contribution!
As you probably noticed, we dropped VCR tapes, which means all new provider PRs need a bit of manual testing (until we implement a better testing strategy). Did you already run some tests for the PR locally? If we want to test on our side, do we need an OVHcloud account?
Thank you, yes I already tested locally. I can send you an API key so you can test on your side as well.
(we do have a team OVH account already)
@hanouticelina @julien-c ok, let me know if you need any help with the API token generation.
I tested a couple of models for both chat completion and text generation tasks and it works as expected ✅ Thank you @fabienric for the contribution!
CI was green when merging #1303, but somehow the linter is failing in main; this PR should fix it.
Thank you @hanouticelina!
Co-authored-by: Fabien Ric <[email protected]>
What
Adds OVHcloud as an inference provider.
Test Plan
Added new tests for OVHcloud both with and without streaming.
What Should Reviewers Focus On?
I used the Cerebras PR as an example.
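For the streaming vs non-streaming cases mentioned in the test plan, the accumulation logic can be sketched with mock data (no network). The real client streams asynchronously over SSE and the chunk shape below is an OpenAI-style assumption; a synchronous generator keeps the sketch self-contained:

```typescript
// Sketch of how streamed chat-completion deltas accumulate into the final
// text. Real providers stream asynchronously; this is a simplified mock.
interface StreamChunk {
	choices: { delta: { content?: string } }[];
}

function* mockStream(): Generator<StreamChunk> {
	for (const piece of ["Hel", "lo ", "world"]) {
		yield { choices: [{ delta: { content: piece } }] };
	}
}

function collect(stream: Iterable<StreamChunk>): string {
	let out = "";
	for (const chunk of stream) {
		out += chunk.choices[0]?.delta.content ?? "";
	}
	return out;
}

console.log(collect(mockStream())); // Hello world
```

A non-streaming test then only has to compare the accumulated streamed text against the single-response output for the same prompt.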