`packages/inference/README.md`: 11 additions & 6 deletions
@@ -46,7 +46,12 @@ Your access token should be kept private. If you need to protect it in front-end
 You can send inference requests to third-party providers with the inference client.
 
-Currently, we support the following providers: [Fal.ai](https://fal.ai), [Replicate](https://replicate.com), [Together](https://together.xyz), [Sambanova](https://sambanova.ai), and [Fireworks AI](https://fireworks.ai).
+Currently, we support the following providers:
+- [Fal.ai](https://fal.ai)
+- [Fireworks AI](https://fireworks.ai)
+- [Replicate](https://replicate.com)
+- [Sambanova](https://sambanova.ai)
+- [Together](https://together.xyz)
 
 To send requests to a third-party provider, you have to pass the `provider` parameter to the inference function. Make sure your request is authenticated with an access token.
 
 ```ts
@@ -64,11 +69,11 @@ When authenticated with a Hugging Face access token, the request is routed throu
 When authenticated with a third-party provider key, the request is made directly against that provider's inference API.
 
 Only a subset of models are supported when requesting third-party providers. You can check the list of supported models per pipeline task here:
-[HF Inference API (serverless)](https://huggingface.co/models?inference=warm&sort=trending)
 
 ❗**Important note:** To be compatible, the third-party API must adhere to the "standard" API shape we expect on HF model pages for each pipeline task type.
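As a sketch of what this diff's provider list implies for callers (the slug strings below are assumed identifiers mirroring the five providers named in the change, not values confirmed by the diff itself), one might validate a `provider` value before passing it to an inference call:

```typescript
// Sketch only: these provider slugs are assumptions based on the providers
// listed in this README change, not strings taken from the diff.
type InferenceProvider =
  | "fal-ai"
  | "fireworks-ai"
  | "replicate"
  | "sambanova"
  | "together";

const SUPPORTED_PROVIDERS: readonly string[] = [
  "fal-ai",
  "fireworks-ai",
  "replicate",
  "sambanova",
  "together",
];

// Hypothetical guard helper: checks a provider string before it is used
// as the `provider` parameter of an inference request.
function isSupportedProvider(p: string): p is InferenceProvider {
  return SUPPORTED_PROVIDERS.includes(p);
}

console.log(isSupportedProvider("together")); // true
console.log(isSupportedProvider("openai"));   // false — not among the providers listed here
```

A validated value would then be passed alongside the model and inputs in the inference call, authenticated either with a Hugging Face access token (routed through Hugging Face) or with that provider's own key (sent directly to the provider), as the README text describes.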