[Inference snippets] Templated snippets for inference snippet generation #1255


Merged: Wauplin merged 31 commits into main from templated-inference-python-snippets on Mar 12, 2025

Conversation

@Wauplin (Contributor) commented Mar 6, 2025:

What does this PR do?

  • add @huggingface/jinja as a dependency for @huggingface/inference
  • heavily refactor ./packages/inference/src/snippets/python.ts to use templates instead of inline strings
  • add some snippet templates under packages/inference/src/snippets/templates/{language}/{client}/{templateName}.jinja
    • language is Python
    • client is e.g. huggingface_hub, requests, etc.
    • templateName is related to task name e.g. textToImage, conversationalStream, etc.

=> The overall goal is to make it easy to update the snippets for a given task + client without touching the JS code.
=> If this approach proves conclusive, we can extend it to cURL / JS snippets in a follow-up PR.
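
For illustration, here is a minimal sketch of how one of these templates can be loaded and rendered with @huggingface/jinja (the dependency this PR eventually settled on). The file path, model id, and prompt are illustrative, not the PR's exact call sites; `inputs.asObj.inputs` and `model.id` are the variable names the real templates use:

```ts
// Sketch only: load one task template from disk and render it.
import { readFileSync } from "node:fs";
import { Template } from "@huggingface/jinja";

const source = readFileSync(
	"packages/inference/src/snippets/templates/python/huggingface_hub/textToImage.jinja",
	"utf-8"
);

const snippet = new Template(source).render({
	inputs: { asObj: { inputs: '"Astronaut riding a horse"' } },
	model: { id: "black-forest-labs/FLUX.1-dev" },
});
console.log(snippet);
```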

This PR also includes a lot of small fixes (mostly spaces vs. tabs inconsistencies) and adds some tests. I did not change anything major in the generated snippets.

What to review?

IMO, there is no need to review all the templates. Better to check the generated snippets + the main JS file directly (https://github.com/huggingface/huggingface.js/blob/ddd62b60d0a858dc808b6dd7b3589977681f5a8b/packages/inference/src/snippets/python.ts). Sorry for the huge PR, but I do believe it is a necessary move going forward 🤗

```diff
@@ -52,9 +52,11 @@
 		"check": "tsc"
 	},
 	"dependencies": {
-		"@huggingface/tasks": "workspace:^"
+		"@huggingface/tasks": "workspace:^",
+		"handlebars": "^4.7.8"
```
@julien-c (Member) commented Mar 6, 2025:

cc @xenova

Contributor:

That would be awesome! 🤩 Although the library was originally designed for ChatML templates, the set of available features should be large enough for these templates.

Maybe @Wauplin can explain what set of features would be required? 👀

Member:

basically just ifs and variable replacement from what I've seen
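
To make that concrete, the templates need little more than this (a sketch using @huggingface/jinja with an inline template string; the values are illustrative):

```ts
// "ifs and variable replacement" is essentially the whole feature set needed.
import { Template } from "@huggingface/jinja";

const t = new Template(
	'{% if provider == "hf-inference" %}image = client.text_to_image({{ inputs }}){% endif %}'
);
console.log(t.render({ provider: "hf-inference", inputs: '"a cat"' }));
// -> image = client.text_to_image("a cat")
```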

Member:

handlebars has pretty much a feature set of 0.00

Contributor (Author):

Will update my PR tomorrow in that direction. As Julien said, I'm not using anything fancy at all, so jinja will be more than enough for the job.

Contributor:

Sounds good! Let me know if I can help in any way 🫡

Contributor (Author):

I replaced the handlebars dependency with @huggingface/jinja and it works like a charm! (86e787a) Thanks for the package @xenova! 🤗

Contributor (Author):

Well, I now have `error Couldn't find package "@huggingface/jinja@^0.3.3" required by "@huggingface/inference@*" on the "npm" registry.` in the CI, even though jinja 0.3.3 is available on https://www.npmjs.com/package/@huggingface/jinja 🤔

Member:

you need to add jinja here:

pnpm --filter inference --filter hub --filter tasks publish --force --no-git-checks --registry http://localhost:4874/
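
(Presumably this means adding `--filter jinja` to that filter list.)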

```ts
// Helpers to find + load templates

const rootDirFinder = (): string => {
	let currentPath = path.normalize(import.meta.url).replace("file:", "");
```
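
For context, a plausible completion of this helper (a sketch, not necessarily the PR's exact code): it resolves this module's path from its `file://` URL and walks up until a `package.json` marks the package root.

```ts
import path from "node:path";
import { existsSync } from "node:fs";

const rootDirFinder = (): string => {
	// Strip the "file:" scheme to get a plain filesystem path.
	let currentPath = path.normalize(import.meta.url).replace("file:", "");
	// Walk up the directory tree until a package.json is found.
	while (currentPath !== "/") {
		if (existsSync(path.join(currentPath, "package.json"))) {
			return currentPath;
		}
		currentPath = path.normalize(path.join(currentPath, ".."));
	}
	return "/";
};
```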
Member:

I'm curious if this will work with moon (import.meta.url), or if it's finally time to move from CJS to ESM cc @Pierrci

Member:

are you simply curious or actually wishing for it? x)

Member:

(I think it will be fine, but I may be wrong!)

Member:

I think it'll be empty but we'll see!

Contributor (Author):

would f6d81d6 be ok?

Wauplin and others added 6 commits March 10, 2025 18:45
⚠️ PR opened on top of #1255

Related to this thread: #1255 (comment).

@coyotte508 @SBrandeis @julien-c WDYT? 🙈

For the record, I'm looking for a solution where:
1. we keep this public/open-source
2. we keep the structure of the jinja templates
3. ideally no new package in hf.js
4. ideally no new tooling (e.g. to translate jinja into TS code)

~Solution here is simply to gracefully raise an error if the environment is not supported.~
**EDIT:** based on #1259 (comment), the goal is to not package the `./snippets` module in browser mode.

---------

Co-authored-by: coyotte508 <[email protected]>
@Wauplin (Contributor, Author) commented Mar 11, 2025:

@coyotte508 could you check the last commits I pushed to fix the CI? It's now green, but I'm not entirely sure I fixed things the correct way.

Otherwise @julien-c @hanouticelina @SBrandeis would you have time for a review of the snippets themselves? As mentioned in the PR description I think it's best to review the generated snippets under packages/tasks-gen/snippets-fixtures/ rather than the jinja templates.

@hanouticelina (Contributor) left a comment:

reviewed the generated snippets, all good 👍

@Wauplin (Contributor, Author) commented Mar 12, 2025:

I'm closing this PR as its structure is approved and I need it for a follow-up PR. If anyone spots an error in the templates, just comment here and I'll take care of it in another PR.

Wauplin merged commit 2a5443f into main on Mar 12, 2025 (5 checks passed) and deleted the templated-inference-python-snippets branch on March 12, 2025 at 09:24.
Wauplin added a commit that referenced this pull request Mar 19, 2025
This PR should fix
huggingface-internal/moon-landing#13013. It
partially removes the structure introduced in
#1255.

The current problem is that inference snippets are generated in the front-end. Since the front-end cannot access the file system, and therefore the jinja files, we need a workaround. This PR adds a build step which exports all jinja files into a single TS module. I've updated the `package.json` file so that the snippets code is now available in any environment (both node and browser).
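
A build step along these lines would produce such a module (a hypothetical sketch; paths and names follow the `templates/{language}/{client}/{templateName}.jinja` layout described above, but the real script may differ):

```ts
// Collect templates/{language}/{client}/{name}.jinja into a nested record
// and emit it as a single importable TS module.
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import path from "node:path";

const root = "packages/inference/src/snippets/templates";
const templates: Record<string, Record<string, Record<string, string>>> = {};

for (const language of readdirSync(root)) {
	templates[language] = {};
	for (const client of readdirSync(path.join(root, language))) {
		templates[language][client] = {};
		for (const file of readdirSync(path.join(root, language, client))) {
			const name = file.replace(/\.jinja$/, "");
			templates[language][client][name] = readFileSync(path.join(root, language, client, file), "utf-8");
		}
	}
}

writeFileSync(
	"packages/inference/src/snippets/templates.exported.ts",
	"// Generated file - do not edit directly\n" +
		`export const templates: Record<string, Record<string, Record<string, string>>> = ${JSON.stringify(templates, null, 2)} as const;\n`
);
```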

cc @coyotte508 who suggested such a solution.

Tested it in `@tasks-gen` and "it works"

---
For the record, the exported file (not committed in this PR) looks like
this:

_# packages/inference/src/snippets/templates.exported.ts_
```ts
// Generated file - do not edit directly
export const templates: Record<string, Record<string, Record<string, string>>> = {
  "js": {
    "fetch": {
      "basic": "async function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"application/json\",\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n\tconst result = await response.json();\n\treturn result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    console.log(JSON.stringify(response));\n});",
      "basicAudio": "async function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"audio/flac\"\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n\tconst result = await response.json();\n\treturn result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    console.log(JSON.stringify(response));\n});",
      "basicImage": "async function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"image/jpeg\"\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n\tconst result = await response.json();\n\treturn result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    console.log(JSON.stringify(response));\n});",
      "textToAudio": "{% if model.library_name == \"transformers\" %}\nasync function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"application/json\",\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n\tconst result = await response.blob();\n    return result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    // Returns a byte object of the Audio wavform. Use it directly!\n});\n{% else %}\nasync function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"application/json\",\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n    const result = await response.json();\n    return result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    console.log(JSON.stringify(response));\n});\n{% endif %} ",
      "textToImage": "async function query(data) {\n\tconst response = await fetch(\n\t\t\"{{ fullUrl }}\",\n\t\t{\n\t\t\theaders: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n\t\t\t\t\"Content-Type\": \"application/json\",\n\t\t\t},\n\t\t\tmethod: \"POST\",\n\t\t\tbody: JSON.stringify(data),\n\t\t}\n\t);\n\tconst result = await response.blob();\n\treturn result;\n}\n\nquery({ inputs: {{ providerInputs.asObj.inputs }} }).then((response) => {\n    // Use image\n});",
      "zeroShotClassification": "async function query(data) {\n    const response = await fetch(\n\t\t\"{{ fullUrl }}\",\n        {\n            headers: {\n\t\t\t\tAuthorization: \"{{ authorizationHeader }}\",\n                \"Content-Type\": \"application/json\",\n            },\n            method: \"POST\",\n            body: JSON.stringify(data),\n        }\n    );\n    const result = await response.json();\n    return result;\n}\n\nquery({\n    inputs: {{ providerInputs.asObj.inputs }},\n    parameters: { candidate_labels: [\"refund\", \"legal\", \"faq\"] }\n}).then((response) => {\n    console.log(JSON.stringify(response));\n});"
    },
    "huggingface.js": {
      "basic": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst output = await client.{{ methodName }}({\n\tmodel: \"{{ model.id }}\",\n\tinputs: {{ inputs.asObj.inputs }},\n\tprovider: \"{{ provider }}\",\n});\n\nconsole.log(output);",
      "basicAudio": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst data = fs.readFileSync({{inputs.asObj.inputs}});\n\nconst output = await client.{{ methodName }}({\n\tdata,\n\tmodel: \"{{ model.id }}\",\n\tprovider: \"{{ provider }}\",\n});\n\nconsole.log(output);",
      "basicImage": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst data = fs.readFileSync({{inputs.asObj.inputs}});\n\nconst output = await client.{{ methodName }}({\n\tdata,\n\tmodel: \"{{ model.id }}\",\n\tprovider: \"{{ provider }}\",\n});\n\nconsole.log(output);",
      "conversational": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst chatCompletion = await client.chatCompletion({\n    provider: \"{{ provider }}\",\n    model: \"{{ model.id }}\",\n{{ inputs.asTsString }}\n});\n\nconsole.log(chatCompletion.choices[0].message);",
      "conversationalStream": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nlet out = \"\";\n\nconst stream = await client.chatCompletionStream({\n    provider: \"{{ provider }}\",\n    model: \"{{ model.id }}\",\n{{ inputs.asTsString }}\n});\n\nfor await (const chunk of stream) {\n\tif (chunk.choices && chunk.choices.length > 0) {\n\t\tconst newContent = chunk.choices[0].delta.content;\n\t\tout += newContent;\n\t\tconsole.log(newContent);\n\t}  \n}",
      "textToImage": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst image = await client.textToImage({\n    provider: \"{{ provider }}\",\n    model: \"{{ model.id }}\",\n\tinputs: {{ inputs.asObj.inputs }},\n\tparameters: { num_inference_steps: 5 },\n});\n/// Use the generated image (it's a Blob)",
      "textToVideo": "import { InferenceClient } from \"@huggingface/inference\";\n\nconst client = new InferenceClient(\"{{ accessToken }}\");\n\nconst image = await client.textToVideo({\n    provider: \"{{ provider }}\",\n    model: \"{{ model.id }}\",\n\tinputs: {{ inputs.asObj.inputs }},\n});\n// Use the generated video (it's a Blob)"
    },
    "openai": {
      "conversational": "import { OpenAI } from \"openai\";\n\nconst client = new OpenAI({\n\tbaseURL: \"{{ baseUrl }}\",\n\tapiKey: \"{{ accessToken }}\",\n});\n\nconst chatCompletion = await client.chat.completions.create({\n\tmodel: \"{{ providerModelId }}\",\n{{ inputs.asTsString }}\n});\n\nconsole.log(chatCompletion.choices[0].message);",
      "conversationalStream": "import { OpenAI } from \"openai\";\n\nconst client = new OpenAI({\n\tbaseURL: \"{{ baseUrl }}\",\n\tapiKey: \"{{ accessToken }}\",\n});\n\nlet out = \"\";\n\nconst stream = await client.chat.completions.create({\n    provider: \"{{ provider }}\",\n    model: \"{{ model.id }}\",\n{{ inputs.asTsString }}\n});\n\nfor await (const chunk of stream) {\n\tif (chunk.choices && chunk.choices.length > 0) {\n\t\tconst newContent = chunk.choices[0].delta.content;\n\t\tout += newContent;\n\t\tconsole.log(newContent);\n\t}  \n}"
    }
  },
  "python": {
    "fal_client": {
      "textToImage": "{% if provider == \"fal-ai\" %}\nimport fal_client\n\nresult = fal_client.subscribe(\n    \"{{ providerModelId }}\",\n    arguments={\n        \"prompt\": {{ inputs.asObj.inputs }},\n    },\n)\nprint(result)\n{% endif %} "
    },
    "huggingface_hub": {
      "basic": "result = client.{{ methodName }}(\n    inputs={{ inputs.asObj.inputs }},\n    model=\"{{ model.id }}\",\n)",
      "basicAudio": "output = client.{{ methodName }}({{ inputs.asObj.inputs }}, model=\"{{ model.id }}\")",
      "basicImage": "output = client.{{ methodName }}({{ inputs.asObj.inputs }}, model=\"{{ model.id }}\")",
      "conversational": "completion = client.chat.completions.create(\n    model=\"{{ model.id }}\",\n{{ inputs.asPythonString }}\n)\n\nprint(completion.choices[0].message) ",
      "conversationalStream": "stream = client.chat.completions.create(\n    model=\"{{ model.id }}\",\n{{ inputs.asPythonString }}\n    stream=True,\n)\n\nfor chunk in stream:\n    print(chunk.choices[0].delta.content, end=\"\") ",
      "documentQuestionAnswering": "output = client.document_question_answering(\n    \"{{ inputs.asObj.image }}\",\n    question=\"{{ inputs.asObj.question }}\",\n    model=\"{{ model.id }}\",\n) ",
      "imageToImage": "# output is a PIL.Image object\nimage = client.image_to_image(\n    \"{{ inputs.asObj.inputs }}\",\n    prompt=\"{{ inputs.asObj.parameters.prompt }}\",\n    model=\"{{ model.id }}\",\n) ",
      "importInferenceClient": "from huggingface_hub import InferenceClient\n\nclient = InferenceClient(\n    provider=\"{{ provider }}\",\n    api_key=\"{{ accessToken }}\",\n)",
      "textToImage": "# output is a PIL.Image object\nimage = client.text_to_image(\n    {{ inputs.asObj.inputs }},\n    model=\"{{ model.id }}\",\n) ",
      "textToVideo": "video = client.text_to_video(\n    {{ inputs.asObj.inputs }},\n    model=\"{{ model.id }}\",\n) "
    },
    "openai": {
      "conversational": "from openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"{{ baseUrl }}\",\n    api_key=\"{{ accessToken }}\"\n)\n\ncompletion = client.chat.completions.create(\n    model=\"{{ providerModelId }}\",\n{{ inputs.asPythonString }}\n)\n\nprint(completion.choices[0].message) ",
      "conversationalStream": "from openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"{{ baseUrl }}\",\n    api_key=\"{{ accessToken }}\"\n)\n\nstream = client.chat.completions.create(\n    model=\"{{ providerModelId }}\",\n{{ inputs.asPythonString }}\n    stream=True,\n)\n\nfor chunk in stream:\n    print(chunk.choices[0].delta.content, end=\"\")"
    },
    "requests": {
      "basic": "def query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\noutput = query({\n    \"inputs\": {{ providerInputs.asObj.inputs }},\n}) ",
      "basicAudio": "def query(filename):\n    with open(filename, \"rb\") as f:\n        data = f.read()\n    response = requests.post(API_URL, headers={\"Content-Type\": \"audio/flac\", **headers}, data=data)\n    return response.json()\n\noutput = query({{ providerInputs.asObj.inputs }})",
      "basicImage": "def query(filename):\n    with open(filename, \"rb\") as f:\n        data = f.read()\n    response = requests.post(API_URL, headers={\"Content-Type\": \"image/jpeg\", **headers}, data=data)\n    return response.json()\n\noutput = query({{ providerInputs.asObj.inputs }})",
      "conversational": "def query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\nresponse = query({\n{{ providerInputs.asJsonString }}\n})\n\nprint(response[\"choices\"][0][\"message\"])",
      "conversationalStream": "def query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload, stream=True)\n    for line in response.iter_lines():\n        if not line.startswith(b\"data:\"):\n            continue\n        if line.strip() == b\"data: [DONE]\":\n            return\n        yield json.loads(line.decode(\"utf-8\").lstrip(\"data:\").rstrip(\"/n\"))\n\nchunks = query({\n{{ providerInputs.asJsonString }},\n    \"stream\": True,\n})\n\nfor chunk in chunks:\n    print(chunk[\"choices\"][0][\"delta\"][\"content\"], end=\"\")",
      "documentQuestionAnswering": "def query(payload):\n    with open(payload[\"image\"], \"rb\") as f:\n        img = f.read()\n        payload[\"image\"] = base64.b64encode(img).decode(\"utf-8\")\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\noutput = query({\n    \"inputs\": {\n        \"image\": \"{{ inputs.asObj.image }}\",\n        \"question\": \"{{ inputs.asObj.question }}\",\n    },\n}) ",
      "imageToImage": "def query(payload):\n    with open(payload[\"inputs\"], \"rb\") as f:\n        img = f.read()\n        payload[\"inputs\"] = base64.b64encode(img).decode(\"utf-8\")\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.content\n\nimage_bytes = query({\n{{ providerInputs.asJsonString }}\n})\n\n# You can access the image with PIL.Image for example\nimport io\nfrom PIL import Image\nimage = Image.open(io.BytesIO(image_bytes)) ",
      "importRequests": "{% if importBase64 %}\nimport base64\n{% endif %}\n{% if importJson %}\nimport json\n{% endif %}\nimport requests\n\nAPI_URL = \"{{ fullUrl }}\"\nheaders = {\"Authorization\": \"{{ authorizationHeader }}\"}",
      "tabular": "def query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.content\n\nresponse = query({\n    \"inputs\": {\n        \"data\": {{ providerInputs.asObj.inputs }}\n    },\n}) ",
      "textToAudio": "{% if model.library_name == \"transformers\" %}\ndef query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.content\n\naudio_bytes = query({\n    \"inputs\": {{ providerInputs.asObj.inputs }},\n})\n# You can access the audio with IPython.display for example\nfrom IPython.display import Audio\nAudio(audio_bytes)\n{% else %}\ndef query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\naudio, sampling_rate = query({\n    \"inputs\": {{ providerInputs.asObj.inputs }},\n})\n# You can access the audio with IPython.display for example\nfrom IPython.display import Audio\nAudio(audio, rate=sampling_rate)\n{% endif %} ",
      "textToImage": "{% if provider == \"hf-inference\" %}\ndef query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.content\n\nimage_bytes = query({\n    \"inputs\": {{ providerInputs.asObj.inputs }},\n})\n\n# You can access the image with PIL.Image for example\nimport io\nfrom PIL import Image\nimage = Image.open(io.BytesIO(image_bytes))\n{% endif %}",
      "zeroShotClassification": "def query(payload):\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\noutput = query({\n    \"inputs\": {{ providerInputs.asObj.inputs }},\n    \"parameters\": {\"candidate_labels\": [\"refund\", \"legal\", \"faq\"]},\n}) ",
      "zeroShotImageClassification": "def query(data):\n    with open(data[\"image_path\"], \"rb\") as f:\n        img = f.read()\n    payload={\n        \"parameters\": data[\"parameters\"],\n        \"inputs\": base64.b64encode(img).decode(\"utf-8\")\n    }\n    response = requests.post(API_URL, headers=headers, json=payload)\n    return response.json()\n\noutput = query({\n    \"image_path\": {{ providerInputs.asObj.inputs }},\n    \"parameters\": {\"candidate_labels\": [\"cat\", \"dog\", \"llama\"]},\n}) "
    }
  },
  "sh": {
    "curl": {
      "basic": "curl {{ fullUrl }} \\\n    -X POST \\\n    -H 'Authorization: {{ authorizationHeader }}' \\\n    -H 'Content-Type: application/json' \\\n    -d '{\n{{ providerInputs.asCurlString }}\n    }'",
      "basicAudio": "curl {{ fullUrl }} \\\n    -X POST \\\n    -H 'Authorization: {{ authorizationHeader }}' \\\n    -H 'Content-Type: audio/flac' \\\n    --data-binary @{{ providerInputs.asObj.inputs }}",
      "basicImage": "curl {{ fullUrl }} \\\n    -X POST \\\n    -H 'Authorization: {{ authorizationHeader }}' \\\n    -H 'Content-Type: image/jpeg' \\\n    --data-binary @{{ providerInputs.asObj.inputs }}",
      "conversational": "curl {{ fullUrl }} \\\n    -H 'Authorization: {{ authorizationHeader }}' \\\n    -H 'Content-Type: application/json' \\\n    -d '{\n{{ providerInputs.asCurlString }},\n        \"stream\": false\n    }'",
      "conversationalStream": "curl {{ fullUrl }} \\\n    -H 'Authorization: {{ authorizationHeader }}' \\\n    -H 'Content-Type: application/json' \\\n    -d '{\n{{ providerInputs.asCurlString }},\n        \"stream\": true\n    }'",
      "zeroShotClassification": "curl {{ fullUrl }} \\\n    -X POST \\\n    -d '{\"inputs\": {{ providerInputs.asObj.inputs }}, \"parameters\": {\"candidate_labels\": [\"refund\", \"legal\", \"faq\"]}}' \\\n    -H 'Content-Type: application/json' \\\n    -H 'Authorization: {{ authorizationHeader }}'"
    }
  }
} as const;

```
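
With the generated module, rendering a snippet no longer touches the file system, so it works in the browser too. A sketch of the consuming side (the model id and messages are illustrative; `model.id` and `inputs.asPythonString` are the variables the `conversational` template above expects):

```ts
import { Template } from "@huggingface/jinja";
import { templates } from "./templates.exported";

// Look up a template by language/client/name and render it.
const source = templates["python"]["huggingface_hub"]["conversational"];
const snippet = new Template(source).render({
	model: { id: "meta-llama/Llama-3.3-70B-Instruct" },
	inputs: { asPythonString: '    messages=[{"role": "user", "content": "Hello!"}],' },
});
console.log(snippet);
```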