
Update conversational inference API snippets #976


Merged · 18 commits into main · Oct 24, 2024

Conversation

mishig25 (Collaborator) commented Oct 21, 2024

Description

This PR updates the signatures of the inference snippet-generating functions. Previously, each function returned a `string`; now it returns `InferenceSnippet | InferenceSnippet[]`.

```ts
export interface InferenceSnippet {
	content: string;
	client?: string; // for instance, `client` could be the `huggingface_hub` or `openai` client for Python snippets
}
```

Why do we need to generate `InferenceSnippet[]` rather than a single `InferenceSnippet`?
Because for a given language (Python, say), we want to show multiple client options (`huggingface_hub`, `openai`); see the attached video below and the sketch after this paragraph.
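
As an illustration, a generator for Python conversational snippets could return one `InferenceSnippet` per client. This is a minimal sketch under that assumption: the function name `getPythonConversationalSnippets` and the snippet bodies are hypothetical, not the code shipped in this PR.

```ts
// Hypothetical sketch: the function name and snippet bodies are illustrative,
// not the actual generator added in this PR.
function getPythonConversationalSnippets(modelId: string): InferenceSnippet[] {
	const messages = `messages = [{"role": "user", "content": "What is the capital of France?"}]`;
	return [
		{
			client: "huggingface_hub",
			content: [
				"from huggingface_hub import InferenceClient",
				'client = InferenceClient(api_key="hf_***")',
				messages,
				`output = client.chat_completion(model="${modelId}", messages=messages, max_tokens=500)`,
			].join("\n"),
		},
		{
			client: "openai",
			content: [
				"from openai import OpenAI",
				'client = OpenAI(base_url="https://api-inference.huggingface.co/v1/", api_key="hf_***")',
				messages,
				`output = client.chat.completions.create(model="${modelId}", messages=messages, max_tokens=500)`,
			].join("\n"),
		},
	];
}
```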

This PR also greatly improves the conversational snippets themselves.

Screen recording

Screen.Recording.2024-10-22.at.11.32.06.mov

@mishig25 mishig25 changed the title [wip] Update conversational snippets [wip] Update conversational inference API snippets Oct 21, 2024
@mishig25 mishig25 changed the title [wip] Update conversational inference API snippets Update conversational inference API snippets Oct 22, 2024
@mishig25 mishig25 marked this pull request as ready for review October 22, 2024 14:45
mishig25 (Collaborator, Author) commented:

ready for review

Wauplin (Contributor) left a comment


Thanks @mishig25! The results look amazing and will definitely settle the debate over which snippets to display 😄 As mentioned on the moon-landing PR, I would tend to always return `InferenceSnippet[]` to avoid confusion, but if you think otherwise, I'm really fine keeping it as it is now.
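
For what it's worth, the union return type pushes a normalization step onto every consumer, along these lines (`asSnippetArray` is a hypothetical helper, not part of this PR):

```ts
// Hypothetical consumer-side helper: normalizes the union return type so
// downstream code can always iterate over an array of snippets.
function asSnippetArray(result: InferenceSnippet | InferenceSnippet[]): InferenceSnippet[] {
	return Array.isArray(result) ? result : [result];
}
```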

Once this is merged, I can take care of adding InferenceClient snippets for many tasks in Python (similar to #971 but with your new structure).

Note: this change will break the script that generates the Inference API docs (here). I'll open a PR once this is merged/deployed.

julien-c (Member) commented:

are the selected tab + sub-tab sticky? (not sure if they should be, cc @gary149)

gary149 (Collaborator) commented Oct 24, 2024

> are the selected tab + sub-tab sticky? (not sure if they should be, cc @gary149)

Currently no (I think it's fine)

gary149 (Collaborator) left a comment


Nice! I didn't try everything, but I see an issue with some snippets, e.g.:

http://localhost:5564/black-forest-labs/FLUX.1-dev?inference_api=true

[screenshot of the broken snippet]

mishig25 (Collaborator, Author) commented:

@gary149 8399207 should fix the issues you mentioned.

If you are testing on moon, make sure to build hfjs again first:

```sh
cd packages/tasks
pnpm i
pnpm build
```

@gary149 gary149 self-requested a review October 24, 2024 14:15
@mishig25 mishig25 merged commit fd62db4 into main Oct 24, 2024
3 of 5 checks passed
@mishig25 mishig25 deleted the imrpove_inference_snippet branch October 24, 2024 14:17
mishig25 added a commit that referenced this pull request Oct 24, 2024
mishig25 added a commit that referenced this pull request Oct 24, 2024
mishig25 added a commit that referenced this pull request Oct 28, 2024
Thanks to #976, we can now show `hf_hub` and `oai` snippets for VLMs ("conversational image-text-to-text" models).