Clarification on JSON_OUTPUT field in text-generation-launcher command. #1589
Unanswered. ansSanthoshM asked this question in Q&A.
Replies: 1 comment
- According to the cli.py code on main, it should be a Boolean. Here is the code snippet:

  ```python
  @app.command()
  def serve(
      model_id: str,
      revision: Optional[str] = None,
      sharded: bool = False,
      quantize: Optional[Quantization] = None,
      speculate: Optional[int] = None,
      dtype: Optional[Dtype] = None,
      trust_remote_code: bool = False,
      uds_path: Path = "/tmp/text-generation-server",
      logger_level: str = "INFO",
      json_output: bool = False,
      otlp_endpoint: Optional[str] = None,
  ):
      ...
  ```
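  For illustration only: the snippet above uses typer, but the general pattern for a Boolean CLI option is that the flag's *presence* sets it to `True`, rather than an explicit `true`/`false` value or a file path being passed. A minimal sketch of that pattern using the standard-library `argparse` (not TGI's actual code) looks like this:

  ```python
  import argparse

  # Hypothetical parser, mimicking a boolean --json-output flag.
  # With action="store_true", the option takes no value on the
  # command line: present -> True, absent -> default (False).
  parser = argparse.ArgumentParser()
  parser.add_argument("--json-output", action="store_true", default=False)

  # Omitting the flag leaves it at its default.
  args = parser.parse_args([])
  print(args.json_output)  # False

  # Passing the bare flag switches it on; no value, no file path.
  args = parser.parse_args(["--json-output"])
  print(args.json_output)  # True
  ```

  Under this reading, `--json-output` changes the log *format* rather than naming an output file, so logs would still go wherever the launcher's output is directed (e.g. stdout).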
-
Hi All,
I am setting JSON_OUTPUT to True in the text-generation-launcher command.
https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher#jsonoutput
**Should the value of `--json-output` be a Boolean, like below?**

```shell
text-generation-launcher --model-id /data/$model --max-total-tokens 4096 --json-output true
```

In this case, what is the location of the JSON output file?
OR
**Should the value of `--json-output` be a file path, like below?**

```shell
text-generation-launcher --model-id /data/$model --max-total-tokens 4096 --json-output /home/$mode_log.json
```