
Commit 3000dc4

Nicer help headers
1 parent 492fefc commit 3000dc4

3 files changed: +69 -35 lines changed

azureChat.m

Lines changed: 26 additions & 16 deletions
@@ -1,30 +1,33 @@
 classdef(Sealed) azureChat < llms.internal.textGenerator & llms.internal.gptPenalties
 %azureChat Chat completion API from Azure.
 %
-% CHAT = azureChat(endpoint, deploymentID) creates an azureChat object with the
-% endpoint and deployment ID path parameters required by Azure to establish the connection.
+% CHAT = azureChat(endpoint, deploymentID) creates an azureChat object with
+% the endpoint and deployment ID path parameters required by Azure to
+% establish the connection.
 %
 % CHAT = azureChat(__,systemPrompt) creates an azureChat object with the
 % specified system prompt.
 %
 % CHAT = azureChat(__,Name=Value) specifies additional options
 % using one or more name-value arguments:
 %
-% Tools - A list of tools the model can call.
-% This parameter requires API version 2023-12-01-preview.
-%
-% API Version - A list of API versions to use for this operation.
-% Default value is 2024-02-01.
-%
 % Temperature - Temperature value for controlling the randomness
-% of the output. Default value is 1.
+% of the output. Default value is 1; higher values
+% increase the randomness (in some sense,
+% the “creativity”) of outputs, lower values
+% reduce it. Setting Temperature=0 removes
+% randomness from the output altogether.
 %
 % TopProbabilityMass - Top probability mass value for controlling the
-% diversity of the output. Default value is 1.
+% diversity of the output. Default value is 1;
+% lower values imply that only the more likely
+% words can appear in any particular place.
+% This is also known as top-p sampling.
 %
 % StopSequences - Vector of strings that when encountered, will
 % stop the generation of tokens. Default
 % value is empty.
+% Example: ["The end.", "And that's all she wrote."]
 %
 % ResponseFormat - The format of response the model returns.
 % "text" (default) | "json"
@@ -33,14 +36,21 @@
 %
 % PresencePenalty - Penalty value for using a token in the response
 % that has already been used. Default value is 0.
+% Higher values reduce repetition of words in the output.
 %
 % FrequencyPenalty - Penalty value for using a token that is frequent
-% in the training data. Default value is 0.
+% in the output. Default value is 0.
+% Higher values reduce repetition of words in the output.
 %
-% StreamFun - Function to callback when streaming the
-% result
+% StreamFun - Function to callback when streaming the result
+%
+% TimeOut - Connection Timeout in seconds. Default value is 10.
+%
+% Tools - A list of tools the model can call.
 %
-% TimeOut - Connection Timeout in seconds (default: 10 secs)
+% API Version - The API version to use for this model.
+% "2024-02-01" (default) | "2023-05-15" | "2024-05-01-preview" | ...
+% "2024-04-01-preview" | "2024-03-01-preview"
 %
 %
 %
@@ -66,9 +76,9 @@
 % FunctionNames - Names of the functions that the model can
 % request calls.
 %
-% ResponseFormat - Specifies the response format, text or json
+% ResponseFormat - Specifies the response format, "text" or "json".
 %
-% TimeOut - Connection Timeout in seconds (default: 10 secs)
+% TimeOut - Connection Timeout in seconds.
 %

 % Copyright 2023-2024 The MathWorks, Inc.
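
To make the documented options concrete, here is a minimal usage sketch; the endpoint, deployment name, and the assumption that the "API Version" option is spelled APIVersion in name-value form are placeholders, not taken from this commit.

```matlab
% Minimal sketch: endpoint and deployment are placeholders for a real
% Azure OpenAI resource; APIVersion is assumed to be the name-value
% spelling of the "API Version" option documented above.
endpoint = "https://my-resource.openai.azure.com";  % placeholder
deployment = "my-gpt35-deployment";                 % placeholder
chat = azureChat(endpoint, deployment, ...
    "You are a helpful assistant.", ...             % system prompt
    APIVersion="2024-02-01", ...                    % documented default version
    Temperature=0, ...                              % remove randomness entirely
    StopSequences=["The end."], ...                 % stop generation at this string
    TimeOut=30);                                    % raise the 10 s default
```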

ollamaChat.m

Lines changed: 23 additions & 11 deletions
@@ -12,41 +12,53 @@
 % Temperature - Temperature value for controlling the randomness
 % of the output. Default value depends on the model;
 % if not specified in the model, defaults to 0.8.
+% Higher values increase the randomness (in some
+% sense, the “creativity”) of outputs, lower
+% values reduce it. Setting Temperature=0 removes
+% randomness from the output altogether.
 %
 % TopProbabilityMass - Top probability mass value for controlling the
-% diversity of the output. Default value is 1; with
-% smaller value TopProbabilityMass=p, only the most
-% probable tokens up to a cumulative probability p
-% are used.
+% diversity of the output. Default value is 1;
+% lower values imply that only the more likely
+% words can appear in any particular place.
+% This is also known as top-p sampling.
 %
 % TopProbabilityNum - Maximum number of most likely tokens that are
 % considered for output. Default is Inf, allowing
 % all tokens. Smaller values reduce diversity in
 % the output.
 %
+% TailFreeSamplingZ - Reduce the use of less probable tokens, based on
+% the second-order differences of ordered
+% probabilities. Default value is 1, disabling
+% tail-free sampling. Lower values reduce
+% diversity, with some authors recommending
+% values around 0.95. Tail-free sampling is
+% slower than using TopProbabilityMass or
+% TopProbabilityNum.
+%
 % StopSequences - Vector of strings that when encountered, will
 % stop the generation of tokens. Default
 % value is empty.
+% Example: ["The end.", "And that's all she wrote."]
+%
 %
 % ResponseFormat - The format of response the model returns.
 % "text" (default) | "json"
 %
-% TailFreeSamplingZ - Reduce the use of less probable tokens, based on
-% the second-order differences of ordered probabilities.
-%
 % StreamFun - Function to callback when streaming the
-% result
+% result.
 %
-% TimeOut - Connection Timeout in seconds (default: 120 secs)
+% TimeOut - Connection Timeout in seconds. Default is 120.
 %
 %
 %
 % ollamaChat Functions:
-% ollamaChat - Chat completion API from OpenAI.
+% ollamaChat - Chat completion API from OpenAI.
 % generate - Generate a response using the ollamaChat instance.
 %
 % ollamaChat Properties, in addition to the name-value pairs above:
-% Model - Model name (as expected by Ollama server)
+% Model - Model name (as expected by Ollama server).
 %
 % SystemPrompt - System prompt.

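As a sketch of how these options combine, the following assumes a local Ollama server with a hypothetical "mistral" model pulled; the model name and prompts are illustrative only.

```matlab
% Minimal sketch: "mistral" is a placeholder for any model the local
% Ollama server can serve.
chat = ollamaChat("mistral", ...
    "You are a concise assistant.", ...  % system prompt
    Temperature=0.8, ...                 % typical model default
    TailFreeSamplingZ=0.95, ...          % value some authors recommend
    TimeOut=120);                        % documented default timeout
txt = generate(chat, "Summarize tail-free sampling in one sentence.");
```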
openAIChat.m

Lines changed: 20 additions & 8 deletions
@@ -9,27 +9,39 @@
 % CHAT = openAIChat(systemPrompt,Name=Value) specifies additional options
 % using one or more name-value arguments:
 %
-% Tools - Array of openAIFunction objects representing
-% custom functions to be used during chat completions.
-%
 % ModelName - Name of the model to use for chat completions.
 % The default value is "gpt-3.5-turbo".
 %
 % Temperature - Temperature value for controlling the randomness
-% of the output. Default value is 1.
+% of the output. Default value is 1; higher values
+% increase the randomness (in some sense,
+% the “creativity”) of outputs, lower values
+% reduce it. Setting Temperature=0 removes
+% randomness from the output altogether.
 %
 % TopProbabilityMass - Top probability mass value for controlling the
-% diversity of the output. Default value is 1.
+% diversity of the output. Default value is 1;
+% lower values imply that only the more likely
+% words can appear in any particular place.
+% This is also known as top-p sampling.
+%
+% Tools - Array of openAIFunction objects representing
+% custom functions to be used during chat completions.
 %
 % StopSequences - Vector of strings that when encountered, will
 % stop the generation of tokens. Default
 % value is empty.
+% Example: ["The end.", "And that's all she wrote."]
 %
 % PresencePenalty - Penalty value for using a token in the response
 % that has already been used. Default value is 0.
+% Higher values reduce repetition of words in the output.
 %
 % FrequencyPenalty - Penalty value for using a token that is frequent
-% in the training data. Default value is 0.
+% in the output. Default value is 0.
+% Higher values reduce repetition of words in the output.
+%
+% TimeOut - Connection Timeout in seconds. Default value is 10.
 %
 % StreamFun - Function to callback when streaming the
 % result
@@ -61,9 +73,9 @@
 % FunctionNames - Names of the functions that the model can
 % request calls.
 %
-% ResponseFormat - Specifies the response format, text or json
+% ResponseFormat - Specifies the response format, "text" or "json".
 %
-% TimeOut - Connection Timeout in seconds (default: 10 secs)
+% TimeOut - Connection Timeout in seconds.
 %

 % Copyright 2023-2024 The MathWorks, Inc.
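
A minimal sketch of openAIChat with the options above, assuming a valid OpenAI API key is available to the session; the streaming callback and prompt are illustrative, not part of this commit.

```matlab
% Minimal sketch: prints streamed tokens as they arrive. Assumes the
% session can reach the OpenAI API with a valid key.
printToken = @(token) fprintf("%s", token);  % StreamFun callback
chat = openAIChat("You are a helpful assistant.", ...
    ModelName="gpt-3.5-turbo", ...    % documented default model
    TopProbabilityMass=0.9, ...       % top-p sampling
    PresencePenalty=0.5, ...          % discourage reusing tokens
    FrequencyPenalty=0.5, ...         % discourage frequent tokens
    TimeOut=10, ...                   % documented default
    StreamFun=printToken);
response = generate(chat, "Write a haiku about code review.");
```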
