
Commit 2650159

fix(api): Fix evals and code interpreter interfaces
1 parent 26d715f

25 files changed: +162 -88 lines

.stats.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d4bcffecf0cdadf746faa6708ed1ec81fac451f9b857deabbab26f0a343b9314.yml
-openapi_spec_hash: 7c54a18b4381248bda7cc34c52142615
-config_hash: e618aa8ff61aea826540916336de65a6
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-2bcc845d8635bf93ddcf9ee723af4d7928248412a417bee5fc10d863a1e13867.yml
+openapi_spec_hash: 865230cb3abeb01bd85de05891af23c4
+config_hash: ed1e6b3c5f93d12b80d31167f55c557c
```

api.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -784,7 +784,7 @@ Methods:
 - <code title="post /responses">client.responses.<a href="./src/openai/resources/responses/responses.py">create</a>(\*\*<a href="src/openai/types/responses/response_create_params.py">params</a>) -> <a href="./src/openai/types/responses/response.py">Response</a></code>
 - <code title="get /responses/{response_id}">client.responses.<a href="./src/openai/resources/responses/responses.py">retrieve</a>(response_id, \*\*<a href="src/openai/types/responses/response_retrieve_params.py">params</a>) -> <a href="./src/openai/types/responses/response.py">Response</a></code>
 - <code title="delete /responses/{response_id}">client.responses.<a href="./src/openai/resources/responses/responses.py">delete</a>(response_id) -> None</code>
-- <code title="post /responses/{response_id}/cancel">client.responses.<a href="./src/openai/resources/responses/responses.py">cancel</a>(response_id) -> None</code>
+- <code title="post /responses/{response_id}/cancel">client.responses.<a href="./src/openai/resources/responses/responses.py">cancel</a>(response_id) -> <a href="./src/openai/types/responses/response.py">Response</a></code>
 
 ## InputItems
 
```
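A minimal sketch of the updated call from the Python SDK: `cancel` now returns the cancelled `Response` instead of `None`, so its fields can be inspected directly. The response ID below is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Cancel an in-flight response. As of this change, the method returns the
# cancelled Response object rather than None.
cancelled = client.responses.cancel("resp_123")  # placeholder response ID
print(cancelled.id, cancelled.status)
```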

src/openai/resources/chat/completions/completions.py

Lines changed: 12 additions & 12 deletions
```diff
@@ -263,9 +263,9 @@ def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -541,9 +541,9 @@ def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -810,9 +810,9 @@ def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -1366,9 +1366,9 @@ async def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -1644,9 +1644,9 @@ async def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -1913,9 +1913,9 @@ async def create(
   utilize scale tier credits until they are exhausted.
 - If set to 'auto', and the Project is not Scale tier enabled, the request will
   be processed using the default service tier with a lower uptime SLA and no
-  latency guarentee.
+  latency guarantee.
 - If set to 'default', the request will be processed using the default service
-  tier with a lower uptime SLA and no latency guarentee.
+  tier with a lower uptime SLA and no latency guarantee.
 - If set to 'flex', the request will be processed with the Flex Processing
   service tier.
   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
```
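For reference, a minimal sketch of how the `service_tier` option documented above is passed to `chat.completions.create`; the model name and prompt are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# service_tier="auto" uses scale tier credits when the project has them;
# otherwise the request runs on the default tier with a lower uptime SLA
# and no latency guarantee, as the corrected docstring describes.
completion = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello."}],
    service_tier="auto",
)
print(completion.service_tier, completion.choices[0].message.content)
```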

src/openai/resources/fine_tuning/alpha/graders.py

Lines changed: 20 additions & 10 deletions
```diff
@@ -2,8 +2,6 @@
 
 from __future__ import annotations
 
-from typing import Union, Iterable
-
 import httpx
 
 from .... import _legacy_response
@@ -45,7 +43,7 @@ def run(
         *,
         grader: grader_run_params.Grader,
         model_sample: str,
-        reference_answer: Union[str, Iterable[object], float, object],
+        item: object | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -59,9 +57,15 @@ def run(
         Args:
           grader: The grader used for the fine-tuning job.
 
-          model_sample: The model sample to be evaluated.
+          model_sample: The model sample to be evaluated. This value will be used to populate the
+              `sample` namespace. See
+              [the guide](https://platform.openai.com/docs/guides/graders) for more details.
+              The `output_json` variable will be populated if the model sample is a valid JSON
+              string.
 
-          reference_answer: The reference answer for the evaluation.
+          item: The dataset item provided to the grader. This will be used to populate the
+              `item` namespace. See
+              [the guide](https://platform.openai.com/docs/guides/graders) for more details.
 
           extra_headers: Send extra headers
 
@@ -77,7 +81,7 @@ def run(
             {
                 "grader": grader,
                 "model_sample": model_sample,
-                "reference_answer": reference_answer,
+                "item": item,
             },
             grader_run_params.GraderRunParams,
         ),
@@ -147,7 +151,7 @@ async def run(
         *,
         grader: grader_run_params.Grader,
         model_sample: str,
-        reference_answer: Union[str, Iterable[object], float, object],
+        item: object | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -161,9 +165,15 @@ async def run(
         Args:
           grader: The grader used for the fine-tuning job.
 
-          model_sample: The model sample to be evaluated.
+          model_sample: The model sample to be evaluated. This value will be used to populate the
+              `sample` namespace. See
+              [the guide](https://platform.openai.com/docs/guides/graders) for more details.
+              The `output_json` variable will be populated if the model sample is a valid JSON
+              string.
 
-          reference_answer: The reference answer for the evaluation.
+          item: The dataset item provided to the grader. This will be used to populate the
+              `item` namespace. See
+              [the guide](https://platform.openai.com/docs/guides/graders) for more details.
 
           extra_headers: Send extra headers
 
@@ -179,7 +189,7 @@ async def run(
             {
                 "grader": grader,
                 "model_sample": model_sample,
-                "reference_answer": reference_answer,
+                "item": item,
             },
             grader_run_params.GraderRunParams,
         ),
```
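A minimal sketch of the reworked call, where the former `reference_answer` argument is replaced by an optional `item` dict that populates the `item` namespace; the string-check grader, its template variables, and the sample values are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Run a grader against a model sample. `item` supplies the dataset row that
# the grader's {{item.*}} template references resolve against, while
# model_sample populates the sample namespace.
result = client.fine_tuning.alpha.graders.run(
    grader={
        "type": "string_check",
        "name": "example-string-check",  # illustrative grader definition
        "input": "{{sample.output_text}}",
        "reference": "{{item.expected}}",
        "operation": "eq",
    },
    model_sample="hello world",
    item={"expected": "hello world"},
)
print(result)
```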

src/openai/resources/images.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -144,7 +144,7 @@ def edit(
           image: The image(s) to edit. Must be a supported image file or an array of images.
 
           For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
-          25MB. You can provide up to 16 images.
+          50MB. You can provide up to 16 images.
 
           For `dall-e-2`, you can only provide one image, and it should be a square `png`
           file less than 4MB.
@@ -468,7 +468,7 @@ async def edit(
           image: The image(s) to edit. Must be a supported image file or an array of images.
 
           For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
-          25MB. You can provide up to 16 images.
+          50MB. You can provide up to 16 images.
 
           For `dall-e-2`, you can only provide one image, and it should be a square `png`
           file less than 4MB.
```
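A minimal sketch of an edit call covered by the updated limit (each `gpt-image-1` input image may now be up to 50MB); the file paths and prompt are illustrative.

```python
import base64

from openai import OpenAI

client = OpenAI()

# Each gpt-image-1 input image may now be up to 50MB (the docs previously
# said 25MB); up to 16 images can still be supplied per request.
with open("photo.png", "rb") as image_file:  # illustrative local file
    result = client.images.edit(
        model="gpt-image-1",
        image=image_file,
        prompt="Add a small sailboat on the horizon",
    )

# Decode and save the returned base64 image, if present.
if result.data and result.data[0].b64_json:
    with open("edited.png", "wb") as out:
        out.write(base64.b64decode(result.data[0].b64_json))
```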
