Commit a8cfcf2
feat(checks): update the api
#### checks:v1alpha

The following keys were added:

- resources.aisafety.methods.classifyContent (Total Keys: 7)
- schemas.GoogleChecksAisafetyV1alphaClassifyContentRequest (Total Keys: 18)
- schemas.GoogleChecksAisafetyV1alphaClassifyContentResponse (Total Keys: 10)
- schemas.GoogleChecksAisafetyV1alphaTextInput (Total Keys: 4)
1 parent b153244 commit a8cfcf2

3 files changed: 339 additions, 1 deletion


docs/dyn/checks_v1alpha.aisafety.html

Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
+<html><body>
+<style>
+
+body, h1, h2, h3, div, span, p, pre, a {
+  margin: 0;
+  padding: 0;
+  border: 0;
+  font-weight: inherit;
+  font-style: inherit;
+  font-size: 100%;
+  font-family: inherit;
+  vertical-align: baseline;
+}
+
+body {
+  font-size: 13px;
+  padding: 1em;
+}
+
+h1 {
+  font-size: 26px;
+  margin-bottom: 1em;
+}
+
+h2 {
+  font-size: 24px;
+  margin-bottom: 1em;
+}
+
+h3 {
+  font-size: 20px;
+  margin-bottom: 1em;
+  margin-top: 1em;
+}
+
+pre, code {
+  line-height: 1.5;
+  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
+}
+
+pre {
+  margin-top: 0.5em;
+}
+
+h1, h2, h3, p {
+  font-family: Arial, sans serif;
+}
+
+h1, h2, h3 {
+  border-bottom: solid #CCC 1px;
+}
+
+.toc_element {
+  margin-top: 0.5em;
+}
+
+.firstline {
+  margin-left: 2 em;
+}
+
+.method {
+  margin-top: 1em;
+  border: solid 1px #CCC;
+  padding: 1em;
+  background: #EEE;
+}
+
+.details {
+  font-weight: bold;
+  font-size: 14px;
+}
+
+</style>
+
+<h1><a href="checks_v1alpha.html">Checks API</a> . <a href="checks_v1alpha.aisafety.html">aisafety</a></h1>
+<h2>Instance Methods</h2>
+<p class="toc_element">
+  <code><a href="#classifyContent">classifyContent(body=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Analyze a piece of content with the provided set of policies.</p>
+<p class="toc_element">
+  <code><a href="#close">close()</a></code></p>
+<p class="firstline">Close httplib2 connections.</p>
+<h3>Method Details</h3>
+<div class="method">
+  <code class="details" id="classifyContent">classifyContent(body=None, x__xgafv=None)</code>
+  <pre>Analyze a piece of content with the provided set of policies.
+
+Args:
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request proto for ClassifyContent RPC.
+  &quot;classifierVersion&quot;: &quot;A String&quot;, # Optional. Version of the classifier to use. If not specified, the latest version will be used.
+  &quot;context&quot;: { # Context about the input that will be used to help on the classification. # Optional. Context about the input that will be used to help on the classification.
+    &quot;prompt&quot;: &quot;A String&quot;, # Optional. Prompt that generated the model response.
+  },
+  &quot;input&quot;: { # Content to be classified. # Required. Content to be classified.
+    &quot;textInput&quot;: { # Text input to be classified. # Content in text format.
+      &quot;content&quot;: &quot;A String&quot;, # Actual piece of text to be classified.
+      &quot;languageCode&quot;: &quot;A String&quot;, # Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.
+    },
+  },
+  &quot;policies&quot;: [ # Required. List of policies to classify against.
+    { # List of policies to classify against.
+      &quot;policyType&quot;: &quot;A String&quot;, # Required. Type of the policy.
+      &quot;threshold&quot;: 3.14, # Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default 0.5 threshold for the policy will be used.
+    },
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response proto for ClassifyContent RPC.
+  &quot;policyResults&quot;: [ # Results of the classification for each policy.
+    { # Result for one policy against the corresponding input.
+      &quot;policyType&quot;: &quot;A String&quot;, # Type of the policy.
+      &quot;score&quot;: 3.14, # Final score for the results of this policy.
+      &quot;violationResult&quot;: &quot;A String&quot;, # Result of the classification for the policy.
+    },
+  ],
+}</pre>
+</div>
+
+<div class="method">
+  <code class="details" id="close">close()</code>
+  <pre>Close httplib2 connections.</pre>
+</div>
+
+</body></html>
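For reference, a minimal sketch of how the newly documented `classifyContent` method could be called through the generated Python client. The request body follows the structure documented above; the sample text, language code, policy types, and threshold are purely illustrative, and the sketch assumes the Checks API is enabled and that Application Default Credentials (or another supported auth mechanism) are configured.

```python
# Sketch only (not part of this commit): calling aisafety.classifyContent
# through the generated client. Auth setup is an assumption.
from googleapiclient.discovery import build

service = build("checks", "v1alpha")

body = {
    "input": {
        "textInput": {
            "content": "Example model response to classify.",  # illustrative text
            "languageCode": "en",
        },
    },
    "context": {"prompt": "Example user prompt."},  # optional context
    "policies": [
        {"policyType": "HARASSMENT"},  # uses the default 0.5 threshold
        {"policyType": "DANGEROUS_CONTENT", "threshold": 0.7},  # illustrative threshold
    ],
}

response = service.aisafety().classifyContent(body=body).execute()
```

Per the request schema added below, `classifierVersion` could also be set (for example to `STABLE` or `LATEST`); when omitted, the latest version is used.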

docs/dyn/checks_v1alpha.html

Lines changed: 5 additions & 0 deletions
@@ -79,6 +79,11 @@ <h2>Instance Methods</h2>
 </p>
 <p class="firstline">Returns the accounts Resource.</p>
 
+<p class="toc_element">
+  <code><a href="checks_v1alpha.aisafety.html">aisafety()</a></code>
+</p>
+<p class="firstline">Returns the aisafety Resource.</p>
+
 <p class="toc_element">
   <code><a href="checks_v1alpha.media.html">media()</a></code>
 </p>

googleapiclient/discovery_cache/documents/checks.v1alpha.json

Lines changed: 199 additions & 1 deletion
@@ -401,6 +401,25 @@
         }
       }
     },
+    "aisafety": {
+      "methods": {
+        "classifyContent": {
+          "description": "Analyze a piece of content with the provided set of policies.",
+          "flatPath": "v1alpha/aisafety:classifyContent",
+          "httpMethod": "POST",
+          "id": "checks.aisafety.classifyContent",
+          "parameterOrder": [],
+          "parameters": {},
+          "path": "v1alpha/aisafety:classifyContent",
+          "request": {
+            "$ref": "GoogleChecksAisafetyV1alphaClassifyContentRequest"
+          },
+          "response": {
+            "$ref": "GoogleChecksAisafetyV1alphaClassifyContentResponse"
+          }
+        }
+      }
+    },
     "media": {
       "methods": {
         "upload": {
@@ -444,7 +463,7 @@
       }
     }
   },
-  "revision": "20241025",
+  "revision": "20241029",
   "rootUrl": "https://checks.googleapis.com/",
   "schemas": {
     "CancelOperationRequest": {
@@ -492,6 +511,185 @@
       },
       "type": "object"
     },
+    "GoogleChecksAisafetyV1alphaClassifyContentRequest": {
+      "description": "Request proto for ClassifyContent RPC.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentRequest",
+      "properties": {
+        "classifierVersion": {
+          "description": "Optional. Version of the classifier to use. If not specified, the latest version will be used.",
+          "enum": [
+            "CLASSIFIER_VERSION_UNSPECIFIED",
+            "STABLE",
+            "LATEST"
+          ],
+          "enumDescriptions": [
+            "Unspecified version.",
+            "Stable version.",
+            "Latest version."
+          ],
+          "type": "string"
+        },
+        "context": {
+          "$ref": "GoogleChecksAisafetyV1alphaClassifyContentRequestContext",
+          "description": "Optional. Context about the input that will be used to help on the classification."
+        },
+        "input": {
+          "$ref": "GoogleChecksAisafetyV1alphaClassifyContentRequestInputContent",
+          "description": "Required. Content to be classified."
+        },
+        "policies": {
+          "description": "Required. List of policies to classify against.",
+          "items": {
+            "$ref": "GoogleChecksAisafetyV1alphaClassifyContentRequestPolicyConfig"
+          },
+          "type": "array"
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaClassifyContentRequestContext": {
+      "description": "Context about the input that will be used to help on the classification.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentRequestContext",
+      "properties": {
+        "prompt": {
+          "description": "Optional. Prompt that generated the model response.",
+          "type": "string"
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaClassifyContentRequestInputContent": {
+      "description": "Content to be classified.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentRequestInputContent",
+      "properties": {
+        "textInput": {
+          "$ref": "GoogleChecksAisafetyV1alphaTextInput",
+          "description": "Content in text format."
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaClassifyContentRequestPolicyConfig": {
+      "description": "List of policies to classify against.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentRequestPolicyConfig",
+      "properties": {
+        "policyType": {
+          "description": "Required. Type of the policy.",
+          "enum": [
+            "POLICY_TYPE_UNSPECIFIED",
+            "DANGEROUS_CONTENT",
+            "PII_SOLICITING_RECITING",
+            "HARASSMENT",
+            "SEXUALLY_EXPLICIT",
+            "HATE_SPEECH",
+            "MEDICAL_INFO",
+            "VIOLENCE_AND_GORE",
+            "OBSCENITY_AND_PROFANITY"
+          ],
+          "enumDescriptions": [
+            "Default.",
+            "The model facilitates, promotes or enables access to harmful goods, services, and activities.",
+            "The model reveals an individual\u2019s personal information and data.",
+            "The model generates content that is malicious, intimidating, bullying, or abusive towards another individual.",
+            "The model generates content that is sexually explicit in nature.",
+            "The model promotes violence, hatred, discrimination on the basis of race, religion, etc.",
+            "The model facilitates harm by providing health advice or guidance.",
+            "The model generates content that contains gratuitous, realistic descriptions of violence or gore.",
+            ""
+          ],
+          "type": "string"
+        },
+        "threshold": {
+          "description": "Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default 0.5 threshold for the policy will be used.",
+          "format": "float",
+          "type": "number"
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaClassifyContentResponse": {
+      "description": "Response proto for ClassifyContent RPC.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentResponse",
+      "properties": {
+        "policyResults": {
+          "description": "Results of the classification for each policy.",
+          "items": {
+            "$ref": "GoogleChecksAisafetyV1alphaClassifyContentResponsePolicyResult"
+          },
+          "type": "array"
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaClassifyContentResponsePolicyResult": {
+      "description": "Result for one policy against the corresponding input.",
+      "id": "GoogleChecksAisafetyV1alphaClassifyContentResponsePolicyResult",
+      "properties": {
+        "policyType": {
+          "description": "Type of the policy.",
+          "enum": [
+            "POLICY_TYPE_UNSPECIFIED",
+            "DANGEROUS_CONTENT",
+            "PII_SOLICITING_RECITING",
+            "HARASSMENT",
+            "SEXUALLY_EXPLICIT",
+            "HATE_SPEECH",
+            "MEDICAL_INFO",
+            "VIOLENCE_AND_GORE",
+            "OBSCENITY_AND_PROFANITY"
+          ],
+          "enumDescriptions": [
+            "Default.",
+            "The model facilitates, promotes or enables access to harmful goods, services, and activities.",
+            "The model reveals an individual\u2019s personal information and data.",
+            "The model generates content that is malicious, intimidating, bullying, or abusive towards another individual.",
+            "The model generates content that is sexually explicit in nature.",
+            "The model promotes violence, hatred, discrimination on the basis of race, religion, etc.",
+            "The model facilitates harm by providing health advice or guidance.",
+            "The model generates content that contains gratuitous, realistic descriptions of violence or gore.",
+            ""
+          ],
+          "type": "string"
+        },
+        "score": {
+          "description": "Final score for the results of this policy.",
+          "format": "float",
+          "type": "number"
+        },
+        "violationResult": {
+          "description": "Result of the classification for the policy.",
+          "enum": [
+            "VIOLATION_RESULT_UNSPECIFIED",
+            "VIOLATIVE",
+            "NON_VIOLATIVE",
+            "CLASSIFICATION_ERROR"
+          ],
+          "enumDescriptions": [
+            "Unspecified result.",
+            "The final score is greater or equal the input score threshold.",
+            "The final score is smaller than the input score threshold.",
+            "There was an error and the violation result could not be determined."
+          ],
+          "type": "string"
+        }
+      },
+      "type": "object"
+    },
+    "GoogleChecksAisafetyV1alphaTextInput": {
+      "description": "Text input to be classified.",
+      "id": "GoogleChecksAisafetyV1alphaTextInput",
+      "properties": {
+        "content": {
+          "description": "Actual piece of text to be classified.",
+          "type": "string"
+        },
+        "languageCode": {
+          "description": "Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.",
+          "type": "string"
+        }
+      },
+      "type": "object"
+    },
     "GoogleChecksReportV1alphaAnalyzeUploadRequest": {
       "description": "The request message for ReportService.AnalyzeUpload.",
       "id": "GoogleChecksReportV1alphaAnalyzeUploadRequest",
