Merge pull request #93 from redhat-developer/chore/generate-sdks-1677025495

chore(all): re-generate SDKs
jackdelahunt authored Feb 22, 2023
2 parents 3c2c52e + c92d00b commit d97c4c8
Showing 195 changed files with 3,110 additions and 1,099 deletions.
4 changes: 2 additions & 2 deletions .openapi/connector_mgmt.yaml
@@ -8,7 +8,7 @@ info:
license:
name: 'Apache 2.0'
url: 'https://www.apache.org/licenses/LICENSE-2.0'
contact:
contact:
name: 'Red Hat OpenShift Streams for Apache Kafka Support'
email: '[email protected]'
servers:
@@ -1653,7 +1653,7 @@ components:
* Connector Types: id, created_at, updated_at, version, name, description, label, channel, featured_rank, pricing_tier
* Connectors: id, created_at, updated_at, name, owner, organisation_id, connector_type_id, desired_state, state, channel, namespace_id, kafka_id, kafka_bootstrap_server, service_account_client_id, schema_registry_id, schema_registry_url
Allowed operators are `<>`, `=`, `LIKE`, or `ILIKE`.
Allowed operators are `<>`, `=`, `IN`, `NOT IN`, `LIKE`, or `ILIKE`.
Allowed conjunctive operators are `AND` and `OR`. However, you can use a maximum of 10 conjunctions in a search query.
Examples:
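The hunk above widens the comparison operators accepted by the Connectors `search` parameter to include `IN` and `NOT IN`. The spec's own examples are collapsed in this view; the sketch below is only a hedged illustration of building such a search expression and URL-encoding it as a query string. The listing path and paging parameters are assumptions for the sake of the example, not taken from this diff.

```python
from urllib.parse import urlencode

# Search expression using the newly allowed IN operator; `name` and
# `desired_state` are listed above as searchable Connector fields.
search = "name LIKE 'my-connector%' AND desired_state IN ('ready', 'stopped')"
query = urlencode({"search": search, "page": 1, "size": 20})

base_url = "https://api.openshift.com"             # production server (assumed, as in the fleet manager spec)
path = "/api/connector_mgmt/v1/kafka_connectors"   # assumed Connectors listing path
print(f"{base_url}{path}?{query}")
```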
42 changes: 21 additions & 21 deletions .openapi/kas-fleet-manager.yaml
@@ -7,7 +7,7 @@ info:
license:
name: 'Apache 2.0'
url: 'https://www.apache.org/licenses/LICENSE-2.0'
contact:
contact:
name: 'Red Hat OpenShift Streams for Apache Kafka Support'
email: '[email protected]'
tags:
@@ -16,7 +16,7 @@ tags:
- name: security
description: Security related endpoints.
- name: enterprise-dataplane-clusters
description: Enterprise data plane clusters registration and management endpoints.
description: Enterprise data plane clusters registration and management endpoints.
servers:
- url: https://api.openshift.com
description: Main (production) server
@@ -857,7 +857,7 @@ paths:
examples:
500Example:
$ref: '#/components/examples/500Example'

#
# These are the user-facing related endpoints
#
@@ -985,7 +985,7 @@ paths:
/api/kafkas_mgmt/v1/clusters:
get:
tags:
- enterprise-dataplane-clusters
- enterprise-dataplane-clusters
description: List all Enterprise data plane clusters
operationId: getEnterpriseOsdClusters
security:
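For context on the `enterprise-dataplane-clusters` endpoint touched above: `GET /api/kafkas_mgmt/v1/clusters` (operation `getEnterpriseOsdClusters`) lists all Enterprise data plane clusters. A minimal sketch of calling it, assuming the `requests` library, an access token already available in an environment variable, and the usual `items` list envelope (the token name and envelope key are assumptions):

```python
import os
import requests

# OCM_ACCESS_TOKEN is assumed to hold a valid bearer token; obtaining it
# is outside the scope of this diff.
token = os.environ["OCM_ACCESS_TOKEN"]

resp = requests.get(
    "https://api.openshift.com/api/kafkas_mgmt/v1/clusters",  # path from the spec above
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for cluster in resp.json().get("items", []):
    # cluster_id, cloud_provider and region appear in the schema hunks below
    print(cluster.get("cluster_id"), cluster.get("cloud_provider"), cluster.get("region"))
```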
@@ -1385,8 +1385,8 @@ components:
bootstrap_server_host:
type: string
admin_api_server_url:
type: string
description: The Kafka admin server URL to perform Kafka admin operations, e.g. ACL management. The value will be available when the Kafka has been fully provisioned, i.e. it reaches a 'ready' state
type: string
description: The Kafka admin server URL to perform Kafka admin operations, e.g. ACL management. The value will be available when the Kafka has been fully provisioned, i.e. it reaches a 'ready' state
created_at:
format: date-time
type: string
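The description above says `admin_api_server_url` only becomes available once the Kafka instance reaches a 'ready' state. A hedged sketch of waiting for that, assuming `requests`, a bearer token in an environment variable, the usual `GET /api/kafkas_mgmt/v1/kafkas/{id}` instance endpoint, and that the state is reported in a `status` field (the field name is an assumption inferred from the 'ready' wording above):

```python
import os
import time
import requests

API = "https://api.openshift.com/api/kafkas_mgmt/v1/kafkas"
HEADERS = {"Authorization": f"Bearer {os.environ['OCM_ACCESS_TOKEN']}"}

def wait_for_admin_url(kafka_id: str, poll_seconds: int = 30) -> str:
    """Poll the Kafka request until it is 'ready', then return admin_api_server_url."""
    while True:
        kafka = requests.get(f"{API}/{kafka_id}", headers=HEADERS, timeout=30).json()
        # 'status' is assumed to be the field that eventually reports 'ready'.
        if kafka.get("status") == "ready" and kafka.get("admin_api_server_url"):
            return kafka["admin_api_server_url"]
        time.sleep(poll_seconds)
```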
@@ -1502,7 +1502,7 @@ components:
- multi_az
properties:
access_kafkas_via_private_network:
description: Indicates whether Kafkas created on this data plane cluster have to be accessed via private network
description: Indicates whether Kafkas created on this data plane cluster have to be accessed via private network
type: boolean
cluster_id:
type: string
@@ -1512,10 +1512,10 @@
type: string
cloud_provider:
description: The cloud provider for this cluster. This value will be used as the Kafka's cloud provider value when a Kafka is created on this cluster
type: string
region:
type: string
region:
description: The region of this cluster. This value will be used as the Kafka's region value when a Kafka is created on this cluster
type: string
type: string
multi_az:
description: A flag indicating whether this cluster is available on multiple availability zones or not
type: boolean
@@ -1524,11 +1524,11 @@
- $ref: "#/components/schemas/EnterpriseClusterListItem"
- type: object
properties:
supported_instance_types:
supported_instance_types:
type: object
$ref: "#/components/schemas/SupportedKafkaInstanceTypesList"
$ref: "#/components/schemas/SupportedKafkaInstanceTypesList"
capacity_information:
description: Returns the capacity related information
description: Returns the capacity related information
type: object
example:
kafka_machine_pool_node_count: 3
@@ -1549,7 +1549,7 @@ components:
type: integer
remaining_kafka_streaming_units:
description: "The remaining number of Kafka streaming units that can be still be created on this cluster"
type: integer
type: integer
consumed_kafka_streaming_units:
description: "The number of Kafka streaming units that have been consumed on this cluster"
type: integer
@@ -1972,7 +1972,7 @@ components:
valid_issuer:
type: string
example:
$ref: "#/components/examples/SsoProviderExample"
$ref: "#/components/examples/SsoProviderExample"
MetricsRangeQueryList:
allOf:
- type: object
@@ -2067,7 +2067,7 @@ components:
type: object
properties:
access_kafkas_via_private_network:
description: Sets whether Kafkas created on this data plane cluster have to be accessed via private network
description: Sets whether Kafkas created on this data plane cluster have to be accessed via private network
type: boolean
cluster_id:
description: The data plane cluster ID. This is the ID of the cluster obtained from OpenShift Cluster Manager (OCM) API
@@ -2083,7 +2083,7 @@
The name of the machine pool must be `kafka-standard`
The node count value has to be a multiple of 3 with a minimum of 3 nodes.
type: integer
format: int32
format: int32
EnterpriseClusterWithAddonParameters:
description: Enterprise cluster with addon parameters
allOf:
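The registration payload above constrains the Kafka machine pool: it must be named `kafka-standard`, and the node count must be a multiple of 3 with a minimum of 3 nodes. A small sketch of that check done client-side before submitting a registration; the function name and error messages are illustrative, not part of the spec:

```python
def validate_kafka_machine_pool(name: str, node_count: int) -> None:
    """Mirror the machine pool constraints described in the payload above."""
    if name != "kafka-standard":
        raise ValueError("machine pool must be named 'kafka-standard'")
    if node_count < 3 or node_count % 3 != 0:
        raise ValueError("node count must be a multiple of 3, with a minimum of 3 nodes")

validate_kafka_machine_pool("kafka-standard", 6)    # ok
# validate_kafka_machine_pool("kafka-standard", 4)  # would raise ValueError
```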
@@ -2223,7 +2223,7 @@ components:
Search criteria.
The syntax of this parameter is similar to the syntax of the `where` clause of an
SQL statement. Allowed fields in the search are `cloud_provider`, `name`, `owner`, `region`, `status` and `cluster_id`. Allowed comparators are `<>`, `=`, `LIKE`, or `ILIKE`.
SQL statement. Allowed fields in the search are `cloud_provider`, `name`, `owner`, `region`, `status` and `cluster_id`. Allowed comparators are `<>`, `=`, `IN`, `NOT IN`, `LIKE`, or `ILIKE`.
Allowed joins are `AND` and `OR`. However, you can use a maximum of 10 joins in a search query.
Examples:
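As with the Connectors search above, this hunk adds `IN` and `NOT IN` to the comparators accepted by the Kafka instances `search` parameter. The spec's own examples are collapsed here; the sketch below is a hedged illustration of passing such a query, assuming `requests`, the standard `GET /api/kafkas_mgmt/v1/kafkas` listing endpoint, a token in an environment variable, and an `items` list envelope:

```python
import os
import requests

# cloud_provider and region are among the searchable fields listed above.
search = "cloud_provider = 'aws' AND region IN ('us-east-1', 'eu-west-1')"

resp = requests.get(
    "https://api.openshift.com/api/kafkas_mgmt/v1/kafkas",
    headers={"Authorization": f"Bearer {os.environ['OCM_ACCESS_TOKEN']}"},
    params={"search": search},  # requests URL-encodes the expression
    timeout=30,
)
resp.raise_for_status()
print([k.get("name") for k in resp.json().get("items", [])])
```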
@@ -2341,7 +2341,7 @@ components:
owner: "api_kafka_service"
name: "serviceapi"
bootstrap_server_host: "serviceapi-1isy6rq3jki8q0otmjqfd3ocfrg.apps.mk-bttg0jn170hp.x5u8.s1.devshift.org"
admin_api_server_url: "https://admin-server-mk-e-e-e-e-c---{}ld{-}-n-vp--bltg.rhcloud.com"
admin_api_server_url: "https://admin-server-mk-e-e-e-e-c---{}ld{-}-n-vp--bltg.rhcloud.com"
created_at: "2020-10-05T12:51:24.053142Z"
updated_at: "2020-10-05T12:56:36.362208Z"
version: "2.6.0"
@@ -2374,7 +2374,7 @@ components:
owner: "api_kafka_service"
name: "serviceapi"
bootstrap_server_host: "serviceapi-1isy6rq3jki8q0otmjqfd3ocfrg.apps.mk-bttg0jn170hp.x5u8.s1.devshift.org"
admin_api_server_url: "https://admin-server-mk-e-e-e-e-c---{}ld{-}-n-vp--bltg.rhcloud.com"
admin_api_server_url: "https://admin-server-mk-e-e-e-e-c---{}ld{-}-n-vp--bltg.rhcloud.com"
created_at: "2020-10-05T12:51:24.053142Z"
updated_at: "2020-10-05T12:56:36.362208Z"
failed_reason: "a reason the Kafka request creation failed"
@@ -2461,7 +2461,7 @@ components:
base_url: "https://identity.api.redhat.com"
jwks: "https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/certs"
token_url: "https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token"
valid_issuer: "https://identity.api.openshift.com/auth/realms/rhoas"
valid_issuer: "https://identity.api.openshift.com/auth/realms/rhoas"
ServiceAccountByIdExample:
value:
id: "1"