doc: add missing info in the installation guide and olm repo content #122

Closed · wants to merge 1 commit
183 changes: 135 additions & 48 deletions content/en/docs/getting-started/_index.md
@@ -60,73 +60,160 @@ olm-operator 1/1 1 1 5m52s
packageserver 2/2 2 2 5m43s
```

## Installing OLM with Operator SDK
Member:
I think I'd like to see us continue documenting how to install an Operator manually, in addition to how you would achieve that using the operator-sdk functionality. Can we keep the existing content, but add a sub-section here that details how you would do this manually vs. using the operator-sdk commands?

Member:
Actually, I think I misread what we're trying to achieve here. Can we move this Installing OLM with Operator SDK further up the documentation as a subheader under the Installing OLM in your cluster?


The [`operator-sdk`][operator-sdk] CLI provides a command to easily install and uninstall OLM for development purposes. See the [SDK installation guide][sdk-installation-guide] for how to install the `operator-sdk` tooling.

With `operator-sdk` installed, you can install OLM on your cluster by running `operator-sdk olm install`, and uninstall it just as easily by running `operator-sdk olm uninstall`. For more information about how to integrate your project with the [`operator-sdk`][operator-sdk] CLI tool, see the [OLM integration][sdk-olm-integration] section.
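
For reference, a minimal session might look like the following sketch; the flags mentioned are illustrative, and `operator-sdk olm --help` lists the authoritative options:

```sh
# Install the latest released version of OLM into the cluster that
# kubectl currently points at (a specific release can be pinned with --version).
operator-sdk olm install

# Report which OLM resources are present and whether they are healthy.
operator-sdk olm status

# Remove the OLM installation again.
operator-sdk olm uninstall
```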

## Installing an operator bundle with Operator SDK

You can use the [`operator-sdk`][operator-sdk] CLI to run your bundle with [`operator-sdk run bundle`][cli-run-bundle].

Given a bundle image present in a registry, [`operator-sdk run bundle`][cli-run-bundle] can create a pod that ephemerally serves that bundle to OLM via a [`Subscription`][install-your-operator], along with other OLM objects. For example:

```console
$ operator-sdk run bundle <some-registry>/memcached-operator-bundle:v0.0.1
INFO[0008] Successfully created registry pod: <some-registry>-memcached-operator-bundle-0-0-1
INFO[0008] Created CatalogSource: memcached-operator-catalog
INFO[0008] OperatorGroup "operator-sdk-og" created
INFO[0008] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0019] Approved InstallPlan install-krv7q for the Subscription: memcached-operator-v0-0-1-sub
INFO[0019] Waiting for ClusterServiceVersion "default/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0019] Waiting for ClusterServiceVersion "default/memcached-operator.v0.0.1" to appear
INFO[0031] Found ClusterServiceVersion "default/memcached-operator.v0.0.1" phase: Pending
INFO[0032] Found ClusterServiceVersion "default/memcached-operator.v0.0.1" phase: Installing
INFO[0040] Found ClusterServiceVersion "default/memcached-operator.v0.0.1" phase: Succeeded
INFO[0040] OLM has successfully installed "memcached-operator.v0.0.1"
```
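
Once you are done testing, the resources created by `run bundle` can usually be removed with the SDK's `cleanup` command, passing the operator's package name (here the `memcached-operator` example from above):

```sh
# Remove the CatalogSource, Subscription, CSV, and related resources
# that `operator-sdk run bundle` created for this package.
operator-sdk cleanup memcached-operator
```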

**Note:** For more information about how to integrate your project with the [`operator-sdk`][operator-sdk] CLI tool, see the [OLM integration][sdk-olm-integration] section.

## Running OLM locally with minikube

The `run-local` target starts minikube, builds the OLM containers locally with the minikube-provided Docker daemon, and uses the local configuration in [local-values.yaml][local-values.yaml] to build localized deployment resources for OLM.

```bash
# To install and run locally
make run-local
```

You can verify that the OLM components have been successfully deployed by running `kubectl -n local get deployments`.
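
If you prefer to block until the rollout finishes, a small sketch (assuming the deployment names used by the default local values file) is:

```sh
# Wait for the locally built OLM deployments to finish rolling out.
kubectl -n local rollout status deployment/olm-operator
kubectl -n local rollout status deployment/catalog-operator
```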

## User Interface (Running the Console Locally)

To interact with OLM and its resources via a web browser, you can use the [web-console][web-console] in a Kubernetes cluster.

```bash
git clone https://github.com/openshift/origin-web-console
cd origin-web-console
make run-console-local
```

You can then visit `http://localhost:9000` to view the console.
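
If you only want to confirm that the console is serving before opening a browser, a quick check against the port assumed above is:

```sh
# Expect an HTTP response once the console has finished starting.
curl -I http://localhost:9000
```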

## Customizing OLM installations

Deployments of OLM can be generated with different configurations by writing a `values.yaml` file and running commands to generate deployment resources from it.
Member:
What does it mean "can be stamped out"?

Contributor Author (@camilamacedo86, Mar 16, 2021):
Just copy and paste. The goal is to move forward with operator-framework/operator-lifecycle-manager#2046.

So, it adds the OLM info from the current installation guide to centralize it here. I do not think that we should change that now; it is the current info provided. However, some OLM team member might be able to improve it in a follow-up.

Contributor:
Maybe changing it to "Default values used while installing OLM with operator-sdk olm install can be overridden by providing the custom configurations in a values.yaml file, and then using that file to generate the new manifests for deploying custom configured OLM." might help?

Contributor Author (@camilamacedo86, Mar 20, 2021):
It has no relation to operator-sdk olm install; operator-sdk olm install will not use it.
This info is in the install guide https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/install/install.md#customizing-olm-installation, which is used in the Getting Started section of the OLM README, see: https://github.com/operator-framework/operator-lifecycle-manager#getting-started.

The idea of this PR is to move the Getting Started instructions from the OLM README and add them to the Getting Started of the OLM docs, so that we can link the OLM docs website in the README and replace that section with the Getting Started here.

Here's an example `values.yaml`:

```yaml
# sets the apiversion to use for rbac-resources. Change to `authorization.openshift.io` for openshift
rbacApiVersion: rbac.authorization.k8s.io
# namespace is the namespace the operators will _run_
namespace: olm
# watchedNamespaces is a comma-separated list of namespaces the operators will _watch_ for OLM resources.
# Omit to enable OLM in all namespaces
watchedNamespaces: olm
# catalog_namespace is the namespace where the catalog operator will look for global catalogs.
# entries in global catalogs can be resolved in any watched namespace
catalog_namespace: olm
# operator_namespace is the namespace where the operator runs
operator_namespace: operators

# OLM operator run configuration
olm:
  # OLM operator doesn't do any leader election (yet), set to 1
  replicaCount: 1
  # The image to run. If not building a local image, use sha256 image references
  image:
    ref: quay.io/operator-framework/olm:local
    pullPolicy: IfNotPresent
  service:
    # port for readiness/liveness probes
    internalPort: 8080

# catalog operator run configuration
catalog:
  # Catalog operator doesn't do any leader election (yet), set to 1
  replicaCount: 1
  # The image to run. If not building a local image, use sha256 image references
  image:
    ref: quay.io/operator-framework/olm:local
    pullPolicy: IfNotPresent
  service:
    # port for readiness/liveness probes
    internalPort: 8080
```

To configure a release of OLM for installation in a cluster:

1. Create a `my-values.yaml` like the example above with the desired configuration, or choose an existing one from the OLM repository. The latest production values can be found in [deploy/upstream/values.yaml][deploy-upstream-values].

1. Generate deployment files from the templates and the `my-values.yaml` using `package_release.sh`

```bash
# first arg must be a semver-compatible version string
# second arg is the output directory
# third arg is the values.yaml file
./scripts/package_release.sh 1.0.0-myolm ./my-olm-deployment my-values.yaml
```

Contributor:
This script does not exist in this repo. We should probably either copy the script into a separate file on the doc site so that users can copy it to create their own file (this is my preference, so that users don't have to jump back and forth between the site and the repos), or at least link to the script to begin with.

Contributor Author:
I am not sure I am following that. I understand that:

  • I would be reading the info at https://olm.operatorframework.io/ (which is the doc ref).
  • However, locally I would have the OLM repo cloned, since the goal here is to install the OLM project. Then I would have this script because it is provided there. I do not think there is a reason for someone following the getting-started docs to have the docs locally instead of the OLM project.

Then, if we move the script to the OLM docs and it is changed in the repo, the docs will be providing the wrong info. Does that make sense?

1. Deploy to Kubernetes: `kubectl apply -f ./my-olm-deployment/templates/`

The above steps are automated for official releases with `make ver=0.3.0 release`, which will output new versions of manifests in `deploy/tectonic-alm-operator/manifests/$(ver)`.
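
After applying the generated templates, one way to confirm the customized installation came up is to check the deployments in the namespace set in your values file (assumed to be `olm` here):

```sh
# Every OLM deployment should eventually report all replicas as ready.
kubectl -n olm get deployments
```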

## Overriding the Global Catalog Namespace

It is possible to override the Global Catalog Namespace by setting the `GLOBAL_CATALOG_NAMESPACE` environment variable in the catalog operator deployment.
Contributor:
I'm not sure what this means? We could probably elaborate on this a little more.

Contributor Author (@camilamacedo86, Mar 20, 2021):
It is the current info provided by the OLM README via the Getting Started section, see:

Then, if it needs to be improved, I think that would be better done in a follow-up by someone who has more context than I do.
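
As a concrete sketch of the override described above, assuming the upstream layout where the catalog operator runs as the `catalog-operator` deployment in the `olm` namespace, the environment variable could be set with `kubectl set env`:

```sh
# Point the catalog operator at a different global catalog namespace;
# the deployment restarts automatically when its pod template changes.
kubectl -n olm set env deployment/catalog-operator GLOBAL_CATALOG_NAMESPACE=my-global-catalogs
```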


## Subscribe to a Package and Channel

Operators can be installed from the catalog by subscribing to a channel in the corresponding package.

If using one of the `local` run options, this will subscribe to `etcd`, `vault`, and `prometheus` operators. Subscribing to a service that doesn't exist yet will install the operator and related CRDs in the namespace.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: olm
spec:
  channel: singlenamespace-alpha
  installPlanApproval: Automatic
  name: etcd
  source: operatorhubio-catalog
  sourceNamespace: olm
  startingCSV: etcdoperator.v0.9.2
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: prometheus
  namespace: olm
spec:
  channel: alpha
  name: prometheus
  source: operatorhubio-catalog
  sourceNamespace: olm
```
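
Assuming the manifests above are saved to a file such as `subscription.yaml` (the file name is illustrative), they can be applied with `kubectl`:

```sh
# Create both Subscriptions; OLM resolves and installs the operators.
kubectl apply -f subscription.yaml
```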

This installs the v0.9.2 version of the etcd operator, and then upgrades to the latest version of the etcd operator in your cluster.

```sh
$ kubectl get sub -n olm
NAME   PACKAGE   SOURCE                  CHANNEL
etcd   etcd      operatorhubio-catalog   singlenamespace-alpha

$ kubectl get csv -n olm
NAME                  DISPLAY   VERSION   REPLACES              PHASE
etcdoperator.v0.9.4   etcd      0.9.4     etcdoperator.v0.9.2   Succeeded

$ kubectl get deployment -n olm
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
etcd-operator   1/1     1            1           3m29s
```

To learn more about packaging your operator for OLM, installing and uninstalling an operator, and more, visit the [Core Tasks](/docs/tasks/) and the [Advanced Tasks](/docs/advanced-tasks/) sections of this site.

[operator-sdk]: https://github.com/operator-framework/operator-sdk
[sdk-installation-guide]: https://sdk.operatorframework.io/docs/installation/
[sdk-olm-integration]: https://sdk.operatorframework.io/docs/olm-integration/
[deploy-upstream-values]: https://github.com/operator-framework/operator-lifecycle-manager/blob/0.16.1/deploy/upstream/values.yaml
[local-values.yaml]: https://github.com/operator-framework/operator-lifecycle-manager/blob/0.16.1/doc/install/local-values.yaml
[cli-run-bundle]: https://sdk.operatorframework.io/docs/cli/operator-sdk_run_bundle/
[install-your-operator]: /docs/tasks/install-operator-with-olm/#install-your-operator
[web-console]: https://github.com/openshift/origin-web-console