GKE On-Prem integration stack docs issue 255 #284


Merged 9 commits on Apr 12, 2019
60 changes: 60 additions & 0 deletions docs/en/gke-on-prem/gke-on-prem-architecture.asciidoc
@@ -0,0 +1,60 @@
[[architecture]]
== Architecture

Within each of your GKE On-Prem Kubernetes clusters you will deploy DaemonSets
containing Beats, the lightweight shippers for logs, metrics, network data, etc.
These Beats will autodiscover your applications and GKE On-Prem infrastructure
by synchronizing with the Kubernetes API. Your containers will be monitored based
on the processes running in them. If you are running NGINX, then {filebeat} will
configure itself to collect NGINX logs and {metricbeat} will configure itself to
collect NGINX metrics. These pre-packaged collections of configuration details
are called _modules_. You can see the list of available modules in the
documentation for {filebeatmodules}[{filebeat}] and
{metricbeatmodules}[{metricbeat}].

image:images/overview.png[]

A single {es} cluster with {kib} can receive, index, store, and analyze logs and
metrics from multiple environments. These environments might include:

. Servers, network devices, virtual machines, etc. in your data centers
. One or more GKE On-Prem environments in your data centers
. Other data sources

[discrete]
[[gke-on-prem-architecture]]
=== GKE On-Prem and {stack} Architecture

This example GKE On-Prem Kubernetes cluster has three nodes. Within each node
are application pods and Beats pods. The Beats collect logs and metrics from
their associated Kubernetes node, as well as from the containers deployed on
their associated Kubernetes node.

Let’s focus on a single three-node GKE On-Prem cluster and the connections from
that cluster to the {es} cluster. {es} nodes and {kib} are running outside of GKE
On-Prem. Within GKE On-Prem are your applications and Elastic Beats. Beats are
{beatsdocs}[lightweight shippers], and are deployed as Kubernetes
{daemonsetdocs}[DaemonSets]. By deploying as DaemonSets, Kubernetes guarantees
that there will be one instance of each Beat deployed on each Kubernetes node.
This facilitates efficient processing of the logs and metrics from each node,
and from each pod deployed on that node. As your GKE On-Prem clusters grow in
node count, Beats are deployed along with those nodes.

image:images/nodes.png[]

Within each GKE On-Prem node there are one or more application pods and the
Beats (plus the standard Kubernetes pods, e.g., kube-dns).

[discrete]
[[gke-on-prem-considerations]]
=== Considerations specific to On-Prem deployments of GKE

image:images/loadbalancer.png[]

Depending on how you deploy GKE On-Prem in your data center, you may have to
consider the network topology when exposing services to your network. The sample
application referenced in "Example Application", below, exposes a port to the
external network. The manifest file specifies an IP address that is provisioned
on the on-premises load balancer. Your configuration will likely be different,
but make sure to take this into consideration when setting up your environment.
103 changes: 103 additions & 0 deletions docs/en/gke-on-prem/gke-on-prem-deploy-beats.asciidoc
@@ -0,0 +1,103 @@
[[gke-on-prem-deploy-beats]]
== Deploy Beats

TIP: If you do not have an {es} cluster with {kib} available, see
{gettingstartedwithelasticstack}[Getting started with the {stack}] and
deploy {es} and {kib}, then come back to this page to deploy Beats.

[discrete]
[[kubernetes-secrets]]
=== Kubernetes secrets

Rather than putting the {es} and {kib} endpoints into the manifest files, they
are provided to the {filebeat} pods as Kubernetes secrets. Edit the files
`elasticsearch-hosts-ports` and `kibana-host-port`. The files provided in the
example contain details regarding the file format. You should have two files
resembling:

`elasticsearch-hosts-ports`:
[source,json]
----
["http://10.1.1.4:9200", "http://10.1.1.5:9200"]
----

`kibana-host-port`:
[source,json]
----
"http://10.1.1.6:5601"
----
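
As a sketch, you can create both files from the shell before creating the
secret; the endpoint addresses below are examples, so substitute the addresses
of your own {es} and {kib} instances:

[source,sh]
----
# Hypothetical endpoints; replace them with your own Elasticsearch and
# Kibana addresses before creating the Kubernetes secret from these files.
cat > elasticsearch-hosts-ports <<'EOF'
["http://10.1.1.4:9200", "http://10.1.1.5:9200"]
EOF

cat > kibana-host-port <<'EOF'
"http://10.1.1.6:5601"
EOF
----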

[discrete]
[[kubernetes-create-secret]]
=== Create the secret

[source,sh]
----
kubectl create secret generic elastic-stack \
  --from-file=./elasticsearch-hosts-ports \
  --from-file=./kibana-host-port --namespace=kube-system
----

[discrete]
[[deploy-configuration]]
=== Deploy index patterns, visualizations, dashboards, and machine learning jobs

{filebeat} and {metricbeat} provide the configuration for things like web
servers, caches, proxies, operating systems, container environments, databases,
etc. These are referred to as Beats modules. By deploying these configurations
you will be populating {es} and {kib} with index patterns, visualizations,
dashboards, machine learning jobs, etc.

[source,sh]
----
kubectl create -f filebeat-setup.yaml
kubectl create -f metricbeat-setup.yaml
----

NOTE: These setup jobs are short-lived; you will see them transition to the
Completed state in the output of `kubectl get pods -n kube-system`.

[discrete]
[[verify-pods]]
=== Verify

[source,sh]
----
kubectl get pods -n kube-system | grep beat
----

Verify that the setup pods complete. Check the logs for the setup pods to ensure
that they connected to {es} and {kib} (the setup pod connects to both).
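
As a rough illustration of what to look for, the snippet below filters a
hardcoded sample of `kubectl get pods` output (the pod names and ages are
invented) and counts setup pods that have not yet completed; on a live cluster
you would pipe the output of the real command instead:

[source,sh]
----
# Sample output standing in for: kubectl get pods -n kube-system | grep beat
pods='filebeat-setup-x7k2p     0/1   Completed   0   2m
metricbeat-setup-q9z4w   0/1   Completed   0   2m'

# Count setup pods that are not yet in the Completed state; expect 0.
# (|| true keeps the pipeline from failing when grep finds no matches.)
pending=$(printf '%s\n' "$pods" | grep 'setup' | grep -cv 'Completed' || true)
echo "setup pods still pending: $pending"
----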

[discrete]
[[deploy-daemonsets]]
=== Deploy the Beat DaemonSets

[source,sh]
----
kubectl create -f filebeat-kubernetes.yaml
kubectl create -f metricbeat-kubernetes.yaml
----

NOTE: Depending on your Kubernetes node configuration, you may not need to deploy
{journalbeat}. If your nodes use journald for logging, then deploy {journalbeat}.
Otherwise, {filebeat} will collect the logs.

[source,sh]
----
kubectl create -f journalbeat-kubernetes.yaml
----

[discrete]
[[verify-beats]]
=== Verify

Check for the running DaemonSets.
Verify that one {filebeat}, one {metricbeat}, and one {journalbeat} pod is
running per Kubernetes node.

[source,sh]
----
kubectl get pods -n kube-system | grep beat
----
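
As an illustration of the expected shape of the result, the snippet below
counts Beats pods per node in a hardcoded sample listing (the node and pod
names are invented); against a live cluster you would use
`kubectl get pods -n kube-system -o wide | grep beat` to get the real NODE
column:

[source,sh]
----
# Sample NAME/STATUS/NODE columns standing in for real kubectl -o wide output.
pods='filebeat-abc12     Running   node-1
metricbeat-def34   Running   node-1
journalbeat-ghi56  Running   node-1
filebeat-jkl78     Running   node-2
metricbeat-mno90   Running   node-2
journalbeat-pqr12  Running   node-2'

# Each node should host exactly one pod per Beat, i.e. three pods here.
per_node=$(printf '%s\n' "$pods" | awk '{print $3}' | sort | uniq -c |
  awk '{print $1}' | sort -u)
echo "beat pods per node: $per_node"
----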

166 changes: 166 additions & 0 deletions docs/en/gke-on-prem/gke-on-prem-deploy.asciidoc
@@ -0,0 +1,166 @@
[[gke-on-prem-deploy]]
== Prepare the Kubernetes environment and deploy a sample application

[discrete]
[[assign-kubernetes-roles]]
=== Assign Kubernetes roles

Logging and metrics tools like kube-state-metrics, {filebeat}, Fluentd,
{metricbeat}, Prometheus, etc. are deployed in the kube-system namespace and
have access to all namespaces. Create the cluster-wide role binding that allows
the deployment of kube-state-metrics and the Beats DaemonSets using the Role
Based Access Control {k8s-rbac}[(RBAC) API]:

[source,sh]
----
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=$(gcloud config get-value account)
----

[discrete]
[[deploy-kube-state-metrics]]
=== Deploy kube-state-metrics

{kube-state-metrics}[Kube-state-metrics] is a service that exposes metrics and
events about the state of the nodes, pods, containers, etc. The {metricbeat}
Kubernetes module will connect to kube-state-metrics. Check to see if
kube-state-metrics is running:

[source,sh]
----
kubectl get pods --namespace=kube-system | grep kube-state
----

If it is not running, create it (it is not deployed by default):

[source,sh]
----
git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl create -f kube-state-metrics/kubernetes
kubectl get pods --namespace=kube-system | grep kube-state
----

[discrete]
[[clone-examples-repo]]
=== Clone the Elastic examples GitHub repo

[source,sh]
----
git clone https://github.com/elastic/examples.git
----

The remainder of the steps will refer to files from this repo. Change directory
into `examples/GKE-on-Prem-logging-and-metrics`.

[discrete]
[[gke-on-prem-example]]
=== Example application

If you are just getting started with GKE On-Prem and do not have anything
running, you can use a sample {guestbook-app}[guestbook application] from the
Kubernetes Engine documentation. The YAML has been concatenated into a single
manifest and some changes have been made to serve as an example for enabling
Beats to autodiscover the components of the application. Whether or not you
deploy the example application, this documentation will refer to specific parts
of the `guestbook.yaml` manifest file.

[discrete]
[[gke-on-prem-network-considerations]]
=== Network considerations

Before you deploy the sample application manifest, have a look at the frontend
service in `GKE-on-Prem-logging-and-metrics/guestbook.yaml`. You may need to
edit this service so that the service is exposed to your internal network. The
network topology of the lab where this example was developed has a load balancer
in front of the GKE On-Prem environment. Therefore the service specifies an IP
address associated with the load balancer. Your configuration will likely be
different.

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: guestbook
    tier: frontend
  loadBalancerIP: 10.0.10.42 <1>
----

<1> Edit the file `guestbook.yaml` as appropriate to integrate with your network.
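
For example, one way to point the manifest at an address your load balancer
owns is a `sed` edit. The snippet below fabricates a one-line stand-in for the
relevant part of `guestbook.yaml` so the substitution is easy to see;
`192.0.2.10` is a documentation placeholder, not a real address, and you would
run the `sed` line against the real file in the repo:

[source,sh]
----
# Stand-in for the loadBalancerIP line in guestbook.yaml.
printf 'loadBalancerIP: 10.0.10.42\n' > guestbook-snippet.yaml

# Replace the sample address with one provisioned on your load balancer.
sed -i 's/loadBalancerIP: .*/loadBalancerIP: 192.0.2.10/' guestbook-snippet.yaml
cat guestbook-snippet.yaml
----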

[discrete]
[[gke-on-prem-label-pods]]
=== Label your application pods

The Beats autodiscover functionality is facilitated by Kubernetes metadata. In
the example manifest there are {k8s-metadata-labels}[metadata labels] assigned
to the deployments and the {filebeat} and {metricbeat} configurations are
updated to expect this metadata.

These lines from the `guestbook.yaml` manifest file add the `app: redis` label
to the Redis deployments:

[source,yaml]
----
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis <1> <2>
----

<1> This label is added to the metadata for the k8s deployment and is applied to
each pod in the deployment.
<2> You should create labels that are appropriate for your use case, `app: redis`
is only an example.

These lines from the `filebeat-kubernetes.yaml` manifest file configure
{filebeat} to autodiscover Redis pods that have the appropriate label:

[source,yaml]
----
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition.contains: <1>
            kubernetes.labels.app: redis <2>
          config:
            - module: redis <3>
----

<1> Specifies that the condition is looking for a substring and not an exact match
<2> The label to inspect, and the substring to look for
<3> The module to use when collecting, parsing, indexing, and visualizing logs
from pods that meet the condition
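
To make the substring semantics of `condition.contains` concrete, here is a
small shell sketch of the same kind of check; the label values are
illustrative, and the point is that a value such as `redis-master` would also
satisfy a condition looking for `redis`:

[source,sh]
----
# Mimics substring matching: succeeds when $2 appears anywhere in $1.
contains() {
  case "$1" in *"$2"*) return 0 ;; *) return 1 ;; esac
}

# A label value of "redis" or "redis-master" both satisfy the condition.
if contains "redis-master" "redis"; then
  echo "match: the redis module would be applied"
fi
----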

If you are using the example application to get started with GKE On-Prem and the
{stack}, deploy the sample application.

[source,sh]
----
kubectl create -f guestbook.yaml
----

If you are ready to manage logs and metrics from your own application, examine
your pods for existing labels and update the {filebeat} and {metricbeat}
autodiscover configuration within `filebeat-kubernetes.yaml` and
`metricbeat-kubernetes.yaml` respectively. See the documentation for configuring {filebeatautodiscoverdocs}[{filebeat} autodiscover] and
{metricbeatautodiscoverdocs}[{metricbeat} autodiscover]. You will also need the
list of {filebeatmodules}[{filebeat} modules] and
{metricbeatmodules}[{metricbeat} modules].
28 changes: 28 additions & 0 deletions docs/en/gke-on-prem/gke-on-prem-introduction.asciidoc
@@ -0,0 +1,28 @@
[[gke-on-prem-introduction]]
== Introduction

{gke}[GKE On-Prem] lets you take advantage of Kubernetes and cloud technology in
your data center. You get the Google Kubernetes Engine (GKE) experience with
quick, managed, simple installs and upgrades validated by Google. Google Cloud
Console gives you a single pane of glass view for managing your clusters across
on-premises and cloud environments. Integrating the {stack} with GKE On-Prem
enables you to combine the logs and metrics across your traditional systems and
your GKE On-Prem systems to give you greater observability into the services and
applications that you provide to your end users.

Whether you deploy GKE On-Prem for security or compliance reasons, or to benefit
from using the best service regardless of where it runs (in your data center or
in Google Cloud), you should consider having your logs and metrics indexed,
stored, analyzed, and visualized on premises. The {stack} supports on-premises
deployments using the {elasticgetstarted}[downloaded binaries] or using
{ece-url}[{ece}] in your own data center.

Read on to see how to integrate your logs and metrics from traditional and GKE
On-Prem environments in the {stack}.

[discrete]
[[gke-on-prem-overview]]
=== Assumptions
It is assumed that you have an existing {es} and {kib} deployment in your data
center and that there is network connectivity between your GKE On-Prem systems
and both {es} and {kib}.
12 changes: 12 additions & 0 deletions docs/en/gke-on-prem/gke-on-prem-view.asciidoc
@@ -0,0 +1,12 @@
[[view]]
== View your logs and metrics in {kib}

You should be able to visualize your logs and metrics in the {kib} Discover app
and in dashboards provided by the Beats modules that you are using. See the
{filebeat-getting-started-view-the-sample-dashboards}[getting started guide] for
details. If you deployed the sample Guestbook application, you will have data in
the Apache and Redis dashboards along with the Kubernetes and System dashboards.
If you are collecting logs and metrics from your own application, see the
dashboards for the modules related to your application.

image:images/redis-dashboard.png[Sample Filebeat Redis dashboard]
Binary file added docs/en/gke-on-prem/images/loadbalancer.png
Binary file added docs/en/gke-on-prem/images/nodes.png
Binary file added docs/en/gke-on-prem/images/overview.png
Binary file added docs/en/gke-on-prem/images/redis-dashboard.png